COURSE 05 / SOURCE 05_unprompted_solve_course_content.md

The Unprompted Solve

Cultivating initiative, judgment, and autonomous problem solving.

Course 05 Course Content Spec: The Unprompted Solve

1. Title and Source Files Used

Course title: The Unprompted Solve

Owned output file: 05-unprompted-solve/05_unprompted_solve_course_content.md

Source files used as base truth:

  • 05-unprompted-solve/curriculum.md
  • 05-unprompted-solve/website-prompt.md

Purpose of this document: This is the implementation-ready course-content spec for Course 05. It translates the base curriculum into a facilitator-operable delivery plan, assessment system, assignment architecture, and AI/LLM evaluation framework. Where this document adds detail, it is extending the curriculum rather than replacing it.

2. Design Decisions at the Top

Core design decisions

  1. Keep the course centered on one real problem, not a simulated exercise.

The curriculum is explicit that the real problem is the course. The delivery design therefore treats Sessions 1 and 2 as preparation, Sessions 3 and 4 as field execution, and Session 5 as public evidence of ownership.

  2. Teach ownership as behavior under ambiguity, not as a motivational concept.

Students should repeatedly practice noticing, scoping, acting, updating, and closing loops. Reflection matters, but only when paired with observable action.

  3. Optimize for evidence over polish.

Students are graded more heavily on a real first-draft implementation than on presentation smoothness. The course should reward imperfect action, not polished inactivity.

  4. Make the instructor deliberately non-rescuing.

Facilitator moves are designed to maintain psychological safety while refusing to over-direct. The course fails if the instructor becomes project manager for each student.

  5. Preserve the activist-manifesto tone from the website prompt.

Course language, slides, prompts, rubrics, and feedback should feel urgent, declarative, and responsibility-forward. The tone should say: "Someone has to fix it. Why not you?"

  6. Assess both product quality and ownership quality.

A good-looking artifact with weak initiative should not score as highly as a meaningful, self-started intervention with honest reasoning and strong loop-closing.

  7. Build AI grading around structured evidence, not vibes.

LLM evaluation should rely on anchored rubric criteria, required evidence fields, confidence scoring, and explicit failure modes to reduce over-generous scoring.

  8. Treat reflection as operational data.

The reflex journal is not a diary assignment. It is a behavior log showing when the student noticed a problem, what reflex fired, what they did, and what happened.

  9. Design for mixed-age, mixed-experience cohorts.

Students ages 15-25 may vary dramatically in authority, work context, and communication skill. Assignments therefore allow school, community, volunteer, online, and open-source contexts.

  10. Constrain the solve enough to force focus.

Students are not solving "everything wrong with the organization." They must define a narrow enough problem to produce a first implementation within two weeks.

Non-negotiables

  • No student may wait for a perfect brief.
  • No student may submit only slides, ideas, or recommendations without some implementation evidence.
  • No facilitator may fully solve a student's problem for them.
  • No grading system may reward confident storytelling over concrete work.
  • No feedback should shame incomplete work if the student acted with real ownership and honest reasoning.

3. Delivery Model Assumptions

Program format

  • Duration: 2-3 weeks
  • Live sessions: 5 sessions x 90 minutes
  • Async execution period: approximately 10-14 days between project launch and final delivery
  • Delivery mode: in-person, hybrid, or fully remote
  • Recommended cohort size: 12-24 students
  • Recommended facilitator staffing: 1 lead facilitator for up to 18 students; add 1 support coach for 19-24 students

Learner assumptions

  • Learners may have no formal job title, no team authority, and no prior project management experience.
  • Learners do not need technical skills, but they do need access to a real context where a problem exists.
  • Learners may source problems from school, clubs, nonprofits, family businesses, community groups, creator communities, online communities, or open-source projects.

Instructional assumptions

  • The facilitator can source a list of real problem opportunities before Session 3, or support students in identifying their own.
  • Students have access to basic productivity tools: docs, slides, camera/phone, email or messaging, and optionally no-code or coding tools.
  • Students can collect evidence during execution: screenshots, drafts, before/after states, stakeholder quotes, workflow maps, metrics, or prototypes.

Project scope assumptions

  • The problem must be narrow enough for a first implementation in under two weeks.
  • The student's goal is not full transformation; it is a credible first solve with visible ownership.
  • Students must not wait for perfect permission structures if the work is low-risk and within ethical bounds.
  • Students must escalate rather than act when the issue involves safety, legal exposure, privacy, finances, protected data, or high-impact organizational decisions.

Facilitation assumptions

  • The facilitator will model direct language, fast synthesis, and operational clarity.
  • The facilitator will use questions to push student judgment:
    • "What problem are you actually solving?"
    • "What would done look like?"
    • "What is the smallest real intervention?"
    • "Who needs to know?"
    • "What would make this irresponsible to do without escalation?"

Technology assumptions for AI/LLM use

  • An LLM may be used for formative feedback, grading support, rubric calibration, and draft coaching.
  • Final grading should be human-reviewed if stakes are high.
  • LLM scoring must be constrained by structured inputs and evidence excerpts, not full raw chat logs alone.

4. Detailed Course Content

Course-level arc

| Module | Session(s) | Core question | Output |
| --- | --- | --- | --- |
| Module 1: The Reflex and How It Works | Session 1 | Why do people walk past problems? | Reflex audit + first journal entry |
| Module 2: The Unprompted Solve Framework | Session 2 | How do I act without waiting for a brief? | Ownership protocol worksheet + scenario responses + project scope draft |
| Module 3: The Real Problem | Sessions 3-4 + async | Can I define, scope, and execute a real solve under ambiguity? | Problem brief + research evidence + first-draft implementation |
| Module 4: The Delivery | Session 5 | Can I defend the solve, show evidence, and reflect honestly? | Final presentation + artifact pack + reflex report |

Module 1: The Reflex and How It Works

Session 1 overview

  • Session length: 90 minutes
  • Session goal: Make students aware of their default response to problems and create the psychological frame that ownership is learnable behavior.
  • Required materials: slide deck, student notebook/doc, reflex audit worksheet, scenario cards, timer, whiteboard or shared board

Session 1 agenda at a glance

| Time | Segment | Purpose |
| --- | --- | --- |
| 0:00-0:10 | Opening manifesto + norms | Establish urgency and course culture |
| 0:10-0:25 | The four responses to a problem | Build shared vocabulary |
| 0:25-0:40 | Reflex audit individual exercise | Surface personal default patterns |
| 0:40-0:55 | Small-group debrief | Normalize resistance and compare triggers |
| 0:55-1:10 | Why the reflex exists mini-lesson | Explain social and organizational causes |
| 1:10-1:22 | Reading/application discussion | Connect ideas to lived situations |
| 1:22-1:30 | Reflex journal setup + exit ticket | Start longitudinal reflection |

Session 1 lesson and activity detail

Lesson 1.1: Opening manifesto and course contract
  • Timing: 10 minutes
  • Learning objective: Students understand that the course is about action under ambiguity, not compliance with instructions.
  • Facilitator moves:
  • Open with the line: "Most people see a problem and wait. This course is about deleting that instinct."
  • Set norms: evidence over polish, action over permission-seeking, honesty over performance.
  • Clarify that incomplete but real work is valued more than polished fiction.
  • Student outputs:
  • Written response to prompt: "What kind of person do I currently become when I see something broken?"
  • One sentence course commitment
Lesson 1.2: The Four Responses to a Problem
  • Timing: 15 minutes
  • Learning objective: Students can distinguish solving, flagging, escalating, and ignoring.
  • Activity: Instructor-led scenario sort
  • Facilitator moves:
  • Present 6 short scenarios from school, work, community, and online settings.
  • Ask students to classify the response they would usually choose and the response a high-ownership operator would choose.
  • Press on ambiguity. Some cases should appropriately escalate; the point is not reckless action.
  • Student outputs:
  • Scenario classification notes
  • One example of each response type from their own life
Activity: Reflex Audit
  • Timing: 15 minutes
  • Learning objective: Students identify the emotional and social blockers behind inaction.
  • Directions:
  • Students list 3 moments from the last month where they saw a problem and did not solve it.
  • For each moment they record:
  • What was broken
  • What they noticed first
  • What they told themselves
  • What risk felt present
  • What action they took or avoided
  • What likely happened because they did nothing
  • Facilitator moves:
  • Require specifics; reject generic phrasing like "I was busy."
  • Offer blocker categories: fear, uncertainty, authority, effort, social risk, skill gap, diffusion of responsibility.
  • Student outputs:
  • Completed reflex audit worksheet with 3 incidents
Activity: Small-group debrief
  • Timing: 15 minutes
  • Learning objective: Students normalize the ownership barrier and identify common reflex patterns.
  • Group format: triads
  • Facilitator moves:
  • Give each group 2 prompts:
  • "Which blocker shows up most often for you?"
  • "Which blocker sounds noble but is actually avoidance?"
  • Circulate and listen for patterns to bring back in debrief.
  • Student outputs:
  • Group list of top 3 blockers
  • Personal statement: "My strongest reflex trigger is ______."
Lesson 1.3: Why the Reflex Exists
  • Timing: 15 minutes
  • Learning objective: Students can explain how social proof, career safety, and status cues create passivity.
  • Content points from curriculum:
  • Social risk
  • Efficiency trap
  • Diffusion of responsibility
  • Status quo comfort
  • Career safety through instruction-following
  • Facilitator moves:
  • Use one example for each cause from school or workplace contexts.
  • Ask: "Which of these is protecting you, and which is just flattering your fear?"
  • Student outputs:
  • Annotated notes linking one cause to a personal example
Reading/application discussion
  • Timing: 12 minutes
  • Learning objective: Students connect social proof theory to inaction in organizations.
  • Reading basis: assigned chapter/excerpt from Robert Cialdini's Influence
  • Facilitator moves:
  • Use discussion prompts:
  • "Why do groups become less likely to act as the number of observers increases?"
  • "What is the cost of waiting for a signal that no one owns the problem?"
  • If students did not complete reading, summarize the core mechanism in 2 minutes and move to application.
  • Student outputs:
  • One written takeaway: "The idea from the reading that most explains my behavior is..."
Reflex journal setup and exit ticket
  • Timing: 8 minutes
  • Learning objective: Students understand the journal as an operational log.
  • Facilitator moves:
  • Introduce the journal template with five required fields:
  • Situation
  • Reflex
  • Action taken
  • Result
  • What I would do next time
  • Assign first entry due before Session 2.
  • Student outputs:
  • Journal template copied or shared
  • Exit ticket with one ownership opportunity they will act on before next session

Session 1 facilitator watch-fors

  • Students moralizing about "good people help" instead of focusing on operational behavior
  • Students using authority as a blanket excuse even in low-risk situations
  • Students confusing escalation with ownership
  • Overconfident students who default to action without thinking about risk or scope

Module 2: The Unprompted Solve Framework

Session 2 overview

  • Session length: 90 minutes
  • Session goal: Give students a repeatable protocol for deciding when and how to act without a brief.
  • Required materials: 5-step protocol handout, scenario set, scope ladder template, project sourcing sheet

Session 2 agenda at a glance

| Time | Segment | Purpose |
| --- | --- | --- |
| 0:00-0:10 | Journal debrief | Reinforce behavior change between sessions |
| 0:10-0:28 | The 5-step ownership protocol | Teach the operating sequence |
| 0:28-0:45 | Scope and success-definition drill | Prevent vague or oversized solves |
| 0:45-1:05 | Solve vs. escalate scenario lab | Build judgment under constraints |
| 1:05-1:20 | Real problem sourcing | Match students to project contexts |
| 1:20-1:30 | Project stake-your-claim draft | Prepare for Session 3 launch |

Session 2 lesson and activity detail

Journal debrief
  • Timing: 10 minutes
  • Learning objective: Students start to notice the reflex in real time.
  • Facilitator moves:
  • Ask for 3 volunteers to share one moment they acted and one moment they did not.
  • Reward specificity, not heroics.
  • Pull language from the website tone: "The instinct changes one moment at a time."
  • Student outputs:
  • One public or written example from their week
Lesson 2.1: The 5-Step Ownership Protocol
  • Timing: 18 minutes
  • Learning objective: Students can explain and apply the protocol.
  • Protocol components from curriculum:
  • Step 1: Assess scope
  • Step 2: Define success
  • Step 3: Stake your claim
  • Step 4: Execute and update
  • Step 5: Hand off or close
  • Facilitator moves:
  • Teach each step using a single running example.
  • Emphasize that "stake your claim" is not asking permission; it is creating clarity and accountability.
  • Show what a weak versus strong update looks like.
  • Student outputs:
  • Filled protocol notes
  • One-line definitions of each step in their own words
Activity: Scope and success-definition drill
  • Timing: 17 minutes
  • Learning objective: Students can turn a messy complaint into a solvable target.
  • Activity format: paired rapid reps
  • Directions:
  • Give pairs 4 vague problems such as:
  • "Volunteer onboarding is broken"
  • "The team communication is a mess"
  • "No one uses the docs"
  • "People keep missing deadlines"
  • Each pair rewrites each into:
  • narrow problem statement
  • solved-state sentence
  • likely owner/stakeholder
  • smallest first intervention
  • Facilitator moves:
  • Push students away from abstract language like "improve culture."
  • Ask: "What would someone see if this were fixed by Friday?"
  • Student outputs:
  • 4 mini scope statements
  • 4 solved-state sentences
Lesson 2.2 and activity: When to Solve vs. When to Escalate
  • Timing: 20 minutes
  • Learning objective: Students can distinguish initiative from recklessness.
  • Scenario categories:
  • Low-risk workflow gap
  • Missing documentation
  • Scheduling/coordination issue
  • Team conflict
  • Safety concern
  • Financial or legal decision
  • Access/privacy issue
  • Public-facing messaging issue
  • Facilitator moves:
  • Use a decision filter:
  • Do I have enough information to act?
  • Is the blast radius limited?
  • Is this reversible?
  • Would acting violate trust, safety, privacy, or authority boundaries?
  • If I do not act, what is the cost?
  • Make students justify both action and escalation decisions.
  • Student outputs:
  • Scenario response sheet with recommended action and rationale
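The decision filter above can be sketched as a tiny rule set. The mapping from answers to outcomes (boundary violations always escalate; informed, contained, reversible work is solvable; otherwise gather evidence unless inaction is costly) is one reasonable reading of the filter, not something the curriculum specifies:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    """One scenario, answered against the five-question decision filter."""
    enough_information: bool    # Do I have enough information to act?
    limited_blast_radius: bool  # Is the blast radius limited?
    reversible: bool            # Is this reversible?
    violates_boundaries: bool   # Would acting violate trust, safety, privacy, or authority?
    cost_of_inaction_high: bool # If I do not act, is the cost high?

def recommend(s: Situation) -> str:
    """Apply the filter in priority order."""
    if s.violates_boundaries:
        return "escalate"
    if s.enough_information and s.limited_blast_radius and s.reversible:
        return "solve"
    if s.cost_of_inaction_high:
        return "escalate"
    return "gather evidence"
```

In the scenario lab, students should still justify the answer to each question; the sketch only shows that the filter is a priority-ordered decision, not a vote.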
Activity: Real problem sourcing
  • Timing: 15 minutes
  • Learning objective: Students identify candidate problems for the core project.
  • Facilitator moves:
  • Present project source categories:
  • nonprofit
  • school/club
  • small business
  • community group
  • creator or online community
  • open-source project
  • Require each student to list 3 candidate problems.
  • Screen out problems that are too large, too vague, or inaccessible.
  • Student outputs:
  • Candidate problem list with initial scope notes
Activity: Project stake-your-claim draft
  • Timing: 10 minutes
  • Learning objective: Students draft the first communication that signals ownership.
  • Facilitator moves:
  • Provide a template:
  • "I noticed X. I am going to investigate the problem, define a workable first solution, and share a draft by [date]. If there are constraints I should know about, tell me. Otherwise I’ll proceed."
  • Clarify that students are not asking to be managed.
  • Student outputs:
  • Draft claim message or announcement
  • Project choice for Session 3

Session 2 facilitator watch-fors

  • Students proposing projects too broad for two weeks
  • Students using "I need more context" as avoidance
  • Students assuming ownership means unilateral action in high-risk cases
  • Students selecting problems with no possible access to evidence or implementation

Module 3: The Real Problem

Module 3 overview

  • Total live time: 180 minutes across Sessions 3 and 4
  • Async work: 8-12 hours recommended
  • Module goal: Students identify a real problem, research it, produce a first implementation, and navigate ambiguity without a supervisor driving the work.

Required student deliverables during Module 3

  • Problem brief v1
  • Stakeholder map
  • Evidence log
  • Root-cause hypothesis
  • Proposed solution outline
  • First-draft implementation
  • Progress update
  • Reflex journal entries

Session 3 overview

  • Session length: 90 minutes
  • Session goal: Move students from project idea to operational definition and execution plan.

Session 3 agenda at a glance

| Time | Segment | Purpose |
| --- | --- | --- |
| 0:00-0:12 | Project confirmation round | Lock each student into a real problem |
| 0:12-0:30 | Problem framing lab | Improve scope and problem statements |
| 0:30-0:50 | Stakeholder and evidence mapping | Prevent blind solving |
| 0:50-1:08 | Root-cause hypothesis workshop | Move beyond symptoms |
| 1:08-1:22 | First implementation design | Define the smallest real solve |
| 1:22-1:30 | Public commitment + next checkpoint | Create pressure and accountability |

Session 3 lesson and activity detail

Project confirmation round
  • Timing: 12 minutes
  • Learning objective: Every student leaves with one approved project scope.
  • Facilitator moves:
  • Each student states:
  • context
  • problem
  • affected people
  • why this is solvable in two weeks
  • Stop students whose projects are still complaints rather than problems.
  • Student outputs:
  • Finalized project statement
Problem framing lab
  • Timing: 18 minutes
  • Learning objective: Students sharpen the problem definition.
  • Required problem statement template:
  • "In [context], [specific group] experiences [specific friction/failure], which causes [consequence]. I believe the core cause may be [hypothesis]."
  • Facilitator moves:
  • Ask "How do you know?" after every claim.
  • Force distinction between surface symptom and root cause.
  • Student outputs:
  • Problem brief v1
Stakeholder and evidence mapping
  • Timing: 20 minutes
  • Learning objective: Students identify who is affected and what evidence they need.
  • Activity:
  • Map primary users, secondary stakeholders, blockers, approvers, and people who maintain the solve after handoff.
  • Build an evidence checklist:
  • observations
  • interviews/messages
  • existing documents
  • screenshots or workflow captures
  • baseline metrics
  • Facilitator moves:
  • Push students to collect direct evidence before solution lock-in.
  • Student outputs:
  • Stakeholder map
  • Evidence plan
Root-cause hypothesis workshop
  • Timing: 18 minutes
  • Learning objective: Students articulate 1-3 plausible causes and rank them.
  • Facilitator moves:
  • Use "5 whys" or cause tree method.
  • Ask students what evidence would disconfirm their top hypothesis.
  • Student outputs:
  • Ranked root-cause list
  • Evidence needed to confirm/disconfirm
First implementation design
  • Timing: 14 minutes
  • Learning objective: Students define a smallest real intervention.
  • Prompt:
  • "What is the most concrete thing you can build, change, draft, test, or implement before Session 4?"
  • Facilitator moves:
  • Ban vague deliverables like "research more."
  • Examples of acceptable first implementations:
  • draft onboarding checklist
  • prototype documentation site
  • meeting elimination proposal with calendar change
  • workflow automation
  • volunteer intake form and instructions
  • triage SOP draft with owner handoff
  • Student outputs:
  • Implementation plan with deadline and evidence type
Public commitment + next checkpoint
  • Timing: 8 minutes
  • Learning objective: Students create accountability pressure.
  • Facilitator moves:
  • Each student says:
  • what they will produce
  • by when
  • what evidence they will bring
  • Remind them that asking for help is allowed, but passivity is not.
  • Student outputs:
  • Public commitment statement

Async execution period between Sessions 3 and 4

  • Recommended time: 8-12 hours
  • Required student tasks:
  • Contact or observe stakeholders as needed
  • Collect evidence
  • Build the first implementation
  • Log blockers and decisions in reflex journal
  • Send at least one proactive update to relevant people
  • Facilitator role during async period:
  • Available but not proactive
  • Replies with questions before answers
  • Tracks who has gone silent; silence may indicate avoidance rather than lack of progress

Async student checkpoint template

  • What problem am I solving right now?
  • What evidence did I collect?
  • What changed in my understanding?
  • What did I build or test?
  • What is blocked?
  • What decision am I making next?

Session 4 overview

  • Session length: 90 minutes
  • Session goal: Pressure-test student judgment, surface common failure modes, and improve incomplete solves before final delivery.

Session 4 agenda at a glance

| Time | Segment | Purpose |
| --- | --- | --- |
| 0:00-0:24 | Two-minute progress updates | Force concise reporting |
| 0:24-0:46 | Peer hypothesis clinic | Unstick projects without taking them over |
| 0:46-1:06 | Failure mode teaching block | Address common cohort patterns |
| 1:06-1:22 | Presentation evidence build | Prepare final artifact pack |
| 1:22-1:30 | Final sprint commitments | Lock remaining work |

Session 4 lesson and activity detail

Two-minute progress updates
  • Timing: 24 minutes
  • Learning objective: Students report clearly on progress and blockers.
  • Student format:
  • Problem
  • What I learned
  • What I built
  • What is still broken
  • What I need to decide next
  • Facilitator moves:
  • Cut rambling. Require specificity.
  • Track recurring failure patterns on board:
  • symptom-only thinking
  • overbuilt solution
  • no stakeholder evidence
  • no handoff plan
  • hidden fear masked as "still researching"
  • Student outputs:
  • Short spoken update
Peer hypothesis clinic
  • Timing: 22 minutes
  • Learning objective: Students generate better questions, not premature fixes.
  • Format:
  • 3-4 students present stuck points
  • Peers respond only with:
  • possible root causes
  • missing evidence
  • reframed problem statements
  • No peer is allowed to say "just do X" in the first round
  • Facilitator moves:
  • Enforce question quality.
  • Model hypothesis language: "A possible reason this is failing is..."
  • Student outputs:
  • Revised project hypothesis or next action
Failure mode teaching block
  • Timing: 20 minutes
  • Learning objective: Students see common execution traps before final delivery.
  • Facilitator teaches:
  • Mistaking activity for progress
  • Solving a problem no one actually has
  • Building without a solved-state definition
  • Avoiding communication because work is imperfect
  • Delivering recommendations without implementation evidence
  • Student outputs:
  • Personal failure-mode risk checklist
Presentation evidence build
  • Timing: 16 minutes
  • Learning objective: Students prepare a coherent evidence package for Session 5.
  • Required artifact categories:
  • before state
  • research/evidence
  • what was built or changed
  • what happened after
  • what remains unresolved
  • Facilitator moves:
  • Show what counts as credible evidence.
  • Reject presentation decks that are not backed by artifacts.
  • Student outputs:
  • Draft presentation outline
  • Artifact pack checklist
Final sprint commitments
  • Timing: 8 minutes
  • Learning objective: Students leave with a narrow, executable plan.
  • Student outputs:
  • Final 72-hour plan
  • One stated risk and mitigation

Module 3 facilitator watch-fors

  • Students who are working hard but still have no evidence
  • Students who found a problem but built a solution disconnected from the root cause
  • Students whose intervention is too abstract to demo
  • Students who are avoiding upward communication because they fear being wrong
  • Students who solved something real but under-describe their work due to modesty or confusion

Module 4: The Delivery

Session 5 overview

  • Session length: 90 minutes
  • Session goal: Students present evidence of ownership, defend their reasoning, and articulate what changed in their reflex.
  • Presentation length: 6 minutes presentation + 4 minutes review/Q&A per student; at 10 minutes each, the 62-minute presentation block holds about 6 students, so for larger cohorts split into panels or shorten to 4 + 3 minutes (about 9 students per block).

Session 5 agenda at a glance

| Time | Segment | Purpose |
| --- | --- | --- |
| 0:00-0:08 | Reframe final session norms | Celebrate evidence and honesty |
| 0:08-1:10 | Student presentations and review | Assess ownership, judgment, and execution |
| 1:10-1:24 | Cohort debrief | Extract patterns across solves |
| 1:24-1:30 | Final reflex commitment | Lock in identity shift |

Student presentation structure

Students must cover all five components from the curriculum:

  1. Problem definition
  2. Solution
  3. Execution evidence
  4. What they would do next
  5. Reflex report

Required presentation artifact pack

  • Final problem brief
  • Evidence log
  • Before/after or baseline/current comparison
  • First-draft implementation artifact
  • Stakeholder communication example or summary
  • Reflex journal excerpt
  • Next-step plan

Session 5 facilitator moves

  • Open with: "Today is not about perfect answers. It is about whether you engaged reality."
  • During Q&A, ask:
  • "What evidence most changed your understanding?"
  • "What did you solve versus what remains unsolved?"
  • "What was the moment you were most tempted to wait for permission?"
  • "What did you ship before it felt ready?"
  • During review, emphasize:
  • quality of problem definition
  • relevance of intervention
  • degree of real action
  • honesty about limits

Peer review structure

After each presentation, peers submit:

  • One genuinely impressive aspect of the approach
  • One specific question the presenter should have considered
  • One thing they personally would have done differently

Session 5 student outputs

  • Final presentation
  • Artifact pack submission
  • Final reflex report
  • Personal commitment statement: "The next unprompted solve I will take on is..."

5. Assignments and Artifacts

Assignment overview

| Assignment | Due | Weight | Purpose |
| --- | --- | --- | --- |
| Reflex Journal | Ongoing, weekly | 10% | Track real moments of reflex and override |
| Problem Analysis Writeup | Before Session 4 or at Session 4 | 20% | Define the problem accurately and show evidence |
| Solution Design | With final submission | 20% | Explain the chosen intervention and alternatives |
| Execution Evidence | Final submission | 30% | Prove a real first implementation happened |
| Final Presentation | Session 5 | 20% | Communicate what was solved, learned, and left open |
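The weights combine into a final grade by simple weighted average. A minimal sketch, assuming 0-100 per-assignment scores (the dict keys and function name are illustrative, not part of the spec):

```python
# Assignment weights from the table above; they sum to 1.0.
WEIGHTS = {
    "reflex_journal": 0.10,
    "problem_analysis": 0.20,
    "solution_design": 0.20,
    "execution_evidence": 0.30,
    "final_presentation": 0.20,
}

def final_grade(scores: dict) -> float:
    """Weighted average of per-assignment scores on a 0-100 scale.
    Refuses to grade an incomplete submission set."""
    missing = set(WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing scores: {sorted(missing)}")
    return sum(scores[k] * w for k, w in WEIGHTS.items())
```

Note the 30% weight on Execution Evidence: a student who presents well but ships nothing loses more ground than one who ships a rough first draft and presents plainly, which matches the "evidence over polish" design decision.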

Assignment 1: Reflex Journal

Objective: Build the habit of noticing and overriding the "not my job" reflex.

Minimum requirement:

  • 4 entries across the course
  • At least 2 entries must describe real action taken
  • Remaining entries may describe moments of avoidance, provided they are analyzed honestly

Required template per entry:

  • Situation
  • What I noticed
  • What reflex fired
  • What I told myself
  • What action I took or did not take
  • Why
  • Result
  • What I learned

Acceptable evidence:

  • Written entry
  • Voice note transcript
  • Timestamped screenshot with explanatory note

Assignment 2: Problem Analysis Writeup

Objective: Show that the student understands the actual problem rather than a surface complaint.

Required sections:

  • Context and affected group
  • Problem statement
  • Scope
  • Why it matters
  • Evidence collected
  • Root-cause hypotheses
  • Constraints and risks
  • Solve vs. escalate rationale

Expected length:

  • 500-1000 words, or equivalent slide/doc format with evidence

Minimum evidence threshold:

  • At least 3 concrete evidence points
  • At least 1 direct observation or direct stakeholder input

Assignment 3: Solution Design

Objective: Defend the chosen intervention with credible reasoning.

Required sections:

  • Solved-state sentence
  • Chosen intervention
  • Why this approach
  • Alternatives considered and rejected
  • Implementation plan
  • Communication plan
  • Handoff/closure plan

Minimum quality threshold:

  • Must include at least 2 rejected alternatives or explain why only one was feasible
  • Must show how the solution fits the actual scope

Assignment 4: Execution Evidence

Objective: Demonstrate that the student delivered a real first implementation.

Allowed artifact types:

  • live prototype
  • draft SOP or process doc in use
  • workflow automation
  • new onboarding asset
  • documentation page
  • changed scheduling process
  • communication artifact sent and acknowledged
  • dashboard, form, checklist, guide, or script

Not sufficient on its own:

  • slides
  • brainstorming notes
  • opinions
  • a list of recommendations with no implementation

Required evidence package:

  • artifact itself
  • evidence of before state
  • evidence of after state or initial effect
  • evidence of ownership communication
  • note on limitations and remaining issues

Assignment 5: Final Presentation

Objective: Present the work clearly, honestly, and defensibly.

Format options:

  • live presentation
  • recorded presentation if remote/asynchronous

Required structure:

  • What was broken
  • What I discovered
  • What I did
  • What happened
  • What I would do next
  • What I learned about my reflex

Artifact management requirements

Each student should maintain one shared folder or portfolio page containing:

  • reflex journal
  • writeup
  • design rationale
  • execution evidence
  • final slides or presentation outline
  • raw supporting evidence

This folder is the preferred input for both human review and LLM evaluation.
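A facilitator could sanity-check folder completeness before review with a small script. A sketch under the assumption that each required category appears somewhere in a file or subfolder name; the category strings are illustrative, since the spec names content categories rather than exact filenames:

```python
from pathlib import Path

# One entry per required portfolio item listed above (names are
# illustrative matching keys, not mandated filenames).
REQUIRED_ITEMS = [
    "reflex_journal",
    "problem_writeup",
    "solution_design",
    "execution_evidence",
    "presentation",
    "raw_evidence",
]

def missing_items(folder: Path) -> list[str]:
    """Return required categories with no matching file or subfolder.
    A category counts as present if any entry name contains it."""
    names = [p.name.lower() for p in folder.iterdir()]
    return [item for item in REQUIRED_ITEMS
            if not any(item in name for name in names)]
```

Running this before grading turns "incomplete artifact pack" into a concrete, student-fixable list rather than a judgment call.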

6. AI/LLM Grading and Assessment Framework

Role of AI in this course

LLMs are useful here because the course produces structured qualitative evidence: problem framing, reasoning, reflections, and artifacts with explanatory context. They should support, not replace, educator judgment.

Recommended grading workflow

  1. Student submits structured artifact pack.
  2. A preprocessing step extracts required fields into a standard evaluation sheet.
  3. LLM evaluates each rubric dimension separately using only the evidence tied to that dimension.
  4. LLM returns:
  • score
  • rationale
  • evidence cited
  • confidence
  • flags for missing evidence or suspected inflation
  5. Human reviewer spot-checks borderline, low-confidence, or high-stakes cases.
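Steps 2-4 of the workflow can be sketched as one per-dimension evaluation call. The function name, prompt wording, and `llm` callable are assumptions, not a prescribed implementation; the returned fields follow the list above, and the default stub only flags empty evidence rather than guessing a score:

```python
import json

RUBRIC_DIMENSIONS = ["problem_analysis", "solution_design",
                     "execution_evidence", "presentation"]

def evaluate_dimension(dimension: str, evidence: str,
                       rubric: str, llm=None) -> dict:
    """Score ONE rubric dimension using only the evidence tied to it.
    `llm` is any callable mapping a prompt string to a JSON string."""
    prompt = (
        f"Dimension: {dimension}\n"
        f"Rubric: {rubric}\n"
        f"Evidence (score this, not effort language):\n{evidence}\n"
        'Return JSON with keys: score (0-4), rationale, '
        'evidence_cited, confidence (0-1), flags.'
    )
    if llm is None:
        # Without a model attached, refuse to invent scores; only the
        # deterministic missing-evidence check runs.
        if not evidence.strip():
            return {"score": 0, "rationale": "no evidence supplied",
                    "evidence_cited": [], "confidence": 1.0,
                    "flags": ["missing_evidence"]}
        raise NotImplementedError("attach a model callable to score evidence")
    return json.loads(llm(prompt))
```

Evaluating each dimension in a separate call, with only its own evidence, is what keeps a strong presentation from bleeding into the Execution Evidence score.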

Structured evaluation inputs for the LLM

Every evaluation packet should include:

  • course name and assignment name
  • rubric criteria
  • student problem statement
  • solved-state sentence
  • evidence summary
  • artifact descriptions
  • selected journal excerpts
  • presentation transcript or notes
  • evaluator instructions
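A minimal sketch of such a packet, assuming illustrative snake_case field names for the items listed above:

```python
# Sketch of an evaluation packet. Field names are illustrative
# snake_case versions of the required items; values are placeholders.

REQUIRED_PACKET_FIELDS = [
    "course_name", "assignment_name", "rubric_criteria",
    "problem_statement", "solved_state_sentence", "evidence_summary",
    "artifact_descriptions", "journal_excerpts",
    "presentation_notes", "evaluator_instructions",
]

def build_packet(**fields):
    """Assemble a packet and report which required fields are missing,
    so incomplete submissions can be flagged before any LLM call."""
    missing = [f for f in REQUIRED_PACKET_FIELDS if f not in fields]
    return {"packet": fields, "missing_fields": missing}
```

Checking completeness at assembly time keeps missing-evidence detection cheap and deterministic instead of leaving it entirely to the model.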

LLM grading principles

  • Score evidence, not effort language.
  • Penalize unsupported claims.
  • Do not reward confidence, charisma, or polished phrasing if artifacts are weak.
  • Distinguish "real but small" from "ambitious but imaginary." Real but small should usually score higher.
  • Reward correct escalation decisions when action would have been irresponsible.
  • Treat honest acknowledgment of limitations as a strength when backed by real work.

Suggested dimension-level heuristics

Problem Analysis heuristics

High-scoring indicators:

  • Problem statement names a specific affected group and specific friction.
  • Student distinguishes symptom from cause.
  • Evidence includes direct observation or stakeholder input.
  • Scope is narrow enough to act on.

Low-scoring indicators:

  • Problem is framed as a vague complaint.
  • No evidence shows the problem is real.
  • Student leaps to solution without diagnosis.
  • Problem scope is too large or undefined.

Solution Design heuristics

High-scoring indicators:

  • Chosen intervention clearly maps to the diagnosed problem.
  • Alternatives were considered seriously.
  • Solved-state sentence is concrete and observable.
  • Communication and handoff are planned.

Low-scoring indicators:

  • Solution is generic, fashionable, or disconnected from evidence.
  • Alternatives are missing or straw-manned.
  • No clear definition of success.

Execution Evidence heuristics

High-scoring indicators:

  • There is a concrete artifact or implemented change.
  • Before/after evidence or early signal of effect is shown.
  • Student communicated updates or ownership proactively.
  • The work clearly moved from idea to action.

Low-scoring indicators:

  • Submission contains only plans, notes, or slides.
  • Artifact is too incomplete to show any real implementation.
  • No sign the student actually engaged the real context.

Presentation heuristics

High-scoring indicators:

  • Student explains reasoning clearly and honestly.
  • Student can name what worked, what failed, and what remains unresolved.
  • Reflection is concrete, not self-congratulatory.

Low-scoring indicators:

  • Presentation hides uncertainty.
  • Reflection is generic ("I learned leadership") without specifics.
  • Student cannot answer basic questions about evidence or tradeoffs.

Reflex Journal heuristics

High-scoring indicators:

  • Entries show real moments of hesitation and choice.
  • Student names the reflex and the override.
  • Learning compounds across entries.

Low-scoring indicators:

  • Entries are performative or obviously backfilled.
  • No actual behavior change is visible.
  • Entries are too vague to verify.

Confidence scoring guidance for LLM evaluation

The LLM should output one of:

  • High confidence: sufficient evidence across required fields; rubric anchors clearly met
  • Medium confidence: some evidence gaps; score likely directionally right
  • Low confidence: missing artifacts, ambiguous submission, or conflicting signals

Low-confidence cases should be escalated to human review.
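This three-level guidance reduces to a small decision rule; the function name is illustrative:

```python
# Sketch: map the LLM's confidence label to an escalation decision.
def needs_escalation(confidence):
    """Low-confidence cases go to human review; any other label must
    be one of the three allowed values."""
    if confidence not in ("High", "Medium", "Low"):
        raise ValueError(f"unknown confidence label: {confidence}")
    return confidence == "Low"
```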

Missing-evidence flags the LLM should detect

  • No direct evidence that the problem exists
  • No implementation artifact
  • No sign of stakeholder contact or observation where it would be expected
  • Claims of impact with no proof
  • Reflection language that does not map to actual actions
  • Signs the student relied on AI to draft insight without lived specifics

Suggested anti-inflation checks

  • Compare claims of impact against the narrowness of the intervention.
  • Look for repeated generic phrases across journal, writeup, and presentation.
  • Check whether "what I learned" sections contain details only someone who did the work would know.
  • Require at least one concrete artifact reference in each rationale.
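The second check above (repeated generic phrases across journal, writeup, and presentation) can be approximated with shared word n-grams. The 5-gram length and the requirement that a phrase appear in all three sources are illustrative assumptions, not part of the spec:

```python
# Sketch of the "repeated generic phrases" anti-inflation check: find
# word 5-grams that appear verbatim in all three documents. The n-gram
# length and the all-three-sources rule are illustrative choices.

def ngrams(text, n=5):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_phrases(journal, writeup, presentation, n=5):
    """Return n-grams copied verbatim across all three artifacts --
    a signal of boilerplate reflection rather than lived detail."""
    return ngrams(journal, n) & ngrams(writeup, n) & ngrams(presentation, n)
```

Any non-empty result is a prompt for the human reviewer to look closer, not an automatic penalty.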

7. Rubrics, Scoring Criteria, and Evaluator Prompt Guidance

Master scoring rubric

| Component | Weight | 4 - Excellent | 3 - Competent | 2 - Emerging | 1 - Insufficient |
|---|---|---|---|---|---|
| Problem Analysis | 20% | Problem is specific, evidence-based, scoped well, and root-cause aware | Problem is mostly clear with some evidence and workable scope | Problem is partially clear but shallow, broad, or weakly evidenced | Problem is vague, unsupported, or misidentified |
| Solution Design | 20% | Solution fits diagnosis, alternatives considered, success defined clearly | Solution is plausible and mostly aligned to problem | Solution is partially aligned but under-justified or vague | Solution is disconnected, generic, or undefined |
| Execution Evidence | 30% | Real artifact/change delivered with credible evidence of implementation | Some real implementation shown, though limited in completeness or effect | Minimal implementation; effort visible but evidence weak | No meaningful implementation shown |
| Presentation | 20% | Clear, honest, evidence-driven, reflective, and defensible | Understandable and mostly honest, with minor gaps | Some insight present but unclear, generic, or evasive | Unclear, unsupported, or performative |
| Reflex Journal | 10% | Multiple specific entries show real behavioral change | Entries are specific enough and show some growth | Entries are inconsistent, vague, or shallow | Entries are missing, generic, or non-credible |
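One way to compute a weighted final score out of 100 from these weights and the 1-4 anchors is to normalize each score by the 4-point maximum. The spec does not fix a formula, so this normalization is an assumption:

```python
# Sketch: weighted total out of 100. Weights come from the master
# rubric; dividing by 4 (the top anchor) is an assumed normalization.

WEIGHTS = {
    "problem_analysis": 0.20,
    "solution_design": 0.20,
    "execution_evidence": 0.30,
    "presentation": 0.20,
    "reflex_journal": 0.10,
}

def weighted_total(scores):
    """scores: dict mapping each component to its anchor score (1-4)."""
    return round(100 * sum(WEIGHTS[c] * scores[c] / 4 for c in WEIGHTS), 1)
```

Under this formula, straight 4s yield 100.0 and straight 1s yield 25.0, so the floor reflects that a graded submission always earns some credit.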

Detailed scoring criteria by component

Problem Analysis scoring anchors

  • Score 4:
    • Names specific user/group, context, and consequence
    • Includes 3 or more strong evidence points
    • Distinguishes symptom vs. cause convincingly
    • Scope is realistically solvable
  • Score 3:
    • Problem is clear but evidence is lighter or causal reasoning is incomplete
    • Scope is workable with minor ambiguity
  • Score 2:
    • Problem exists but is poorly bounded, lightly evidenced, or overly symptom-focused
  • Score 1:
    • Problem definition is mostly opinion, abstraction, or projection

Solution Design scoring anchors

  • Score 4:
    • Intervention directly addresses diagnosed friction
    • Success is observable
    • Alternatives were seriously considered
    • Handoff/update plan is credible
  • Score 3:
    • Solution generally fits but rationale or alternatives are uneven
  • Score 2:
    • Solution is weakly matched to problem or underdeveloped
  • Score 1:
    • Solution is generic, unrealistic, or missing

Execution Evidence scoring anchors

  • Score 4:
    • Artifact is real, inspectable, and clearly owned by student
    • Before/after or early effect is documented
    • Student took initiative without waiting for a full brief
  • Score 3:
    • Real artifact exists, but implementation depth or effect evidence is partial
  • Score 2:
    • Some progress toward implementation exists, but artifact is thin or mostly preparatory
  • Score 1:
    • No credible implementation evidence

Presentation scoring anchors

  • Score 4:
    • Presenter is concise, specific, and honest about limits
    • Can explain tradeoffs and next steps
    • Reflex learning is concrete
  • Score 3:
    • Presentation is understandable with minor vagueness
  • Score 2:
    • Reflection or explanation lacks specificity
  • Score 1:
    • Presentation is evasive, unsupported, or confusing

Reflex Journal scoring anchors

  • Score 4:
    • Entries are timestamped or situationally credible, behaviorally specific, and show evolution
  • Score 3:
    • Entries are mostly credible and specific, with some repetition or surface-level reflection
  • Score 2:
    • Entries are inconsistent or too general
  • Score 1:
    • Entries appear fabricated, generic, or absent

### Suggested evaluator prompt for assignment-level scoring

```text
You are evaluating student work for Course 05: The Unprompted Solve.

Your job is to score this submission using the provided rubric only. Do not reward style, confidence, or polished language unless the evidence supports it. Reward real ownership, sound judgment, and concrete implementation.

Important principles:
- Evidence over polish
- Real but small beats ambitious but imaginary
- Correct escalation can score well when acting would be irresponsible
- Honest reflection is a strength only when tied to actual events
- Missing evidence must reduce scores

For each rubric category:
1. Give a score from 1 to 4
2. Cite the evidence that justifies the score
3. Name one strength
4. Name one gap
5. State confidence: High, Medium, or Low

Then calculate the weighted final score out of 100.

If the submission lacks enough evidence to grade fairly, say so clearly and mark confidence Low.
```

### Suggested evaluator prompt for artifact credibility checking

```text
Review this submission for credibility and evidence quality.

Answer:

1. What concrete evidence shows the student engaged a real problem?
2. What claims appear unsupported or inflated?
3. Is there a real implementation artifact, or only plans and recommendations?
4. Does the reflection contain details that suggest lived experience rather than generic AI-generated language?
5. What missing evidence most limits confidence in scoring?

Be skeptical but fair. Do not invent facts that are not present.
```


### Suggested evaluator prompt for reflex journal scoring

```text
Evaluate these reflex journal entries for behavioral specificity and evidence of ownership growth.

Look for:
- a real situation
- the internal reflex described clearly
- a decision point
- an action taken or avoided
- a concrete consequence
- learning that becomes sharper over time

Do not reward generic self-help language. Reward precise, lived, situational reflection.
```

### Recommended output schema for LLM graders

```json
{
  "assignment": "string",
  "scores": {
    "problem_analysis": {"score": 1, "confidence": "High", "evidence": ["..."], "strength": "...", "gap": "..."},
    "solution_design": {"score": 1, "confidence": "High", "evidence": ["..."], "strength": "...", "gap": "..."},
    "execution_evidence": {"score": 1, "confidence": "High", "evidence": ["..."], "strength": "...", "gap": "..."},
    "presentation": {"score": 1, "confidence": "High", "evidence": ["..."], "strength": "...", "gap": "..."},
    "reflex_journal": {"score": 1, "confidence": "High", "evidence": ["..."], "strength": "...", "gap": "..."}
  },
  "weighted_total": 0,
  "missing_evidence_flags": ["..."],
  "human_review_required": false,
  "summary": "string"
}
```
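A sketch of a validity check over this schema, enforcing the rule that low-confidence dimensions force human review. Treating non-empty missing-evidence flags the same way is an added assumption:

```python
# Sketch: validate an LLM grader's output against the schema above.
# Any Low-confidence dimension (or, as an assumption here, any
# missing-evidence flag) should force human review.

REQUIRED_DIMS = ["problem_analysis", "solution_design",
                 "execution_evidence", "presentation", "reflex_journal"]

def validate_grader_output(result):
    """Return a list of problems; an empty list means the output is
    schema-valid and the escalation flag is consistent."""
    problems = []
    scores = result.get("scores", {})
    for dim in REQUIRED_DIMS:
        entry = scores.get(dim)
        if entry is None:
            problems.append(f"missing dimension: {dim}")
            continue
        if entry.get("score") not in (1, 2, 3, 4):
            problems.append(f"{dim}: score out of range")
        if entry.get("confidence") not in ("High", "Medium", "Low"):
            problems.append(f"{dim}: bad confidence label")
    must_review = any(
        scores.get(d, {}).get("confidence") == "Low" for d in REQUIRED_DIMS
    ) or bool(result.get("missing_evidence_flags"))
    if must_review and not result.get("human_review_required"):
        problems.append("human_review_required should be true")
    return problems
```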

## 8. Feedback Strategy

### Feedback philosophy

Feedback in this course should strengthen agency, judgment, and truthfulness. It should not become therapy, vague encouragement, or punishment for imperfect first attempts. The right response is direct, evidence-based, and oriented to next action.

### What strong, average, and weak responses look like

#### Strong response profile

Characteristics:
- Student identified a real and narrow problem
- Collected concrete evidence
- Built or changed something real
- Communicated proactively
- Reflected honestly on tradeoffs and remaining gaps

How an LLM should respond:
- Name the strongest ownership behavior explicitly
- Point to the evidence that makes the work credible
- Highlight what makes the problem definition or implementation sharp
- Suggest one next-level improvement rather than many

Example feedback stance:
"This is strong because you moved from complaint to implementation. Your problem definition is specific, your evidence changed your understanding, and your artifact shows real action. The next step is to strengthen the handoff so the solve survives beyond you."

#### Average response profile

Characteristics:
- Student engaged a real problem but the scope or evidence is uneven
- Some implementation happened, but the artifact is partial
- Reasoning is plausible but not fully defended
- Reflection is real but somewhat generic

How an LLM should respond:
- Validate the real action taken
- Isolate the main missing element
- Give 1-2 concrete upgrades tied to the rubric
- Avoid flattening the work into generic praise

Example feedback stance:
"You did not wait to be told what to do, which matters. The main gap is that your evidence for root cause is still thin, so the solution feels more intuitive than proven. Strengthen this by adding direct user evidence and clarifying what changed after your intervention."

#### Weak response profile

Characteristics:
- Problem is vague or imagined
- No implementation artifact exists
- Submission contains recommendations, intentions, or polished language only
- Reflection sounds generic or backfilled
- Student may have mistaken motion for progress

How an LLM should respond:
- Be clear that evidence is insufficient
- Separate effort from outcome without being harsh
- Point to the exact rubric failures
- Give a short recovery path

Example feedback stance:
"This submission shows interest in the problem, but not yet a real solve. The main issue is the absence of implementation evidence and the lack of concrete proof that the defined problem exists in the way you describe. To improve, narrow the problem, collect direct evidence, and ship one tangible intervention you can show."

### Tone rules for LLM feedback

- Direct, not sugar-coated
- Respectful, not punitive
- Specific, not abstract
- Actionable, not overwhelming
- Anchored to rubric criteria and visible evidence

### Feedback structure the LLM should use

1. `What is working`
2. `What is not yet convincing`
3. `What evidence is missing`
4. `Best next move`

### Example short-form feedback template

```text
What is working:
[1-2 concrete strengths tied to evidence]

What is not yet convincing:
[main weakness tied to rubric]

What evidence is missing:
[specific missing artifact, metric, quote, before/after, or communication]

Best next move:
[single highest-leverage action]
```

### Escalation rules for feedback

The LLM should recommend human review when:

- Student work raises ethical, safety, privacy, or authority concerns
- Artifact authenticity is unclear
- Reflection suggests distress or interpersonal conflict beyond course scope
- Score confidence is low due to missing evidence

### Common failure patterns and recommended LLM responses

| Failure pattern | What it looks like | Recommended LLM response |
|---|---|---|
| Vague problem | "Communication is bad" | Ask for a narrower problem with a concrete affected group and consequence |
| Symptom-chasing | Fix addresses visible mess, not cause | Point out mismatch between diagnosis and intervention |
| Recommendation-only | Strong deck, no artifact | State clearly that planning is not implementation in this course |
| Fear masked as research | Endless context-gathering | Push for smallest reversible intervention |
| Reckless ownership | Student acted where escalation was needed | Note initiative, then mark judgment gap and explain why escalation mattered |
| Inflated reflection | Generic leadership language | Ask for one real moment, one actual decision, one observable consequence |

## Implementation Notes for the Facilitator Team

- Use the activist-manifesto language from the website prompt in slides and session openers.
- Display the four responses and the five-step protocol in the room every session.
- Keep a live board of ownership language:
  - "I noticed..."
  - "The actual problem is..."
  - "Done looks like..."
  - "I’m going to look into this..."
  - "Here’s what I learned..."
  - "Here’s what I shipped..."
- Celebrate real, bounded action.
- Do not let students hide behind either polish or uncertainty.

## Definition of Success for Course 05

This course succeeds if, by the end, students no longer experience unowned problems as neutral background noise. They should experience them as invitations to assess, act, update, and close. The final proof is not that every student solved a big problem. The final proof is that each student has evidence that they became someone who engages reality before being asked.