COURSE 11 / SOURCE 11_dri_ownership_course_content.md

DRI & Ownership

Scoped ownership, signal extraction, and execution without hierarchy.

Course 11 Course Content Spec

1. Title and source files used

Course title: Project Agni Course 11: DRI & Ownership
Subtitle: The Directly Responsible Individual - Owning Outcomes Without Hierarchy

Source files used

  • 11-dri-ownership/curriculum.md
  • 11-dri-ownership/website-prompt.md

Purpose of this document

  • Turn the curriculum into an implementation-ready course-content specification.
  • Define the default delivery model, facilitator guidance, student deliverables, and assessment logic.
  • Provide AI/LLM evaluation rules that can be used consistently across a live cohort, async cohort, or productized course site.

2. Design decisions at the top

  1. DRI is taught as a behavior set, not a job title.

Students practice taking scoped ownership inside environments where they may have no formal authority.

  2. The course is built around one real mandate, not abstract discussion.

Every module feeds a single student-owned DRI problem so the course produces a portfolio, not just reflection.

  3. Scope clarity is the first gate.

Students cannot earn strong scores through ambition alone. If the mandate is vague, oversized, or not measurable, downstream work is capped.

  4. Failure is treated as signal extraction.

Learners are rewarded for detecting capability gaps, documenting failed assumptions, and translating breakdowns into system insight.

  5. Artifacts are first-class outputs.

Notes, decision logs, stakeholder maps, postmortems, and handoff documents are graded as operational tools, not administrative overhead.

  6. The tone should match the website prompt.

The course should feel like a mission briefing for edge operators inside a world-model-enabled organization: precise, dense, outcome-focused, and anti-fluff.

  7. Assessment prioritizes observable judgment.

Rubrics measure specificity, tradeoff quality, causality, and execution logic rather than charisma, confidence, or corporate language.

  8. AI grading is allowed, but never unconstrained.

LLMs score against explicit heuristics, must cite evidence from student submissions, and must escalate uncertain or contradictory cases for human review.

  9. The capstone is the course.

Session activities are structured so every artifact is reusable in the final DRI Project Portfolio.

  10. Implementation must work in live, blended, or fully async delivery.

The default model is blended intensive, but all activities specify what changes if the sync layer is removed.

3. Delivery model assumptions

Default format

  • Format: 3-session intensive with async prep plus live synchronous workshops
  • Recommended duration: 10-14 days total
  • Live sessions: 3 x 150 minutes
  • Async workload: 4.5-6 hours total
  • Capstone execution window: 7 days minimum between Session 2 and Session 3

Recommended cohort profile

  • Early-career to senior contributors, founders, operators, PMs, designers, engineers, analysts, or students running real projects
  • Best for learners who currently face ambiguous ownership or cross-functional coordination problems
  • No formal management experience required

Cohort size assumptions

  • Ideal live cohort: 12-24 students
  • Minimum viable cohort: 6 students
  • Maximum without extra facilitation: 30 students
  • Add one teaching assistant or second facilitator for cohorts above 24

Delivery infrastructure assumptions

  • LMS or course hub that can host readings, prompts, and artifact uploads
  • Shared discussion or chat space for peer review
  • Video meeting platform with breakout rooms for live delivery
  • A structured submission surface for LLM grading, ideally markdown or form-based
  • Artifact templates available as copyable markdown

Facilitation assumptions

  • Facilitator understands DRI as scoped ownership without hierarchical authority
  • Facilitator can coach students on narrowing scope, naming tradeoffs, and distinguishing symptoms from root causes
  • Facilitator does not need to be a Block insider, but must hold the course philosophy consistently

Assessment assumptions

  • LLM performs first-pass scoring on clearly structured submissions
  • Human review is required for:
      • safety or ethics concerns
      • obviously fabricated evidence
      • contradictory scoring signals
      • high-stakes borderline pass/fail decisions
      • cases where students propose organizationally sensitive actions

Accessibility and modality assumptions

  • All key content should exist in text form
  • Students can complete the course fully async if necessary
  • Live exercises must have written alternatives
  • Students may use AI tools to draft, but must disclose tool use and remain accountable for final submissions

4. Detailed course content broken down by module, session, lesson, activity, timing, facilitator moves, and student outputs

Course-level learning outcomes

By the end of the course, students will be able to:

  • define a DRI mandate with clear scope, boundaries, metrics, resource needs, and time horizon
  • act on a problem without defaulting to hierarchy for every decision
  • document artifacts that improve organizational intelligence
  • distinguish execution failure from system or model failure
  • run a short-cycle DRI project from definition through handoff

Course-level artifact set

Each student produces a single DRI Project Portfolio containing:

  1. DRI Mandate
  2. Scope Audit
  3. Stakeholder and resource map
  4. Artifact log
  5. Decision log
  6. Failure postmortem
  7. Final outcome summary
  8. Handoff memo
  9. Reflection on judgment calls and scope renegotiation

Module 1 / Session 1

Session 1: The DRI Construct - What It Actually Means

Session goal: Students leave with a scoped DRI mandate attached to a real problem worth owning.

Required prework

  • Read the short course brief on DRI versus manager versus team lead
  • Bring one example of a problem where ownership was unclear
  • Submit a 150-250 word pre-course note:
      • What problem in your environment slows progress?
      • Who is supposedly responsible?
      • Why is it still unresolved?

Session length: 150 minutes live
Async equivalent: 60-75 minutes

Lesson 1.1: Why hierarchy creates latency

Timing: 20 minutes

Content focus

  • hierarchy as an information-routing mechanism
  • latency introduced by management layers
  • why DRI exists in systems where intelligence is pushed into infrastructure

Facilitator moves

  • Open with a concrete story of a stalled problem that had many stakeholders but no owner
  • Contrast "who approves?" with "who owns the outcome?"
  • Ask students to name where delay shows up in their current environment
  • Capture examples on a board under two columns: routing work vs owning work

Student outputs

  • live annotation or written note: one example of a decision delayed by routing

Lesson 1.2: What a DRI is and is not

Timing: 30 minutes

Content focus

  • DRI as owner of a specific problem or customer outcome
  • scoped, time-bound, outcome-defined mandates
  • distinction between authority to decide and authority to pull resources

Activity: Definition stress test

Timing: 15 minutes inside lesson

Instructions

  • Students review three sample mandates:
      • vague and role-based
      • overbroad and unrealistic
      • properly scoped and measurable
  • Students label each as weak, medium, or strong and explain why

Facilitator moves

  • Push students to explain scope boundaries, not just whether the mandate "sounds good"
  • Reject empty corporate phrasing such as "drive alignment" unless paired with an actual outcome
  • Reinforce that DRI is not equal to permanent ownership of a domain

Student outputs

  • short written critique of sample mandates

Lesson 1.3: Components of a well-scoped DRI mandate

Timing: 35 minutes

Content focus

  • problem statement
  • scope in / scope out
  • success metric
  • time horizon
  • resource access
  • decision rights and escalation triggers

Activity: Scope Audit

Timing: 20 minutes inside lesson

Instructions

  • Students select a real problem with unclear ownership
  • They map:
      • who was nominally responsible
      • who actually made decisions
      • where information broke down
      • what a proper DRI scope would need to include

Facilitator moves

  • Force specificity: "unclear communication" is not enough; ask where, between whom, and with what consequence
  • Help students reduce oversized problem frames into tractable slices
  • Ask "What would success look like in 7-30 days, not someday?"

Student outputs

  • Scope Audit worksheet draft

Lesson 1.4: Build the first DRI mandate

Timing: 45 minutes

Activity: 48-Hour DRI Sprint Draft

Instructions

  • Students draft a mandate containing:
      • problem statement
      • why the problem matters now
      • scope boundaries
      • measurable success indicators
      • resource dependencies
      • first 48-hour actions
      • likely risks

Peer review protocol

  • 10 minutes solo drafting
  • 10 minutes pair review
  • 10 minutes revision

Peer review prompts

  • Is the mandate small enough to own?
  • Can a third party tell when this is done?
  • Does the mandate name what is explicitly out of scope?
  • Is the first signal visible within 48 hours?

Facilitator moves

  • Circulate and narrow problem statements that are actually team charters
  • Encourage measurable proxies if direct business metrics are unavailable
  • Ask students to replace output metrics with outcome metrics when possible

Student outputs

  • Draft 1 of DRI Mandate

Session 1 close

Timing: 20 minutes

Wrap prompts

  • What changed when you moved from "someone should" to "I own"?
  • Where is your mandate still unsafe, vague, or oversized?

Homework after Session 1

  • Revise DRI Mandate
  • Complete Scope Audit
  • Post mandate to peer forum
  • Comment on two peers using the mandate critique rubric

Artifacts due before Session 2

  • DRI Mandate v2
  • Scope Audit

Module 2 / Session 2

Session 2: Operating Without a Manager - Edge Execution in Practice

Session goal: Students practice judgment, artifact discipline, and failure logging while executing the first stage of their mandate.

Required prework

  • Submit DRI Mandate v2
  • Bring one real execution obstacle encountered after Session 1
  • Read the artifact log and failure postmortem templates

Session length: 150 minutes live
Async equivalent: 75-90 minutes

Lesson 2.1: The edge person's job

Timing: 25 minutes

Content focus

  • working with incomplete information
  • reading context the system cannot yet perceive
  • deciding when action is justified without full consensus

Activity: Act or escalate?

Timing: 15 minutes inside lesson

Instructions

  • Students review six mini-scenarios
  • For each scenario they choose:
      • act now
      • gather more context
      • escalate
      • renegotiate scope
  • They must defend the choice using evidence, not instinct alone

Facilitator moves

  • Draw out differences between risk tolerance and recklessness
  • Ask what information would materially change the decision
  • Make students name the cost of delay

Student outputs

  • Decision rationale notes

Lesson 2.2: Artifact discipline as model fuel

Timing: 30 minutes

Content focus

  • documentation as system intelligence input
  • what counts as a meaningful artifact
  • minimum viable documentation versus bureaucratic overproduction

Activity: Artifact triage

Timing: 15 minutes inside lesson

Instructions

  • Students sort common work products into:
      • useful for future intelligence
      • useful only if structured better
      • noise
  • Then they define a personal artifact protocol for the capstone week

Facilitator moves

  • Push for structure that makes artifacts machine-readable and human-usable
  • Ask what future decision a document would support
  • Warn against volume without signal

Student outputs

  • personal artifact capture plan

Lesson 2.3: Failure as signal, not disaster

Timing: 35 minutes

Content focus

  • difference between bad result, bad decision, and missing capability
  • extracting roadmap signals from failure
  • factual writing versus blame writing

Activity: Failure Postmortem Workshop

Timing: 20 minutes inside lesson

Instructions

  • Students pick a real failure from current or past work
  • They draft three sections:
      • what happened
      • why it happened
      • what it signals
  • They highlight which part is evidence and which part is inference

Facilitator moves

  • Intervene when students moralize instead of analyze
  • Separate proximate cause from system cause
  • Ask "What capability would have had to exist to prevent this?"

Student outputs

  • Postmortem Draft 1

Lesson 2.4: Resource orchestration without formal authority

Timing: 40 minutes

Content focus

  • making resource requests legible
  • naming dependencies and opportunity costs
  • negotiating support across functions without escalating by default

Activity: Resource Request Roleplay

Timing: 20 minutes inside lesson

Instructions

  • Students write a resource ask tied to their DRI mandate
  • In trios:
      • one student plays DRI
      • one plays resource owner
      • one scores clarity and realism
  • The DRI must explain:
      • what outcome is at stake
      • what help is needed
      • by when
      • what tradeoff is being requested
      • what happens if support is unavailable

Facilitator moves

  • Shut down vague asks such as "need support" without scope or deadline
  • Reward explicit tradeoffs and fallback plans
  • Normalize renegotiation when requested resources are unrealistic

Student outputs

  • resource request draft
  • stakeholder/resource map draft

Session 2 close

Timing: 20 minutes

Execution sprint briefing

  • Students now run a 5-7 day DRI execution window
  • They must ship at least one concrete intervention:
      • proposal
      • prototype
      • process change
      • experiment
      • documented failed attempt with clear learning

Homework after Session 2

  • Maintain artifact log for 5-7 days
  • Update decision log after each significant move
  • Execute one concrete intervention
  • Write failure postmortem for one breakdown or failed assumption
  • Draft stakeholder/resource map

Artifacts due before Session 3

  • Artifact log
  • Decision log
  • Failure postmortem
  • Stakeholder/resource map
  • Evidence of shipped intervention

Module 3 / Session 3

Session 3: DRI in Practice - Real Project Ownership

Session goal: Students synthesize their mandate, execution, and learning into a final DRI Project Portfolio with a handoff-quality close.

Required prework

  • Submit all execution-week artifacts
  • Bring one unresolved signal or scope question
  • Prepare a 3-minute capstone briefing

Session length: 150 minutes live
Async equivalent: 90 minutes plus recorded presentation

Lesson 3.1: Scoping the unsolvable

Timing: 25 minutes

Content focus

  • choosing problems worth owning
  • identifying symptoms versus causes
  • recognizing prerequisite constraints
  • renegotiating scope intelligently

Activity: Scope correction clinic

Timing: 15 minutes inside lesson

Instructions

  • Students identify one place their original mandate was wrong:
      • too large
      • too shallow
      • wrong metric
      • wrong stakeholder
      • wrong time horizon
  • They rewrite the scope in one paragraph

Facilitator moves

  • Reward correction, not stubbornness
  • Ask students what evidence forced the scope change
  • Emphasize that good ownership includes changing the mandate when reality changes

Student outputs

  • scope renegotiation note

Lesson 3.2: The 90-day DRI arc

Timing: 30 minutes

Content focus

  • days 1-7 understand and map
  • days 8-30 execute and create signal
  • days 31-60 evaluate and adjust
  • days 61-90 resolve and hand off

Activity: Arc mapping

Timing: 15 minutes inside lesson

Instructions

  • Students place their current project on the 90-day arc
  • They define:
      • what stage they are actually in
      • what proof they have
      • what the next stage requires

Facilitator moves

  • Correct false certainty when students claim progress without evidence
  • Ask what would make the next phase impossible
  • Tie arc progression back to artifact production

Student outputs

  • 90-day arc map

Lesson 3.3: Capstone briefing and critique

Timing: 55 minutes

Activity: 3-minute DRI briefings

Instructions

  • Each student presents:
      • the problem
      • original scope
      • action taken
      • signal generated
      • what changed
      • what happens next

Peer critique protocol

  • 2 minutes peer questions after each briefing
  • questions must target:
      • scope clarity
      • evidence quality
      • decision logic
      • quality of signal extraction

Facilitator moves

  • Keep critiques operational, not motivational
  • Ask for evidence when students overstate impact
  • Highlight examples where a modest intervention produced strong learning

Student outputs

  • capstone briefing
  • peer critique notes

Lesson 3.4: The handoff

Timing: 20 minutes

Content focus

  • when to close, when to continue, when to transfer
  • what a useful handoff contains
  • how a DRI exits without dropping context

Activity: Handoff memo drafting

Timing: 15 minutes inside lesson

Instructions

  • Students draft a handoff memo containing:
      • current status
      • evidence gathered
      • decisions made
      • unresolved risks
      • recommended next owner or next move

Facilitator moves

  • Push for a memo another operator could genuinely use
  • Eliminate retrospective storytelling that hides uncertainty
  • Require explicit open questions

Student outputs

  • Handoff Memo Draft 1

Session 3 close

Timing: 20 minutes

Final reflection prompts

  • What part of your mandate did you actually own?
  • What did failure teach you that success would not have revealed?
  • What artifact from this course would be most useful to a future world model?

Final artifacts due

  • Final DRI Project Portfolio
  • Reflection memo

Async-only adaptation notes

  • Replace live lecture segments with short readings plus annotated examples
  • Replace breakout peer review with paired written critique
  • Replace capstone briefing with recorded 3-5 minute video or narrated slides
  • Require students to comment on at least two peers' portfolios using the rubric language
  • Keep deadlines tight so the mandate still has an execution rhythm

5. Assignments and artifacts

Assignment 1: Scope Audit

Purpose

  • Diagnose a real problem where ownership was unclear and identify what proper DRI scoping would require.

Submission requirements

  • 500-800 words or equivalent structured template
  • must identify:
      • nominal owner
      • actual decision-maker
      • breakdown points
      • proposed DRI scope

Success criteria

  • precise diagnosis of ambiguity
  • realistic scope correction
  • evidence that the student understands ownership as outcome-focused

Assignment 2: DRI Mandate

Purpose

  • Produce a well-scoped mandate that anchors the course.

Required sections

  • problem statement
  • why now
  • scope in
  • scope out
  • success metrics
  • time horizon
  • key stakeholders
  • resource needs
  • decision rights
  • escalation triggers
  • first 48-hour actions

Submission requirements

  • 400-700 words
  • written in plain operational language

Assignment 3: Stakeholder and Resource Map

Purpose

  • Make context, dependencies, and leverage visible.

Required sections

  • stakeholders by role in the problem
  • resource dependencies
  • likely friction points
  • who must be informed, consulted, or directly engaged
  • fallback plan if critical support is unavailable

Assignment 4: Artifact Log and Decision Log

Purpose

  • Build artifact discipline during execution.

Artifact log minimum

  • 8 entries across 5-7 days
  • each entry includes:
      • date/time
      • artifact type
      • brief description
      • why it matters
      • whether it improved context, decision quality, or coordination

Decision log minimum

  • 4 entries
  • each entry includes:
      • decision made
      • options considered
      • evidence used
      • uncertainty level
      • expected consequence

Assignment 5: Failure Postmortem

Purpose

  • Convert one failure or breakdown into system learning.

Required sections

  • what happened
  • why it happened
  • what it signals
  • what capability or process gap it reveals
  • what should change next

Submission requirements

  • 500-900 words
  • must clearly distinguish observation from interpretation

Assignment 6: Capstone DRI Project Portfolio

Purpose

  • Demonstrate end-to-end ownership on a real scoped problem.

Required contents

  • final DRI mandate
  • scope audit
  • stakeholder/resource map
  • artifact log
  • decision log
  • evidence of intervention
  • failure postmortem
  • outcome summary
  • scope renegotiation note, if applicable
  • handoff memo
  • 250-400 word reflection

Required artifacts checklist for completion

  • Scope Audit
  • DRI Mandate
  • Stakeholder and Resource Map
  • Artifact Log
  • Decision Log
  • Failure Postmortem
  • Evidence of one shipped intervention
  • Handoff Memo
  • Final Reflection

6. AI/LLM grading and assessment framework

Role of AI grading

LLM grading in this course is used for:

  • first-pass analytic scoring
  • consistency across structured artifacts
  • extraction of evidence-based feedback
  • flagging missing sections, weak specificity, and likely fabrication

LLM grading is not used for:

  • final judgment on ethically sensitive work
  • unverifiable claims of business impact
  • grading style, confidence, or verbal polish as a proxy for quality

Assessment architecture

Layer 1: Structural validation

  • check that required sections are present
  • check that timing, metrics, and boundaries are stated
  • check word-count or template completion

Layer 2: Heuristic scoring

  • score artifact quality against analytic rubrics
  • cite exact student evidence for each score
  • avoid rewarding vague but impressive-sounding language

Layer 3: Risk detection

  • flag contradictions
  • flag invented metrics without evidence
  • flag mandates that exceed learner authority without acknowledgement
  • flag missing causal reasoning in postmortems

Layer 4: Human escalation

  • send edge cases for review
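
The four layers above can be sketched as a simple dispatch function. This is a hypothetical illustration, not a real grading system: the section names, scorer/detector callables, and the "more than two missing sections" escalation threshold are all assumptions for the sketch.

```python
# Hypothetical sketch of the four-layer assessment flow; section names,
# callables, and thresholds are illustrative assumptions.

REQUIRED_SECTIONS = {
    "dri_mandate": ["problem statement", "scope in", "scope out",
                    "success metric", "time horizon"],
}

def validate_structure(artifact_type, text):
    """Layer 1: list required sections that never appear in the text."""
    return [s for s in REQUIRED_SECTIONS.get(artifact_type, [])
            if s not in text.lower()]

def assess(artifact_type, text, heuristic_scorer, risk_detector):
    """Run layers 1-3, then decide whether Layer 4 (human review) triggers."""
    missing = validate_structure(artifact_type, text)   # Layer 1
    scores = heuristic_scorer(text)                     # Layer 2 (e.g. an LLM call)
    risks = risk_detector(text)                         # Layer 3
    return {
        "missing_elements": missing,
        "dimension_scores": scores,
        "risks": risks,
        "escalate_to_human": bool(risks) or len(missing) > 2,  # Layer 4
    }
```

In practice the heuristic scorer would be the rubric-constrained LLM call and the risk detector a mix of rules and model checks; the point is only that structural validation and risk flags gate escalation independently of the scores.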

Suggested weighting

  • Scope Audit: 10%
  • DRI Mandate: 20%
  • Stakeholder and Resource Map: 10%
  • Artifact and Decision Logs: 15%
  • Failure Postmortem: 20%
  • Final DRI Project Portfolio and Handoff: 25%
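
The weighting above implies a single weighted course score on the 4-point rubric scale. A minimal sketch, with illustrative shorthand keys for the six artifacts:

```python
# Weighted course score implied by the suggested weighting above.
# Artifact keys are illustrative shorthand, not canonical identifiers.

WEIGHTS = {
    "scope_audit": 0.10,
    "dri_mandate": 0.20,
    "stakeholder_resource_map": 0.10,
    "artifact_decision_logs": 0.15,
    "failure_postmortem": 0.20,
    "final_portfolio_handoff": 0.25,
}

def weighted_score(scores):
    """scores maps artifact key -> rubric score (1-4); returns the weighted mean."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
```

Because the weights sum to 1.0, a student scoring 3 on every artifact lands at exactly 3.0, which keeps the completion thresholds in section 7 easy to reason about.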

Global heuristics for all AI evaluation

The LLM should reward:

  • specificity over abstraction
  • bounded scope over broad aspiration
  • measurable signals over generic goals
  • evidence-backed reasoning over narrative confidence
  • explicit tradeoffs over hidden assumptions
  • honest reporting of failure over defensive framing

The LLM should penalize:

  • vague verbs such as "support," "align," or "improve" without defined outcomes
  • no clear scope-out section
  • metrics that are not observable within the time horizon
  • claims of ownership that depend entirely on formal authority the student does not have
  • postmortems that assign blame without identifying system conditions
  • artifact logs that are mere calendars rather than decision-support records

Concrete assessment heuristics by artifact

DRI Mandate heuristics

  • Problem quality: Is the problem concrete, current, and worth acting on now?
  • Scope boundaries: Does the student say what is not included?
  • Metric quality: Are success indicators observable, time-bound, and plausibly influenced by the student?
  • Resource realism: Are required inputs named and feasible?
  • Decision logic: Are escalation triggers or limits specified?

Automatic red flags

  • no time horizon
  • no measurable signal
  • mandate sounds like a department charter
  • student confuses coordination with ownership

Scope Audit heuristics

  • distinguishes nominal versus actual ownership
  • identifies real breakdown points instead of generic dysfunction
  • proposes a corrected scope that would reduce latency
  • shows understanding of information gaps

Stakeholder and Resource Map heuristics

  • key actors are named by function or role
  • dependencies are explicit
  • opportunity costs or tradeoffs are acknowledged
  • fallback path exists if support is denied

Artifact and Decision Log heuristics

  • entries capture decisions, context shifts, or learnings
  • entries are not repetitive filler
  • student shows disciplined observation over the execution window
  • decisions include options and uncertainty, not just outcomes

Failure Postmortem heuristics

  • separates facts from interpretation
  • identifies multiple causal layers where appropriate
  • names a system signal or capability gap
  • proposes a reasonable implication for future action

Automatic red flags

  • purely emotional narrative
  • pure blame assignment
  • no systemic learning
  • recommendations unrelated to the diagnosed cause

Final Portfolio and Handoff heuristics

  • portfolio tells a coherent end-to-end ownership story
  • student shows action, not just planning
  • evidence of intervention is present
  • scope changes are justified by reality
  • handoff is usable by another operator

Confidence and escalation protocol for LLMs

The evaluator must provide:

  • a score
  • a confidence level: high, medium, or low
  • a list of evidence citations from the submission
  • a list of uncertainties

Escalate to human review if:

  • confidence is low
  • the submission contains safety, legal, or HR-sensitive issues
  • the student claims major impact without any evidence
  • scores across rubric dimensions differ by more than 2 points on a 4-point scale without explanation
  • suspected AI-generated filler overwhelms concrete content
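
These escalation rules can be expressed as a predicate over an evaluation record. The `confidence`, `dimension_scores`, and `evidence` fields match the schema that follows; `sensitive_content`, `claims_major_impact`, and `spread_explained` are assumed flags that an upstream detector would have to set, not fields any real evaluator is known to emit.

```python
# Sketch of the escalation protocol. The three boolean flags are
# assumptions for this sketch; only confidence, dimension_scores, and
# evidence come from the output schema.

def needs_human_review(evaluation):
    """Return True if any escalation condition in the protocol fires."""
    scores = list(evaluation.get("dimension_scores", {}).values())
    spread = max(scores) - min(scores) if scores else 0
    return (
        evaluation.get("confidence") == "low"
        or evaluation.get("sensitive_content", False)          # safety/legal/HR
        or (evaluation.get("claims_major_impact", False)
            and not evaluation.get("evidence"))                # impact without proof
        or (spread > 2                                         # >2 points on 4-pt scale
            and not evaluation.get("spread_explained", False))
    )
```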

Recommended machine-readable output schema

```json
{
  "artifact_type": "dri_mandate",
  "overall_score": 0,
  "overall_rating": "does_not_meet|approaching|meets|exceeds",
  "confidence": "high|medium|low",
  "dimension_scores": {
    "scope_clarity": 0,
    "metric_quality": 0,
    "resource_realism": 0,
    "decision_logic": 0,
    "ownership_strength": 0
  },
  "evidence": [
    {
      "dimension": "scope_clarity",
      "citation": "quoted or paraphrased student evidence"
    }
  ],
  "strengths": [],
  "risks": [],
  "missing_elements": [],
  "feedback_to_student": "",
  "escalate_to_human": false,
  "escalation_reason": ""
}
```
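
Downstream tooling should reject evaluator responses that drop fields from this schema. A minimal structural check (an assumption for this spec, not a formal JSON Schema validator) might look like:

```python
import json

# Minimal structural check that an evaluator response carries every
# field in the output schema above. Illustrative, not a full validator.

REQUIRED_FIELDS = {
    "artifact_type", "overall_score", "overall_rating", "confidence",
    "dimension_scores", "evidence", "strengths", "risks",
    "missing_elements", "feedback_to_student",
    "escalate_to_human", "escalation_reason",
}

def parse_evaluation(raw):
    """Parse evaluator JSON; reject responses with missing or invalid fields."""
    record = json.loads(raw)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"evaluator output missing fields: {sorted(missing)}")
    if record["confidence"] not in {"high", "medium", "low"}:
        raise ValueError("confidence must be high, medium, or low")
    return record
```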

## 7. Rubrics, scoring criteria, and evaluator prompt guidance

### Master scoring scale

Use a 4-point scale for all dimensions:

- **4 - Exceeds:** precise, realistic, evidence-based, and immediately usable
- **3 - Meets:** solid and workable with minor gaps
- **2 - Approaching:** partially correct but materially underspecified or inconsistent
- **1 - Does Not Meet:** vague, missing, unrealistic, or not operational

### Rubric: DRI Mandate

| Dimension | 4 - Exceeds | 3 - Meets | 2 - Approaching | 1 - Does Not Meet |
| --- | --- | --- | --- | --- |
| Scope clarity | Clear problem, explicit boundaries, tangible finish line | Problem and boundary mostly clear | Problem is broad or boundaries incomplete | Scope is vague or unlimited |
| Success metrics | Observable, time-bound, aligned to problem | Metrics mostly measurable | Metrics partially measurable or output-only | No real metric or only vague aspiration |
| Ownership strength | Student can act meaningfully without waiting on hierarchy | Ownership mostly plausible | Ownership depends on others too heavily | Student is not actually positioned to own this |
| Resource realism | Dependencies, asks, and tradeoffs are explicit | Major resources named | Resource needs underdeveloped | No realistic plan for resourcing |
| Decision logic | Escalation triggers and risks are clear | Some decision logic present | Weak criteria for action or escalation | No evidence of judgment boundaries |

### Rubric: Scope Audit

| Dimension | 4 - Exceeds | 3 - Meets | 2 - Approaching | 1 - Does Not Meet |
| --- | --- | --- | --- | --- |
| Diagnosis of ambiguity | Clearly identifies where ownership broke down | Mostly clear diagnosis | Some ambiguity identified but superficial | Problem remains generic |
| System understanding | Names information and coordination failures accurately | Understands main system gaps | Partial understanding | No meaningful system analysis |
| Proposed DRI redesign | Corrected scope would materially reduce latency | Scope likely improves ownership | Scope still too broad or thin | No workable redesign |

### Rubric: Failure Postmortem

| Dimension | 4 - Exceeds | 3 - Meets | 2 - Approaching | 1 - Does Not Meet |
| --- | --- | --- | --- | --- |
| Factual clarity | Event is described concretely and neutrally | Mostly factual with minor interpretation bleed | Mixed fact and opinion | Mostly opinion or blame |
| Causal reasoning | Multiple meaningful causes identified | Main cause identified credibly | Cause discussion is thin | Cause is unclear or unsupported |
| Signal extraction | Clear system implication or capability gap | Useful signal identified | Signal is generic | No signal beyond "try harder" |
| Forward implication | Next step logically follows findings | Reasonable next step | Weak or partial next step | Recommendation disconnected from analysis |

### Rubric: Final DRI Portfolio and Handoff

| Dimension | 4 - Exceeds | 3 - Meets | 2 - Approaching | 1 - Does Not Meet |
| --- | --- | --- | --- | --- |
| End-to-end coherence | Portfolio shows a clear arc from mandate to outcome | Arc mostly coherent | Arc is fragmented | No coherent ownership story |
| Evidence of execution | Concrete action and artifacts are present | Some action evidence present | Limited proof of execution | Mostly planning, little action |
| Adaptation and scope management | Student adjusted intelligently to reality | Some adaptation shown | Adaptation is weak or reactive | No sign of learning from reality |
| Handoff quality | Another operator could continue immediately | Mostly usable handoff | Handoff missing key context | Handoff not operational |

### Completion thresholds

- **Course completion:** all required artifacts submitted plus weighted score of 2.5 or above
- **Strong completion:** weighted score of 3.25 or above with no individual artifact below 2
- **Needs revision:** any critical artifact scored below 2, especially DRI Mandate or Failure Postmortem
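
A sketch of these thresholds as a status function. One assumption is made explicit: the case where no critical artifact falls below 2 but the weighted score is under 2.5 is not defined above, so it is mapped to "needs_revision" here.

```python
# Completion bands from the thresholds above. The undefined low-score
# case is mapped to "needs_revision" as a stated assumption.

CRITICAL = ("dri_mandate", "failure_postmortem")

def completion_status(weighted, artifact_scores):
    """Map a weighted score and per-artifact rubric scores (1-4) to a band."""
    if any(artifact_scores.get(a, 0) < 2 for a in CRITICAL):
        return "needs_revision"
    if weighted >= 3.25 and min(artifact_scores.values()) >= 2:
        return "strong_completion"
    if weighted >= 2.5:
        return "completion"
    return "needs_revision"
```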

### Evaluator prompt guidance

#### System prompt for evaluator LLM

```text
You are evaluating student work for a course on DRI and ownership. Your job is to grade only what is present in the submission. Reward specificity, bounded scope, measurable outcomes, explicit tradeoffs, and evidence-backed reasoning. Penalize vague corporate language, unsupported claims, overbroad ownership, missing boundaries, and blame-oriented postmortems.

Do not infer competence from polish. Do not reward confidence unless the content is operationally sound. Cite evidence from the student's submission for every major score. If evidence is weak, say so. If the case is ambiguous or sensitive, recommend human review.
```

#### User prompt template for evaluator LLM

```text
Evaluate the following student submission for Course 11: DRI & Ownership.

Artifact type: {{artifact_type}}
Rubric: {{rubric_text}}

Student submission:
{{student_submission}}

Return:
1. Overall rating on a 4-point scale
2. Dimension-by-dimension scores
3. Evidence citations from the submission
4. Missing elements
5. Two strengths
6. Two improvement priorities
7. Feedback to student in direct, operational language
8. Confidence level
9. Whether human review is needed and why
```

### Prompt constraints for evaluator quality

- Never score a dimension above 3 if required evidence is absent
- Never infer business impact from intent
- If metrics are named but not measurable within the time horizon, cap metric quality at 2
- If the postmortem lacks a system-level implication, cap signal extraction at 2
- If the handoff lacks unresolved risks or next steps, cap handoff quality at 2
- If the student explicitly narrows scope based on new evidence, treat that as a strength, not a failure
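The hard caps above can be enforced in post-processing rather than trusted to the model. A minimal sketch, assuming the evaluator LLM also returns boolean flags for each condition (the flag names and dimension keys here are illustrative):

```python
def apply_caps(scores, flags):
    """Apply the evaluator-quality caps to dimension scores (4-point scale).

    scores: dict of rubric dimension -> raw score from the evaluator.
    flags:  dict of booleans the evaluator emitted for each cap condition.
    """
    capped = dict(scores)
    if flags.get("required_evidence_absent"):
        # Never score any dimension above 3 if required evidence is absent
        capped = {k: min(v, 3) for k, v in capped.items()}
    if flags.get("metric_not_measurable_in_horizon"):
        capped["metric quality"] = min(capped.get("metric quality", 4), 2)
    if flags.get("no_system_level_implication"):
        capped["signal extraction"] = min(capped.get("signal extraction", 4), 2)
    if flags.get("handoff_missing_risks_or_next_steps"):
        capped["handoff quality"] = min(capped.get("handoff quality", 4), 2)
    return capped
```

Note that the last constraint (rewarding explicit scope narrowing) is a scoring instruction for the model, not a numeric cap, so it stays in the prompt rather than in this function.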

## 8. Feedback strategy: what strong/average/weak responses look like and how an LLM should respond

### Feedback principles

- feedback should be operational, not inspirational
- feedback should name what was strong, what is risky, and what to do next
- feedback should distinguish between content weakness and scope mismatch
- feedback should avoid generic praise
- feedback should preserve student agency while being explicit about deficiencies

### What strong responses look like

Strong responses usually include:

- a sharply defined problem
- clear scope-out statements
- at least one metric visible inside the chosen time horizon
- honest acknowledgment of constraints
- concrete evidence of action
- postmortem language that separates fact from inference
- a handoff another person could use immediately

**How the LLM should respond to strong work**
- validate the specific behaviors that made the work strong
- point to one or two places where stronger leverage or broader signal extraction is possible
- avoid diluting useful rigor with generic compliments

**Example response pattern**
- "Your mandate is strong because the scope boundary is explicit and the 14-day signal is observable."
- "Your postmortem identifies both the proximate cause and the missing capability."
- "Next improvement: make the opportunity cost in your resource request even more explicit."

### What average responses look like

Average responses usually include:

- a real problem, but some scope blur
- outcome language mixed with activity language
- partial metrics
- artifacts that exist but are under-structured
- some evidence of action, but weak causality or tradeoff analysis

**How the LLM should respond to average work**
- identify the exact missing operational elements
- suggest targeted revisions rather than broad rewrites
- show the student how to turn partial thinking into usable execution logic

**Example response pattern**
- "You have a real problem and a plausible time horizon, but the finish line is still ambiguous."
- "Add an explicit scope-out section and rewrite your metric so a third party can verify it."
- "Your postmortem names a failure but not yet the capability gap it reveals."

### What weak responses look like

Weak responses usually include:

- abstract ownership claims
- no real time horizon
- no measurable signal
- artifact logs that are generic status updates
- postmortems built from blame or personal frustration
- handoffs that summarize effort rather than operational state

**How the LLM should respond to weak work**
- be direct and specific
- identify the minimum changes required for the submission to become workable
- do not pretend the work is stronger than it is
- recommend human review if the weakness may reflect misunderstanding of the assignment itself

**Example response pattern**
- "This submission does not yet define a DRI mandate because the problem, boundary, and success conditions are not operational."
- "The current metric, 'improve collaboration,' cannot be observed or verified."
- "Revise by naming one problem slice, one measurable signal, and one explicit out-of-scope area."

### Feedback dos for LLMs

- quote or paraphrase evidence
- use direct language
- prioritize the highest-leverage fix
- point to missing structure when relevant
- reward honest scope correction

### Feedback don'ts for LLMs

- do not congratulate students for effort alone
- do not hallucinate impact that is not documented
- do not hide a low score behind soft language
- do not demand enterprise-scale evidence from a small-scope student project
- do not penalize a student for reporting a failed intervention if the failure produced clear signal

### Revision guidance policy

If a student scores:

- **3.25 or above overall:** optional revision only
- **2.5 to 3.24 overall:** targeted revision recommended on the lowest-scoring artifact
- **below 2.5 overall:** revision required before completion
- **below 2 on DRI Mandate or Failure Postmortem:** mandatory resubmission of that artifact
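The routing logic above is a straightforward cascade. An illustrative sketch (return strings and parameter names are assumptions, not spec language):

```python
def revision_guidance(overall, mandate_score, postmortem_score):
    """Return the revision requirement implied by the policy above."""
    # Critical-artifact failures take precedence over the overall score
    if mandate_score < 2 or postmortem_score < 2:
        return "mandatory resubmission of that artifact"
    if overall >= 3.25:
        return "optional revision only"
    if overall >= 2.5:
        return "targeted revision recommended on lowest-scoring artifact"
    return "revision required before completion"
```

Checking the critical artifacts first matters: a student with a strong overall score but a failed DRI Mandate still resubmits.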

### Final instructor note

This course should feel demanding because DRI work is demanding. The standard is not polished language or managerial theater. The standard is whether a student can take a real problem, define ownership, move it forward without hiding behind hierarchy, and leave behind artifacts that make the system smarter.