The closed loop for decision intelligence

Three stages. One continuous cycle. Most people only do the first — and wonder why they keep making the same mistakes.

1. Capture the decision at the moment it's made

The most valuable thing you can do is record decisions while memory is fresh. Reflect OS prompts you for the fields that matter: what you decided, why, what you considered and rejected, what risks you identified, your confidence level, and what else was happening at the time.

This isn't a form — it's a structured conversation. The situational context field alone is worth it. "What else was happening when you made this decision?" is the question that unlocks honest self-assessment later.

What you capture
Decision title & description
Rationale & alternatives considered
Confidence level (0–100%)
Risk assessment
Situational context
Stakeholders & IC votes
Evidence & linked assets
Projected outcome
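The captured fields can be pictured as a single structured record. Here is a minimal sketch in Python with hypothetical field names (Reflect OS's actual schema is not published here):

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    # Hypothetical schema mirroring the captured fields above;
    # not Reflect OS's actual data model.
    title: str
    description: str
    rationale: str
    alternatives: list[str]        # options considered and rejected
    risks: list[str]               # risks identified at decision time
    confidence: int                # stated confidence, 0-100
    context: str                   # what else was happening at the time
    stakeholders: list[str] = field(default_factory=list)
    ic_votes: dict[str, bool] = field(default_factory=dict)
    evidence: list[str] = field(default_factory=list)  # linked assets
    projected_outcome: str = ""
```

The point of a fixed record like this is that every decision is captured with the same fields, which is what makes the pattern analysis in stage 3 possible.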

2. Outcome checkpoints at the right moment

Most decisions aren't reviewed because nobody remembers to. Reflect OS surfaces your decisions when outcomes fall due: at 30, 90, or 180 days, at 12 months, or on custom horizons aligned to how you actually measure things.
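The scheduling itself is simple to sketch: given the date a decision was made, compute a review date for each horizon and surface the ones that have come due. A minimal illustration in Python, with hypothetical function names (the horizon values mirror the defaults mentioned above):

```python
from datetime import date, timedelta

# Default review horizons: 30, 90, 180 days and 12 months (illustrative).
DEFAULT_HORIZONS_DAYS = (30, 90, 180, 365)

def checkpoint_dates(decided_on: date,
                     horizons=DEFAULT_HORIZONS_DAYS) -> list[date]:
    """Dates on which a decision's outcome review falls due."""
    return [decided_on + timedelta(days=h) for h in sorted(horizons)]

def due_checkpoints(decided_on: date, today: date,
                    horizons=DEFAULT_HORIZONS_DAYS) -> list[date]:
    """Checkpoints that have already come due and should be surfaced."""
    return [d for d in checkpoint_dates(decided_on, horizons) if d <= today]
```

Custom horizons are just a different tuple passed in, which keeps review cadence a per-decision choice rather than a global setting.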

At each checkpoint, you compare what you expected with what happened. Not to assign blame — to understand the gap between your model of the world and how it actually works. That gap is where all the learning is.

Decisions stay in an "unrealised" state when outcomes are partial or on a longer horizon. Reflect OS handles the ambiguity of real-world timelines without forcing premature closure.

3. Patterns across your full decision history

Individual decisions are useful. A library of decisions is where the intelligence lives. Over time, Reflect OS identifies patterns in your confidence calibration, recurring risk blind spots, decision quality by category or context, and outcome trends.

The first meaningful pattern the app surfaces about you should feel uncomfortable — because it's accurate. That's the signal that it's working.

Calibration curve

How your stated confidence correlates with actual outcomes

Bias patterns

Overconfidence, recency bias, and sector-specific blind spots

Quality score

Rolling outcome quality trend across categories and timeframes
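As an illustration of what the calibration curve involves, here is a minimal sketch (not Reflect OS's actual analysis): bucket past decisions by stated confidence, then compare each bucket's average stated confidence with its observed success rate.

```python
from collections import defaultdict

def calibration_curve(decisions, bucket_width=20):
    """decisions: iterable of (confidence_pct, succeeded) pairs.
    Returns {bucket_start: (mean_stated_confidence, observed_success_pct)}."""
    buckets = defaultdict(list)
    for confidence, succeeded in decisions:
        # 100% falls into the top bucket rather than a bucket of its own.
        start = min(confidence // bucket_width * bucket_width,
                    100 - bucket_width)
        buckets[start].append((confidence, succeeded))
    curve = {}
    for start, items in sorted(buckets.items()):
        mean_conf = sum(c for c, _ in items) / len(items)
        success_pct = 100 * sum(1 for _, s in items if s) / len(items)
        curve[start] = (round(mean_conf, 1), round(success_pct, 1))
    return curve
```

For a well-calibrated decision-maker, the two numbers in each bucket sit close together; a bucket where stated confidence runs well ahead of the observed success rate is the overconfidence signal the curve makes visible.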

Ready to start the loop?

Your decision record starts with the first one you log. Start today.