Understand-Research 150 Protocol
Goal
Perform deep, evidence-based research by mapping both core scope (100%) and boundary scope (50%), while maintaining a structured session log that captures what was found and what to explore next.
Core principles
- Evidence-based reasoning: Observe → Hypothesize → Predict → Test → Conclude.
- Scope150: Fully cover the core (what is directly asked) and then cover the boundary (adjacent or dependent areas).
- Traceability: Every key finding is recorded in a research log.
- Project search protocol: When locating code, follow an ordered search: interface → domain → patterns → usage.
- Full-file + ecosystem reading: Prefer whole files and surrounding context, not fragments; map dependencies, patterns, and interactions.
- Document all findings: Research is incomplete without recorded evidence.
Investigation Protocol (mandatory)
Never stop at the first answer. Dig until you reach bedrock truth.
Levels
- Surface Observation (never stop here)
  - Read one file, see one pattern.
  - Treat as a starting point, not a conclusion.
- Cross-Reference Validation (minimum required)
  - Find 3+ independent sources confirming the same fact.
  - Check production code, tests, models, and docs.
- Contradiction Hunting (always do this)
  - Actively search for evidence that disproves the hypothesis.
- Structural Logic Proof (gold standard)
  - Build a causal chain: X because Y because Z, each with evidence.
  - Use impossibility tests: “If A were true, B would be impossible, but B exists, therefore not A.”
Exhaustive Investigation Checklist
- Data structure definition (models/entities)
- API contract (request/response models)
- Production usage (real call sites, not tests)
- Test evidence (mocks, edge cases, assertions)
- Multiple implementations (find 3+ usage patterns)
- Logical impossibility test (what would disprove the hypothesis?)
Red Flags (investigation incomplete)
- “probably / likely / should / usually” without verification
- “based on the name” or “seems like”
- only one usage checked
- no contradiction search performed
Iron Logic Test (must answer with concrete evidence)
- What facts support this?
- What would disprove this?
- Did you search for contradictions?
- Can you prove causality?
- Would a skeptical engineer accept this evidence trail?
Cognitive Forcing Phrases
- “I see X, but I will verify with 3 independent sources.”
- “This suggests Y, but what would disprove Y?”
- “Found 1 example, need 2 more to confirm the pattern.”
- “Seems obvious, but can I prove causality?”
Investigation Workflow (mandatory)
- Form an initial hypothesis.
- Find evidence source #1 (model/class).
- Find evidence source #2 (production usage).
- Find evidence source #3 (tests or docs).
- Search for contradictions.
- Build a logical proof with evidence at each step.
- Test against skepticism; if not convincing, return to step 2.
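The workflow above can be sketched as a tiny data structure. Everything here is illustrative: the `Evidence` and `Hypothesis` names and the three-source threshold encode this protocol's rules, not any real library.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    kind: str    # e.g. "model", "production_usage", "tests_or_docs"
    source: str  # file path, command, or URL

@dataclass
class Hypothesis:
    statement: str
    evidence: list = field(default_factory=list)
    contradiction_search_done: bool = False
    contradictions_found: list = field(default_factory=list)

    @property
    def status(self) -> str:
        # Any contradiction rejects the hypothesis outright.
        if self.contradictions_found:
            return "rejected"
        # Confirmed only with 3+ sources AND an explicit contradiction search.
        if len(self.evidence) >= 3 and self.contradiction_search_done:
            return "confirmed"
        # Otherwise: return to step 2 and gather more evidence.
        return "pending"
```

A hypothesis with three sources but no contradiction search stays `pending`, which mirrors the "always do this" rule for contradiction hunting.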
Anti-patterns to avoid (research rigor)
- Documentation-only implementation
  - Read docs for context, then verify in code. Code wins on conflict.
- Boundary scope blindness
  - Always identify consumers/callers, configuration, and dependencies.
- Assumption cascade
  - Detect assumption phrases, stop, and verify with evidence.
- Test data as reality
  - Tests often simplify; verify behavior in production code.
Verification hierarchy (trust order)
- Executable/production code (highest truth)
- API response/request models
- Multiple production usages
- Integration tests
- Unit test mocks
- Documentation (lowest truth; may be outdated)
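A minimal sketch of the trust order, assuming an illustrative `Trust` enum (the names are mine, not a standard API). Higher value means higher truth, so the higher-trust source wins on conflict:

```python
from enum import IntEnum

class Trust(IntEnum):
    DOCUMENTATION = 1       # lowest truth; may be outdated
    UNIT_TEST_MOCKS = 2
    INTEGRATION_TESTS = 3
    PRODUCTION_USAGES = 4
    API_MODELS = 5          # API response/request models
    PRODUCTION_CODE = 6     # highest truth

def stronger(a: Trust, b: Trust) -> Trust:
    """When two sources conflict, prefer the higher-trust one."""
    return max(a, b)
```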
Assumption indicators (trigger verification)
- “probably”, “likely”, “should”, “typically”, “usually”
- “based on the name”, “seems like”, “appears to”
- “I assume”, “I expect”, “this suggests”
Replacement pattern: detect hedge → identify missing evidence → observe → state fact with reference.
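The first step of the replacement pattern (detect hedge) can be automated with a simple scan. A hedged sketch: the phrase list comes from the indicators above, while the function name is made up for illustration.

```python
import re

# Assumption indicators from the list above; extend for your project.
HEDGES = [
    "probably", "likely", "should", "typically", "usually",
    "based on the name", "seems like", "appears to",
    "i assume", "i expect", "this suggests",
]
# Word boundaries prevent false hits inside words like "shoulder".
_PATTERN = re.compile(
    r"\b(?:" + "|".join(re.escape(h) for h in HEDGES) + r")\b",
    re.IGNORECASE,
)

def find_hedges(text: str) -> list:
    """Return every assumption indicator found in a draft finding."""
    return [m.group(0) for m in _PATTERN.finditer(text)]
```

Any non-empty result means the finding needs evidence before it can be logged as fact.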
Systematic exploration framework (unknown codebases)
- Context layer: environment, build system, configuration.
- Structure layer: directory layout, module boundaries.
- Interface layer: endpoints, public APIs, data models.
- Implementation layer: execution paths and conventions.
Avoid jumping directly to implementation without context/structure/interface.
Communication protocol (complex tasks)
For any investigation, design decision, or multi-step research:
- Declare the investigation strategy before acting:
  - Frameworks you will apply (Scope150, Evidence-Based Reasoning, Cross-Reference Validation, Anti-Pattern checks).
  - Concrete steps and expected evidence sources.
- For simple, single-step actions, skip the declaration but still follow evidence-based reasoning.
Session research log (mandatory)
Create or reuse the active session log:
.sessions/SESSION_[date]-[name].md
If .sessions/ does not exist, create it. If no session log exists yet, create one now using the user-defined session_name (or propose date + short descriptive name). The session log is the single working memory for investigations, progress, and decisions.
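Creating or reusing the log can be sketched as follows. The SESSION_[date]-[name].md naming convention comes from the protocol itself; the function name and the initial file content are assumptions for illustration.

```python
from datetime import date
from pathlib import Path

def ensure_session_log(root: Path, session_name: str) -> Path:
    """Create .sessions/ and the session log if missing; return the log path."""
    sessions = root / ".sessions"
    sessions.mkdir(exist_ok=True)  # create .sessions/ if it does not exist
    log = sessions / f"SESSION_{date.today().isoformat()}-{session_name}.md"
    if not log.exists():
        # Seed the single working memory with the Investigations section.
        log.write_text("## Investigations\n", encoding="utf-8")
    return log
```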
Investigation structure (inside the session log)
```markdown
## Investigations

### Investigation: <short topic>

#### Core question
- <what we are trying to answer>

#### Scope
- Core (100%):
  - ...
- Boundary (50%):
  - ...

#### Findings
- <fact> (source: file path / command / web)
  - Subfinding

#### Hypotheses
- H1: ...
  - Prediction: ...
  - Test: ...
  - Status: pending/confirmed/rejected

#### Next branches
- ...
- ...
```
Workflow
- Define the core question in the log.
- List scope: core (100%) and boundary (50%).
- Start observations (search/read/run commands). Use the project search protocol:
  - Interface: routes, UI text, public methods, endpoints, schemas.
  - Domain: model/entity names, i18n keys, enums.
  - Patterns: hooks, API clients, controllers, services.
  - Usage: imports, call sites, references.
- Record every solid finding in the log.
- Form hypotheses based on evidence; record predictions and tests.
- Review the log, then decide the next branch; update scope if it expands.
- Repeat until the question is answered or all branches are exhausted.
- Close out: write a concise summary in the log and in the response, and report completion status (see Output expectations).
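The project search protocol's ordered layers can be sketched as a small scanner. All regexes here are placeholder heuristics invented for illustration, not real project conventions; tune them per codebase.

```python
import re
from pathlib import Path

# One placeholder pattern per layer, in protocol order:
# interface -> domain -> patterns -> usage.
LAYER_PATTERNS = [
    ("interface", re.compile(r"@app\.route|def api_|Schema")),
    ("domain",    re.compile(r"class \w+(Model|Entity)|Enum")),
    ("patterns",  re.compile(r"Service|Controller|Client")),
    ("usage",     re.compile(r"^import |^from ", re.M)),
]

def project_search(root: Path) -> dict:
    """Scan every .py file once; report which files match each layer."""
    results = {layer: [] for layer, _ in LAYER_PATTERNS}
    for path in sorted(root.rglob("*.py")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for layer, pattern in LAYER_PATTERNS:
            if pattern.search(text):
                results[layer].append(path.name)
    return results
```

Reading the results interface-first reproduces the ordered search; each hit is a candidate finding to record in the log with its file path as the source.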
Using web search
- If the investigation needs up-to-date facts or external verification, use the web.run or web search tool.
- Capture external findings in the log with a clear source note.
Output expectations
- Provide a short summary of findings.
- Provide the path to the session log file.
- Ask for confirmation before large changes based on the research.
- Explicitly report completion status using technical criteria:
  - "Complete" only if all branches in the log are addressed, all hypotheses are confirmed/rejected, and no open scope items remain.
  - If not complete, list remaining branches or unknowns from the log.
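The completion criteria can be sketched as a check over the log's open items. The function name, field names, and list shapes are assumptions; only the rule itself comes from the criteria above.

```python
def completion_status(open_branches, hypotheses, open_scope_items):
    """Return ("Complete", []) only when nothing in the log remains open.

    hypotheses: list of dicts like {"id": "H1", "status": "confirmed"}.
    A hypothesis counts as resolved only if confirmed or rejected.
    """
    unresolved = [
        h["id"] for h in hypotheses
        if h["status"] not in ("confirmed", "rejected")
    ]
    remaining = list(open_branches) + unresolved + list(open_scope_items)
    if remaining:
        # Not complete: report exactly what is still open.
        return ("Incomplete", remaining)
    return ("Complete", [])
```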