Multi-Loop Document Supervision with LangGraph

Single-pass document generation produces uneven quality across multiple dimensions: literature coverage, structure, detail, and factuality. A single supervisor cannot effectively address all these dimensions because they require different approaches: expansion versus editing versus verification.

This pattern implements five specialized supervision loops executed sequentially, each targeting a specific quality dimension with a model tier matched to the task.

The Core Insight

Different quality dimensions require different supervision strategies:

Dimension           | Strategy                       | Model Tier
Literature coverage | Research expansion             | Opus (deep analysis)
Structure           | Two-agent analysis and editing | Opus + Sonnet
Detail              | Parallel section refinement    | Sonnet + Opus (holistic)
Factuality          | Fast verification              | Haiku

The key innovation is matching model capabilities to task requirements. Opus excels at analysis and reasoning. Sonnet balances quality with speed for execution. Haiku provides cost-effective verification at scale.

The Five Loops

graph TD
    A[START] --> B[Loop 1: Theoretical Depth]
    B --> C[Loop 2: Literature Base]
    C --> D[Loop 3: Structure]
    D --> E[Loop 4: Section Editing]
    E --> F[Loop 4.5: Cohesion Check]
    F -->|major issues| D
    F -->|pass| G[Loop 5: Fact-Check]
    G --> H[Finalize]
    H --> I[END]

Loop 1: Theoretical Depth (Opus)

Identifies theoretical gaps using Opus with extended thinking. Triggers targeted research expansion on identified issues. See Iterative Document Supervision for implementation.

Loop 2: Literature Base (Opus)

Identifies missing perspectives or foundational works. Runs a mini-review subgraph to find relevant papers and integrate them into the document.
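Loop 2 should not re-surface papers Loop 1 already found. One plausible shape for that handoff, assuming discovered papers carry stable IDs in state (the `discovered_papers` key and `id` field are hypothetical):

```python
def exclude_known_papers(candidates: list[dict], state: dict) -> list[dict]:
    """Drop candidate papers already discovered by an earlier loop."""
    known = {p["id"] for p in state.get("discovered_papers", [])}
    return [p for p in candidates if p["id"] not in known]
```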

Loop 3: Structure (Opus + Sonnet)

Two-agent pattern separating analysis from execution:

async def run_loop3_structure(state: dict) -> dict:
    """Two-agent structural analysis and editing."""
    current_review = state["current_review"]
 
    # Agent 1: Structural analysis with Opus (deep reasoning)
    analyzer_llm = get_llm(ModelTier.OPUS, thinking_budget=8000)
    analysis = await analyze_structure(current_review, analyzer_llm)
 
    if not analysis.issues:
        return {"loop3_complete": True, "loop3_skipped": True}
 
    # Agent 2: Generate edit manifest with Sonnet (execution)
    editor_llm = get_llm(ModelTier.SONNET, max_tokens=16000)
    edit_manifest = await generate_structural_edits(
        current_review, analysis.issues, editor_llm
    )
 
    updated_review = apply_structural_edits(current_review, edit_manifest)
 
    return {
        "current_review": updated_review,
        "structural_edits": edit_manifest,
        "loop3_complete": True,
    }
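The edit-manifest format is left open above. As one illustration, a manifest of section moves over markdown `##` headings could be applied like this (the `{"op": "move", ...}` shape is an assumption, not the pattern's actual schema):

```python
import re

def apply_structural_edits(review: str, manifest: list[dict]) -> str:
    """Apply 'move' edits that reorder top-level '## ' sections."""
    # Split so each element starts with its heading; text before the
    # first heading stays as element 0.
    sections = [s for s in re.split(r"(?m)^(?=## )", review) if s]

    def title(section: str) -> str:
        return section.splitlines()[0].lstrip("#").strip()

    for edit in manifest:
        if edit.get("op") != "move":
            continue  # other ops (merge, split, ...) omitted from this sketch
        moved = sections.pop([title(s) for s in sections].index(edit["section"]))
        dest = [title(s) for s in sections].index(edit["before"])
        sections.insert(dest, moved)
    return "".join(sections)
```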

Loop 4: Section Editing (Sonnet + Opus)

Parallel section editing with holistic review:

  1. Split document into sections
  2. Edit each section in parallel using Sonnet (with concurrency limiting)
  3. Holistic coherence review using Opus
  4. Reassemble the document

async def run_loop4_section_editing(state: dict) -> dict:
    """Parallel section editing with holistic review."""
    sections = split_into_sections(state["current_review"])
 
    # Concurrency limit prevents API rate limiting
    semaphore = asyncio.Semaphore(5)
 
    async def edit_section_limited(section):
        async with semaphore:
            llm = get_llm(ModelTier.SONNET, max_tokens=8000)
            return await improve_section(section, llm)
 
    edited_sections = await asyncio.gather(*[
        edit_section_limited(s) for s in sections
    ])
 
    # Holistic review catches cross-section coherence issues
    holistic_llm = get_llm(ModelTier.OPUS, thinking_budget=4000)
    coherence_edits = await review_coherence(edited_sections, holistic_llm)
 
    final_sections = apply_coherence_edits(edited_sections, coherence_edits)
    return {"current_review": reassemble_sections(final_sections)}
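`apply_coherence_edits` only needs to splice the holistic reviewer's fixes back into the section list. A minimal sketch, assuming edits arrive as `{"index": i, "replacement": text}` pairs (that shape is hypothetical):

```python
def apply_coherence_edits(sections: list[str], edits: list[dict]) -> list[str]:
    """Replace sections flagged by the holistic review, leaving the rest intact."""
    out = list(sections)  # copy so parallel callers never see partial mutation
    for edit in edits:
        out[edit["index"]] = edit["replacement"]
    return out
```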

Loop 4.5: Cohesion Check (Sonnet)

Quick assessment of inter-section coherence. Can return to Loop 3 if major structural issues remain (maximum two repeats):

async def run_loop4_5_cohesion(state: dict) -> dict:
    """Check cohesion, may return to Loop 3."""
    loop3_repeats = state.get("loop3_repeats", 0)
    max_loop3_repeats = state.get("max_loop3_repeats", 2)
 
    llm = get_llm(ModelTier.SONNET, max_tokens=2000)
    cohesion_check = await check_cohesion(state["current_review"], llm)
 
    needs_loop3_repeat = (
        cohesion_check.has_major_issues
        and loop3_repeats < max_loop3_repeats
    )
 
    return {
        "cohesion_result": cohesion_check,
        "needs_loop3_repeat": needs_loop3_repeat,
        "loop3_repeats": loop3_repeats + 1 if needs_loop3_repeat else loop3_repeats,
    }

Loop 5: Fact-Check (Haiku)

Fast, cost-effective verification of citations and factual claims against the paper corpus.
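A cheap pre-filter can flag citations with no matching paper in the corpus before any Haiku calls, so the model only verifies claims that could plausibly be checked. A sketch, assuming bracketed citation keys like `[smith2021]` (the key format is an assumption):

```python
import re

def find_uncorroborated_citations(review: str, corpus_ids: set[str]) -> list[str]:
    """Return citation keys that match no paper in the corpus."""
    cited = set(re.findall(r"\[([a-z]+\d{4}[a-z]?)\]", review))
    return sorted(cited - corpus_ids)
```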

Orchestration Graph

from langgraph.graph import StateGraph, START, END
from langgraph.graph.state import CompiledStateGraph

def create_orchestration_graph(state_class) -> CompiledStateGraph:
    builder = StateGraph(state_class)
 
    builder.add_node("loop1", run_loop1_theoretical_depth)
    builder.add_node("loop2", run_loop2_literature_base)
    builder.add_node("loop3", run_loop3_structure)
    builder.add_node("loop4", run_loop4_section_editing)
    builder.add_node("loop4_5", run_loop4_5_cohesion)
    builder.add_node("loop5", run_loop5_factcheck)
    builder.add_node("finalize", finalize_supervision)
 
    # Sequential flow
    builder.add_edge(START, "loop1")
    builder.add_edge("loop1", "loop2")
    builder.add_edge("loop2", "loop3")
    builder.add_edge("loop3", "loop4")
    builder.add_edge("loop4", "loop4_5")
 
    # Loop 4.5 can return to Loop 3
    builder.add_conditional_edges(
        "loop4_5",
        route_after_cohesion_check,
        {"return_to_loop3": "loop3", "continue": "loop5"},
    )
 
    builder.add_edge("loop5", "finalize")
    builder.add_edge("finalize", END)
 
    return builder.compile()
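The routing function named in the conditional edge reads the flag Loop 4.5 wrote to state; one plausible shape:

```python
def route_after_cohesion_check(state: dict) -> str:
    """Map Loop 4.5's verdict to an edge label for conditional routing."""
    return "return_to_loop3" if state.get("needs_loop3_repeat") else "continue"
```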

State with Reducers

For parallel operations in Loop 4, use reducers to accumulate results:

from typing import Annotated, TypedDict
from operator import add
 
class OrchestrationState(TypedDict, total=False):
    current_review: str
 
    # Parallel section edits need reducer
    section_edits: Annotated[list, add]
 
    # Error aggregation across loops
    errors: Annotated[list[dict], add]
 
    # Loop 4.5 -> Loop 3 return tracking
    loop3_repeats: int
    needs_loop3_repeat: bool

Quality Tier Integration

Quality settings control iteration bounds per loop:

Quality Tier  | Max Stages | Use Case
quick         | 1          | Fast feedback
standard      | 2          | Balanced
comprehensive | 3          | Thorough
high_quality  | 5          | Maximum depth
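One way this table might translate into configuration: a flat tier-to-bound mapping threaded into the initial state (the names and the state keys here are illustrative, not the pattern's actual config):

```python
# Hypothetical per-tier iteration bounds, mirroring the table above.
QUALITY_TIERS = {
    "quick": 1,
    "standard": 2,
    "comprehensive": 3,
    "high_quality": 5,
}

def initial_state_for(tier: str, draft: str) -> dict:
    """Seed orchestration state with iteration bounds (default: standard)."""
    max_stages = QUALITY_TIERS.get(tier, QUALITY_TIERS["standard"])
    return {
        "current_review": draft,
        "max_stages": max_stages,
        # Loop 4.5 may send the graph back to Loop 3 at most twice.
        "max_loop3_repeats": min(max_stages, 2),
    }
```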

Why Sequential Execution?

The loops execute sequentially because they have data dependencies:

  • Loop 2 excludes papers already discovered in Loop 1.
  • Loop 3 needs stable content from Loops 1 and 2.
  • Loop 4 operates on structurally-sound sections from Loop 3.
  • Loop 5 validates the final content.

Trade-offs

Benefits:

  • Comprehensive quality across multiple dimensions
  • Optimal model selection per task
  • Conditional feedback (Loop 4.5 can return for fixes)
  • Configurable depth via quality tiers

Costs:

  • Multiple Opus calls across loops (expensive for large documents)
  • Sequential execution adds latency (mitigated by parallelism within loops)