Multi-Angle Content Generation with Parallel Evaluation
Single-agent content generation has blind spots. The first framing chosen may not be the best. Iterating sequentially wastes time and may not explore fundamentally different approaches.
The pattern solves both problems by generating multiple content variants with distinct angles in parallel, then using structured cross-evaluation to select the best.
The Problem
When generating long-form content like essays, the framing angle affects quality:
- A puzzle angle suits investigative topics.
- A finding angle works for surprising research results.
- A contrarian angle fits nuanced topics with common misconceptions.
Single-shot generation picks one angle and commits. If it picks wrong, the output suffers.
The Solution
Generate three variants in parallel, each with a distinct angle, then evaluate and select:
```
          Source Material
                 │
                 ▼
    ┌─────────────────────────┐
    │      validate_input     │
    └─────────────────────────┘
                 │
                 ▼  (Send pattern: parallel fan-out)
         ┌───────┼───────┐
         ▼       ▼       ▼
      PUZZLE  FINDING  CONTRARIAN
      Writer  Writer    Writer
         │       │       │
         └───────┼───────┘
                 ▼  (state aggregation)
    ┌─────────────────────────┐
    │       choose_essay      │
    │  (6-dimension scoring)  │
    └─────────────────────────┘
                 │
                 ▼
       Selected Essay + Scores
```
Angle-Specific Prompts
Each angle has a distinct narrative purpose:
```python
PUZZLE_SYSTEM_PROMPT = """You are transforming a literature review into an
engaging essay using the PUZZLE angle.

**Structure:**
1. Open with a specific, surprising detail from the literature
2. Unfold the investigation around that puzzle
3. Use the puzzle as a lens to understand broader themes
4. Surface tensions and unresolved questions

**Tone:** Curious, investigative, intellectually honest
**Best for:** Readers who want to follow an unfolding argument
"""

FINDING_SYSTEM_PROMPT = """You are transforming a literature review into an
engaging essay using the FINDING angle.

**Structure:**
1. Open with the most surprising quantitative result
2. Explain what this tells us about prior assumptions
3. Walk through the mechanism that produced the result
4. Expand to related themes and implications

**Tone:** Direct, energetic, intellectually engaged
**Best for:** Readers seeking concrete evidence
"""

CONTRARIAN_SYSTEM_PROMPT = """You are transforming a literature review into an
engaging essay using the CONTRARIAN angle.

**Structure:**
1. Articulate the comfortable assumption (steelman it)
2. Introduce the complication—what doesn't fit?
3. Work through evidence that complicates the simple story
4. Close with productive uncertainty

**Tone:** Thoughtful, fair, precise
**Best for:** Readers who value nuance
"""
```
State with Reducer for Parallel Aggregation
The state uses an `add` reducer to merge outputs from parallel nodes:
```python
from operator import add
from typing import Annotated, Literal, Optional

from typing_extensions import TypedDict


class EssayDraft(TypedDict):
    angle: Literal["puzzle", "finding", "contrarian"]
    content: str
    word_count: int


class ContentGenerationState(TypedDict):
    # Input
    literature_review: str

    # Writing phase - aggregates from parallel nodes
    essay_drafts: Annotated[list[EssayDraft], add]

    # Selection phase
    selected_angle: Optional[Literal["puzzle", "finding", "contrarian"]]
    selection_reasoning: Optional[str]
    essay_evaluations: Optional[dict]

    # Output
    final_essay: Optional[str]
    errors: Annotated[list[dict], add]
```
The `Annotated[list[EssayDraft], add]` reducer merges essay lists from all three parallel writers.
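To make the merge semantics concrete, here is a quick illustration with made-up values: `operator.add` on lists is concatenation, so concurrent partial updates accumulate rather than overwrite one another.

```python
from operator import add

# Illustrative values: two parallel writers each return a partial update.
update_a = [{"angle": "puzzle", "content": "...", "word_count": 900}]
update_b = [{"angle": "finding", "content": "...", "word_count": 950}]

# LangGraph applies the reducer as branch results arrive, so both drafts survive.
assert add(update_a, update_b) == update_a + update_b  # list concatenation
```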
Parallel Fan-Out with Send Pattern
Returning a list of `Send` objects from a conditional edge fans execution out to multiple nodes at once:
```python
from langgraph.types import Send


def route_to_parallel_writers(state: ContentGenerationState) -> list[Send]:
    """Route to the three parallel writing agents."""
    return [
        Send("write_puzzle", {"literature_review": state["literature_review"]}),
        Send("write_finding", {"literature_review": state["literature_review"]}),
        Send("write_contrarian", {"literature_review": state["literature_review"]}),
    ]
```
LangGraph executes all three nodes concurrently, so producing three variants takes roughly the wall-clock time of producing one.
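Each writer node receives the `Send` payload, drafts its essay, and returns a one-element `essay_drafts` list for the reducer to merge. A minimal sketch of one writer follows; the `ChatOpenAI`/`gpt-4o` choice is an illustrative assumption, not from the source:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI  # illustrative model choice


async def write_puzzle_essay(state: dict) -> dict:
    """Draft the PUZZLE-angle essay; the other writers differ only in prompt."""
    llm = ChatOpenAI(model="gpt-4o")  # any chat model works here
    response = await llm.ainvoke([
        SystemMessage(content=PUZZLE_SYSTEM_PROMPT),
        HumanMessage(content=state["literature_review"]),
    ])
    draft: EssayDraft = {
        "angle": "puzzle",
        "content": response.content,
        "word_count": len(response.content.split()),
    }
    # Return a one-element list so the `add` reducer can concatenate
    # drafts arriving from the three parallel branches.
    return {"essay_drafts": [draft]}
```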
Structured Cross-Evaluation
The evaluator scores each essay on six dimensions:
```python
from typing import Literal

from pydantic import BaseModel, Field


class EssayEvaluation(BaseModel):
    """Evaluation of a single essay."""

    primary_strength: str
    primary_weakness: str
    hook_strength: int = Field(ge=1, le=5)
    structural_momentum: int = Field(ge=1, le=5)
    technical_payoff: int = Field(ge=1, le=5)
    tonal_calibration: int = Field(ge=1, le=5)
    honest_complexity: int = Field(ge=1, le=5)
    subject_fit: int = Field(ge=1, le=5)


class ChoosingAgentOutput(BaseModel):
    """Structured output from the choosing agent."""

    selected: Literal["puzzle", "finding", "contrarian"]
    evaluations: dict[str, EssayEvaluation]
    deciding_factors: str = Field(
        description="Two to three sentences on what made the winner stand out"
    )
    close_call: bool = Field(default=False)
    close_call_explanation: str = Field(default="")
```
The evaluator compares all three essays and provides explicit reasoning for the selection.
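One way to wire this schema into the chooser node is via structured output. A sketch, assuming a LangChain chat model with structured-output support (the `ChatOpenAI`/`gpt-4o` choice and the prompt wording are illustrative):

```python
from langchain_openai import ChatOpenAI


async def choose_essay_node(state: ContentGenerationState) -> dict:
    llm = ChatOpenAI(model="gpt-4o").with_structured_output(ChoosingAgentOutput)
    drafts = "\n\n".join(
        f"=== {d['angle'].upper()} ===\n{d['content']}" for d in state["essay_drafts"]
    )
    result = await llm.ainvoke(
        "Score each essay on the six dimensions, then select the strongest.\n\n" + drafts
    )
    winner = next(d for d in state["essay_drafts"] if d["angle"] == result.selected)
    return {
        "selected_angle": result.selected,
        "selection_reasoning": result.deciding_factors,
        "essay_evaluations": {k: v.model_dump() for k, v in result.evaluations.items()},
        "final_essay": winner["content"],
    }
```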
Graph Construction
The graph uses conditional edges for the fan-out:
```python
from langgraph.graph import END, START, StateGraph
from langgraph.graph.state import CompiledStateGraph


def create_content_generation_graph() -> CompiledStateGraph:
    builder = StateGraph(ContentGenerationState)

    builder.add_node("validate_input", validate_input_node)
    builder.add_node("write_puzzle", write_puzzle_essay)
    builder.add_node("write_finding", write_finding_essay)
    builder.add_node("write_contrarian", write_contrarian_essay)
    builder.add_node("choose_essay", choose_essay_node)

    builder.add_edge(START, "validate_input")
    builder.add_conditional_edges(
        "validate_input",
        route_to_parallel_writers,
        ["write_puzzle", "write_finding", "write_contrarian"],
    )

    # All writers converge to the chooser
    builder.add_edge("write_puzzle", "choose_essay")
    builder.add_edge("write_finding", "choose_essay")
    builder.add_edge("write_contrarian", "choose_essay")
    builder.add_edge("choose_essay", END)

    return builder.compile()
```
LangGraph waits for all three parallel branches to complete before executing the evaluator.
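`validate_input_node` is referenced above but not shown. A minimal sketch, assuming it only guards against a missing or too-short review and records failures under `errors` (the length threshold is illustrative, and error short-circuiting is omitted):

```python
def validate_input_node(state: ContentGenerationState) -> dict:
    """Record an error if the literature review looks unusable."""
    review = state.get("literature_review", "")
    if len(review.split()) < 200:  # illustrative minimum length
        return {"errors": [{"node": "validate_input",
                            "message": "literature review missing or too short"}]}
    return {}
```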
Usage Example
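The example below assumes a thin async wrapper around the compiled graph. A minimal sketch; how `target_word_count` reaches the writer prompts is not shown in this article, so it is left as an assumption here:

```python
async def content_generation(literature_review: str, target_word_count: int = 3500) -> dict:
    """Run validate -> parallel writers -> chooser and return the final state."""
    graph = create_content_generation_graph()
    # Assumption: target_word_count would be threaded into the writer
    # prompts; the state shown here only needs the review itself.
    return await graph.ainvoke({"literature_review": literature_review})
```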
```python
result = await content_generation(
    literature_review=lit_review_text,
    target_word_count=3500,
)

print(f"Selected angle: {result['selected_angle']}")
print(f"Reasoning: {result['selection_reasoning']}")

# Access detailed evaluations.
for angle, scores in result["essay_evaluations"].items():
    print(f"\n{angle.upper()}:")
    print(f"  Hook: {scores['hook_strength']}/5")
    print(f"  Momentum: {scores['structural_momentum']}/5")
    print(f"  Payoff: {scores['technical_payoff']}/5")
```
When to Use This Pattern
Use when:
- Output quality justifies three times the generation cost.
- Multiple valid framing approaches exist for the source material.
- Content benefits from different narrative structures.
- Selection criteria can be articulated as structured dimensions.
Do not use when:
- Simple, formulaic output is acceptable.
- Cost constraints preclude multiple generations.
- A single correct framing is obvious.
- Output will be heavily edited anyway.
Trade-offs
Benefits:
- Quality improvement from selection across three variants.
- No latency penalty due to parallel execution.
- Explainable selection with structured scores.
- Close-call detection flags when alternatives were competitive.
Costs:
- Three times the generation cost for the writing phase (mitigated by batch APIs).
- More nodes and state management than a single-agent design.
- Each angle needs careful prompt design.
- Evaluation dimensions may need tuning.