src.llm_interpreter.llm.prompts¶
Prompt templates for AMMM LLM report generation.
This module implements structured-first prompting:

1. First extract structured insights from evidence
2. Then generate narrative from structured data
Author: AMMM Team
Created: 2025-04-10
Last Modified: 2025-04-10
Module Contents¶
- src.llm_interpreter.llm.prompts.format_evidence(insights_dict: dict[str, Any]) → str¶
Format insights dictionary as readable evidence for LLM.
- Parameters:
insights_dict – Dictionary containing structured insights
- Returns:
Formatted evidence string
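A minimal sketch of what this helper might look like. The rendering style (key/value lines, dashed sub-items for nested dictionaries) is an assumption; the actual module may format evidence differently.

```python
from typing import Any

def format_evidence(insights_dict: dict[str, Any]) -> str:
    """Format an insights dictionary as readable evidence for an LLM."""
    lines: list[str] = []
    for key, value in insights_dict.items():
        if isinstance(value, dict):
            # Nested insight groups become an indented sub-list
            lines.append(f"{key}:")
            for sub_key, sub_value in value.items():
                lines.append(f"  - {sub_key}: {sub_value}")
        else:
            lines.append(f"{key}: {value}")
    return "\n".join(lines)
```

For example, `format_evidence({"model": "AMMM", "fit": {"r2": 0.91}})` would yield a plain-text block with `model: AMMM` on one line and the nested `fit` metrics as indented items.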
- src.llm_interpreter.llm.prompts.create_extraction_prompt(prompt_template: str, evidence: dict[str, Any]) → str¶
Create an extraction prompt with formatted evidence.
- Parameters:
prompt_template – Template string with {evidence} placeholder
evidence – Dictionary of evidence data
- Returns:
Formatted prompt
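A plausible sketch of this function under the documented contract (a template containing an `{evidence}` placeholder). Using `json.dumps` to render the evidence is an assumption standing in for the module's own evidence formatter.

```python
import json
from typing import Any

def create_extraction_prompt(prompt_template: str, evidence: dict[str, Any]) -> str:
    """Fill the {evidence} placeholder in an extraction prompt template."""
    # Assumption: the real module likely reuses its evidence formatter;
    # json.dumps is a self-contained stand-in for this sketch.
    evidence_text = json.dumps(evidence, indent=2, default=str)
    return prompt_template.format(evidence=evidence_text)
```

Usage: `create_extraction_prompt("Extract insights from:\n{evidence}", {"r2": 0.91})` returns the template with the serialized evidence substituted in place of the placeholder.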
- src.llm_interpreter.llm.prompts.create_narrative_prompt(prompt_template: str, structured_data: dict[str, Any]) → str¶
Create a narrative generation prompt with structured data.
- Parameters:
prompt_template – Template string with {structured_data} placeholder
structured_data – Dictionary of structured insights
- Returns:
Formatted prompt
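This second-stage helper mirrors the extraction prompt builder, substituting the `{structured_data}` placeholder instead. The sketch below is an assumption about the implementation; JSON serialization again stands in for whatever rendering the module actually applies.

```python
import json
from typing import Any

def create_narrative_prompt(prompt_template: str, structured_data: dict[str, Any]) -> str:
    """Fill the {structured_data} placeholder in a narrative prompt template."""
    # Serialize the stage-1 structured insights so the LLM can narrate them.
    data_text = json.dumps(structured_data, indent=2, default=str)
    return prompt_template.format(structured_data=data_text)
```

Together the two builders implement the module's structured-first flow: stage 1 extracts structured insights from formatted evidence, and stage 2 turns those insights into narrative text.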