src.llm_interpreter.model_interpreter

Model Interpreter using LLM APIs.

This module provides the core functionality for interpreting AMMM model outputs using Large Language Models (Gemini or OpenAI).

Module Contents

class src.llm_interpreter.model_interpreter.ModelInterpreter(config: src.llm_interpreter.config.LLMConfig | None = None)

Interprets AMMM model outputs using LLM APIs.
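A minimal usage sketch. The import paths are taken from this page; whether LLMConfig can be constructed without arguments (beyond its documented `None` default) is an assumption. The import guard lets the sketch run in environments where the package is not installed.

```python
# Minimal usage sketch; LLMConfig construction details are assumed.
try:
    from src.llm_interpreter.config import LLMConfig
    from src.llm_interpreter.model_interpreter import ModelInterpreter

    # Passing None (or omitting config) falls back to defaults per the signature.
    interpreter = ModelInterpreter(LLMConfig())
except ImportError:
    interpreter = None  # package not available in this environment

if interpreter is not None:
    # Inputs would come from the AMMM fitting pipeline.
    print(interpreter.get_usage_stats())
```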

interpret_convergence(diagnostics: Dict) → str

Interpret convergence diagnostics.

Parameters:

diagnostics – Convergence diagnostics dictionary.

Returns:

Interpretation text.
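The exact keys of the diagnostics dictionary are not specified on this page. The MCMC-style metrics below (`r_hat`, `ess_bulk`, `divergences`) are assumptions chosen only to illustrate the call pattern.

```python
# Illustrative input only -- key names are assumed, not part of the contract.
diagnostics = {
    "r_hat": {"beta_tv": 1.01, "beta_search": 1.00},   # per-parameter R-hat
    "ess_bulk": {"beta_tv": 1850, "beta_search": 2100},  # effective sample sizes
    "divergences": 0,
}

# text = interpreter.interpret_convergence(diagnostics)  # returns a str
```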

interpret_model_fit(fit_metrics: Dict) → str

Interpret model fit metrics.

Parameters:

fit_metrics – Model fit metrics dictionary.

Returns:

Interpretation text.
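The fit_metrics schema is likewise undocumented here; `r_squared`, `mape`, and `rmse` are assumed examples of typical model-fit metrics.

```python
# Hypothetical fit_metrics payload; field names are assumptions.
fit_metrics = {"r_squared": 0.83, "mape": 0.09, "rmse": 1_240.5}

# text = interpreter.interpret_model_fit(fit_metrics)  # returns a str
```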

interpret_channel_performance(channel_data: Dict) → str

Interpret marketing channel performance.

Parameters:

channel_data – Channel performance data dictionary.

Returns:

Interpretation text.
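A plausible channel_data payload, keyed by channel. The inner field names (`spend`, `contribution`, `roi`) are assumptions, not part of the documented contract.

```python
# Hypothetical per-channel performance data.
channel_data = {
    "tv":     {"spend": 120_000.0, "contribution": 0.31, "roi": 1.8},
    "search": {"spend": 80_000.0,  "contribution": 0.22, "roi": 2.4},
}

# text = interpreter.interpret_channel_performance(channel_data)
```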

generate_executive_summary(full_results: Dict) → str

Generate executive summary of results.

Parameters:

full_results – Complete aggregated results.

Returns:

Executive summary text.

generate_recommendations(full_results: Dict) → str

Generate actionable recommendations.

Parameters:

full_results – Complete aggregated results.

Returns:

Recommendations text.

interpret_technical_details(full_results: Dict) → str

Generate technical interpretation for data scientists.

Parameters:

full_results – Complete aggregated results.

Returns:

Technical interpretation text.
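generate_executive_summary, generate_recommendations, and interpret_technical_details all consume the same aggregated full_results dictionary. Its real schema is not documented on this page; the nesting below is an assumed illustration that combines the per-method inputs described above.

```python
# Assumed aggregate structure; top-level and nested keys are illustrative.
full_results = {
    "convergence": {"r_hat": {"beta_tv": 1.01}, "divergences": 0},
    "fit": {"r_squared": 0.83, "mape": 0.09},
    "channels": {"tv": {"spend": 120_000.0, "roi": 1.8}},
}

# summary = interpreter.generate_executive_summary(full_results)
# recs    = interpreter.generate_recommendations(full_results)
# tech    = interpreter.interpret_technical_details(full_results)
```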

compare_model_runs(current_results: Dict, previous_results: Dict) → str

Compare results between two model runs.

Parameters:
  • current_results – Current model results.

  • previous_results – Previous model results.

Returns:

Comparison text.
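Both arguments share the full_results structure. A sketch with two assumed runs, plus the kind of delta the comparison would surface:

```python
# Two runs with an assumed structure; only the call pattern is documented.
previous_results = {"fit": {"r_squared": 0.78}, "channels": {"tv": {"roi": 1.6}}}
current_results  = {"fit": {"r_squared": 0.83}, "channels": {"tv": {"roi": 1.8}}}

# text = interpreter.compare_model_runs(current_results, previous_results)

# The kind of change the comparison text would describe:
delta_r2 = current_results["fit"]["r_squared"] - previous_results["fit"]["r_squared"]
```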

get_usage_stats() → Dict[str, Any]

Get API usage statistics.

Returns:

Dictionary of usage statistics.
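The returned dictionary's keys are not specified here. The fields below are an assumed illustration of what API-usage accounting typically tracks; `cache_hits` reflects the response cache that clear_cache() (below) resets.

```python
# Assumed shape of the usage-statistics dictionary; field names are illustrative.
usage_stats = {
    "total_requests": 7,
    "cache_hits": 3,        # responses served from the cache
    "total_tokens": 14_520,
}
cache_hit_rate = usage_stats["cache_hits"] / usage_stats["total_requests"]
```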

clear_cache()

Clear the response cache.