src.llm_interpreter.integration¶
Integration module for AMMM LLM Interpreter.
This module provides integration points with the main AMMM pipeline, particularly with the driver_v2.py workflow.
Module Contents¶
- class src.llm_interpreter.integration.LLMInterpreterIntegration(config_path: str | None = None)¶
Main integration class for LLM interpretation functionality.
This class provides a simple interface for the AMMM pipeline to generate AI-powered interpretations of model results.
- generate_interpretation(results_dir: str = 'results/', config_path: str | None = None) → Tuple[bool, str | None]¶
Generate LLM interpretation of model results.
This is the main entry point for the driver to request interpretations.
- Parameters:
results_dir – Directory containing model results.
config_path – Path to model configuration file.
- Returns:
Tuple of (success: bool, report_path: str or None).
- Return type:
Tuple[bool, str | None]
- check_status() → Dict[str, Any]¶
Check the status of LLM interpreter components.
- Returns:
Dictionary with status information.
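A minimal sketch of how a pipeline caller might use this interface. Since the real implementation is not shown here, the class below is a stand-in that only mirrors the documented signatures; the report filename and the status dictionary keys are assumptions, not the module's actual behavior.

```python
from typing import Any, Dict, Optional, Tuple


class LLMInterpreterIntegration:
    """Stand-in mirroring the documented interface (not the real implementation)."""

    def __init__(self, config_path: Optional[str] = None) -> None:
        self.config_path = config_path

    def generate_interpretation(
        self, results_dir: str = "results/", config_path: Optional[str] = None
    ) -> Tuple[bool, Optional[str]]:
        # The real method would run the LLM over results in results_dir;
        # here we just return a plausible (success, report_path) tuple.
        return True, f"{results_dir}llm_interpretation_report.md"

    def check_status(self) -> Dict[str, Any]:
        # The real method reports component status; these keys are illustrative.
        return {"available": True, "config_path": self.config_path}


# Typical call pattern from the pipeline:
integration = LLMInterpreterIntegration(config_path="config.yaml")
success, report_path = integration.generate_interpretation(results_dir="results/")
status = integration.check_status()
```

The tuple return lets the driver distinguish a disabled or failed run (`report_path is None`) from a successful one without raising.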
- src.llm_interpreter.integration.integrate_with_driver(driver_instance, config_path: str, results_dir: str = 'results/') → str | None¶
Convenience function to integrate LLM interpretation with driver workflow.
This function can be called from driver_v2.py after model fitting is complete.
- Parameters:
driver_instance – Instance of the driver class (for accessing model data).
config_path – Path to YAML configuration file.
results_dir – Directory containing model results.
- Returns:
Path to generated report or None if disabled/failed.
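The intended call site is driver_v2.py, after fitting. A sketch of that pattern, using a stub with the documented signature in place of the real function (the `None`-on-missing-driver check and the report filename are illustrative assumptions):

```python
from typing import Optional


def integrate_with_driver(
    driver_instance, config_path: str, results_dir: str = "results/"
) -> Optional[str]:
    """Stub with the documented signature; the real function lives in
    src.llm_interpreter.integration."""
    # Returns the report path on success, or None if interpretation
    # is disabled in the config or fails.
    if driver_instance is None:
        return None
    return f"{results_dir}llm_interpretation_report.md"


class Driver:
    """Minimal stand-in for the driver_v2 driver class."""


# After model fitting completes in driver_v2.py:
driver = Driver()
report = integrate_with_driver(driver, config_path="config.yaml")
if report is not None:
    print(f"LLM interpretation written to {report}")
```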
- src.llm_interpreter.integration.add_llm_interpretation_to_config(config_path: str, enable: bool = False)¶
Helper function to add the LLM interpretation option to an existing config.
- Parameters:
config_path – Path to YAML configuration file.
enable – Whether to enable the feature.