e39618f5df750c6feb5792e99b93f6c6e2541b43
Anthropic, Gemini, and OpenAi providers each repeated: the API-key gate, chronological price-list building, response validation (direction/confidence/reasoning), TrendDirection::tryFrom, the confidence cap at 85, and the top-level try/catch + Log::error.

Now in AbstractLlmPredictionProvider:
- LLM_MAX_CONFIDENCE constant
- buildPriceList(Collection) helper
- buildPrediction(input, ?source): handles direction validation, confidence cap, model construction
- defaultPrompt(priceList): shared by Gemini and OpenAi
- default predict() flow (apiKey + callProvider + buildPrediction + try/catch)

Gemini and OpenAi now only implement apiKey() and callProvider(). Anthropic overrides predict() because of its multi-phase web-search + forced-tool flow, but reuses the helpers.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
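The template-method shape described above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual diff: the Prediction model, the exact method signatures, and the prompt text are assumptions; only the names mentioned in the commit message (AbstractLlmPredictionProvider, LLM_MAX_CONFIDENCE, buildPriceList, buildPrediction, defaultPrompt, predict, apiKey, callProvider, TrendDirection::tryFrom, Log::error) come from the source.

```php
<?php

use Illuminate\Support\Collection;
use Illuminate\Support\Facades\Log;

abstract class AbstractLlmPredictionProvider
{
    // Shared confidence ceiling applied to every provider's response.
    protected const LLM_MAX_CONFIDENCE = 85;

    // Each concrete provider supplies only its key and its API call.
    abstract protected function apiKey(): ?string;

    abstract protected function callProvider(string $prompt): array;

    // Default flow: gate on the API key, call the provider, build the model.
    public function predict(Collection $prices): ?Prediction
    {
        if (! $this->apiKey()) {
            return null;
        }

        try {
            $prompt   = $this->defaultPrompt($this->buildPriceList($prices));
            $response = $this->callProvider($prompt);

            return $this->buildPrediction($response);
        } catch (\Throwable $e) {
            Log::error('LLM prediction failed', ['error' => $e->getMessage()]);

            return null;
        }
    }

    // Chronological "date: price" lines fed into the prompt.
    protected function buildPriceList(Collection $prices): string
    {
        return $prices
            ->sortBy('date')
            ->map(fn ($p) => "{$p->date}: {$p->price}")
            ->implode("\n");
    }

    // Validates the direction via TrendDirection::tryFrom, checks the other
    // required fields, caps confidence, and constructs the model.
    protected function buildPrediction(array $input, ?string $source = null): ?Prediction
    {
        $direction = TrendDirection::tryFrom($input['direction'] ?? '');

        if ($direction === null || ! isset($input['confidence'], $input['reasoning'])) {
            return null;
        }

        return new Prediction(
            direction: $direction,
            confidence: min((int) $input['confidence'], self::LLM_MAX_CONFIDENCE),
            reasoning: $input['reasoning'],
            source: $source,
        );
    }

    // Prompt shared by the Gemini and OpenAi providers (wording is assumed).
    protected function defaultPrompt(string $priceList): string
    {
        return "Given these prices, predict the trend direction:\n{$priceList}";
    }
}
```

Under this shape, a subclass like the Gemini provider reduces to an apiKey() accessor and a callProvider() that performs the HTTP call and decodes the JSON, while Anthropic keeps its own predict() for the multi-phase flow but delegates to buildPriceList() and buildPrediction().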
Languages: PHP 60.5%, Vue 14.8%, Blade 14%, HTML 9.2%, JavaScript 1%, Other 0.5%