The Anthropic, Gemini, and OpenAi providers each duplicated the same
logic: the API-key gate, chronological price-list building, response
validation (direction/confidence/reasoning), TrendDirection::tryFrom,
the confidence cap at 85, and the top-level try/catch + Log::error.
Now in AbstractLlmPredictionProvider:
- LLM_MAX_CONFIDENCE constant
- buildPriceList(Collection) helper
- buildPrediction(input, ?source) — handles direction validation,
confidence cap, model construction
- defaultPrompt(priceList) — shared by Gemini and OpenAi
- Default predict() flow: apiKey() check + callProvider() +
  buildPrediction(), wrapped in the shared try/catch.

Gemini and OpenAi now implement only apiKey() and callProvider().
Anthropic overrides predict() because of its multi-phase web-search +
forced-tool flow, but reuses the same helpers.
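For reference, a minimal sketch of the shape described above. Only the
names mentioned in this message (AbstractLlmPredictionProvider,
LLM_MAX_CONFIDENCE, buildPriceList, buildPrediction, defaultPrompt,
predict, apiKey, callProvider, TrendDirection::tryFrom) come from the
change itself; every signature, field, and body here is an assumption,
and Laravel's Collection/Log are replaced with plain-PHP stand-ins so
the sketch is self-contained:

```php
<?php
// Hypothetical TrendDirection enum; the real cases may differ.
enum TrendDirection: string
{
    case Up = 'up';
    case Down = 'down';
    case Sideways = 'sideways';
}

// Stand-in for the real prediction model (fields assumed).
final class Prediction
{
    public function __construct(
        public readonly TrendDirection $direction,
        public readonly int $confidence,
        public readonly string $reasoning,
        public readonly ?string $source = null,
    ) {}
}

abstract class AbstractLlmPredictionProvider
{
    // Shared cap: no provider may report confidence above this.
    protected const LLM_MAX_CONFIDENCE = 85;

    abstract protected function apiKey(): ?string;

    /** Call the concrete provider's API and return the decoded response. */
    abstract protected function callProvider(string $prompt): array;

    /**
     * Build "YYYY-MM-DD: price" lines in chronological order.
     * The real helper takes a Collection; a plain array stands in here.
     *
     * @param array<int, array{date: string, close: float}> $prices
     */
    protected function buildPriceList(array $prices): string
    {
        usort($prices, fn (array $a, array $b) => strcmp($a['date'], $b['date']));
        return implode("\n", array_map(
            fn (array $p) => sprintf('%s: %.2f', $p['date'], $p['close']),
            $prices,
        ));
    }

    /** Shared prompt for Gemini and OpenAi (wording assumed). */
    protected function defaultPrompt(string $priceList): string
    {
        return "Given these closing prices:\n{$priceList}\n"
            . 'Reply with JSON keys: direction, confidence, reasoning.';
    }

    /** Validate the raw LLM response and construct the model. */
    protected function buildPrediction(array $input, ?string $source = null): ?Prediction
    {
        $direction = TrendDirection::tryFrom($input['direction'] ?? '');
        if ($direction === null || !isset($input['confidence'], $input['reasoning'])) {
            return null; // response failed validation
        }
        return new Prediction(
            $direction,
            min((int) $input['confidence'], self::LLM_MAX_CONFIDENCE),
            (string) $input['reasoning'],
            $source,
        );
    }

    /** Default flow; Anthropic overrides this, the others inherit it. */
    public function predict(array $prices): ?Prediction
    {
        if ($this->apiKey() === null) {
            return null; // API-key gate
        }
        try {
            $prompt = $this->defaultPrompt($this->buildPriceList($prices));
            return $this->buildPrediction($this->callProvider($prompt));
        } catch (\Throwable $e) {
            error_log($e->getMessage()); // stands in for Log::error
            return null;
        }
    }
}
```

With this shape, a Gemini/OpenAi-style subclass is just apiKey() plus
callProvider(), and an over-confident response is clamped to 85 in one
place instead of three.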
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>