Removes everything that was made redundant by the new forecasting
stack. Per docs/superpowers/specs/2026-05-01-prediction-rebuild-design.md,
this was the cleanup planned at the end of Phase 4.
Deleted services and code:
- App\Services\Prediction\Signals\* (the old seven-signal aggregator —
  trend, supermarket, day-of-week, brand-behaviour, stickiness,
  regional-momentum, oil — replaced by RidgeRegressionModel).
- App\Services\NationalFuelPredictionService (the post-Phase-4 thin
shim; StationSearchService now depends on WeeklyForecastService
directly, set up in the previous commit).
- App\Services\LlmPrediction\* (AbstractLlmPredictionProvider plus
the four provider implementations — Anthropic, OpenAI, Gemini, and
the OilPredictionProvider router. Replaced by LlmOverlayService).
- App\Services\BrentPricePredictor and App\Services\Ewma. The Ewma
helper had no callers left after BrentPricePredictor went.
- App\Models\PricePrediction and its factory.
- App\Console\Commands\PredictOilPrices (the oil:predict command).
- App\Filament\Resources\OilPredictionResource and its Pages.
Schema and dashboard:
- Drop the price_predictions table via a new migration.
- Repoint the Filament StatsOverviewWidget tile from PricePrediction
to WeeklyForecast so the dashboard reflects the new pipeline.
- Remove the OilPredictionProvider binding from AppServiceProvider.
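The drop migration is the usual one-liner; a sketch (class shape per
standard Laravel anonymous-migration conventions, not copied from the
actual file):

```php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::dropIfExists('price_predictions');
    }

    public function down(): void
    {
        // Intentionally left empty: the table is retired along with the
        // old pipeline, so rolling back does not recreate it.
    }
};
```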
Test cleanup:
- Delete tests for every retired service.
- Update StatsOverviewWidgetTest to seed weekly_forecasts instead of
price_predictions.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Audit items #7 and #5.
#7 — BrentPricePredictor::generatePrediction previously wrote both an
EWMA row and an LLM row to price_predictions on every run. The
downstream OilSignal already prefers llm_with_context > llm > ewma, so
the EWMA row was dead weight 95% of the time. Now we try LLM first; if
it returns null (no API key, parse failure, etc.) we compute and persist
EWMA as a real fallback. This also avoids redundant work on the success
path.
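The new success/fallback flow reads roughly like this (a sketch of the
inside of generatePrediction; apart from Ewma::compute, the helper and
property names here are illustrative, not the actual ones):

```php
// Try the LLM provider first.
$llm = $this->llmProvider->predict($prices);

if ($llm !== null) {
    // Success path: persist only the LLM prediction — no redundant
    // EWMA row is written any more.
    return $this->store($llm);
}

// Real fallback: provider returned null (no API key, parse failure,
// etc.), so compute and persist EWMA only now.
$ewma = Ewma::compute($prices->pluck('price')->all());

return $this->store($this->buildEwmaPrediction($ewma));
```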
Updated the "stores both" test to "stores only LLM" — asserts no EWMA
row is written when the provider succeeds.
#5 — BrentPricePredictor and AnthropicPredictionProvider both had
byte-identical computeEwma() methods with identical EWMA_ALPHA = 0.3
constants. Extracted to App\Services\Ewma::compute() and dropped both
private methods + their alpha constants.
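The extracted helper is essentially this (signature and parameter names
are illustrative; only Ewma::compute and EWMA_ALPHA = 0.3 come from the
change itself):

```php
namespace App\Services;

final class Ewma
{
    private const EWMA_ALPHA = 0.3;

    /**
     * Exponentially weighted moving average over a chronological
     * price series (oldest first). Returns null for an empty series.
     *
     * @param  list<float>  $prices
     */
    public static function compute(array $prices, float $alpha = self::EWMA_ALPHA): ?float
    {
        if ($prices === []) {
            return null;
        }

        $ewma = $prices[0];
        foreach (array_slice($prices, 1) as $price) {
            $ewma = $alpha * $price + (1 - $alpha) * $ewma;
        }

        return $ewma;
    }
}
```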
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Anthropic, Gemini, and OpenAi providers each repeated the same
scaffolding: the API-key gate, chronological price-list building,
response validation (direction/confidence/reasoning),
TrendDirection::tryFrom, the confidence cap at 85, and the top-level
try/catch + Log::error.
Now in AbstractLlmPredictionProvider:
- LLM_MAX_CONFIDENCE constant
- buildPriceList(Collection) helper
- buildPrediction(input, ?source) — handles direction validation,
confidence cap, model construction
- defaultPrompt(priceList) — shared by Gemini and OpenAi
- Default predict() flow (apiKey + callProvider + buildPrediction +
try/catch). Gemini and OpenAi only implement apiKey() and
callProvider(). Anthropic overrides predict() because of its
multi-phase web-search + forced-tool flow but reuses the helpers.
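The resulting base class has roughly this shape (return type and
variable names are illustrative; the constant, helper names, and hook
methods are the ones listed above):

```php
abstract class AbstractLlmPredictionProvider
{
    protected const LLM_MAX_CONFIDENCE = 85;

    /** Provider-specific API key, or null/empty when unconfigured. */
    abstract protected function apiKey(): ?string;

    /** Raw provider call; returns direction/confidence/reasoning, or null. */
    abstract protected function callProvider(string $priceList): ?array;

    /** Default flow shared by Gemini and OpenAi; Anthropic overrides it. */
    public function predict(Collection $prices): ?LlmPrediction
    {
        if (! $this->apiKey()) {
            return null; // API-key gate
        }

        try {
            $response = $this->callProvider($this->buildPriceList($prices));

            // buildPrediction() validates direction via
            // TrendDirection::tryFrom, caps confidence at
            // LLM_MAX_CONFIDENCE, and constructs the model.
            return $response !== null
                ? $this->buildPrediction($response, static::class)
                : null;
        } catch (\Throwable $e) {
            Log::error('LLM prediction failed', [
                'provider' => static::class,
                'error' => $e->getMessage(),
            ]);

            return null;
        }
    }
}
```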
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>