Removes everything that was made redundant by the new forecasting
stack. Per docs/superpowers/specs/2026-05-01-prediction-rebuild-design.md,
this was the cleanup planned at the end of Phase 4.
Deleted services and code:
- App\Services\Prediction\Signals\* (the old seven-signal aggregator:
  trend, supermarket, day-of-week, brand-behaviour, stickiness,
  regional-momentum, and oil; replaced by RidgeRegressionModel).
- App\Services\NationalFuelPredictionService (the post-Phase-4 thin
shim; StationSearchService now depends on WeeklyForecastService
directly, set up in the previous commit).
- App\Services\LlmPrediction\* (AbstractLlmPredictionProvider, the
  three provider implementations for Anthropic, OpenAI, and Gemini,
  and the OilPredictionProvider router; replaced by LlmOverlayService).
- App\Services\BrentPricePredictor and App\Services\Ewma. The Ewma
helper had no callers left after BrentPricePredictor went.
- App\Models\PricePrediction and its factory.
- App\Console\Commands\PredictOilPrices (the oil:predict command).
- App\Filament\Resources\OilPredictionResource and its Pages.
Schema and dashboard:
- Drop the price_predictions table via a new migration.
- Repoint the Filament StatsOverviewWidget tile from PricePrediction
to WeeklyForecast so the dashboard reflects the new pipeline.
- Remove the OilPredictionProvider binding from AppServiceProvider.
Test cleanup:
- Delete tests for every retired service.
- Update StatsOverviewWidgetTest to seed weekly_forecasts instead of
price_predictions.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Audit items #7 and #5.
#7 — BrentPricePredictor::generatePrediction previously wrote both an
EWMA row and an LLM row to price_predictions on every run. The
downstream OilSignal already prefers llm_with_context > llm > ewma, so
the EWMA row was dead weight 95% of the time. Now we try LLM first; if
it returns null (no API key, parse failure, etc.) we compute and persist
EWMA as a real fallback. This also avoids redundant work on the success
path.
Updated the "stores both" test to "stores only LLM" — asserts no EWMA
row is written when the provider succeeds.
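The new control flow can be sketched as follows. This is a Python sketch of the PHP logic, not the actual implementation; the callable names and the list standing in for the price_predictions table are illustrative only:

```python
def generate_prediction(prices, predict_with_llm, compute_ewma):
    """Persist exactly one prediction row: LLM when available, EWMA otherwise.

    `predict_with_llm` returns a float or None (None models a missing
    API key, a parse failure, etc.); `compute_ewma` is the fallback.
    """
    rows = []  # stands in for writes to the price_predictions table
    llm = predict_with_llm(prices)
    if llm is not None:
        rows.append(("llm", llm))  # success path: no EWMA work, no EWMA row
    else:
        rows.append(("ewma", compute_ewma(prices)))  # real fallback
    return rows
```

The key property, matching the updated test, is that a successful provider call leaves no EWMA row behind.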
#5 — BrentPricePredictor and AnthropicPredictionProvider both had
byte-identical computeEwma() methods with identical EWMA_ALPHA = 0.3
constants. Extracted the logic to App\Services\Ewma::compute() and
dropped both private methods and their alpha constants.
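For reference, the shared helper amounts to the standard EWMA recurrence with alpha = 0.3. A Python sketch of what the duplicated methods computed (the real code lives in App\Services\Ewma::compute()):

```python
EWMA_ALPHA = 0.3  # smoothing factor previously duplicated in both classes

def compute_ewma(values, alpha=EWMA_ALPHA):
    """Exponentially weighted moving average: s_t = a*x_t + (1-a)*s_{t-1}."""
    it = iter(values)
    s = next(it)  # seed with the first observation
    for x in it:
        s = alpha * x + (1 - alpha) * s
    return s
```

With alpha = 0.3, recent observations get 30% of the weight at each step, so e.g. compute_ewma([100.0, 110.0]) yields 0.3 * 110 + 0.7 * 100 = 103.0.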
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- BrentPriceFetcher owns ingestion (fetchFromEia / fetchFromFred, each throws on failure)
- BrentPricePredictor owns prediction and marks latest brent_prices row as generated
- oil:fetch command tries EIA, falls back to FRED, fails loudly if both fail
- oil:predict command prompts if latest price already has a prediction; --force bypasses
- add prediction_generated_at column to brent_prices
- delete OilPriceService (replaced by the two focused services)
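The oil:fetch fallback chain described above can be sketched as follows (a Python sketch; `fetch_from_eia` and `fetch_from_fred` are stand-ins for the PHP methods, each of which throws on failure):

```python
def fetch_brent_price(fetch_from_eia, fetch_from_fred):
    """Try EIA first, fall back to FRED, and fail loudly if both raise."""
    try:
        return fetch_from_eia()
    except Exception as eia_err:
        try:
            return fetch_from_fred()
        except Exception as fred_err:
            # surface both underlying errors instead of failing silently
            raise RuntimeError(
                f"both sources failed: EIA ({eia_err}); FRED ({fred_err})"
            )
```

The point of the explicit re-raise is that a double failure aborts the command with both causes visible, rather than leaving brent_prices silently stale.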