The half-life of a signal: why "WTI ↑5% over 14d" doesn't mean the same thing on day 1 and day 12
Imagine two identical GeoPulse signals. Same rule, same asset (WTI), same direction (↑5% over 14 days), same confidence at creation (75%). The first was emitted 1 day ago. The second 12 days ago, and oil hasn't really moved since. Simple question: do you trust them equally?
Obviously not. And yet, until last week, our interface showed both with the same "75%" label. That's what this article fixes — by borrowing a concept from nuclear physics we should have imported from day one.
The problem: confidence is treated as a constant
When a GeoPulse rule detects a pattern, it estimates three things: the direction of the move, its magnitude, and the confidence it has in this prediction. These three values are written to the database the moment the signal is created. They never change until resolution.
That's defensible for direction and predicted magnitude — those are commitments made at T=0 you must be able to verify at T+horizon. But it's absurd for confidence, which should represent at every instant the probability that the signal still materialises.
A concrete example. On 14 April, a rule fires: "WTI will rise 5% in the next 14 days, confidence 75%". On 15 April, oil moves +1% — we're in the expected trajectory. On the 16th, +0.5%. And then... nothing. From 17 to 26 April, WTI fluctuates within a ±0.3% band around its entry price. The timeframe expires in two days. What's the signal worth now?
If we look at it coldly, the window in which a +5% move can still materialise has shrunk from 14 to 2 days. The probability that it still happens, with no preparatory move at all, is much lower than at day 1. The signal may still deserve to be displayed, but with an honest label. Not its original "75%".
The metaphor: why radioactivity
Physicists know this problem well. When a radioactive isotope emits a particle, you can't predict when exactly. But across a large population of atoms, you can measure the fraction that will have "reacted" at a given instant. That fraction follows an exponential law: every fixed period, called a half-life, half the remaining atoms decay. After one half-life, 50% remain. After two, 25%. After three, 12.5%.
That's exactly the right model for a financial signal's confidence.
At T=0, the signal has 100% of its informational value. At each fixed period — its half-life — it loses half. The formula is trivially simple:
confidence(t) = confidence(0) × 0.5 ^ (t / T_half)
For our WTI example: if the half-life is 7 days (half the 14-day timeframe, the value we calibrate), then:
- Day 1: confidence = 75% × 0.5^(1/7) = 68%
- Day 7: confidence = 75% × 0.5^(7/7) = 38%
- Day 12: confidence = 75% × 0.5^(12/7) = 23%
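The formula above is a one-liner. Here is a minimal sketch that reproduces the numbers in the list (the function name is illustrative, not the production service's actual API):

```javascript
// Exponential confidence decay: confidence(t) = confidence(0) × 0.5^(t / T_half).
// decayedConfidence is an illustrative name, not GeoPulse's real API.
function decayedConfidence(initialConfidence, elapsedDays, halfLifeDays) {
  return initialConfidence * Math.pow(0.5, elapsedDays / halfLifeDays);
}

// WTI example: 75% at creation, 7-day half-life
decayedConfidence(75, 1, 7);  // ≈ 68
decayedConfidence(75, 7, 7);  // 37.5 → displayed as 38
decayedConfidence(75, 12, 7); // ≈ 23
```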
The signal that was still showing "75%" on day 12, without the market having moved, will now show "23% — half-dead". Much more accurate.
The half-life isn't magic: it's calibrated
A 7-day half-life didn't come out of thin air. It depends on the type of signal. A rule that detects a technical breakout (extreme RSI, breakout on volume) must have a short half-life: if the move doesn't happen within 24–48h, the opportunity has likely already passed. Conversely, a structural rule like the Sahm Rule (recession indicator) plays out over months — its natural half-life is several weeks.
Our approach: by default, half-life = timeframe / 2. Concretely, a 14-day signal decays to 50% at midpoint, 25% at the end. That matches the intuition "I've consumed half my timeframe, I've consumed half my confidence".
But that default is adjustable rule by rule, and auto-tuned by our calibration engine. The system observes two statistics on each rule's resolved corpus:
- The percentage of signals that finish stale (die without the market ever reacting)
- The percentage that finish expired (reach the end of their timeframe without a significant move)
If a rule produces lots of stale (≥ 30%), it's overpromising on duration — its half-life is automatically shortened by 30%. If conversely lots of expired (≥ 50%), moves do happen, just later than predicted — its half-life is extended by 30%.
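The adjustment rule translates into a few lines. This is a sketch under the thresholds stated above (30% stale, 50% expired, ±30% steps); the function itself is hypothetical, not the calibration engine's actual code:

```javascript
// Hypothetical sketch of the nightly half-life auto-calibration rule.
function recalibrateHalfLife(halfLifeHours, staleRatio, expiredRatio) {
  if (staleRatio >= 0.30) {
    // Rule overpromises on duration: shorten the half-life by 30%.
    return halfLifeHours * 0.7;
  }
  if (expiredRatio >= 0.50) {
    // Moves do happen, just later than predicted: extend by 30%.
    return halfLifeHours * 1.3;
  }
  return halfLifeHours; // within tolerance: keep as-is
}
```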
All without human intervention. The system calibrates itself, every night at 03:00 UTC.
The stale status: new in v1.10.2
Before this month, a signal could be in one of three states: pending (active), resolved (verified at expiration), or expired (timeframe reached without a significant move). Those three states weren't enough to describe everything we needed to manage.
Consider our WTI again. On day 12, oil hasn't moved. The signal isn't expired (timeframe still has 2 days to run). It isn't resolved (nothing observed). It's pending, but with a probability of materialisation reduced to a third of the original. It's no longer a living signal — it's a dying signal.
The new stale status captures exactly that nuance. A signal becomes stale when:
- It has consumed more than 70% of its timeframe
- AND the market has moved less than 25% of the predicted magnitude
When both conditions are met, the signal is automatically marked stale and exits the active stats. It no longer pollutes accuracy calculations (since it's neither a success nor a failure — it's a non-event). But it stays counted separately, and that count feeds half-life auto-calibration.
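The two conditions translate directly into code. A minimal sketch, with illustrative names (isStale is not necessarily the service's actual function):

```javascript
// A signal is stale when >70% of its timeframe is consumed AND
// the market has realised <25% of the predicted magnitude.
function isStale(elapsedHours, timeframeHours, observedMovePct, predictedMovePct) {
  const timeConsumed = elapsedHours / timeframeHours;
  const moveRealised = Math.abs(observedMovePct) / Math.abs(predictedMovePct);
  return timeConsumed > 0.70 && moveRealised < 0.25;
}

// WTI on day 12 of 14, +0.3% observed vs +5% predicted → stale
isStale(12 * 24, 14 * 24, 0.3, 5); // true
```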
Impact on allocation: Kelly decays too
For Premium subscribers using our Kelly allocation suggestion, confidence decay has a direct effect. The standard quarter-Kelly formula uses the win probability p (from the per-rule Bayesian posterior), the loss probability q = 1 - p, and the gain/loss ratio b. Recommended position size:
fraction = (p × b - q) / b × 0.25
But that formula treats the signal as a binary event at T=0. If half the window has elapsed without anything happening, the edge has melted. Multiplying the recommendation by the decay factor reflects that honestly:
adjusted_fraction = fraction × decay_factor
Concretely: a signal that justified 8% allocation on day 1 only suggests 4% on day 7, and 2% on day 12. If the half-life is well calibrated, that matches exactly the fraction of your conviction that still deserves to be deployed.
The math is exposed transparently in the dashboard's Kelly card: you see the raw fraction (what classic Kelly would have computed), the decay_factor applied, and the final recommendation. No number is hidden.
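The three numbers the Kelly card shows can be sketched as follows (function names are illustrative, not the dashboard's actual code):

```javascript
// Quarter-Kelly sizing with confidence decay, per the formulas above.
function quarterKelly(p, b) {
  const q = 1 - p;                  // loss probability
  return ((p * b - q) / b) * 0.25;  // raw quarter-Kelly fraction
}

function decayFactor(elapsedDays, halfLifeDays) {
  return Math.pow(0.5, elapsedDays / halfLifeDays);
}

function adjustedFraction(p, b, elapsedDays, halfLifeDays) {
  // Final recommendation = raw fraction × decay factor
  return quarterKelly(p, b) * decayFactor(elapsedDays, halfLifeDays);
}
```

At the half-life, the decay factor is exactly 0.5, so the recommendation is half the raw fraction, matching the day-1-to-day-7 halving described above.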
Premium filter: "min freshness"
Last practical corollary. In /account, Premium subscribers can now configure a minimum freshness threshold on their signal filters. Setting "≥ 60%" means: only show me signals that still have at least 60% of their original value.
It's an actionable filter. If you trade short swings, you probably want ≥ 75% (recent, hot signals). If you take a longer view and accept setups that are still developing, ≥ 30% lets more material through.
The real-time preview shows you how many signals from the last 30 days would have passed the threshold — you calibrate by seeing the consequences before saving.
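Client-side, such a threshold is a simple filter over the freshness_pct field the public API exposes. A sketch with invented sample values:

```javascript
// Illustrative "min freshness" filter; sample signal values are invented,
// only the freshness_pct field name comes from the public API.
const signals = [
  { asset: 'WTI',   freshness_pct: 91 }, // day 1 of 14
  { asset: 'WTI',   freshness_pct: 31 }, // day 12 of 14
  { asset: 'Brent', freshness_pct: 64 },
];

function filterByMinFreshness(list, minFreshnessPct) {
  return list.filter(s => s.freshness_pct >= minFreshnessPct);
}

filterByMinFreshness(signals, 60); // keeps the signals at 91% and 64%
```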
Why this was urgent
We shipped this overhaul on three observations accumulated over a month of live track record:
First: too many pending signals lingered in the scoreboard for days without saying anything new. The customer saw "WTI ↑5% confidence 75%" for eight days while the market hadn't moved, and the label stayed identical. It was misleading by stagnation — a passive but real lie.
Second: we had no way to statistically distinguish rules that fire too early but are eventually proved right from those that fire and the market ignores them. Both ended up expired. The stale status now lets us separate the two.
Third: a Kelly suggestion based on a probability frozen at creation mechanically led to recommending excessive allocations on dying signals. An implicit cap via freshness fixes that while staying consistent with the math.
What we observe in production
First auto-calibration run the day after deploy: the system observed that no rule yet had enough resolved signals to adjust its half-life beyond the default. As expected — it takes a few weeks of live track record before per-rule distributions become statistically meaningful.
But decay already applies to all pending signals, and freshness badges have started turning from green to yellow on signals past their half-life. The scoreboard is calmer — fewer misleading "75%", more honest "32% — half-dead".
The methodology stays public
The code is fully accessible in the signalFreshnessService.js service. The thresholds (70%/25% for stale, 30%/50% for half-life auto-calibration) are documented. The decay curve is exposed for every signal via the public API /api/signals/live, with fields current_confidence, freshness_pct, decay_factor and half_life_hours.
The goal: no decay should be invisible. If your favourite signal has lost 60% of its value in 4 days, you want to see it in the interface, not discover it at final resolution.
A radical-transparency promise we try to keep with each sprint. The half-life is its latest incarnation. The next will probably be the auto-deactivation of rules whose live accuracy has structurally collapsed — different topic, same spirit.