A few months ago I needed to set up anycast for a small service — nothing fancy, just two PoPs in Frankfurt and Amsterdam with the same /24 announced from both. The theory is simple: BGP selects the closest origin, traffic flows there. In practice, I spent three days debugging why traffic kept ignoring the Frankfurt node entirely.
The dampening problem
The Frankfurt upstream had route dampening enabled. Every time I bounced the session to test a config change, the route got a penalty. After the fourth or fifth bounce in an hour, the penalty exceeded the suppress threshold and the route was withdrawn for 45 minutes. Meanwhile Amsterdam was happily announcing the same prefix and swallowing all the traffic.
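The arithmetic behind this is worth seeing. Here's a minimal sketch of RFC 2439-style flap dampening; the figures (1000 penalty per flap, suppress at 2000, reuse at 750, 15-minute half-life) are common router defaults, not my upstream's actual settings:

```python
# Sketch of route-flap dampening arithmetic (RFC 2439 style).
# All thresholds below are common defaults, assumed for illustration.
PENALTY_PER_FLAP = 1000
SUPPRESS_LIMIT = 2000   # penalty above this: route is suppressed
REUSE_LIMIT = 750       # route is re-advertised once penalty decays below this
HALF_LIFE = 900         # seconds (15 minutes)

def penalty_after(flap_times, now):
    """Total penalty at `now`: each flap adds points that decay
    exponentially with the configured half-life."""
    return sum(PENALTY_PER_FLAP * 0.5 ** ((now - t) / HALF_LIFE) for t in flap_times)

flaps = [0, 600, 1200, 1800]  # four session bounces, ten minutes apart
print(penalty_after(flaps, 1800) >= SUPPRESS_LIMIT)  # True: route gets suppressed
print(penalty_after(flaps, 4500) < REUSE_LIMIT)      # True: reusable ~45 min later
```

With these defaults the fourth bounce pushes the decayed penalty past 2000, and it takes roughly 45 minutes of silence to drop back under the reuse limit, which matches what I saw.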
The fix was embarrassingly simple: coordinate maintenance windows and don't iterate configs under a live session. But I only realized dampening was the cause after staring at `show ip bgp {prefix}` output and noticing the "History" and "Dampened" flags I'd been skipping past.
AS path prepending gone wrong
I was using AS path prepending to balance load: Frankfurt got the bare announcement, Amsterdam got 3× prepend to make it less preferred for EU traffic. This works until you have a transit provider that strips prepends above a certain depth. One of my upstreams had exactly this policy, undocumented of course. Result: Amsterdam looked equally preferred to Frankfurt for about 40% of inbound paths.
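A toy model makes the failure mode obvious. This sketch assumes a transit policy that collapses consecutive repeats of the same AS down to one occurrence; the ASNs are placeholders, and real stripping policies vary:

```python
# Toy model of prepend stripping. AS 64500 is a hypothetical origin;
# 64496/64497 are hypothetical upstreams. The "collapse repeats beyond
# max_repeat" policy is an assumption for illustration.
def strip_prepends(as_path, max_repeat=1):
    """Collapse consecutive repeats of the same AS beyond max_repeat."""
    out = []
    for asn in as_path:
        if len(out) >= max_repeat and all(a == asn for a in out[-max_repeat:]):
            continue
        out.append(asn)
    return out

frankfurt = [64496, 64500]                        # bare announcement
amsterdam = [64497, 64500, 64500, 64500, 64500]   # origin plus 3x prepend

# After stripping, both paths are the same length, so AS-path length no
# longer breaks the tie and later tie-breakers decide instead.
print(len(strip_prepends(amsterdam)) == len(frankfurt))  # True
```

Once path length ties, selection falls through to attributes I wasn't controlling (origin code, MED, router ID), which is why the split landed near 40/60 rather than where I'd aimed it.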
The lesson: verify your transit provider's route manipulation policies with a looking glass, not assumptions. DE-CIX LG saved me here — I could see exactly how my announcement looked after each hop.
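The check itself is trivial once you have the paths in front of you. A sketch of the kind of verification I ended up doing by hand, with invented AS paths standing in for what a looking glass would show from different vantage points:

```python
# Given AS paths copied out of a looking glass, check whether the
# prepends survived. The vantage names and paths here are invented;
# 64500 is a hypothetical origin ASN.
def prepend_depth(as_path, origin):
    """Count trailing occurrences of `origin` in the path."""
    depth = 0
    for asn in reversed(as_path):
        if asn != origin:
            break
        depth += 1
    return depth

paths_seen = {
    "vantage-a": [3320, 64497, 64500, 64500, 64500, 64500],  # prepends intact
    "vantage-b": [1299, 64497, 64500],                       # prepends stripped
}
for name, path in paths_seen.items():
    print(name, "prepend depth:", prepend_depth(path, 64500))
```

If the depth isn't uniform across vantage points, someone in the middle is rewriting your announcement.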
What actually worked
In the end I combined three approaches:
- Local preference adjustments at each PoP to prefer local exits
- MED values to signal preference to multi-homed upstreams
- Monitoring with `bgpq4` to alert on unexpected prefix state
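For the first two points, the policy is a few lines of router config. A sketch in BIRD 2 syntax, with placeholder ASNs, prefix, and neighbor address; my routers may differ and the numbers are illustrative, not tuned values:

```
# Hedged BIRD 2 sketch: prefer routes learned at the local PoP and
# signal preference outbound via MED. All identifiers are placeholders.
filter export_to_upstream {
    if net = 203.0.113.0/24 then {
        bgp_med = 100;   # lower MED at the PoP you want preferred
        accept;
    }
    reject;
}

protocol bgp upstream1 {
    local as 64500;
    neighbor 192.0.2.1 as 64496;
    ipv4 {
        import filter { bgp_local_pref = 200; accept; };  # prefer local exit
        export filter export_to_upstream;
    };
}
```

Local preference only shapes your own outbound decisions; MED only influences an upstream that hears the prefix from you at multiple points. That's why I needed both rather than either alone.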
Anycast is powerful but the failure modes are subtle. Most of the pain I hit was in the gap between "the route is announced" and "traffic actually goes where I expect."