The symptom: a server handling 200K concurrent connections starts dropping packets intermittently. No obvious errors, no CPU spike, just missing ACKs and climbing retransmit counts. dmesg | grep conntrack reveals the problem immediately: "nf_conntrack: table full, dropping packet".
What conntrack does
Netfilter's connection tracking table maps every active TCP/UDP flow to a state entry. When the table fills, new connections are silently discarded: the kernel drops the packet rather than sending a RST, so clients see timeouts instead of refusals. The default nf_conntrack_max is commonly 65536, which is fine for a desktop but catastrophic for a proxy.
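Beyond dmesg, conntrack-tools' conntrack -S prints per-CPU counters, and a non-zero drop or insert_failed column is the same signal. A small sketch that totals those two counters across CPUs; the field layout below matches typical conntrack -S output, but exact fields vary by kernel:

```shell
#!/bin/sh
# Sum the drop and insert_failed counters across all CPUs from
# `conntrack -S`-style output. Reads stdin, so it works on either
# live output or a captured sample.
sum_conntrack_drops() {
  awk '{
    for (i = 1; i <= NF; i++) {
      split($i, kv, "=")
      if (kv[1] == "drop" || kv[1] == "insert_failed")
        total += kv[2]
    }
  }
  END { print total + 0 }'
}

# Typical usage on a live box (assumes conntrack-tools is installed):
#   conntrack -S | sum_conntrack_drops
```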
Sizing the table
A rule of thumb: set nf_conntrack_max to at least 2× your expected peak concurrent connection count. Each entry consumes roughly 320 bytes (the exact struct size varies by kernel version and build), so 500K entries costs about 160 MB of kernel memory, and the 1M-entry setting below about 335 MB. That's reasonable on any modern server.
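The arithmetic is worth a one-liner when sizing for a different peak; the 320 bytes per entry here is the same approximation as above, not an exact figure:

```shell
#!/bin/sh
# Approximate kernel memory (in MB) for a given conntrack table size,
# assuming ~320 bytes per entry. Uses awk so it works in plain POSIX sh.
conntrack_mem_mb() {
  awk -v entries="$1" 'BEGIN { printf "%.0f\n", entries * 320 / 1000000 }'
}

conntrack_mem_mb 500000     # ~160
conntrack_mem_mb 1048576    # ~336
```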
net.netfilter.nf_conntrack_max = 1048576
net.netfilter.nf_conntrack_tcp_timeout_established = 300
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 15
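One caveat when raising nf_conntrack_max: the hash bucket count does not scale with it, and average lookup chain length is roughly max divided by nf_conntrack_buckets. A sketch of how the pieces fit together; the file paths and the one-bucket-per-four-entries ratio are conventional choices, not requirements:

```shell
# Persist the three sysctls above in a drop-in file
# (the filename is a conventional choice):
#   /etc/sysctl.d/99-conntrack.conf
# then load them:
#   sysctl --system

# The hash table size is a module parameter, not a sysctl.
# A common ratio is one bucket per four entries; to change it
# at runtime (as root):
#   echo 262144 > /sys/module/nf_conntrack/parameters/hashsize
# and to persist it across reboots, in /etc/modprobe.d/conntrack.conf:
#   options nf_conntrack hashsize=262144
```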
The timeout values matter as much as the table size. The default tcp_timeout_established is 432000 seconds, five days, meaning a single idle connection holds its entry for days. Reducing it to 5 minutes is safe provided your services send keepalives more often than that; an idle connection whose entry expires will have its next packet treated as untracked and, under typical firewall rules, dropped.
Monitoring
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
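For alerting, the two counters reduce to one utilization percentage. A sketch of a check that exits non-zero over a threshold, suitable for a cron job or health check; the argument handling and the 80% default are assumptions for testability, not from any standard tool:

```shell
#!/bin/sh
# Print conntrack table utilization and fail (exit 1) when it exceeds
# a threshold (default 80%). Count and max can be passed as arguments
# for testing; otherwise they are read from /proc.
conntrack_check() {
  count=${1:-$(cat /proc/sys/net/netfilter/nf_conntrack_count)}
  max=${2:-$(cat /proc/sys/net/netfilter/nf_conntrack_max)}
  threshold=${3:-80}
  pct=$((count * 100 / max))
  echo "conntrack: ${count}/${max} (${pct}%)"
  [ "$pct" -lt "$threshold" ]
}

# Example: 900K of 1M entries is over the 80% threshold:
#   conntrack_check 900000 1048576 || echo ALERT
```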
Alert when count exceeds 80% of max. I've seen servers hit 100% during a traffic spike and spend 20 minutes dropping connections before anyone noticed.