Infinite loops between servers can lead to performance degradation or network overload. We describe two cross-service UDP loop incidents affecting Google's QUIC implementation several years ago, and share some general conclusions about how to avoid them.
We typically fight our own fires, but if one of us sees something interesting or new, we often ask others (after the fire is out) whether anyone else saw a similar attack, which could be a new botnet, a new attack method, or whatever. In this case we realized we were all looking at the same thing (which could have a huge impact on smaller targets), so we collaborated on understanding the problem and coordinated the security response with all the webserver vendors.
It's mostly not infected computers, but rather poorly configured proxies that are open for anyone to bounce malicious traffic through. Convincing everyone to clean up their open proxies is a long-term, hard problem. But I plan to tackle it soon....
Get a few companies to agree that open proxies are a scourge that needs to be stopped. They each apply some action to open proxies (user-facing messaging, loss of functionality, captcha, or complete block), and the users of those proxies will get the problem fixed.
The hard part (and it truly is hard!) is convincing a few companies to do this. It risks user complaints in the short term, to solve a problem that may not be very acute for the largest companies (who can simply absorb these attacks).
How about downgrading all connections from said proxies to HTTP/1.1? This could be done in coordination, but it ought not to be too hard to embed such ‘graylisting’ functionality in a webserver.
(No I don’t expect any response but I am just leaving this thought for those who stumble on this thread in the future).
Effective or efficient?
It would seem rather inefficient to spend time researching all the possible ways to gain root on some number of servers, finding an exploit, crafting a plan to execute it, keeping your prints clean, etc.
We don't need to share a block-list, but yes, blocking all traffic from open proxies (which nearly all the large attacks of the 2020s have used) is definitely part of the long-term plan. Any legitimate users of those proxies will experience some short-term pain, but they'll patch and life will go on.
The attacks on VoIP vendors mostly used UDP amplification, which relies on having a server that can fake its source IP thanks to an incompetent (or complicit!) network provider, whereas this is a botnet (and one that is only about a week old).
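For anyone unfamiliar: UDP amplification works because a reflector answers a small request whose source IP was spoofed to be the victim's, sending a much larger response at the victim. A back-of-the-envelope sketch (all numbers here are illustrative, not measurements from any real attack):

```python
# Illustrative amplification arithmetic; the byte counts are rough
# stand-ins, not measurements of any particular protocol.
request_bytes = 60        # small query with a spoofed source IP (the victim's)
response_bytes = 3000     # large reply the reflector sends to the victim
amplification = response_bytes / request_bytes

attacker_uplink_bps = 1_000_000_000          # 1 Gbps of spoofed queries
victim_traffic_bps = attacker_uplink_bps * amplification
print(f"amplification factor: {amplification:.0f}x")
print(f"traffic at victim: {victim_traffic_bps / 1e9:.0f} Gbps")
```

The point being: the attacker's own bandwidth is multiplied by the amplification factor, which is why spoofing-capable networks matter so much.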
The trend-lines were really just to guide the eye (the text gives this context), but if you really must know: the R^2 for all three fits was above 0.9, suggesting the exponential growth model is reasonable.
As you say, the pps and bps don't show much curvature, so a linear fit could indeed work for them over the displayed portion of the graph. But it's nonsensical when you look further back in time... predicting negative attack volumes prior to mid-2011. ;)
> the R^2 for all three fits was above 0.9, suggesting the exponential growth model is reasonable
What is the R^2 for other fits? Say linear or quadratic (the most obvious alternative choices)?
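For what it's worth, here is roughly how such a comparison could be run. The data below is purely synthetic (a stand-in for yearly peak attack volumes, not the article's real numbers), and numpy is assumed:

```python
import numpy as np

# Synthetic stand-in for yearly peak attack sizes (purely illustrative):
# roughly exponential growth with some multiplicative noise.
years = np.arange(2013, 2023, dtype=float)
volumes = 10.0 * 1.5 ** (years - 2013) * (1 + 0.1 * np.sin(years))

def r_squared(y, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

t = years - years[0]  # shift time origin to keep polyfit well-conditioned

# Linear and quadratic least-squares fits directly on the volumes.
lin = np.polyval(np.polyfit(t, volumes, 1), t)
quad = np.polyval(np.polyfit(t, volumes, 2), t)

# Exponential fit via linear regression on log(volume).
b, log_a = np.polyfit(t, np.log(volumes), 1)
expo = np.exp(log_a) * np.exp(b * t)

for name, pred in [("linear", lin), ("quadratic", quad), ("exponential", expo)]:
    print(f"{name:12s} R^2 = {r_squared(volumes, pred):.3f}")
```

On data that really is growing exponentially, all three fits can post an R² above 0.9, which is part of why R² alone doesn't settle the model choice.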
> it's nonsensical when you look further back in time
Of course, because the factors that are at work in producing the data are not constant in time. So there is no reason to expect a single curve fit with a single set of parameters to be applicable for all times.
The criticism that it would go negative in the past is meaningless. Of course it’s unphysical, but that doesn’t mean a linear model isn’t appropriate for today’s data.
You would just choose a point in time in the past where you believe the model becomes inapplicable. Maybe it was a different linear model back then, or maybe mostly constant; who cares? We're modeling today's range of data, which might be well explained by a linear model. It doesn't really matter what 2011's data was doing unless we separately believe the same growth law had to apply across all the years, and I see no reason why it should.
> The trend-lines were really just to guide the eye
That's the problem, no? You can draw a line/curve over any data and convince a lot of people that it's relevant, but that's more cognitive bias than anything.
Another alternative model besides linear would be quadratic (or another x^(1+epsilon) power law where epsilon is small). This would avoid the problem of negative predictions and would likely fit the data better than an exponential.
I think the question is really about the growth in the number of connected, compromised devices. Growth curves are often sigmoid-shaped, meaning exponential until they're not. The exponential is often great for modeling growth trends up until the plateau, but it's hard to know when the corner will turn.
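That "exponential until they're not" shape is exactly the logistic curve. A quick sketch (every parameter here is invented for illustration, not fitted to anything):

```python
import math

def logistic(t, capacity=1_000_000, n0=1_000, rate=0.5):
    """Logistic growth: exponential at first, flattening near `capacity`.
    All parameters are illustrative, not fitted to real data."""
    return capacity / (1 + (capacity / n0 - 1) * math.exp(-rate * t))

def exponential(t, n0=1_000, rate=0.5):
    return n0 * math.exp(rate * t)

# Early on the two curves are nearly indistinguishable...
early = logistic(4) / exponential(4)
# ...but the logistic saturates while the exponential runs away.
late = logistic(40) / exponential(40)
print(f"ratio at t=4:  {early:.3f}")
print(f"ratio at t=40: {late:.3e}")
```

Which is the trouble in practice: data from the early regime fits both models equally well, so the fit alone can't tell you where the corner is.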
Exponentials are also well motivated by differential equations... (say, if you're modeling growth of IoT devices based on word of mouth). Polynomials with degree 1+epsilon, less so.
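The word-of-mouth intuition is the ODE dN/dt = kN, whose solution is N0·e^(kt). A tiny forward-Euler check (constants invented for illustration):

```python
import math

# Word-of-mouth model: each device "recruits" new ones at a rate
# proportional to the installed base, dN/dt = k * N.
# k and n0 are arbitrary illustrative constants.
k, n0 = 0.3, 100.0

# Forward-Euler integration with a small step over t in [0, 10]...
n, dt = n0, 1e-4
for _ in range(int(10 / dt)):
    n += k * n * dt

# ...lands close to the closed-form solution n0 * exp(k * t).
closed_form = n0 * math.exp(k * 10)
print(f"Euler: {n:.1f}, closed form: {closed_form:.1f}")
```

That's the sense in which the exponential is "motivated": it falls out of a plausible mechanism, whereas a degree-1+epsilon power law is just a shape.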