I feel like you are understanding the title the way I first understood it, as meaning something like "the external-facing portions of infrastructure". However, reading the article, it seems clear that he's referring to the edges of a distribution curve (i.e. infrequent events that impact experience nonetheless).
From the article: "It’s tempting to focus on the peak of the curve. That’s where most of the results are. But the edges are where the action is. Events out on the tails may happen less frequently, but they still happen. In digital systems, where billions of events take place in a matter of seconds, one-in-a-million occurrences happen all the time. And they have an outsize impact on user experience."
EDIT: IMO, the title is still a little annoying in this respect. I think everyone would agree that if a request to your site fails 5% of the time, that is unacceptable, even though it "usually works." Using the distribution curve simply to make the point that usage spikes cause backed-up queues, which hurt performance, isn't especially helpful as far as I can tell; it seems done largely in service of the title. In my mind while reading this, I'm thinking, "Okay, cool, but how does the fact that this interesting issue exists at the edge of the curve help me identify it?" Answer: it doesn't. If you see errors occurring, you will investigate them once they are noticed. Being at the edge of the curve may mean it takes longer to notice, but what kind of alerting system are you using that discriminates against rare issues in favor of common ones?
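To make the article's "one-in-a-million occurrences happen all the time" point concrete, here's a minimal sketch of the arithmetic. The fan-out scenario (each user request touching many backend calls) is my own illustrative assumption, not something the article specifies:

```python
# Hedged sketch: why rare tail events still matter at scale.
# Assumes a hypothetical service where each user request fans out to
# n independent backend calls, each with probability p of hitting a
# rare slow/error path.

def chance_of_hitting_tail(p: float, n: int) -> float:
    """Probability that at least one of n independent calls hits the tail event."""
    return 1 - (1 - p) ** n

p = 1e-6  # a "one-in-a-million" event

# Per-request odds when a request touches 100 backends: roughly 1 in 10,000.
print(chance_of_hitting_tail(p, 100))  # ~1e-4

# Expected number of tail events across a billion total events: about 1,000.
print(1_000_000_000 * p)
```

The point is just that rarity per event and rarity per user experience are very different things once volume and fan-out enter the picture.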
Discussing queues, over provisioning, back pressure, etc. are all super interesting and helpful.
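For what back pressure looks like in its simplest form, here's a minimal sketch, not taken from the article; the class and method names are my own invention. The idea is just that a bounded queue rejects work when full rather than letting a backlog grow without limit:

```python
# Hedged sketch of back pressure with a bounded queue.
# All names here are illustrative, not from the article.
from collections import deque

class BoundedQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = deque()

    def offer(self, item) -> bool:
        """Accept the item, or signal the producer to back off."""
        if len(self.items) >= self.capacity:
            # Back pressure: the caller must retry later, shed load,
            # or slow down, instead of the queue silently growing.
            return False
        self.items.append(item)
        return True

q = BoundedQueue(capacity=2)
results = [q.offer(i) for i in range(4)]  # a burst larger than capacity
# First two accepted, the rest rejected: [True, True, False, False]
```

An unbounded queue hides the spike until latency has already blown up; the bounded one surfaces it immediately at the producer, which is the whole trade-off being discussed.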
This concept evokes many familiar memories of reading other people's code, which is always extremely hard. I feel multiple ways about this concept. On the one hand, I believe programs can be sectionally reduced to inputs, outputs, and a sequence of states in between, and a programmer can understand those things well enough to extend an existing program competently in many cases. It must be true, because it does happen.
On the other hand, while a programmer can learn from source and documentation the wheres, whens, and whats of the program, there is always the remaining question of "Why?", which is central to this discussion. Here, I think good high-level examples of usage tend to do a good job of covering the inputs and outputs. But with regard to all of the intermediary states of the program... there is too much detail there to really document it. Those details emerge through evolution more than design. Code is added, then replaced or omitted entirely. Things are designed which work, but are then restructured for performance, organization, or to eliminate repetition. In these cases, there is information that is manifested in the absence of code: the second rendering of the code better captures its function, but obscures its evolutionary history.
This game programmer (Sebastian Lague, who is excellent, by the way) walks through the development of procedural terrain generation in Unity. What's fascinating to me is the way he does it: he does a really effective job of "theory building". Things are implemented; results are observed; some code that was only ever present to build up to an illustration is deleted altogether, no longer necessary at the next stage of evolution.
This is the way programmers work. Information is lost. If you weren't there to experience it at inception, only a great imagination and testing can replace it, at which point you may find yourself actually rewriting the code, using the existing code as a reference.