Those are fair points. Let me take them in turn:
"Opting for the "quiet path" by not escalating errors can lead to issues going unnoticed, only to be discovered later when the impact has grown significantly."
Hang on - you can't have it both ways! Either the system is not escalating the errors, in which case they're "going unnoticed", or else it is escalating them until their "impact has grown significantly". What I'm advocating here is that we self-stabilise an erroring (or an impacted) system - we diminish the impact of the error conditions, and thus stabilise the system - and we also report the fact that the error (and its remediation) has occurred.
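As a minimal sketch of that "stabilise and report" idea (all names here are illustrative, not from any particular system): the caller gets a safe, usable value, and the fact that an error occurred and was remediated is still reported.

```python
import logging

log = logging.getLogger("stability")

def stable_lookup(store: dict, key: str, fallback: str) -> str:
    """Return store[key], falling back safely if the key is missing."""
    try:
        return store[key]
    except KeyError:
        # Self-stabilise: diminish the impact by serving a fallback...
        log.error("key %r missing; served fallback %r", key, fallback)
        # ...and the error plus its remediation has been reported above.
        return fallback
```

The point is that the two halves always travel together: the fallback without the report would be the "quiet path" the objection fears; the report without the fallback would be the error-amplifying status quo.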
Logically, I don't see a rational alternative to this. Most of our systems do amplify errors, causing internal crashes, database corruption, and technical messages shown to users who can't interpret or act on them at all (unless they're hackers, of course, who can use everything). That's why software has a reputation for being brittle and unreliable. But I don't see that status quo as being remotely defensible (other than "we don't know any better" - which we do).
"In my experience, simply logging an error can be insufficient, as logs are often saturated with numerous valid errors ..."
My experience too, sadly. But my conclusion is not that logging errors is insufficient; it's that logging non-errors is what saturates the logs (or at least, that not having decent log triggers is the real failing). It seems to me that if you don't have useful and usable health monitoring installed and staffed, then your system has already malfunctioned - you just have no way of knowing. I mean, if something goes wrong, you have only three options: tell your systems people, tell your user, or tell no-one. I honestly can't see how you could ever conscientiously argue for either of the last two.
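One way to sketch a "decent log trigger" (again, the names are invented for illustration): every fault is routed to an audience who can act on it, and "tell no-one" simply isn't a reachable outcome.

```python
from enum import Enum

class Audience(Enum):
    SYSTEMS_TEAM = "systems"  # alert the people who can fix it
    USER = "user"             # a plain-language message, not a stack trace

def route_fault(severity: str) -> Audience:
    """Route a fault to whoever can act on it; silence is not an option."""
    # Internal faults and tripped safeties go to the tech team;
    # only expected, user-actionable conditions go to the user.
    if severity in ("error", "critical"):
        return Audience.SYSTEMS_TEAM
    return Audience.USER
```

A filter like this is what separates a log you can monitor from a log that's just noise: the non-errors never reach the channel the systems people are watching.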
So, the "quiet path" is not pretending all is well, or faking something wrong-but-plausible. It's specifically about stabilising a stressed system before the stress turns into a malfunction or a catastrophic failure. It's about having failsafes you can rely on, protecting your system (as a whole), your user, and your business. But it's not quiet: if the safeties have kicked in (other than as part of the system's normal metabolism), you need to make a really big noise about it to the people who can do something about it - that's the tech team (or, in extremis, legal!).
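That "big noise when the safeties trip abnormally" can be sketched too (a hypothetical shape, assuming a `normal_trip_budget` that represents the system's normal metabolism and an `alert` callable that reaches the tech team):

```python
class Failsafe:
    """A failsafe that is anything but quiet once it trips abnormally."""

    def __init__(self, normal_trip_budget: int, alert):
        self.trips = 0
        self.budget = normal_trip_budget  # trips considered normal metabolism
        self.alert = alert                # callable that pages the tech team

    def engage(self, reason: str) -> None:
        # Engaging the safety stabilises the system either way...
        self.trips += 1
        # ...but beyond the normal budget, make a really big noise.
        if self.trips > self.budget:
            self.alert(f"failsafe tripped {self.trips}x: {reason}")
```

Within budget, the safety does its job silently; past it, every further engagement pages someone who can actually intervene.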