“Good Enough” Security: When Acceptable Risk Becomes a Dangerous Assumption
In the high-pressure world of continuous delivery and fast-paced product cycles, security decisions are increasingly framed by speed and pragmatism. The concept of “good enough” security—where controls are deemed sufficient rather than robust—has become not only accepted but institutionalised. Yet beneath this surface lies a growing danger: that acceptable risk, when poorly managed or normalised, evolves into systemic exposure.
Many organisations pride themselves on mature DevSecOps practices, integrated tooling, and streamlined vulnerability management. But even in these environments, critical issues often slip through the cracks—not because they’re invisible, but because they’ve been acknowledged and tolerated. Accepted risk becomes the unspoken compromise, quietly traded for delivery speed, budgetary constraints, or cross-team ambiguity. And as these deferrals accumulate, so does organisational fragility.
The Illusion of Informed Risk
The process of accepting risk is not inherently flawed. Mature security programmes often maintain formal risk acceptance workflows. They require documentation, sign-off, and regular reviews. On paper, this introduces traceability and accountability. But in practice, the quality of this governance is rarely consistent.
One multinational logistics firm experienced the fallout of this inconsistency. A deprecated authentication protocol—marked as a known risk—had been repeatedly reviewed and deferred due to system dependencies and resourcing gaps. The assumption was that it would be replaced in “a future release.” Over a year later, it remained active, unpatched, and vulnerable. When attackers exploited it to gain access to a privileged console, the incident was not the result of ignorance. It was the product of inertia. Everyone had known. No one had acted.
Security leaders often argue that all risk is relative and that trade-offs are necessary. This is true. But trade-offs must be time-bound, owned, and visible. In too many cases, risk acceptance becomes a passive default rather than a deliberate act. Exceptions remain open-ended. Reviews are skipped. Teams rotate, and institutional memory fades. The longer the deferral, the more it becomes part of the norm—no longer questioned, no longer challenged.
Drift by Design
In DevSecOps cultures, this drift is particularly insidious. Decentralised architectures and autonomous teams can lead to a diffusion of responsibility. Each squad optimises for its sprint goals, confident that risk has been “handled” upstream or downstream. Over time, these micro-decisions accumulate into macro-failures. What begins as a short-term exception hardens into technical debt—and, eventually, into attack surface.
This drift is rarely malicious. It’s a side effect of velocity. Teams push to meet business goals. Security exceptions are granted to avoid blocking releases. The intention is always to revisit the issue “later.” But later rarely comes unless someone is explicitly tasked to bring it forward. In such conditions, risk decisions lose urgency, ownership blurs, and governance becomes reactive rather than preventative.
Security Theatre vs. Security Reality
Executive teams often operate with the impression that their organisation is secure because metrics are trending in the right direction: vulnerability scans completed, penetration tests passed, compliance audits cleared. But these indicators are often a form of security theatre—measuring what’s easy rather than what’s meaningful.
The more revealing metrics ask different questions: how many risks have been accepted without an expiry date? How many have unclear ownership? How often are exceptions reviewed and challenged? Security maturity is not measured by the number of tickets resolved but by the ability to prevent exceptions from becoming the default operating mode.
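To make those questions measurable, the minimal sketch below shows one way such counts could be pulled from an exported risk register. It assumes a hypothetical JSON export with "owner", "expiry", and "last_reviewed" fields, and a 90-day review window; the field names, file name, and threshold are illustrative assumptions, not a standard format.

```python
# risk_metrics.py
# Illustrative only: field names ("owner", "expiry", "last_reviewed") and the
# 90-day review window are assumptions, not a standard register schema.
import json
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)  # assumed cadence for "recently reviewed"

def exception_metrics(exceptions, today=None):
    """Count accepted-risk exceptions with no expiry, no owner, or a stale review."""
    today = today or date.today()
    return {
        "total": len(exceptions),
        "no_expiry": sum(1 for e in exceptions if not e.get("expiry")),
        "no_owner": sum(1 for e in exceptions if not e.get("owner")),
        "stale_review": sum(
            1 for e in exceptions
            if not e.get("last_reviewed")
            or date.fromisoformat(e["last_reviewed"]) < today - REVIEW_WINDOW
        ),
    }

if __name__ == "__main__":
    # Assumes a JSON export of the register: a list of exception objects.
    with open("risk_register.json", encoding="utf-8") as f:
        for name, value in exception_metrics(json.load(f)).items():
            print(f"{name}: {value}")
```

The trend in those counts over successive quarters says more about governance health than the volume of tickets closed.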
Acceptable risk becomes dangerous when it replaces decision-making with assumption. When leadership signs off on risk registers without interrogation. When exceptions become routine. When the systems designed to protect the business instead create blind spots where issues fester—unnoticed until it’s too late.
The Operational Cost of Normalised Exceptions
There’s also a business cost. Every time an organisation defers addressing a known issue, it incurs hidden interest. That interest compounds over time. When the breach occurs—as it did for the logistics firm—the fallout extends beyond technical remediation. Reputational damage, customer trust, compliance penalties, and opportunity loss all accrue. And perhaps most damaging of all: the realisation that it was avoidable.
These events often trigger soul-searching. Boards ask why a known vulnerability wasn’t addressed. Regulators question why documentation didn’t translate into action. Security leaders are left explaining governance models that looked good on paper but failed in execution.
Rethinking Risk Governance
The solution is not to eliminate risk acceptance—nor is it to enforce rigid, centralised control. Instead, organisations must rethink how they govern and operationalise security risk in a distributed, high-speed environment. This means integrating risk management into the day-to-day rhythms of engineering, not just quarterly reviews.
It also means enforcing structural guardrails. Every exception must have:
- A defined expiry date
- A named accountable owner
- A business justification linked to strategic priorities
- A recurring review mechanism tied to pipeline checkpoints
Tools can support this—but process discipline must come first. Risk registers should not be hidden in spreadsheets; they must live in the same systems where work is tracked. And exception reviews should be part of routine team rituals—not rarefied committee meetings divorced from reality.
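One way to tie those guardrails to pipeline checkpoints rather than spreadsheets is a small gate that runs on every build and fails when an exception record is missing an owner, a justification, or an expiry date, or has passed its expiry. The sketch below assumes a hypothetical risk_exceptions.json file kept alongside the code; the file name and required fields are assumptions for illustration, not a prescribed schema.

```python
# check_exceptions.py
# A sketch of a pipeline gate over accepted-risk exceptions; the file name and
# required fields are illustrative assumptions, not a prescribed schema.
import json
import sys
from datetime import date

REQUIRED_FIELDS = ("owner", "justification", "expiry")

def violations(exceptions, today=None):
    """Yield a message for each exception missing a guardrail or past its expiry."""
    today = today or date.today()
    for exc in exceptions:
        exc_id = exc.get("id", "<unknown>")
        for field in REQUIRED_FIELDS:
            if not exc.get(field):
                yield f"{exc_id}: missing {field}"
        expiry = exc.get("expiry")
        if expiry and date.fromisoformat(expiry) < today:
            yield f"{exc_id}: expired on {expiry}"

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "risk_exceptions.json"
    with open(path, encoding="utf-8") as f:
        problems = list(violations(json.load(f)))
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)  # non-zero exit blocks the pipeline stage
```

Run from CI as "python check_exceptions.py risk_exceptions.json"; a non-zero exit code turns an expired or ownerless exception into a visible, blocking event rather than a quiet line in a register.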
Culture Change: Risk Is a First-Class Citizen
Beyond governance, this is ultimately a cultural challenge. Security must be reframed from a gatekeeper function to a risk management partner. Engineers should be rewarded not just for delivering features, but for reducing long-tail exposure. Product leaders should view deferred risks as technical liabilities that constrain innovation.
In mature organisations, risk is visible at every level. Developers know the trade-offs. Product managers know the exposure. Executives understand the implications. This visibility drives better decisions—and fosters a shared commitment to improvement.
The Role of the C-Suite: From Passive Oversight to Active Accountability
Executive teams must lead from the front. Security is not just a technical issue—it’s a strategic one. When risks are accepted at the technical level but ignored in the boardroom, a dangerous gap emerges. The C-suite must ask the hard questions:
- Are our accepted risks aligned to business priorities?
- Are we tracking deferrals—and do they have deadlines?
- What is our threshold for cumulative risk?
- Do we know where our biggest unknowns are?
This level of engagement signals that risk isn’t tolerated—it’s managed. It shows that security is not a compliance burden, but a business enabler. And it ensures that the organisation doesn’t mistake silence for safety.
Final Thought
Security isn’t what you’ve documented—it’s what you’ve done. Risk acceptance, when treated as a living, governed, accountable practice, can support innovation and velocity. But when used as a shield for indecision or delay, it becomes a dangerous assumption. In today’s threat landscape, that assumption can—and will—be tested. The question is whether your organisation will pass the test, or become the case study others learn from.