How Can You Measure Strategic Security Effectiveness?

Matilda Bailey is a seasoned cybersecurity strategist with over 25 years of experience in information security and networking technologies. Throughout her career, she has transitioned from the technical trenches of next-gen cellular solutions to the high-level boardroom discussions where risk management and financial efficiency collide. She is widely recognized for her ability to dismantle the “gut-feeling” approach to security, replacing it with rigorous, evidence-based frameworks that treat cyber defense as a disciplined business function. By bridging the gap between technical efficacy and economic reality, she helps organizations navigate the increasingly complex landscape of modern digital threats.

The following discussion explores the critical necessity of a data-driven approach to evaluating security controls. It covers the shift from simple compliance-based audits to a holistic evaluation model that prioritizes three specific dimensions: effectiveness, maturity, and economic efficiency. We delve into how organizations can identify underperforming legacy systems, the hidden costs associated with maintaining “Initial” level processes, and the strategic importance of calculating the amount of risk reduced per dollar spent. By moving away from “information-free” strategies, security leaders can gain the clarity needed to optimize their portfolios and address the opportunity costs of their investment choices.

Evaluating security controls often requires looking at effectiveness, maturity, and economic efficiency. How do you distinguish between a control that technically functions and one supported by a mature, resilient process, and what specific indicators help you determine if a control is truly optimized rather than just managed?

To understand the difference, I often think back to a ridiculous ad I once saw for a “zero questions asked” energy audit that could be done over the phone. It is objectively absurd to claim you can improve a system without measuring anything, yet many CISOs try to do exactly that with their security controls. A control that technically functions is merely “effective”—it does the job it was designed for, like a firewall blocking a specific port. However, a mature process is about reliability and predictability; it is the difference between a disorganized team reacting to alerts and a well-documented, standardized system that survives even if key personnel leave the company. To determine if a control is truly optimized, I look for a continuous improvement loop, which is the fifth and highest level of the Capability Maturity Model. In an optimized state, the control isn’t just “managed” according to requirements; it is being constantly refined through a data-driven feedback loop that proactively identifies and fixes performance gaps before they become liabilities.
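
To make that distinction concrete, here is a minimal Python sketch. The maturity levels follow the Capability Maturity Model she references, but the field names, the classification strings, and the simple decision logic are illustrative assumptions, not her framework.

```python
from dataclasses import dataclass
from enum import IntEnum


class MaturityLevel(IntEnum):
    """Capability Maturity Model levels."""
    INITIAL = 1                 # ad-hoc, reactive, person-dependent
    MANAGED = 2                 # planned against controlled requirements
    DEFINED = 3                 # documented and standardized
    QUANTITATIVELY_MANAGED = 4  # driven by hard metrics
    OPTIMIZING = 5              # continuous, data-driven improvement loop


@dataclass
class ControlAssessment:
    name: str
    is_effective: bool          # does it technically do its job?
    maturity: MaturityLevel
    has_feedback_loop: bool     # are gaps found and fixed proactively?


def classify(control: ControlAssessment) -> str:
    """Separate 'technically functions' from 'mature' from 'optimized'."""
    if not control.is_effective:
        return "ineffective: fails its basic design goal"
    if control.maturity < MaturityLevel.DEFINED:
        return "effective but fragile: undocumented, person-dependent process"
    if control.maturity == MaturityLevel.OPTIMIZING and control.has_feedback_loop:
        return "optimized: refined by a continuous improvement loop"
    return "managed: meets requirements but is not self-improving"


print(classify(ControlAssessment("firewall", True, MaturityLevel.DEFINED, False)))
```

The point of the two separate fields is that effectiveness and maturity are independent axes: a control can pass one test and fail the other.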

Effectiveness measurements often involve analyzing the ratio of false positives to true negatives or comparing remediated versus unremediated issues. How do you establish the correct scope for these metrics in a large environment, and what is your process for adjusting controls that fail to meet coverage expectations?

Establishing the correct scope requires moving beyond a simple “is it on?” binary and looking at the actual performance data within the context of the environment. In a large-scale enterprise, you have to measure how much of the environment the control actually covers—for instance, checking if your endpoint protection is active on 100% of your servers or just the 80% that are easiest to reach. I focus on specific metrics like the ratio of quarantined malware versus unquarantined threats to see where the “leakage” is occurring. When a control fails to meet coverage expectations, the process involves a rigorous realignment or rescoping; we don’t just accept the gap, we analyze why the control is failing to perform in those specific segments. This might mean identifying legacy systems that are incompatible or adjusting the control’s configuration to reduce the noise of false positives that distract the team from real remediated issues.
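
The two scope metrics she describes, coverage and leakage, reduce to simple ratios. The sketch below shows one way to compute them; the function names and all figures are hypothetical.

```python
def coverage_ratio(protected_assets: int, total_assets: int) -> float:
    """Fraction of the environment the control actually covers."""
    return protected_assets / total_assets


def leakage_ratio(quarantined: int, unquarantined: int) -> float:
    """Share of observed threats that slipped past the control."""
    total = quarantined + unquarantined
    return unquarantined / total if total else 0.0


# Hypothetical figures: endpoint protection active on 800 of 1,000 servers,
# with 950 malware samples quarantined and 50 that got through.
print(f"coverage: {coverage_ratio(800, 1_000):.0%}")  # 80%
print(f"leakage:  {leakage_ratio(950, 50):.1%}")      # 5.0%
```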

Process maturity ranges from ad-hoc, reactive responses to standardized, quantitatively managed systems. What are the operational risks of relying on a control with an “initial” maturity level, and what step-by-step improvements are necessary to move a process into a state of continuous, data-driven improvement?

Relying on a control at the “initial” maturity level is a dangerous gamble because the process is inherently unpredictable, ad-hoc, and reactive. The primary operational risk here is fragility; if the one person who knows how to run the system leaves the organization, the entire security outcome collapses because nothing was documented or standardized. To move out of this chaos, an organization must first transition to the “managed” level by planning the process with controlled requirements, and then move to “defined” by documenting everything to ensure consistency across the team. The real shift happens at the “quantitatively managed” stage, where we start using hard metrics to drive the process rather than just following a checklist. Finally, we reach the “optimizing” state by establishing that continuous improvement loop I mentioned earlier, where every piece of data collected is used to make the process more resilient and efficient.
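
That progression can be captured as a small lookup table, sketched below. The step descriptions paraphrase her answer; the helper function and its structure are my own illustrative assumptions.

```python
# The step-by-step path out of "initial" chaos, following the CMM levels.
NEXT_STEP = {
    "initial": "plan the process against controlled requirements -> managed",
    "managed": "document and standardize across the team -> defined",
    "defined": "drive the process with hard metrics, not checklists -> quantitatively managed",
    "quantitatively managed": "use every data point to refine the process -> optimizing",
    "optimizing": "sustain the continuous improvement loop",
}


def improvement_path(current: str) -> list[str]:
    """Return the ordered remaining steps from the current maturity level."""
    levels = list(NEXT_STEP)
    return [NEXT_STEP[level] for level in levels[levels.index(current):]]


for step in improvement_path("initial"):
    print(step)
```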

Total cost of ownership for security includes hard dollars, soft costs like personnel time, and infrastructure outlays like compute or storage. How do you aggregate these diverse expenses into a single unit cost, and how does this financial clarity change your approach to long-term risk management?

Aggregation starts with the budget—capturing the hard dollars spent on licenses and products for both year 1 and subsequent years—but that is only the tip of the iceberg. To get a true total cost of ownership (TCO), we must factor in the “soft” costs, such as the headcount required to support the tool and the actual staff time spent on its daily operations. We also have to account for the physical and virtual infrastructure, including the data center footprint, compute power, storage, and even the bandwidth used by the control. When you combine these into a single unit cost, it completely changes your long-term risk management approach because you can finally see which controls are black holes for resources. This financial clarity allows a CISO to defend their budget by demonstrating the “risk reduced per unit cost,” turning security from a vague insurance policy into a measurable business investment.
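
A minimal sketch of that aggregation, assuming three cost buckets (hard dollars, soft personnel costs, infrastructure) and a three-year planning horizon; every figure and field name is invented for illustration, not a costing standard.

```python
from dataclasses import dataclass


@dataclass
class ControlCosts:
    """Illustrative cost buckets for a single security control."""
    licenses_year1: float       # hard dollars: initial licenses and products
    licenses_recurring: float   # hard dollars: subsequent-year renewals
    headcount_fte: float        # FTEs required to support the tool
    fte_loaded_cost: float      # fully loaded annual cost per FTE
    infra_annual: float         # data center, compute, storage, bandwidth


def annual_tco(c: ControlCosts, years: int = 3) -> float:
    """Average annual total cost of ownership over the planning horizon."""
    hard = c.licenses_year1 + c.licenses_recurring * (years - 1)
    soft = c.headcount_fte * c.fte_loaded_cost * years
    infra = c.infra_annual * years
    return (hard + soft + infra) / years


def unit_cost(c: ControlCosts, units_covered: int) -> float:
    """Single unit cost: TCO spread over the assets the control protects."""
    return annual_tco(c) / units_covered


edr = ControlCosts(200_000, 150_000, 1.5, 140_000, 30_000)
print(f"${unit_cost(edr, 5_000):,.2f} per protected endpoint per year")
```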

Analyzing “risk reduced per unit cost” can reveal that legacy tools provide less value than modern alternatives like container scanning or cloud posture management. How do you objectively identify which controls to decommission, and how do you explain the concept of opportunity cost to stakeholders during budget discussions?

Objectively identifying controls for decommissioning requires a cold, hard look at whether the risk mitigation provided justifies the ongoing TCO. For example, a legacy control like a modem wardialer might have been vital 25 years ago, but in a modern ecosystem, its value is negligible while its maintenance costs remain. I explain this to stakeholders through the lens of “opportunity cost”—every dollar and every hour of staff time spent keeping that wardialer alive is a resource we cannot spend on container scanning, secrets management, or cloud security posture management. It is a probabilistic discipline, and while it is scary to remove a control that might catch an attack once a decade, the risk of not investing in modern defenses like large language model gateways is far greater. By showing stakeholders exactly what they are giving up to keep legacy tools on life support, the conversation shifts from “cutting security” to “upgrading defense.”
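
The ranking exercise behind that conversation might look like the sketch below, which scores controls by risk reduced per dollar of TCO. All numbers are hypothetical, and the break-even cutoff of 1.0 is an arbitrary assumption for illustration.

```python
def risk_reduced_per_dollar(annual_risk_reduced: float, annual_tco: float) -> float:
    """Dollars of expected annual loss avoided per dollar of TCO."""
    return annual_risk_reduced / annual_tco


# Hypothetical portfolio: (estimated annualized risk reduction, annual TCO).
portfolio = {
    "modem wardialer (legacy)": (5_000, 40_000),
    "container scanning": (400_000, 120_000),
    "cloud security posture mgmt": (600_000, 150_000),
    "secrets management": (350_000, 90_000),
}

ranked = sorted(
    ((name, risk_reduced_per_dollar(risk, cost))
     for name, (risk, cost) in portfolio.items()),
    key=lambda pair: pair[1],
)

for name, ratio in ranked:
    verdict = "decommission candidate" if ratio < 1.0 else "keep/expand"
    print(f"{name:30s} {ratio:5.2f} risk-$ per spend-$ -> {verdict}")
```

Sorting ascending puts the weakest performers first, which is exactly the list a stakeholder needs to see the opportunity cost of keeping them alive.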

What is your forecast for the future of security control evaluations?

My forecast is that we are entering an era where “information-free” security strategies will become a liability that boards of directors simply will not tolerate. As budgets tighten and the complexity of environments increases, the reliance on quantitative risk modeling will shift from a niche specialty to a core requirement for every security leader. I expect to see a much more aggressive decommissioning of legacy tools as organizations realize they can no longer afford the opportunity costs of inefficient controls. Ultimately, the future belongs to those who can articulate their security posture in terms of economic efficiency, proving that their investments are not just technically sound, but are providing the maximum possible risk reduction for every dollar spent. We are moving away from the era of “more is better” and toward an era of “measured is better,” where data, not gut feeling, is what counts.
