AI Powered SOCs - Noise, Metrics, Trust
- Stefan Dumitrascu
- May 19
- 2 min read

AI has been on everyone's lips in the last few years, and RSAC this month showed the "AI hype" increasing further. Whether you are a full believer or a major skeptic, it's definitely here to stay. The real use cases vary wildly: from lowering the barrier of entry for attackers, to having the same effect on the defensive side by empowering L1 analysts to do more, to helping you build more efficient SOAR workflows. Sometimes it feels like we are bombarded with potential.
Noise, Metrics & Trust
When it comes to noise, we are talking about both marketing claims and alert fatigue. New tools keep appearing that promise to empower your SOC team across various workflows. How do you know which can actually benefit your business? You need to identify the key metrics to track within your business, set appropriate priorities for them, and then make the right decisions when investing in new solutions.
No one can tell you definitively which metrics matter most, yet every vendor claims theirs are the most crucial.
In our evaluation we've combined our own research with public information on SOC workflows and priorities to establish key metrics. The goal is to showcase the improvements these tools bring. Forrester's approach is to categorize metrics as strategic, operational and tactical. While classic metrics such as detection accuracy and false positive rate remain important in such complex deployments, businesses need to be able to establish what matters most to them.
Our evaluation gives vendors the opportunity to showcase which metrics they can affect and what they can improve. To that end we track 9 metrics across those categories by splitting our evaluation into different operations, each with its own goal:
- Detection accuracy & False Positive rate
- Dwell Time/Mean Time to Detect
- Alert Efficiency
- Time to Contain & Remediate
- Time to Initial Understanding
- Time to Advise & Implement
To trust these metrics, they need to be looked at in the context of each other and never on their own. This gives customers a good picture of how the tested solution can benefit their own needs.
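To make a couple of these metrics concrete, here is a minimal sketch of how Mean Time to Detect and false positive rate might be computed from alert records. The `Alert` data model, field names, and sample values are purely illustrative assumptions, not part of our evaluation's actual data pipeline.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical alert record: all fields are illustrative.
@dataclass
class Alert:
    compromised_at: datetime  # when the intrusion actually began
    detected_at: datetime     # when the SOC raised an alert
    true_positive: bool       # did the alert reflect real malicious activity?

alerts = [
    Alert(datetime(2025, 5, 1, 9, 0),  datetime(2025, 5, 1, 9, 45),  True),
    Alert(datetime(2025, 5, 2, 14, 0), datetime(2025, 5, 2, 16, 30), True),
    Alert(datetime(2025, 5, 3, 8, 0),  datetime(2025, 5, 3, 8, 20),  False),
]

# Mean Time to Detect: average gap (in minutes) between compromise
# and detection, computed over true positives only.
true_positives = [a for a in alerts if a.true_positive]
mttd = mean(
    (a.detected_at - a.compromised_at).total_seconds() / 60
    for a in true_positives
)

# False positive rate: share of all alerts that were not real incidents.
fp_rate = sum(not a.true_positive for a in alerts) / len(alerts)

print(f"MTTD: {mttd:.1f} minutes")
print(f"False positive rate: {fp_rate:.0%}")
```

Even this toy example shows why the metrics must be read together: a tool could "improve" MTTD simply by alerting on everything, which the false positive rate would immediately expose.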
How is this evaluated?
This is the only evaluation that looks at these metrics, and as such it is a complex procedure requiring unprecedented cooperation between teams. We've designed a complete target organization with target employees, relationships built with their own external vendors, normal staff workflows running during the evaluation process, and threat prioritization. You can read more about this in our methodology and in upcoming posts!
AI can be costly both in dollars spent and in person-hours. The promise of more efficient workflows and an improved security posture, whether through better detection rates or lower false positives, is very appealing to decision makers. Before you make a big commitment to a fancy new AI companion, tool or solution, consider the priorities you've set in your long-term strategy.
Our evaluation is here to help: get in touch, or encourage your vendor to take part!