The Questions Every CISO Should Ask an Exposure Management Vendor (But Usually Doesn't)
by Brad Hibbert, COO & CSO | 9 min read

Most exposure management evaluations start in the wrong place.
The standard checklist (features, integrations, pricing, references) tells you whether a platform can do the things it claims. What it doesn't tell you is whether deploying it will actually change anything. Whether the program will stick. Whether the teams who need to act on its output will trust it enough to do so.
I've been building and working in this space for over 30 years. And I recently had a conversation with Drew Simonis, a CISO with equal time on the other side of that equation, that crystallized something I've believed for a while: the questions that determine whether an exposure management program succeeds or fails are rarely the ones that get asked in the evaluation.
Drew Simonis on what he actually wants to know before buying
Drew offered three. I've added a fourth, equally important, that the market is still learning to ask.
1. Is this actually exposure management… or just better visibility?
This is Drew's first question, and it's the one that filters out the most vendors immediately.
The exposure management market is full of platforms that are very good at finding things. Scanning, correlating, scoring, reporting: the discovery layer has never been more capable. What's much rarer is a vendor that can show you evidence of outcomes. Not a prioritized list. Not a dashboard. Evidence that exposures get assigned, tracked, acted on, and closed.
“Is it really management, or is it exposure visibility that’s most important? That’s first and foremost.”
— Drew Simonis, CISO in Residence, Insight Partners
The question to ask is simple: show me a customer that deployed your platform and reduced measurable risk. Not one that improved their visibility score. Not one that expanded their asset coverage. One where the exposure backlog moved in the right direction because your platform made remediation easier to execute, own, and track.
If the answer is a case study about findings volume or scan coverage, you have your answer. The platform is a scanner with a better interface. That's not nothing, but it's not exposure management.
At Brinqa, we'd point to a large enterprise customer who told us, unprompted, that the two problems breaking their program were ownership and deduplication: the same two problems we built the platform to solve. The metric that matters isn't findings surfaced. It's whether the right teams are acting on the right ones.
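What does "evidence of outcomes" look like in practice? Here's a minimal sketch of an outcome metric next to a visibility metric, in Python. The findings export and its fields (owner, opened, closed, sla_days) are illustrative assumptions, not any platform's actual schema.

```python
from datetime import date

# Hypothetical findings export. The records and field names are
# illustrative, not a specific platform's schema.
findings = [
    {"id": "EXP-101", "owner": "platform-eng", "opened": date(2025, 1, 6),
     "closed": date(2025, 1, 20), "sla_days": 30},
    {"id": "EXP-102", "owner": None, "opened": date(2025, 1, 8),
     "closed": None, "sla_days": 30},
    {"id": "EXP-103", "owner": "db-team", "opened": date(2025, 1, 10),
     "closed": date(2025, 3, 2), "sla_days": 30},
]

def closed_within_sla(f: dict) -> bool:
    """True only if the exposure was closed inside its remediation SLA."""
    return f["closed"] is not None and (f["closed"] - f["opened"]).days <= f["sla_days"]

total = len(findings)
owned = sum(1 for f in findings if f["owner"])
on_time = sum(1 for f in findings if closed_within_sla(f))

# A visibility metric counts what you found. An outcome metric asks
# whether someone owned each exposure and closed it in time.
print(f"findings surfaced:         {total}")
print(f"with an accountable owner: {owned}/{total}")
print(f"closed within SLA:         {on_time}/{total}")
```

If a vendor can only move the first number, you're looking at visibility. The second and third numbers are where management shows up.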
2. Is this a team you're willing to struggle with?
Exposure management programs don't fail at the point of sale. They fail six months in, when the deployment hits something unexpected, the account team turns over, and there's nobody left who understands the original intent of the configuration.
Drew's framing is straight to the point: when things get hard, is this a team you can rally around? Are they willing to admit when something isn't working? Will the people you're building relationships with now still be there when you need them?
“Is this vendor somebody that you're going to be willing to struggle with? When things get tough, is this a team that you think you can rally around with and be partners with and collectively develop solutions to the challenges that you're facing?”
— Drew Simonis
These aren't soft questions. Vendor stability and team continuity have hard downstream consequences for your program. If the account team turns over every 18 months, the institutional knowledge of your environment goes with them. If the company is in the middle of a platform pivot, the roadmap commitments made during the sale are someone else's problem by the time they come due.
The practical test: ask to speak with the people who will actually own your account post-sale, not just the sales team. Ask those people what they do when a customer reports something broken. Ask for a specific example of a time they changed course based on customer feedback. The answers will tell you more than any reference call.
3. Have you done the hard work of bringing other teams into this?
This one is directed at the CISO, not the vendor, and it's the most honest question on Drew's list.
Exposure management programs that fail don't usually do so because of the technology. They fail because the security team bought a platform and handed it to IT ops, developers, or engineering teams who had no say in the selection, no stake in the outcome, and no reason to trust the recommendations coming out of it.
“If other people don't embrace the technology and embrace the process changes that come with it, demonstrate a willingness to work in a new way, then you're pushing a rope uphill.”
— Drew Simonis
The question to ask yourself before you sign: have you had the conversation with the remediation teams? Not to inform them, to involve them. Do they understand what's changing about how exposures will be assigned? Do they trust the prioritization? Can they explain the reasoning behind a recommendation to their own manager?
This is where explainability stops being a product feature and becomes a program requirement. The teams doing the remediation work need to be able to defend their actions to engineering leadership, to IT management, and to auditors. If the platform can't give them a clear, traceable chain of reasoning from exposure to recommended action, the handoff will break down regardless of how good the prioritization logic is.
At Brinqa, we hear this consistently from customers who make programs work: the technology decision and the organizational decision have to happen together. You can't deploy a platform and then figure out who owns remediation. The platform has to be built around an answer to that question, and that answer has to be one the remediation teams helped define.
4. How are you testing your AI outcomes in a non-deterministic world?
This one isn't on Drew's list, but it belongs here.
Every exposure management vendor has an AI story right now. Most of them are variations on the same theme: more defensible prioritization, faster triage, and more traceable recommendations. The problem isn't that AI can't do these things; it's that the quality of what AI does is entirely dependent on the quality of the data it's reasoning on. And in most platforms, that data is incomplete, context-poor, or derived from a single source with a single lens on your environment.
“Making sure your recommendations are explainable and defensible — not just to your CISO who has the technical chops, but to the business. And making sure that if you run these models over and over again, they’re repeatable. You’re not going to get a different answer every time so nobody knows what’s going on.”
— Brad Hibbert, COO & CSO, Brinqa
The question to ask any vendor with an AI capability is this: how are you testing your outputs in an environment where the model won't give you the same answer twice? How do you ensure that the recommendations are explainable, defensible, and repeatable enough that an operational team will actually act on them?
This matters because the IT ops teams, developers, and engineers actually doing remediation are being asked to act on AI recommendations that affect production systems. They need to trust those recommendations before they'll act. And trust requires consistency, explainability, and a visible chain of reasoning from the data to the decision.
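One way to put that question to a vendor in concrete terms: ask how they measure run-to-run agreement before a recommendation ever reaches an operator. Here's a minimal sketch of such a repeatability check; recommend_action is a hypothetical stand-in for a model call, and the weighted random choice exists only to simulate non-determinism.

```python
import random
from collections import Counter

def recommend_action(context: dict) -> str:
    # Hypothetical stand-in for a model call. A real system would prompt a
    # model with the exposure context; the weighted random choice below only
    # simulates the run-to-run variance we want to measure.
    return random.choice(["patch", "patch", "patch", "compensating-control"])

def repeatability_check(context: dict, runs: int = 20, threshold: float = 0.9) -> dict:
    """Ask the same question N times and report agreement on the top answer.

    If the most common recommendation doesn't clear the threshold, the output
    isn't stable enough to hand to an operational team unreviewed.
    """
    answers = Counter(recommend_action(context) for _ in range(runs))
    action, count = answers.most_common(1)[0]
    agreement = count / runs
    return {"action": action, "agreement": agreement, "stable": agreement >= threshold}

print(repeatability_check({"finding": "CVE-2024-XXXX", "asset": "payments-api"}))
```

In practice you'd compare recommendations mapped to a fixed action taxonomy rather than raw model text, since two differently worded answers can describe the same fix.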
Brad Hibbert on AI explainability and what vendors need to get right
At Brinqa, we've had to make significant changes to how we test and validate AI-generated recommendations precisely because of this. The answer isn't that every output needs to be identical; there are often multiple valid paths to the same outcome. The answer is that every output needs to be traceable: here's the exposure, here's the threat intelligence, here's the asset context, here's the business impact, here's the recommended action and why. That chain of reasoning has to hold up under scrutiny from auditors, from boards, and from the operational teams who are being asked to act on it.
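To make that chain concrete, here's one minimal sketch of a recommendation record that carries its own evidence. The structure, field names, and values are my illustration, not Brinqa's actual data model.

```python
from dataclasses import dataclass

@dataclass
class TraceableRecommendation:
    """An illustrative record shape: every recommendation carries the
    evidence chain that produced it."""
    exposure: str            # here's the exposure
    threat_intel: list[str]  # here's the threat intelligence
    asset_context: dict      # here's the asset context
    business_impact: str     # here's the business impact
    action: str              # here's the recommended action
    rationale: str           # ...and why

rec = TraceableRecommendation(
    exposure="CVE-2024-XXXX on payments-api (internet-facing)",
    threat_intel=["exploited in the wild", "public PoC available"],
    asset_context={"environment": "production", "data": "cardholder"},
    business_impact="revenue-critical service in PCI scope",
    action="patch within 72 hours; apply a virtual patch until then",
    rationale="active exploitation plus internet exposure outweighs change-freeze risk",
)
```

The value of the shape is that every field can be challenged individually: an auditor or an engineering manager can dispute a specific link in the chain instead of arguing with a black-box score.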
That's not a feature. That's the prerequisite for any AI capability that's going to survive contact with a real security program.
The evaluation that actually matters
Most vendor evaluations are designed to answer one question: can this platform do the things it claims? That's a necessary question. It's not a sufficient one.
The evaluation that actually determines whether an exposure management program succeeds asks something harder: will this platform change how my organization acts? Will it make it easier for the right teams to own the right problems, act on defensible recommendations, and demonstrate measurable progress over time?
Those questions don't get asked enough. Drew's thirty years of experience on the buying side of this conversation is worth more than any RFP response. The vendors worth talking to are the ones who welcome the questions.
Watch the full conversation with Drew Simonis and Brad Hibbert
Drew Simonis and Brad Hibbert cover the state of exposure management, the vendor trust problem, and what it actually takes to close the gap between finding and fixing. Watch the interview or register for the live webinar to hear the conversation in full.


