November 15, 2023

Vulnerability management programs form the foundation of most managed security services and SOC efforts. Despite a plethora of scanning and discovery tools, I remember as a former SOC Manager how difficult it was to maintain comprehensive visibility of managed assets and to react quickly to detected vulnerabilities. Remediation efforts were often laborious, and in many instances it was difficult to know what to prioritize first. So I got excited when I ran across Nucleus Security at a conference this past summer and saw how they are helping organizations take vulnerability management to the next level. I was delighted when Patrick Garrity, a researcher and VP at Nucleus, agreed to sit down and talk to me about Nucleus. So learn more about Nucleus Security in this interview with Active Cyber™ below or listen to my interview with Patrick at this podcast link.

Spotlight on Mr. Patrick Garrity

» Title: Security Researcher and VP of Nucleus Security

» Website: https://nucleussec.com/

» LinkedIn: linkedin.com/in/patrickmgarrity

Read his bio below.


Chris Daly, Active Cyber™ – How did Nucleus get started and where is it today? Where is it going tomorrow?

Mr. Patrick Garrity, Security Researcher and VP, Nucleus Security – Our founders, Scott and Steve, were working as federal contractors on vulnerability management at large scale about a decade ago. They started to build some tooling to help with the challenges of spreadsheets all over the place and trying to get a sense of all the assets and vulnerabilities. As they were doing this, they made a conscious decision to say, hey, we think this is really valuable, so they commercialized it and turned it into a product. They first focused on the commercial markets to really home in on the features and get things right. We’ve since evolved and have been really focused on scaling enterprise vulnerability management. We’ve iterated continually, and now we’re in the process of bringing the product back to market in the federal space. So it’s pretty cool as far as where we’ve been and where we’re going.

A lot of the challenge we’re seeing organizations face in the vulnerability management space is that you have a lot of disparate data: a lot of different scanners, a lot of different information. We’ll talk about that a little bit more. You just have asset and vulnerability data everywhere, especially when you’re talking about large scale like the federal government and large commercial enterprises. And then you have disparate teams, teams doing development on different projects and managing different infrastructure. So how do you actually correlate all this information and get the right information to the right people so they can make the right decisions and take the right actions? That’s the challenge Nucleus has been focused on enabling and supporting. It is really a practitioner-oriented tool that helps build the right workflow automation for vulnerability management and cybersecurity teams.

Active Cyber™ –  Vulnerability management and exploitation have recently become hot topics. Why do you think that is?

Mr. Garrity – Yes. So first off, I think vulnerability management at its core is basically scanning and discovery of vulnerabilities and assessment of their exploitability. Historically, though, most initial attack vectors have been credential compromise. I worked at Duo Security for eight years and it was great. Every DBIR report, every Mandiant report, it’s credential compromise, credential compromise, credential compromise. And when I say it was great, I don’t mean it was great that people were getting compromised; it was great helping people deploy MFA. You know, MFA solved a lot of that problem and made it harder for attackers to get initial access, which is great. It’s not perfect, it’s not a silver bullet, there are still ways to get in, but it made it a lot harder.

And while it got harder on the credential side for attackers, it got easier on the exploit side: information for social engineering became more available, the attack surface expanded, and more exploits became available. Attackers noticed and shifted their approach. MFA is hard to beat, except maybe through social engineering, so attackers started using the exploits that have become more available. You can go get them off of GitHub, you have Metasploit and other tools. So it was a really quick change. To give you an example of how quickly this changed: in 2020, the Mandiant M-Trends report said everything is credential compromise, it’s phishing, phishing, phishing, it’s all credentials. Two to three years later, in the 2022 M-Trends report, the number one initial attack vector is exploits. So everyone is essentially getting caught in this scenario of, wow, the attack paths for initial access have changed. And I think it’s kind of cool because that reflects keen foresight by our founders. Just in the last couple of years, even the last year, everyone made fun of me for going to a vulnerability management company because they thought it was a stagnant market. And that’s a good indication that opportunity exists for disruption. But those are the things from my perspective that really changed overnight, and that’s why we’re now seeing boards, executives and organizations across the world scrambling to figure this out. It’s a pretty big problem.

Active Cyber™ –  I agree that the threat has definitely changed today. But that also brings up the question of how Mandiant and other threat intelligence companies are now tying into vulnerability management capabilities like Nucleus. So why is intelligence-led vulnerability management important? Let’s start with what it is, and then get into why it’s important for the customers that you serve.

Mr. Garrity – So historically you would scan all your network devices and maybe some other things. Maybe you’d have an application scanner as well. Then you’d use the Common Vulnerability Scoring System (CVSS) to help prioritize the most severe problems and start there. Like most people, you’d just use the CVSS base score, because that’s what’s in the tool, to determine whether you should fix something. The challenge with this approach is that we’ve seen an exponential rise in exploitation, and CVSS only considers metrics of the vulnerability itself. It doesn’t consider whether the vulnerable asset is under attack, whether there’s a proof-of-concept exploit, whether the vulnerability is being actively exploited, or whether there are threat actors or ransomware associated with it.

So, up until recently, the only place you could get that intelligence was commercial threat intelligence: companies like Mandiant, Intel 471, GreyNoise, and a bunch of others. But more recently we’ve seen open sources emerge like GitHub, CISA advisories, EPSS (the Exploit Prediction Scoring System), and KEV (the CISA Known Exploited Vulnerabilities catalog). This stuff wasn’t available until recently. So now you can make more relevant, context-based decisions on how you prioritize vulnerabilities by incorporating not only vulnerability and asset context, but threat context too. Threat context lets you pinpoint the really high-risk items, the ones we know people are actively exploiting. So when we talk about threat intelligence and intelligence-led vulnerability management, it’s really about incorporating that threat component into your vulnerability management program and using it to prioritize, to make decisions on risk acceptance and exceptions handling, to decide what we should treat as the highest, most urgent priority. It’s about being informed about what’s going on with the vulnerabilities and assets in your environment.
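
To make that concrete, here is a minimal Python sketch of checking a handful of CVEs against the CISA KEV catalog, which is published as a public JSON feed. The feed URL and field names below follow CISA’s published schema at the time of writing; treat them as assumptions to verify against current CISA documentation.

```python
import json
import urllib.request

# Public CISA KEV feed; field names follow CISA's published schema.
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev_index() -> dict:
    """Download the KEV catalog and index its entries by CVE ID."""
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return {entry["cveID"]: entry for entry in catalog["vulnerabilities"]}

def check_cves(cve_ids: list[str]) -> None:
    kev = load_kev_index()
    for cve in cve_ids:
        entry = kev.get(cve)
        if entry:
            # KEV entries carry a remediation due date and a ransomware flag.
            print(f"{cve}: KNOWN EXPLOITED, due {entry['dueDate']}, "
                  f"ransomware use: {entry.get('knownRansomwareCampaignUse', 'Unknown')}")
        else:
            print(f"{cve}: not in KEV")

check_cves(["CVE-2021-44228", "CVE-2020-0601"])
```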

Active Cyber™ –  So I agree 100% with what you just said. When you think about large enterprises, though, they can be very diverse in terms of what kinds of assets they’re managing and where those assets are located. So what are some challenges of tying threat context into vulnerability management programs in large-scale, diverse environments like the big federal customers or big commercial enterprises that you guys see? And how do you go about helping them?

Mr. Garrity – So number one, a large commercial enterprise or federal organization is going to have different business units or organizational units, and they’re going to use different scanning tools. They’ll probably have Qualys or Rapid7. They’ll have Tenable Nessus. They’ll have application scanners like Invicti. And despite how much they want to consolidate and say, hey, we only use these tools, even if you use one tool for network scanning, one for application scanning, one for your source code repositories, and one for your asset management, the tool count just explodes. And that’s part of the challenge. If you’re a small organization, you can take a Tenable Nessus and maybe a couple of other tools and get away with it. But the tooling just doesn’t scale in the large enterprise when you’re asking how to look at which vulnerabilities pose the most risk to the organization as a whole, and then how to get that information to the right people so they can make decisions about the assets they own or are responsible for.

So, the challenge is how you get that alignment and accountability and have shared goals. And I think that’s really important: everyone looking at the same things and spending the time to build a common understanding of what the real risks and challenges are for the organization. Doing this at very large scale is, you know, not an easy feat. It’s interesting to me because, in a way, that positions vulnerability management as a way of managing your tool sprawl, because you can bring it all together and look at all your assets across your whole enterprise, all the different things that the different tools bring to you.

Active Cyber™ –  You can actually see what it all looks like together. I like that. So, what do you find most interesting in your research about tying these threat intelligence sources in with these vulnerability management scanning tools?

Mr. Garrity – Each of these scanning products and threat intel services can provide visibility within its respective, isolated use case. But none of them tells you the bigger picture. And what we find, too, is that when you pull together a Tenable scan, a CrowdStrike scan, and a Microsoft Defender scan, they’re all going to provide different perspectives as well. So a lot of times when you normalize and correlate this information, you’re going to learn a completely different story than what you would have gotten out of just a single scanner.

The first step is correlating all this information and getting an understanding of where you’re at as an organization. Free sources like KEV and EPSS are essential early on, and we include Mandiant threat intelligence in our product for every customer as well. Next, look at the things in your environment that are externally facing, using KEV as a great place to go: see if it’s a known exploited vulnerability reported by a credible source, the US government. If it is, and it’s externally facing, you probably should fix it, and fix it relatively quickly. You can do the same with the Exploit Prediction Scoring System (EPSS), which we’ll talk more about, by looking at vulnerabilities with high EPSS scores. And then I can go beyond that and take Mandiant threat intelligence and say, EPSS says this is rated high: why might it be high, and what can commercial threat intelligence tell me about the risk it poses to my organization? That’s where Mandiant comes through and tells you whether there are threat actors associated with it, whether there’s malware associated with it, what the true risk is, and what it takes to actually exploit it. Go test these things and run at them with people power. One of the things I’ve learned is that the more information you have, the more informed decisions you can make.
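
The triage order Patrick describes, known-exploited first, external exposure next, EPSS behind it, can be sketched in a few lines. The `Finding` structure below is hypothetical, not Nucleus’s data model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    in_kev: bool            # listed in CISA KEV?
    epss: float             # EPSS probability, 0.0-1.0
    external_facing: bool   # asset reachable from the internet?

def triage_key(f: Finding) -> tuple:
    # Sort descending by: known-exploited, external exposure, EPSS probability.
    return (f.in_kev, f.external_facing, f.epss)

findings = [
    Finding("CVE-2023-0001", in_kev=False, epss=0.92, external_facing=False),
    Finding("CVE-2023-0002", in_kev=True,  epss=0.10, external_facing=True),
    Finding("CVE-2023-0003", in_kev=False, epss=0.02, external_facing=True),
]

for f in sorted(findings, key=triage_key, reverse=True):
    print(f.cve, "in KEV" if f.in_kev else f"EPSS={f.epss:.2f}")
```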

The second thing I’ve learned is that not everything you’re going to do can be automated. You want to automate a lot, but the more intelligence and information you can put behind those decisions, automated or not, the quicker you can move and the more informed your decisions will be. In a lot of my research I like playing around with the data sets, and I find some interesting discrepancies: none of these threat intelligence sources tells exactly the same story. I think the more sources you have, the harder it is to correlate that information and get a sense of where you should prioritize your efforts without something like Nucleus to aid you.

So my suggestion for most people is to start small and iterate when you’re looking at vulnerability and threat intelligence, because you can overwhelm yourself and the rest of your teams very quickly by trying to boil the ocean. Starting with CISA KEV, starting with Mandiant to identify vulnerabilities associated with threat actors, or starting with high EPSS scores are all good options. But you should start taking action, and then you’ll learn. And once you learn, you can iterate from there as well.

Active Cyber™ –  All right, sounds great. So let’s dive a little deeper into this Exploit Prediction Scoring System, EPSS. Tell me a little more about what it is and how your customers use scoring systems.

Mr. Garrity – First off, people are always wondering whether EPSS is the silver bullet, a scoring system they can just use to fix the top things and move on. There is no silver bullet in vulnerability management. Prior to EPSS there was CVSS, and there still is CVSS, and it is doing some great things with the new version 4. The CVSS base score doesn’t consider threat. You can enrich it, though, and get what’s called a BT score, which includes a threat metric. So I think that’s very promising with the new version coming out: with the proper enrichment you can get BT and BTE scores, which add threat and environmental metrics. With that, the scoring system might be a lot better than it used to be, if you choose to take those actions. The challenge is we’re not there yet on doing proper enrichment with the tools we have today. There’s still a long time horizon until tools adopt version 4 and we see how it works.
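
For readers unfamiliar with the enrichment Patrick mentions: in CVSS v4.0 the threat metric group consists of a single Exploit Maturity (E) metric, and appending it to a base vector yields a CVSS-BT vector. The sketch below only constructs the enriched vector string; computing the resulting score requires the official FIRST calculator or an implementation of the v4.0 scoring tables.

```python
# Sketch: enriching a CVSS v4.0 base vector with the Threat metric group.
# In v4.0 the threat group is a single metric, Exploit Maturity (E):
#   E:A = Attacked, E:P = Proof-of-Concept, E:U = Unreported, E:X = Not Defined.

BASE = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"

def enrich_with_threat(base_vector: str, exploit_maturity: str) -> str:
    """Append Exploit Maturity to a base vector, producing a CVSS-BT vector."""
    assert exploit_maturity in {"A", "P", "U", "X"}
    return f"{base_vector}/E:{exploit_maturity}"

# Threat intel says this CVE is being attacked in the wild:
print(enrich_with_threat(BASE, "A"))
```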

The Exploit Prediction Scoring System is also managed by FIRST.org, so it’s loosely affiliated with CVSS, and a lot of the same people who created CVSS are working on EPSS. It looks at the probability of a vulnerability being exploited in the next 30 days. So if you take 100 vulnerabilities that each have a 0.03 score, it’s expected that three of the 100 will be exploited in the next 30 days. And this is really useful. It’s powered by different data sources than CVSS and uses a machine learning model. There are thousands of attributes in the model: GitHub exploits, Metasploit modules, and so on. It takes all of that, learns, and produces a probability for every CVE. From what I’ve seen, it’s very useful and very accurate in identifying the high-risk vulnerabilities that are likely to be exploited. So it’s a good place to start. It’s not perfect. If you do know something’s being exploited, you should go fix that first, so you probably want to start with KEV and then use EPSS behind it. The EPSS developers periodically validate the model to check whether the probabilities are actually accurate, and then they readjust the machine learning model. So it’s really doing a lot of the work for people in identifying high-risk vulnerabilities.
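
FIRST also exposes EPSS scores through a free public API. Here is a minimal sketch of querying it; the endpoint and response fields follow FIRST’s published documentation, but verify the schema at https://www.first.org/epss/ before relying on it.

```python
import json
import urllib.request

# FIRST's public EPSS API; returns a probability and a percentile per CVE.
EPSS_API = "https://api.first.org/data/v1/epss?cve={cve}"

def get_epss(cve: str) -> tuple[float, float]:
    """Return (probability, percentile) for a CVE."""
    with urllib.request.urlopen(EPSS_API.format(cve=cve)) as resp:
        payload = json.load(resp)
    record = payload["data"][0]
    return float(record["epss"]), float(record["percentile"])

prob, pct = get_epss("CVE-2021-44228")
# A 0.97 probability means ~97% chance of observed exploitation
# activity within the next 30 days.
print(f"probability={prob:.2f}, percentile={pct:.2f}")
```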

And some organizations are setting scoring thresholds to say, if there’s a probability of five percent or higher that a vulnerability will be exploited, we’ve got to fix it in X number of days. There are also comparisons that can be made between EPSS and CVSS in terms of the work effort required, from a patch-cycle perspective, to cover the vulnerabilities that are actually exploited. EPSS has been shown to require remediating fewer vulnerabilities to achieve the same exploitation coverage. I know I’m getting into the weeds a little bit. I really appreciate EPSS. It’s really cool how useful and helpful it is to organizations for patch prioritization.
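
A threshold policy like the one Patrick describes might look like the following sketch. The specific cutoffs and day counts are illustrative assumptions, not recommendations from Nucleus; each organization sets its own based on risk appetite and remediation capacity.

```python
def remediation_sla_days(in_kev: bool, epss: float) -> int:
    """Map threat context to an illustrative remediation deadline."""
    if in_kev:          # confirmed exploitation: fix first
        return 7
    if epss >= 0.05:    # >=5% probability of exploitation in 30 days
        return 14
    if epss >= 0.01:
        return 30
    return 90           # routine patch cycle

print(remediation_sla_days(in_kev=False, epss=0.12))  # -> 14
```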

Active Cyber™ –  Are EPSS models getting fed the amount of data needed for accurate machine learning? And where’s that data coming from?

Mr. Garrity – I’m a member of the EPSS SIG. We can always use more information from anybody who has threat intelligence. There are organizations contributing data like GreyNoise, F5, Shadowserver, Cisco Talos, Fortinet, AlienVault, Cyentia, and NETSCOUT/Efflux. And one of the things we always emphasize with EPSS is: if you have vulnerability data, threat data, or exploitation information, come talk to us. The more contributions from the community, and this is an open standard, the more the EPSS model is enhanced. You’re getting me on my EPSS soapbox a little bit, but part of it is education, so people understand they can contribute, but also that they can use the scoring system freely. And that’s what’s really great. It’s probably, after CVSS, the most widely adopted scoring system in the world right now. And because it’s an open standard, that’s really promising. I think there are over 50 different products that now support EPSS, including Nucleus Security.

Active Cyber™ –  Nice. Okay. That sounds great. One of the things I always had a problem with when I was a SOC director was that I had all these scanning tools and all these other asset management tools, but they would all come up with different numbers for what assets I had out there. So what are some of the tools you integrate with to capture a complete asset inventory for your customers? And how do you rationalize the differences between those tools to come up with a single view of the truth for enterprise assets?

Mr. Garrity – Everyone wants to solve the asset problem. It’s such a problem that there are so many products out there trying to solve it, and I think some do better than others. You always want to get as complete a picture as you can. And as I mentioned, all these scanning and discovery tools are going to have a different perspective, and none of them is perfect either. At Nucleus, from a product perspective, we assume you don’t have an asset inventory. What I mean by that is, if you ingest a Tenable scan or a Qualys scan or a Rapid7 scan or an Invicti scan, we’re going to build an asset inventory based on the information we have from that scan. And that’s really important fundamentally, because even if you do have an asset inventory, it’s probably wrong, especially in a large enterprise. We also integrate with products like Axonius, RunZero, and ServiceNow, and we allow you to ingest your custom CMDB. So a lot of it is: yes, we are going to correlate that information, everything you can give us, and build an inventory purpose-built for vulnerability management. We’re not going to solve some of the traditional asset inventory problems that an Axonius or a ServiceNow focuses on; IT asset management is much different. But we’re going to help you solve the correlation of all those assets, purpose-built for risk management and vulnerability management. We have to assume you don’t have that inventory in order to do it properly from a product perspective.
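
To illustrate the correlation problem, here is a toy sketch of merging asset records from two scanner exports on a normalized key. The field names are hypothetical, and real correlation logic weighs many more signals (MAC addresses, cloud instance IDs, agent UUIDs, FQDN vs. short name).

```python
from collections import defaultdict

def asset_key(record: dict) -> str:
    # Normalize on lowercase short hostname, falling back to IP.
    host = (record.get("hostname") or "").strip().lower().split(".")[0]
    return host or record.get("ip", "unknown")

def merge_inventories(*scanner_exports: list) -> dict:
    inventory: dict = defaultdict(lambda: {"sources": set()})
    for export in scanner_exports:
        for record in export:
            asset = inventory[asset_key(record)]
            asset["sources"].add(record["source"])
            # Keep the first non-empty value seen for each attribute.
            for field in ("hostname", "ip", "os"):
                if record.get(field) and not asset.get(field):
                    asset[field] = record[field]
    return dict(inventory)

tenable = [{"source": "tenable", "hostname": "WEB01.corp.example", "ip": "10.0.0.5", "os": "Linux"}]
qualys  = [{"source": "qualys",  "hostname": "web01", "ip": "10.0.0.5", "os": None}]
inv = merge_inventories(tenable, qualys)
print(inv["web01"]["sources"])  # both scanners correlated to one asset
```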

Active Cyber™ –  So you may have many network devices or applications in your enterprise that have components from a common library like OpenSSL or Java, and then you find out that there is a vulnerability in that component. How do you determine what devices or applications are affected? I remember when I was at IBM, we would sometimes have a hard time running this down. So being able to drill down to the software level, or using these things called Software Bills of Materials (SBOMs) for vulnerabilities, is that something that’s part of the asset inventory you guys track? And how important do you think it is to be able to track vulnerability impacts using SBOMs?

Mr. Garrity – I will emphasize there are hundreds, if not thousands, of different types of assets. You have assets related to your product development life cycles, the software, the libraries you’re developing. That’s going to be different from patching network devices, patching servers, patching workstations, patching the card reader for your PCI environment. There’s just such a vast amount of difference. So I think you need to be aware that there is a lot of information, a lot of different things that have vulnerabilities, and many of them might not even be fixable; that’s a reality too. And you have to consider those things. The software bill of materials, SBOM, is something we can ingest. Different tools will kick off SBOMs on the scan side; that could come from a Veracode scan or your GitHub repos. I always emphasize with SBOM that it’s early in maturity and adoption from an operational perspective. But as it relates to vulnerability management in the context of your different assets, it’s something that’s going to be useful as part of the larger vulnerability management program.

Active Cyber™ –  I think I agree with you. SBOM is still early in the adoption life cycle. I do know that there is a federal government executive order to help push folks toward adoption. And I know that the Linux Foundation and Microsoft both have SBOM format standards out there with ways to characterize software bills of materials. So I do believe it’s on a path to wide adoption down the road, but I agree it’s still kind of new. So, one of the things you’ve been talking about is having all this information and being able to do something with it to figure out how to prioritize vulnerabilities. Is that what you mean by context awareness? Is this threat awareness? Is this vulnerability awareness, the scoring, a way to do risk awareness? How do you use this context to improve your overall vulnerability management program?

Mr. Garrity – When we talk about context, we’re talking about three different components. 1) The vulnerability itself: information is going to come from the National Vulnerability Database (NVD) and the Common Vulnerability Scoring System (CVSS). So that’s information about the vulnerability. 2) Then you have the asset: who owns the asset? What’s the compliance scope? Is there data sensitivity? Is there network exposure? There are probably five other things you could consider as they relate to the asset, or a thousand. 3) And then you have threat. Threat can be described by the Exploit Prediction Scoring System, say the vulnerability has a high score. Add information that may come from Mandiant threat intelligence, for example, that the threat is associated with ransomware: much higher risk. It’s associated with specific threat actors. We could go on and on, but those three components, vulnerability, asset, and threat, are really what I think about when we talk about context aware.
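
Those three components can be pictured as a simple data model. The sketch below is purely illustrative of the vulnerability/asset/threat split; the field names and the example decision rule are assumptions, not Nucleus’s schema.

```python
from dataclasses import dataclass

@dataclass
class VulnContext:            # from NVD / CVSS
    cve: str
    cvss_base: float

@dataclass
class AssetContext:           # who owns it, where it lives
    owner: str
    compliance_scope: str     # e.g. "PCI", "none"
    data_sensitivity: str     # e.g. "high", "low"
    network_exposure: str     # e.g. "external", "internal"

@dataclass
class ThreatContext:          # from EPSS, KEV, commercial intel
    epss: float
    known_exploited: bool
    ransomware_associated: bool

def urgent(v: VulnContext, a: AssetContext, t: ThreatContext) -> bool:
    """Example decision: urgent if exploited or ransomware-linked and the
    asset is externally exposed or in a regulated scope."""
    hot = t.known_exploited or t.ransomware_associated or t.epss >= 0.5
    exposed = a.network_exposure == "external" or a.compliance_scope != "none"
    return hot and exposed

print(urgent(
    VulnContext("CVE-2021-44228", cvss_base=10.0),
    AssetContext("web-team", "PCI", "high", "external"),
    ThreatContext(epss=0.97, known_exploited=True, ransomware_associated=True),
))  # -> True
```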

We’re trying to automate as much of this as possible: automating your patch processes, automating your workflow processes for vulnerability management, automating the ingest of information, ensuring that the right people, the stakeholders who own the assets, can take action, and making sure that information is accurate. So that’s really what I think about when we talk about context-aware vulnerability management. It’s helping the organization get to a better place through proper decisioning and automation, based on those three components of vulnerability, asset, and threat.

Active Cyber™ –  Interesting. So let’s switch gears and talk some about managing the root causes of vulnerabilities. Vulnerabilities could come from your own DevOps process. It could come from your supply chain. It could come from a variety of different ways your employees bring data and code into your enterprise. So how does your tool help to identify the root cause of a vulnerability? And how does this knowledge help you in the vulnerability management and remediation processes?

Mr. Garrity – So that’s a great question. First, getting your assets and vulnerabilities into a single database so you can do root cause analysis is really important. Look at different groupings of vulnerabilities: what software are they associated with? What devices are they associated with, asset-wise? Next, look at threat intelligence, because a lot of the time threat intelligence will give us more information on whether the vulnerability even needs to be fixed.

Root cause analysis is important for determining where a problem may exist and what we can do as an organization to limit the number of vulnerabilities we’re seeing that pose higher risk to the organization. That’s, in a lot of ways, how Nucleus is helping organizations navigate their vulnerability management programs and identify opportunities for improvement.

You can also better understand the connection between your vulnerability management program and your supply chain management program by using root cause analysis to identify the vendors where problems originate. So root cause analysis can help with your attack surface management and third-party risk management efforts. You can also use it to help guide your bug bounty program. All of that can be ingested into Nucleus, and then you can take action on it and incorporate it with the rest of your traditional vulnerability management tooling. In terms of accountability for remediation, there are a lot of cases where vulnerability management teams throw vulnerabilities over the fence to a development team, and the vulnerability isn’t actually active in the code. So there are many things you have to look at from a root cause analysis perspective to determine whether something is actually a problem or not.

Active Cyber™ –  Okay. So let’s talk about automation. You’ve mentioned automation a few times, and I think that’s one of the key capabilities of Nucleus. So what are some of the interesting features of Nucleus’ automated ticketing and automation rules, and how does that all come together?

Mr. Garrity – Yes, automation is a huge feature of Nucleus, and I think sometimes we don’t talk about it as much as we should. We’re moving away from a vulnerability management world of Excel spreadsheets, CSV files, dumps from scanners, even email. When you set up an integration with Nucleus, we automate the ingest of that data. We correlate that data across assets and vulnerabilities. We normalize, de-dupe, and contextualize all of it, bringing in all the vulnerability data: things like NVD data, CVSS, threat data, EPSS, Mandiant, CISA KEV, and so on. So there’s a lot of automation that just happens that customers don’t even realize, and a lot of that gets people to time-to-value. They get to a state more quickly where they’re making progress in their vulnerability management program.
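
As a toy illustration of the de-duplication step, the sketch below collapses reports of the same CVE on the same asset from different scanners into one finding. Field names are hypothetical, not Nucleus’s schema.

```python
# De-duplicate findings from multiple scanners: two reports of the same
# CVE on the same asset become one finding that remembers its sources.

def dedupe_findings(raw_findings: list) -> list:
    merged: dict = {}
    for f in raw_findings:
        key = (f["asset"], f["cve"])
        if key not in merged:
            merged[key] = {"asset": f["asset"], "cve": f["cve"], "scanners": []}
        merged[key]["scanners"].append(f["scanner"])
    return list(merged.values())

raw = [
    {"scanner": "tenable",  "asset": "web01", "cve": "CVE-2021-44228"},
    {"scanner": "defender", "asset": "web01", "cve": "CVE-2021-44228"},
    {"scanner": "qualys",   "asset": "db02",  "cve": "CVE-2023-0002"},
]
for f in dedupe_findings(raw):
    print(f["asset"], f["cve"], f["scanners"])
```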

I can’t tell you how many friends I have who are just Python-scripting this all together and doing it themselves. They come back a year or two later realizing they don’t want to maintain a product. Or someone leaves the organization, and the next person picks it up and has no idea what to do or where to even start.

Now we’re talking a whole different ballgame when we dive deeper into the automation side, like workflow automation. Recasting the severity of vulnerabilities is probably the biggest use case we see for workflow. We take the asset context, the risk-scoring context, and the threat context, and then recast the severity based on those conditions. Some might consider recasting to be stakeholder-specific vulnerability categorization, since it helps identify where a vulnerability fits within the organization from a risk perspective: critical, high, medium, or low. This is really important because conditions change quickly. You might get new threat intelligence telling you a vulnerability is now being exploited all over the place, and you might want to take urgent action on that. So that’s where recasting severity comes into play a lot. The other big use case is automating workflow assignments and ticketing, and ensuring that’s bi-directional as well. So if someone’s working in a ticketing system and another person is working in Nucleus, they’re speaking the same language to each other, and they’re not having to close out tickets twice and all that fun stuff.
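
A severity-recast rule can be sketched as a function from the scanner’s severity plus threat and asset conditions to a new severity. The rules below are illustrative assumptions, not Nucleus’s rule engine.

```python
LEVELS = ["low", "medium", "high", "critical"]

def recast(scanner_severity: str, in_kev: bool, epss: float,
           external_facing: bool) -> str:
    idx = LEVELS.index(scanner_severity)
    if in_kev or epss >= 0.5:
        idx = min(idx + 1, len(LEVELS) - 1)    # escalate hot items
    if not external_facing and not in_kev and epss < 0.01:
        idx = max(idx - 1, 0)                  # de-escalate quiet internal items
    return LEVELS[idx]

print(recast("high", in_kev=True, epss=0.02, external_facing=True))    # critical
print(recast("high", in_kev=False, epss=0.001, external_facing=False)) # medium
```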

I always emphasize to people: you want to get to automation quickly, but make sure you do your due diligence on what to automate and why. You can automate yourself into a scenario that you think works, but you didn’t look at the data and you end up with bad results. Or you might be overwhelming people with a lot of open tickets that they don’t have the staff to handle. So the alignment and discussions between people on what to automate are really important.

Active Cyber™ –  Yes, having automated two-way communication between Nucleus and a ticketing system, so everyone is on the same page, is critical, I think. But how do you do that? Because when I was working in enterprises, nothing got done unless there was a ticket associated with it. The integration is not easy, especially when you’re talking to large ServiceNow-type deployments and you want to maintain the same frame of reference between both systems. So how do you tie into ticketing systems for two-way communication?

Mr. Garrity – We’ve had to build out custom ServiceNow modules and custom JIRA integrations. That’s another example where people say, I’m just going to build a hook to JIRA or ServiceNow, and it’s going to open a ticket. That’s what a lot of organizations have done or are doing, but there is still back-and-forth finger-pointing, because a one-way hook doesn’t work. So at Nucleus we’ve built purpose-built modules that allow for bi-directional communication between the ticketing systems and Nucleus, which eliminates that challenge. And frankly, I don’t know any other vulnerability management tools that do this. They’ll say, I have a ServiceNow integration. What that means is, I’ll open a ServiceNow ticket or I’ll open a JIRA ticket. It doesn’t necessarily mean you have bi-directional communication between the systems.
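
Conceptually, bi-directional sync means a close event on either side propagates to the other. The toy sketch below shows the idea with in-memory stores; everything here (IDs, statuses) is hypothetical, and a real integration would use the ServiceNow or JIRA REST APIs and webhooks.

```python
findings: dict = {"FND-1": "open"}   # finding_id -> status (VM tool side)
tickets:  dict = {"TKT-9": "open"}   # ticket_id  -> status (ticketing side)
links = {"FND-1": "TKT-9"}           # finding <-> ticket mapping

def on_ticket_closed(ticket_id: str) -> None:
    """Ticket-system webhook: closing the ticket closes the finding."""
    tickets[ticket_id] = "closed"
    for fnd, tkt in links.items():
        if tkt == ticket_id:
            findings[fnd] = "closed"

def on_finding_closed(finding_id: str) -> None:
    """VM-side event: closing the finding closes the ticket."""
    findings[finding_id] = "closed"
    tickets[links[finding_id]] = "closed"

on_ticket_closed("TKT-9")
print(findings["FND-1"])  # closed: no second close-out needed in the VM tool
```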

Active Cyber™ –  Yes, because you want to make sure that if it’s closed in the ticketing system, it’s also closed in your Nucleus system, and vice versa. So let’s extend this discussion around workflow automation further. What about POA&Ms (Plans of Action and Milestones)? Can you talk to me a little bit about how Nucleus supports POA&Ms? A lot of the people who go to my website and listen to my podcast are federal customers, and they really emphasize POA&Ms as a way of doing business.

Mr. Garrity – We use findings in Nucleus as POA&Ms when it comes to federal reporting. Our whole concept of a “finding” is meant to make it easy to report a POA&M as a bundle of owned instances that need to be fixed. And, you know, that includes things like the tasks that need to be accomplished, description and solution fields, completion dates, due dates, milestones, comments, and all that, so you can coordinate the management and workflow of a POA&M.
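
Expressed as a data structure, a POA&M bundle along the lines Patrick describes might look like the following sketch. The field names are illustrative, drawn from the fields he lists, not Nucleus’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Milestone:
    description: str
    due_date: date
    completed: Optional[date] = None

@dataclass
class POAM:
    weakness_description: str
    solution: str
    affected_instances: list           # the owned asset instances to fix
    due_date: date
    milestones: list = field(default_factory=list)
    comments: list = field(default_factory=list)

poam = POAM(
    weakness_description="Log4j RCE (CVE-2021-44228) on app servers",
    solution="Upgrade log4j-core to 2.17.1 or later",
    affected_instances=["app01", "app02"],
    due_date=date(2023, 12, 15),
    milestones=[Milestone("Patch staging", date(2023, 12, 1))],
)
print(len(poam.affected_instances), "instances tracked under one POA&M")
```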

Active Cyber™ –  Okay, so you definitely have that covered. Are there any other types of special capabilities that you offer to federal government customers then?

Mr. Garrity – Yes. So, you know, the federal government is very strategic for us. It’s a key part of our origin story; the idea for the tool came out of federal work. Since then, we focused our efforts mostly on the commercial side until the product reached a level of maturity where it was ready for the scale that federal demands. So a few different things have been happening. On the FedRAMP side, we’re in the late stages of approval, so we’re anticipating some news soon in relation to FedRAMP, which is a really exciting milestone. Just so everyone knows, FedRAMP approval makes Nucleus cloud-accessible for federal customers. We do have federal customers today that we support with on-premise deployments of Nucleus. And we do have some federal agencies that use our commercial cloud service and cloud product, with exceptions. I don’t know how that process all works, but, you know, when there’s a need, certainly there’s a way.

And there are a lot of feature sets that we’ve purpose-built for federal, as well as things we’re still building. So there’s a lot more news to come; I can’t tell all the details today, but we are heavily investing in the federal use cases. One of the most exciting things I haven’t talked about is a combination of multi-tenancy, project-based assignment, and role-based access, something we call asset group access control. If you’re a federal agency or a state agency, you could actually have many sub-organizations. With asset group access control and Nucleus projects, you can delegate access to those sub-organizations for their projects, and then vulnerability reporting can roll up into the larger organization. That becomes really important when you’re talking about very large organizations or agencies. Those are some of the things we’re seeing from a use case perspective, and nobody else can accomplish this. The fact that we can automate asset grouping for what people see, and that it can then roll up, is just a game changer for federal and state organizations.

Active Cyber™ –  That sounds great. When I was a solution architect for the CDM program, there was this whole compliance roll-up reporting effort using Archer, and during my time it never worked well. So it sounds great that you guys are attacking that problem.

Mr. Garrity – The fact that we’re so focused on and involved in open standards like CVSS and EPSS is important to our federal presence. It’s really important as an industry that we standardize on some set of open standards, because when you look at things like PCI on the regulation side, they say “use this standard to decide what things you’re going to fix.” So I am really bullish on that area and on how focused we are on the open standards coming out of CISA and FIRST and other parts of the government as well.

Active Cyber™ –  Speaking of CISA, they have that Vulnerability Disclosure Policy (VDP) program. So how does that impact what you do with respect to federal vulnerability management processes?

Mr. Garrity – I think it’s a really exciting program. If you’re a federal agency, you can sign up for the VDP program, which is essentially a bug bounty program. It’s important on the Nucleus side because a bounty report is treated as a finding you can ingest: you can take your VDP results and import them into Nucleus. I also think, from a federal agency perspective, that since our tool is built with multi-tenancy in mind, we can help provide access to that bounty information for different people at different organizations as well. So it’s very complementary to what CISA is working on and what federal agencies are working on as a whole.

Active Cyber™ –  Why don’t you summarize a little bit about why organizations should buy Nucleus Security?

Mr. Garrity – I think first is our ability to address the challenge of getting full visibility into assets and vulnerabilities and getting to time-to-value quickly. I believe we can provide up to 10x the progress of your existing tooling in relation to operationalizing the visibility of vulnerability management as a whole. Our product is really practitioner-focused. Our strengths are in automation, building out workflows, and helping the people doing vulnerability management every day to do it faster and do it well. And it’s built in a way that lets you do that without writing hundreds of rules. It reflects a lot of thought about how to really operationalize vulnerability management at scale, and I think those strengths go back to our founders’ vision as well. So if any of the problems we discussed are things you’re dealing with, come have a conversation with us and talk to our team. And for federal organizations, we have existing partnerships and contract vehicles. I’m not going to go into the details here, but you can go to https://nucleussec.com/, hit “contact us”, and we will loop you in with our federal team. They can communicate all the details on the partnerships we have going on and the contract vehicles for procurement and all that fun stuff as well.


Patrick, thank you so much for all the information today. I think Nucleus is on a wonderful journey so far. I really feel you have something unique for managing vulnerabilities and for finally helping organizations turn the corner on their vulnerability management programs. I am looking forward to seeing Nucleus Security continue to do great things in the marketplace.

And thanks to my subscribers and visitors to my site for checking out ActiveCyber.net! Let us know if you are innovating in the cyber space or have a cybersecurity product you would like discussed on Active Cyber™. Please give us your feedback because we’d love to know some topics you’d like to hear about in the area of active cyber defenses, authenticity, PQ cryptography, risk assessment and modeling, autonomous security, digital forensics, securing OT / IIoT and IoT systems, AI/ML, Augmented Reality, or other emerging technology topics. Also, email chrisdaly@activecyber.net if you’re interested in interviewing or advertising with us at Active Cyber™.


About Mr. Patrick Garrity. Patrick Garrity is a globally recognized cybersecurity researcher known for his pioneering work in translating vulnerability data into actionable visualizations that empower vulnerability management teams and security professionals. With over 15 years of experience in the field, Patrick has played pivotal roles in the growth of several high-impact SaaS cybersecurity startups, including Duo Security, Censys, Blumira, and his current position as Cybersecurity Researcher and VP at Nucleus Security. At Nucleus Security, he spearheads GTM strategy while conducting cutting-edge security research on vulnerability disclosure, prioritization, exploitation, and scoring.