October 23, 2024

I received a call a while back from my good friend, Jim Rice, who wanted to introduce me to a company with which he had been collaborating on a solution. Jim has a knack for being on top of the next big market wave – in this case it was zero trust – so I was eager to hear more. Jim introduced me to Chris Romeo, the CEO of OneTier, which is partnered with Vantiq – Jim’s employer. When I heard more from Chris, I learned that this solution is all-encompassing, covering the foundation layers and five pillars of the zero trust framework.

[Zero trust framework diagram courtesy of the Canadian Centre for Cyber Security]

It is the first solution I have seen that is so comprehensive. Because it is built on top of Vantiq’s real-time, intelligent orchestration platform, it embeds zero trust principles while enabling real-time monitoring, adaptive threat response, and seamless automation. You can read the interview below to learn more about this zero trust solution partnership, listen to the podcast here, or do both.


Spotlight on OneTier/Vantiq Zero Trust Solution

» Title: Data Secure / Zero Trust Command Center

» Website: https://onetier.com/solutions/  and https://vantiq.com/

» Panelists: Chris Romeo and Taber West – OneTier; Jim Rice – Vantiq.

» LinkedIn: https://www.linkedin.com/in/chromeo

» LinkedIn: https://www.linkedin.com/in/taberwest

» LinkedIn: https://www.linkedin.com/in/ricejim


Chris Daly, Active Cyber™: Hi everyone. Welcome to the Active Cyber Zone. Today I’m joined by the members of a collaboration behind a zero trust solution. First I’ll introduce the members of this collaboration, and then we’ll take you through the details of how the zero trust solution is coming together and why it’s worth learning more about. So I’d like to first welcome Chris Romeo and Taber West from OneTier, and I also have Jim Rice from Vantiq. Welcome everybody. So Chris Romeo, let’s start with you. Let’s talk a little bit about how you got started in cyber, how you evolved into the position you’re in now, and how this solution originated.

Mr. Chris Romeo, Chief Executive Officer, OneTier: Yes, so Taber and I are the co-founders of a company called OneTier. Our goal when we set out to create OneTier was to assemble, for the first time, all of the essential capabilities aligned to the pillars of zero trust and to the Zero Trust Maturity Model as defined by the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency [CISA]. Being able to pull these elements together was really the core focus of our company’s mission.

Active Cyber™: So Taber, how did you get started in cyber and how did you evolve into the role that you’re in right now with OneTier?

Mr. Taber West, Chief Technology Officer, OneTier: Well, I started off in the late eighties in the US Navy in asymmetrical warfare and later spent years in the Department of Energy, where I was running the first TCP/IP networks. This was back in the days of Token Ring and early DOS, so I was there from the very beginning of threat actors trying to attack networks. Later I worked at Accenture and similar firms as a global CTO. I’m an electrical engineer and a systems software engineer. Chris Romeo and I met some years ago when we were both looking at the secure networking problem. We said there’s a way this can be done better, so we launched OneTier, formed up the company, and off we went.

Active Cyber™: Sounds like an interesting history. So Jim, tell me a little bit about the evolution of Vantiq.

Mr. Jim Rice, VP North America, Vantiq, Inc.: Thank you so much for having me. My name is Jim Rice, and Vantiq actually started in 2015. Our co-founders, Marty Sprinzen and Paul Butterworth, set out to revolutionize real-time computing – basically connecting data with autonomous agents that could then make decisions. That’s why Vantiq is such a good fit for cybersecurity – Vantiq is about ease of use and scalability, which matches up well with zero trust solutions. The Vantiq platform also features patented generative AI capabilities. So in a lot of ways Vantiq does things that no other platform can do: we can run anywhere, connect to anything, and communicate seamlessly or take action, all based on AI and all in real time. That’s a little bit of the history of Vantiq and what we do.

Active Cyber™: Sounds great. We’ve been throwing around this term “zero trust” a little bit already, but there seem to be a lot of different definitions in the cybersecurity field of what it is. So Chris, can you help the audience understand what zero trust is, at least as it’s defined from a OneTier perspective?

Mr. Romeo: So from our perspective, we’ve aligned our solution to how the US government already defines “zero trust.” Specifically, if you look at the CISA definition, they’ve defined it as composed of two different things – foundation layers and pillars. There are three foundation layers, starting with governance, meaning all of the compliance material I need to have in place – the NIST 800-series standards, things of that nature. Then there’s the automation and orchestration layer, which is making sure that I can move things around as necessary and automate processes as necessary. And then there’s the visibility and analytics layer, which is really so that I can see and understand what’s going on throughout the entire environment. That includes measuring different aspects of the environment so that I have good empirical data on which to base automation and orchestration decisions.

Next come the five pillars that stand on top of those three foundation layers. Starting with the identity verification pillar: this is really the most critical pillar, because without authentic identity nothing else works – I can’t trust anything. Then I have device trust, which measures and associates attributes of endpoints on my network, including end-user computers, IoT devices, things of that nature. Then I have the network itself, across which all of those devices transmit, including characteristics of how the network is segmented. Then I have my workloads and my applications, which are the actual applications and information doing work on the network. Finally, I have the data that those applications and workloads generate, which is ultimately stored on some type of storage, whether object-based or file-based or whatnot. So that is really what defines zero trust by the CISA standard, and how OneTier also defines it.
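For readers who want the model at a glance, here is a minimal sketch in Python of the structure Chris describes. The layer and pillar names follow the CISA model as discussed above; the code itself is purely illustrative and not part of any OneTier product.

```python
# Illustrative only: the CISA zero trust model as described in this interview.
# Three foundation layers support five pillars.
FOUNDATION_LAYERS = [
    "governance",                    # compliance frameworks, e.g. NIST 800-series
    "automation_and_orchestration",  # automated processes and movement
    "visibility_and_analytics",      # empirical measurement of the environment
]

PILLARS = [
    "identity",                      # authentic identity underpins everything else
    "devices",                       # end-user computers, IoT, other endpoints
    "networks",                      # transport and segmentation
    "applications_and_workloads",    # what actually does work on the network
    "data",                          # object- or file-based storage
]

for area in FOUNDATION_LAYERS + PILLARS:
    print(area)
```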

Active Cyber™: Okay. But in a nutshell, what do you mean by “zero trust”?

Mr. Romeo: Inherently, we want to say that we don’t trust anything. And if we don’t trust anything, then the working assumption is that there has already been a breach, and therefore nothing on this network can be implicitly trusted.

Active Cyber™: Okay, so Taber, if you can’t trust anything, that means we’ve got a threat landscape that’s pretty ominous. So how does today’s threat landscape really affect the zero trust solution that you have brought together?

Mr. West: So the threat landscape is growing exponentially. Think of it like a spider web that just keeps crawling outward, or a giant nest of ants or termites that keeps expanding. The threat landscape is morphing at a rate that humans cannot functionally keep up with on their own. If you’re that one poor security person in a massive organization – tens of thousands of endpoint devices, network traffic moving through in the terabytes on a daily or sometimes even hourly basis – we’ve literally hit the limits of physical reality in your ability to combat this. It doesn’t matter how motivated you are. So we need new tools that can combat a new threat, and that new threat is driven by AI. Everybody looks at the models as being very beneficial: wow, I’ve got large language models, I’ve got neural nets, I can do amazing things, I can have cars drive automatically and airplanes take off and land. That’s wonderful. But threat actors – who by and large are very smart, and have decided to direct their intelligence toward malicious activities – have access to the same tools, and there’s a key difference here: we have limits, we have rules, but they have no rules. So we have to build tools that can combat a threat world that has no rules. At OneTier we’ve assembled a platform of multiple modules, each one designed to work with the others but to target specific threat areas.

Active Cyber™: Okay, so Chris, can you break down some of these functional components that constitute your zero trust solution? And by the way, what do you call your zero trust solution?

Mr. Romeo: Yes, so we call it Data Secure, and the idea is that we want to keep your data very secure, very simply. The capability we’re building right now is something called Zero Trust Command Center [ZTCC], which will fit into the overall Data Secure platform and be the overarching way in which all the other pieces are orchestrated and work well together. Looking at the Data Secure platform as it exists today, it starts with the governance component we talked about in the zero trust foundation layer. We created a module called Risk Engagement to align with that governance aspect, and it helps everybody understand the current security posture of their environment. Do they have the NIST 800 controls in place? Or are they working to a different framework? Because governance is not strictly defined by NIST – it could be ISO, it could be MITRE ATT&CK, and we support those too. You choose what governance model you want to go by, but whichever one you choose, with Risk Engagement you have at least a baseline understanding of how your environment measures up today and where the gaps are, so that we can help you engage in remediating those gaps. So that’s really the first and foremost solution component.
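As a rough sketch of the kind of baseline gap analysis Risk Engagement performs, the core idea reduces to a set difference against whichever framework the customer chooses. The control IDs and structure below are hypothetical placeholders, not OneTier’s actual schema.

```python
# Hypothetical sketch: baseline a security posture against a chosen framework.
FRAMEWORKS = {
    "NIST-800-53": {"AC-2", "AU-6", "IR-4", "SC-7"},   # placeholder control IDs
    "ISO-27001":   {"A.5.15", "A.5.24", "A.8.16"},
}

def assess(framework: str, implemented: set) -> dict:
    """Report coverage and gaps for the chosen governance model."""
    required = FRAMEWORKS[framework]
    gaps = sorted(required - implemented)
    return {
        "framework": framework,
        "coverage": 1 - len(gaps) / len(required),
        "gaps": gaps,   # these feed a prioritized remediation plan
    }

print(assess("NIST-800-53", {"AC-2", "SC-7"}))
# {'framework': 'NIST-800-53', 'coverage': 0.5, 'gaps': ['AU-6', 'IR-4']}
```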

The second piece is what we call Any Cloud Orchestration, and that is aligned to the automation and orchestration foundation layer. The idea behind this is that I want to be able to work with my on-prem environments and my cloud environments, and whatever I want to push out – whether that be a whole new system, a whole new application, whatever it is – I want to be able to orchestrate that quickly, automate the process, and push it out to either or both environments. If we detect threats through Zero Trust Command Center, we’ll then be able to use orchestration to move and update environments to keep ahead of the bad guys – that’s the goal of our orchestration module.

The next and last piece of the foundation layer is the analytics and visibility piece, and that’s where Vantiq’s analytics capability ties in: now you can see, in real time, all of the information you need to make informed decisions. In our case you can also allow the ZTCC to make those decisions in an automated way if you want, but most importantly, you need to have the empirical information to understand how to make them. Analytics gives you that information.

The next piece of our Data Secure solution is the identity pillar we talked about. We call it Secure Access. Secure Access is designed so that, for the first time, I can really take control of privileged access management and understand the authorization and access components related to somebody coming into the network, from an identity perspective.

The next pillar is device trust. This is what we call Security Overwatch: we provide a single unified agent on the system that can do everything from endpoint threat detection, to remediation, to authorization and access for secure remote networking – all of the different things we’re looking at on the endpoint.

That transitions into the workloads and applications component of our solution. For that, we have something called KubeZT Secure Apps. We’ve taken all of the core services and applications that one might interact with, whether in an on-prem environment or in the cloud – DNS, directory services, and so on – and we put those services into a robust package of Kubernetes containers that can be deployed very quickly into any environment. Now I have a secure baseline of services that I can always work with across any application I have – the core common microservices that people count on in those environments. Now we can say, yes, there is a secure foundation layer beneath every application being built, which is something I don’t think you’ve truly been able to say before now.
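To make the deployment idea concrete, here is a minimal sketch of pushing a containerized baseline bundle to any cluster, on-prem or cloud. The service list, manifest paths, and cluster contexts are invented for illustration; this is not KubeZT’s actual tooling.

```python
# Hypothetical sketch: apply a hardened baseline of core services to a cluster.
import subprocess

BASELINE_SERVICES = ["dns", "directory", "logging"]   # placeholder bundle contents

def deploy_baseline(kube_context: str) -> None:
    """Apply each service's (hypothetical) manifest to the target cluster."""
    for svc in BASELINE_SERVICES:
        subprocess.run(
            ["kubectl", "--context", kube_context,
             "apply", "-f", f"manifests/{svc}.yaml"],   # placeholder paths
            check=True,
        )

# The same bundle goes to either or both environments; only the context differs.
for ctx in ["onprem-cluster", "cloud-cluster"]:
    deploy_baseline(ctx)
```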

Then of course the network pillar is germane to all of this, because without a network, data goes nowhere. We call our network pillar Stealth Networking. We want to make sure the network is hidden, because we assume that your network as it stands today has already been compromised. The only way to really get around that is to put an overlay network in place that only authorized users are aware of. If the bad guys have a hard time discovering the network – if it’s hard to find – then it’s going to be hard to access, and hard to get to the data. We’ve put a lot of thought into how to secure and hide that network and obfuscate the traffic on it.

The next component of the Data Secure solution, and the last piece, is the data component, and that’s where our Global Data Security file system comes into play. For the first time, we can say, yes, there’s a global immutable file system. We make sure your data is always being backed up, inside of every minute. As a result, if – or when – that ransomware attack comes, we know we can very quickly roll back to a point in time where you were in a consistent state, and you don’t lose much data. And data equals money, because data is the most precious resource an organization has – almost like oil is today. Right?
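Here is a minimal sketch of the rollback idea – keep frequent immutable snapshots and restore the last consistent state before an attack. The class and its interface are invented for illustration; this is not the Global Data Security file system’s actual API.

```python
# Illustrative sketch: frequent immutable snapshots with point-in-time rollback.
from datetime import datetime, timedelta

class SnapshotStore:
    def __init__(self):
        self._snapshots = []   # append-only list of (timestamp, data)

    def snapshot(self, data: bytes) -> None:
        """Taken at least once a minute by a scheduler (not shown)."""
        self._snapshots.append((datetime.utcnow(), data))

    def rollback(self, before: datetime) -> bytes:
        """Return the newest snapshot taken before the given moment."""
        candidates = [(t, d) for t, d in self._snapshots if t < before]
        if not candidates:
            raise LookupError("no consistent state prior to that time")
        return max(candidates, key=lambda td: td[0])[1]

store = SnapshotStore()
store.snapshot(b"business records, consistent state")
# ... ransomware detected a moment later; restore the last good state:
detected_at = datetime.utcnow() + timedelta(seconds=1)
good_state = store.rollback(before=detected_at)
```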

Active Cyber™: That sounds fascinating – there are a lot of different components. What’s the glue that brings all of these together into an underlying, synergistic platform? Is that the Vantiq platform that provides that?

Mr. Romeo: Absolutely. The partnership between OneTier and Vantiq is enabling Taber and me to realize the vision we had starting off with all of this. We went out and assembled all the pieces we needed to build the zero trust platform, but we didn’t have that overarching piece to pull it all together. Vantiq has a couple of advantages that allow us to bring all the zero trust components together. First, it started life as an API engine – built to connect pieces and parts together. That allows us to take all these disparate components, pull them into a unified platform, which is the Data Secure platform, and then infuse AI into it to orchestrate outcomes. And so we can now see data anywhere – on the network, in storage, on the endpoints – and correlate that information in a way we couldn’t before, because everything is connected and we have insight into all of those different aspects at once.

Active Cyber™: So I did a little homework on Vantiq prior to our call today, and one thing the Vantiq website talks about is something called an event-driven architecture [EDA]. What is this event-driven architecture, and how is it integral to the overall solution that Vantiq has put together with OneTier? Jim, can you comment a little bit on that?

Mr. Rice: Yes, absolutely. If you think about Vantiq at the highest level, it’s an orchestration runtime, deployed to run at the edge or wherever we’re given compute resources, along with the business application models. These models are not AI models; the applications deployed to the runtime basically follow a common pattern for event-driven architecture. That’s what we refer to as sense, analyze, and orchestrate – or you could think of it as the OODA loop: observe, orient, decide, and act. That is, when certain runtime events occur and these conditions are true, then orchestrate these outcomes. In the world of cyber, my opinion is that all these zero trust frameworks, solutions, and tools are pieces looking at aspects of a problem, and when they see something, they say something. And so the Zero Trust Command Center with OneTier is looking to create a common dashboard, for lack of a better term, that provides an operations-center view of what all my tools are telling me.
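Here is a minimal sketch of that sense-analyze-orchestrate pattern in generic Python. Vantiq applications are built with the platform’s own tooling, so this is only an illustration of the event-condition-action shape Jim describes.

```python
# Generic illustration of the sense / analyze / orchestrate (OODA-style) pattern.
from dataclasses import dataclass

@dataclass
class Event:
    source: str     # e.g. "firewall", "storage", "endpoint"
    kind: str
    severity: int

# Each rule: when this condition is true on an event, orchestrate this outcome.
RULES = [
    (lambda e: e.source == "firewall" and e.severity >= 8,
     lambda e: print(f"alert the SOC: {e.kind}")),
    (lambda e: e.source == "storage" and e.kind == "log_flood",
     lambda e: print("open a maintenance ticket")),
]

def handle(event: Event) -> None:        # sense: an event arrives
    for condition, action in RULES:      # analyze: evaluate conditions
        if condition(event):
            action(event)                # orchestrate: dispatch the outcome

handle(Event(source="firewall", kind="port_scan", severity=9))
```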

And to eliminate all the noise and just find the signal, we can leverage AI – generative AI or just standard anomaly detection analytics – on the streams of events that come out of each of these security tools. If I think about it from an ISO stack model (going old school here), you’ve got the storage system, the networking system, the compute system, and then the user experience system. If anomalies are happening in the storage system – let’s say a log file’s gone amok, someone fat-fingered something, and now all these debug records are being recorded and the storage system is getting overwhelmed – that’s an anomaly. If the firewalls in the networking system that are looking for anomalies detect something, they say something. A content management system watching for data being exfiltrated, through whatever means, is yet another source of events.

So Vantiq is able to take these asynchronous, non-deterministic streams of events and orchestrate outcomes when these things are happening. Then I want to bring a human into the loop – let’s call it our cyber offensive or defensive team – and make them aware that these things are occurring. To further reduce the cognitive load on the cyber teams, because a lot of these SMEs don’t really have multi-tool expertise, we can create an AI prompt on behalf of the user that says, “I’m seeing these things from these systems,” send it into a tactics-and-procedures vector database combined with an LLM or generative AI, and get back a recommendation on the next best action. Then we deliver that to the human in the loop: “Hey, these things are all occurring from these tools, and based on cyber training and the tactics and procedures, the recommendation is that you should do the following things – boom, boom, boom.” Now if you want, the generative AI could also execute those steps for the operator, if it’s given the appropriate permissions and authority to do so – go execute the scripts, the playbooks – because time is of the essence in the cyber realm. The sooner you can detect these things, the sooner you can analyze them as an anomaly, and the sooner you can dispatch the appropriate actions, the better.
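The retrieval-plus-generation flow Jim outlines might look like the following sketch. Every function here is a stand-in – the vector search, the LLM call, and the returned procedures are invented to show the shape of the flow, not any actual Vantiq or OneTier API.

```python
# Hypothetical sketch: correlate alerts, retrieve matching tactics/procedures,
# ask an LLM for the next best action, then hand the result to a human in the loop.

def search_ttp_database(summary: str, top_k: int = 3) -> list:
    """Stand-in for a vector-database lookup of tactics and procedures."""
    return ["isolate affected host", "revoke session tokens", "block source IP"]

def ask_llm(prompt: str) -> str:
    """Stand-in for a private, air-gapped LLM call."""
    return "Recommended: isolate the host, revoke tokens, block the source IP."

def recommend(alerts: list) -> str:
    summary = "; ".join(alerts)
    procedures = search_ttp_database(summary)
    prompt = (
        f"I'm seeing these things from these systems: {summary}.\n"
        f"Relevant procedures: {procedures}.\n"
        "What is the next best action?"
    )
    return ask_llm(prompt)   # delivered to the analyst, who approves execution

print(recommend(["firewall: port scan from 203.0.113.7",
                 "endpoint: persistence attempt detected"]))
```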

Active Cyber™: It sounds good to me, and you covered quite a bit there. A question for me: when you do kick off an action, how do you know it completed, and that it completed correctly?

Mr. Rice: Okay, great question. Again, if we have a human in the loop, they’re the ones who basically said, okay, go do this. If there’s an escalation procedure in place and the human in the loop did not respond within a certain period of time, then Vantiq has a pattern that will automatically go tell the next best person. But if we also have these event subscriptions and APIs coordinated, and we can say, go execute script one from product vendor A and script two from product vendor B, there’s usually an ability for the tool to tell me when it’s completed and whether it completed successfully or not. So that comes back as a feedback loop – going back to the OODA loop: observe, orient, decide, and act. You get the feedback from the appropriate technologies and say: “Hey, good news, all the actions have been completed successfully.”

Or if they haven’t completed successfully, that’s now another asynchronous process that needs to say: “Hey, someone needs to go manually deal with this situation because we tried to go shut down port 8080, or whatever the procedure needs to be, or we need to go transfer the workload from server one on cloud A to server two on cloud B, whatever is the act of mitigation. If those things don’t accomplish and there is a mechanism for us to get the status of those things after we say go do it, then we can also incorporate that into the feedback loop.

Active Cyber™: Okay. One thing I know from when I was a SOC director: there was a requirement for ticketing systems to be incorporated into everything we did. So Taber, how do ticketing systems work with this Zero Trust Command Center? Is that something critical for its functionality? Also, Jim, is it something that Vantiq has already incorporated into its API and its event-driven architecture?

Mr. Rice: Yes.

Active Cyber™: And how does it work in the complex schemes that you see for orchestration?

Mr. Rice: Most ticketing systems have APIs, so that’s pretty much the answer. But Taber, go ahead.

Mr. West: So functionally, as said, we have multiple modules that do different things. That means each module is generating tickets that could be interrelated, and Vantiq is going to do a handoff between a trigger in one area and a trigger in another – something triggers in one spot and we want it to trigger something in another area. And as Jim just stated, there are API calls where we can combine the logs. Say you’ve got Splunk, Defender, whatnot out there, where you want to dump everything and aggregate it so that people can go read it – you accumulate massive amounts of data, which makes it difficult for a human analyst. What we actually want is for the system to read out the salient information, because if you had 5,000 tickets on email that came through looking malformed, well, that’s just overload.

We’re all getting emails continuously bombarding us trying to get through. What’s important is the one that a user actually clicked on that initiated some kind of malicious action that went out. We really want to know about that. Now, the endpoint system acts on that automatically, but how does anything else in the system know that that has occurred, including the person. So this is where we want to take both the action and the actual documentation of that event – the ticket being created – and notify a person and say, you really want to pay attention to what just happened here.

Active Cyber™: Let me expand on this problem a little further, because I know ticketing systems have the ability to create what I would call a “group ticket,” for lack of a better word, which assigns out sub-tickets or tasks to different folks or different tools to say, “Hey, go do this thing.” And it wasn’t until all of those sub-actions came back successfully completed that you could close the group ticket. So in an orchestration environment with this event-driven architecture, it could create a group ticket for this tool and this tool and this tool, to go scan this and check this and do this.

So you could work through the ticketing system, or you could work directly through Vantiq and its APIs to these other tools and have it done. But then, is the ticketing system shadowing the orchestration processes as another system that must be fed separately, or are you bringing it all together and doing it all at once? How does that work? Because when you talk about SLA performance on a contract, for example, a lot of it’s based on how quickly you can close a ticket – that’s how you’re measured. And if that’s how your measurements are being driven, you kind of have to play to that scheme. So I’m trying to figure out how that works when you’re using event-driven architecture to tie tools together, and also how you deal with the ticketing environment that drives your performance on your contract.

Mr. Rice: My two cents on that: most ticketing systems are request/response and are not always event-driven – I’ll call them more of a polling-based approach. I believe the orchestration we can facilitate is going to be a much more real-time, streaming connection, where I don’t have to keep polling the ticketing system asking, “Hey, has this trouble ticket been closed?” Instead I can have an open streaming connection with the ticketing system, and it says, “Hey, Fred just closed this ticket out,” and therefore I know I can include that in the information I’m processing. So Vantiq can take whatever role we need for those situations, but I believe the timeliness of getting ticket notifications – “Hey, this cyber alert is happening” – is important, because time matters, whatever the SLA policy is.
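The contrast Jim draws might look like this in code. The `ticket_api` client and its methods are hypothetical, shown only to contrast a polling loop with an event subscription.

```python
# Hypothetical contrast: polling a ticketing system vs. subscribing to its events.
import time

def poll_until_closed(ticket_api, ticket_id: str, interval: float = 30.0) -> None:
    """Request/response style: keep asking until the ticket closes."""
    while ticket_api.get_status(ticket_id) != "closed":
        time.sleep(interval)   # notification latency is bounded below by interval

def on_ticket_event(event: dict) -> None:
    """Streaming style: react the moment the system says something."""
    if event.get("status") == "closed":
        print(f"ticket {event['id']} closed by {event['actor']}")

# Streaming setup (hypothetical API): ticket_api.subscribe(on_ticket_event)
```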

Active Cyber™: So one thing I would add to that is that I always looked at ticketing systems as kind of a drag on the mitigation process I would be running.

Mr. Rice: I would agree with you a hundred percent.

Active Cyber™: I think they’re important. Really what you guys are putting together as a zero trust solution, it can run ahead of the ticketing system in a way to close things out. All you have to do is make sure you tidy up the backend and make sure that you still get that ticketing system report done in a timely way. So in fact, you might be able to even employ the AI piece to bring it all together for you.

Mr. Rice: That’s where I was going to go. That’s exactly where I was going to go. So because gen AI is so good at generating textual reports, we can take the history of everything that happened and basically use gen AI to create an email with all the information that came from all the events and say: “Hey, populate this template or this situation report and record it into the trouble ticket system for ticket number 55222. That way, from a reporting perspective, all the data is in one place. It’s pushed back into the system of record, and people can check a box saying, yep, this is a record of all the things that happened and now I can know that there’s no open tickets or that ticket was closed within this timeframe from an SLA perspective.

Mr. West: Yes, you just hit on something really interesting, and I’m going to pick on a couple of large companies. There are alerts that are automatically created, and there can be thousands or tens of thousands of them – “network port open, network port closed, network port didn’t close.” We want to know about those alerts because they mean something: an anomaly may have occurred. Then there’s the typical ticket, which Remedy and ServiceNow have built massive empires upon. Chris Romeo is very cognizant of this type of ticket from previous work at NICE and elsewhere, where you’ve got service desks and the program people functionally in control. These tickets often have almost nothing to do with protecting your system or your enterprise. The focus is on measuring for an SLA – how many tickets a person opened or closed in a day, so we can measure them – not on what they actually mitigated on the network. If I’ve got a person doing a really good job with the right tools, they’re probably sitting there twiddling their fingers a lot of the time, because everything is operating beautifully. If they’re opening and closing tons of tickets, the system is working very poorly.

Active Cyber™: Yes, I agree a hundred percent with that, and that’s kind of why I was always an advocate for adding the AI piece – to do the auto-complete, then let a human check the auto-complete to make sure it was done right before closing the ticket. So again, the concept you guys have of mitigation and orchestration running ahead of the ticketing system, I think, is really the right approach, while including a human in the loop as needed.

So the other thing I want to talk about, and Chris Romeo you kind of talked about this earlier, was the role of identity and how that’s linked across the board to the user, the workload, the apps, the devices and everything else. So identity brings together what I would call a context-aware environment. And you hear that term a lot when you talk about zero trust environments too. Can you explain what that means and how “context” gets translated into policies?

Mr. Romeo: Yes. Context really means: do I understand the location of the device or user? Do I understand the device being used? Do I understand the time of day of the login attempt, the network connection the user is coming from, and the typical behavior of that user, such as their typing speed, navigation habits, or previous access patterns? And then context ultimately means understanding the risk data, which is really based on device health and the real-time events and access patterns going on behind the scenes. If I can understand all of those elements, then I can say: okay, this is either a good login, because it follows all the preset conditions, or it’s a bad login, and I should alert one of the components of Zero Trust Command Center to make sure it’s remediated as quickly as possible.

So hopefully an AI gets involved to escalate the event. Cutting the network connection is probably the first thing we’d want to do, and then we’d block that particular connection from ever being able to come back into the environment.
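As a toy illustration of turning that context into a policy decision: the attributes below mirror the ones Chris lists, while the weights and thresholds are invented for the sketch.

```python
# Toy context-aware login check; weights and thresholds are illustrative only.
def login_risk(ctx: dict) -> int:
    score = 0
    if ctx["location"] not in ctx["usual_locations"]:
        score += 3                                   # unfamiliar location
    if not ctx["device_known"]:
        score += 3                                   # unrecognized device
    if ctx["hour"] < 6 or ctx["hour"] > 22:
        score += 2                                   # unusual time of day
    if ctx["typing_speed_deviation"] > 2.0:
        score += 2                                   # behavioral anomaly (sigmas)
    return score

ctx = {"location": "unknown", "usual_locations": {"HQ", "home"},
       "device_known": False, "hour": 3, "typing_speed_deviation": 2.5}

risk = login_risk(ctx)
if risk >= 6:
    print("bad login: cut the connection, block it, alert ZTCC")
elif risk >= 3:
    print("suspicious: require step-up authentication")
else:
    print("good login: all preset conditions satisfied")
```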

Active Cyber™: One of the aspects I ran across in my environment was the sensitivity of the data, too, and that’s coming into play a lot now with the CMMC requirements around controlled unclassified information – and you can go beyond that in terms of data sensitivity as well. How does data sensitivity play into the context awareness of your zero trust solution? How do you deal with it?

Mr. Romeo: Yes, so we’re always looking for patterns, right? That’s really what it comes down to. If the patterns in the data look good, then we continue forward. If it looks bad, you got to disable access to the data store. And so we’re protecting ultimately the overall environment in some way, shape or form.

Active Cyber™: Does your solution label data?

Mr. Romeo: Absolutely. And that’s a key aspect of this. One of the first things that we do from a global data security perspective is we want to label all of your data and understand where the critical data is and the stuff that we don’t really care about. And so therefore everything’s being labeled, so therefore we have context as to what types of data or where and what’s being accessed and why it’s being accessed. And so when we see access behaviors that don’t look right, then we want to make sure that we’re disabling access to that particular type of data for that user. We’re also trying to see somebody gain escalation to access data that they shouldn’t be allowed to access. And that goes to managing privileged access management and making sure that context for PAM is fully understood in the environment as well – as part of that identity aspect that we talked about earlier.

Active Cyber™: That also brings up the attributes of the user and the device, because they have to match up to the sensitivity of the data – for example, are you Secret-cleared, or are you in the role of a financial analyst, or whatever those attributes might be. So do you also have the ability to monitor and track, I guess through some type of application, the attributes of the device or user as part of your Zero Trust Command Center?

Mr. Romeo: Yes, a hundred percent.

Active Cyber™: So it sounds to me like you guys have thought this solution through really well, and you’ve got key capabilities that fit all the different needs of zero trust.

Mr. Romeo: Yes, it’s really been a labor of love for Taber and I over the last two years.

Mr. West: Yes, so I have been a derivative classifier, in the federal sense, multiple times throughout my career, handling the most critical data on earth. This is the kind of stuff where there are four of us who can go into a SCIF [Sensitive Compartmented Information Facility] and that’s it – no one else, anywhere. That means you have to design the system to handle the data, and that’s where a lot of our efforts are rooted, because just taking something off the shelf and sticking super-critical data inside of it is a guaranteed 100% failure.

We had to design everything from the inside out. So when we talk about the edge versus the core and hybridized systems, the edge is actually the application doing the work with the data – not the computer or the network or the cloud. Unfortunately, most systems engineers look at it the other way around: oh, I’ve got a big cloud system, I’ve got Azure GCC – Government Community Cloud – or AWS GovCloud for DoD, something like that. And then you’ve got networks and SCIFs and systems, but the critical thing is the application that can do something with the data. So we have to start by actually identifying the data, and then the software that’s going to do something with it, because that software can move data from system to system, and we need to be able to track it as it does. So we have to tag the data, tag the systems, tag the applications. Other systems out there simply weren’t taking this approach, so they just weren’t sufficient.

Active Cyber™: That brings up one of the things I talk a lot about: authenticity of data. You might have data, but how do you know it’s the authentic data, the original data? A lot of things can be changed in between, or even faked. There are ways to validate or verify the authenticity of content, but it starts with labeling. So it’s nice that you really have the basic foundation needed for content authenticity. I think that’s going to become a bigger and bigger goal for systems as we move forward, especially with AI being able to fake things – imagery and the like. I’m happy to see that you have that in mind as you move forward.

Mr. Romeo: And I would add one more piece before you move on. It’s not just labeling the data; it’s also making sure that once data is written, it’s immutable – anything new is appended to the file – so I know I have the original source of truth and you can’t overwrite my original file, because you don’t have access to do that.

Active Cyber™: Great point.

Mr. West: Yes. Don’t forget the blockchain. So having a token that is validated and encrypted and having a ledger so that all sources that need to know the data have a copy and can validate that what they’re seeing is the actual original immutable data.

Active Cyber™: So you mentioned blockchain for the first time, I think. Okay, so your capability incorporates a blockchain ledger, so to speak?

Mr. West: A lot of them, and it depends on where you’re handing off – whether it’s network or data storage subsystems, authentication subsystems. You’ve got to maintain multiple ledgers, and you’ve got to be able to pass common tokens across the ledgers that carry the encryption, decryption, and verification, because you don’t necessarily want to mix data types in the same ledger. And this is critical whether it’s supply chain in the commercial world or critical military data – it’s the same thing. You want to keep a ledger that has a clean track, and that way you can look for anomalies in the system, as Jim said before.
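Here is a minimal sketch of the ledger idea: a hash-chained, append-only record, so any holder of a copy can verify it is seeing the original, unaltered data. This is a generic illustration of the concept, not OneTier’s implementation.

```python
# Generic hash-chained ledger: each entry commits to the one before it,
# so tampering anywhere breaks verification everywhere downstream.
import hashlib, json

def _digest(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.entries = []   # append-only

    def append(self, payload: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"payload": payload, "prev": prev}
        entry["hash"] = _digest({"payload": payload, "prev": prev})
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            expected = _digest({"payload": e["payload"], "prev": e["prev"]})
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

ledger = Ledger()
ledger.append("file v1 written")
ledger.append("file v1 appended")
print(ledger.verify())   # True; altering any payload would make this False
```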

Active Cyber™: Okay. So is this a global file system that you guys use as part of the underlying technology?

Mr. West: Yes.

Mr. Romeo: Absolutely.

Active Cyber™: Okay. So we’ve talked around AI a little bit, but I’d kind of like to dive in a little bit deeper. So Jim Rice, why don’t you start us off here if you could, what’s the role that AI plays in your solution? Does Vantiq have an AI element or is it above the Vantiq layer in terms of this overall solution? How does AI really get used and have you guys been developing LLMs to support or are you using some open source ones? How’s that all coming?

Mr. Rice: Alright, great question, Chris. So we do not produce AI models. We are an orchestrator of real-time data with analytics to produce outcomes. AI fits really well into the analytics piece. It’s ultimately a scoring engine, and what comes out of it is either a confidence score, high or low, or, in the world of generative AI, content that’s relevant to a prompt. And so we don’t care which AI models are being orchestrated to produce the final outcome.

And so let’s just go off track here a second and talk about a couple examples. AI for fire detection – so a lot of people have trained AI models on the detection of fire and smoke, lots of cameras. In the case of California, there’s 1200 plus cameras that are scattered around the hillside looking for smoke, early warning signs of fire. The AI might be pretty good at detecting what is smoke, but it also produces false positives.

For example: weather elements, certain types of cloud formations, fog or ice on a lens, solar flares, or the sun in the wrong place creating a distortion. All of these things can trigger an AI model “to see” something that’s not really there. So you sometimes want a multi-model approach, because one model is trained to look for something specific while another model is trained to look for the absence of something; you can have multiple positive models and a few negatives, and if all of them are in agreement, then we have a high-quality event. In the case of fire alerts, we would iterate and retrain the models on an ongoing basis. That set of models would then be leveraged by Vantiq, so that the recommendations to a human in the loop were continually being improved based on the subject matter expert saying, “yeah, that actually wasn’t fire or smoke, it was this fog phenomenon,” or “it was actually something else.”

But when it does come to ice on a lens, or a cracked lens, or whatever, that can also fire off a different asynchronous workflow – a trouble ticket for a field tech to go out and fix the camera. So it may not be a cyber subject matter expert who needs to be dispatched; it’s someone else who needs to go reboot or replace a failing sensor. Anyway, we have multiple demos in healthcare and oil and gas, and we will be producing a similar one with Chris and Taber that shows the incorporation of AI and generative AI. But the first place to go is usually the existing LLMs – either publicly available LLMs or, ideally, air-gapped, private-cloud-deployable LLMs that we can orchestrate, so that data being fed into those prompts or into that scoring process isn’t released out into the wild. We want to keep that kind of stuff close to the vest. Does that help?
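The multi-model agreement Jim describes reduces to a simple voting rule. The detector scores below are made up to show the shape of the logic.

```python
# Toy multi-model agreement: positive models look for the signature, negative
# models look for known confounders; only unanimous agreement yields an event.
def high_quality_event(positive_scores: list,
                       negative_scores: list,
                       threshold: float = 0.8) -> bool:
    detected   = all(s >= threshold for s in positive_scores)   # all see smoke
    confounded = any(s >= threshold for s in negative_scores)   # fog, glare, ice
    return detected and not confounded

# Two smoke models agree, but the fog model also fires: suppress the alert.
print(high_quality_event([0.91, 0.87], [0.85]))   # False -> false positive avoided
print(high_quality_event([0.91, 0.87], [0.10]))   # True  -> high-quality event
```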

Active Cyber™: Yes. Thank you. So Chris Romeo, what about the OneTier side? Are you guys also looking at building models to support the orchestration functions that you’re doing?

Mr. Romeo: Absolutely. Taber and I have been working with various LLMs that are out there right now, learning from them, and actually looking to teach those models what they need to know so they can be effective at detecting various cyber events going on inside a particular network.

Active Cyber™: How are these models different from your typical Python playbooks that are out there?

Mr. Rice: Taber, before you answer, I’ll just say Vantiq doesn’t care. We orchestrate Python models or anybody’s model. When you see something, you say something. So we just subscribe to the output of those models, and it could be anybody’s model produced with anybody’s tools. We don’t ultimately care, as long as either we can push things into the model or somebody else does. But ultimately we need to be tapped into the outbound data stream of the model so that we can correlate it with other things.

Mr. West: And that’s actually key and germane to this because we use different models and model types. So as an engineer, it’s all an algorithm. I just need to tune the algorithm for what I want it to do. Now if we’re talking about data, I want generative, I want it to analyze data, I want it to analyze data feeds, speeds, inputs, mass volume content, and give me metrics on what it is I’m looking at. Can I discard this? Can I de-dupe it? Is there a threat? Has the threat already occurred? Might a threat happen?

Some of this has nothing to do with cyber. It simply has to do with speeds and feeds and good housekeeping and the protection of the data itself. Now, if I’m talking about the network and moving that data over it – say I’ve got a database cluster; we talked about hybridization a while ago – we all know that if I build an Oracle RAC cluster or a SQL cluster and my data store is too far away in terms of milliseconds of transit time, the whole cluster will fail.

So if I’m trying to distribute a cluster on the edge, I have an issue. But with our network model, we can have a neural net, analyze the all available network pathways, specifically analyze the data we’re going to move and then create a path structure that will optimize the movement of that data fully encrypted and secure across the wire. So that’s using several completely discreet AI models to do work in one cohesive manner, and Vantiq will be able to orchestrate each one of those pieces into the whole, even though they’re doing different types of work at different segments.

Active Cyber™: So we’ve talked a lot about the solution, so let’s talk about how you’re going to approach the market now with this capability. This is a very important capability in my mind, but you kind of have to make sure you target the right way or it just doesn’t get going. So what’s your target market for this to start – not to end but to start to get it going – and then how are you’re approaching the market? So Chris Romeo why don’t you start us with that?

Mr. Romeo: Yes, happy to. When you think about the target markets, there are a couple that just scream, this is where this solution belongs. First and foremost, government is a great place for this, because there are a lot of nation states that obviously want our information. Governments are there to protect that information, and we want to protect those governments from those threat actors. Second, it’s about the money, and you have to be able to protect the money flow – that puts our solution into the finance sector, protecting capital markets, banks, credit unions, and everything in between. The next target is healthcare. The data that comes out of healthcare is massive, and it defines us as people; if somebody were to get hold of that data, they could potentially compromise us. So protecting healthcare data is extremely important and a place where our solution is a good fit. Those are the three target markets for this solution right away. Of course, everybody can use this capability in some way, shape, or form, because everybody is affected by cyber. But I don’t know that everybody has the resources to buy this solution out of the gate, so we have to focus on the places where it fits best.

Active Cyber™: Gotcha.

Mr. Rice: Can I chime in on that?

Active Cyber™: Go ahead, Jim.

Mr. Rice: Yes, just a quick comment. A potential target market that is a combination of almost all of the above is tribal. Tribal governments are often rich in cash because of their participation in the gaming industry, yet they don’t always have the best cyber defenses, so they’re very vulnerable. In the case of ransomware, they may pay the ransom, but they still don’t do anything to mitigate it, so they just get hit repeatedly by the same threat actors and keep paying. But they have the source of funds needed for this solution. Tribal healthcare is also a good place to look: they hold all of their population’s data inside the tribal healthcare system, and they have responsibilities beyond that. Overall, I would say they’re a vulnerable population with real cyber defense needs.

Active Cyber™: So the final rule has now been set down for the Cybersecurity Maturity Model Certification (CMMC). Do you see CMMC as a driver for your solution as well?

Mr. Romeo: Certainly. If you’re looking at the Defense Industrial Base, CMMC is 100% something everybody needs to adhere to. Going back to the governance aspect of the solution – understanding where your gaps are today – and then using OneTier’s Data Secure solution to help you fill those gaps and ultimately become compliant is a huge opportunity for us. Zero Trust Command Center will ultimately be that overarching solution, especially for the large prime integrators, to manage not only their own networks but also the supplier networks feeding into them. Because that’s where the nation states are going to go first, right? They’re going to go after the weakest link – your suppliers – so you need something there, always guarding the gate, making sure nobody gets in who shouldn’t be in there. That is where Zero Trust Command Center will be able to identify where those exploits are happening and shut them down very quickly, maybe even taking suppliers offline as necessary.

Mr. West: Well, we worked very closely with the CMMC folks to define this rule and get it out, and they’re a little bit panicked. It’s very interesting, because this is kind of like Groundhog Day: I did the same thing with HIPAA and HITRUST more than 20 years ago, contributing and trying to help them vet the process. In particular, CMMC is based on several NIST special publications. So you’ve got a set of rules and actions, a set of definitions, and a set of compliance requirements, which are enormous – hundreds and hundreds of rules to follow. People who are well versed in the subject look at it and say, I don’t even know where to start. And then there’s the fact that you’ve got to get up through compliance levels one, two, and ultimately three, where you get government audits.

So not only do you have to have an independent auditor and audit yourself, but you’re also going to be subject to a government audit – all of this so that you can maintain a contract for the benefit of the government and the public. So there has to be a set of tools that quantifies this and literally walks you through it. We at OneTier have created the templates, we populate them from the automated scans, we bring it all together, and then we educate the user, as well as the auditor on the other side, as to what does and doesn’t work – because otherwise it will ultimately crash and burn.

Active Cyber™: Okay. So you’re thinking of CMMC as a targeted offering moving forward down the road.

Mr. West: Yes, absolutely.

Active Cyber™: So there are other zero trust solutions out there. Some are by some of these very large integrators or security providers that have a lot of the moving parts, so to speak, or there may be some other folks like you that are trying to put solutions together across a variety of different tools. So how do you guys differentiate what you’re doing versus what you see out there in the market already?

Mr. Romeo: Yes, absolutely. There’s a lot of people that are contributing and developing for Zero Trust, and this is one of the things I commonly hear from folks is, “yeah, there’s no one size fits all for zero trust.” We do containerization over here. We do zero trust network access over here. I mean, there’s all these different aspects to it and people are doing pieces and parts. There are other folks that are saying, we’re going to take a cloud-based approach to that. And that’s great. That certainly works for some. We think that you should be in control of all aspects of zero trust and you should have the tools to be able to do that. And that is ultimately why we created OneTier Data Secure. It’s ultimately why we’re creating Zero Trust Command Center to automate all of that data, to be able to sift through it and to take action when necessary, and ultimately to create a synergistic human-AI capability that ultimately is there to protect the data, the devices, the users, the access to all that information.

Active Cyber™: Nice. Okay. So how do your customers get started? How do they start looking at implementing this and when?

Mr. Romeo: Yes, that’s real simple. Come out to OneTier’s website. It’s www.onetier.com and learn more about not only what we’re doing with the Data Secure platform, but also dive specifically into Zero Trust Command Center.

Active Cyber™: Nice. Okay. And Jim, how do they find out more about Vantiq and what you guys are doing?

Mr. Rice: As far as Vantiq is concerned, https://www.vantiq.com has the bulk of our content, and then we also have the community portal for the system integrators and partner ecosystem who are looking at how to create their own solutions that plug into OneTier’s command center, or leverage Vantiq’s capabilities in other domains where, say, a multi-domain operations center or a multi-command-and-control ISR fusion center makes sense. Getting the right information to the right people at the right time is not restricted to cyber.

I would also like to mention that OneTier has a cyber posture assessment methodology, which I think is really helpful for those who may not already be empowered. It’s an assessment: am I green, yellow, or red from an outward-facing perspective, and am I green, yellow, or red from an inward perspective? I think those assessment and scorecard kinds of technologies will give the customer a report on where they’re weak, so they can build a prioritized action plan to go after the things that matter.

Active Cyber™: Well, it all sounds good. I really appreciate everyone’s time today. I think you should find a lot more interest in what you’re doing, because this is definitely, to me, one of the best overall zero trust capabilities I’ve seen. So thanks for your time today. I look forward to seeing this in action soon – let me know so I can tell my audience when they can start to see the Zero Trust Command Center working in a demo environment.

Mr. Romeo: Yes. We’ll certainly keep you updated on that, Chris, and we certainly appreciate you having us on today. This has been great.