Menu
Sign In Search Podcasts Charts People & Topics Add Podcast API Pricing

Rick Caccia

👤 Person
171 total appearances

Appearances Over Time

Podcast Appearances

We just talk about risks around AI. They get it and they get right into how the product works and can they buy it.

Once we had a clear idea of what we wanted to do, from that point to the first beta, Proof of Concepts was about six months. It's built as a set of Kubernetes microservices. We stand them up as a new instance for each customer. When we talk about these guardrails that we have around user activity, they're really separate microservice-based AI policy engines.

Once we had a clear idea of what we wanted to do, from that point to the first beta, Proof of Concepts was about six months. It's built as a set of Kubernetes microservices. We stand them up as a new instance for each customer. When we talk about these guardrails that we have around user activity, they're really separate microservice-based AI policy engines.

Once we had a clear idea of what we wanted to do, from that point to the first beta, Proof of Concepts was about six months. It's built as a set of Kubernetes microservices. We stand them up as a new instance for each customer. When we talk about these guardrails that we have around user activity, they're really separate microservice-based AI policy engines.

So like one of them might look at your prompts in a chat window to detect jailbreaking. Another one might look at prompts to detect use of confidential data. We use a mix of standard technologies and we use a bunch of custom built stuff as well. All the AI engines are custom trained. We've also incorporated a lot of open source stuff.

So like one of them might look at your prompts in a chat window to detect jailbreaking. Another one might look at prompts to detect use of confidential data. We use a mix of standard technologies and we use a bunch of custom built stuff as well. All the AI engines are custom trained. We've also incorporated a lot of open source stuff.

So like one of them might look at your prompts in a chat window to detect jailbreaking. Another one might look at prompts to detect use of confidential data. We use a mix of standard technologies and we use a bunch of custom built stuff as well. All the AI engines are custom trained. We've also incorporated a lot of open source stuff.

I think AI is interesting because there's a lot of open source stuff available. There's new stuff popping up all the time. We've also been using some early stage platform technology from some other early companies and that may or may not work out for us over time. We're trying to sort that one out.

I think AI is interesting because there's a lot of open source stuff available. There's new stuff popping up all the time. We've also been using some early stage platform technology from some other early companies and that may or may not work out for us over time. We're trying to sort that one out.

I think AI is interesting because there's a lot of open source stuff available. There's new stuff popping up all the time. We've also been using some early stage platform technology from some other early companies and that may or may not work out for us over time. We're trying to sort that one out.

We started this company thinking about the security of AI use in a way that most security startups also do, and we got it wrong. So we had to revisit and trade some things off. So we looked at this and said, oh, this is going to be like any other new type of security issue. You're going to have new types of attacks. AI-oriented attacks are going to be the big deal.

We started this company thinking about the security of AI use in a way that most security startups also do, and we got it wrong. So we had to revisit and trade some things off. So we looked at this and said, oh, this is going to be like any other new type of security issue. You're going to have new types of attacks. AI-oriented attacks are going to be the big deal.

We started this company thinking about the security of AI use in a way that most security startups also do, and we got it wrong. So we had to revisit and trade some things off. So we looked at this and said, oh, this is going to be like any other new type of security issue. You're going to have new types of attacks. AI-oriented attacks are going to be the big deal.

Let's figure out how to talk about those and prevent them. And then we went out and we talked to maybe a dozen CISOs. And the interesting thing was none of them cared. Nobody cared. They thought that was years away, and instead, they cared about much less sexy things like visibility. Like, I don't care about some crazy new attack.

Let's figure out how to talk about those and prevent them. And then we went out and we talked to maybe a dozen CISOs. And the interesting thing was none of them cared. Nobody cared. They thought that was years away, and instead, they cared about much less sexy things like visibility. Like, I don't care about some crazy new attack.

Let's figure out how to talk about those and prevent them. And then we went out and we talked to maybe a dozen CISOs. And the interesting thing was none of them cared. Nobody cared. They thought that was years away, and instead, they cared about much less sexy things like visibility. Like, I don't care about some crazy new attack.

I care about just seeing, are my employees using some new LLM-driven chatbot that happens to be hosting data in China? How do I enforce acceptable use? We ended up having to make decisions to trade off the kind of whizzy, sexy security features for things that are much less whizzy, like visibility and policy enforcement. And when we made that trade off, the results were just crazy.

I care about just seeing, are my employees using some new LLM-driven chatbot that happens to be hosting data in China? How do I enforce acceptable use? We ended up having to make decisions to trade off the kind of whizzy, sexy security features for things that are much less whizzy, like visibility and policy enforcement. And when we made that trade off, the results were just crazy.

I care about just seeing, are my employees using some new LLM-driven chatbot that happens to be hosting data in China? How do I enforce acceptable use? We ended up having to make decisions to trade off the kind of whizzy, sexy security features for things that are much less whizzy, like visibility and policy enforcement. And when we made that trade off, the results were just crazy.

We went from not being able to get a single design partner, early customer, to getting 25 design partners in a month after we changed that decision and saying we're going to trade off the sort of sexy security stuff for the boring visibility, compliance, governance stuff. And the uptake was just amazing. It was like we flipped a switch.