CalypsoAI beefs up generative AI chatbot moderation with customizable security scanners

Generative artificial intelligence-focused security startup Calypso AI Corp. is beefing up its software-as-a-service offering with the launch of what it calls “next-gen security scanners” and enhanced functions for enterprise collaboration platforms such as Slack and Microsoft Teams.

The new features, set to launch at the RSA security conference in San Francisco on May 6, are aimed at making it easier for companies to identify and manage the risks associated with generative AI models.

CalypsoAI debuted its SaaS security platform last year. Its flagship feature is a generative AI governance tool that actively monitors how company employees are using large language models such as GPT-4 or Gemini in real time, with full auditability, traceability and attribution for costs, content and user engagement.

The platform, called CalypsoAI Moderator, helps prevent employees from sharing sensitive company information with LLMs, while ensuring that model outputs are verified and grounded in truth, an especially important capability for consumer-facing chatbots.

The new Generative AI Security Scanner builds on that foundation, giving organizations a simple way to build their own customized security scanners for AI chatbots. It’s designed so that companies can target particular vulnerabilities and threats and establish more detailed policies to block or redact specific categories of content. In this way, users can adapt their LLM security settings to their precise needs in order to safeguard corporate secrets such as new drug formulations or proprietary financial data.

In addition, CalypsoAI announced a new security enhancement for generative AI chatbots that live in platforms such as Slack and Microsoft Teams. The new offering is said to integrate easily into existing workflows, extending the security capabilities of the CalypsoAI Moderator platform to those tools, without disrupting the chatbots’ day-to-day operations. It supports chatbots powered by models from companies that include OpenAI and Anthropic PBC, as well as various open-source models.

The addition of the new tools allows CalypsoAI to advance generative AI security by providing organizations with the ability to create, build, test and deploy tailored security measures that go beyond the basic capabilities of its platform, the company said.

Co-founder and Chief Executive Neil Serebryany said this will help companies adapt more quickly to emerging threats and maintain rigorous compliance in the face of changing regulations.

“These solutions revolutionize enterprise security by allowing organizations to precisely tailor their AI defenses and control AI chatbot usage securely and compliantly,” Serebryany said. “This integrated approach not only enhances data protection but also optimizes communication workflows across platforms, ensuring teams can collaborate safely and effectively.”

Serebryany dropped by theCUBE during SiliconANGLE’s and theCUBE Research’s “Supercloud 6: AI Innovators” event in March, when he detailed the company’s vision to enable every enterprise to take advantage of generative AI’s potential.

“Solutions like ours are a key part of being able to actually verify whether information is trustworthy or not,” he said. “We actually have a feature that allows enterprises to, as they get responses from generative AI models, verify if a response is true or not true and build their own internal taxonomy of trustworthy information on the generative AI side of the house.”

Serebryany gave the full interview, including his take on the challenges of security, access and orchestration in the development of generative AI models, during that event.
