NVIDIA Announces NeMo Guardrails: An Open-Source Tool Designed to Improve the Performance and Safety of AI-Powered Chatbots like ChatGPT


As artificial intelligence (AI) technology advances, developing effective guardrails for generative AI applications has become a pressing issue. In response to this challenge, NVIDIA has released open-source software called NeMo Guardrails, aimed at helping developers ensure the security, accuracy, and appropriateness of AI applications built on large language models (LLMs).

NeMo Guardrails provides businesses with all the necessary code, examples, and documentation to add safety measures to their AI apps. With the increasing adoption of LLMs in various industries for tasks such as answering customer queries and accelerating drug design, NeMo Guardrails offers a solution to keep these applications safe.

The software enables developers to create three types of boundaries: topical guardrails to keep the app within desired areas, safety guardrails to filter out unwanted language and ensure accurate responses, and security guardrails to restrict connections to safe third-party applications. NeMo Guardrails is designed to work with a variety of tools, including LangChain and Zapier, and is open source, making it accessible to software developers with varying levels of expertise. The goal is to make AI a dependable and trusted part of the future by prioritizing safety, security, and trust in AI development.
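As an illustration, guardrails in NeMo Guardrails are typically described in Colang, the tool's configuration language for defining conversational flows. The sketch below shows how a topical rail might be expressed; the specific message and flow names are hypothetical examples, not part of NVIDIA's announcement:

```
# Define an example of a user message the rail should match
define user ask about politics
  "What do you think about the president?"
  "Which party should I vote for?"

# Define the canned response the bot should give instead
define bot refuse to discuss politics
  "I'm a customer support assistant, so I can't discuss political topics."

# A flow tying the two together: when the user raises politics,
# the bot declines and steers back to the supported topic area
define flow politics rail
  user ask about politics
  bot refuse to discuss politics
```

A configuration like this keeps the application within its intended topical boundaries: messages that match the defined pattern are routed to the predefined refusal rather than passed to the LLM for an open-ended answer.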

NVIDIA has announced that it will integrate NeMo Guardrails into its NeMo framework, a comprehensive tool for training and tuning language models on proprietary data. The NeMo framework is already partially available as open-source code on GitHub, and enterprises can obtain it as a complete, supported package through NVIDIA's AI Enterprise software platform. The framework is also one of the tools offered in NVIDIA AI Foundations, a suite of cloud services that helps businesses create and deploy customized generative AI models using their own data and expertise.

Several organizations, including a leading mobile operator in South Korea, have already used the NeMo framework to build intelligent assistants. A research team in Sweden has used it to create LLMs that automate text functions for the country’s hospitals, government, and business offices.

Developing effective guardrails for generative AI is a complex challenge requiring continuous research as AI technology advances. NVIDIA has recognized this challenge and has made its NeMo Guardrails software open source to contribute to the developer community’s ongoing efforts to ensure AI safety. By working together on guardrail development, companies can keep their intelligent services aligned with safety, privacy, and security requirements, ensuring that these innovative tools remain on track and trustworthy.

Integrating NeMo Guardrails into the NeMo framework will help developers incorporate safety measures into their AI applications from the start of the development process. As a result, businesses can have greater confidence in their AI applications’ accuracy, security, and appropriateness, and the public can trust that these applications are safe to use. Using guardrails in AI development is essential to ensuring that these powerful tools are used responsibly and ethically, and NVIDIA’s NeMo Guardrails is a step in the right direction.





Niharika is a technical consulting intern at Marktechpost. She is a third-year undergraduate pursuing her B.Tech at the Indian Institute of Technology (IIT), Kharagpur. She is a highly enthusiastic individual with a keen interest in machine learning, data science, and AI, and an avid reader of the latest developments in these fields.