
AWS updates Amazon Bedrock with new foundation models, AI management features



Amazon Web Services Inc. is rolling out a series of new foundation models to Amazon Bedrock, its managed artificial intelligence service.

The cloud giant detailed the new models today alongside a set of other enhancements. According to AWS, Bedrock customers will gain the ability to run customized neural networks on the service. They will also have access to new features for comparing AI models’ performance and ensuring they comply with content safety standards.

Introduced last April, Bedrock provides access to managed foundation models from AWS and a half-dozen other companies. The models are available through an application programming interface that removes the need for customers to manage the underlying infrastructure. As a result, there’s less work involved in integrating AI models into enterprise applications.
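As a sketch of what that API-driven workflow looks like, the snippet below builds a request for one of Bedrock's Titan text models and sends it via the AWS SDK for Python. The model ID, prompt, and response fields shown here are illustrative assumptions; actual values are listed in the Bedrock documentation, and the call itself requires AWS credentials.

```python
import json

# Illustrative model ID; available IDs are listed in the Bedrock console.
MODEL_ID = "amazon.titan-text-express-v1"

def build_request(prompt: str, max_tokens: int = 256) -> str:
    """Build a JSON request body in the shape Titan text models expect."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens},
    })

def invoke(prompt: str) -> str:
    """Send the prompt to Bedrock (requires AWS credentials to run)."""
    import boto3  # AWS SDK for Python
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=MODEL_ID, body=build_request(prompt))
    payload = json.loads(response["body"].read())
    return payload["results"][0]["outputText"]
```

Because the service manages the underlying infrastructure, this is the entire integration surface: no GPU provisioning or model hosting is involved on the customer's side.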

As part of today’s update, AWS is making an image generation model that it previewed last November generally available in Bedrock. Amazon Titan Image Generator can not only create images but also edit existing ones based on natural language instructions. It embeds an invisible watermark into the files that it creates to ease the task of identifying AI-generated content.

Next week, Bedrock users will receive access to another new model called Amazon Titan Text Embeddings V2. It’s an enhanced version of Bedrock’s existing model for creating embeddings, which are numerical vectors that encode the meaning of a piece of text so that semantically similar content can be compared and retrieved.
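To illustrate why embeddings are useful, the toy example below compares vectors with cosine similarity, the standard way applications measure how close two embeddings are. The three-dimensional vectors are made-up stand-ins; real embedding models produce vectors with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings of three phrases.
v_cat = [0.90, 0.10, 0.00]
v_kitten = [0.85, 0.20, 0.05]
v_invoice = [0.00, 0.10, 0.95]

# "cat" and "kitten" point in nearly the same direction; "invoice" does not.
assert cosine_similarity(v_cat, v_kitten) > cosine_similarity(v_cat, v_invoice)
```

Search and retrieval-augmented generation systems rely on exactly this property: embed the query, embed the documents, and rank by similarity.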

AWS is also expanding Bedrock’s catalog of third-party models. Llama 3, the latest iteration of Meta’s open-source large language model series, is now available on the service. Down the road, AWS will add the Command R and Command R+ models from Cohere, a well-funded LLM startup. Command R+, the more advanced of the two models, became available earlier this month with support for 10 languages.

“With today’s announcements, we continue to innovate rapidly for our customers by doubling down on our commitment to provide them with the most comprehensive set of capabilities and choice of industry-leading models, further democratizing generative AI innovation at scale,” said Swami Sivasubramanian, vice president of AI and data at AWS.

Customers whose requirements are not fully addressed by Bedrock’s built-in AI catalog can bring their own custom models into the service. According to AWS, this is made possible by a new feature called Bedrock Custom Model Import that is rolling out as part of today’s update. It provides the ability to make external AI models available in Bedrock with a few clicks.

On launch, the feature will work with customized versions of open-source models from Mistral AI and Meta’s Llama series. There’s also support for Flan-T5, an open-source LLM developed by Google LLC. It’s one of the newest additions to a language model series the search giant originally introduced in 2019.

Custom LLMs that users bring into Bedrock can access many of the features available to the built-in models, including a capability called Guardrails for Amazon Bedrock. Launched into general availability this morning, the feature is designed to prevent AI models from generating harmful content.

Customers can configure the feature by entering natural language descriptions of which prompts should be rejected. A company could, for example, block requests that contain sensitive data such as credit card numbers. Guardrails can also be used to regulate an AI model’s outputs: the feature lends itself to tasks such as preventing a customer support LLM from generating investment advice.
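A guardrail like the one described above could be defined roughly as follows. The dictionary mirrors the shape of the AWS SDK's guardrail-creation call, but the specific field names, entity types, and messages here are assumptions for illustration; the boto3 documentation is the authoritative reference.

```python
# Hypothetical guardrail configuration: deny investment-advice topics and
# block credit card numbers, per the support-bot example in the article.
guardrail_config = {
    "name": "support-bot-guardrail",
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                # Natural language description of the topic to reject.
                "name": "investment-advice",
                "definition": "Recommendations about buying or selling "
                              "financial products such as stocks or bonds.",
                "type": "DENY",
            }
        ]
    },
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [
            {"type": "CREDIT_DEBIT_CARD_NUMBER", "action": "BLOCK"}
        ]
    },
    "blockedInputMessaging": "Sorry, I can't help with that request.",
    "blockedOutputsMessaging": "Sorry, I can't provide that information.",
}

def create_guardrail(config: dict) -> str:
    """Register the guardrail with Bedrock (requires AWS credentials)."""
    import boto3  # AWS SDK for Python
    client = boto3.client("bedrock")
    return client.create_guardrail(**config)["guardrailId"]
```

Once created, a guardrail is referenced by ID when invoking a model, so the same policy can be applied across built-in and custom models alike.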

Determining which model is best suited for a given application sometimes requires hours of manual testing. To speed up the task, AWS has made a tool called Model Evaluation generally available in Bedrock. It allows users to select a subset of the models available in the service and compare their accuracy by having them answer a set of test prompts.

Model Evaluation can also compare neural networks based on other metrics. A company could, for example, check how well the responses an AI generates adhere to its content style guidelines. For situations where an AI’s responses may be difficult to assess using automated methods, Model Evaluation provides the option to have human testers rate the quality of the model’s output.

Image: AWS
