Lamini raises $25M for its AI development and inference platform

Lamini, a startup with a platform for building artificial intelligence models and deploying them in production, has received $25 million from a who’s who of tech investors.

The company announced the investment, which was spread over two funding rounds, on Thursday. Lamini’s institutional backers include Advanced Micro Devices Inc.’s venture capital arm, First Round Capital and Amplify Partners. They were joined by AI pioneer Andrew Ng, OpenAI co-founder Andrej Karpathy and the chief executives of Dropbox Inc., Figma Inc. and Louis Vuitton parent company LVMH.

Lamini CEO Sharon Zhou earned her doctorate under Ng at Stanford University, where she was a faculty member before launching the startup. Co-founder Greg Diamos, Lamini’s chief technology officer, previously co-founded the MLPerf machine learning consortium. The group develops benchmarks that are used to compare the performance of neural networks, graphics cards and related technologies.

Lamini, officially PowerML Inc., provides a software platform that software teams can use to train AI models. It can run neural networks on graphics processing units from AMD or Nvidia Corp. in both cloud and on-premises environments. Companies that go down the on-premises route may deploy Lamini on air-gapped infrastructure, or hardware that is isolated at the network level for cybersecurity reasons.

Lamini built its platform with large-scale AI projects in mind. According to the company, customers can distribute workloads across more than 1,000 graphics cards when necessary.

One of the most challenging tasks involved in training a large language model is configuring its hyperparameters, the settings fixed before training that define details such as how many artificial neurons the model includes and how quickly it learns. Lamini provides a set of default hyperparameters that spare developers the hassle of setting up everything from scratch. At the same time, software teams with more advanced requirements have access to a tool for defining custom LLM settings.
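The defaults-with-overrides pattern the company describes can be sketched in plain Python. The function name, hyperparameter names and values below are illustrative assumptions, not Lamini’s actual interface:

```python
# Illustrative sketch of defaults-with-overrides for LLM training settings.
# All names and values here are hypothetical, not Lamini's actual API.

DEFAULT_HYPERPARAMETERS = {
    "learning_rate": 3e-4,   # how aggressively weights are updated
    "batch_size": 32,        # training examples processed per step
    "num_layers": 24,        # depth of the network
    "hidden_size": 2048,     # width of each layer (number of units)
}

def build_training_config(overrides=None):
    """Start from sensible defaults; advanced users override selectively."""
    config = dict(DEFAULT_HYPERPARAMETERS)
    config.update(overrides or {})
    return config

# Most teams can use the defaults as-is:
basic = build_training_config()

# Teams with advanced requirements override only what they need:
custom = build_training_config({"learning_rate": 1e-4, "num_layers": 48})
```

The point of the pattern is that a team only ever specifies the settings it cares about, while everything else stays at a vetted default.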

Lamini says its platform can also be used to fine-tune AI models that have already been trained. That’s the process of optimizing a neural network in a way that allows it to perform a specific task more effectively. The platform provides several ways of going about the task.

Traditionally, fine-tuning an LLM required modifying a significant number of parameters, the internal weights a model learns during training that determine how it processes data. Lamini supports parameter-efficient fine-tuning, or PEFT, an approach that significantly reduces the number of parameter changes involved in the process. The technique can reduce the cost of adapting neural networks to new tasks.
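The savings from PEFT can be illustrated with a quick calculation. LoRA, one popular PEFT method, freezes the original weight matrix and trains two small low-rank factors instead; the article does not say which PEFT method Lamini uses, and the dimensions below are representative rather than taken from any real model:

```python
# Rough illustration of why PEFT (here, the LoRA variant) cuts the number
# of trainable parameters. Dimensions are representative, not Lamini's.

def full_finetune_params(d_in, d_out):
    """Full fine-tuning updates every entry of the d_in x d_out weight matrix."""
    return d_in * d_out

def lora_params(d_in, d_out, rank):
    """LoRA freezes the original matrix and trains only two low-rank
    factors: A (d_in x rank) and B (rank x d_out)."""
    return d_in * rank + rank * d_out

d = 4096  # a typical hidden size for one layer of a large model
full = full_finetune_params(d, d)  # 16,777,216 trainable parameters
lora = lora_params(d, d, rank=8)   # 65,536 trainable parameters

print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x fewer")
```

At rank 8, the low-rank factors carry 256 times fewer trainable parameters than the full matrix, which is where the cost reduction comes from.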

Some AI projects use a different approach, dubbed retrieval-augmented generation, or RAG, that supplies a model with relevant external information at inference time, making it possible to handle new tasks without retraining the model. Lamini supports that technique as well. For good measure, it provides a dashboard that enables developers to compare the accuracy of their fine-tuned models with the original version.
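The core of RAG can be sketched in a few lines: retrieve the most relevant document for a query, then prepend it to the prompt so the model answers from that context. The toy keyword-overlap retriever below stands in for the embedding-based vector search a production system would use, and nothing here reflects Lamini’s actual implementation:

```python
# Minimal retrieval-augmented generation (RAG) sketch. A real system would
# use embedding-based vector search; keyword overlap stands in for it here.

DOCUMENTS = [
    "Lamini runs on AMD and Nvidia GPUs in cloud and on-premises environments.",
    "PEFT reduces the number of parameters changed during fine-tuning.",
    "Air-gapped infrastructure is isolated at the network level for security.",
]

def retrieve(query, documents):
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents,
               key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query, documents):
    """Prepend retrieved context so the model can answer without retraining."""
    context = retrieve(query, documents)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("Which GPUs does Lamini support?", DOCUMENTS)
```

Because new knowledge enters through the retrieved context rather than the model’s weights, updating what the model “knows” is as simple as updating the document store.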

Besides streamlining AI development, Lamini also promises to ease the task of deploying newly created LLMs in production. It provides a set of inference management features that allow developers to regulate the style in which a language model generates text, the format of its output and related details. It claims its platform makes it possible to perform inference significantly more cost-efficiently than with proprietary LLMs such as Claude 3.
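The article doesn’t detail Lamini’s inference API, but regulating generation style and output format typically means passing generation settings alongside the prompt and validating structured output. The following is a hypothetical sketch of that pattern; none of the names reflect Lamini’s actual interface:

```python
import json

# Hypothetical inference-management sketch: generation settings control
# style, and a JSON check enforces the requested output format. These
# function and field names are illustrative, not Lamini's API.

def make_inference_request(prompt, temperature=0.2, output_format="json"):
    """Bundle a prompt with style (temperature) and format settings."""
    return {"prompt": prompt, "temperature": temperature,
            "output_format": output_format}

def validate_output(raw_text, output_format):
    """Reject model responses that don't match the requested format."""
    if output_format == "json":
        try:
            return json.loads(raw_text)
        except json.JSONDecodeError:
            raise ValueError("model output was not valid JSON")
    return raw_text

request = make_inference_request("List three GPU vendors as JSON.")
parsed = validate_output('{"vendors": ["AMD", "Nvidia", "Intel"]}',
                         request["output_format"])
```

A low temperature biases the model toward predictable, conservative wording, while the format check lets downstream code rely on machine-readable output.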

Lamini will use its newly disclosed funding to hire more employees and expand its AI infrastructure. The effort will place a particular emphasis on adding more AMD graphics cards. In conjunction, it plans to develop “deeper technical optimizations” for machine learning workloads.

Image: Unsplash
