The AI revolution is here. You’ve been prompt-engineering LLMs and asking yourself how far prompts alone can take you.
Cue: easy, fast LLM training.
The future of software engineering will be architecting a new layer of LLM infrastructure above foundation models. It will be about steering LLMs towards better performance with powerful programs and robust data pipelines.
From the perspective of AI researchers, the LLMs you are playing with today are the worst ones you will use over the next decade.
That is to say, this is just the beginning of improving LLMs. You’ll get to heavily personalize LLMs with your own data. And LLMs will dramatically transform user experience, lowering the barriers to entry on every product and feature you’ve built or seen before.
But steering LLMs like that feels impossible right now, for many reasons.
This can’t possibly be the only way to build in this AI revolution. You’re right.
That’s why we’re excited to share a Lamini demo that lets any software engineer specialize the most powerful LLMs for their use case, on proprietary data and infrastructure.
Sign up, log in, and get started with the Lamini library to do this today!
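To make that concrete, here is a minimal sketch of what a first call through the Lamini library can look like. The `LLMEngine` class, the `Type`/`Context` helpers, and the parameter names are assumptions based on Lamini’s early examples, not an authoritative reference; the library’s own docs are the source of truth.

```python
# Minimal sketch of a first Lamini call. Class and parameter names
# (LLMEngine, Type, Context, output_type) are assumptions based on
# early Lamini examples; check the official docs for the current API.
from llama import LLMEngine, Type, Context

class Question(Type):
    question: str = Context("a question from the user")

class Answer(Type):
    answer: str = Context("the model's answer")

llm = LLMEngine(id="my_first_llm")  # "id" names this LLM instance
print(llm(Question(question="What does Lamini do?"), output_type=Answer))
```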
Top technology leaders have told us about one pain point again and again: navigating internal engineering documentation.
Table: Side-by-side answers from a prompt-engineered LLM (with retrieval) and a Lamini-optimized LLM (with training). Play with the Lamini-optimized LLM live now: just use your Google account to sign into Lamini and start asking questions. Note that results on our live version are always improving, so expect some differences.
Internal engineering documentation (and code) can be difficult to navigate: it’s hard to find the relevant information, understand the code structure, and identify dependencies. The fastest fix is to ask someone knowledgeable about that part of the codebase and get the right answer immediately, but those people are often hard to reach.
Now, an LLM that has read all of your code and documentation could help both you and your customers navigate it. In many cases, this would need to run locally to keep your source code private. We had the same idea, so we set out to prompt-engineer a model with retrieval to do this.
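A prompt-engineered retrieval baseline of that kind can be sketched in a few lines. In the snippet below, `embed` and `generate` are hypothetical stand-ins for your embedding model and base LLM; the rest is just nearest-neighbor lookup plus prompt stuffing.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def answer_with_retrieval(question, chunks, embed, generate, k=3):
    """Retrieve the k doc chunks most similar to the question and
    stuff them into a base LLM's prompt. `embed` and `generate` are
    hypothetical callables you supply."""
    q_vec = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q_vec), reverse=True)
    context = "\n\n".join(ranked[:k])
    prompt = (
        "Answer the question using only the documentation below.\n\n"
        f"Documentation:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)
```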
However, in addition to data privacy concerns, off-the-shelf solutions were not able to achieve good performance for this use case: they hallucinated false information, failed to find the relevant information, and got derailed when users asked about unrelated things.
So, we used the Lamini library to specialize a general LLM to this specific use case, by training it on all of Lamini’s internal engineering documentation.
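In code, that specialization step reduces to pairing docs-derived questions with answers and handing them to the engine, roughly as below. As before, `LLMEngine` and the `add_data`/`train` method names are assumptions about the Lamini interface rather than its exact API, and the example pair is an illustrative placeholder.

```python
# Sketch of specializing an LLM on internal docs with Lamini.
# add_data()/train() are assumed method names, not the verified API.
from llama import LLMEngine, Type, Context

class Question(Type):
    question: str = Context("a question about our engineering docs")

class Answer(Type):
    answer: str = Context("an answer grounded in those docs")

llm = LLMEngine(id="docs_qa")

# Question/answer pairs derived from internal documentation
# (placeholder content; one pair per documented fact).
examples = [
    [Question(question="Where is the data pipeline configured?"),
     Answer(answer="In pipeline/config.yaml, loaded at startup.")],
]

llm.add_data(examples)  # register the training data
llm.train()             # specialize the general LLM on it
```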
In the Table above, you can compare the two approaches: the Lamini-optimized LLM (i.e., with training) and the prompt-engineered LLM (i.e., with retrieval). The Lamini-optimized LLM does not hallucinate false information (row 1), finds the relevant information (row 2), and steers the conversation back on track when the user asks about other things (row 3).
You can train LLMs using Lamini by writing code that connects data from your data warehouse or data lake, as in the sketch below.
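“Connecting your data” can be as small as one query plus a reshaping step. This sketch reads from a warehouse via SQLAlchemy; the connection string, table, and column names are hypothetical placeholders for your own schema.

```python
# Hypothetical example: pull Q&A rows out of a warehouse table and
# shape them into input/output pairs for an LLM training call.
import sqlalchemy

engine = sqlalchemy.create_engine("sqlite:///warehouse.db")  # your DSN here

with engine.connect() as conn:
    rows = conn.execute(
        sqlalchemy.text("SELECT question, answer FROM support_qa")
    ).fetchall()

training_data = [
    {"input": {"question": q}, "output": {"answer": a}} for q, a in rows
]
```

From there, `training_data` feeds into a training call like the one sketched above.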
To do this on your own infrastructure, you just need to install Lamini locally. Sign up for our waitlist or start training on our infrastructure now!
Team++: We are growing our team with people who are passionate about making LLMs widely accessible to empower new, extraordinary use cases. If that’s you, please apply via https://jobs.lever.co/laminiai 🤝
- from the Lamini team on June 15, 2023