Since Lamini released free finetuning a few weeks ago, many of you have finetuned your LLMs. So fast and easy! But what exactly is finetuning? How is it different from prompt engineering? When and why do we need to finetune? So many questions are swirling in people’s heads 🤯
That’s why we’re releasing a new free short-course “Finetuning Large Language Models,” co-created by our CEO and AI researcher, Sharon Zhou, and her good friend and Stanford colleague, Andrew Ng.
Over a hundred million people – a third of the US population – are using ChatGPT: writing emails, studying math, debugging code, prepping for interviews, planning travel, and asking for relationship advice. What made ChatGPT so useful and successful? Finetuning!
To turn foundation models, such as Meta’s open-source Llama 2, into experts on your use case, you need to finetune them on your data.
Finetuning your own LLM has many other benefits, such as reducing hallucinations, cutting irrelevant information from responses, giving you more control over model behavior, improving reliability and stability, and lowering cost!
In this short course, you will:
- Level up from prompt engineering and learn finetuning best practices.
- Finetune your own LLMs on private data.
- Finetune with only a few lines of code using Lamini, and explore our open-source core with HuggingFace, PyTorch, and more.
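To give a feel for what finetuning actually does under the hood, here is a minimal PyTorch sketch (one of the libraries named above). It is a toy, not the course code or Lamini’s API: a tiny randomly initialized model stands in for a pretrained checkpoint, and training minimizes the standard next-token cross-entropy loss on your own token sequences.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "language model": an embedding plus a linear head over a tiny vocabulary.
# In real finetuning you would load a pretrained checkpoint instead.
VOCAB = 10
model = nn.Sequential(nn.Embedding(VOCAB, 16), nn.Linear(16, VOCAB))

# Hypothetical training data: token id sequences from "your" dataset.
# The finetuning target for each position is simply the next token.
tokens = torch.tensor([[1, 2, 3, 4], [5, 6, 7, 8]])
inputs, targets = tokens[:, :-1], tokens[:, 1:]

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-2)

losses = []
for step in range(50):
    logits = model(inputs)  # shape: (batch, seq_len, vocab)
    loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

The loop is the same shape whether the model has a few hundred parameters or a few billion: feed in your data, compare predictions to the next token, and update the weights.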
Through hands-on code, you’ll learn how to preprocess and prepare your data, finetune LLMs, evaluate their performance, and take away practical tips for finetuning.
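As a small sketch of the data-preparation step, here is one common pattern: converting raw question/answer pairs into prompt/completion records and writing them as JSONL. The field names, template, and examples below are illustrative assumptions, not the course’s exact format.

```python
import json

# Hypothetical raw data; in practice this would be your own private dataset.
examples = [
    {"question": "What is finetuning?",
     "answer": "Adapting a pretrained model to your data."},
    {"question": "Why finetune?",
     "answer": "To make the model an expert on your use case."},
]

# A simple prompt template: the model learns to continue after "### Answer:".
PROMPT_TEMPLATE = "### Question:\n{question}\n\n### Answer:"

def prepare(examples):
    """Convert raw Q&A pairs into prompt/completion records for finetuning."""
    return [
        {"input": PROMPT_TEMPLATE.format(question=ex["question"]),
         "output": ex["answer"]}
        for ex in examples
    ]

# JSONL (one JSON object per line) is a common finetuning data format.
with open("train.jsonl", "w") as f:
    for record in prepare(examples):
        f.write(json.dumps(record) + "\n")
```

Keeping the template consistent between training data and inference prompts is one of the practical details this kind of preprocessing has to get right.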
By the end of the course, you'll be able to finetune thousands of new LLMs, each within minutes!
Lamini makes finetuning LLMs super simple with just a couple of lines of code. We’re open-sourcing Lamini's core in this course – the code that makes the magic happen!
We’re excited to see what you build! Please share on Twitter @LaminiAI or firstname.lastname@example.org. We'll showcase the best Lamini llamas to the world!
- from the Lamini team on August 23, 2023