Saturday, 9 March 2024

Fine tune a 70B language model at home

146 comments on Hacker News.
Jeremy from Answer.AI here. This is our first project since launching our new R&D lab at the start of this year. It's the #1 most requested thing I've been hearing from open source model builders: the ability to use multiple GPUs with QLoRA training. That's why we made it our first project. Huge thanks to Tim Dettmers for helping us get started on this -- and of course for creating QLoRA in the first place! Let me know if you have any questions or thoughts.
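For readers unfamiliar with QLoRA: its memory savings come from storing the frozen base-model weights in 4 bits, with a separate scale per small block of weights. As a rough illustration only (not Answer.AI's code -- real QLoRA uses the NF4 data type with double quantization via the bitsandbytes library, not the simple linear scheme below), here is a minimal sketch of blockwise absmax 4-bit quantization:

```python
def quantize_4bit(weights, block_size=64):
    """Quantize floats to 4-bit signed ints in [-7, 7], one scale per block.

    Each block stores its absolute maximum as a float scale, so the
    quantized values only need 4 bits each -- the source of QLoRA's
    memory savings for the frozen base weights.
    """
    blocks, scales = [], []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        absmax = max(abs(w) for w in block) or 1.0
        scales.append(absmax)
        blocks.append([round(w / absmax * 7) for w in block])
    return blocks, scales

def dequantize_4bit(blocks, scales):
    """Reconstruct approximate floats from 4-bit values and block scales."""
    out = []
    for block, scale in zip(blocks, scales):
        out.extend(q / 7 * scale for q in block)
    return out
```

During QLoRA training, only small low-rank adapter matrices are trained in higher precision; the 4-bit base weights are dequantized on the fly for each forward pass.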
