AI - Running LLM On Desktop With GPU
Knoark
03/22/2024

Mike Adams recently shared that his new model, Neo (https://brighteon.ai/Home/), uses Dolphin Mistral as its base model. I decided to test that base model and get it running on my home desktop in preparation for the release of Neo. I thought it might be helpful to share the steps for getting the model to run on a desktop with a consumer-grade GPU. My GPU is the NVIDIA GeForce RTX 2070 SUPER, which has 8 GB of VRAM. The key to getting a 7B-parameter model to load on a consumer-grade GPU is "load_in_4bit", which greatly reduces the memory footprint by quantizing the weights to 4 bits.
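Here is a minimal sketch of what that looks like with the Hugging Face transformers and bitsandbytes libraries. The checkpoint name is just an example of a Dolphin Mistral model; substitute whichever version you actually download, and adjust max_new_tokens to taste.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Example Dolphin Mistral checkpoint (an assumption) - swap in the one you use.
model_id = "cognitivecomputations/dolphin-2.6-mistral-7b"

# 4-bit quantization is what lets a 7B model fit in ~8 GB of VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the quantized layers on the GPU automatically
)

# Quick test prompt to confirm generation works.
prompt = "Explain what 4-bit quantization does for a language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This assumes transformers, bitsandbytes, and accelerate are already installed in your Python environment (pip install transformers bitsandbytes accelerate).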

There are a lot of prerequisites not shown here, such as installing the correct NVIDIA driver and setting up CUDA (a quick sanity check is sketched below), so feel free to ask questions in the comment section below, or hit me up on BrighteonSocial (https://brighteon.social/@magpie84). Have a great day and God bless!
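As a quick way to confirm the driver and CUDA stack are actually visible to PyTorch before trying to load the model, you can run something like this (a minimal check, not a substitute for the full driver/CUDA installation steps):

```python
import torch

# If this prints False, revisit the NVIDIA driver / CUDA installation first.
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # Free vs. total VRAM, in GiB - useful to see how much headroom you have.
    free, total = torch.cuda.mem_get_info()
    print(f"Free VRAM: {free / 1024**3:.1f} GiB of {total / 1024**3:.1f} GiB")
```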
