Comments
Hazarth (157d): I'm running 7B models on a 3-year-old laptop with an RTX 3050 Ti with 4 GB of VRAM at good speeds, even though the models don't fit entirely on the GPU.
I can do a 13B at barely usable speeds too.
I can also run the 7B models fine on the Ryzen 5 that's in the laptop; they're usable on CPU alone.
Also a Mac M2 is perfectly capable of running 13B models very fast.
You should be able to run 3B models on even older hardware. Models like StarCoder for a local copilot can run on almost anything.
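
Hazarth doesn't say which runtime they use, but partial offload is what llama.cpp-style runners do when a model doesn't fit in VRAM. A minimal sketch, assuming llama-cpp-python and a quantized GGUF file (the path and layer count are illustrative, not their actual setup):

# A minimal sketch of partial GPU offload, assuming llama-cpp-python
# and a quantized 7B GGUF file (path and layer count are hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical quantized 7B
    n_gpu_layers=20,  # offload only as many layers as fit in ~4 GB of VRAM
    n_ctx=2048,       # modest context window to keep memory use down
)

out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])

The n_gpu_layers knob is the trade-off: more layers offloaded means faster generation, until the 4 GB of VRAM runs out and loading fails.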
Hazarth (157d): @cafecortado I did run a small model on a Radxa RockPi 4 SE, which is similar to a Pi 4 with some advantages.
I ran a 7B model on it; it was excruciatingly slow and started to overheat easily, but it did run.
Even 3B models were a bit too slow for practical use, but they do run.
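
"Excruciatingly slow" versus "usable" is easiest to judge with a throughput number. A rough sketch of measuring tokens per second, reusing the hypothetical llm object from the sketch above:

# Rough tokens-per-second check to put a number on "too slow",
# reusing the hypothetical `llm` object from the earlier sketch.
import time

start = time.perf_counter()
out = llm("Summarize what a hash map is.", max_tokens=64)
elapsed = time.perf_counter() - start

n_tokens = out["usage"]["completion_tokens"]  # tokens actually generated
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.2f} tok/s")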
Tried to run ollama with a small model on a laptop that's like 10 years old and use it inside VS Code. Weak CPU, not to mention the GPU. I didn't really expect it to work, but I was still a bit disappointed, even if it was expected. It was crying for help.
Are there any laptops powerful enough?
rant
ai
ollama
laptop
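
For the original question: once ollama is installed and a small model is pulled, the editor side is just HTTP calls to the local server. A minimal sketch against ollama's REST API; the model tag is an assumption, any small model pulled with `ollama pull` would do:

# Minimal sketch of querying a local ollama server from a script,
# the same API a VS Code extension would hit behind the scenes.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # ollama's default local endpoint
    json={
        "model": "phi3:mini",                # assumed small (~3B) model
        "prompt": "Explain list comprehensions in one sentence.",
        "stream": False,                     # return one JSON object, not chunks
    },
    timeout=600,  # CPU-only inference on old hardware can take minutes
)
resp.raise_for_status()
print(resp.json()["response"])

If even a call like this crawls on a 10-year-old CPU, the bottleneck is the model itself, not the VS Code integration.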