
Local LLMs

Return to LLMs, Ollama

Remember: cloud-based LLMs are generally much faster, more accurate, and cheaper per query than a local LLM; the main reasons to run locally are privacy, offline access, and experimentation.

https://ollama.com/library

https://ollama.com/library/llama3.1

https://www.youtube.com/watch?v=0EInsMyH87Q
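
The Ollama library linked above can be driven from Python through the official ollama client (pip install ollama). Below is a minimal sketch, assuming the Ollama daemon is running and llama3.1 has already been pulled (ollama pull llama3.1):

import ollama

# Send one chat message to the locally served llama3.1 model.
response = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": "In one sentence, what is a local LLM?"}],
)

# The reply text is under message -> content in the response.
print(response["message"]["content"])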

What is the largest model (by parameter count) one can fit into the 24 GB of VRAM on an NVIDIA RTX 4090?
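
A rough rule of thumb, offered as an estimate only: a quantized model needs about (parameters x bits per weight / 8) bytes for its weights, plus a few gigabytes for the KV cache and runtime overhead. The Python sketch below applies that formula; the 2 GB overhead figure is an assumption, and real usage varies with context length.

def estimate_vram_gb(params_billion, bits_per_weight, overhead_gb=2.0):
    # Weights: params (in billions) * bits per weight / 8 gives GB
    # (approximating 1 GB as one billion bytes).
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb

for params in (8, 14, 32, 70):
    print(f"{params}B @ 4-bit: ~{estimate_vram_gb(params, 4):.1f} GB")

By this estimate, models around the 30B class at 4-bit quantization are near the practical ceiling for 24 GB, while a 70B model at 4-bit (roughly 37 GB) will not fit without CPU offloading or more aggressive quantization.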
