Local LLMs

Returning to LLMs: Ollama

Remember: cloud-based LLMs are generally far faster, more accurate, and cheaper per token than anything you can run locally.

https://ollama.com/library

https://ollama.com/library/llama3.1
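
For reference, a minimal sketch of querying a locally pulled llama3.1 through Ollama's REST API. It assumes `ollama serve` is running on the default port 11434 and that `ollama pull llama3.1` has already downloaded the model:

```python
# Minimal sketch: query a local Ollama server over its REST API.
# Assumes the server is at the default http://localhost:11434 and
# that llama3.1 has already been pulled.
import json
import urllib.request

def ask(prompt: str, model: str = "llama3.1") -> str:
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # With stream=False the server returns one JSON object whose
    # "response" field holds the full completion text.
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("Why is the sky blue?"))
```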

https://www.youtube.com/watch?v=0EInsMyH87Q

What is the biggest-parameter model one can fit into the 24 GB of VRAM on an NVIDIA RTX 4090?
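
A rough rule of thumb: weight memory ≈ parameter count × bytes per parameter, plus overhead for the KV cache and runtime buffers. The sketch below estimates this in Python; the 20% overhead factor is an assumption for illustration, and real usage varies with context length and quantization format. By this estimate, a 4090 comfortably fits models in the ~30B-parameter class at 4-bit quantization, while a 70B model needs roughly 40 GB even at 4-bit and does not fit entirely in VRAM.

```python
# Back-of-the-envelope VRAM estimate: weights = parameters * bits / 8,
# scaled by a rough overhead allowance for the KV cache and runtime
# buffers. The 20% overhead figure is an assumption, not a measurement.
def vram_gb(params_billion: float, bits_per_param: float,
            overhead: float = 0.20) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return weight_bytes * (1 + overhead) / 1024**3

# Common model sizes at full, half, and 4-bit quantized precision,
# checked against the 4090's 24 GB of VRAM.
for params in (8, 13, 34, 70):
    for bits in (16, 8, 4):
        gb = vram_gb(params, bits)
        fits = "fits" if gb <= 24 else "too big"
        print(f"{params:>3}B @ {bits:>2}-bit: {gb:6.1f} GB  ({fits})")
```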