Ollama: running Large Language Models locally
I recently discovered a new tool called Ollama, which allows you to run Large Language Models (LLMs) locally, without needing a cloud service. Its usage is similar to Docker's, but it's specifically designed for LLMs. You can use it as an interactive shell, through its REST API, or from its Python library. After you install Ollama, you can start chatting with...
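To illustrate the REST API mentioned above, here is a minimal sketch in Python. It assumes Ollama's default local endpoint (`http://localhost:11434`) and its `/api/generate` route, with `llama2` as a placeholder for whatever model you have pulled; the request is only built and printed here, so the snippet runs even without a live Ollama server.

```python
import json

# Ollama's default local REST endpoint (assumption: default install, no
# custom host/port configured).
OLLAMA_URL = "http://localhost:11434/api/generate"

# Build the request body for a single, non-streaming completion.
payload = {
    "model": "llama2",              # placeholder: any model pulled locally
    "prompt": "Why is the sky blue?",
    "stream": False,                # ask for one JSON response, not a stream
}

body = json.dumps(payload)
print(body)

# To actually send it (requires a running Ollama instance):
# import urllib.request
# req = urllib.request.Request(
#     OLLAMA_URL,
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The same request can be made with `curl` from the interactive shell, which is handy for quickly checking that the local server is up.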
Read more at andreagrandi.it