Ollama is an open-source project that serves as a powerful and user-friendly platform for running LLMs on your local machine.
```shell
# set up Poetry for the UI
poetry run python scripts/setup

# pull a specific model to use with Ollama
ollama pull model_name

# list the models pulled on the local machine
ollama list

# start Ollama without running the desktop application
ollama serve

# run a model from the command line
ollama run model_name

# stop Ollama running in the background
sudo pkill -9 ollama

# stop Ollama on Linux (systemd)
systemctl stop ollama.service

# remove a model
ollama rm model_name
```
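Once `ollama serve` is running, models can also be queried programmatically through Ollama's local HTTP API (by default on port 11434, via the `/api/generate` endpoint). A minimal Python sketch, using only the standard library; `model_name` is a placeholder for a model you have already pulled:

```python
import json
import urllib.request

# Ollama's local HTTP API listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body expected by Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running `ollama serve` and return the reply text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns a single JSON object
        # whose "response" field holds the full completion.
        return json.loads(resp.read())["response"]
```

Usage, with the server running and the model pulled: `generate("model_name", "Why is the sky blue?")`.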