Saturday, June 22, 2024

AI On Your Local Machine: LLM Embeddings

Generate LLM Embeddings On Your Local Machine - YouTube

using Ollama: https://ollama.ai/

ollama/ollama: Get up and running with Llama 3, Mistral, Gemma, and other large language models. @GitHub

ollama run llama3

The llama3 model is a 4.7 GB download; it works on a machine with 32 GB of RAM.
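
To generate the embeddings themselves (rather than chat), Ollama also exposes a local REST API. A minimal Python sketch, assuming Ollama is running on its default port 11434 and the llama3 model has already been pulled:

# Minimal sketch: ask the local Ollama server for an embedding vector.
# Assumes Ollama is running at http://localhost:11434 and llama3 is pulled.
import requests

def get_embedding(text, model="llama3"):
    # POST to Ollama's embeddings endpoint; the response contains a list of floats
    response = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": model, "prompt": text},
    )
    response.raise_for_status()
    return response.json()["embedding"]

vector = get_embedding("Hello from my local machine")
print(len(vector))  # dimensionality of the embedding vector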


NeuralNine @GitHub

NeuralNine / Repositories @GitHub
