Ollama Offers a Free Local LLM Runtime: Run Llama 3, Mistral, and Gemma on Your Machine

📄 Chinese Summary

Ollama lets users run high-quality language models such as Llama 3, Mistral, Gemma, and CodeLlama locally, avoiding the steep per-token costs charged by OpenAI, Anthropic, and Google, especially during development and testing. The software supports macOS, Linux, and Windows, installs with a single command, and offers more than 50 models to choose from. Ollama also exposes an OpenAI-compatible API that works as a drop-in replacement for OpenAI's endpoints with the existing OpenAI SDK, and it supports GPU acceleration on NVIDIA, AMD, and Apple Silicon, giving users full privacy and zero API costs.

📄 English Summary

Ollama Has a Free Local LLM Runtime — Run Llama 3, Mistral, and Gemma on Your Machine

Ollama allows users to run high-quality language models locally, such as Llama 3, Mistral, Gemma, and CodeLlama, eliminating the high costs associated with token-based pricing from OpenAI, Anthropic, and Google, particularly during development and testing. The software supports macOS, Linux, and Windows, installs with a single command, and offers access to over 50 models. Ollama also provides an OpenAI-compatible API that serves as a drop-in replacement for OpenAI's endpoints when used with the existing OpenAI SDK, and it supports GPU acceleration on NVIDIA, AMD, and Apple Silicon, ensuring complete privacy and zero API costs for users.
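The OpenAI-compatible API mentioned above can be exercised with nothing but the Python standard library. The sketch below assumes an Ollama server running on its default port (11434) and a previously pulled `llama3` model; the model name and prompt are illustrative, not prescribed by the article.

```python
# Minimal sketch: calling Ollama's OpenAI-compatible chat endpoint with
# only the Python standard library. Assumes `ollama serve` is running on
# the default port 11434 and that `ollama pull llama3` has been run.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_payload(prompt: str, model: str = "llama3") -> bytes:
    """Build the JSON body for an OpenAI-style chat completion request."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body).encode("utf-8")

def chat(prompt: str, model: str = "llama3") -> str:
    """Send one chat turn to the local Ollama server and return the reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The response follows the OpenAI chat completion schema.
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Why is the sky blue?"))
```

Because the endpoint follows the OpenAI schema, the official OpenAI SDK also works here: point its `base_url` at `http://localhost:11434/v1` and pass any placeholder API key.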

Powered by Cloudflare Workers + Payload CMS + Claude 3.5

Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others