Ollama Has a Free API: Run LLMs Locally With One Command
Ollama is a tool that allows users to run open-source large language models (LLMs) locally with a single command. Models such as Llama 3, Mistral, Gemma, Phi, and CodeLlama can be downloaded and executed in seconds. Unlike cloud AI APIs, Ollama operates on the user's laptop, ensuring data privacy and offline functionality. It also exposes a REST API compatible with OpenAI, accessible via localhost:11434. With Ollama, users can quickly experience the powerful capabilities of LLMs without concerns over data leaks or high costs.
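To make the workflow concrete: after installing Ollama and pulling a model with a single command such as `ollama run llama3`, the local REST API on `localhost:11434` can be called from any HTTP client. The sketch below uses only the Python standard library against Ollama's `/api/generate` endpoint; the model name `llama3` and the example prompt are assumptions, and the code presumes the Ollama server is already running locally.

```python
import json
import urllib.request

# Default address of the local Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint.
    # "stream": False asks for a single JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # POST the JSON payload and return the model's full response text.
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("llama3", "Why is the sky blue?")` returns the model's answer as a plain string, with no API key and no data leaving the machine. Because the API is also OpenAI-compatible, existing OpenAI client libraries can be pointed at `http://localhost:11434/v1` instead.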
Powered by Cloudflare Workers + Payload CMS + Claude 3.5
Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others