📄 English Summary
I Got Tired of Googling "Can My GPU Run This LLM?" So I Built This
Many people want to run large language models (LLMs) such as DeepSeek, Llama 3, and Mistral locally, but face uncertainty about compatibility. They Google questions like "Can my RTX 3060 run Llama 3?", get mixed answers and unreliable advice, and end up downloading models their GPU cannot support, wasting time and bandwidth. To address this, the author built a simple tool that tells users in about 5 seconds whether their GPU can run a specific LLM, eliminating the hassle of searching online.
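The post does not describe how the tool makes its decision, but the usual approach is a VRAM estimate: parameter count times bytes per parameter (set by the quantization level) plus a fixed overhead for activations and the KV cache, compared against the GPU's memory. A minimal sketch under those assumptions (the overhead constant and per-quantization byte sizes below are illustrative, not the author's actual values):

```python
# Rough GPU/LLM compatibility check. The formula is an assumption about how
# such a tool might work, not the author's actual implementation.

# Approximate bytes per parameter for common quantization levels.
BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def estimated_vram_gb(params_billions: float, quant: str = "q4",
                      overhead_gb: float = 1.5) -> float:
    """Estimate VRAM (GB) needed to load a model at the given quantization.

    overhead_gb is a hypothetical flat allowance for activations and KV cache.
    """
    return params_billions * BYTES_PER_PARAM[quant] + overhead_gb

def can_run(gpu_vram_gb: float, params_billions: float,
            quant: str = "q4") -> bool:
    """True if the estimated model footprint fits in the GPU's VRAM."""
    return estimated_vram_gb(params_billions, quant) <= gpu_vram_gb

# Example: an RTX 3060 (12 GB) with an 8B model at 4-bit quantization.
print(can_run(12, 8, "q4"))   # 8 * 0.5 + 1.5 = 5.5 GB, fits
print(can_run(12, 70, "q4"))  # 70 * 0.5 + 1.5 = 36.5 GB, does not fit
```

Real checkers refine this with per-format quantization sizes (e.g. llama.cpp's Q4_K_M) and context-length-dependent KV cache estimates, but the fit-or-not comparison is the same shape.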
Powered by Cloudflare Workers + Payload CMS + Claude 3.5
Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others