Search "Local LLM"
14 results
A tool that lets you easily run large language models on your own computer and chat with AI offline, completely free and privacy-friendly.
An open-source AI coding assistant that runs in your terminal or on your desktop, helping you write code and fix bugs for free without uploading your data to the cloud.
A repository collecting system prompts and model info for popular AI tools like Cursor and Devin, helping developers optimize their AI usage.
A Microsoft open-source Python tool that quickly converts common files like PDF, Word, and Excel to Markdown, specifically designed for AI model consumption.
An open-source AI assistant that can actually write code, run tests, and fix bugs, like having a tireless programming partner.
A tool that lets you easily fine-tune 100+ large language models through web and CLI interfaces, so you can train custom models without writing code.
A graphical tool for running and fine-tuning large AI models locally for free, making it easy for anyone to train models with lower memory use and higher speed.
An open-source tool that converts complex documents like PDFs and Office files into AI-readable Markdown, helping you easily extract content and structure.
Microsoft's AI agent collaboration framework, which lets multiple AI roles work together as a team; it is now in maintenance mode, so new users should look at its successor project.
An open-source project that simulates an AI hedge fund: AI personas of 19 famous investors such as Buffett analyze stocks and make trading decisions, for learning and entertainment only.
A self-contained offline survival computer system integrating Wikipedia, AI assistants, offline maps, and educational resources, keeping you informed even without an internet connection.
A free, open-source local AI collaboration platform whose agents can manage files, write code, and automate tasks, with support for multiple models and no subscription required.
A locally deployable AI personal tutor that supports private models, helping you plan your studies, answer questions, and generate quizzes like a dedicated teacher.
A tool that speeds up large language model generation by predicting blocks of tokens at once, reducing waiting time.
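The last entry's "predicting blocks of content" is the general idea behind speculative (block-wise) decoding: a cheap draft model proposes several tokens at once, and the expensive target model only has to verify them, keeping the longest agreeing prefix. The toy sketch below illustrates that acceptance loop with hypothetical stand-in functions; it is not the listed tool's actual API.

```python
# Toy sketch of block-wise speculative decoding (illustration only).
# draft_block and target_next are hypothetical stand-ins for a cheap
# draft model and an expensive target model.

def draft_block(prefix, k):
    """Cheap draft model: guess the next k tokens (toy deterministic rule)."""
    return [f"{prefix[-1]}_{i}" for i in range(k)]

def target_next(prefix):
    """Expensive target model: the 'true' next token (toy rule)."""
    return f"{prefix[-1]}_0"

def speculative_step(prefix, k=4):
    """Propose k draft tokens, keep the longest prefix the target agrees with.

    In a real system the target model scores all k draft tokens in one
    parallel forward pass, so each accepted draft token is nearly free.
    """
    proposed = draft_block(prefix, k)
    accepted = []
    for tok in proposed:
        if tok == target_next(prefix + accepted):
            accepted.append(tok)          # target agrees: accept draft token
        else:
            accepted.append(target_next(prefix + accepted))  # fall back to target
            break
    return accepted
```

With the toy rules above, one step can emit more than one token: `speculative_step(["a"])` accepts the draft's first guess and then falls back to the target for the second, which is exactly how such tools trade one expensive verification pass for several generated tokens.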