Rankings/LLaMA Factory

LLaMA Factory

hiyouga/LLaMA-Factory

A tool for easily fine-tuning 100+ large language models through a web UI or CLI, letting you train custom models without writing code.

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

Stars
70,270
Forks
8,600
Watchers
332
Issues
977

📂 AI & Automation · 🤖 AI Related · 💻 Python · 📄 Apache-2.0

AI Summary

🔍

What This Project Does

An all-in-one platform for fine-tuning large language models, turning open-source models such as Llama 3 and Qwen into your own customized assistant.

🔧

What Problems It Solves

It removes the need to write complex training code, hand-tune hyperparameters, and wrestle with environment setup, making fine-tuning as simple as picking options from a menu.

👥

Who It's For

Developers who want to customize a private AI, students researching large models, or companies that want to build in-house knowledge bases at low cost.

📋

Typical Use Cases

1. Teach an AI your company's documents
2. Restyle open-source models with a specific tone or persona
3. Train vertical-domain models on a local machine

⭐

Key Strengths & Highlights

Supports a wide range of models, offers a visual web interface, runs on consumer-grade GPUs, and has an active community with thorough documentation.

🚀

Getting Started Requirements

Requires basic Linux skills and usually a GPU environment, but the web UI supports zero-code operation.
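
To make the "zero-code" claim concrete: training runs in LLaMA Factory are driven by YAML configs rather than custom scripts. The sketch below shows what a minimal LoRA supervised fine-tuning config might look like; the specific keys, model ID, and dataset name are assumptions based on the project's documented examples, so verify against the `examples/` directory in the repository before use.

```yaml
# Minimal LoRA SFT config sketch — key names and values are assumptions,
# check the repo's examples/ directory for the current format.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # assumed model id
stage: sft                     # supervised fine-tuning
do_train: true
finetuning_type: lora          # parameter-efficient tuning for consumer GPUs
lora_target: all               # apply LoRA adapters to all linear layers
dataset: alpaca_en_demo        # assumed built-in demo dataset name
template: llama3               # chat template matching the base model
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

If saved as `sft.yaml`, a run would typically be launched with something like `llamafactory-cli train sft.yaml`, or interactively via `llamafactory-cli webui` — both commands taken from the project's documented CLI, so confirm them against the README.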

🎯

Purpose

Best suited for low-cost custom-LLM scenarios where some hardware is available; not for complete beginners without technical knowledge or GPU resources.

Tech Stack

—

Project Info

Primary Language
Python
Default Branch
main
License
Apache-2.0
Created
May 28, 2023
Last Commit
7 days ago
Last Push
7 days ago
Indexed
Apr 19, 2026