Welcome to RAGFlow’s documentation!
Contents:
- About RAGFlow: Named Among GitHub’s Fastest-Growing Open Source Projects
- Security Concerns
- RAGFlow System Architecture
- From RAG to Context: A 2025 Year-End Review of RAG
- Example
- Synergy of the Three Models
- Why vLLM is Used to Serve the Reranker Model
- Serving vLLM Reranker Using Docker (CPU-Only)
- Integrating vLLM with RAGFlow via Docker Network
- Batch Processing and Metadata Management in InfiniFlow RAGFlow
- How the Knowledge Graph in InfiniFlow/RAGFlow Works
- Running Llama 3.1 with llama.cpp
- Running Multiple Models on llama.cpp Using Docker
- Deploying LLMs in Hybrid Cloud: Why llama.cpp Wins for Us
- How InfiniFlow RAGFlow Uses gVisor
- RAGFlow GPU vs CPU: Full Explanation (2025 Edition)
- Why Does RAGFlow Still Need a GPU Even When Using Ollama?
- Complete RAGFlow Pipeline (with GPU usage marked)
- What DeepDoc Actually Does (and Why GPU Makes It 5–20× Faster)
- Real-World Performance Numbers
- When Do You Actually Need ragflow-gpu?
- Recommended Setup in 2025 (Best of Both Worlds)
- Monitoring & Verification
- Conclusion
- Upgrade to the Latest Release
- Upload a Document
- GraphRAG
- Chat
- Why Infinity is a Good Alternative in RAGFlow
- MinerU and Its Use in RAGFlow
- What Is the Agent Context Engine?
- Using SearXNG with RAGFlow
- Homelab