A flexible, secure, vendor-agnostic knowledge platform.

Bring your documents, pick your stack, and turn them into an AI-powered knowledge engine with intelligent caching, enterprise-grade security, and full control over your data.

💬 What are the security policies for document access?

🤖 Based on your organization's documents, here are the key security policies...

What It Is

RAG Fortress is a plug-and-play Retrieval-Augmented Generation system with support for multiple LLMs, vector stores, embedding providers, and custom security layers. It features semantic caching for up to 80% cost reduction, message encryption at rest, and unified configuration for effortless provider switching.

Open source, local-LLM friendly, and production-ready with enterprise security. Build the RAG system you need, not the one a vendor wants you to use.

Why It's Not Just Another RAG Tool

🔌 Vendor Agnostic by Default

Choose your LLM, your embeddings, your vector store, and your database. Nothing is locked in.

🏠 Internal LLM Support

Local models via llama.cpp or any self-hosted provider. No document leaves your environment if you don't want it to.

🔒 Enterprise-Grade Security

HTTPOnly cookie authentication, message encryption at rest, automatic log sanitization, and multi-level security clearance enforcement.

Intelligent Semantic Caching

Multi-tier semantic cache reduces LLM API costs by up to 80% and delivers instant responses for similar queries.

🎯 Unified Configuration

Switch between LLMs, embeddings, and vector stores with unified settings. Supports hybrid search, fallback LLMs, and system diagnostics for complete control.

👥 Designed for Real Teams

User management, admin dashboard, optional error reporting, logging, and organization support.

Core Features

Easy setup with environment-based configuration
Document ingestion with multiple file formats
Supports OpenAI, Gemini, Llama.cpp, and HuggingFace models
Plug-and-play vector store integrations (Qdrant, Pinecone, FAISS, Chroma)
Multi-tenant ready with department isolation
Role-based access control (RBAC)
Chat interface + search interface
Admin controls for LLM config, embeddings, vector store
Semantic caching with RedisVL for up to 80% cost reduction
Message encryption at rest with optional cache encryption
HTTPOnly cookie authentication & automatic log sanitization
Hybrid search (vector + keyword) with configurable weights
System diagnostics & health monitoring
Unified configuration for effortless provider switching (see the example below)
RESTful API endpoints for integration
Document approval workflow
Demo mode for public showcases
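
To give a concrete feel for provider switching, here is a minimal .env sketch. The variable names below are illustrative assumptions, not the project's actual keys; see .env.example in the repository for the real settings.

# Illustrative only: variable names are assumptions, check .env.example for the real keys
LLM_PROVIDER=openai              # or gemini, llamacpp, huggingface
EMBEDDING_PROVIDER=huggingface   # embeddings can come from a different provider than the LLM
VECTOR_STORE=qdrant              # or pinecone, faiss, chroma
SEMANTIC_CACHE_ENABLED=true      # RedisVL-backed semantic cache
HYBRID_SEARCH_VECTOR_WEIGHT=0.7  # hybrid search: weight for vector similarity
HYBRID_SEARCH_KEYWORD_WEIGHT=0.3 # hybrid search: weight for keyword matching

Swapping a provider is then intended to be a one-line change followed by a restart, with no code changes.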

Who Benefits from RAG Fortress?

From developers to enterprises, RAG Fortress adapts to your specific knowledge management needs.

👨‍💻 Developers & Engineers

  • Spin up RAG systems without vendor lock-in
  • Experiment with different LLM architectures
  • Build custom RAG solutions for clients

🏢 Enterprises & Businesses

  • Secure internal knowledge base with RBAC
  • Department-level access control
  • Policy & compliance document management

🎓 Educational Institutions

  • Course materials & lecture notes hub
  • Student AI study assistants
  • Research paper repository with semantic search

🔐 Security-Conscious Organizations

  • 100% on-premise deployment with local LLMs
  • Air-gapped environment support
  • Multi-level security clearance enforcement

🌍 NGOs & Nonprofits

  • Community knowledge systems
  • Resource libraries without cloud costs
  • Multi-language document support

⚖️ Legal & Compliance Teams

  • Case law & regulation research
  • Contract & agreement analysis
  • Audit trail & activity logging

How It Works

1. Upload Documents

Add PDFs, text files, CSVs, and more to your knowledge base

2. Configure Providers

Choose your LLM, embeddings, and vector store

3. Index Content

Process and vectorize your documents

4. Start Querying

Ask questions and get context-aware answers
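
Queries can also be sent programmatically through the REST API. The route, field names, and cookie name below are assumptions for illustration; the actual endpoints are listed in the interactive API docs at http://localhost:8000/docs.

# Hypothetical request: adjust the route and payload to match the API docs
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  --cookie "session=<your session cookie>" \
  -d '{"message": "What are the security policies for document access?"}'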

See It In Action

Experience the power of intelligent document conversations

💬 RAG Fortress Chat Interface

Intelligent Conversations with Your Documents

Get context-aware answers powered by your knowledge base

💬 Natural Conversations

Chat naturally with your documents and get intelligent responses

🔐 Enterprise Security

HTTPOnly cookies, message encryption at rest, log sanitization & RBAC

Lightning Fast

Semantic caching reduces costs by up to 80% and delivers instant responses for similar queries

🎯 Context-Aware

Understands your documents and provides relevant answers

🤖 Multiple LLMs

Support for OpenAI, Gemini, HuggingFace, and local models

📊 Activity Tracking

Complete audit trails and analytics dashboard

Complete Platform Overview

Admin Dashboard: activity monitoring & analytics

Application Settings: configure LLMs & vector stores

Department Management: organize teams & access control

Multi-Tier Invitations: email-based user onboarding

Activity Logger: detailed audit logs

Detailed Configuration: fine-tune every setting

Quick Start Guide

Get RAG Fortress running in minutes with our streamlined setup process.

🔧 Backend Setup
# Clone repository
git clone https://github.com/nurudeen19/rag-fortress.git
cd rag-fortress/backend

# Install with uv (recommended)
uv sync

# Activate environment
.venv\Scripts\Activate  # Windows
# source .venv/bin/activate  # macOS/Linux

# Configure environment
cp .env.example .env
# Edit .env with your API keys

# Initialize database
python setup.py

# Start server
python run.py

🎨 Frontend Setup
# Navigate to frontend
cd frontend

# Install dependencies
npm install

# Configure environment
cp .env.example .env
# Set API_URL if needed

# Start development server
npm run dev

# Access at:
# Frontend: http://localhost:5173
# Backend: http://localhost:8000
# API Docs: http://localhost:8000/docs
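
A quick way to confirm the backend came up is to probe the API docs URL above from the command line:

# Smoke test: a 200 response means the API docs page is being served
curl -I http://localhost:8000/docs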

Ready to Build Your Knowledge Platform?

Start building with RAG Fortress today. Open source, flexible, and built for teams.