Run a multi-platform AI assistant on your own machine with full privacy (Telegram, Discord, WhatsApp, Slack).
🔒 100% Private · 🖥️ Runs Locally · 🧠 Powered by Open-Source LLMs · 🌐 Multi-Platform
CrustAI is a self-hosted AI assistant that runs 100% locally using Ollama. It integrates with Telegram, Discord, WhatsApp and Slack so you can chat with your assistant in tools you already use—without sending your conversation data to cloud LLM providers.
Built with Node.js and powered by Ollama (local LLM runtime), CrustAI is designed for developers, privacy enthusiasts, and anyone who wants an AI assistant that truly belongs to them.
| Feature | Description |
|---|---|
| 🔒 100% Local & Private | Conversations stay on your machine |
| 🧠 LLM via Ollama | Use tinyllama, llama3.2, phi3 and more |
| 📱 Multi-platform Adapters | Telegram, WhatsApp, Discord, Slack |
| 🧬 Long-term Memory | Store and retrieve user facts |
| ⚡ REST API | Integrate CrustAI into external workflows |
| 🎭 Personality Config | Customize tone, style and identity |
| 🌍 Bilingual UX | English + Portuguese support |
Watch the system boot up and connect to the local AI model
The bot responds instantly — running 100% offline
Ask anything — the answer comes from your own machine
/ping → Check if the bot is alive
/help → Show all commands
/model → Show which AI model is running
/remember → Store a fact in long-term memory
/forget → Erase all stored facts
/clear → Clear conversation history
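Commands like these are typically routed through a small dispatch table. A minimal sketch of that idea (the handler bodies and the `commands` map here are illustrative, not the project's actual `src/core/commands.js`):

```javascript
// Hypothetical slash-command dispatcher, sketching how commands
// like /ping and /help could be routed to handlers.
const commands = {
  '/ping': () => 'pong',
  '/help': () => 'Commands: /ping, /help, /model, /remember, /forget, /clear',
};

function handleCommand(text) {
  // First whitespace-separated token is the command name.
  const [name] = text.trim().split(/\s+/);
  const handler = commands[name];
  return handler ? handler() : `Unknown command: ${name}`;
}
```

New commands are added by registering one more entry in the map, which keeps the dispatcher itself unchanged.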
Adapters (Telegram / Discord / WhatsApp / Slack)
                 │
                 ▼
       Message Orchestrator
        ┌────────┴────────┐
        ▼                 ▼
  Ollama Client      Memory Store
        │                 │
        └────────┬────────┘
                 ▼
             REST API
Design note: adapter boundaries make it easy to add new channels without changing core conversation logic.
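One way to realize that boundary is a tiny adapter contract: each platform adapter only translates its native events into `(userId, text)` pairs and delivers replies back, so the orchestrator never sees platform-specific objects. A sketch under that assumption (class and method names are illustrative, not the project's actual adapter API):

```javascript
// Illustrative adapter boundary. A real orchestrator would call the
// LLM client and memory store; this stand-in just echoes.
class EchoOrchestrator {
  reply(userId, text) {
    return `You said: ${text}`;
  }
}

// A real Telegram/Discord adapter would call receive() from its own
// event handler and forward the answer over the platform's API.
class InMemoryAdapter {
  constructor(orchestrator) {
    this.orchestrator = orchestrator;
    this.outbox = []; // messages the adapter would send back out
  }
  receive(userId, text) {
    const answer = this.orchestrator.reply(userId, text);
    this.outbox.push({ userId, answer });
    return answer;
  }
}
```

Adding a new channel then means writing one more adapter class against this contract, with no changes to the conversation core.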
- Node.js ≥ 20.0
- Ollama installed and running
- A Telegram Bot Token from @BotFather
# 1. Clone the repository
git clone https://github.com/DaveSimoes/CrustAI.git
cd CrustAI
# 2. Install dependencies
npm install
# 3. Start Ollama and pull a model
ollama serve
ollama pull tinyllama # lightweight (600MB)
# or
ollama pull llama3.2 # more powerful (2GB, needs 8GB RAM)
# 4. Configure the project
cp config/config.example.yml config/config.yml
# Edit config/config.yml with your Telegram token and model
# 5. Run CrustAI
npm start

Edit config/config.yml:
model: tinyllama              # or llama3.2, phi3, mistral...
ollama_url: http://localhost:11434
language: pt-BR

telegram:
  enabled: true
  token: YOUR_BOT_TOKEN_HERE
  allowed_user_ids: []        # leave empty to allow all users

discord:
  enabled: false
  token: ""

whatsapp:
  enabled: false

voice:
  enabled: false
  port: 8765

| Technology | Purpose |
|---|---|
| Node.js | Runtime environment |
| Ollama | Local LLM inference engine |
| node-telegram-bot-api | Telegram integration |
| @whiskeysockets/baileys | WhatsApp integration |
| discord.js | Discord integration |
| @slack/bolt | Slack integration |
| Fastify | REST API server |
| sql.js | Embedded database for memory |
| yaml | Configuration management |
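Because Ollama exposes a plain HTTP API on port 11434, the LLM client can be as simple as one POST per prompt. A sketch assuming Ollama's standard `/api/generate` endpoint (this is not the project's actual `llm.js`):

```javascript
// Minimal Ollama client sketch against the /api/generate endpoint.
const OLLAMA_URL = 'http://localhost:11434';

function buildRequest(model, prompt) {
  // stream: false asks Ollama for a single JSON reply
  // instead of a stream of partial chunks.
  return { model, prompt, stream: false };
}

async function generate(model, prompt) {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildRequest(model, prompt)),
  });
  const data = await res.json();
  return data.response; // Ollama puts the completion text in `response`
}
```

Node.js 20 ships a global `fetch`, so no HTTP library is needed for this.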
crustai/
├── src/
│ ├── core/
│ │ ├── index.js # Main orchestrator
│ │ ├── llm.js # Ollama LLM client
│ │ └── commands.js # Command handler
│ ├── adapters/
│ │ ├── telegram/ # Telegram bot
│ │ ├── discord/ # Discord bot
│ │ ├── whatsapp/ # WhatsApp bot
│ │ └── slack/ # Slack bot
│ ├── memory/
│ │ └── store.js # Long-term memory
│ ├── personality/
│ │ └── prompt.js # System prompt builder
│ ├── voice/
│ │ └── server.js # Voice WebSocket server
│ └── api/
│ └── server.js # REST API
├── config/
│ ├── config.yml # Your configuration (git-ignored)
│ ├── config.example.yml # Template
│ └── personality.yml # Assistant personality
├── demo/
│ ├── terminal.gif # Boot demo
│ ├── ping.gif # Telegram connection demo
│ └── chat.gif # AI conversation demo
└── data/ # Local database (git-ignored)
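The real `src/memory/store.js` persists facts with sql.js into `data/`; the same interface can be sketched with an in-memory Map to show the shape behind `/remember` and `/forget` (method names here are illustrative):

```javascript
// In-memory stand-in for the long-term memory store.
// The real store persists rows via sql.js instead of a Map.
class MemoryStore {
  constructor() {
    this.facts = new Map(); // userId -> array of stored facts
  }
  remember(userId, fact) {
    const list = this.facts.get(userId) || [];
    list.push(fact);
    this.facts.set(userId, list);
  }
  recall(userId) {
    return this.facts.get(userId) || [];
  }
  forget(userId) {
    this.facts.delete(userId); // /forget erases all facts for a user
  }
}
```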
CrustAI was built with privacy as its core principle:
- ✅ All conversations stay on your machine
- ✅ No API keys sent to external AI services
- ✅ No telemetry or usage tracking
- ✅ Open source — inspect every line of code
- ✅ Your data, your rules
- Web UI dashboard
- Image understanding (multimodal LLMs)
- Plugin system for custom tools
- Docker one-click deployment
- Mobile app companion
Dave Simoes
- 🌐 GitHub: @DaveSimoes
- 💼 LinkedIn: Dave Simoes
This project is licensed under the MIT License — see the LICENSE file for details.
CrustAI is a fully private, self-hosted AI assistant that runs entirely on your own machine: no data ever leaves your computer. It connects to popular messaging platforms such as Telegram, WhatsApp, Discord and Slack, giving you the power of conversational AI without giving up your privacy.
Built with Node.js and powered by Ollama (a local LLM engine), CrustAI is designed for developers, privacy enthusiasts, and anyone who wants an AI assistant that truly belongs to them.
| Feature | Description |
|---|---|
| 🔒 100% Private | All data stays on your machine. No cloud |
| 🧠 Local LLM | Powered by Ollama; supports llama3.2, tinyllama and more |
| 📱 Multi-Platform | Telegram, WhatsApp, Discord, Slack from a single bot |
| 🧬 Long-term Memory | Remembers facts about you across conversations |
| 🗣️ Offline Voice | Speaks and listens without internet (pt-BR) |
| ⚡ REST API | Built-in API for custom integrations |
| 🎭 Personality | Configure the assistant's name, tone and behavior |
The system booting up and connecting to the local AI model
The bot responding instantly, 100% offline
Ask anything; the answer comes from your own machine
# 1. Clone the repository
git clone https://github.com/DaveSimoes/CrustAI.git
cd CrustAI
# 2. Install dependencies
npm install
# 3. Start Ollama and pull a model
ollama serve
ollama pull tinyllama
# 4. Configure the project
cp config/config.example.yml config/config.yml
# Edit config/config.yml with your Telegram token
# 5. Run CrustAI
npm start

Dave Simoes, a developer passionate about AI, privacy, and open source.
- 🌐 GitHub: @DaveSimoes
- 💼 LinkedIn: Dave Simoes
⭐ If this project helped you, leave a star! ⭐
Made with ❤️ by Dave Simoes