Hosted service layer for Engram: GitHub login, shared workspaces, API keys, invite flow, a hosted MCP bridge, downloadable starter skills, and a dashboard that makes the memory engine usable as a service.
Engram is strong as a local engine. A hosted version needs a different layer around it:
- user identity
- workspace isolation
- service auth
- audit history
- a deployment model that fits a long-lived memory process
This repo is that wrapper.
Recommended: VPS
Why:
- Engram wants a long-lived Python process
- direct Postgres access is simpler
- background ingestion is easier
- future MCP and websocket work fits more naturally
Vercel is fine for:
- marketing pages
- auth shell
- a thin frontend
For the actual memory runtime, a VPS is the clean default.
The hosted layer provides:

- GitHub OAuth login
- user/workspace metadata in Postgres
- one Engram-backed workspace store per workspace
- dashboard to create workspaces, inspect stats, search memory, and write memories
- workspace invites
- workspace API keys
- audit trail for workspace actions
- structured API usage tracking per workspace key
- paste, file, and batch API ingestion into workspace memory
- ingestion run history with source metadata and item counts
- JSON endpoints for search, remember, status, recent memories, audit history, and usage history
- recent memory export endpoint for backups and inspection
- warm workspace runtime cache so search and memory writes do not rebuild Engram state on every request
- workspace bootstrap endpoint for agents
- hosted MCP-style bridge for retrieval, handoff, skills, curation, and memory health tools
- public capability index with 100+ service, site, and agent-facing capabilities
- public service, capability, and MCP manifests for clients and agent launchers
- SDK snippet and playbook pages plus JSON endpoints
- API explorer with request and response fixtures for client builders
- workspace connection kit endpoints for agent config JSON and `.env` generation
- hardened browser and API boundary with CSP, frame blocking, host/origin checks, request-size limits, safer session cookies, basic throttles, malformed JSON accounting, and probe-path blocking
- themed browser error pages for common HTTP failures while preserving JSON errors for API clients
- starter skill downloads in JSON and markdown
- public docs, architecture, use-case, operations, integrations, examples, service status, security, and changelog pages
- robots.txt and sitemap.xml for the public site
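The warm workspace runtime cache in the feature list above can be sketched as a small LRU keyed by workspace slug. This is a minimal illustration, not the service's actual implementation; the class and attribute names are hypothetical.

```python
from collections import OrderedDict

class WorkspaceRuntimeCache:
    """Keep recently used Engram workspace runtimes warm in memory.

    Evicts the least recently used runtime once max_size is reached, so
    search and memory writes skip rebuilding Engram state on every request.
    """

    def __init__(self, max_size=8, factory=None):
        self.max_size = max_size
        self.factory = factory  # builds a runtime for a slug on a cache miss
        self._runtimes = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, slug):
        if slug in self._runtimes:
            self.hits += 1
            self._runtimes.move_to_end(slug)  # mark as most recently used
            return self._runtimes[slug]
        self.misses += 1
        runtime = self.factory(slug)
        self._runtimes[slug] = runtime
        if len(self._runtimes) > self.max_size:
            self._runtimes.popitem(last=False)  # evict least recently used
        return runtime

# Demo with a trivial factory standing in for real Engram state.
cache = WorkspaceRuntimeCache(max_size=2, factory=lambda slug: {"slug": slug})
cache.get("alpha")
cache.get("beta")
cache.get("alpha")   # hit: "alpha" becomes most recently used
cache.get("gamma")   # miss: evicts "beta", the least recently used entry
```

Hit and miss counters like these are what the runtime cache metrics on the service status endpoint would report.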
Built with:

- FastAPI
- Jinja templates
- SQLAlchemy
- Authlib GitHub OAuth
- Postgres
Requires the `engram-memory-system` package.

- Copy `.env.example` to `.env`
- Set GitHub OAuth credentials
- Start Postgres
- Install dependencies
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
uvicorn app.main:app --reload --port 8090
```

Open:
- http://127.0.0.1:8090
- http://127.0.0.1:8090/docs
- http://127.0.0.1:8090/connect
- http://127.0.0.1:8090/architecture
- http://127.0.0.1:8090/use-cases
- http://127.0.0.1:8090/operations
- http://127.0.0.1:8090/integrations
- http://127.0.0.1:8090/examples
- http://127.0.0.1:8090/api-explorer
- http://127.0.0.1:8090/sdks
- http://127.0.0.1:8090/security
- http://127.0.0.1:8090/status
Run tests:

```bash
pytest -q
```

Required:
- `ENGRAM_CLOUD_SECRET_KEY`
- `ENGRAM_CLOUD_BASE_URL`
- `ENGRAM_CLOUD_POSTGRES_DSN`
- `ENGRAM_CLOUD_ENGRAM_POSTGRES_DSN`
- `ENGRAM_CLOUD_GITHUB_CLIENT_ID`
- `ENGRAM_CLOUD_GITHUB_CLIENT_SECRET`
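For local development, a `.env` with the required settings might look like this. All values are placeholders; the DSN database names and credentials are assumptions for illustration.

```shell
ENGRAM_CLOUD_SECRET_KEY=change-me-to-a-long-random-string
ENGRAM_CLOUD_BASE_URL=http://127.0.0.1:8090
ENGRAM_CLOUD_POSTGRES_DSN=postgresql://engram:engram@localhost:5432/engram_cloud
ENGRAM_CLOUD_ENGRAM_POSTGRES_DSN=postgresql://engram:engram@localhost:5432/engram_memory
ENGRAM_CLOUD_GITHUB_CLIENT_ID=your-github-oauth-client-id
ENGRAM_CLOUD_GITHUB_CLIENT_SECRET=your-github-oauth-client-secret
```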
Security controls:
- `ENGRAM_CLOUD_ALLOWED_HOSTS`
- `ENGRAM_CLOUD_SECURE_COOKIES`
- `ENGRAM_CLOUD_SESSION_MAX_AGE_SECONDS`
- `ENGRAM_CLOUD_MAX_REQUEST_BYTES`
- `ENGRAM_CLOUD_AUTH_RATE_LIMIT_PER_MINUTE`
- `ENGRAM_CLOUD_API_RATE_LIMIT_PER_MINUTE`
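The per-minute rate limit settings can be pictured as a fixed-window throttle like the sketch below. This is an assumption about the mechanism, not the service's actual code; the real throttle may use a different algorithm.

```python
import time
from collections import defaultdict

class PerMinuteRateLimiter:
    """Fixed-window throttle: allow at most `limit` requests per key per minute.

    The kind of basic throttle that ENGRAM_CLOUD_AUTH_RATE_LIMIT_PER_MINUTE
    and ENGRAM_CLOUD_API_RATE_LIMIT_PER_MINUTE would configure.
    """

    def __init__(self, limit, clock=time.time):
        self.limit = limit
        self.clock = clock  # injectable clock for deterministic testing
        self._windows = defaultdict(int)  # (key, minute) -> request count

    def allow(self, key):
        minute = int(self.clock() // 60)
        window = (key, minute)
        if self._windows[window] >= self.limit:
            return False  # over the limit for this key in this minute
        self._windows[window] += 1
        return True

# Demo with a frozen clock so the window never rolls over.
limiter = PerMinuteRateLimiter(limit=2, clock=lambda: 120.0)
results = [limiter.allow("1.2.3.4") for _ in range(3)]
# → [True, True, False]
```

Counting per `(key, minute)` pair keeps each client's budget independent, which matches the per-workspace-key usage tracking described above.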
This service uses:
- one shared Postgres database for app metadata
- one Engram schema per workspace for memory data
That keeps the service layer separate from the memory layer while still using the Engram package directly.
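One way to picture the schema-per-workspace split is scoping each database session to a workspace schema via Postgres's `search_path`. The schema naming convention and helper names below are assumptions for illustration, not the repo's actual code.

```python
import re

def workspace_schema(slug):
    """Map a workspace slug to its Engram schema name (naming is an assumption)."""
    # Reject anything outside a safe slug alphabet before building an identifier.
    if not re.fullmatch(r"[a-z0-9-]+", slug):
        raise ValueError(f"unsafe workspace slug: {slug!r}")
    return "engram_ws_" + slug.replace("-", "_")

def search_path_sql(slug):
    """SQL that scopes a session to one workspace's schema before Engram queries."""
    return f'SET search_path TO "{workspace_schema(slug)}", public'

# With SQLAlchemy this would run per connection, e.g.:
#   with engine.connect() as conn:
#       conn.execute(text(search_path_sql("acme-team")))
```

Validating the slug before quoting it matters because schema names are identifiers, not parameters, so they cannot be bound the way query values are.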
Use the provided `Dockerfile` and `docker-compose.yml`.
Run behind Caddy or nginx with HTTPS.
Not recommended as the primary memory backend runtime.
If you want, use Vercel later for:
- a separate frontend shell
- marketing/docs
- auth-only surfaces
while the real Engram runtime lives on a VPS.
Each workspace can expose:
- `GET /api/workspaces/{slug}/bootstrap`
- `GET /api/workspaces/{slug}/connect`
- `GET /api/workspaces/{slug}/env`
- `GET /api/workspaces/{slug}/status`
- `GET /api/workspaces/{slug}/memories/recent`
- `POST /api/workspaces/{slug}/search`
- `POST /api/workspaces/{slug}/remember`
- `POST /api/workspaces/{slug}/ingest`
- `GET /api/workspaces/{slug}/audit`
- `GET /api/workspaces/{slug}/usage`
- `GET /api/workspaces/{slug}/ingest/runs`
- `GET /api/workspaces/{slug}/export/recent`
- `GET /api/workspaces/{slug}/mcp/tools`
- `POST /api/workspaces/{slug}/mcp`
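A client hitting the search endpoint with a workspace API key might build its request like this. The `Bearer` authorization scheme and the request body fields are assumptions; the workspace connection kit endpoints report the exact configuration a deployment expects.

```python
import json
from urllib import request

BASE_URL = "http://127.0.0.1:8090"  # your deployment's base URL

def workspace_search_request(slug, query, api_key, limit=5):
    """Build a POST request against the workspace search endpoint.

    The Authorization header scheme and JSON body shape are assumptions,
    modeled on common API-key conventions.
    """
    url = f"{BASE_URL}/api/workspaces/{slug}/search"
    body = json.dumps({"query": query, "limit": limit}).encode()
    return request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = workspace_search_request("acme-team", "deploy checklist", "wk_live_example")
# send with: urllib.request.urlopen(req)
```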
The bridge includes tool discovery with argument hints. Current tools cover:
- status, health, memory map, quality metrics, and grouped counts
- recall, compact context, hints, recent memories, entity lookup, and fuzzy entity search
- focused task briefs, layered prompt context, and procedural skill selection
- remember, decisions, errors, interactions, negative knowledge, and project state
- session checkpoints, handoff snapshots, and resume context
- hotspots, query comparison, export, memory status history, tags, pin, and forget
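A bridge call then pairs a tool name from that discovery list with its arguments. The payload shape below is an assumption modeled on MCP-style bridges; `GET /api/workspaces/{slug}/mcp/tools` reports the real tool names and argument hints.

```python
import json

def mcp_tool_call(tool, arguments):
    """Build a tool-call payload for POST /api/workspaces/{slug}/mcp.

    Field names are assumptions; consult the bridge's tool discovery
    endpoint for the actual contract.
    """
    return {"tool": tool, "arguments": arguments}

payload = mcp_tool_call("recall", {"query": "release blockers", "limit": 3})
body = json.dumps(payload)  # serialized request body for the bridge
```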
The service also exposes starter skills:
- `GET /api/skills`
- `GET /api/skills/{name}`
- `GET /api/skills/{name}.md`
Public service metadata:
- `GET /api/health`
- `GET /api/service/status` (includes runtime cache metrics)
- `GET /capabilities` (public capability index)
- `GET /architecture`
- `GET /use-cases`
- `GET /operations`
- `GET /integrations`
- `GET /api/service/manifest`
- `GET /api/capabilities`
- `GET /api/mcp/manifest`
- `GET /api/sdk-snippets`
- `GET /api/playbooks`
- `GET /api/examples`
- `GET /robots.txt`
- `GET /sitemap.xml`
Proprietary. All rights reserved. See LICENSE.