Memory Gateway
Use 83 models from 11 providers with your ekkOS memory. One endpoint. Any AI.
Why Use the Gateway?
Point any AI tool at mcp.ekkos.dev/v1 and get:
- Pattern Injection — Your relevant patterns automatically included in every request
- Context Capture — Conversations are saved for future retrieval
- Provider Freedom — Switch between GPT, Claude, Grok, or Gemini anytime
- Unified Memory — All conversations feed the same memory substrate
Quick Setup
Cursor / Cline / Continue / Aider
Set your OpenAI Base URL:
https://mcp.ekkos.dev/v1
In Cursor: Settings → OpenAI → Base URL
Claude Code (CLI)
Set environment variable:
export ANTHROPIC_BASE_URL=https://mcp.ekkos.dev
Direct API Usage
curl https://mcp.ekkos.dev/v1/chat/completions \
  -H "Authorization: Bearer YOUR_EKKOS_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
Supported Providers
- OpenAI: 22 models (GPT-5.1, GPT-4o, o3, o4)
- Anthropic: 11 models (Claude 4.5, Claude 4, Claude 3.x)
- xAI (Grok): 12 models (Grok 4, Grok 3, Grok 2)
- Google (Gemini): 6 models (Gemini 3, Gemini 2.5, Gemini 2.0)
- DeepSeek: 5 models (DeepSeek V3, DeepSeek Coder)
- Meta (Llama): 5 models (Llama 4, Llama 3.x)
- Mistral: 8 models (Mistral Large, Codestral)
The seven providers listed above cover 69 models; in total the gateway supports 83 models across 11 providers. More coming soon.
Model Selection
The gateway auto-detects which provider to use based on the model name:
- gpt-* → OpenAI
- claude-* → Anthropic
- grok-* → xAI
- gemini-* → Google
The provider/model format is also supported (e.g., openai/gpt-4o).
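A minimal sketch of this in Python, assuming the official openai SDK (pip install openai): point the client at the gateway and change only the model name, and the prefix determines which provider handles the request. The model IDs other than gpt-4o are illustrative; list the exact IDs with GET /v1/models.

# Sketch: provider routing by model-name prefix through the gateway.
# Assumes the openai Python SDK; YOUR_EKKOS_KEY is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="https://mcp.ekkos.dev/v1",
    api_key="YOUR_EKKOS_KEY",
)

# Prefix decides the provider: gpt-* -> OpenAI, claude-* -> Anthropic, and so on.
# "claude-sonnet-4-5" and "grok-4" are example IDs; check /v1/models for the real list.
for model in ["gpt-4o", "claude-sonnet-4-5", "grok-4", "openai/gpt-4o"]:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(model, "->", reply.choices[0].message.content)

Because the endpoint is OpenAI-compatible, any tool that accepts a custom base URL can switch providers the same way, with no other configuration changes.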
How It Works
1. Retrieve: the gateway searches your memory for relevant patterns
2. Inject: patterns are injected into the system prompt automatically
3. Forward: the request is forwarded to your chosen LLM provider
4. Store: the conversation is saved for future retrieval
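The Python sketch below is purely illustrative of those four steps and is not the gateway's actual implementation: the in-memory pattern store and helper functions are invented for the example, while the endpoint and header match the Direct API Usage example above.

# Illustrative only: a toy version of retrieve -> inject -> forward -> store.
import requests

MEMORY = ["Prefer small, typed functions", "Project uses PostgreSQL 16"]

def retrieve(prompt: str) -> list[str]:
    # 1. Retrieve: search stored patterns for anything relevant to the prompt.
    words = prompt.lower().split()
    return [p for p in MEMORY if any(w in p.lower() for w in words)]

def inject(patterns: list[str]) -> dict:
    # 2. Inject: fold the retrieved patterns into a system message.
    return {"role": "system", "content": "Known patterns:\n" + "\n".join(patterns)}

def forward(messages: list[dict], model: str = "gpt-4o") -> dict:
    # 3. Forward: send the augmented request to the chosen provider
    #    through the gateway's OpenAI-compatible endpoint.
    resp = requests.post(
        "https://mcp.ekkos.dev/v1/chat/completions",
        headers={"Authorization": "Bearer YOUR_EKKOS_KEY"},
        json={"model": model, "messages": messages},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

def store(user_prompt: str, reply: str) -> None:
    # 4. Store: keep the exchange so later requests can retrieve it.
    MEMORY.append(f"Q: {user_prompt} / A: {reply}")

user_prompt = "Which database does this project use?"
messages = [inject(retrieve(user_prompt)), {"role": "user", "content": user_prompt}]
result = forward(messages)
store(user_prompt, result["choices"][0]["message"]["content"])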
API Endpoints
- POST /v1/chat/completions: OpenAI-compatible chat endpoint
- POST /v1/messages: Anthropic-compatible messages endpoint
- GET /v1/models: list all 83 supported models
- GET /v1/gateway/health: gateway health check
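A quick way to exercise the read-only endpoints in Python, assuming the requests package, that /v1/models accepts the same bearer key as the chat endpoints, and that its response follows the OpenAI-style list shape:

# Assumption: /v1/models needs the bearer key and returns {"data": [{"id": ...}, ...]}.
import requests

BASE = "https://mcp.ekkos.dev"
HEADERS = {"Authorization": "Bearer YOUR_EKKOS_KEY"}

# GET /v1/gateway/health: confirm the gateway is reachable.
print(requests.get(f"{BASE}/v1/gateway/health", timeout=10).json())

# GET /v1/models: print the supported model IDs.
models = requests.get(f"{BASE}/v1/models", headers=HEADERS, timeout=10).json()
for m in models.get("data", []):
    print(m.get("id"))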
Privacy & Security
- Your data is stored in your memory substrate only
- Never shared with other users
- Never used to train models
- SOC 2 compliance in progress