Installation & Setup
This page covers the installation and initial configuration of Automatos AI for local development. It provides step-by-step instructions for running the platform using Docker Compose, including database initialization, environment configuration, and service verification.
Related documentation:
For detailed configuration options and feature flags, see Configuration Guide
For the first-time user onboarding experience, see First-Time User Experience
For production deployment strategies, see Deployment & Infrastructure
For advanced Docker configuration, see Docker Compose Setup
Prerequisites
Before installing Automatos AI, ensure you have the following installed on your system:
| Tool | Version | Purpose |
|---|---|---|
| Docker | 20.10+ | Container runtime |
| Docker Compose | 2.0+ | Multi-container orchestration |
| Git | 2.30+ | Repository cloning |
Optional but recommended:
- OpenAI API Key or Anthropic API Key - Required for agent LLM functionality. Without these, agents cannot generate responses, though the platform will still run.
- Clerk Account - Required for user authentication in production. Development mode supports anonymous access when `REQUIRE_AUTH=false`.
- AWS Account - Required only if using marketplace plugins or S3-based features.
Sources: docker-compose.yml:1-197, README.md:57-60
Quick Start
The fastest path to running Automatos AI locally:
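A typical sequence looks like the following (the clone URL is a placeholder; substitute the actual repository):

```shell
# Clone the repository (URL is illustrative)
git clone https://github.com/your-org/automatos-ai.git
cd automatos-ai

# Build and start all services in the background
docker compose up -d

# Follow the backend logs until initialization completes
docker compose logs -f backend
```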
That's it! No .env file is required for local development. The system uses secure defaults for infrastructure services (PostgreSQL, Redis) and manages API keys through the Settings UI.
Note: On first launch, database initialization takes 10-40 seconds. Watch the logs for ✅ Database connection test successful from the backend service.
Sources: docker-compose.yml:1-15, README.md:53-78
Docker Compose Architecture
The Docker Compose topology consists of four services (postgres, redis, backend, and frontend); their health checks, ports, and dependencies are summarized below:
Health Check Details:

| Service | Test Command | Interval | Start Period |
|---|---|---|---|
| postgres | `pg_isready -U postgres` | 10s | 10s |
| redis | `redis-cli ping` | 10s | 5s |
| backend | `curl -f http://localhost:8000/health` | 30s | 40s |
| frontend | `wget --spider http://localhost:3000` | 30s | 60s |
Key architectural features:
- Dependency chain: Frontend waits for Backend; Backend waits for Postgres + Redis
- Named volumes: Data persists across container restarts (`postgres_data`, `redis_data`)
- Hot-reload: Source code mounted for development (`./orchestrator:/app`, `./frontend:/app`)
- Isolated network: All services communicate via the `automatos_network` bridge
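The topology can be sketched as a docker-compose fragment. Service, volume, and network names follow the documentation above; host-port mappings and the `service_healthy` conditions are assumptions to be checked against the real `docker-compose.yml`:

```yaml
services:
  postgres:
    ports: ["5432:5432"]
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks: [automatos_network]
  redis:
    ports: ["6379:6379"]
    volumes:
      - redis_data:/data
    networks: [automatos_network]
  backend:
    ports: ["8000:8000"]
    volumes:
      - ./orchestrator:/app        # hot-reload source mount
    depends_on:
      postgres: {condition: service_healthy}
      redis: {condition: service_healthy}
    networks: [automatos_network]
  frontend:
    ports: ["3000:3000"]
    volumes:
      - ./frontend:/app            # hot-reload source mount
    depends_on:
      backend: {condition: service_healthy}
    networks: [automatos_network]

volumes:
  postgres_data:
  redis_data:

networks:
  automatos_network:
    driver: bridge
```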
Sources: docker-compose.yml:17-197
Service Configuration Details
PostgreSQL Database
Configuration:
- pgvector extension enabled for vector similarity search
- Max connections: 200 concurrent connections
- Shared buffers: 256MB allocated for query caching
- Schema initialization: automatic on first run via `init_complete_schema.sql`
Default credentials (overridable via environment):
- Database: `orchestrator_db`
- User: `postgres`
- Password: `automatos_dev_pass`
The database schema is automatically initialized on first container startup using the SQL file mounted at /docker-entrypoint-initdb.d/01-schema.sql.
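To confirm the schema initialized and the pgvector extension is present, you can query the database inside the container (pgvector registers itself under the extension name `vector`):

```shell
docker compose exec postgres psql -U postgres -d orchestrator_db \
  -c "SELECT extname FROM pg_extension WHERE extname = 'vector';"
```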
Sources: docker-compose.yml:21-42, orchestrator/config.py:36-42
Redis Cache & Pub/Sub
Configuration:
- Max memory: 256MB with LRU eviction policy (`allkeys-lru`)
- Authentication: password-protected (default: `automatos_redis_dev`)
- Persistence: RDB snapshots saved to the `redis_data` volume
Used for:
- Workflow execution event streaming (Pub/Sub channels)
- Plugin content caching (TTL: 3600s by default)
- Tool metadata caching (Composio app/action schemas)
Sources: docker-compose.yml:47-63, orchestrator/core/redis/client.py:1-199
Backend API (FastAPI)
Development mode features:
- Hot-reload enabled: code changes trigger automatic restart via the `--reload` flag
- Source mounting: `./orchestrator:/app` for live editing
- Entrypoint script: `docker-entrypoint.sh` for database readiness checks
- Uvicorn workers: single worker in dev mode
Runtime dependencies:
- Python 3.11-slim base image
- System packages: `gcc`, `g++`, `postgresql-client`, `tesseract-ocr`
- NLTK data: `punkt`, `stopwords` (pre-downloaded to `/usr/local/nltk_data`)
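The entrypoint's database readiness check can be approximated by a small retry loop like the following sketch; the actual `docker-entrypoint.sh` may differ:

```shell
#!/bin/sh
# wait_for: retry a command until it succeeds or the attempt budget runs out.
wait_for() {
  attempts="$1"; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      echo "gave up after $attempts attempts" >&2
      return 1
    fi
    sleep 1
  done
}

# In the real entrypoint this would look something like:
# wait_for 30 pg_isready -h "$POSTGRES_HOST" -U "$POSTGRES_USER"
```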
Sources: docker-compose.yml:68-123, orchestrator/Dockerfile:1-116
Frontend (Next.js)
Development mode features:
- Fast Refresh: hot module replacement for React components
- Source mounting: `./frontend:/app` with `node_modules` and `.next` excluded
- Node 20 Alpine: lightweight container with `python3`, `make`, `g++` for native modules
Build-time environment variables:
- `NEXT_PUBLIC_API_URL`: backend API endpoint (default: `http://localhost:8000`)
- `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY`: Clerk authentication (optional for dev)
Sources: docker-compose.yml:131-155, frontend/Dockerfile:1-120
Environment Variables
Automatos AI uses a hybrid configuration approach: infrastructure defaults (no .env needed) with credentials managed via UI.
Configuration Loading Flow
Sources: orchestrator/config.py:24-285
Required Variables (Infrastructure)
These are set automatically by docker-compose.yml with secure defaults:
| Variable | Default | Description |
|---|---|---|
| `POSTGRES_DB` | `orchestrator_db` | Database name |
| `POSTGRES_USER` | `postgres` | Database user |
| `POSTGRES_PASSWORD` | `automatos_dev_pass` | Database password |
| `POSTGRES_HOST` | `postgres` | Database hostname (service name) |
| `POSTGRES_PORT` | `5432` | Database port |
| `REDIS_HOST` | `redis` | Redis hostname (service name) |
| `REDIS_PORT` | `6379` | Redis port |
| `REDIS_PASSWORD` | `automatos_redis_dev` | Redis password |
Sources: docker-compose.yml:80-92
Optional Variables (Features)
Add these to a .env file or configure via Settings UI after first login:
LLM Providers:
Authentication (Production):
Marketplace & Plugins (Optional):
Feature Flags:
Location to create .env:
- Backend: `orchestrator/.env` (loaded by `config.py:24-26`)
- Frontend: `frontend/.env.local` (loaded by Next.js)
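A minimal `orchestrator/.env` might look like the following. `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are the providers' standard variable names, and `REQUIRE_AUTH` appears in the prerequisites above; verify all other names against `orchestrator/.env.example`:

```shell
# LLM providers (at least one needed for agent responses)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Authentication: development mode supports anonymous access
REQUIRE_AUTH=false
```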
Sources: orchestrator/.env.example:1-64, orchestrator/config.py:28-285
Database Initialization
The PostgreSQL database is automatically initialized on first container startup using a volume-mounted SQL script.
Initialization Flow
Key tables created:
- Core: `workspaces`, `users`, `workspace_members`
- Agents: `agents`, `agent_skills`, `personas`, `agent_templates`
- Workflows: `workflows`, `workflow_recipes`, `recipe_executions`
- Marketplace: `marketplace_plugins`, `workspace_enabled_plugins`, `agent_assigned_plugins`
- Tools: `agent_tool_assignments`, `composio_app_cache`, `composio_action_cache`
- Credentials: `credential_types`, `credentials`, `credential_audit_logs`
- System: `system_settings`, `skill_sources`, `skills`
pgvector extension is enabled for vector similarity search used by:
- Skill recommendations (lexical scoring)
- Document embeddings (S3 vectors feature)
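After the first startup you can confirm the tables were created by listing them inside the container:

```shell
docker compose exec postgres psql -U postgres -d orchestrator_db -c '\dt'
```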
Sources: docker-compose.yml:33-34, orchestrator/database/init_complete_schema.sql (file referenced in docker-compose.yml)
Local Development Setup (Without Docker)
For active development where you need direct access to Python/Node processes:
Backend Setup
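A typical backend setup, assuming a standard `requirements.txt` and that the FastAPI app is exported as `main:app` (check the repository for the actual module path):

```shell
cd orchestrator
python3.11 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Point the backend at your local Postgres/Redis, then start with hot-reload
uvicorn main:app --reload --port 8000
```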
Frontend Setup
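The frontend follows the usual Next.js workflow; note the `--legacy-peer-deps` flag explained under "Important flags":

```shell
cd frontend
npm install --legacy-peer-deps
npm run dev   # serves on http://localhost:3000 by default
```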
Important flags:
- `--legacy-peer-deps`: required for npm due to peer dependency conflicts in React 18/19 packages
- Hot-reload is automatic in both Python (via Uvicorn `--reload`) and Next.js (Fast Refresh)
Sources: docs/LOCAL_SETUP_GUIDE.md:1-214
Verification & Testing
After starting the system, verify each service is running correctly:
Health Check Endpoints
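The endpoints used by the Docker health checks can be probed directly from the host:

```shell
curl -f http://localhost:8000/health   # backend
wget --spider http://localhost:3000    # frontend
```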
Database Connection Test
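A direct connection test using the default credentials:

```shell
docker compose exec postgres psql -U postgres -d orchestrator_db -c 'SELECT 1;'
```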
Redis Connection Test
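Redis can be verified with an authenticated PING, which should return `PONG`:

```shell
docker compose exec redis redis-cli -a automatos_redis_dev ping
```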
Configuration Validation
The backend performs automatic configuration validation on startup. Check logs:
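The relevant startup messages (such as the database connection test result) can be filtered from the backend logs:

```shell
docker compose logs backend | grep -i "database connection"
```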
Sources: orchestrator/config.py:249-272
Troubleshooting
Port Already in Use
Symptom: `Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use`
Solution:
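One way to find and stop the conflicting process (port 3000 shown; adjust for 8000, 5432, or 6379 as needed):

```shell
lsof -i :3000   # identify the PID holding the port
kill <PID>      # stop it, or change the published port in docker-compose.yml
```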
Database Connection Failed
Symptom: Backend logs show ❌ Database connection test failed
Solutions:
Check Postgres is running:
Check health check logs:
Verify credentials match:
Manual connection test:
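The checks above map to commands like the following:

```shell
docker compose ps postgres                           # is the container up and healthy?
docker compose logs postgres                         # look for initialization errors
docker compose exec postgres pg_isready -U postgres  # run the health check by hand
docker compose exec postgres psql -U postgres -d orchestrator_db -c 'SELECT 1;'
```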
Redis Connection Failed
Symptom: Backend logs show Redis connection test failed or Redis unavailable for plugin cache
Solutions:
Check Redis is running:
Test authentication:
Check password matches:
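Corresponding commands:

```shell
docker compose ps redis
docker compose exec redis redis-cli -a automatos_redis_dev ping  # expect PONG
docker compose logs backend | grep -i redis                      # look for auth errors
```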
Note: Redis is optional for core functionality. If Redis is unavailable:
- Plugin content is fetched directly from S3 (slower)
- Workflow execution events are not streamed (polling fallback)
- Tool metadata is not cached (slower tool loading)
Sources: orchestrator/core/redis/client.py:149-198, orchestrator/core/services/plugin_cache.py:54-74
Frontend Build Errors
Symptom: `npm ERR! code ERESOLVE` or `Cannot find module 'next'`
Solutions:
Clear cache and reinstall:
Node version mismatch:
Docker volume conflicts:
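Typical fixes for the three cases above:

```shell
# Clear cache and reinstall
rm -rf frontend/node_modules frontend/.next
(cd frontend && npm install --legacy-peer-deps)

# Check the Node version (the container uses Node 20)
node --version

# Rebuild the frontend image without cached layers
docker compose build --no-cache frontend
```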
Missing LLM API Keys
Symptom: Agent execution fails with "No LLM provider configured"
Solution:
Add keys via Settings UI (recommended):
Navigate to Settings → Credentials in the web interface
Add OpenAI or Anthropic credentials
Test connection
Or add to .env file:
Restart backend to load new keys:
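For the `.env` route, the key names below are the providers' standard variable names; verify them against `orchestrator/.env.example`:

```shell
echo 'OPENAI_API_KEY=sk-...' >> orchestrator/.env
docker compose restart backend
```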
Sources: orchestrator/config.py:84-110, orchestrator/core/credentials/service.py:1-852
Permission Denied Errors
Symptom: PermissionError: [Errno 13] Permission denied: '/app/logs'
Solutions:
Fix Docker volume permissions:
For production images (non-root user):
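A common fix is to make the mounted directory writable by the container user. The UID below is illustrative; check `orchestrator/Dockerfile` for the actual non-root UID:

```shell
mkdir -p orchestrator/logs
sudo chown -R 1000:1000 orchestrator/logs
```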
Sources: orchestrator/Dockerfile:98-99
Next Steps
After successful installation:
Configure API keys and credentials via Settings UI → Credentials section
Follow the onboarding flow - see First-Time User Experience
Create your first agent - see Creating Agents
Review configuration options - see Configuration Guide
For production deployment, see Production Deployment for scaling considerations, security hardening, and Railway-specific configuration.