Getting Started
This guide provides a quick start for installing and running Automatos AI locally. It covers the essential prerequisites, environment setup, and first-time configuration needed to get the platform operational.
For detailed installation procedures, see Installation & Setup. For comprehensive configuration options, see Configuration Guide. For hands-on tutorials creating agents and workflows, see Quick Start Tutorial.
Prerequisites
Before installing Automatos AI, ensure you have the following installed on your system:
| Tool | Version | Purpose |
|---|---|---|
| Docker | 20.10+ | Container orchestration |
| Docker Compose | 2.0+ | Multi-service management |
| Git | Any | Repository cloning |
| Text Editor | Any | `.env` file configuration |
Required API Keys (obtain before installation):
- OpenAI API Key or Anthropic API Key - At least one LLM provider required for agent execution
- Clerk Account (optional) - For multi-tenant authentication; can be disabled for local development by setting `REQUIRE_AUTH=false`
System Resources:
RAM: 4GB minimum, 8GB recommended
Disk: 10GB free space for Docker images and volumes
Network: Internet connection for pulling images and accessing LLM APIs
Sources: docker-compose.yml:1-280, orchestrator/.env.example:1-65, orchestrator/requirements.txt:1-108
System Architecture Overview
Automatos AI follows a containerized multi-tier architecture with separate services for frontend, backend, database, cache, and workers.
Docker Services Architecture
Service Dependencies:
- `frontend` depends on `backend` being healthy
- `backend` depends on `postgres` + `redis` being healthy
- `workspace-worker` depends on `postgres` + `redis` being healthy
Docker Compose Profiles:
- `default` - Core services only (`postgres`, `redis`, `backend`, `frontend`)
- `workers` - Adds `workspace-worker` for task execution
- `all` - Adds admin tools (`adminer`, `gotenberg`)
Sources: docker-compose.yml:18-279, orchestrator/main.py:219-406
Quick Start
1. Clone the Repository
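A typical clone step is sketched below; the repository URL is an assumption, so substitute the actual Automatos AI repository:

```shell
# Clone the repository (URL is an assumption - use the real Automatos AI repo)
git clone https://github.com/your-org/automatos-ai.git
cd automatos-ai
```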
2. Create Environment File
Copy the example environment file and set required values:
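Assuming the example file lives at `orchestrator/.env.example` (as the Sources line suggests) and the compose stack reads `.env` from the project root, the copy step might be:

```shell
# Copy the example environment file to the project root
cp orchestrator/.env.example .env
```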
Minimum Required Configuration (edit .env):
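A minimal sketch of the file; the variable names below are assumptions drawn from this guide, so verify them against `orchestrator/.env.example`:

```shell
# Minimum required values (names assumed - check .env.example for the real keys)
OPENAI_API_KEY=sk-...          # or ANTHROPIC_API_KEY; at least one LLM provider
POSTGRES_PASSWORD=change-me    # used by the postgres service and the backend
REQUIRE_AUTH=false             # disable Clerk authentication for local development
```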
Important: All LLM provider credentials (OpenAI, Anthropic, etc.) can be managed via the Settings UI after installation. Only one provider key is required initially in the .env file to bootstrap the system.
Sources: orchestrator/.env.example:1-65, orchestrator/config.py:1-423
3. Start the Application
Start core services:
Or with workers enabled:
Or with all services (including admin tools):
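The three start-up variants above can be sketched as follows (profile names taken from the Docker Compose Profiles list; flags are a sketch):

```shell
# Core services only (postgres, redis, backend, frontend)
docker-compose up -d --build

# Core services plus the workspace-worker
docker-compose --profile workers up -d --build

# Everything, including admin tools (adminer, gotenberg)
docker-compose --profile all up -d --build
```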
The --build flag rebuilds images if any code has changed. Omit it for faster subsequent starts.
Application Startup Flow
Startup Duration: 30-60 seconds on first run (includes image pulling and database initialization). Subsequent starts: 10-20 seconds.
Sources: docker-compose.yml:78-138, orchestrator/main.py:219-335, orchestrator/core/database/load_seed_data.py:25-192
Verification Steps
1. Check Service Health
All services should show healthy status:
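A quick way to check is Docker Compose's built-in status listing:

```shell
# List services and their current health status
docker-compose ps
```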
Expected output: every service listed with a healthy state in its status column.
2. Access Health Endpoints
Backend API health:
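A health probe might look like the following; both the port (8000) and the `/health` path are assumptions, so check `docker-compose.yml` and the API docs for the real values:

```shell
# Probe the backend health endpoint (port and path are assumptions)
curl -s http://localhost:8000/health
```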
Expected response: a JSON payload indicating the service is healthy.
Frontend access:
Should return the Next.js HTML page (status 200).
3. View Logs
Check for any errors during startup:
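For example, to follow logs from the two main services:

```shell
# Stream logs from the backend and frontend services
docker-compose logs -f backend frontend
```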
Expected log patterns:
- Backend: `"Starting Automotas AI API Server..."` followed by `"Database ready"` and `"Dashboard services initialized successfully"`
- Frontend: `"ready started server on 0.0.0.0:3000"`
Sources: orchestrator/main.py:200-335, docker-compose.yml:131-138
Access the Application
Web Interface
Open your browser and navigate to:
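The frontend binds to port 3000 (per the startup log above), so the default local address is:

```shell
# Open the web interface (port 3000 per the frontend startup log)
open http://localhost:3000        # macOS
xdg-open http://localhost:3000    # Linux
```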
Default Access Modes:
| Mode | Setting | Behavior |
|---|---|---|
| Development (Auth Disabled) | `REQUIRE_AUTH=false` | Direct access, no login required |
| Production (Auth Enabled) | `REQUIRE_AUTH=true` | Clerk authentication required |
API Documentation
Interactive API documentation is available at:
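Assuming the docs are served at `/docs` (as the Development Mode notes suggest) and the backend port is 8000 (an assumption), the address would be:

```shell
# Interactive API docs (/docs per the Development Mode notes; port 8000 assumed)
open http://localhost:8000/docs
```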
The API docs show all available endpoints organized by tags (Agents, Workflows, Documents, etc.) with example requests and responses.
Database Admin (Optional)
If started with --profile all, access Adminer at:
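Adminer's published port is defined in `docker-compose.yml`; 8080 below is an assumption:

```shell
# Adminer database UI (port 8080 is an assumption - check docker-compose.yml)
open http://localhost:8080
```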
Login credentials:
- System: PostgreSQL
- Server: `postgres`
- Username: `postgres`
- Password: (value from `.env` `POSTGRES_PASSWORD`)
- Database: `orchestrator_db`
Sources: orchestrator/main.py:408-553, docker-compose.yml:223-236
Initial Configuration
1. Configure LLM Providers (Settings UI)
After accessing the web interface, navigate to Settings → Credentials to add additional LLM provider keys:
Supported Providers:
OpenAI (GPT-4, GPT-3.5)
Anthropic (Claude 3 family)
Google (Gemini)
OpenRouter (100+ models)
Azure OpenAI
Cohere (for reranking)
Configuration Loading Architecture
Credential Resolution Order (highest priority first):
1. User workspace credentials (Settings UI)
2. Workspace-level system settings
3. Global system settings
4. Environment variables from `.env`
5. Agent-specific model configuration
6. System defaults
Sources: orchestrator/config.py:28-423, orchestrator/core/llm/manager.py (credential resolution)
2. Verify Database Schema
The database is automatically initialized on first startup with:
Core Tables:
- `agents` - Agent definitions and configurations
- `workflows` - Workflow templates
- `workflow_recipes` - Step-by-step execution recipes
- `documents` - Knowledge base documents
- `credentials` - Encrypted API keys
- `system_settings` - Platform configuration
- `system_prompts` - Prompt templates
- `personas` - Agent personality presets
Seed Data Loaded:
System settings (LLM defaults, RAG thresholds)
Credential types (provider schemas)
Personas (10 predefined: Senior Engineer, Code Reviewer, etc.)
Plugin categories (19 categories: code-review, testing, etc.)
LLM models (OpenAI, Anthropic, Google model catalog)
Verify seed data:
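Assuming the seed loader logs a summary line (as the expected output below suggests), one way to verify is to search the backend logs, or to count rows directly using the credentials and table names listed above:

```shell
# Look for the seed-data summary in the backend startup logs
docker-compose logs backend | grep "Personas loaded"

# Alternatively, count rows in the personas table directly
docker-compose exec postgres psql -U postgres -d orchestrator_db \
  -t -c "SELECT count(*) FROM personas;"
```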
Expected output: Personas loaded: 10
Sources: orchestrator/core/database/load_seed_data.py:25-192, orchestrator/core/seeds/seed_personas.py:19-256, orchestrator/core/seeds/seed_plugin_categories.py:19-213
First Steps
1. Create Your First Agent
Via UI:
1. Navigate to Agents in the sidebar
2. Click New Agent
3. Configure:
   - Name: "My First Agent"
   - Description: Brief description of purpose
   - Model: Select from available providers (e.g., `gpt-4o`)
   - Persona: Choose from predefined personas or create custom
4. Click Create Agent
Via API:
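A sketch of the equivalent API call; the endpoint path, port, and payload field names are assumptions, so consult the interactive API docs for the real schema:

```shell
# Create an agent via the REST API (path, port, and fields are assumptions)
curl -X POST http://localhost:8000/api/agents \
  -H "Content-Type: application/json" \
  -d '{
        "name": "My First Agent",
        "description": "Brief description of purpose",
        "model": "gpt-4o",
        "persona": "Senior Engineer"
      }'
```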
2. Test Agent in Chat
Navigate to Chat and select your agent from the dropdown. Send a test message to verify the agent responds correctly.
3. Explore Core Features
| Feature | Route | Description |
|---|---|---|
| Agents | `/agents` | Create and manage AI agents |
| Workflows | `/workflows` | Design multi-step automation |
| Recipes | `/workflows/recipes` | Step-by-step guided workflows |
| Documents | `/documents` | Knowledge base management |
| Tools | `/tools` | Browse and connect Composio apps |
| Analytics | `/analytics` | Usage tracking and cost analysis |
| Settings | `/settings` | Credentials and configuration |
Sources: orchestrator/api/agents.py, orchestrator/api/chat.py, frontend/app (route structure)
Common Issues
Port Conflicts
Symptom: Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use
Solution: Change ports in .env:
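The variable names below are assumptions; use whichever port variables your `.env.example` actually defines:

```shell
# Remap host ports to avoid the conflict (variable names are assumptions)
FRONTEND_PORT=3001
BACKEND_PORT=8001
```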
Then restart: docker-compose up
Missing API Keys
Symptom: Agent creation succeeds but chat fails with No LLM credentials configured
Solution:
Go to Settings → Credentials
Add at least one LLM provider key
Test by sending a chat message
Database Connection Errors
Symptom: Backend logs show Failed to connect to database
Solution:
1. Ensure `POSTGRES_PASSWORD` is set in `.env`
2. Check the postgres service is healthy: `docker-compose ps postgres`
3. Restart postgres: `docker-compose restart postgres`
Redis Connection Errors
Symptom: Backend logs show Redis connection test failed
Solution:
1. Ensure `REDIS_PASSWORD` is set in `.env`
2. Check the redis service is healthy: `docker-compose ps redis`
3. Redis is optional - disable Redis features by removing `REDIS_URL` from `.env`
Sources: orchestrator/config.py:36-80, orchestrator/core/redis/client.py:149-198
Development vs Production
Development Mode (Default)
Configured in docker-compose.yml:78-170 with:
- Hot-reload enabled for both frontend and backend
- Source code mounted as volumes
- Development Dockerfile targets
- API docs enabled at `/docs`
- Less strict security headers
Restart after code changes:
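With hot-reload enabled, most edits apply automatically; for changes that hot-reload does not pick up, a restart sketch might be:

```shell
# Restart a single service after changes hot-reload misses
docker-compose restart backend

# Or rebuild if dependencies changed
docker-compose up -d --build backend
```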
Production Deployment
For production deployment to Railway, Heroku, or other platforms:
- Use production Dockerfile targets
- Set `ENVIRONMENT=production` in environment variables
- Set `REQUIRE_AUTH=true` to enforce Clerk authentication
- Configure external PostgreSQL and Redis services
- Set `DATABASE_URL` and `REDIS_URL` for managed databases
- Disable API docs by removing `NEXTAUTH_URL`
See Deployment & Infrastructure for detailed production setup.
Sources: orchestrator/Dockerfile:87-130, frontend/Dockerfile:83-115, docker-compose.yml:78-170
Next Steps
Now that you have Automatos AI running locally, proceed to:
Installation & Setup - Detailed installation procedures, troubleshooting, and advanced configuration
Configuration Guide - Comprehensive guide to environment variables, system settings, and credential management
Quick Start Tutorial - Step-by-step tutorial for creating agents, workflows, and recipes
Core Documentation:
Agents - Complete guide to agent creation, configuration, and lifecycle management
Workflows & Recipes - Multi-step workflow orchestration and recipe execution
Knowledge Base & RAG - Document management and retrieval-augmented generation
Tools & Integrations - Composio integration for 880+ external applications
Chat Interface - Real-time streaming chat with complexity assessment
Advanced Topics:
Universal Router - Six-tier intelligent agent routing system
Workspace Execution - Sandboxed code execution and GitHub integration
Community Marketplace - Plugin discovery and publication
Sources: orchestrator/main.py:1-1341 (complete application structure)