Getting Started


This guide provides a quick start for installing and running Automatos AI locally. It covers the essential prerequisites, environment setup, and first-time configuration needed to get the platform operational.

For detailed installation procedures, see Installation & Setup. For comprehensive configuration options, see Configuration Guide. For hands-on tutorials creating agents and workflows, see Quick Start Tutorial.


Prerequisites

Before installing Automatos AI, ensure you have the following installed on your system:

| Requirement | Version | Purpose |
| --- | --- | --- |
| Docker | 20.10+ | Container orchestration |
| Docker Compose | 2.0+ | Multi-service management |
| Git | Any | Repository cloning |
| Text Editor | Any | .env file configuration |

Required API Keys (obtain before installation):

  • OpenAI API Key or Anthropic API Key - At least one LLM provider required for agent execution

  • Clerk Account (optional) - For multi-tenant authentication; can be disabled for local development by setting REQUIRE_AUTH=false

System Resources:

  • RAM: 4GB minimum, 8GB recommended

  • Disk: 10GB free space for Docker images and volumes

  • Network: Internet connection for pulling images and accessing LLM APIs

Sources: docker-compose.yml:1-280, orchestrator/.env.example:1-65, orchestrator/requirements.txt:1-108


System Architecture Overview

Automatos AI follows a containerized multi-tier architecture with separate services for frontend, backend, database, cache, and workers.

Docker Services Architecture


Service Dependencies:

  • frontend depends on backend being healthy

  • backend depends on postgres + redis being healthy

  • workspace-worker depends on postgres + redis being healthy

Docker Compose Profiles:

  • default - Core services only (postgres, redis, backend, frontend)

  • workers - Adds workspace-worker for task execution

  • all - Adds admin tools (adminer, gotenberg)

Sources: docker-compose.yml:18-279, orchestrator/main.py:219-406


Quick Start

1. Clone the Repository
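A typical clone command; the repository URL below is a placeholder, not the confirmed project URL:

```shell
# Replace the URL with the actual Automatos AI repository
git clone https://github.com/your-org/automatos-ai.git
cd automatos-ai
```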

2. Create Environment File

Copy the example environment file and set required values:
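Assuming the example file lives at orchestrator/.env.example (as the Sources line below suggests), copying it might look like:

```shell
cp orchestrator/.env.example .env
```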

Minimum Required Configuration (edit .env):
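A hedged sketch of the minimum settings. Only REQUIRE_AUTH, POSTGRES_PASSWORD, and REDIS_PASSWORD appear elsewhere in this guide; the provider key variable name is an assumption to verify against .env.example:

```shell
# LLM provider -- at least one key (variable name assumed; verify in .env.example)
OPENAI_API_KEY=sk-...

# Infrastructure passwords
POSTGRES_PASSWORD=change-me
REDIS_PASSWORD=change-me

# Disable Clerk auth for local development
REQUIRE_AUTH=false
```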

Important: All LLM provider credentials (OpenAI, Anthropic, etc.) can be managed via the Settings UI after installation. Only one provider key is required initially in the .env file to bootstrap the system.

Sources: orchestrator/.env.example:1-65, orchestrator/config.py:1-423

3. Start the Application

Start core services:

Or with workers enabled:

Or with all services (including admin tools):
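Assuming the compose profiles are named as listed earlier (workers, all), the three variants might be:

```shell
# Core services only
docker-compose up --build

# Core services plus task workers
docker-compose --profile workers up --build

# Everything, including admin tools (adminer, gotenberg)
docker-compose --profile all up --build
```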

The --build flag rebuilds images if any code has changed. Omit it for faster subsequent starts.

Application Startup Flow


Startup Duration: 30-60 seconds on first run (includes image pulling and database initialization). Subsequent starts: 10-20 seconds.

Sources: docker-compose.yml:78-138, orchestrator/main.py:219-335, orchestrator/core/database/load_seed_data.py:25-192


Verification Steps

1. Check Service Health

All services should show healthy status:
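Service states can be listed with:

```shell
docker-compose ps
```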

Expected output: every service listed with a State of Up (healthy).

2. Access Health Endpoints

Backend API health:
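A simple check with curl, assuming the health route is /health and the backend is mapped to port 8000 (both are guesses; confirm the path and port mapping in docker-compose.yml):

```shell
curl -s http://localhost:8000/health
```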

Expected response: a JSON object reporting a healthy status (the exact fields are defined in orchestrator/main.py).

Frontend access:
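Check that the frontend answers with HTTP 200 (port 3000, per the frontend defaults mentioned below):

```shell
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
```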

Should return the Next.js HTML page (status 200).

3. View Logs

Check for any errors during startup:
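The backend and frontend service logs can be followed with:

```shell
docker-compose logs -f backend frontend
```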

Expected log patterns:

  • Backend: "Starting Automotas AI API Server..." followed by "Database ready" and "Dashboard services initialized successfully"

  • Frontend: "ready started server on 0.0.0.0:3000"

Sources: orchestrator/main.py:200-335, docker-compose.yml:131-138


Access the Application

Web Interface

Open your browser and navigate to http://localhost:3000 (or the frontend port configured in .env).

Default Access Modes:

| Mode | Configuration | Access |
| --- | --- | --- |
| Development (Auth Disabled) | REQUIRE_AUTH=false | Direct access, no login required |
| Production (Auth Enabled) | REQUIRE_AUTH=true | Clerk authentication required |

API Documentation

Interactive API documentation is available at the backend's /docs path (for example http://localhost:8000/docs if the backend is mapped to port 8000; confirm the mapping in docker-compose.yml).

The API docs show all available endpoints organized by tags (Agents, Workflows, Documents, etc.) with example requests and responses.

Database Admin (Optional)

If started with --profile all, access Adminer at its mapped port (commonly http://localhost:8080; confirm in docker-compose.yml).

Login credentials:

  • System: PostgreSQL

  • Server: postgres

  • Username: postgres

  • Password: (value from .env POSTGRES_PASSWORD)

  • Database: orchestrator_db

Sources: orchestrator/main.py:408-553, docker-compose.yml:223-236


Initial Configuration

1. Configure LLM Providers (Settings UI)

After accessing the web interface, navigate to Settings → Credentials to add additional LLM provider keys:

Supported Providers:

  • OpenAI (GPT-4, GPT-3.5)

  • Anthropic (Claude 3 family)

  • Google (Gemini)

  • OpenRouter (100+ models)

  • Azure OpenAI

  • Cohere (for reranking)

Configuration Loading Architecture


Credential Resolution Order (highest priority first):

  1. User workspace credentials (Settings UI)

  2. Workspace-level system settings

  3. Global system settings

  4. Environment variables from .env

  5. Agent-specific model configuration

  6. System defaults
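The resolution order above amounts to a fall-through lookup: walk the sources from highest to lowest priority and return the first value found. The sketch below illustrates this; the source names and dict shapes are illustrative, not the actual manager.py API:

```python
from typing import Optional

# Ordered credential sources, highest priority first (illustrative names).
SOURCES = [
    "user_workspace_credentials",   # 1. Settings UI, per user
    "workspace_system_settings",    # 2. Workspace-level settings
    "global_system_settings",       # 3. Global settings
    "environment_variables",        # 4. .env values
    "agent_model_config",           # 5. Agent-specific configuration
    "system_defaults",              # 6. Built-in defaults
]

def resolve_credential(key: str, sources: dict) -> Optional[str]:
    """Return the first value found for key, walking sources in priority order."""
    for name in SOURCES:
        value = sources.get(name, {}).get(key)
        if value is not None:
            return value
    return None

# Example: a .env value is shadowed by a Settings UI credential.
creds = {
    "user_workspace_credentials": {"OPENAI_API_KEY": "from-settings-ui"},
    "environment_variables": {"OPENAI_API_KEY": "from-dotenv"},
}
print(resolve_credential("OPENAI_API_KEY", creds))  # -> from-settings-ui
```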

Sources: orchestrator/config.py:28-423, orchestrator/core/llm/manager.py (credential resolution)

2. Verify Database Schema

The database is automatically initialized on first startup with:

Core Tables:

  • agents - Agent definitions and configurations

  • workflows - Workflow templates

  • workflow_recipes - Step-by-step execution recipes

  • documents - Knowledge base documents

  • credentials - Encrypted API keys

  • system_settings - Platform configuration

  • system_prompts - Prompt templates

  • personas - Agent personality presets

Seed Data Loaded:

  • System settings (LLM defaults, RAG thresholds)

  • Credential types (provider schemas)

  • Personas (10 predefined: Senior Engineer, Code Reviewer, etc.)

  • Plugin categories (19 categories: code-review, testing, etc.)

  • LLM models (OpenAI, Anthropic, Google model catalog)

Verify seed data:
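One way to check, assuming direct psql access to the postgres container (the project may instead ship its own verification script that prints the "Personas loaded" line):

```shell
docker-compose exec postgres \
  psql -U postgres -d orchestrator_db -c "SELECT COUNT(*) FROM personas;"
```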

Expected output: Personas loaded: 10

Sources: orchestrator/core/database/load_seed_data.py:25-192, orchestrator/core/seeds/seed_personas.py:19-256, orchestrator/core/seeds/seed_plugin_categories.py:19-213


First Steps

1. Create Your First Agent

Via UI:

  1. Navigate to Agents in the sidebar

  2. Click New Agent

  3. Configure:

    • Name: "My First Agent"

    • Description: Brief description of purpose

    • Model: Select from available providers (e.g., gpt-4o)

    • Persona: Choose from predefined personas or create custom

  4. Click Create Agent

Via API:
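A hedged sketch of agent creation over HTTP. The endpoint path, port, and field names are assumptions based on orchestrator/api/agents.py and the UI fields above, not a confirmed request schema:

```shell
curl -X POST http://localhost:8000/api/agents \
  -H "Content-Type: application/json" \
  -d '{
        "name": "My First Agent",
        "description": "Brief description of purpose",
        "model": "gpt-4o"
      }'
```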

2. Test Agent in Chat

Navigate to Chat and select your agent from the dropdown. Send a test message to verify the agent responds correctly.

3. Explore Core Features

| Feature | Location | Purpose |
| --- | --- | --- |
| Agents | /agents | Create and manage AI agents |
| Workflows | /workflows | Design multi-step automation |
| Recipes | /workflows/recipes | Step-by-step guided workflows |
| Documents | /documents | Knowledge base management |
| Tools | /tools | Browse and connect Composio apps |
| Analytics | /analytics | Usage tracking and cost analysis |
| Settings | /settings | Credentials and configuration |

Sources: orchestrator/api/agents.py, orchestrator/api/chat.py, frontend/app (route structure)


Common Issues

Port Conflicts

Symptom: Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use

Solution: Change ports in .env:
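The variable names below are hypothetical; check .env.example for the actual port settings:

```shell
# Hypothetical names -- confirm against .env.example
FRONTEND_PORT=3001
BACKEND_PORT=8001
```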

Then restart: docker-compose up

Missing API Keys

Symptom: Agent creation succeeds but chat fails with No LLM credentials configured

Solution:

  1. Go to Settings → Credentials

  2. Add at least one LLM provider key

  3. Test by sending a chat message

Database Connection Errors

Symptom: Backend logs show Failed to connect to database

Solution:

  1. Ensure POSTGRES_PASSWORD is set in .env

  2. Check postgres service is healthy: docker-compose ps postgres

  3. Restart postgres: docker-compose restart postgres

Redis Connection Errors

Symptom: Backend logs show Redis connection test failed

Solution:

  1. Ensure REDIS_PASSWORD is set in .env

  2. Check redis service is healthy: docker-compose ps redis

  3. Redis is optional; removing REDIS_URL from .env disables the features that depend on it

Sources: orchestrator/config.py:36-80, orchestrator/core/redis/client.py:149-198


Development vs Production

Development Mode (Default)

Configured in docker-compose.yml:78-170 with:

  • Hot-reload enabled for both frontend and backend

  • Source code mounted as volumes

  • Development Dockerfile targets

  • API docs enabled at /docs

  • Less strict security headers

Restart after code changes:
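With hot-reload enabled, most code changes are picked up automatically. When a rebuild is needed (for example after dependency changes), it might look like:

```shell
# Rebuild images and restart the application services
docker-compose up --build backend frontend

# Or simply restart without rebuilding
docker-compose restart backend frontend
```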

Production Deployment

For production deployment to Railway, Heroku, or other platforms:

  1. Use production Dockerfile targets

  2. Set ENVIRONMENT=production in environment variables

  3. Set REQUIRE_AUTH=true to enforce Clerk authentication

  4. Configure external PostgreSQL and Redis services

  5. Set DATABASE_URL and REDIS_URL for managed databases

  6. Disable API docs by removing NEXTAUTH_URL

See Deployment & Infrastructure for detailed production setup.

Sources: orchestrator/Dockerfile:87-130, frontend/Dockerfile:83-115, docker-compose.yml:78-170


Next Steps

Now that you have Automatos AI running locally, proceed to:

  1. Installation & Setup - Detailed installation procedures, troubleshooting, and advanced configuration

  2. Configuration Guide - Comprehensive guide to environment variables, system settings, and credential management

  3. Quick Start Tutorial - Step-by-step tutorial for creating agents, workflows, and recipes

Core Documentation:

Advanced Topics:

Sources: orchestrator/main.py:1-1341 (complete application structure)

