# PRD-26: System Settings Audit - Current Status

## Executive Summary

**Critical issues found:**

- ❌ Settings are saved but may be reset by the seed script
- ❌ Many settings shown in the UI are NOT actually read by services
- ❌ The CodeGraph tab shows "Select LLM Provider" yet CodeGraph still works, because it falls back to hardcoded defaults
- ❌ Settings reset after a page reload (most likely the seed script overwriting saved values)
## Settings Usage Matrix

### ✅ General Settings Tab

| Setting | Used? | Details | Status |
| --- | --- | --- | --- |
| `environment` | ❌ NO | Config uses `os.getenv("ENVIRONMENT")` | Not wired |
| `log_level` | ❌ NO | Python logging uses `os.getenv("LOG_LEVEL")` | Not wired |
| `embedding_model` | ❌ NO | Services hardcoded | Not wired |
| `openai_embedding_model` | ❌ NO | Services hardcoded | Not wired |
| `embedding_cache_dir` | ❌ NO | Not used | Not implemented |
| `embedding_max_seq_length` | ❌ NO | Not used | Not implemented |
| `vector_store_type` | ❌ NO | Not used | Not implemented |
| `vector_store_dimensions` | ❌ NO | Not used | Not implemented |
| `chunk_size` | ❌ NO | Not used | Not implemented |
| `chunk_overlap` | ❌ NO | Not used | Not implemented |
| `max_context_length` | ❌ NO | Not used | Not implemented |
| `deploy_host` | ❌ NO | No deployment service | Not used |
| `deploy_port` | ❌ NO | No deployment service | Not used |
| `deploy_user` | ❌ NO | No deployment service | Not used |
| `deploy_key_path` | ❌ NO | No deployment service | Not used |
| `deploy_enabled` | ❌ NO | No deployment service | Not used |
| `nextauth_secret` | ⚠️ PARTIAL | Frontend env var (if NextAuth exists) | Partially used |
| `nextauth_url` | ⚠️ PARTIAL | Frontend env var (if NextAuth exists) | Partially used |
| `next_public_api_url` | ✅ YES | Frontend API client | Used |
### ✅ Orchestrator LLM Settings Tab

| Setting | Used? | Details | Status |
| --- | --- | --- | --- |
| `orchestrator_llm.provider` | ✅ YES | `llm_provider/manager.py` line 91 | WORKING |
| `orchestrator_llm.model` | ✅ YES | `llm_provider/manager.py` line 92 | WORKING |
| `orchestrator_llm.temperature` | ✅ YES | `llm_provider/manager.py` line 245 | WORKING |
| `orchestrator_llm.max_tokens` | ✅ YES | `llm_provider/manager.py` line 246 | WORKING |
| `orchestrator_llm.top_p` | ❌ NO | Not in `LLMConfig` | Not wired |
| `orchestrator_llm.frequency_penalty` | ❌ NO | Not in `LLMConfig` | Not wired |
| `orchestrator_llm.presence_penalty` | ❌ NO | Not in `LLMConfig` | Not wired |
| `orchestrator_llm.streaming_enabled` | ❌ NO | Not in `LLMConfig` | Not wired |
| `orchestrator_llm.timeout_seconds` | ❌ NO | Not in `LLMConfig` | Not wired |
| `orchestrator_llm.max_context_length` | ❌ NO | Not in `LLMConfig` | Not wired |
| `orchestrator_llm.cache_enabled` | ❌ NO | Not implemented | Not wired |
| `orchestrator_llm.retry_count` | ❌ NO | Not implemented | Not wired |
| `orchestrator_llm.retry_delay` | ❌ NO | Not implemented | Not wired |
### ❌ CodeGraph Settings Tab

| Setting | Used? | Details | Status |
| --- | --- | --- | --- |
| `codegraph.provider` | ❌ NO | CodeGraph uses hardcoded OpenAI | NOT USED |
| `codegraph.model` | ❌ NO | CodeGraph uses hardcoded GPT-3.5-turbo | NOT USED |
| `codegraph.embedding_model` | ❌ NO | CodeGraph uses hardcoded text-embedding-ada-002 | NOT USED |
| `codegraph.temperature` | ❌ NO | CodeGraph doesn't use an LLM for embeddings | Not applicable |
| `codegraph.max_tokens` | ❌ NO | CodeGraph doesn't use an LLM for embeddings | Not applicable |
| `codegraph.analysis_settings.*` | ❌ NO | Not implemented | Not wired |
| `codegraph.performance_settings.*` | ❌ NO | Not implemented | Not wired |

**Note:** CodeGraph works because it uses hardcoded OpenAI client initialization. The settings are displayed but never consumed.
### ❌ Logging Settings Tab

| Setting | Used? | Details | Status |
| --- | --- | --- | --- |
| All logging settings | ❌ NO | Python logging configured in code | NOT USED |

**Note:** Python logging is configured in `utils/logging_utils.py` and `main.py`. The settings tab exists, but nothing reads from it.
### ❌ Rate Limiting Settings Tab

| Setting | Used? | Details | Status |
| --- | --- | --- | --- |
| All rate limiting settings | ❌ NO | No rate limiting middleware | NOT USED |

**Note:** `mcp_bridge.py` has rate limiting, but it reads environment variables, not settings.
### ⚠️ API Keys Settings Tab

| Setting | Used? | Details | Status |
| --- | --- | --- | --- |
| All API key settings | ⚠️ DUPLICATE | Same as the General tab | REDUNDANT |

**Note:** This tab duplicates the General settings and should be merged or removed.
## Root Cause Analysis

### Why Settings Don't Persist

**Seed script may overwrite**
- The seed script runs on database init
- If it runs after a settings save, it may overwrite the saved values
- Need to verify the seed script logic

**Settings do save to the database**
- The `bulkUpdateSettings()` API endpoint exists and works
- Settings ARE saved to the database
- Issue: settings may still be reset when:
  - the seed script runs again
  - the database is reinitialized
  - settings are refreshed from seed data

**Frontend reload issue**
- The frontend loads settings from `/api/system-settings/by-category`
- If the seed script runs, settings reset to their defaults
- Need to ensure the seed script NEVER overwrites existing values
### Why Settings Don't Work

**Services use hardcoded values**
- CodeGraph: `OpenAI(api_key=openai_api_key)` is hardcoded
- RAG: `model="text-embedding-ada-002"` is hardcoded
- Config: `os.getenv("ENVIRONMENT")` instead of settings

**Settings are not read by services**
- Services don't call `get_system_setting()`
- Services use environment variables or hardcoded values
- Need to wire services to read from settings
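The pattern services should adopt can be sketched as a small accessor that prefers the saved setting, then the environment variable, then a default. This is a hypothetical helper, not the current `get_system_setting()` signature:

```python
import os

def get_system_setting(saved: dict, key: str, default=None):
    """Prefer the saved DB value, then an environment variable, then the default.

    `saved` stands in for a settings row loaded from the database; the
    key-to-env-var mapping (upper-casing) is an assumption for illustration.
    """
    value = saved.get(key)
    if value not in (None, ""):
        return value
    return os.environ.get(key.upper(), default)
```

With this in place, a service would call the accessor instead of `os.getenv()` directly, so a value saved in the UI takes effect without redeploying.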
**Missing settings integration**
- No middleware to inject settings
- No settings reload mechanism
- No real-time settings updates
## Immediate Action Items

### Critical (Fix First)

**Fix Seed Script ✅ PRIORITY 1**
- Ensure the seed script NEVER overwrites existing values
- Only set `default_value`, never `value`, when a setting already exists
- Test: save a setting, run the seed script, verify the setting persists
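The desired seed behavior can be sketched with an upsert that refreshes only `default_value` and leaves the user-saved `value` untouched. Table and column names here are assumptions (SQLite stands in for whatever database the orchestrator uses):

```python
import sqlite3

def seed_settings(conn: sqlite3.Connection, defaults: dict) -> None:
    """Idempotent seed: insert missing rows, refresh defaults, never touch `value`."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS system_settings "
        "(key TEXT PRIMARY KEY, value TEXT, default_value TEXT)"
    )
    for key, default in defaults.items():
        # New rows start with value = NULL; on conflict only default_value is
        # updated, so a value the user saved in the UI survives re-seeding.
        conn.execute(
            "INSERT INTO system_settings (key, value, default_value) "
            "VALUES (?, NULL, ?) "
            "ON CONFLICT(key) DO UPDATE SET default_value = excluded.default_value",
            (key, default),
        )
    conn.commit()
```

The test from the action item maps directly onto this: seed, save a value, seed again, and assert the saved value is still there.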
**Verify Settings Save ✅ PRIORITY 2**
- Add a verification endpoint: `GET /api/system-settings/verify/{category}/{key}`
- Add frontend verification after save
- Add logging to track saves
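The core of the verification step is just a diff between what the client saved and what the endpoint reads back; a minimal sketch (function name is an assumption):

```python
def verify_saved(expected: dict, read_back: dict) -> list:
    """Return the keys whose stored value differs from what was just saved.

    An empty list means the save round-tripped; anything else should be
    surfaced to the user and logged.
    """
    return [key for key, value in expected.items() if read_back.get(key) != value]
```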
**Wire CodeGraph Settings ✅ PRIORITY 3**
- Update `codegraph_service.py` to use settings
- Use the LLM service instead of a direct OpenAI client
- Read `codegraph.provider`, `codegraph.model`, and `codegraph.embedding_model`
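One low-risk way to wire this is to overlay saved settings on the values CodeGraph currently hardcodes, so behavior is unchanged until a user actually edits the tab. A sketch (the merge helper is hypothetical; the defaults are the hardcoded values from the table above):

```python
# Defaults matching what codegraph_service.py hardcodes today.
CODEGRAPH_DEFAULTS = {
    "provider": "openai",
    "model": "gpt-3.5-turbo",
    "embedding_model": "text-embedding-ada-002",
}

def resolve_codegraph_config(saved: dict) -> dict:
    """Overlay saved codegraph.* settings on the current hardcoded defaults."""
    return {key: saved.get(key) or default for key, default in CODEGRAPH_DEFAULTS.items()}
```

This also explains the symptom in the summary: with an empty `saved` dict the service keeps working on defaults even though the UI shows "Select LLM Provider".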
### High Priority

**Wire LLM Parameters ✅ PRIORITY 4**
- Add `top_p`, `frequency_penalty`, and `presence_penalty` to `LLMConfig`
- Read them from settings in `llm_provider/manager.py`
- Pass them through to the LLM clients
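The `LLMConfig` extension might look like the sketch below. The existing field names and defaults are assumptions about `llm_provider/manager.py`; the new fields come from the usage matrix above:

```python
from dataclasses import dataclass

@dataclass
class LLMConfig:
    provider: str
    model: str
    temperature: float = 0.7   # existing field (default assumed)
    max_tokens: int = 4096     # existing field (default assumed)
    # New fields to read from settings and pass through to the clients:
    top_p: float = 1.0
    frequency_penalty: float = 0.0
    presence_penalty: float = 0.0
```

Defaulting the new fields to the providers' neutral values keeps existing behavior identical until a user changes them in the UI.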
**Wire General Settings ✅ PRIORITY 5**
- Update `config.py` to read `environment` from settings
- Update logging to read `log_level` from settings
- Wire `next_public_api_url` (already used in the frontend)
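For `log_level`, wiring means checking the settings store before the `LOG_LEVEL` env var the code reads today. A sketch (function name is an assumption):

```python
import logging
import os

def resolve_log_level(saved: dict) -> int:
    """Saved setting first, then the LOG_LEVEL env var, then INFO."""
    name = (saved.get("log_level") or os.environ.get("LOG_LEVEL") or "INFO").upper()
    return logging.getLevelName(name)
```

The same settings-then-env-then-default chain applies to `environment` in `config.py`.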
### Medium Priority

**Remove Unused Settings Tabs ✅ PRIORITY 6**
- Logging tab: remove it, or implement a logging service
- Rate Limiting tab: remove it, or implement rate limiting
- API Keys tab: merge it with General, or remove it

**Add Settings Usage Indicators ✅ PRIORITY 7**
- Show an "Active" or "Not Used" badge in the UI
- Help users understand which settings matter
- Add tooltips explaining usage
## Testing Checklist

- [ ] Settings persistence
- [ ] Settings usage
- [ ] Settings validation
## Files to Review/Modify

### Backend (Critical)

- `orchestrator/seeds/seed_system_settings.py` - fix overwrite issue
- `orchestrator/services/llm_provider/manager.py` - wire all LLM parameters
- `orchestrator/services/codegraph_service.py` - wire CodeGraph settings
- `orchestrator/config.py` - wire general settings
- `orchestrator/api/system_settings.py` - add verification endpoint

### Frontend (Critical)

- `frontend/components/settings/SystemSettingsTab.tsx` - add save verification
- `frontend/components/settings/CodeGraphSettingsTab.tsx` - ensure settings save
- `frontend/components/settings/OrchestratorLLMSettingsTab.tsx` - wire all parameters
- `frontend/components/settings/GeneralSettingsTab.tsx` - remove unused settings or wire them
- `frontend/components/settings/SystemLoggingSettingsTab.tsx` - remove or implement
- `frontend/components/settings/APIRateLimitingSettingsTab.tsx` - remove or implement
- `frontend/components/settings/BackendAPIKeysSettingsTab.tsx` - merge or remove
## Recommendations

### Short Term (This PR)

- ✅ Fix the seed script so it never overwrites saved values
- ✅ Verify settings save correctly
- ✅ Wire CodeGraph settings (critical for user experience)
- ✅ Wire the remaining LLM parameters

### Medium Term (Next PR)

- Wire general settings (`environment`, `log_level`)
- Remove unused settings tabs
- Add settings usage indicators

### Long Term (Future)

- Add a settings audit log
- Add settings rollback capability
- Add real-time settings reload
- Add settings import/export
## Conclusion

**Current state:** The settings infrastructure exists but is not fully utilized. Many settings are "UI-only" and never reach the services.

**Goal:** Make every setting shown in the UI actually functional, or remove it if it isn't needed.

**Priority:** Fix the seed script first (this stops settings from being reset), then wire up the critical settings (CodeGraph, LLM parameters).