PRD-108 — Complete Algorithm Specification
Inventor: Gerard Kavanagh
Date: March 21, 2026
Implementation: orchestrator/modules/context/adapters/vector_field.py
Git Evidence: Commits 674474722 and f2e4a4e6f
This document describes every algorithm, formula, data structure, and decision step in the Resonance Field Coordination system. Each section includes the mathematical definition, pseudocode, implementation reference, and rationale.
Table of Contents
1. System Constants
All values are configurable via environment variables. Defaults are tuned for mission durations of 1-48 hours.
Derived constants:
Implementation: orchestrator/config.py lines 393-401
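The constants used throughout this document can be collected into a small sketch. The variable and environment-variable names below are hypothetical (the real ones live in orchestrator/config.py); the values are taken from figures quoted later in this document (λ = 0.1/hour implied by the 6.93 h half-life, boost cap 2.0, archival threshold 0.05, co-access rate 2%).

```python
import math
import os

# Hypothetical names: the real constants live in orchestrator/config.py.
# Values are taken from figures quoted later in this document.
FIELD_DECAY_LAMBDA = float(os.getenv("FIELD_DECAY_LAMBDA", "0.1"))              # per hour
FIELD_ACCESS_BOOST = float(os.getenv("FIELD_ACCESS_BOOST", "0.05"))             # per access (inferred)
FIELD_BOOST_CAP = float(os.getenv("FIELD_BOOST_CAP", "2.0"))
FIELD_ARCHIVAL_THRESHOLD = float(os.getenv("FIELD_ARCHIVAL_THRESHOLD", "0.05"))
FIELD_COACCESS_RATE = float(os.getenv("FIELD_COACCESS_RATE", "0.02"))

# Derived: half-life of an untouched pattern.
HALF_LIFE_HOURS = math.log(2) / FIELD_DECAY_LAMBDA  # ~= 6.93 hours
```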
2. Data Structures
2.1 Pattern (stored in Qdrant)
Each knowledge pattern is a point in a Qdrant vector collection.
Implementation: vector_field.py lines 139-155
2.2 Query Result (returned to agents)
Implementation: vector_field.py lines 196-204
2.3 Stability Report
Implementation: vector_field.py lines 254-261
2.4 Field Collection (Qdrant)
Implementation: vector_field.py lines 75-93
2.5 Metric Event (Instrumentation)
Implementation: instrumentation.py lines 17-31
3. Algorithm 1: Field Creation
Purpose: Initialize a shared semantic field for a multi-agent mission.
Input:
- `team_agent_ids: list[int]` — IDs of agents participating in the mission
- `initial_data: dict[str, str]?` — Optional seed data (e.g., mission goal)
Output:
- `field_id: string` — UUID identifying the created field
Steps:
Rationale:
- UUID prevents collision across concurrent missions
- Payload indexes enable O(1) dedup lookups and filtered queries
- `agent_id=0` for seed data marks it as system-provided, not agent-generated
- Each mission gets its own collection for isolation and clean destruction
Implementation: vector_field.py lines 67-104
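A minimal in-memory sketch of the creation steps. The real implementation creates a Qdrant collection with payload indexes; this stand-in only illustrates the UUID, seeding, and `agent_id=0` conventions, and the helper name is hypothetical.

```python
import uuid

def create_field(team_agent_ids, initial_data=None, registry=None):
    """In-memory stand-in for Algorithm 1. Seed data is stored with
    agent_id=0 to mark it as system-provided, not agent-generated."""
    registry = registry if registry is not None else {}
    field_id = str(uuid.uuid4())      # collision-safe across concurrent missions
    registry[field_id] = {
        "agents": list(team_agent_ids),
        "patterns": [],               # stands in for Qdrant points
        "hash_index": {},             # content_hash -> pattern (dedup lookup)
    }
    for key, value in (initial_data or {}).items():
        registry[field_id]["patterns"].append(
            {"key": key, "value": value, "agent_id": 0, "strength": 1.0}
        )
    return field_id, registry
```

Each mission gets its own entry (its own collection in the real system), so destruction is a single delete.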
4. Algorithm 2: Pattern Injection
Purpose: An agent adds a knowledge pattern to the shared field.
Input:
- `context_id: string` — Field UUID
- `key: string` — Semantic label
- `value: string` — Knowledge content
- `agent_id: int` — Contributing agent
- `strength: float` — Injection strength (default 1.0)
Output: None (side effect: pattern added to field)
Steps:
Rationale:
- Content string is `"{key}: {value}"` — the key provides semantic context for the embedding
- SHA-256 dedup prevents the same knowledge from cluttering the field
- Reinforce-on-collision means repeated injection strengthens existing knowledge
- Boundary permeability allows future experimentation with injection barriers
- `access_count` starts at 0 — strength builds only through actual usage
Implementation: vector_field.py lines 116-156
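The injection flow can be sketched against a small in-memory stand-in. This is illustrative, not the real vector_field.py code: embedding generation is omitted, and the helper name is hypothetical.

```python
import hashlib
import time

def inject(field, key, value, agent_id, strength=1.0):
    """Sketch of Algorithm 2: content is "{key}: {value}", deduped by
    SHA-256; a collision reinforces the existing pattern instead of
    adding a duplicate."""
    content = f"{key}: {value}"
    content_hash = hashlib.sha256(content.encode()).hexdigest()
    existing = field["hash_index"].get(content_hash)
    if existing is not None:
        existing["access_count"] += 1          # reinforce-on-collision
        existing["last_accessed"] = time.time()
        return
    pattern = {
        "content": content, "agent_id": agent_id, "strength": strength,
        "access_count": 0,                     # builds only through usage
        "last_accessed": time.time(),
    }
    field["patterns"].append(pattern)
    field["hash_index"][content_hash] = pattern
```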
5. Algorithm 3: Content Deduplication
Purpose: Check if identical content already exists in the field.
Input:
- `context_id: string` — Field UUID
- `content_hash: string` — SHA-256 hex digest
Output:
- `existing_point` or `None`
Steps:
Rationale:
- SHA-256 is collision-resistant — two different contents will not produce the same hash
- Payload index on `content_hash` makes this an O(1) lookup, not a full scan
- Returns the full point so the caller can access its ID for reinforcement
Implementation: vector_field.py lines 283-293
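As a sketch, the dedup check reduces to one hash computation plus one indexed lookup; a plain dict stands in for the Qdrant payload index here, and the helper name is hypothetical.

```python
import hashlib

def find_existing(hash_index, content):
    """Algorithm 3 sketch: SHA-256 of the content keys an index, so the
    dedup check is a single lookup rather than a full scan. Returns the
    full stored pattern (so its ID is available for reinforcement) or None."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    return hash_index.get(digest)
```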
6. Algorithm 4: Semantic Query with Resonance Scoring
Purpose: An agent queries the field for relevant knowledge. This is the core algorithm.
Input:
- `context_id: string` — Field UUID
- `query: string` — Natural language query
- `agent_id: int` — Querying agent
- `top_k: int` — Maximum results to return (default 10)
Output:
- `results: list[QueryResult]` — Ranked by resonance score, descending
Steps:
Rationale:
Over-fetch at 3× ensures enough results survive archival filtering
Decay is computed at query time, not stored — patterns are never destructively aged
Resonance scoring (Algorithm 7) combines semantic relevance with temporal strength
Hebbian reinforcement (step 9) is a side effect of querying — reading strengthens
Results include both resonance score and raw cosine for transparency
Implementation: vector_field.py lines 160-221
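The ranking stage of the query can be sketched as follows, assuming cosine similarities have already been computed upstream (embedding and the 3× over-fetch are omitted). The decay and boost rates (0.1/hour, 0.05/access) are inferred from figures elsewhere in this document, and the function name is hypothetical.

```python
import math

def query_field(patterns, query_cosines, now_hours, top_k=10):
    """Sketch of Algorithm 4's ranking stage: decay is computed at query
    time (never stored), archived patterns (decayed strength < 0.05) are
    filtered out, and survivors are ranked by cos^2 * decayed strength."""
    results = []
    for p, cos in zip(patterns, query_cosines):
        age = now_hours - p["last_accessed_hours"]
        boost = min(1.0 + 0.05 * p["access_count"], 2.0)   # Algorithm 6
        decayed = p["strength"] * math.exp(-0.1 * age) * boost
        if decayed < 0.05:                                  # Algorithm 8 filter
            continue
        results.append({"content": p["content"], "cosine": cos,
                        "resonance": cos * cos * decayed})  # Algorithm 7
    results.sort(key=lambda r: r["resonance"], reverse=True)
    return results[:top_k]
```

Note that a 72 h-old pattern is filtered out even with a high cosine, while a fresh, moderately relevant pattern survives — semantic relevance alone cannot rescue a dead pattern.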
7. Algorithm 5: Temporal Decay
Purpose: Compute how much a pattern's strength has diminished since last access.
Mathematical Definition:

S(t) = S₀ × e^(−λ·Δt), where Δt is the time in hours since the pattern was last accessed and λ = 0.1/hour.

Decay curve: 90.5% of strength remains after 1 hour, 54.9% after 6 hours, and 9.07% after 24 hours.

Key property: Half-life = ln(2) / λ = 6.93 hours
Verified: Test 2 in stress tests — fresh pattern 1.0000, 24h-old pattern 0.0907
Implementation: vector_field.py line 277
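A two-line sketch of the decay computation, with λ = 0.1/hour (the value implied by the 6.93 h half-life):

```python
import math

LAMBDA = 0.1  # per hour, implied by the 6.93 h half-life above

def decayed_strength(strength, age_hours):
    """Algorithm 5: S(t) = S0 * exp(-lambda * age_hours)."""
    return strength * math.exp(-LAMBDA * age_hours)

# Reproduces the stress-test figures quoted above:
print(round(decayed_strength(1.0, 0.0), 4))   # fresh pattern: 1.0
print(round(decayed_strength(1.0, 24.0), 4))  # 24 h old: 0.0907
```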
8. Algorithm 6: Access Boost
Purpose: Frequently accessed patterns resist decay.
Mathematical Definition:

B(n) = min(1 + 0.05 × n, 2.0), where n is the pattern's access count. (The 2.0 cap is explicit; the 0.05 per-access rate is inferred from the verified figures below.)

Boost curve: B(0) = 1.0, B(10) = 1.5, B(20) = 2.0 (cap reached).

Key property: Cap at 2.0 prevents runaway reinforcement
Verified: Test 3 in stress tests — 10 accesses = strength 1.50, 0 accesses at 6h = 0.55
Implementation: vector_field.py lines 278-280
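A sketch of the boost curve. The 0.05 per-access rate is an inference (it reproduces the verified B(10) → 1.5 figure); the 2.0 cap is stated explicitly above.

```python
def access_boost(n, per_access=0.05, cap=2.0):
    """Algorithm 6: linear per-access boost, hard-capped.
    The 0.05 rate is inferred from stress Test 3; the cap is explicit."""
    return min(1.0 + per_access * n, cap)

print(access_boost(0))    # 1.0 -- no accesses, no boost
print(access_boost(10))   # 1.5 -- matches stress Test 3
print(access_boost(100))  # 2.0 -- cap holds no matter how popular
```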
9. Algorithm 7: Resonance Score Computation
Purpose: Combine semantic relevance with temporal strength into a single ranking score.
Mathematical Definition:

resonance = cos²(query, pattern) × decayed_strength

Full expansion:

resonance = cos² × strength × e^(−0.1·Δt) × min(1 + 0.05 × n, 2.0)
Why squared cosine:

| Cosine (interpretation) | Raw | Squared | Effect |
|---|---|---|---|
| 0.95 (highly relevant) | 0.95 | 0.9025 | Preserved (95% → 90%) |
| 0.80 (relevant) | 0.80 | 0.6400 | Slightly reduced |
| 0.50 (marginal) | 0.50 | 0.2500 | Halved — pushed down |
| 0.30 (noise) | 0.30 | 0.0900 | Suppressed to 9% |
| 0.10 (irrelevant) | 0.10 | 0.0100 | Effectively zero |
Squaring amplifies the gap between relevant and irrelevant matches. A pattern with cosine 0.95 scores about 10× higher than one with cosine 0.30 after squaring (0.9025 vs 0.0900), versus only about 3× with raw cosine. This creates sharper relevance discrimination.
Example calculation:
Implementation: vector_field.py line 194
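A worked sketch combining Algorithms 5–7, using the rates inferred above (λ = 0.1/hour, 0.05/access):

```python
import math

def resonance(cosine, strength, age_hours, access_count):
    """Algorithm 7: cos^2 relevance gating times temporal strength
    (decay rate 0.1/hour and boost rate 0.05/access as inferred above)."""
    boost = min(1.0 + 0.05 * access_count, 2.0)
    decayed = strength * math.exp(-0.1 * age_hours) * boost
    return cosine ** 2 * decayed

# Fresh, highly relevant pattern vs. fresh noise:
print(round(resonance(0.95, 1.0, 0.0, 0), 4))  # 0.9025
print(round(resonance(0.30, 1.0, 0.0, 0), 4))  # 0.09
```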
10. Algorithm 8: Archival Filtering
Purpose: Exclude effectively dead patterns from query results.
Rule: a pattern is excluded from query results when its decayed strength falls below the archival threshold of 0.05.

Time-to-archival (no access, initial strength 1.0): e^(−0.1·t) = 0.05 → t = ln(20) / 0.1 ≈ 30 hours.

A pattern with no accesses is archived after ~30 hours.

With access boost (n=20, cap reached): 2.0 × e^(−0.1·t) = 0.05 → t = ln(40) / 0.1 ≈ 36.9 hours.

Maximum reinforcement extends life by ~7 hours.
Key property: Patterns are NOT deleted — they remain in the collection but are invisible to queries. Direct retrieval by ID still works.
Verified: Test 4 in stress tests — 72h-old pattern at strength 0.000747, well below 0.05
Implementation: vector_field.py lines 190-191
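The archival horizons quoted above follow directly from the decay formula; this sketch reproduces both numbers:

```python
import math

LAMBDA, THRESHOLD = 0.1, 0.05

# No accesses: solve 1.0 * exp(-LAMBDA * t) = THRESHOLD for t.
t_plain = math.log(1.0 / THRESHOLD) / LAMBDA

# Boost cap reached (B = 2.0): solve 2.0 * exp(-LAMBDA * t) = THRESHOLD.
t_boosted = math.log(2.0 / THRESHOLD) / LAMBDA

print(round(t_plain, 1))              # 30.0 hours, as stated above
print(round(t_boosted, 1))            # 36.9 hours
print(round(t_boosted - t_plain, 2))  # 6.93 -- the "~7 hours" extension
```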
11. Algorithm 9: Hebbian Reinforcement (Single)
Purpose: When a duplicate injection is detected, reinforce the existing pattern instead of creating a new one.
Input:
- `context_id: string` — Field UUID
- `point_id: string` — ID of existing pattern to reinforce
Steps:
Rationale:
- Incrementing `access_count` increases future `B(n)` (Algorithm 6)
- Updating `last_accessed` resets the decay clock (Algorithm 5)
- Combined effect: duplicate injection makes the pattern stronger and younger
- The `strength` field is NOT modified here — only access metadata changes
Implementation: vector_field.py lines 294-312
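A sketch of the single-pattern reinforcement (metadata only — stored strength is untouched); the helper name is hypothetical:

```python
import time

def reinforce_single(pattern):
    """Algorithm 9: bump the access count (raising future B(n)) and
    reset the decay clock. Stored strength is deliberately untouched."""
    pattern["access_count"] += 1
    pattern["last_accessed"] = time.time()

p = {"access_count": 3, "last_accessed": 0.0, "strength": 0.8}
reinforce_single(p)  # p is now "stronger and younger" via metadata alone
```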
12. Algorithm 10: Hebbian Reinforcement (Batch with Co-Access)
Purpose: Reinforce all patterns returned in a query result. Patterns retrieved together get an additional co-access bonus.
Input:
- `context_id: string` — Field UUID
- `point_ids: list[string]` — IDs of all patterns in the query result
Mathematical Definition (Co-Access Bonus): each pattern retrieved in the same batch receives a 2% strength bonus, capped at FIELD_REINFORCE_CAP × initial_strength.
Co-access bonus table:
Steps:
Key distinction from Algorithm 9:
- Algorithm 9 (single): Only updates `access_count` and `last_accessed`
- Algorithm 10 (batch): Also mutates the stored `strength` via co-access bonus
Rationale for co-access:
- "Neurons that fire together wire together" (Hebb, 1949)
- Patterns frequently retrieved together are semantically related
- Strengthening co-retrieved patterns makes clusters of related knowledge persist longer
- The 2% rate is conservative — prevents runaway reinforcement while still rewarding association
- Cap at `FIELD_REINFORCE_CAP × initial_strength` prevents unbounded growth
Implementation: vector_field.py lines 314-356
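A hedged sketch of the batch path. Only the 2% rate and the existence of the cap come from this document; the multiplicative bonus form and the default cap value of 2.0 × initial strength are assumptions, and the function name is hypothetical.

```python
import time

def reinforce_batch(patterns, initial_strength=1.0, reinforce_cap=2.0,
                    coaccess_rate=0.02):
    """Algorithm 10 sketch: the single-pattern bookkeeping of Algorithm 9,
    plus a 2% co-access strength bonus capped at reinforce_cap *
    initial_strength (bonus form and cap default are assumptions)."""
    for p in patterns:
        p["access_count"] += 1
        p["last_accessed"] = time.time()
        p["strength"] = min(p["strength"] * (1 + coaccess_rate),
                            reinforce_cap * initial_strength)
```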
13. Algorithm 11: Field Stability Measurement
Purpose: Quantify how converged the field is — are agents building shared understanding or is knowledge fragmented?
Mathematical Definition:

stability = 0.6 × avg_strength + 0.4 × organization

Component interpretation:

| Component | Range | Interpretation |
|---|---|---|
| avg_strength | [0.0, ~2.0] | How alive is the field? Fresh, accessed patterns → high. Old, ignored patterns → low. |
| organization | [0.0, 1.0] | How uniform is the strength distribution? All patterns at similar strength → high (agents referencing the same things). Wide variance → low (a mix of hot and cold). |
| stability | [0.0, ~1.6] | Weighted composite: 60% aliveness, 40% uniformity. |
Example calculations:
Steps:
Verified: Test 6 in stress tests:
Empty: stability 0.0
1 fresh: stability 1.0
11 fresh: stability 1.0
Mixed (fresh + old): stability 0.57
Implementation: vector_field.py lines 224-261
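A sketch of the stability computation. The 60/40 weighting is stated above; the organization term used here (1 minus the standard deviation of strengths, floored at zero) is an assumption chosen to reproduce the uniform-field cases, not the exact formula in vector_field.py.

```python
import statistics

def field_stability(strengths):
    """Algorithm 11 sketch using the stated 60/40 weighting. The
    organization term is an assumed variance-based measure: uniform
    strengths -> 1.0, widely spread strengths -> toward 0.0."""
    if not strengths:
        return 0.0                        # empty field
    avg = sum(strengths) / len(strengths)
    organization = max(0.0, 1.0 - statistics.pstdev(strengths))
    return 0.6 * avg + 0.4 * organization

print(field_stability([]))          # 0.0 -- empty field
print(field_stability([1.0]))       # 1.0 -- single fresh pattern
print(field_stability([1.0] * 11))  # 1.0 -- uniform fresh field
```

Any mix of fresh and decayed patterns lowers both terms, which is the behavior the mixed-field stress case exercises.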
14. Algorithm 12: Field Destruction
Purpose: Clean up a field when the mission ends.
Steps:
Rationale:
Deleting the entire collection is O(1) in Qdrant — no need to iterate points
Failure is non-fatal — the field may already have been destroyed (idempotent)
No data needs to be preserved after mission completion (experiment metrics are captured separately by instrumentation)
Implementation: vector_field.py lines 106-112
15. Algorithm 13: Mission Lifecycle Integration
Purpose: Automatically manage field lifecycle during mission execution.
Steps in coordinator_service.py:
Implementation: coordinator_service.py lines 85-210
16. Algorithm 14: A/B Backend Selection
Purpose: Select between vector field and Redis baseline for controlled experiments.
Steps:
Key property: The InstrumentedSharedContext wrapper captures metrics for BOTH backends identically, enabling fair comparison.
Implementation: factory.py lines 19-50
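A sketch of the selection logic. The `CONTEXT_BACKEND` flag name and the stand-in classes are hypothetical; the point is that both backends pass through the identical wrapper.

```python
import os

class VectorFieldContext:
    """Stand-in for the Qdrant-backed adapter."""
    name = "vector_field"

class RedisContext:
    """Stand-in for the Redis baseline adapter."""
    name = "redis"

class InstrumentedSharedContext:
    """Stand-in for the metric wrapper: both backends get the exact same
    wrapper, so measurement cannot favor one side."""
    def __init__(self, inner):
        self.inner = inner

def make_backend():
    # Hypothetical flag name; the real selection lives in factory.py.
    kind = os.getenv("CONTEXT_BACKEND", "vector_field")
    inner = RedisContext() if kind == "redis" else VectorFieldContext()
    return InstrumentedSharedContext(inner)
```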
17. Algorithm 15: Experiment Instrumentation
Purpose: Capture metrics for every field operation to support A/B comparison.
Steps (for each operation):
Aggregation properties on ExperimentMetrics:
Implementation: instrumentation.py lines 96-212
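A sketch of the per-operation capture. The event fields shown are illustrative, not the exact MetricEvent schema from instrumentation.py.

```python
import time

def instrument(op_name, fn, events, *args, **kwargs):
    """Wrap one field operation: record its name, wall-clock latency,
    and success flag as a metric event, whether or not it raises."""
    start = time.perf_counter()
    ok = False
    try:
        result = fn(*args, **kwargs)
        ok = True
        return result
    finally:
        events.append({
            "op": op_name,
            "latency_ms": (time.perf_counter() - start) * 1000.0,
            "ok": ok,
        })
```

Because the same wrapper runs for both backends, aggregates such as mean latency and error rate are directly comparable in the A/B report.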
18. Complete Query Pipeline (End-to-End)
This traces a single agent query from request to response through every algorithm.
19. Complete Inject Pipeline (End-to-End)
20. Proof Results
20.1 Unit Tests (57/57 passing)
| Test class | Tests | Coverage |
|---|---|---|
| TestComputeDecayedStrength | 11 | Decay math, access boost, cap boundary, scaling, combined effects |
| TestCreateContext | 5 | UUID generation, collection creation, payload indexes, initial seeding |
| TestInject | 6 | Embedding generation, payload structure, dedup, reinforce-on-collision |
| TestQuery | 9 | Resonance formula, archival filtering, sorting, over-fetch, Hebbian trigger |
| TestDestroyContext | 3 | Collection deletion, error handling |
| TestHebbian | 7 | Batch retrieval, access count, timestamp update, co-access bonus |
| TestStability | 7 | Empty field, single pattern, uniform vs varied, organization metric |
| TestHelpers | 9 | Hash lookup, single reinforcement, filter construction |
20.2 Stress Tests (16/16 assertions passing)
| Category | Assertion | Result |
|---|---|---|
| Resonance ranking | Python ML pattern in top 2 for ML query | Position 2 |
| Resonance ranking | Go concurrency NOT #1 for ML query | Position 5 |
| Temporal decay | Fresh > 24h-old strength | 1.0000 > 0.0907 |
| Temporal decay | 24h-old below 0.15 | 0.0907 |
| Hebbian | Popular has 10 accesses | 10 |
| Hebbian | Lonely has 0 accesses | 0 |
| Hebbian | Popular stronger than lonely | 1.50 > 0.55 |
| Archival | Alive pattern returned | Yes |
| Archival | 72h-old below threshold | 0.000747 < 0.05 |
| Stress | 150 patterns from 50 agents | 150 |
| Stress | Returns 5 results | 5 |
| Stress | >50 queries/sec | 4,000 qps |
| Stability | Empty = 0 | 0.0 |
| Stability | One pattern > 0 | 1.0 |
| Stability | Old patterns reduce stability | 0.57 < 1.0 |
| Telephone | Agent C finds finding #7 | Found at position 3 |
20.3 A/B Comparison (Vector Field vs Message Passing)
Same 3-agent EU AI Act mission: 10 research findings, 3 analyses, 7 test queries. This is a controlled preliminary A/B comparison, not a general proof of superiority across all missions — the measured result is useful evidence for the mechanism, but broader claims require more mission types, larger samples, and production-scale evaluation.
| Metric | Vector field | Message passing |
|---|---|---|
| Context coverage | 86% (6/7) | 43% (3/7) |
| Information loss | 1 finding | 4 findings |
| Patterns visible to Agent C | 13 (all) | 6 (only what B forwarded) |
Findings lost by message passing: biometric_ban, social_scoring, employee_monitoring, transparency_rules — all findings Agent B never referenced but Agent C needed.
Finding lost by vector field: penalties. This miss is plausibly attributable to the bag-of-words embedding used in the test rather than to the mechanism itself; whether real 2048-dim embeddings would recover it is a hypothesis that remains to be verified.
Appendix A: File Inventory
| File | Lines | Purpose |
|---|---|---|
| core/ports/context.py | 63 | SharedContextPort abstract interface |
| modules/context/adapters/vector_field.py | 357 | Vector field implementation (Algorithms 1-12) |
| modules/context/adapters/redis_context.py | 199 | Redis baseline implementation |
| modules/context/instrumentation.py | 213 | Metric capture wrapper (Algorithm 15) |
| modules/context/factory.py | 51 | Backend selection factory (Algorithm 14) |
| modules/context/experiment.py | 89 | A/B comparison report generator |
| modules/tools/discovery/actions_field.py | 48 | Tool definitions for agents |
| modules/tools/discovery/handlers_field.py | 152 | Tool handler implementations |
| services/coordinator_service.py | +143 | Mission lifecycle integration (Algorithm 13) |
| config.py | +9 | Configuration constants |
| tests/test_vector_field.py | 770 | 57 unit tests |
| tests/demo_field_stress.py | 403 | 16 stress assertions |
| tests/demo_ab_comparison.py | 374 | A/B comparison test |
| tests/demo_field.py | 275 | Integration demo |
Total new code: ~3,100 lines across 14 files
Appendix B: Prior Art Differentiation
| Capability | This system | Prior system A | Prior system B | Prior system C |
|---|---|---|---|---|
| Multi-agent shared field | YES | No (single agent) | YES (symbolic) | YES (spatial) |
| High-dim semantic embeddings | YES (2048) | YES (2D reduced) | No | No |
| cos² resonance scoring | YES | No (PDE diffusion) | No | No |
| Exponential temporal decay | YES | YES (PDE-based) | No | YES |
| Hebbian access reinforcement | YES | No | No | YES (pheromone) |
| Co-access bonus | YES | No | No | No |
| Content-hash dedup with reinforce | YES | No | No | No |
| Field stability metric | YES | No | No | No |
| Mission-bound lifecycle | YES | No | No | No |
| Agent-callable tools | YES | No | No | No |
| A/B experiment infrastructure | YES | No | No | No |
| Production LLM orchestrator integration | YES | No | No | No |
Every algorithm in this document has a corresponding implementation in code, referenced by file and line number. Every measured result has been produced by running the actual code against a real Qdrant instance. Nothing in this document is theoretical.
Gerard Kavanagh — March 21, 2026