Resonance Field Coordination: A Shared Semantic Vector Field for Multi-Agent LLM Task Orchestration
Technical Disclosure Document
Inventor: Gerard Kavanagh
Date of First Reduction to Practice: March 21, 2026
Git Commit: 674474722ac998c06a016cda4c2e5d05f9fbda68
Repository: Automatos AI Platform (private)
Status: Working implementation with verified test results
1. Abstract
This document discloses a system and method for coordinating multiple large language model (LLM) agents through a shared semantic vector field with biologically-inspired dynamics. Unlike conventional multi-agent architectures that rely on discrete message passing between agents, this system creates a continuous shared embedding space where agent-generated knowledge patterns are injected, queried by semantic meaning, temporally decayed, and reinforced through usage — enabling any agent to discover any other agent's contributions by relevance alone, without requiring explicit inter-agent communication channels.
The system eliminates the "telephone game" problem inherent in sequential multi-agent pipelines, where information available to early agents is lost or distorted as it passes through intermediate agents who may not recognize its relevance to downstream consumers.
2. Problem Statement
2.1 The Telephone Game in Multi-Agent Systems
In conventional multi-agent LLM architectures (AutoGen, CrewAI, LangGraph, OpenAI Swarms), agents coordinate through message passing: Agent A produces output, which is forwarded to Agent B, whose output is forwarded to Agent C.
Agent A ──message──> Agent B ──message──> Agent C

This creates a fundamental information bottleneck: Agent C can only access Agent A's knowledge if Agent B explicitly includes it in its output. If Agent A produces 20 findings and Agent B references only 3 of them, the other 17 are invisible to Agent C — even if finding #7 is directly relevant to Agent C's task.
Every intermediate agent acts as an information filter. Each hop loses context. This is the multi-agent equivalent of the telephone game.
2.2 Existing Approaches and Their Limitations
| System | Coordination mechanism | Limitation |
|---|---|---|
| CrewAI | Sequential/hierarchical message passing | Information filtered at each hop |
| AutoGen (Microsoft) | Conversational message exchange | Bounded by context window; no persistence |
| LangGraph | Shared state dictionary passed between nodes | No semantic retrieval; manual key management |
| OpenAI Swarms | Conversation handoff between agents | No shared memory; full context transfer |
| Google A2A | Agent-to-agent protocol messages | Explicit point-to-point; no broadcast by relevance |
| Mem0 | Per-user memory layer with vector search | Per-user, not shared between agents |
| Letta/MemGPT | Per-agent long-term memory | Agent-local; no cross-agent field |
No existing system provides a shared semantic space where multiple agents inject knowledge and any agent can query by meaning with dynamic relevance surfacing.
3. Prior Art
3.1 Blackboard Architecture (Hayes-Roth, 1985)
The blackboard architecture introduced a shared workspace ("blackboard") where multiple independent "knowledge sources" read and write. A control component decides which knowledge source acts next.
Relationship to this invention: The blackboard is the conceptual ancestor — shared workspace, multiple contributors, independent readers. However, blackboard systems use symbolic representations, not continuous vector spaces. They have no semantic retrieval, no temporal decay, no usage-based reinforcement, and no resonance scoring. Knowledge sources read the blackboard by matching rules, not by querying meaning.
Patents: US6574653B1 (blackboard-centric layered software architecture), US5506999A (event-driven blackboard processing). Both describe symbolic blackboard systems without vector embeddings, decay, or reinforcement.
3.2 Stigmergy (Grasse, 1959; Dorigo et al., 1996)
Stigmergy is indirect coordination through environmental modification — e.g., ants depositing pheromones that decay over time and are reinforced by subsequent traversal. The decay function rho(t+1) = (1-alpha) * rho(t) + beta is structurally similar to the temporal decay in this invention.
Relationship to this invention: Stigmergy provides the biological analogue for decay and reinforcement in a shared medium. However, stigmergic systems operate on spatial pheromone fields (2D/3D grids), not on high-dimensional semantic embedding spaces. They have no concept of meaning-based retrieval. The "agents" in stigmergy are simple reactive entities, not LLMs producing complex textual knowledge.
3.3 Dynamic Neural Field Theory (Amari, 1977)
Amari's neural field equation describes continuous activation fields across neural populations:

tau * du(x,t)/dt = -u(x,t) + integral[ w(x - x') * f(u(x',t)) ] dx' + h + s(x,t)

where u is activation, w is the connectivity kernel, f is the firing rate function, h is the resting level, and s is external input. Activation peaks compete, strong peaks suppress weak ones, and sustained stimulation forms stable attractors.
Relationship to this invention: The resonance scoring, temporal decay, and Hebbian reinforcement in this system directly parallel neural field dynamics. However, Amari's theory describes biological neural populations, not artificial LLM agents operating on text embeddings. This invention applies neural field principles to a fundamentally different substrate (LLM-generated semantic vectors) for a fundamentally different purpose (multi-agent task coordination).
3.4 Field-Theoretic Memory for AI Agents (Mitra, 2026)
arXiv paper 2602.21220, published January 31, 2026 — seven weeks before this implementation. It proposes treating agent memory as continuous fields governed by partial differential equations (a modified heat equation with decay and source terms), evaluated on conversation benchmarks (LoCoMo, LongMemEval), and reports +116% F1 on multi-session reasoning.
Relationship to this invention: This is the closest prior art. Both systems use continuous field dynamics with decay for agent memory. However, critical differences exist:
| Mitra (2026) | This Invention |
|---|---|
| PDE-based (heat equation with diffusion) | Direct vector search with post-hoc resonance scoring |
| 2D semantic manifold (dimensionality reduction) | Native high-dimensional embedding space (2048-dim) |
| Single-agent memory enhancement | Multi-agent shared coordination field |
| Diffusion spreads activation automatically | Semantic similarity surfaces relevance on query |
| Evaluated on conversation benchmarks | Integrated into production mission orchestration |
| Memory persistence for one agent | Cross-agent knowledge discovery |
| No concept of mission lifecycle | Field creation, seeding, injection, and destruction tied to mission workflow |
| No agent-callable tools | Platform tools (inject, query, stability) callable by agents during execution |
| No deduplication mechanism | Content-hash dedup with reinforce-on-collision |
| No co-access bonus | Hebbian co-access strengthening between co-retrieved patterns |
| No stability measurement | Field convergence metric for telemetry and experiment control |
The key differentiator: Mitra solves "how does one agent remember better?" This invention solves "how do multiple agents share a brain?"
4. Detailed Description of the Invention
4.1 System Overview
The system creates a shared semantic vector field for each multi-agent mission. The field is implemented as a vector database collection (Qdrant) where each point represents a knowledge pattern contributed by an agent.
4.2 Pattern Structure
Each knowledge pattern in the field consists of (as referenced in Sections 4.3–4.7): the content string ("{key}: {value}"), its high-dimensional embedding vector, a stored strength, an access_count, a last_accessed timestamp, a SHA-256 content_hash, the contributing agent_id, and a created_at timestamp.
4.3 Field Lifecycle
The field is tied to a mission's lifecycle:
1. Creation — When a mission begins, a new Qdrant collection is created. Payload indexes are built for content_hash (keyword), agent_id (integer), and created_at (keyword). If initial data exists (e.g., the mission brief), it is seeded into the field at strength 1.0 with agent_id = 0.
2. Injection — As each agent completes a task, its output is automatically injected into the field by the mission coordinator. Agents may also inject patterns directly during execution via the platform_field_inject tool.
3. Query — Agents query the field by semantic meaning via platform_field_query. Results are ranked by resonance (Section 4.5), not by insertion time or agent identity.
4. Destruction — When the mission reaches a terminal state (completed, failed, cancelled), the field collection is deleted. A tick-based cleanup process garbage-collects fields for terminal missions that were not cleaned up synchronously.
4.4 Temporal Decay
Every pattern's effective strength decays exponentially from its last access time:

S(t) = S_0 * exp(-lambda * t) * B(n)

where:

- S_0 is the initial injection strength (typically 1.0)
- lambda is the decay rate (default 0.1, yielding a half-life of ~6.93 hours)
- t is the age in hours since last access
- B(n) is the access boost function (Section 4.6)
Decay is not applied at write time — the stored strength remains constant. Decay is computed at query time from the elapsed time since last_accessed. This means patterns are never destructively modified by the passage of time; they simply score lower when queried later.
Verified behavior:
| Age (hours) | Expected S(t) | Measured S(t) |
|---|---|---|
| 0 | 1.0000 | 1.0000 |
| 1 | 0.9048 | 0.9048 |
| 7 (~half-life) | 0.4966 | 0.4966 |
| 24 | 0.0907 | 0.0907 |
| 48 | 0.0082 | 0.0082 |
| 72 | 0.0007 | 0.0007 |
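The decay and boost arithmetic above can be sketched in a few lines of Python (a minimal illustration; function and constant names are ours, not the Automatos implementation):

```python
import math

DECAY_RATE = 0.1          # lambda, per hour -> half-life ln(2)/0.1 ~= 6.93 h
ACCESS_BOOST_STEP = 0.05  # each access adds 5% (Section 4.6)
ACCESS_BOOST_CAP = 2.0    # cap reached at 20 accesses

def access_boost(n: int) -> float:
    """B(n): multiplier letting frequently accessed patterns resist decay."""
    return min(1.0 + ACCESS_BOOST_STEP * n, ACCESS_BOOST_CAP)

def decayed_strength(s0: float, age_hours: float, access_count: int = 0) -> float:
    """S(t) = S_0 * exp(-lambda * t) * B(n), evaluated at query time only."""
    return s0 * math.exp(-DECAY_RATE * age_hours) * access_boost(access_count)

# Reproduce the verified-behavior table for an unaccessed pattern (n = 0).
for t in (0, 1, 7, 24, 48, 72):
    print(f"{t:>2}h  S(t) = {decayed_strength(1.0, t):.4f}")
```

Because decay is evaluated lazily from last_accessed, the stored strength never changes with the passage of time; only the query-time score does.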
4.5 Resonance Scoring
When an agent queries the field, raw cosine similarity results are transformed into resonance scores:

resonance = cos^2(theta) * S(t)

where cos(theta) is the cosine similarity between the query embedding and the pattern embedding, and S(t) is the temporally-decayed strength.
Why squared cosine: Squaring the cosine similarity amplifies high-similarity matches and suppresses noise. A pattern with cosine 0.9 gets resonance contribution 0.81 (81%) while a pattern with cosine 0.5 gets only 0.25 (25%). This creates sharper relevance discrimination than raw cosine scoring.
Archival filtering: Patterns whose S(t) falls below the archival threshold (default 0.05) are excluded from results entirely. They remain in the collection but are effectively archived.
Over-fetching: The system queries for 3 * top_k raw results from the vector database, then applies decay filtering and resonance re-ranking before returning the final top_k results. This compensates for patterns that will be filtered out by the archival threshold.
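The full query path — over-fetch, query-time decay, archival filtering, resonance re-ranking — can be sketched as follows. The candidate-tuple format and names are illustrative assumptions, not the actual Qdrant adapter interface:

```python
import math

ARCHIVAL_THRESHOLD = 0.05  # decayed strengths below this are hidden (Section 4.5)
DECAY_RATE = 0.1

def rerank(candidates, top_k):
    """candidates: (cosine, stored_strength, age_hours) tuples, e.g. the
    3 * top_k raw hits over-fetched from the vector database."""
    scored = []
    for cosine, strength, age_hours in candidates:
        s_t = strength * math.exp(-DECAY_RATE * age_hours)  # query-time decay
        if s_t < ARCHIVAL_THRESHOLD:
            continue  # effectively archived: kept in storage, never returned
        scored.append((cosine ** 2 * s_t, cosine, s_t))  # resonance = cos^2 * S(t)
    scored.sort(reverse=True)
    return scored[:top_k]

# A stale 72-hour-old pattern is filtered even with the best cosine score.
hits = [(0.9, 1.0, 0.0), (0.5, 1.0, 0.0), (0.95, 1.0, 72.0)]
print(rerank(hits, top_k=2))
```

Note how the third candidate, despite the highest cosine similarity, falls below the archival threshold and never reaches the caller — exactly why the 3x over-fetch is needed.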
4.6 Hebbian Reinforcement
Patterns that are retrieved in query results are reinforced — their access_count is incremented and last_accessed is updated to the current time. This has two effects:
Access boost:

B(n) = min(1 + 0.05 * n, 2.0)

where n is the access_count. Each access adds 5% to the effective strength multiplier, capped at 2.0x (reached at 20 accesses). This means frequently-queried patterns resist decay up to twice as effectively as unaccessed patterns.
Co-access bonus: When multiple patterns are retrieved together in a single query result set, each pattern receives an additional boost to its stored strength. The bonus grows with k, the number of co-retrieved patterns, and the resulting strength is clamped at the reinforcement cap (default 2.0). This implements a "neurons that fire together wire together" dynamic: patterns that are frequently relevant to the same queries become structurally associated through mutual strengthening.
Unlike the access boost (which operates on the access_count field at query time), the co-access bonus mutates the stored strength value. This is an intentional design choice: co-access represents an emergent relationship between patterns, not just individual popularity.
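A sketch of the co-access update follows. The exact bonus formula is not specified in this document, so the per-pattern increment below is an ASSUMED placeholder chosen purely for illustration; only the clamping at the reinforcement cap and the mutation of stored strength are taken from the text:

```python
CO_ACCESS_INCREMENT = 0.01  # ASSUMED increment, not from the implementation
REINFORCEMENT_CAP = 2.0     # default cap (Section 4.6)

def apply_co_access_bonus(strengths):
    """Mutate stored strengths for a set of k co-retrieved patterns."""
    k = len(strengths)
    if k < 2:
        return strengths  # a lone result receives no co-access bonus
    bonus = CO_ACCESS_INCREMENT * (k - 1)  # assumed form: grows with k
    return [min(s + bonus, REINFORCEMENT_CAP) for s in strengths]

# Three patterns retrieved together; the near-cap one is clamped at 2.0.
print(apply_co_access_bonus([1.0, 0.8, 1.99]))
```

Unlike the query-time access boost, this update writes back to storage, which is the intentional design choice described above.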
4.7 Content Deduplication
Before injecting a new pattern, the system computes the SHA-256 hash of the content string ("{key}: {value}") and checks for an existing pattern with the same hash via a payload-indexed filter query.
If a duplicate is found, instead of creating a new point, the existing pattern is reinforced — its access_count is incremented and last_accessed is updated. This means repeated injection of the same knowledge strengthens the existing pattern rather than cluttering the field with duplicates.
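The dedup path can be sketched as follows; the in-memory dict stands in for the payload-indexed Qdrant lookup, and function names are illustrative:

```python
import hashlib

def content_hash(key: str, value: str) -> str:
    """SHA-256 of the canonical content string '{key}: {value}'."""
    return hashlib.sha256(f"{key}: {value}".encode("utf-8")).hexdigest()

field = {}  # content_hash -> pattern payload (stand-in for the collection)

def inject(key: str, value: str, now: float) -> dict:
    h = content_hash(key, value)
    existing = field.get(h)
    if existing is not None:
        # Reinforce-on-collision: no new point is created.
        existing["access_count"] += 1
        existing["last_accessed"] = now
        return existing
    field[h] = {"content": f"{key}: {value}", "strength": 1.0,
                "access_count": 0, "last_accessed": now}
    return field[h]

inject("finding", "Python dominates ML tooling", now=0.0)
p = inject("finding", "Python dominates ML tooling", now=1.0)  # duplicate
print(len(field), p["access_count"])  # one stored pattern, reinforced once
```

Re-injecting identical knowledge thus resets the pattern's decay clock and raises its boost instead of cluttering the field.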
4.8 Boundary Permeability
Each pattern's initial strength is scaled by a configurable boundary_permeability coefficient (default 1.0):

S_0 = injected_strength * boundary_permeability
This parameter is designed for future experimentation: reducing permeability below 1.0 would weaken newly-injected patterns, simulating a "barrier to entry" for new knowledge. At 1.0, all injections enter at full strength.
4.9 Field Stability Measurement
The system computes a convergence metric for each field:

stability = 0.6 * avg_strength + 0.4 * organization

where:

- avg_strength is the mean S(t) across all patterns in the field
- organization = max(0, 1 - (stddev / mean)) measures how uniform the strength distribution is
Interpretation:
Stability = 0: Empty field or all patterns fully decayed
Stability rising: Active agents reinforcing shared patterns
Stability falling: Field aging without new contributions, or injection of many disparate patterns
High organization: Patterns have similar strengths (agents referencing the same things)
Low organization: Wide variance in strengths (mix of fresh and stale, popular and ignored)
This metric is exposed via the platform_field_stability tool, enabling agents and the mission coordinator to monitor field health. It also serves as an experimental signal for the A/B comparison against message-passing (PRD-107).
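The stability computation is small enough to sketch directly (names are illustrative); note that it reproduces the edge values reported in Test 6 of Section 6.2 — an empty field scores 0.0 and a single fresh pattern scores 1.0:

```python
import statistics

def field_stability(decayed_strengths: list[float]) -> float:
    """stability = 0.6 * avg_strength + 0.4 * organization."""
    if not decayed_strengths:
        return 0.0  # empty field or all patterns fully decayed
    mean = statistics.mean(decayed_strengths)
    if mean == 0.0:
        return 0.0
    stddev = statistics.pstdev(decayed_strengths)
    organization = max(0.0, 1.0 - stddev / mean)
    return 0.6 * mean + 0.4 * organization

print(field_stability([]))          # empty field
print(field_stability([1.0]))       # one fresh pattern
print(field_stability([1.0] * 11))  # many similar fresh patterns
```

Mixing fresh and heavily-decayed strengths lowers both terms at once: the mean falls and the coefficient of variation rises, which is the "falling stability" signature described above.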
4.10 Abstract Interface (Ports and Adapters Pattern)
The field implementation sits behind an abstract interface (SharedContextPort) with four methods:
This enables A/B testing: the same mission can run against VectorFieldSharedContext (this invention) or RedisSharedContext (conventional message-passing baseline) with identical calling code.
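A sketch of what such a port might look like follows. The document does not list the four method names or signatures, so everything below is an ASSUMED illustration of the ports-and-adapters shape, mirroring the create/inject/query/destroy lifecycle of Section 4.3:

```python
from abc import ABC, abstractmethod
from typing import Optional

class SharedContextPort(ABC):
    """Abstract port; method names/signatures are assumptions, not the real API."""

    @abstractmethod
    async def create(self, field_id: str, initial_data: Optional[dict] = None) -> None:
        """Provision the shared context (e.g., a Qdrant collection)."""

    @abstractmethod
    async def inject(self, field_id: str, key: str, value: str, agent_id: int) -> dict:
        """Add a knowledge pattern contributed by an agent."""

    @abstractmethod
    async def query(self, field_id: str, query: str, top_k: int = 5) -> list[dict]:
        """Retrieve patterns ranked by relevance."""

    @abstractmethod
    async def destroy(self, field_id: str) -> None:
        """Tear down the shared context at mission end."""

# Both VectorFieldSharedContext and RedisSharedContext would implement this
# port, letting the coordinator A/B the two backends with identical call sites.
```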
5. Integration Architecture
5.1 Mission Coordinator Integration
The field is managed by the mission coordinator service (CoordinatorService), which orchestrates multi-agent missions:
- _create_mission_field(db, run) — Called when a mission starts or is approved. Creates the Qdrant collection and stores the field_id in the mission run's config JSONB column.
- _inject_task_output_into_field(run, task, agent_id) — Called after each task completes verification. Injects the task output into the field so downstream agents can discover it.
- _destroy_mission_field(run) — Called when a mission reaches a terminal state.
- _cleanup_terminal_fields(db) — Called on each coordinator tick to garbage-collect fields for completed/failed missions.
5.2 Agent-Callable Platform Tools
Three platform tools allow agents to interact with the field during task execution:
| Tool | Parameters | Behavior |
|---|---|---|
| platform_field_query | field_id, query, top_k | Returns resonance-ranked patterns |
| platform_field_inject | field_id, key, value | Injects a pattern (max 4000 chars) with dedup |
| platform_field_stability | field_id | Returns stability metrics |
These are registered through the platform's standard tool discovery pipeline and are available to any agent assigned to a mission with an active field.
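As an illustration of the agent-facing surface, a tool invocation might carry a payload shaped like the following. The envelope format and the example field_id/values are assumptions — the platform's actual tool-calling schema is not specified in this document:

```python
# Hypothetical tool-call payloads matching the parameter table above.
query_call = {
    "tool": "platform_field_query",
    "arguments": {
        "field_id": "a1b2c3",               # assumed example id
        "query": "EU AI Act risk tiers",    # semantic query, not a key lookup
        "top_k": 5,
    },
}

inject_call = {
    "tool": "platform_field_inject",
    "arguments": {
        "field_id": "a1b2c3",
        "key": "analysis_summary",
        "value": "High-risk systems require conformity assessment.",
    },
}

# The inject tool enforces a 4000-character limit on the injected value.
assert len(inject_call["arguments"]["value"]) <= 4000
print(query_call["tool"], inject_call["tool"])
```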
6. Experimental Verification
6.1 Unit Tests (57 tests, all passing)
Mocked Qdrant client and embedding manager. Tests cover:
Decay mathematics (11 tests): Zero age returns full strength, half-life accuracy at 6.93h, access boost formula correctness, cap at 2.0x, boundary values, scaling with initial strength, combined decay + boost, zero strength edge case, large age approaches zero.
Context lifecycle (8 tests): UUID generation, collection creation, payload index creation, initial data seeding, destruction, error handling on missing collection.
Injection (9 tests): Embedding generation, payload structure, deduplication by content hash, reinforcement on duplicate injection, boundary permeability scaling.
Query with resonance (9 tests): Resonance = cosine^2 * decayed_strength (verified to rel_tol=1e-3), archival threshold filtering, result sorting, top_k limiting, required output fields, 3x over-fetch factor, embedding of query string.
Hebbian reinforcement (7 tests): Batch retrieval, access count increment, last_accessed update, co-access bonus computation, no reinforcement when no results.
Stability measurement (7 tests): Empty field returns zero, single-pattern stability, uniform vs varied strengths, organization metric, active/decayed pattern counting.
Internal helpers (6 tests): Content hash lookup, single-pattern reinforcement, find-by-hash with filter construction.
6.2 Integration Stress Tests (16 assertions, all passing)
Real Qdrant instance (in-memory mode), real vector computations, real mathematical assertions. No mocks.
Test 1 — Resonance ranking is correct: Injected 5 patterns about different programming languages. Queried "What programming language is best for machine learning?" Python ML pattern ranked in top 2. Go concurrency correctly ranked last (negative cosine score).
Test 2 — Temporal decay works: Two patterns with identical content, one fresh (0h) and one stale (24h). Fresh pattern: decayed strength 1.0000, resonance 0.044834. Stale pattern: decayed strength 0.0907, resonance 0.000242. 91% decay over 24 hours confirmed.
Test 3 — Hebbian reinforcement changes ranking: Two patterns aged 6 hours: "popular" (10 accesses) and "lonely" (0 accesses). Popular: strength 1.5000. Lonely: strength 0.5488. Reinforcement successfully counteracts decay.
Test 4 — Archival threshold filters dead patterns: 72-hour-old pattern: decayed strength 0.000747, well below threshold of 0.05. Pattern effectively archived.
Test 5 — 50-agent stress test: 50 agents, each injecting 3 findings = 150 patterns total. 100 queries executed in 0.02 seconds (~5,000 queries/sec). No performance degradation.
Test 6 — Field stability evolution: Empty field: stability 0.0. One fresh pattern: stability 1.0. Eleven similar fresh patterns: stability 1.0 (high organization). Mixed ages (added 5 patterns aged 20-60h): stability dropped to 0.5667 (avg strength fell to 0.7008 due to heavily-decayed old patterns).
Test 7 — Cross-agent visibility (the telephone game proof): Agent A injected 10 research findings. Agent B queried, accessed only findings 0-2, injected its own analysis referencing only those 3. Agent C queried "research finding about subtopic 7" — found research_7 in the results, despite Agent B never mentioning it. This directly demonstrates that the shared field eliminates the telephone game: information is discoverable by meaning, not filtered by intermediate agents.
6.3 Integration Demo
Three-agent EU AI Act compliance scenario (Researcher, Analyst, Writer) demonstrating full workflow: injection, cross-agent query, resonance ranking, Hebbian reinforcement of accessed patterns, temporal decay table, and stability measurement.
7. Claims of Novelty
The following aspects of this system are believed to be novel in combination, based on review of prior art as of March 21, 2026:
Claim 1 — Multi-Agent Shared Semantic Field
A method for coordinating multiple LLM agents comprising: creating a shared vector embedding space (field) for a mission; each agent injecting knowledge patterns as high-dimensional vector points with metadata into the shared field; and each agent querying the shared field by semantic similarity to retrieve knowledge contributed by any agent, without requiring explicit message passing between agents.
Claim 2 — Resonance-Based Retrieval Scoring
A method for ranking query results in a shared agent field, where the resonance score is computed as the squared cosine similarity between query and pattern embeddings multiplied by the temporally-decayed strength of the pattern, such that both semantic relevance and temporal recency contribute to ranking.
Claim 3 — Hebbian Co-Access Reinforcement in Multi-Agent Fields
A method for strengthening associative relationships between knowledge patterns in a shared field, comprising: incrementing each retrieved pattern's access count; updating each retrieved pattern's last-accessed timestamp; and when multiple patterns are co-retrieved in a single query, applying a co-access strength bonus to each, implementing a "fire together, wire together" dynamic across agent-generated knowledge.
Claim 4 — Content-Hash Deduplication with Reinforce-on-Collision
A method for preventing duplicate patterns in a shared agent field, comprising: computing a cryptographic hash of pattern content before injection; checking for an existing pattern with the same hash; and if a duplicate is found, reinforcing the existing pattern (incrementing access count and updating timestamp) instead of creating a new entry.
Claim 5 — Field Stability Metric for Multi-Agent Convergence
A method for measuring the convergence state of a shared agent field, comprising: computing the mean temporally-decayed strength of all patterns; computing the organization score as one minus the coefficient of variation of pattern strengths; and combining these into a weighted stability metric that indicates whether the field is converging (agents reinforcing shared understanding) or diverging (stale, fragmented knowledge).
Claim 6 — Mission-Bound Field Lifecycle
A system for managing shared agent fields within a task orchestration pipeline, comprising: automatic field creation when a mission begins; automatic injection of task outputs into the field as tasks complete; provision of agent-callable tools for direct field interaction during task execution; and automatic field destruction when the mission reaches a terminal state.
8. Differentiation Summary
| Feature | Blackboard (1985) | Stigmergy | Mitra (2026) | This Invention |
|---|---|---|---|---|
| Shared workspace | Yes (symbolic) | Yes (spatial) | No (per-agent) | Yes (semantic vectors) |
| Multi-agent | Yes | Yes | Single-agent benchmarks | Yes (production multi-agent) |
| Semantic retrieval | No (rule matching) | No (spatial proximity) | Yes (embedding search) | Yes (embedding + resonance) |
| Temporal decay | No | Yes (pheromone evaporation) | Yes (PDE-based) | Yes (exponential, query-time) |
| Usage reinforcement | No | Yes (pheromone deposit) | No explicit Hebbian | Yes (access boost + co-access) |
| Resonance scoring | No | No | Diffusion-based | cos^2 x decayed_strength |
| Content dedup | No | N/A | No | Yes (SHA-256 + reinforce) |
| Stability metric | No | No | No | Yes (avg_strength x 0.6 + org x 0.4) |
| Mission lifecycle | No | No | No | Yes (create/seed/inject/destroy) |
| Agent-callable tools | No | No | No | Yes (query/inject/stability) |
| LLM agent coordination | No (expert systems) | No (swarm robotics) | No (memory benchmark) | Yes (production orchestrator) |
9. Theoretical Foundations
This invention draws from three established scientific frameworks, applying their principles to a novel domain (LLM multi-agent coordination):
Dynamic Neural Field Theory (Amari, 1977) — Continuous activation fields with decay, lateral inhibition, and attractor formation. Our resonance scoring, temporal decay, and stability measurement directly parallel neural field dynamics.
Stigmergy (Grasse, 1959) — Indirect coordination through environmental traces that decay and are reinforced by usage. Our shared field is a digital stigmergic medium where the "pheromones" are semantic vector embeddings.
Hebbian Learning (Hebb, 1949) — "Neurons that fire together wire together." Our co-access bonus implements this principle: patterns retrieved together in response to the same query strengthen each other.
The synthesis of these three frameworks into a unified system for LLM agent coordination is, to the best of our knowledge, novel.
10. Reduction to Practice
10.1 Implementation Artifacts
| Component | File | Lines | Notes |
|---|---|---|---|
| Abstract interface | core/ports/context.py | 63 | SharedContextPort ABC |
| Field implementation | modules/context/adapters/vector_field.py | 357 | Core invention |
| Coordinator integration | services/coordinator_service.py | +143 | Mission lifecycle |
| Tool definitions | modules/tools/discovery/actions_field.py | 48 | 3 ActionDefinitions |
| Tool handlers | modules/tools/discovery/handlers_field.py | 151 | 3 handler functions |
| Tool dispatch | modules/tools/discovery/platform_executor.py | +3 | Handler registration |
| Keyword routing | consumers/chatbot/auto.py | +12 | Agent tool discovery |
| Configuration | config.py | +8 | 8 environment variables |
| Dependencies | requirements.txt | +1 | qdrant-client>=1.12.0 |
| Unit tests | tests/test_vector_field.py | 763 | 57 tests |
| Stress tests | tests/demo_field_stress.py | 403 | 16 assertions |
| Integration demo | tests/demo_field.py | 275 | 3-agent scenario |
10.2 Evidence Chain
| Date | Event | Artifact |
|---|---|---|
| 2026-03-15 | PRD-108 specification completed | docs/PRDS/108-MEMORY-FIELD-PROTOTYPE.md |
| 2026-03-21 03:23:25 UTC | First commit with working implementation | 674474722ac998c06a016cda4c2e5d05f9fbda68 |
| 2026-03-21 | 57 unit tests passing | tests/test_vector_field.py |
| 2026-03-21 | 16/16 integration assertions passing | tests/demo_field_stress.py |
| 2026-03-21 | This technical disclosure document | docs/PRD-108-TECHNICAL-DISCLOSURE.md |
10.3 Configuration Defaults

The defaults below, each stated elsewhere in this document, correspond to the eight environment variables added in config.py:

- Decay rate lambda: 0.1 per hour (half-life ~6.93 hours)
- Archival threshold: 0.05
- Access boost: 0.05 per access
- Reinforcement cap: 2.0
- Boundary permeability: 1.0
- Query over-fetch factor: 3x top_k
- Maximum injection length: 4000 characters
- Stability weights: 0.6 (avg strength) / 0.4 (organization)
11. Future Work
Expanded A/B Comparison — Beyond the preliminary controlled A/B demonstration already documented in the implementation and specification materials, a production-grade evaluation remains future work: the same mission executed with Redis message-passing (PRD-107) and the Vector Field (this invention), measured on context quality, task accuracy, token efficiency, and latency across a broader set of mission types and production-like runs.
Cross-Mission Field Persistence — Allowing fields from completed missions to be queried by future missions, creating organizational memory.
Adaptive Decay Rate — Adjusting lambda based on mission urgency or field stability.
Attention-Weighted Injection — Using the querying agent's task description to weight which aspects of an injection are most relevant.
Field Visualization — Real-time 2D/3D projection of field state showing pattern clusters, decay fronts, and reinforcement hotspots.
12. References
Amari, S. (1977). "Dynamics of pattern formation in lateral-inhibition type neural fields." Biological Cybernetics, 27:77-87.
Grasse, P.P. (1959). "La reconstruction du nid et les coordinations interindividuelles chez Bellicositermes natalensis et Cubitermes sp." Insectes Sociaux, 6:41-80.
Hebb, D.O. (1949). The Organization of Behavior. Wiley.
Hayes-Roth, B. (1985). "A blackboard architecture for control." Artificial Intelligence, 26(3):251-321.
Mitra, S. (2026). "Field-Theoretic Memory for AI Agents: Continuous Dynamics for Context Preservation." arXiv:2602.21220.
Dorigo, M., Maniezzo, V., & Colorni, A. (1996). "Ant System: Optimization by a Colony of Cooperating Agents." IEEE Transactions on Systems, Man, and Cybernetics, 26(1):29-41.
This document constitutes a technical disclosure of the invention described herein. It is intended to establish prior art, support potential patent filings, and serve as a complete record of the invention's design, implementation, and verification.
Prepared by Gerard Kavanagh with technical assistance from Claude (Anthropic), March 21, 2026.