The Missing Layer in AI Context Graphs: Collaboration and Process Flow
Foundation Capital's recent piece on context graphs named something important: enterprise AI needs systems of record for decisions, not just for current state. The decision traces that currently live in Slack threads, deal desk conversations, and inside people's heads - the exceptions, overrides, and precedents that actually govern how work gets done - are essential infrastructure for agentic AI.
This is a meaningful step forward. But as AI moves from individual agent tasks to genuine multi-stakeholder collaboration, is there another dimension of context that this framing doesn't yet capture?
What Context Graphs Get Right
The core insight from the piece is powerful: decision traces capture the "why" behind outcomes, while cross-system context surfaces valuable information no single tool can see on its own. When precedent becomes searchable, agents can learn from history rather than starting from scratch every time.
This is necessary AI infrastructure. But it is built on an assumption worth examining.
The Hidden Assumption
Context graphs, as currently framed, model agents as individual decision-makers with better memory. An agent encounters a situation, queries the context graph for relevant precedents, and makes a more informed decision.
But much of the valuable work in organizations doesn't happen that way.
Consider an enterprise deal with custom terms: a 20% discount, NET-60 payment, and a custom SLA. The decision to approve those terms isn't made by a single actor consulting better context. It emerges from multiple stakeholders, each operating from a different reasoning framework:
- Sales needs this discount to close the deal and hit the quarter. They're optimizing for revenue now.
- Finance is pushing back. This discount sets a bad precedent, damages margins, and NET-60 hurts cash flow. They're optimizing for profitability and predictability.
- Customer Success knows this customer had product issues last quarter. They're wary of overcommitting on a custom SLA they may not be able to deliver. They're optimizing for retention and operational feasibility.
- Legal is concerned about the custom SLA language and what it exposes the company to. They're optimizing for risk.
Here's the thing: each of these perspectives is correct within its own framework. The tension is real, not merely informational. The approved terms don't come from any single stakeholder having better context; they emerge from negotiation and trade-offs between genuinely conflicting objectives.
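To make the conflict concrete, here is a minimal sketch of the deal above, with each stakeholder scoring the same terms against its own objective. All names, weights, and thresholds are illustrative assumptions, not a real approval model:

```python
# Hypothetical sketch: three stakeholders evaluate identical deal terms
# against different objectives. Weights and cutoffs are invented for
# illustration only.
from dataclasses import dataclass

@dataclass
class DealTerms:
    discount_pct: float  # e.g. 20.0 for a 20% discount
    payment_days: int    # e.g. 60 for NET-60
    custom_sla: bool

def sales_score(t: DealTerms) -> float:
    # Optimizing for revenue now: a deep discount that closes the deal scores well.
    return 1.0 if t.discount_pct >= 15 else 0.4

def finance_score(t: DealTerms) -> float:
    # Optimizing for margin and cash flow: penalize discounts and long payment terms.
    return max(0.0, 1.0 - t.discount_pct / 25 - (t.payment_days - 30) / 120)

def cs_score(t: DealTerms) -> float:
    # Optimizing for deliverability: wary of custom SLA commitments.
    return 0.3 if t.custom_sla else 0.9

terms = DealTerms(discount_pct=20.0, payment_days=60, custom_sla=True)
scores = {
    "sales": sales_score(terms),
    "finance": finance_score(terms),
    "customer_success": cs_score(terms),
}
# No single set of terms maximizes every score; the outcome has to be negotiated.
print(scores)
```

The point of the sketch is that no context lookup resolves this: Sales scores these terms at 1.0 while Finance scores them at 0.0, and both are applying their framework correctly.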
The Missing Layer
Context graphs capture what was decided and the decision traces that led to it. What they don't capture is how conflicting objectives got resolved.
The current discourse gives us excellent infrastructure for organizational memory. What's missing is the collaborative synthesis layer - the part that models how different reasoning frameworks constructively conflict and then converge.
Three concepts are absent from the current framing:
- Stakeholder maps: Who cares about this type of decision, and why? Not just who was involved, but what each role is structurally optimizing for.
- Goal and constraint profiles: What does success look like for each perspective? What constraints are they operating under? These aren't hidden in data; they're embedded in organizational structure.
- Collaboration patterns: How do these perspectives actually synthesize? What are the negotiation dynamics that produce outcomes? When does escalation happen, and to whom?
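One possible shape for the first two concepts is sketched below: a stakeholder map keyed by decision type, where each entry carries a goal and constraint profile plus an escalation path. Every field name and value here is an assumption for illustration, not a standard schema:

```python
# Illustrative sketch only: a stakeholder map with goal/constraint profiles.
# Field names and roles are hypothetical, not an established format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StakeholderProfile:
    role: str
    optimizes_for: list          # what success looks like for this role
    constraints: list            # limits the role structurally operates under
    escalates_to: Optional[str]  # where unresolved conflicts go

STAKEHOLDER_MAP = {
    "enterprise_discount_approval": [
        StakeholderProfile(
            role="sales",
            optimizes_for=["quarterly revenue", "deal velocity"],
            constraints=["quota deadlines"],
            escalates_to="cro",
        ),
        StakeholderProfile(
            role="finance",
            optimizes_for=["margin", "cash-flow predictability"],
            constraints=["discount ceiling", "payment-term policy"],
            escalates_to="cfo",
        ),
    ],
}

# Before proposing terms, an agent can look up who structurally cares
# about this decision type and what each role is optimizing for.
profiles = STAKEHOLDER_MAP["enterprise_discount_approval"]
print([(p.role, p.escalates_to) for p in profiles])
```

Note what this encodes that a decision trace doesn't: not who happened to approve a past deal, but what each role is structurally optimizing for, independent of any single decision.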
This isn't simply workflow orchestration, which sequences tasks and routes approvals. It's modeling how organizations actually make decisions that require balancing competing interests.
Context For Collaboration
That's the alternative framing: context for multi-agent systems isn't just about better individual recall; it's about how multiple actors with different roles coordinate and converge.
Organizational decisions emerge from collaboration. The value is in how different perspectives come together, not in any single participant having more information.
For agents to participate in real organizational work, they need context at multiple levels:
Static context is organizational common sense - the baseline any agent needs to operate. Domain knowledge, company structure, who reports to whom, documented processes, standard approval thresholds. This is onboarding material: what you'd give a new employee on day one.
Dynamic context is the envelope around a specific task that evolves as the process unfolds: the customer's history, the accumulated decisions and exceptions from this particular deal, the fact that Finance already pushed back once and escalated. This context flows with the task as it moves through the organization.
Learned context is what agents discover through execution - the unspoken rules, the informal relationships that create exceptions, the patterns that aren't documented anywhere. This layer compounds over time and becomes dynamic organizational knowledge that helps future agent instances navigate more effectively.
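The three layers described above can be sketched as separate stores merged per task. This is a minimal sketch under one assumption worth flagging: when the layers disagree, learned patterns override documented policy, and live task state overrides both. The names and precedence rule are hypothetical:

```python
# Minimal sketch of static / dynamic / learned context as layered stores.
# The merge precedence (dynamic > learned > static) is an assumption.
from dataclasses import dataclass

@dataclass
class TaskContext:
    static: dict   # org structure, policies, thresholds: day-one knowledge
    dynamic: dict  # state of this specific task; travels with it through the org
    learned: dict  # undocumented patterns discovered through execution

    def merged(self) -> dict:
        # Later dicts win on key collisions: learned norms beat written
        # policy, and the current task's state beats both.
        return {**self.static, **self.learned, **self.dynamic}

ctx = TaskContext(
    static={"discount_approval_threshold": 15},
    dynamic={"deal_id": "D-1042", "finance_pushback": True},
    # In practice, Finance escalates anything above 10% - a learned norm.
    learned={"discount_approval_threshold": 10},
)
merged = ctx.merged()
print(merged)
```

The design choice to let learned context override static context reflects the article's claim that unspoken rules, not documented thresholds, often govern how work actually gets done.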
All three layers matter, but the distinction matters most for multi-agent collaboration: agents don't just need to know what happened before; they need to understand the structural tensions they're operating within, and how those tensions typically get resolved.
Open Questions
If multi-agent collaboration requires more than just better memory, that leads to several interesting questions:
- How do we formally represent conflicting stakeholder objectives? Not just "who approved," but "what were they optimizing for"?
- What does negotiation look like in practice? Is it agents representing stakeholder perspectives working with each other? Agents facilitating human collaboration? Or some combination of both?
- How do we ensure learned collaboration patterns remain valid as organizations evolve?
- What governance frameworks prevent this layer from encoding biases or outdated practices?
Moving Forward
Context graphs are essential infrastructure. The work being done on decision traces, cross-system context, and organizational memory is moving us in the right direction.
But the AI agentic future requires more than giving individual agents better recall. It requires modeling organizations as collaborative systems where valuable outcomes emerge from the synthesis of conflicting perspectives, not just from smarter individual decisions.
Memory is necessary but not sufficient. Collaboration and process context are what unlock multi-agent orchestration at scale.