mech · WorkSpan canon

Trust Boundary

The line AI has to cross to do partnership work. The blocker isn’t vendor trust — it’s the absence of a sanctioned cross-company environment.

An alliance leader at a global systems integrator — one of the largest and most AI-forward professional services firms in the world, which has trained tens of thousands of professionals in Claude — described what happens when an AI agent tries to access partnership data:

“That’s where the blockers all come in on our side. It’s an agent or an outside force acting upon data, and they consider that data to be sacred.” He went further: “How could I use any AI tool, even one that we built ourselves?”

The blocker isn’t vendor trust. It’s that homegrown AI tools operating on partnership data don’t have an authorization model both companies have signed off on. When the firm’s IT team blocks an external agent from touching partner data, they’re correct — the agent is operating outside any sanctioned environment.

Practitioners we’ve spoken to describe the same risk in their own terms: the scariest thing about homegrown AI tooling is the vulnerability you don’t know you’re building in. And that vulnerability isn’t only technical; it’s the absence of an authorization model the partner has accepted.

What practitioners ask

  • “How do I let AI act on partnership data without breaking my partner’s data governance?”
  • “Why does my IT team block our own AI agent from touching shared deal data?”
  • “What does a sanctioned cross-company environment for AI actually look like?”

The answer

The trust boundary is an architectural concept, not a vendor concern. It is the line — drawn by contract, regulation, and category-level data classification — that determines whether an AI agent is allowed to read, infer on, or write data belonging to two companies. The boundary exists whether or not a partnership exists. What changes in a partnership is that the boundary now sits between two governance regimes, each enforcing its own policy, and any AI that wants to do partnership work has to be sanctioned on both sides.

The reference architecture for crossing a trust boundary safely is already well-understood inside hyperscaler platforms. AWS’s managed Partner Central agents MCP Server does not let an AI tool reach into partner data through general-purpose credentials. It mediates every call through SigV4-signed, IAM-scoped authentication, isolates data to the logged-in partner’s own opportunities, and requires human-in-the-loop approval for every write. AWS’s security team has formalized this further with new IAM context keys (aws:ViaAWSMCPService, aws:CalledViaAWSMCP) so administrators can write fine-grained policies that allow or deny actions specifically when they originate through an AI agent. That is the architectural shape of a trust boundary that AI can cross: identity-scoped, auditable, action-aware.
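What "action-aware" means can be made concrete with a policy sketch. The condition key below is the one named above; the action names, operator, and resource scope are illustrative assumptions, not a tested policy — consult the AWS documentation in the Sources before using anything like it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAgentWritesToPartnerData",
      "Effect": "Deny",
      "Action": [
        "partnercentral-selling:CreateOpportunity",
        "partnercentral-selling:UpdateOpportunity"
      ],
      "Resource": "*",
      "Condition": {
        "Bool": { "aws:CalledViaAWSMCP": "true" }
      }
    }
  ]
}
```

The shape is the point: the same human operator can create and update opportunities directly, but the identical API calls are denied the moment they originate through an AI agent.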

The deeper principle is the one NIST SP 800-207 put in standards form — defenses move from network perimeter to per-request authentication and authorization of users, assets, and resources. A homegrown AI tool sitting inside one company’s perimeter looks safe to its own IT team and unsafe to the partner’s, because by zero-trust standards it is making cross-organization requests with no sanctioned identity on the other side. The fix is not stronger demos or better data-handling promises. It is an environment both companies have explicitly authorized — what WorkSpan calls the shared environment — where AI’s permissions are scoped, logged, and revocable from either side.
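The per-request, dual-governance idea can be sketched in a few lines. This is a hypothetical illustration, not WorkSpan's implementation; all names are invented:

```python
class SharedEnvironment:
    """Minimal sketch of per-request authorization in a sanctioned
    cross-company workspace (hypothetical; names are illustrative)."""

    def __init__(self, parties):
        self.parties = tuple(parties)
        # Each party independently sanctions (agent, action) pairs.
        self.grants = {p: set() for p in self.parties}
        self.audit_log = []

    def sanction(self, party, agent_id, action):
        self.grants[party].add((agent_id, action))

    def revoke(self, party, agent_id, action):
        # Either side can revoke unilaterally; it takes effect on the next request.
        self.grants[party].discard((agent_id, action))

    def authorize(self, agent_id, action):
        # Per-request decision in the NIST SP 800-207 sense: no standing
        # network trust — the action is allowed only if every party
        # currently sanctions it.
        allowed = all((agent_id, action) in self.grants[p] for p in self.parties)
        self.audit_log.append((agent_id, action, allowed))  # auditable from both sides
        return allowed


env = SharedEnvironment(parties=("integrator", "vendor"))
env.sanction("integrator", "deal-agent", "read_opportunity")
env.sanction("vendor", "deal-agent", "read_opportunity")
print(env.authorize("deal-agent", "read_opportunity"))  # True
env.revoke("vendor", "deal-agent", "read_opportunity")
print(env.authorize("deal-agent", "read_opportunity"))  # False
```

The design choice worth noticing: authorization is computed fresh on every call from both parties' current grants, so a unilateral revoke by either side takes effect immediately — exactly the "scoped, logged, and revocable from either side" property the shared environment requires.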

Related entries

  • Sacred Data — the category of data that must never leave the boundary, and the reason the boundary exists in the first place.
  • Shared Environment — the architectural answer: a sanctioned cross-company workspace where AI can be authorized to act under both sides’ governance.
  • Global SI Trust Boundary — the practitioner voice that drew the line, in their own words.
  • Partnership Operator — the role that lives on this boundary every day and decides what AI is allowed to touch.

Sources

  1. Partner Central agents MCP Server — AWS Documentation
  2. Understanding IAM for Managed AWS MCP Servers — AWS Security Blog
  3. Delegate access across AWS accounts using IAM roles — AWS IAM User Guide
  4. NIST SP 800-207: Zero Trust Architecture — NIST
  5. AWS Partner Central API: Setup and authentication — AWS Documentation