A 2026 report on what AI is doing to the partnership function — beyond time savings, into category transformation. From WorkSpan and ten partner-revenue practitioners across the cloud, ISV, and SI ecosystem.
Public launch: Monday May 4, 2026 · 10:00 AM Pacific
Early access enabled. The full report unlocks for you Sunday May 3 at 6:00 PM Pacific.
Voices from the ecosystem
Featured contributors
Jay McBain
Principal Analyst · Canalys
Swati Moran
Partner Programs · Docusign
Vince Menzione
Founder · Ultimate Partner
Jason Mann
Partner Ecosystem · Gong
Joe Estes
Alliances · Boomi
Rob Moyer
Founder · BlueThread · Operator OS
Mayank Bawa
CEO & Co-founder · WorkSpan
Sam Gong
AI Go-To-Market · WorkSpan
Content
01 · Intro
May 2026 · WorkSpan · Partner Revenue Platform
Running AI-Native Partnerships
Almost every partnership team is using AI. The time savings are real. But the AI moment is bigger than personal productivity — the market underneath partner teams is shifting faster than the tools, and the role itself is being rewritten in real time.
01 · The AI Moment Is Bigger Than ChatGPT
02 · Three Bars AI Has to Clear
03 · Bar 1 — See the Live Deal, Not the Document
04 · Bar 2 — Execute, Don't Just Generate
05 · Bar 3 — Operate Across the Trust Boundary
06 · What It Looks Like When It Works
07 · A Higher Bar
9 Primary Calls
35 Concepts
$513B in Proof
01 · The Setup — what the headline measures
AI adoption in partnerships is near-universal. It saves a few hours a week.
The adoption is real. The use cases — meeting prep, knowledge search, document drafting — are genuinely useful. But the headline measures whether individuals in partnership roles got faster at their personal work. It does not measure whether the partnership function got transformed as a revenue driver. A few hours a week is a floor. Partnership teams deserve a higher bar than this.
02 · The Frame — three things AI has to do
"The CRM is the system of record. The right tools with the right mindset of partner manager — the AI becomes the system of action."
— Rob Moyer, BlueThread · 2026 interview
Moyer names the gap precisely. The lower-hanging-fruit AI use cases — drafting, prep, summaries — make individual operators faster. They don't change the function. The shift is to AI as a system of action: connected to the deal, authorized to execute, with the human in the loop on discernment, not data entry. For partnerships, that layer has three specific requirements general-purpose tools structurally cannot meet.
01 · See live state — The deal, not the document.
02 · Execute — Submit the referral. Don't draft it.
03 · Cross the boundary — Operate sanctioned, between two companies.
03 · Bar 1 — see the live deal, not the document
You can't give ChatGPT access to a live deal.
The data that defines the partnership lives in three places at once: a CRM record, a cloud partner portal, and the partner's own system — often out of sync. Across nine companies we found the same pattern: manual as default. The portal exists. The process exists. The data exists. Almost none of it is in a state where AI can act on it.
"We don't log into their partner portal. It's just shared through Slack or a spreadsheet."
— A partnerships leader at a frontier AI company · on ingesting Google partner data
Rob Moyer's standard for when a system counts as operational: under 60 seconds from trigger to first action. If finding the right partner requires a Slack message, the system is manual.
04 · Bar 2 — execute, don't just generate
The bottleneck isn't information. It's action latency.
Partner teams already know which deals need co-sell and which partners to activate. The motion stalls because the mechanical steps — referral submission, account matching, private offer creation, scheduling — each create enough friction that sellers don't do them. AI that "drafts" doesn't remove the friction. AI that acts does.
"Send account list to regional leads, wait for them to run manual lookups, receive data back, manually enter it into Salesforce. Two human handoffs per batch."
— A partner ops leader at a Fortune 100 industrial OEM · describing the P2B data workflow
100%
ClearScale referral automation across 350+ referrals → $5M partner-originated pipeline.
3,000%+
Boomi YoY AWS Marketplace growth — automation, not better drafting.
0
Manual handoffs in the WorkSpan referral motion. Submit, accept, sync.
05 · Bar 3 — operate across the trust boundary
A co-sell motion is two companies. AI inside one can't see the other.
"That's where the blockers come in. It's an agent or an outside force acting upon data, and they consider that data to be sacred. How could I use any AI tool, even one we built ourselves?"
— An alliance leader at a global systems integrator · tens of thousands of professionals trained in Claude
The blocker isn't vendor trust. It's the absence of a sanctioned cross-company environment — where data governance from both sides is enforced and AI is authorized to act because both parties signed off on the space. Single-company AI cannot satisfy this. By definition.
06 · Proof — what it looks like when it works
$513B
in shared pipeline executed in a sanctioned cross-company environment. Not inside any one organization's AI stack — between them. $196B already closed.
100%
ClearScale referral automation across 350+ referrals → $5M partner-originated pipeline.
3,000%+
Boomi YoY growth in AWS Marketplace revenue — not faster prep, fewer manual steps.
22 → 47%
Win-rate lift when cloud field sellers are activated in partnership engagements.
This is what AI adoption for partnerships actually looks like: sellers see partner intelligence in the opportunity record without leaving CRM; referrals are submitted, accepted, and synced automatically; partner managers work on strategy and exceptions, not data relay.
07 · Closing — the higher bar
Partnership teams deserve to be measured on what they actually produce.
Pipeline sourced. Co-sell conversion. Seller activation. Partner-attached win-rate lift. Those metrics require AI that can do things, that knows what's happening in live deals, and that can operate safely across the company boundary. That bar is achievable. Some companies are already hitting it.
"You don't win with one partner — you get outvoted. Even if you compete in the morning, you need to be best friends by the afternoon. This is truly a platform moment, and platforms are synonymous with partnerships."
— Jay McBain, Canalys / Omdia · 2026 interview
Coverage. Coordination. Cross-company execution. The architecture McBain is describing is the precondition for everything else. The question is whether the industry stops celebrating individual productivity and starts asking for more.
May 2026 · A WorkSpan Point of View
Running AI-Native Partnerships
Sam Gong
SVP of AI GTM, WorkSpan
Rob Moyer
Founder, BlueThread
Like digital-native or cloud-native shifts before, we're in a new era where products and companies born post-AI are shaped differently and behave differently than the ones that came before.
You can't compete by adding chatbots and agents to an outdated partner motion.
AI-native partnerships leverage AI to accelerate and scale partner execution to serve a market defined by rapid innovation and continuous value exchange.
2/3
Of tech now sold on subscription or consumption — partners forced beyond the point of sale
6.3
Partners in the average enterprise deal — co-sell is the convergence motion
22 → 47%
Win-rate lift when partner referral is submitted — the operator's prize
Chatbot adoption inside partnership organizations is now near-universal. The dominant tools are general-purpose language models — Claude, ChatGPT, Gemini — used mostly for meeting prep, knowledge search, and document drafting. The practitioners doing it well — building reusable workflow automations, running morning call-prep agents, eliminating hours of research — deserve credit.
But the benchmark this measures is whether individuals have gotten faster at the documentation tasks they were already doing. That is not the same thing as AI transforming partnerships as a revenue function. Faster call prep doesn't change how a co-sell motion runs, how sellers activate, or what happens between account overlap and closed revenue. The ceiling on time-saving AI is a few hours saved per week.
Partner operators set a higher bar for AI transformation.
The Market Has Already Moved: Co-Sell Is the Convergence Motion
When software was sold as perpetual licenses, the transaction was the relationship and the partner's job was to move product. Once the check cleared, the work was done.
That model is gone for two-thirds of the industry. Today technology is sold on subscription and consumed on usage. The transaction is now the beginning of the work, not the end of it. Customers don't commit upfront — they commit to the outcome, renewed every quarter.
Jay McBain has tracked this shift across the largest technology companies on earth. The customer relationship transitions post-close into a delivery phase requiring partner involvement — implementation, managed services, adoption, renewal, expansion, and now co-keeping the customer through every renewal cycle. Every partner type is being forced beyond the point of sale. The average enterprise deal now involves 6.3 partners coordinating around a single customer; on large deals, more than ten.
McBain catalogs thirteen distinct co-motions partners now run — co-marketing, co-development, co-sell, co-deliver, co-keeping, and the rest. Co-sell is the convergence motion, because co-sell is where the deal lives. McKinsey estimates roughly $80 trillion in annual ecosystem revenue by 2030. Partnerships are not a side function; they are how enterprise software actually gets sold.
JM
Interview
~2 min
Jay McBain · Canalys / Omdia · 4/23/26
"Two-thirds of tech is now consumed, not bought. Every partner type is forced beyond the point of sale. You don't win with one partner — you get outvoted. Even if you compete in the morning, you need to be best friends by the afternoon. This is a platform moment, and platforms are synonymous with partnerships."
Watch interview clip ›
The Partner Leader Becomes a Partner Operator
The shift to co-sell didn't just change how software is sold. It changed what partnership professionals actually have to do. The previous era's job description made internal sense — manage a portfolio of partners, maintain the relationship, administer the program, track certifications, issue MDF, approve deal registrations.
That role doesn't scale in a co-sell world. When the average enterprise deal involves 6.3 partners and the co-sell motion lives in deals opening and closing weekly, the relationship-manager model breaks. The work is fundamentally different: not managing a relationship over time, but deploying the right partner capability against the right deal at the right moment.
Rob Moyer calls this the death of the partner manager and the rise of the Partnership Operator. The operator doesn't maintain relationships — they run systems. Inputs, levers, and loops. Not portfolios and QBRs. The operator's job is to make the co-sell motion repeatable: trigger identification, partner matching, pre-call brief, meeting choreography, log-tag-track. And to know when the system is working — Rob's benchmark is under 60 seconds to find the right partner for a deal stuck at VP level. If it requires a Slack message, the system is manual, not operational.
Sales has known this shift for a decade — sellers spend over 70% of their time in non-selling activity. Partner managers now have the same problem. The new partner manager is focused on deal velocity, working pipeline, and helping the co-sell get to closed-won. The old one was focused on connections, lunch-and-learns, and certifications.
RM
Interview
~90 sec
Rob Moyer · BlueThread · 4/23/26
"Five years ago you could change over a year and get great results. In the world of AI, it's a 90-day sprint. The best partner managers will do multiple things — they're still driving revenue, but speed to execution is everything in a deal. The ones who can self-serve the lower-hanging tasks are the ones who win."
Watch interview clip ›
AI Has to Serve the Partnership Operator
The operator's job creates a specific calculus for AI. Time-saving hacks help individuals; they don't change the system. The co-sell motion runs across a CRM that doesn't see the partner's pipeline, a partner portal that doesn't see the seller's deal, and a shared environment that the AE has never logged into. AI that just summarizes documents inside one of those silos cannot move co-sell forward.
Rob Moyer puts the gap precisely:
"There's still a whole lot of people that are bringing their own AI to work, doing the lowest-hanging-fruit AI to make their job better. The actual AI that helps is more as part of a system. The CRM is the system of record. The right tools with the right mindset of partner manager — the AI becomes the system of action. Without it, it's just like time-saving AI hacks."
Moyer names the missing layer. Time-saving AI makes individual operators faster. System-of-action AI changes the function. The distinction is structural, not one of model capability — and for partnerships, the system layer has three specific requirements that general-purpose AI cannot meet. Three bars to clear before partnerships are actually transformed.
01 · Live Partner Context — AI that can apply your partner playbook to live deals
When a partner manager using ChatGPT has to provide the context on the account, the opportunity, and the partner relationship in every interaction, the human is working to make the agent effective. It's supposed to go the other way.
The research we gathered across nine companies reveals a consistent pattern we're calling manual as default: partnership workflows exist, but they run on Slack threads and spreadsheets, not structured platforms. A partnerships leader at a frontier AI company described their process for ingesting cloud partner data: "We don't log into their partner portal. It's just shared through Slack or a spreadsheet." A partnerships lead at a mid-market governance/compliance SaaS described account targeting with advisory firm partners as "we'll fill out an Excel sheet, download it to our desktop, send it via email for them to fill out on their end."
The portal exists. The process exists. The data exists. And almost none of it is in a state where AI can act on it, because the data is in Slack, in an email thread, in a spreadsheet someone downloaded to their desktop.
Copying and pasting every referral into ChatGPT is not the answer.
An effective AI-driven motion has a specific architecture, described by Rob Moyer and Chris Lavoie in the Co-Sell Engine: trigger identification → partner matching → pre-call brief → meeting choreography → CRM logging. Every step requires real-time knowledge of deal state, partner capability, and account context. Moyer's standard: "Under 60 seconds to find the right partner for a healthcare deal stuck at VP level. If it requires a Slack message to the partner team, the system is manual, not operational."
The current generation of practitioners is not operating at this standard yet. They are using AI to accelerate the research phase while the execution phase remains fully manual. That is not a criticism — it is an honest description of what's available to them. General-purpose AI tools don't have deal context. The platform layer that provides it is the missing piece.
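The Co-Sell Engine loop described above is concrete enough to sketch. Below is a minimal Python illustration, not WorkSpan's implementation: the `Deal` and `Partner` shapes, the matching rule, and the field names are all invented for the example. The point it makes is structural: when every input is machine-readable, the trigger-to-brief path runs in milliseconds, well inside the 60-second operational standard.

```python
import time
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Deal:
    account: str
    industry: str
    stage: int      # sales stage
    tcv: float      # total contract value

@dataclass
class Partner:
    name: str
    industries: List[str]
    win_rate_with_us: float  # prior joint win rate, 0-1

def match_partner(deal: Deal, partners: List[Partner]) -> Optional[Partner]:
    """Partner matching: best prior joint win rate in the deal's industry."""
    fits = [p for p in partners if deal.industry in p.industries]
    return max(fits, key=lambda p: p.win_rate_with_us) if fits else None

def run_trigger(deal: Deal, partners: List[Partner]) -> dict:
    """Trigger identification -> partner matching -> pre-call brief,
    timed against the under-60-seconds operational standard."""
    start = time.monotonic()
    partner = match_partner(deal, partners)
    brief = {
        "deal": deal.account,
        "partner": partner.name if partner else None,
        "rationale": (
            f"{partner.name} wins {partner.win_rate_with_us:.0%} with us "
            f"in {deal.industry}" if partner else "no partner fit, route to human"
        ),
    }
    # If producing this brief required a Slack round-trip, elapsed time would
    # blow past 60 seconds and the system would be manual, not operational.
    brief["operational"] = (time.monotonic() - start) < 60
    return brief
```

A real system would source `partners` from live coverage data and log the brief to the CRM; the sketch only shows that no step has to wait on a human message.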
JM
Interview
~1.5 min
Jason Mann · Gong · 5/1/26
"With Gong's understanding of every component of a deal cycle and WorkSpan's ability to take that context to our partners, we know exactly when to bring partners in. Our teams come together and just get to work."
Watch interview clip ›
Bar 1 · In Practice
Get the rep and the cloud partner's seller engaged on this $2M deal.
Three real tasks in a co-sell motion. The prompt stays the same — the difference is who has the context. In LLM-assisted mode, the human is the courier. In AI-native mode, the agent has it already and the human stays on for the high-leverage review.
Goal
Rep + cloud partner's seller engaged on the $2M deal — both prepared, on this week's calendar.
Task 01 · Recognition
Decide whether this deal is worth co-selling with our cloud partner
Qualify the opportunity, pick the right partner
The model is asked: Should we co-sell, and which partner?
To prompt it, the human leaves the screen to paste opp fields, tech stack, cloud signals, and TCV; sends the prompt; reads the response; switches back; types the decision into a CRM custom field.
If the rep is busy, the response sits in a chat tab and the deal never gets badged.
Should we co-sell this deal with our cloud partner?
- Customer: paste account, ARR, industry
- Tech stack: paste from notes
- Cloud signals: paste anything you can find
- Stage / TCV: paste
What's the rationale, and which partner do we engage?
→Typed into a CRM custom field
Same question, asked of an agent that already has the answer's inputs. The human reviews the call.
Should we co-sell this deal with our cloud partner?
- Customer: live opp record
- Tech stack: technographics feed
- Cloud signals: propensity model
- Stage / TCV: live opp record
What's the rationale, and which partner do we engage?
→Rationale and co-sell badge written onto the opportunity record; flagged for human approval if confidence is low.
What scales
Re-evaluates qualification continuously as the deal evolves; humans qualify once and forget.
Considers technographics, cloud propensity, and prior win patterns together; humans guess by stack alone.
Checks every opportunity in the pipeline — not just the obvious co-sell candidates.
Catches qualification drift when the customer's cloud spend signals shift.
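The recognition task above can be made concrete. A sketch, with invented field names and purely illustrative weights, of an agent that scores every opportunity on the combined signals (propensity, technographics, prior win patterns) and re-badges on each change event rather than qualifying once and forgetting:

```python
def cosell_score(opp: dict) -> float:
    """Blend the signals a human checks one at a time (weights illustrative)."""
    in_stack = 1.0 if opp.get("partner_tech_in_stack") else 0.0   # technographics feed
    return (0.4 * opp.get("cloud_propensity", 0.0)                # propensity model, 0-1
            + 0.3 * in_stack
            + 0.3 * opp.get("prior_partner_win_rate", 0.0))       # prior win patterns

def badge_pipeline(opps: list, threshold: float = 0.5) -> dict:
    """Evaluate every opportunity, not just the obvious candidates.
    Re-running this on every opp-changed event catches qualification drift."""
    return {
        opp["id"]: {
            "cosell": cosell_score(opp) >= threshold,
            "rationale": f"score={cosell_score(opp):.2f}, threshold={threshold}",
        }
        for opp in opps
    }
```

The rationale string is what gets written onto the opportunity record; low-confidence scores near the threshold are the ones flagged for human approval.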
Task 02 · Artifact
Brief the alliance team and draft the partner-portal referral
Documentation + submission
The model is asked: Draft a Slack brief and a portal-shaped referral.
To prompt it, the human pastes 12 deal fields, the portal schema, and a prior referral example; sends the prompt; reads the draft brief and draft referral; pastes the brief into Slack; switches to the portal and re-keys all 12 fields to submit.
A response that fits the schema doesn't help if the human never gets back to the portal.
Draft (1) a Slack brief to our cloud alliance team and (2) a referral submission for the cloud partner's portal.
- Deal: paste 12 fields
- Portal required schema: paste schema
- Standard rationale format: paste prior example
Keep the Slack message under 150 words; format the referral to fit the schema.
→Brief into Slack; referral re-keyed into the cloud partner's portal
Same prompt, against the live opp and the live portal schema. The human approves the submission.
Draft (1) a Slack brief to our cloud alliance team and (2) a referral submission for the cloud partner's portal.
- Deal: live opp record
- Portal required schema: live integration
- Standard rationale format: workspace template
Keep the Slack message under 150 words; format the referral to fit the schema.
→Brief posted into the alliance Slack channel; referral filed via API into the cloud partner's portal after a quick human approval.
What scales
Maps every required portal field accurately — including the obscure ones humans guess on.
References prior submissions for tone and rationale consistency; humans use whatever's top of mind.
Tags the alliance manager based on customer segment and partner coverage, not who you know.
Checks for duplicate submissions across channels before sending.
Task 03 · Cross-company handoff
Hand context to the cloud partner's seller before the meeting
Prime the other side of the relationship
The model is asked: Draft a context handoff email.
The human pastes deal context, customer stack, AE style, and relationship history; sends the prompt; reads the draft; pastes into email, edits, and sends — and has no visibility into whether the partner's seller read it.
When the email goes cold, the partner-side handoff goes cold too.
The cloud partner's seller just got assigned to our deal. Draft a context handoff email.
- Our deal context: paste
- Customer tech stack: paste
- Our AE's style + cadence: paste
- Relationship history: paste
Warm but efficient. Make the next step clear.
→Copied into an outbound email
Same handoff, with full live context and a path into the partner's system. The human reviews tone before send.
The cloud partner's seller just got assigned to our deal. Draft a context handoff.
- Our deal context: live opp record
- Customer tech stack: technographics
- Our AE's style + cadence: workspace profile
- Relationship history: CRM activity feed
Warm but efficient. Make the next step clear.
→Partner's seller primed inside their own CRM; read receipt + first reply surfaced back to the AE.
What scales
Includes the complete relationship history — not just the most recent calls.
Adapts tone based on the partner seller's working style; humans use their default.
Tracks read receipts and follow-up cues; nudges automatically if cold.
Surfaces the moment the partner seller acts on the deal back to our opportunity.
Active human time
~90 min
Leaving the screen, paste-prompt-paste
Outcome at scale
Inconsistent
Some deals get the full treatment; most don't
Quality at scale, unlocked
0 / 12 patterns
Things humans skip when they're busy
In LLM-assisted mode, the human is the courier of context — every prompt is a screen-leave, paste, paste, read, paste-back. The LLM only knows what you've fed it. AI-native flips it: the agent has the context already, runs the same prompts in place, and the human stays on for the calls only a human can make. Bar 1 is the difference between feeding the LLM context and the agent having context.
When the agent has context, the human stops being the courier — and starts doing the work that requires judgment.
02 · AI That Executes — Text generation doesn't change the economics of partnerships
Using chatbots to generate content, summarize information, or prepare documents makes them output tools. They produce text that a human then acts on.
Real AI adoption for partnerships requires AI that acts. Not "draft the referral email" — submit the referral. Not "here are the P2B scores to paste into Salesforce" — write the scores directly to the opportunity record. Not "here's a scheduling email to send" — analyze both calendars, match a time, send the invite, and confirm receipt.
This is agentic execution. Partner operators need co-sell machinery that cuts out the manual steps and catches the missed actions. When it works, partner teams know which deals need co-sell. They know which partners to activate. When it doesn't, the motion stalls because the actual mechanical steps — referral submission, account matching, private offer creation, meeting scheduling — are each small but collectively create enough friction that sellers don't do them.
The evidence from customer calls is unambiguous. A partner ops leader at a Fortune 100 industrial OEM described their propensity to buy (P2B) data workflow: send account list to regional leads, wait for them to run manual lookups, receive data back, manually enter it into a specific Salesforce field. Two human handoffs per batch. The barrier isn't knowledge — everyone knows the P2B data should be in Salesforce. The barrier is that every step requires a person to take an action.
Don't use AI to help with your administrative tasks.
Give AI your administrative tasks.
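The P2B workflow is the clearest example of what "give AI the task" means, because both handoffs are mechanical. A sketch, with a stubbed lookup service and a plain dict standing in for the CRM (the `p2b_score__c` field name is invented), showing the two human handoffs collapsed into one loop:

```python
def sync_p2b_scores(account_ids, crm, lookup):
    """Replace both human handoffs: fetch each P2B score and write it
    directly to the opportunity field; collect failures for exception review."""
    exceptions = []
    for acct in account_ids:
        try:
            score = lookup(acct)                  # was: email the regional lead, wait
            crm[acct]["p2b_score__c"] = score     # was: manually key into Salesforce
        except Exception as err:
            exceptions.append((acct, str(err)))   # humans handle exceptions only
    return exceptions
```

The batch that used to cost two handoffs per cycle becomes a scheduled job; the partner ops leader reviews only the exceptions list.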
Bar 2 · In Practice
Get a $2M deal referred to your cloud partner — and accepted, before the week ends.
Three real steps in a referral relay. In LLM-assisted mode, the model produces text, but the human still has to carry that output across systems to apply it. In AI-native mode, the agent doesn't just draft — it submits, with the human reviewing the highest-leverage step.
Goal
Referral submitted, accepted by the cloud partner, reflected in the CRM — in time to lift win rate from 22% to 47%.
Task 01 · Compose
Draft the referral so it fits the cloud partner's portal schema
Turn deal context into a portal-shaped artifact
The model produces a portal-shaped referral draft, in a chat tab.
The human still has to dispatch it: copy the draft, switch to the cloud partner's portal, paste and re-key the 12 fields into the portal form, then fix any validation errors.
If the seller is busy, the draft sits in a tab and the referral never goes out.
→Re-keyed into the cloud partner's portal
The agent generates the same draft against the live opportunity and the live portal schema, then files it after a quick human approval.
→Filed via integration directly into the cloud partner's portal
What scales
Maps every required portal field — including the obscure ones humans guess on.
Validates against the schema before submit; never bounces on a missing required field.
Drafts the rationale to match the partner's accepted patterns, not your nearest example.
Files within minutes of the deal qualifying — never lets a draft rot in a tab.
Task 02 · Dispatch
Submit the referral inside the cloud partner's portal
A system the LLM cannot reach
The model has no role here — the portal lives outside the chat tab.
The human is the integration: log into the cloud partner's portal, navigate to the referral form, enter 12 required fields, guess at the unclear ones, click submit, and retry on validation failures.
Most referrals stall here. This is the step the LLM era never solved.
→Manually submitted in the cloud partner's portal
The agent reads the live opp record, validates against the live portal schema, and calls the partner portal API. The human reviews exceptions only.
→Submitted via API into the cloud partner's portal
What scales
Submits every qualified deal — humans only submit the ones they remember.
Catches schema changes the day they ship; humans find out by getting bounced.
Tags submissions with deal-stage metadata so the partner's PAM gets useful context, not boilerplate.
Retries automatically on transient errors instead of leaving the referral half-submitted.
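The dispatch step above follows a simple contract: validate against the schema before submitting, retry transient failures, and escalate only when retries are exhausted. A minimal sketch, assuming a hypothetical `post` callable for the partner portal API and an invented required-fields list standing in for the 12-field schema:

```python
import time

class TransientError(Exception):
    """A retryable portal failure (timeout, 5xx)."""

def submit_referral(referral: dict, post, required_fields, retries=3, backoff=0.01):
    """Validate against the live schema, then file via API.
    Never bounces on a missing field; never leaves a referral half-submitted."""
    missing = [f for f in required_fields if not referral.get(f)]
    if missing:
        raise ValueError(f"blocked pre-submit, missing fields: {missing}")
    for attempt in range(retries):
        try:
            return post(referral)                  # partner portal API call
        except TransientError:
            time.sleep(backoff * 2 ** attempt)     # retry instead of stalling
    raise RuntimeError("retries exhausted, escalate to a human")
```

Because validation happens before the call, schema changes show up as a pre-submit exception on day one instead of a bounced referral discovered later.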
Task 03 · Close the loop
Reflect the partner's acceptance back into the CRM
The referral ID, the PAM, the next step
The model summarizes the partner acceptance email — referral ID, assigned PAM, suggested next step.
The human still has to carry it back: open the inbox, switch to the CRM, find the opportunity, type the referral ID and PAM into custom fields, set the next step, and log the activity.
If the rep forgets, partner-pipeline reporting goes blind for the rest of the deal.
→Typed into CRM custom fields
The agent listens for the acceptance event from the cloud partner's portal and writes the CRM record inline. The human is notified to take the next step.
→Referral ID, PAM, and next step written into the CRM opportunity
What scales
Updates the CRM the moment the partner accepts — no end-of-week catch-up.
Logs the activity with full context so partner-pipeline reporting never goes dark.
Notifies the AE with the partner PAM's preferred channel and timing.
Surfaces drift if the partner goes quiet — humans don't notice until QBR.
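The close-the-loop step reduces to an event handler. A sketch, assuming a hypothetical acceptance-event payload from the partner portal and a dict-backed CRM, of writing the record inline and notifying the AE the moment the event arrives:

```python
def on_partner_acceptance(event: dict, crm: dict, notify) -> None:
    """Write the acceptance into the opportunity as it arrives, so
    partner-pipeline reporting never goes dark on a forgotten update."""
    opp = crm[event["opportunity_id"]]
    opp["referral_id"] = event["referral_id"]
    opp["partner_pam"] = event["assigned_pam"]
    opp["next_step"] = event.get("suggested_next_step", "Schedule joint intro call")
    notify(owner=opp["owner"],
           message=f"Referral {event['referral_id']} accepted; "
                   f"PAM is {event['assigned_pam']}. Next: {opp['next_step']}")
```

The rep's only remaining job is the next step itself; the referral ID, PAM, and activity log are already in the CRM when the notification lands.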
Active human time
~25 min
Re-keying, switching, retrying
Outcome at scale
Inconsistent
Most referrals never get submitted
Quality at scale, unlocked
0 / 12 patterns
Things humans skip when they're busy
In LLM-assisted mode, the model produces text — and the human still has to carry it across screens to apply it. Every screen-leave is a chance for the referral to never get submitted. AI-native flips it: the agent doesn't just draft, it dispatches — and the human stays on for the highest-leverage approval. Bar 2 is the difference between getting an answer and getting a result.
When the agent can act, the referral doesn't stall on the human keystroke — and the win-rate lift actually shows up.
03 · Operate Across the Trust Boundary — Partnerships built for agentic execution between two companies
This is the bar the industry hasn't fully articulated yet.
Partnership AI isn't single-company AI. A co-sell motion by definition involves two companies with different CRMs, different security postures, different data governance policies, and different views of the same account. An AI agent that operates inside Company A's Salesforce instance cannot see what Company B knows about that account. It cannot submit a referral to Company B's portal. It cannot receive the acceptance back and update Company A's CRM record. It can only generate text.
The consequence of this architectural reality is visible in the primary research. An alliance leader at a global systems integrator — one of the largest and most AI-forward professional services firms in the world, which has trained tens of thousands of professionals in Claude — described what happens when an AI agent tries to access partnership data:
"That's where the blockers all come in on our side. It's an agent or an outside force acting upon data, and they consider that data to be sacred."
He went further: "How could I use any AI tool, even one that we built ourselves?"
The blocker is not vendor trust. It is the absence of a sanctioned shared environment — a space where both parties have explicitly authorized access, where data governance rules from both companies are enforced, and where AI can act on partnership data without becoming an "outside force."
This is architecturally different from anything general-purpose AI provides today. ChatGPT cannot be given field-level access to two companies' CRM records simultaneously. Claude cannot execute a referral workflow that requires authentication in two partner portals. Google Gemini cannot generate a private offer that passes compliance review at both the ISV and the cloud marketplace.
The risk practitioners feel — homegrown automations introducing vulnerabilities they didn't realize they were building in — is the same shape as the architectural one. It isn't a security problem alone. It's that DIY AI tools operating on partnership data do so without the authorization model that enterprise partnerships require. When the firm's IT team blocks an AI agent from touching partner data, they are correct. The agent is not operating in a sanctioned shared environment. It is an outside force.
What's required is a purpose-built execution layer: a space that exists between two companies — not inside one — with enterprise-grade security enforced at the object, team, and field level; with workflows that initiate in one company's CRM, execute in the partner's environment, and return results; and with AI agents that are authorized to act on behalf of both parties because both parties have sanctioned the environment.
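The entitlement piece of that layer is the easiest to sketch. A toy model (company names, fields, and the entitlement map are all invented) of field-level enforcement plus a bilateral audit trail: each read returns only what the owning side has authorized, and every access is attributed to the company that made it:

```python
def shared_read(record: dict, owner: str, viewer: str,
                entitlements: dict, audit: list) -> dict:
    """Return only the fields `owner` has authorized `viewer` to see,
    and attribute the access in the bilateral audit trail."""
    allowed = entitlements.get((owner, viewer), set())
    view = {k: v for k, v in record.items() if k in allowed}
    audit.append({"owner": owner, "viewer": viewer, "fields": sorted(view)})
    return view
```

An agent operating in this layer is not an "outside force": it reads through the same entitlement filter both companies signed off on, and the audit trail shows both sides exactly who touched what.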
Bar 3 · In Practice
Close the quarter on a live joint forecast — not a reconciled spreadsheet.
Two companies. One quarter. $50M of joint pipeline. Single-company AI is locked inside its own walls. The trust boundary is the line that decides whether the joint forecast is current and agreed — or stale and contested.
Goal
Both companies hold a live, reconciled joint pipeline forecast — by close-of-quarter, with no Excel handoff.
Company A · ISV
CRM · joint pipeline (4 deals)
Acme · $2.0M · Stage 4
Beacon · $4.5M · Stage 3
Cygnus · $1.8M · Stage 5
Drift · $3.1M · Stage 2
Internal AI · permitted on its own data
Cannot share with partner CRM
Authorized in shared layer
The wall · what each side does
Each side runs its own forecast on partial data. To reconcile, the work has to leave the building:
Export joint pipeline from each CRM
Legal scrub of sensitive fields, both sides
Secure email exchange under NDA
Manual VLOOKUP to find the overlap
PowerPoint reconciliation for the QBR
Quarterly cycle — half the deals have moved by then
Two forecasts, neither current, neither agreed.
The shared execution layer
Both companies authorize the layer. Each side's data-sovereignty rules are enforced field-by-field. Agents from either company read live joint pipeline state and surface the same forecast to both, with a bilateral audit trail.
→One live joint forecast, mutually agreed by close-of-quarter
What scales
Both sides see the same numbers — no reconciliation needed.
Stage shifts surface on the day they happen, not at the QBR.
Field-level entitlements respect each side's data-sovereignty policy.
Audit trail attributes every action to the originating company.
Company B · Cloud partner
CRM · joint pipeline (4 deals)
Acme · $2.2M · Stage 5
Beacon · $4.5M · Stage 3
Cygnus · $2.0M · Stage 5
Drift · $3.1M · Stage 2
Internal AI · permitted on its own data
Cannot share with partner CRM
Authorized in shared layer
Cycle time
~2 weeks
Export, scrub, email, VLOOKUP
Outcome at scale
Two forecasts, neither agreed
Quarterly only · always stale
Quality at scale, unlocked
0 / 4 patterns
Things only a sanctioned shared layer can do
In LLM-assisted mode, AI is locked inside each company. The trust boundary is a wall: data has to be exported, scrubbed, emailed, and reconciled — quarterly at best. AI-native flips it: a sanctioned execution layer where both sides authorize the work, field-level entitlements enforce each company's data policy, and agents from either side operate on one live joint pipeline. Bar 3 is the difference between AI inside a company and AI between companies.
When the boundary becomes a sanctioned execution layer, the joint forecast stops being a meeting — and becomes a shared state.
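The difference between a reconciled spreadsheet and shared state is one merge, run continuously instead of quarterly. A sketch over both sides' live records, using the same kind of discrepancy as the example pipeline (two values for the same account): agree where the records match, surface drift the day it appears:

```python
def joint_forecast(side_a: dict, side_b: dict) -> dict:
    """One live forecast both companies read: agreed where records match,
    flagged as drift the day they diverge. No quarterly VLOOKUP."""
    forecast = {}
    for acct in sorted(set(side_a) | set(side_b)):
        a, b = side_a.get(acct), side_b.get(acct)
        if a is not None and a == b:
            forecast[acct] = {"status": "agreed", **a}
        elif a is not None and b is not None:
            forecast[acct] = {"status": "drift", "a": a, "b": b}
        else:
            forecast[acct] = {"status": "one-sided", "record": a or b}
    return forecast
```

In the sanctioned layer this runs on entitlement-filtered views of each CRM, so neither side exports anything; the "drift" entries are the only items left for humans to resolve.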
VM
Interview
~2 min
Vince Menzione · Ultimate Partner · 4/23/26
"Pick one motion, one hyperscaler, one play — go deep until it produces a flywheel. Doing four things badly with one operator is not a strategy. The teams that win in the AI era are the ones disciplined enough to choose."
Watch interview clip ›
The AI era is the partnerships era
Outside of channel sales, complex partnerships were built on relationships — joint roadmap reviews, multi-year alliance commitments, the patient work of two companies learning to trust each other. Trust still holds. What changed is how it's built. The cloud era gave even the most complex alliances a way to encode trust into shared systems and shared data — and the pathway from "we partner" to "we close predictable revenue together" stopped being mysterious. It became infrastructure.
AI isn't a productivity upgrade for partner managers. It's a new capability to accelerate building trusted business relationships — at the scale and speed the AI market demands. That's the role shift. The partner manager becomes the Partnership Operator — accountable not for activity, but for what their partnerships produce. Agents are the operating layer of the function, not a tool a few people use cleverly. Just as sales operators built on CRMs and marketing operators built on ABM platforms, partnership operators build on agents.
The companies that started building this are already showing what it looks like. ClearScale's AWS referrals are 100% automated — at the pace of the deal, not the partner team's calendar. Boomi's marketplace revenue grew 30× in a year. Win rates rise from 22% to 47% when sellers are activated inside the deals they're already working. These aren't predictions. They're the early operating model — the one that defines partner-led efficiency for the AI era.
Sources: WorkSpan interviews — Rob Moyer, BlueThread (4/23/26); Jay McBain, Canalys / Omdia (4/23/26). Primary calls drew on partnership leaders at nine companies across enterprise software, cloud platforms, integration platforms, professional services, and industrial OEMs — including Joe Estes (Boomi). Vault canon: Co-Sell Engine (Moyer/Lavoie), Operator OS, PTM, Partner Revenue Platform, Seller Activation. Customer proof points: ClearScale, Boomi, win-rate shift, $513B pipeline.
WorkSpan's mission is to help businesses achieve more together.
Cloud built the infrastructure of trust between businesses. AI is how that trust scales.
The operators building this layer now are writing the rules everyone else will operate by.