Your agent can write the email. It cannot yet hear the reply.
MCP is opening the door for AI agents to act on your behalf. HeySpeak is building the voice feedback layer they will use to collect real signal from real humans, fast, with no scheduling. Here is what is coming.
Why voice is the missing primitive for agentic outreach
The agentic stack of 2026 is filling in fast. Agents can read your inbox, schedule meetings, draft messages, search your codebase, and push deploys. What they still cannot do is hear a customer hesitate before saying yes. That last mile, the unfiltered human reaction, is where most product decisions actually get made or unmade. Text forms strip it out. Calendar calls do not scale. Voice notes, dispatched async, are the format that fits.
Magic Links were built for that loop: ask one question, send a link, get back voice plus transcript plus summary. The format is already agent-shaped. There is no app to install, no login to handle, no calendar to manage. An agent can create the link, send it, and read back the responses without a human in the middle of the plumbing. The human only shows up where the human is needed: at the microphone.
What the HeySpeak MCP server will expose
The first tool surface is small on purpose. We would rather ship four good tools than twenty unused ones. The shape of the API maps directly to the workflow most teams already run by hand.
create_magic_link
Your agent calls this with a single open question. The server returns a shareable URL. That is the unit of work: one question, one link, many voice responses.
send_to_recipients
Pass a list of email addresses. The server dispatches a branded message with the link and an explainer. Recipients tap once and record.
list_responses
Pull the list of responses for a given link, each with a one-line AI summary. The agent can scan, count, cluster, or rank without pulling the full transcripts.
get_response_detail
Pull the full transcript and AI summary for a specific response when the agent needs depth. Audio stays in your HeySpeak account behind signed URLs; the agent never holds the raw file.
That is the v1 surface. We will widen it based on what you actually try to automate, not what we guess sounds useful in a blog post.
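The four tools above can be pictured as an in-memory stub. This is a minimal sketch, not the real server: the tool names come from this post, but every signature, field, and URL format below is an assumption.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import uuid

@dataclass
class Response:
    response_id: str
    summary: str          # one-line AI summary
    transcript: str = ""  # full transcript, fetched on demand

@dataclass
class MagicLink:
    url: str
    question: str
    responses: List[Response] = field(default_factory=list)

class HeySpeakStub:
    """Illustrative stand-in for the v1 tool surface; not the real API."""

    def __init__(self) -> None:
        self.links: Dict[str, MagicLink] = {}

    def create_magic_link(self, question: str) -> str:
        """One question in, one shareable URL out."""
        link_id = uuid.uuid4().hex[:8]
        url = f"https://heyspeak.example/m/{link_id}"  # assumed URL shape
        self.links[link_id] = MagicLink(url=url, question=question)
        return url

    def send_to_recipients(self, link_id: str, emails: List[str]) -> int:
        """Dispatch the branded email with the link; returns count sent."""
        assert link_id in self.links
        return len(emails)  # the real server would send email here

    def list_responses(self, link_id: str) -> List[dict]:
        """Summaries only, so the agent can scan without full transcripts."""
        return [{"id": r.response_id, "summary": r.summary}
                for r in self.links[link_id].responses]

    def get_response_detail(self, link_id: str, response_id: str) -> dict:
        """Full transcript plus summary for one response."""
        r = next(x for x in self.links[link_id].responses
                 if x.response_id == response_id)
        return {"id": r.response_id, "summary": r.summary,
                "transcript": r.transcript}
```

The point of the sketch is the shape of the loop: create, send, scan summaries, drill into one transcript. The agent never needs a fifth call.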
What teams will use this for
Always-on customer discovery
Your agent watches your CRM. Every time a deal closes lost, it dispatches a one-question Magic Link to the contact: "What were you really hoping this would do for you?" Twenty closed-lost deals in a month become twenty 60-second voice notes you actually read, instead of a Salesforce field nobody fills in honestly.
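That loop is small enough to sketch. The client object below is a fake: `create_magic_link` and `send_to_recipients` are the tool names from this post, while the deal shape and everything else are illustrative assumptions.

```python
CLOSED_LOST_QUESTION = "What were you really hoping this would do for you?"

class FakeClient:
    """Stand-in for the real MCP tool calls; records what was sent."""

    def __init__(self) -> None:
        self.sent = []  # (url, recipients) pairs

    def create_magic_link(self, question: str) -> str:
        # Deterministic fake URL; the real server mints its own.
        return f"https://heyspeak.example/m/{abs(hash(question)) % 10**8}"

    def send_to_recipients(self, url: str, emails: list) -> None:
        self.sent.append((url, list(emails)))

def dispatch_closed_lost(client, deals):
    """Send one Magic Link per deal that just closed lost.

    deals: list of dicts with assumed keys "stage" and "contact_email".
    Returns the links created, keyed by contact email.
    """
    links = {}
    for deal in deals:
        if deal["stage"] != "closed_lost":
            continue
        url = client.create_magic_link(CLOSED_LOST_QUESTION)
        client.send_to_recipients(url, [deal["contact_email"]])
        links[deal["contact_email"]] = url
    return links
```

An agent polling the CRM would call `dispatch_closed_lost` on each batch of newly lost deals; won deals pass through untouched.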
Post-shipping reaction loops
An agent in your release pipeline sends a Magic Link to the first users who touched a feature, 48 hours after it ships. The agent reads the summaries, flags the two or three responses worth a real follow-up, and drops them into your Linear comments. The window is short, the signal is fresh, and you did nothing manually.
Support triage with a human voice
Your support agent escalates a ticket. Before booking a call, it sends a voice link: "Tell me what you were trying to do when it broke." The customer talks for 90 seconds. The summary reaches the human teammate with the actual story, not a typed reconstruction. Half of those calls do not need to happen.
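The triage step above is just a filter over `list_responses` output. Here is a minimal sketch: the `{"id", "summary"}` dict shape matches the tool description earlier in this post, while the escalation keywords are purely illustrative assumptions.

```python
# Illustrative escalation markers; a real agent would use its own judgment
# or a model call rather than a keyword list.
ESCALATE_MARKERS = ("data loss", "crash", "refund", "cancel")

def responses_needing_call(summaries):
    """Return ids of responses that still warrant a live call.

    summaries: list of {"id": ..., "summary": ...} dicts, as returned
    by list_responses.
    """
    flagged = []
    for item in summaries:
        text = item["summary"].lower()
        if any(marker in text for marker in ESCALATE_MARKERS):
            flagged.append(item["id"])
    return flagged
```

Everything the filter drops is a call that did not need to happen; everything it keeps reaches the human teammate with the customer's own words attached.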
The bigger picture: customer feedback as an agentic primitive
Every wave of new infrastructure rewards the formats that fit it. Mobile rewarded the swipe. Slack rewarded the short message. The agentic web will reward the formats that an agent can dispatch and read without a human in the middle. Async voice fits that shape almost perfectly: low friction for the human at the microphone, structured output for the agent on the other end.
We are not betting that MCP itself is the final protocol. We are betting that the underlying need, agents talking to humans without breaking the async contract, is durable. The Magic Link format already works for that loop. The MCP server is the wrapper that lets your agent reach for it.
Be first when the MCP server ships.
Create an account today. Active users get early access and shape the v1 tool surface.
Create your account