For AI agents

Your agent can write the email. It cannot yet hear the reply.

MCP is opening the door for AI agents to act on your behalf. HeySpeak is building the voice feedback layer they will use to collect real signal from real humans, fast, with no scheduling. Here is what is coming.

Get on the early list
5 free responses, no credit card

The short answer

MCP, the Model Context Protocol, lets AI agents call external tools. HeySpeak is building the MCP server for async voice feedback. One Magic Link, sent by your agent, gathers 60-second voice responses from 20 customers. The agent reads back AI summaries and full transcripts. The voice stays human. The dispatch and the synthesis go agentic.
1 link for your agent to collect feedback from many humans
60 sec per voice response, with transcript and AI summary
0 logins required from recipients, designed for agent dispatch

Why voice is the missing primitive for agentic outreach

The agentic stack of 2026 is filling in fast. Agents can read your inbox, schedule meetings, draft messages, search your codebase, and push deploys. What they still cannot do is hear a customer hesitate before saying yes. That last mile, the unfiltered human reaction, is where most product decisions actually get made or unmade. Text forms strip it out. Calendar calls do not scale. Voice notes, dispatched async, are the format that fits.

Magic Links were built for that loop: ask one question, send a link, get back voice plus transcript plus summary. The format is already agent-shaped. There is no app to install, no login to handle, no calendar to manage. An agent can create the link, send it, and read back the responses without a human in the middle of the plumbing. The human only shows up where the human is needed: at the microphone.

What the HeySpeak MCP server will expose

The first tool surface is small on purpose. We would rather ship four good tools than twenty unused ones. The shape of the API maps directly to the workflow most teams already run by hand.

create_magic_link

Your agent calls this with a single open question. The server returns a shareable URL. That is the unit of work: one question, one link, many voice responses.
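
To make that shape concrete, here is a minimal sketch of how the tool could be registered with the official TypeScript MCP SDK. The api.heyspeak.com endpoint, the request and response shapes, and the HEYSPEAK_API_KEY variable are illustrative assumptions, not a published HeySpeak API.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "heyspeak", version: "0.1.0" });

// Hypothetical REST endpoint and credentials; the real API may differ.
const API = "https://api.heyspeak.com/v1";
const KEY = process.env.HEYSPEAK_API_KEY;

server.tool(
  "create_magic_link",
  { question: z.string().describe("One open question to ask your customers") },
  async ({ question }) => {
    const res = await fetch(`${API}/magic-links`, {
      method: "POST",
      headers: { Authorization: `Bearer ${KEY}`, "Content-Type": "application/json" },
      body: JSON.stringify({ question }),
    });
    // Assumed response shape: { id, url }. One question in, one shareable link out.
    const { id, url } = await res.json();
    return { content: [{ type: "text", text: JSON.stringify({ id, url }) }] };
  }
);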

send_to_recipients

Pass a list of email addresses. The server dispatches a branded message with the link and an explainer. Recipients tap once and record.
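
Continuing the same hypothetical sketch, dispatch could look like this; the /send endpoint is again an assumption:

server.tool(
  "send_to_recipients",
  {
    link_id: z.string().describe("Magic Link to dispatch"),
    emails: z.array(z.string().email()).describe("Recipient email addresses"),
  },
  async ({ link_id, emails }) => {
    // HeySpeak sends the branded message; the agent only supplies the list.
    await fetch(`${API}/magic-links/${link_id}/send`, {
      method: "POST",
      headers: { Authorization: `Bearer ${KEY}`, "Content-Type": "application/json" },
      body: JSON.stringify({ emails }),
    });
    return { content: [{ type: "text", text: `Sent to ${emails.length} recipients` }] };
  }
);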

list_responses

Pull the list of responses for a given link, each with a one-line AI summary. The agent can scan, count, cluster, or rank without pulling the full transcripts.
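
In the same sketch, the listing tool could hand back those lightweight summaries as JSON for the agent to scan; the response shape is an assumption:

server.tool(
  "list_responses",
  { link_id: z.string().describe("Magic Link to inspect") },
  async ({ link_id }) => {
    const res = await fetch(`${API}/magic-links/${link_id}/responses`, {
      headers: { Authorization: `Bearer ${KEY}` },
    });
    // Assumed shape: [{ response_id, summary }]. Summaries only, no transcripts.
    const responses = await res.json();
    return { content: [{ type: "text", text: JSON.stringify(responses) }] };
  }
);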

get_response_detail

Pull the full transcript and AI summary for a specific response when the agent needs depth. Audio stays in your HeySpeak account behind signed URLs; the agent never holds the raw file.
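
The depth call rounds out the sketch, together with the stdio wiring MCP clients expect; the endpoint and response shape remain assumptions:

server.tool(
  "get_response_detail",
  { response_id: z.string().describe("Response to pull in full") },
  async ({ response_id }) => {
    const res = await fetch(`${API}/responses/${response_id}`, {
      headers: { Authorization: `Bearer ${KEY}` },
    });
    // Transcript and summary only; the audio stays behind signed URLs.
    const { transcript, summary } = await res.json();
    return { content: [{ type: "text", text: JSON.stringify({ transcript, summary }) }] };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);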

That is the v1 surface. We will widen it based on what you actually try to automate, not what we guess sounds useful in a blog post.

What teams will use this for

Always-on customer discovery

Your agent watches your CRM. Every time a deal closes lost, it dispatches a one-question Magic Link to the contact: "What were you really hoping this would do for you?" Twenty closed-lost deals in a month become twenty 60-second voice notes you actually read, instead of a Salesforce field nobody fills in honestly.
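
As a sketch of the agent side, assuming the tool surface above, a closed-lost hook could look like this with the TypeScript MCP client SDK; the heyspeak-mcp command and the onClosedLost handler are hypothetical wiring, not shipped code:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Hypothetical CRM hook: called whenever a deal closes lost.
async function onClosedLost(contactEmail: string) {
  const client = new Client({ name: "crm-agent", version: "0.1.0" });
  await client.connect(new StdioClientTransport({ command: "heyspeak-mcp" }));

  // Ask the one question; the sketched server returns { id, url } as JSON text.
  const created: any = await client.callTool({
    name: "create_magic_link",
    arguments: { question: "What were you really hoping this would do for you?" },
  });
  const { id } = JSON.parse(created.content[0].text);

  await client.callTool({
    name: "send_to_recipients",
    arguments: { link_id: id, emails: [contactEmail] },
  });

  await client.close();
}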

Post-shipping reaction loops

An agent in your release pipeline sends a Magic Link 48 hours after a feature ships, to the users who touched it first. The agent reads the summaries, flags the two or three responses worth a real follow-up, and drops them into your Linear comments. The window is short, the signal is fresh, and you did nothing manually.

Support triage with a human voice

Your support agent escalates a ticket. Before booking a call, it sends a voice link: "Tell me what you were trying to do when it broke." The customer talks for 90 seconds. The summary reaches the human teammate with the actual story, not a typed reconstruction. Half of those calls do not need to happen.

The bigger picture: customer feedback as an agentic primitive

Every wave of new infrastructure rewards the formats that fit it. Mobile rewarded the swipe. Slack rewarded the short message. The agentic web will reward the formats that an agent can dispatch and read without a human in the middle. Async voice fits that shape almost perfectly: low friction for the human at the microphone, structured output for the agent on the other end.

We are not betting that MCP itself is the final protocol. We are betting that the underlying need, agents talking to humans without breaking the async contract, is durable. The Magic Link format already works for that loop. The MCP server is the wrapper that lets your agent reach for it.

Common questions

What is an MCP server for customer feedback?
MCP, the Model Context Protocol, is the emerging standard for letting AI agents call tools and pull context from external systems. An MCP server for customer feedback exposes a small set of actions an agent can take on your behalf: create a Magic Link with a question, send it to a list of contacts, watch responses come in, and read back transcripts and summaries. The agent does the dispatch and the synthesis. The voice happens between you and your customer, unfiltered.
Can AI agents already send HeySpeak Magic Links today?
Not yet through MCP. The HeySpeak dashboard and API exist today, but the MCP server is on the roadmap. If you sign up now, you are using the same product humans use, and you will be first in line when the MCP integration ships. Early access goes to active accounts.
When will the HeySpeak MCP server be available?
We are sequencing it behind a first wave of paying customers. The MVP launch comes first; the MCP server follows once we have heard from real users which workflows they actually want an agent to automate. The best way to influence the priority: create an account, start a Magic Link, and tell us what you wish an agent could do with it.
What will an AI agent be able to do with HeySpeak through MCP?
The planned tool surface is small and concrete. Create a Magic Link with a single open question. List recent responses for a link, with one-line summaries. Pull the full transcript and AI summary for a response. Send the link to a list of recipients via email. That covers the loop: ask, collect, read. The agent never holds the audio. Recordings stay in your account behind signed URLs.
How does voice fit into agentic workflows?
Agents are good at text. They are not yet good at being humans. When you need a real reaction, a tone of voice, an unprompted story, you still need a human at the other end. Voice keeps the human in the loop without putting them on a call. Your agent dispatches the question. The customer records 60 seconds. The agent reads the summary and decides what to do next. That is the missing primitive for async customer outreach in agentic systems.
Will this work with Claude, ChatGPT, and Cursor?
Yes. MCP is a shared standard, so any client that speaks MCP will be able to use the HeySpeak server. The same applies to agent frameworks like LangChain or AutoGen, and to in-IDE agents like Cursor. One server, every compatible client.

Be first when the MCP server ships.

Create an account today. Active users get early access and shape the v1 tool surface.

Create your account