AB Split Test Adds MCP Support

Let's talk about the elephant in the room.

These days, everyone is using AI to build their pages - entire websites, even. Whether you're using Claude, ChatGPT, or an AI built into your website builder, it's everywhere, producing work at crazy speed. In seconds.

But here's the thing: just because it's fast and confident doesn't mean it's right.

Your AI doesn't know which headline actually converts best with your exact traffic. It doesn't know which call to action your visitors are going to respond to. It's guessing. Very confidently.

Split Testing Is the Human in the Loop

This is exactly why split testing matters more than ever.

We need to validate that what we're putting out actually works with real people. Split testing lets you do that - it's the reality check AI desperately needs.

And now, we've made it even easier.

Introducing MCP Integration for AB Split Test

AB Split Test now has MCP (Model Context Protocol) integration, which means your AI agents can create, run, and analyze split tests directly - without ever leaving your workflow.

Think about what this unlocks:

  • Fast content creation (thanks, AI)
  • Quick validation (thanks, split testing)
  • All in one seamless workflow (thanks, MCP)

That combination is going to give you the edge.

See It in Action

Let me show you how this works in practice.

Here we have a features page on our website. Since the site runs on WordPress, we're connected through the official MCP connector available in WordPress 7.

I'll use Claude (on the cheap model, because we don't want to burn through all our tokens) and give it a simple instruction:

"Create a split test of the features page. Test a new headline and subheading - something focused on AI-forward, efficiency-focused WordPress developers in 2026."

And just like that:

  • Claude creates a second variation: "Automate Your CRO"
  • The test is created and set as a draft, ready to publish
  • Claude explains why this variation matters and what it's testing
  • We can review it, edit it, or just say "start the test"
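Under the hood, MCP exposes server capabilities as named tools that the agent invokes with JSON-RPC 2.0 `tools/call` requests. As a rough sketch of what Claude sends when you ask for a new test - the tool name `create_split_test` and its argument names are illustrative assumptions, not the plugin's actual API - the request envelope looks something like this:

```python
import json


def mcp_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 envelope for an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }


# Hypothetical tool name and arguments -- the real plugin may differ.
request = mcp_tool_call("create_split_test", {
    "page": "features",
    "variation_headline": "Automate Your CRO",
    "status": "draft",
})
print(json.dumps(request, indent=2))
```

The `jsonrpc`/`method`/`params` shape is standard MCP; everything inside `arguments` is whatever schema the server advertises for that tool, which is why the agent can drive it without custom glue code.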

Let's keep going.

"Get me a list of all active tests and completed tests. List them in a table."

Done. Claude pulls the data, formats it, and presents it right in the conversation.
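The reporting step follows the same pattern: the agent calls a listing tool, gets structured data back, and formats it for you. A minimal sketch of that last step - the response fields here are made-up sample data, not real plugin output:

```python
# Hypothetical payload from a "list tests" tool call;
# field names and values are illustrative only.
tests = [
    {"name": "Features headline", "status": "active", "conversion": "3.1%"},
    {"name": "Pricing CTA", "status": "completed", "conversion": "4.7%"},
]


def to_table(rows):
    """Format a list of dicts as a plain-text table with aligned columns."""
    headers = list(rows[0])
    widths = [max(len(h), *(len(r[h]) for r in rows)) for h in headers]
    lines = [" | ".join(h.ljust(w) for h, w in zip(headers, widths))]
    lines.append("-+-".join("-" * w for w in widths))
    for r in rows:
        lines.append(" | ".join(r[h].ljust(w) for h, w in zip(headers, widths)))
    return "\n".join(lines)


print(to_table(tests))
```

In practice you never write this code yourself - the point is that the data arrives as structured tool output, so the agent can reshape it into whatever view you ask for.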

This isn't just a cool tech demo. It's a fundamental shift in how you can approach optimization.

Before: AI creates content → you manually set up tests → you wait → you analyze → repeat
Now: AI creates content → AI sets up the test → AI monitors → AI reports back → you make the call

You're still the decision-maker. But now you have an AI assistant that handles the tedious parts of validation.

Fast Content + Fast Validation = Competitive Edge

The old workflow was: build something, hope it works, maybe test it later.

The new workflow is: build something, validate it immediately, know what works.

If you're already using AI to build your WordPress sites - and let's be honest, you probably are - then you need split testing to make sure that content actually converts.

And with MCP integration, there's no excuse not to.