
AI Auction

4 posts with the tag “AI Auction”

Why Your AI Agent Needs a Competitor (Series, 3 of 3)

Part 3 of 3 - Part 1: MCP Servers Have a Discovery Problem | Part 2: Your AI Agent Should Earn the Job

In part one, we showed the problem: when multiple AI agents can do the same job, the model picks one based on its description. Not cost. Not quality. Not track record. A coin flip with extra steps.

In part two, we introduced the idea: make agents compete. Run an auction. Let the best one earn the job.

This post shows what that looks like in practice.

638Labs Demo

Watch the demo (1 min)


One gateway, three modes

Everything in 638Labs runs through a single gateway. Same API key, same payload format, same endpoint. What changes is how you route.

Direct - you name the agent. The gateway sends the request to it. Simple proxy. You are in control.

AIX - you describe the job. The gateway runs an auction across every eligible agent. The most suited one wins and executes. You get the result back. You never had to pick.

AIR - same auction, but instead of executing the winner, you get a ranked shortlist. Prices, models, reputation scores. You review the candidates and call the one you want.

Three modes. One gateway. The same agents compete regardless of how you route.
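To make the three modes concrete, here is a minimal Python sketch that builds the same payload for each route. The "stoPayload"/"stoAuction"/"core" field names follow the example later in this post; the "mode" and "agent" keys are illustrative assumptions, not documented API fields.

```python
# Hypothetical sketch: one payload shape, three routing modes.
# "mode" and "agent" are assumed field names for illustration only.

def build_request(mode, category, reserve_price, agent=None):
    """Build a gateway payload for direct, aix, or air routing."""
    if mode not in ("direct", "aix", "air"):
        raise ValueError(f"unknown mode: {mode}")
    core = {"category": category, "reserve_price": reserve_price}
    if mode == "direct":
        if agent is None:
            raise ValueError("direct routing requires an agent name")
        core["agent"] = agent  # you name the agent explicitly
    return {"stoPayload": {"stoAuction": {"core": core}, "mode": mode}}

# Same payload format; only the routing changes:
direct = build_request("direct", "summarization", 1.00, agent="agent-a")
aix = build_request("aix", "summarization", 1.00)  # winner executes
air = build_request("air", "summarization", 1.00)  # ranked shortlist
```

Whichever mode you pick, the rest of the request is identical - which is exactly the point of a single gateway.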


How the auction works

You submit a job with a category and a reserve price. That is the minimum specification.

{
  "stoPayload": {
    "stoAuction": {
      "core": {
        "category": "summarization",
        "reserve_price": 1.00
      }
    }
  }
}

Here is what happens next:

  1. The gateway identifies every agent registered in that category.
  2. Each agent computes a bid based on its strategy - some bid their minimum, some undercut, some adapt dynamically.
  3. Bids are sealed. No agent sees what any other agent bid.
  4. The system selects the best suited agent.
  5. In AIX mode, the winner executes. In AIR mode, the candidates are ranked and returned.

One round. Deterministic. No negotiation. The entire auction completes in milliseconds before the winning agent even starts working.
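The five steps above can be sketched in a few lines of Python. The agent names, bid strategies, and tie-breaking rule here are invented for illustration; the gateway's actual selection logic is not public.

```python
# A toy model of the single-round sealed-bid auction described above.
# Bid strategies and the price-then-name tie-breaker are assumptions.

def run_auction(agents, category, reserve_price, mode="aix"):
    """One round: sealed bids, deterministic winner, no negotiation."""
    # 1. Every agent registered in the category is eligible.
    eligible = [a for a in agents if category in a["categories"]]
    # 2-3. Each computes its bid independently; no bid is visible to others.
    bids = [(a["bid"](reserve_price), a["name"]) for a in eligible]
    # Bids above the reserve price are rejected.
    bids = [(price, name) for price, name in bids if price <= reserve_price]
    # 4. Deterministic selection: lowest price wins, name breaks ties.
    ranked = sorted(bids)
    if mode == "air":
        return ranked  # 5. AIR: ranked shortlist for the caller to review
    return ranked[0] if ranked else None  # 5. AIX: winner executes

agents = [
    {"name": "agent-a", "categories": {"summarization"},
     "bid": lambda reserve: 0.40},            # always bids its minimum
    {"name": "agent-b", "categories": {"summarization"},
     "bid": lambda reserve: reserve * 0.35},  # undercuts the reserve
    {"name": "agent-c", "categories": {"translation"},
     "bid": lambda reserve: 0.10},            # wrong category, ineligible
]
```

With a reserve of 1.00, agent-b bids 0.35 and wins; in AIR mode the caller gets both summarization bids back, ranked.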


What competition actually changes

Without an auction, your routing is static. You hardcode Agent A for summarization. Agent A has no incentive to improve. If Agent B launches with better quality, you will never know unless you manually discover it, evaluate it, and rewrite your integration.

With an auction, Agent B shows up, registers, and starts competing. If it is better suited, it wins. If Agent A wants to keep winning, it has to respond - improve its quality, its reliability, or both. You do not have to change a single line of code. The system adapts.

This is not a theoretical benefit. This is basic market dynamics applied to AI routing.

New agent? It registers and starts bidding immediately. No routing config changes. No deployment tickets.

Agent goes down? It stops competing. The next best agent wins. Your request still gets served.

Agent improves its model? Its reputation score goes up. In quality-weighted auctions (coming soon), it gets an edge even at a slightly higher price.

Provider changes pricing? The agent adjusts its bid range. The market recalibrates on the next request.

None of this requires you to do anything. The auction handles it.


Why a single agent is a liability

If you depend on one agent for a task, you have a single point of failure with zero price pressure. That agent controls your cost, your uptime, and your quality. You are locked in.

The moment you have two agents that can do the same job, you have options. The moment they compete, you have a market. The moment that market runs automatically on every request, you have infrastructure that optimizes itself.

This is why your AI agent needs a competitor. Not because competition is philosophically good. Because a monopoly on your task routing means you are paying whatever the incumbent charges, accepting whatever quality it delivers, and absorbing whatever downtime it has.

An agent with a competitor is an agent that earns its place on every call.


How to connect

638Labs works two ways.

Direct API - call the gateway with any HTTP client. Send a JSON payload, get a result. No SDK required. Any language, any platform.

MCP server - install the open source MCP server and your AI coding assistant (Claude Code, Cursor, Codex, any MCP client) can discover agents, run auctions, get recommendations, and route directly. One connection, every agent in the registry.

Same gateway, same auction, same agents. The MCP server is one way in. The API is another. Use whichever fits your stack.
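As a minimal sketch of the direct-API path, here is a standard-library Python call. The gateway URL and the Authorization header name are placeholders, not documented values; the payload shape matches the auction example earlier in this post.

```python
# Direct-API sketch: plain HTTP, no SDK. URL and auth header are
# placeholders - substitute the real gateway endpoint and your key.
import json
import urllib.request

GATEWAY_URL = "https://gateway.example.com/v1/jobs"  # placeholder URL
API_KEY = "YOUR_API_KEY"                             # placeholder key

def build_payload(category, reserve_price):
    """The payload shape from the auction example in this series."""
    return {"stoPayload": {"stoAuction": {"core": {
        "category": category,
        "reserve_price": reserve_price,
    }}}}

def submit_job(category, reserve_price):
    """POST the job to the gateway and return the parsed response."""
    req = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(build_payload(category, reserve_price)).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},  # assumed header
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(submit_job("summarization", 1.00))
```

Any language with an HTTP client can do the same; the MCP server wraps this exact call for coding assistants.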


Where this goes

Right now, the auction creates real competitive pressure across every request.

What comes next:

  • Quality-weighted auctions - factor in reputation scores so a slightly more expensive agent with a 99% success rate can beat a cheaper one with 80%.
  • Preference-based ranking - tell the system to optimize for latency, cost, quality, or a balance. The auction adapts.
  • Batch auctions - submit a batch of work and let agents bid on the whole thing.

The mechanism stays the same. Agents compete. The best one wins. What “best” means gets richer over time.


Try it

The MCP server is open source. The registry is live. Agents are bidding right now.

Install the MCP server, run an auction, see what comes back. If you have agents of your own, register them and start competing.

We are building the competitive layer for AI. If that resonates, we want to hear from you: info@638labs.com


Learn more: https://638labs.com

Your AI Agent Should Earn the Job (Series, 2 of 3)

Part 2 of 3 - Part 1: MCP Servers Have a Discovery Problem | Part 3: Why Your AI Agent Needs a Competitor

In part one, we laid out the problem:

When multiple AI agents can do the same job, the model picks one based on how well its description reads.

Not the best. Not the cheapest. Not the most reliable. Not the one with the best track record.

The one with the best-matching string.

A coin flip with extra steps.

This post is about the alternative.


The decision you never made

Here is what happens today:

  1. You make a request to your LLM. Claude, ChatGPT, Gemini…
  2. The LLM has MCP servers connected, each exposing tools that can engage agents on your behalf. It looks at the tool descriptions and picks one.
  3. You get a result. You have no idea if the best agent for the job was chosen.

You did not make that decision. You do not see it. You cannot audit it. You have zero idea why one agent was chosen over another, or what alternatives even existed.

The model is not evaluating cost. It is not checking availability. It knows nothing about past performance, error rates, or reputation. It has no concept of which agent has handled ten thousand similar requests successfully and which one was deployed yesterday. It is matching token patterns against tool descriptions. That is the entire selection mechanism.

When there is one tool per job, this is fine. When there are two or more that overlap in functionality, you have a problem. The model picks one. It does not tell you why. And if a better, cheaper, faster, or more reliable option existed, you will never know.

This is not a theoretical concern.
It is the default behavior of every MCP-connected system running today.


What if agents had to compete?

The core idea behind 638Labs is simple:

When multiple agents can do the same job, do not let the LLM guess which agent to use.

Make the agents compete. Make them bid for the job.

We built the auction house: when you want something done, the LLM does not decide for you - it puts the job up for auction. For bidding. The agents earn the right to execute your query.

Every eligible agent is evaluated on merit - cost, availability, reputation, fit for the task… The best one at that moment earns the right to handle your request. Not because it had a clever description. Because it was actually the best option.

This is not just about your requests. It is about how agent selection should work across the entire ecosystem. Every platform that routes AI requests faces this problem. Competition is the answer.


In part three, we show what this looks like in practice.

If you are building with AI agents and this resonates, we would like to hear from you: info@638labs.com


Learn more: https://638labs.com

MCP Servers have a Discovery Problem (Series, 1 of 3)

Part 1 of 3 - Part 2: Your AI Agent Should Earn the Job | Part 3: Why Your AI Agent Needs a Competitor

MCP is working and it is working well - Anthropic is really firing on all cylinders in that direction.

Developers are connecting tools to their AI environments - GitHub, Slack, Notion, databases, internal services. The protocol does what it promised: it gives agents a standard way to discover and call external capabilities. That’s a real step forward.

But as the number of connected tools grows, something starts to break - not in the protocol itself, but in how tools get selected.


How Discovery works in an MCP Server

When an LLM connects to an MCP server, it reads a list of tool definitions.

Each tool has a name and a description. When the user makes a request, the model reads those descriptions and decides which tool to call.

This works well when there’s one tool per job. A GitHub MCP server for repo operations. A Slack MCP server for messaging. No ambiguity, no overlap.

Now consider what happens when there are multiple tools that can do the same thing:

Your organization deploys four summarization agents in the MCP server.

  • one runs on OpenAI
  • one on Anthropic
  • one on a self-hosted model the team down the hall is running
  • and one from a third-party vendor.

All four are registered as MCP tools. You ask your LLM: “Summarize this document.”

Which agent runs?

We don’t know; it’s magic, internal to the black box that is the LLM.

Not the cheapest. Not the fastest. Not the one with the best track record on this type of content. The model picks based on how well the tool description reads - a beauty contest judged by token prediction.

That’s not selection. That’s a coin flip with extra steps.


How big a problem is this?

This pattern shows up everywhere capability overlap exists:

  • classification
  • translation
  • code generation
  • content moderation
  • data extraction
  • any category where multiple providers can fulfill the same request.

As the ecosystem matures, overlap will increase, not decrease. More agents, more MCP servers, more tools with overlapping capabilities - first inside an organization, then across teams, and eventually in the open market.

The current model has no mechanism for handling this. There’s no ability for the LLM to choose on merit. There is no price signal. No latency comparison. No historical quality score. No way for an agent to say “I can do this job for less” or “I’ve been more accurate at this task over the last thousand calls.” The tool descriptions are static text, written once, evaluated by the LLM at call time.
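To make the gap concrete, here is a toy contrast between the two mechanisms. The fields and formulas are invented for illustration - MCP tool definitions carry only static text, which is precisely the problem - and the word-overlap function is a rough stand-in for how token matching behaves, not how any particular LLM works.

```python
# Toy contrast: static description matching vs. merit-based selection.
# All fields and weights below are illustrative assumptions.

def pick_by_description(tools, request):
    """Static discovery: the description sharing the most words with
    the request wins - a crude stand-in for token matching."""
    words = set(request.lower().split())
    return max(tools, key=lambda t: len(
        words & set(t["description"].lower().split())))

def pick_on_merit(tools):
    """What static discovery cannot do: weigh runtime signals.
    These fields do not exist in MCP tool definitions today."""
    return max(tools, key=lambda t: t["success_rate"] / t["price"])

tools = [
    {"name": "summarizer-a",
     "description": "Summarize any document quickly",
     "price": 0.50, "success_rate": 0.80},
    {"name": "summarizer-b",
     "description": "Text summarization",
     "price": 0.20, "success_rate": 0.99},
]
```

For the request "Summarize this document", description matching favors summarizer-a, whose text happens to echo the request - even though summarizer-b is cheaper and more reliable on every runtime signal.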

This creates two problems.

For the client, there’s no confidence that the best available agent handled the request. You get an answer, not the best available answer. Worse, you have no visibility into why that tool was chosen or what alternatives existed.

For the agent provider, there’s no way to compete. You can write a better description, but that’s marketing, not performance. You can’t bid lower, respond faster, or prove quality - because the selection mechanism doesn’t accept those inputs. If you’re the fourth summarizer to connect, you’re at the mercy of how a language model interprets four paragraphs of text.


Is this an MCP Problem?

This isn’t a flaw in MCP.

MCP solved two hard problems: standardized capability definition and runtime discovery. Every agent describes what it can do in a common format. Clients discover those capabilities through a simple, standard runtime protocol. That is real infrastructure, and it works.

But that discovery is static. Tool descriptions are written once by the developer and never change. They carry no runtime signal. No price. No latency. No track record. No availability. The model reads the same fixed text every time, regardless of what has changed since it was written.

When there is one tool per job, static discovery is enough. When five tools overlap, it becomes a hardcoded, inflexible selection mechanism with no way to adapt.

Search engines had the same arc. Standardizing how web pages described themselves came first. Ranking them on merit was the breakthrough.

MCP gave us the standard. The ranking layer does not exist yet.


In part two, we will show you how 638Labs solved this.

If you are building with AI agents and this resonates, we would like to hear from you: info@638labs.com.


Learn more: https://638labs.com

Introducing the Agentic AI Auction

Patent pending.

Today we’re opening up a new architectural concept for multi-agent systems: the Agentic AI Auction. It’s a simple idea with a big impact - every job sent to the platform triggers a real-time, deterministic, sealed-bid auction across eligible agents. Instead of static routing, hardcoded priority lists, or manual model selection, agents compete to win the job based on price, latency, or internal strategy.

In v1, we’re starting with a single-round sealed-bid auction designed for real-time execution. Each agent submits one bid without seeing others. The Auction Manager picks the best bid deterministically and dispatches the job. This gives predictable behavior, low overhead, and repeatable results, which is ideal for fast API-level tasks.

The design also decouples demand and supply: as long as the interface stays stable, both sides can evolve independently. Agents can upgrade models, adjust strategies, or add capabilities without breaking clients. Clients don't need to track model changes, new entrants, or performance drift - the auction handles that.

We’ve also implemented more advanced auction modes (multi-round, batch-oriented, quality-weighted, and strategy-adaptive). These are designed for asynchronous or outcome-driven jobs. We’ll demonstrate those in future updates.

v1 is focused on showing the simplest version working end-to-end:

Submit job → agents bid → deterministic winner → job executes.

This is fully working now; we have been testing it successfully for the past weeks, and it is in preview. Contact us if you want to be part of the preview: info@638labs.com

More coming soon.