
Applied AI Ecosystem at 638Labs

638Labs sits at the center of an applied AI ecosystem - a modular architecture for routing, deploying, and scaling live AI endpoints.

This post introduces how three core AI systems - 638Labs, NeuralDreams, and TensorTensor - work together as components of an AI pipeline to power production-grade AI services.

We will integrate more core services in the future; however, each must provide capabilities of which at least 50% are not offered by any other service we integrate with.


Applied AI Ecosystem

638Labs

The central routing layer - a secure, OpenAI-compatible registry and gateway for live AI models, agents, and data services.

Explore Docs →

NeuralDreams

AI data brokerage and vector-ready APIs for search, retrieval, and classification. Available through 638Labs or direct enterprise integration.

Visit NeuralDreams →

TensorTensor

Batch inference and large-scale pipelines for LLMs, agents, and AI workflows. Use via 638Labs or deploy directly for enterprise workloads.

Visit TensorTensor →

AI Registry - Soft versioning

Introducing Soft Versioning.

Soft Versioning for Online AI Endpoints

Most AI registries focus on models and weights. But 638Labs is built for a different world: online, deployed endpoints - models, agents, and data services that are live and callable right now.

That changes how versioning works.

Why not enforce hard versioning?

In traditional software registries or package managers, versioning is critical because you’re distributing code or binaries. But in our model, you’re pointing to a live endpoint, not downloading anything.

So instead of managing version trees or enforcing git-like histories, we keep it simple:

  • If you deploy a new version, register a new route.
    Example:
    acme/agent-v1 → original
    acme/agent-v2 → new endpoint

  • Or reuse the same route and update the metadata:

    • "version": "2.1.0"

We call this soft versioning.
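In code, the two options above can be sketched roughly like this. The registry class, route names, and URLs here are illustrative assumptions for the sake of the sketch, not the actual 638Labs API:

```python
class SoftRegistry:
    """A minimal in-memory sketch: routes map to live endpoint URLs
    plus free-form metadata. No version trees, no enforced history."""

    def __init__(self):
        self.routes = {}

    def register(self, route_name, endpoint_url, metadata=None):
        # Re-registering an existing route simply overwrites it -
        # that is the essence of soft versioning.
        self.routes[route_name] = {
            "endpoint": endpoint_url,
            "metadata": metadata or {},
        }

    def resolve(self, route_name):
        return self.routes[route_name]


registry = SoftRegistry()

# Option 1: a new version gets a new route.
registry.register("acme/agent-v1", "https://agents.example.com/v1")
registry.register("acme/agent-v2", "https://agents.example.com/v2")

# Option 2: reuse one route and bump the version in its metadata.
registry.register(
    "acme/agent",
    "https://agents.example.com/current",
    metadata={"version": "2.1.0"},
)
```

Either way, callers keep pointing at a route name; the registry stays a flat map rather than a version graph.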

Soft versioning is currently in testing and will be deployed soon.

Why it works

  • Keeps the registry light
  • Lets you evolve your services without breaking users
  • Encourages clear naming and route discipline
  • Avoids overengineering a version system in a space where most tools (like GitHub or your CI/CD flow) already handle source tracking

When we’ll evolve this

As we introduce public marketplaces and more collaborative agent development, we’ll expand version support - but without sacrificing the simplicity of pointing to live, callable endpoints.

Until then, soft versioning keeps things lean and developer-friendly.


638Labs is a live registry and proxy for deployed AI services.
Learn more: https://638labs.com

638Labs Beta - accepting users

Introducing 638Labs - The AI Gateway and Registry for Deployed Models, Agents, and Datasources

Looking for active, deployed AI endpoints? We built 638Labs as a developer-first gateway and registry for deployed AI endpoints only.

  • Register and route to live AI services
  • OpenAI-compatible proxy
  • Supports OpenAI, Together.ai, Hugging Face, and Cohere (more coming)
  • Plug in your own private or public endpoints
  • Clean API format: {apikey, route_name, payload}

This is not a model zoo. It’s a live registry and proxy for AI models, agents, and data services.
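The clean API format from the list above can be sketched as follows. Only the `{apikey, route_name, payload}` body shape comes from this post; the gateway URL, key, and route name below are placeholder assumptions:

```python
import json


def build_gateway_request(apikey, route_name, payload):
    """Assemble the clean API format: {apikey, route_name, payload}."""
    return {
        "apikey": apikey,
        "route_name": route_name,
        "payload": payload,
    }


body = build_gateway_request(
    apikey="sk-demo-key",            # placeholder API key
    route_name="acme/agent-v2",      # a registered live route (illustrative)
    payload={"messages": [{"role": "user", "content": "Hello"}]},
)

# The body would then be POSTed to the gateway, e.g.:
# requests.post("https://638labs.example/route", json=body)
print(json.dumps(body, indent=2))
```

Because the payload is passed through opaquely, the same envelope works whether the route points at a model, an agent, or a data service.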


Free to register, free to search, and free to list in our marketplace; an API key is required to use the gateway routing services.

KYAIS - Know Your AI System

KYAIS is our effort to build knowledge about the AI systems we integrate with, learn how to avoid bad ones, and deterministically prune bad systems in order to build and maintain trust for the majority of our customers.

Free, as in beer

638Labs offers free access to all our core services. Free, not public: free means you will not be charged, but you will need to create a free account. The chance for mischief is too great in an era of bots, AI, and massively inexpensive infrastructure that can overwhelm even a state-of-the-art hosting facility. You can make public endpoints for all your AI infrastructure - AI agents, AI models, AI knowledge bases - and customers are only required to create an account and get an API key if they want to use the respective endpoint.

Knowledge

We use knowledge about our systems, traffic patterns, threat actors, and more to watch for possible issues and offer you the most secure and reliable service we can provide.

Automation Integration

We see the rise of automation integration among AI-controlled systems that will only communicate with trusted sources. These trust controls will be set up outside the system in order to prevent bad actors from gaming it.

AI to combat AI

The scale of bots used in various automation systems across the internet was already overwhelming before the current AI wave. The rise of AI agents has increased that scale by multiple orders of magnitude.

Security, guardrails, AI models, agents, and datasets are at the forefront of AI infrastructure and form one of the fastest-growing applied AI sectors in the tech industry.

We want customers to develop, deploy, and grow with AI systems. That can only happen if we invest heavily in guardrails, preventive measures, and resilient systems.