The central communication hub that routes messages between your apps and services in real-time.
Conduit is a WebSocket-based message routing hub that acts as the central nervous system for your infrastructure. Instead of services talking directly to each other, all communication flows through Conduit.
Applications connect to Conduit once and can then communicate with any registered service. Conduit handles message routing, tracks request progress, and ensures real-time updates reach the right clients.
This decouples your frontend from your backends. Your browser doesn't need to know where the LLM service lives or how to handle reconnections - Conduit manages all of that.
Services register their capabilities when they connect. Clients send requests to service names, not URLs. Conduit routes the message to the right place and streams progress updates back.
Communication through Conduit follows a simple pattern: connect, register, request, receive.
Clients and services connect via Socket.IO. Services register what they provide (e.g., "summarizer", "image-gen"). Clients register as type "client".
Client sends a request with a "to" field naming the target service. Conduit looks up the service, routes the message, and starts tracking the request.
The service declares tasks, sends progress updates, and eventually returns results. Conduit forwards all updates to the original requester in real-time.
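The connect → register → request → receive flow above can be sketched as a tiny in-memory simulation. This is an illustrative model only, not Conduit's actual internals - the `Hub` class, its method names, and the synchronous call are all assumptions made for the sketch (the real hub forwards over Socket.IO asynchronously):

```python
class Hub:
    """Toy model of Conduit's routing step: services register by name,
    clients address requests to that name, and the hub forwards."""

    def __init__(self):
        self.services = {}   # service name -> handler
        self.requests = {}   # request id -> status

    def register(self, name, handler):
        self.services[name] = handler

    def request(self, req_id, to, payload):
        if to not in self.services:
            raise KeyError(f"no service registered as {to!r}")
        self.requests[req_id] = "in_progress"
        # The real hub forwards the message and streams updates back;
        # here we just call the handler directly.
        result = self.services[to](payload)
        self.requests[req_id] = "complete"
        return result

hub = Hub()
hub.register("summarizer", lambda p: {"summary": p["text"][:60]})
result = hub.request("r1", "summarizer", {"text": "A long article body"})
```

The key idea the sketch captures is indirection: the client names a service ("summarizer"), never an address, and the hub owns the lookup and the request state.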
Everything you need for real-time service communication.
All communication happens over persistent WebSocket connections via Socket.IO. No polling, no delays - messages flow instantly between connected nodes.
Services register their capabilities when they connect. Clients request services by name, and Conduit routes to the right provider automatically.
Services can declare tasks and report progress. Conduit tracks everything and streams updates to clients, perfect for long-running operations like LLM generation.
Every request is tracked from creation to completion. Get status, view tasks, check progress, and retrieve results - all through simple API calls.
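The lifecycle described above - creation, declared tasks, per-task progress, final result - can be modeled with a small record type. The class and field names here are hypothetical, chosen to mirror the events in the client example below, not Conduit's real schema:

```python
from dataclasses import dataclass, field

@dataclass
class RequestRecord:
    """Illustrative lifecycle record: created -> in_progress -> complete."""
    request_id: str
    status: str = "created"
    tasks: dict = field(default_factory=dict)   # task_id -> progress (0-100)
    result: object = None

    def declare_task(self, task_id):
        self.tasks[task_id] = 0
        self.status = "in_progress"

    def update_progress(self, task_id, progress):
        self.tasks[task_id] = progress

    def complete(self, result):
        self.result = result
        self.status = "complete"

rec = RequestRecord("req-42")
rec.declare_task("t1")
rec.update_progress("t1", 50)
rec.complete({"ok": True})
```

A record like this is what lets status and progress queries work at any point in a request's life, rather than only after it finishes.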
Connect to Conduit from any JavaScript environment using Socket.IO.
// Connect to Conduit
const socket = io('http://conduit.local:9100');

// Register as a client
socket.emit('register', { type: 'client' });

// Send a request to a service
socket.emit('request', {
  to: 'brightwrapper',
  action: 'generate-infographics',
  payload: { url: 'https://example.com/article' }
});

// Listen for progress updates
socket.on('progress', (data) => {
  updateProgressBar(data.task_id, data.progress);
});

// Listen for per-task results
socket.on('task_result', (data) => {
  displayResult(data.task_id, data.result);
});

// Listen for the final result
socket.on('result', (data) => {
  showFinalResult(data.result);
});
Conduit excels at coordinating long-running, multi-step operations.
Generate summaries, infographics, or analysis with real-time progress. Users see each step as it completes rather than waiting for everything.
Generate multiple images in parallel. Track progress per image and stream results as each one completes.
Process documents through multiple stages - parse, analyze, extract, format - with visibility into each step.
Conduit supports horizontal scaling through service pools with automatic load balancing.
Multiple instances of a service can register with the same name. Conduit automatically load-balances requests across all providers using round-robin:
# Multiple instances register as 'llm'
Worker 1 connects → registers as 'llm' → added to pool
Worker 2 connects → registers as 'llm' → added to pool
Worker 3 connects → registers as 'llm' → added to pool
# Internal registry structure
self._services = {
    'llm': {
        'providers': ['sid_123', 'sid_456', 'sid_789'],
        'next_index': 0
    }
}
# Requests are distributed round-robin
Request 1 → Worker 1 (index 0)
Request 2 → Worker 2 (index 1)
Request 3 → Worker 3 (index 2)
Request 4 → Worker 1 (index 0, wraps around)
Just run multiple workers - no configuration needed. Each worker registers with Conduit, and requests are automatically distributed across all available instances.
When a worker disconnects, it's automatically removed from the pool. Remaining workers continue receiving requests. No manual intervention required.
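The pool behavior described above - round-robin with wrap-around, plus removal on disconnect - can be sketched as a standalone class. The `ServicePool` name and methods are illustrative; the sketch just mirrors the `providers` / `next_index` structure shown in the registry example:

```python
class ServicePool:
    """Round-robin pool: a providers list plus a next_index cursor
    that wraps around, matching the registry structure above."""

    def __init__(self):
        self.providers = []
        self.next_index = 0

    def add(self, sid):
        self.providers.append(sid)

    def remove(self, sid):
        # Called when a worker disconnects: drop it and keep the
        # cursor within bounds so remaining workers still get requests.
        if sid in self.providers:
            self.providers.remove(sid)
            self.next_index %= max(len(self.providers), 1)

    def next(self):
        if not self.providers:
            raise LookupError("no providers available")
        sid = self.providers[self.next_index]
        self.next_index = (self.next_index + 1) % len(self.providers)
        return sid

pool = ServicePool()
for sid in ("sid_123", "sid_456", "sid_789"):
    pool.add(sid)

# Four requests: round-robin with wrap-around on the fourth
picks = [pool.next() for _ in range(4)]
```

Removal is the interesting edge case: clamping `next_index` after a disconnect keeps the cursor valid, so routing continues over the surviving workers without any intervention.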
Scaling is straightforward - just run multiple instances:
# Option 1: Multiple processes (recommended)
python worker.py & # Worker 1
python worker.py & # Worker 2
python worker.py & # Worker 3
# Option 2: Separate Conduit client from HTTP server
# Run one Conduit-connected dispatcher that farms work to HTTP workers
python conduit_dispatcher.py & # Connects to Conduit
uvicorn api:app --workers 4 # HTTP workers (no Conduit)
Conduit and Apache have complementary roles:
Apache handles SSL termination, static file serving, WebSocket upgrade proxying, domain routing, rate limiting, and security headers - the infrastructure concerns.
Conduit handles message routing by service name, the service registry and discovery, load balancing across service pools, request lifecycle tracking, and progress streaming.
For even larger scale, multiple Conduit hub instances can share state:
Socket.IO has built-in Redis adapter support. Multiple hub instances share a message bus, so any hub can route to services connected to other hubs.
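Wiring up the Redis adapter might look like the following, assuming the hub is built on python-socketio (an assumption - the source doesn't name the server library). The Redis URL and `async_mode` are illustrative; this is a configuration sketch, not Conduit's actual startup code:

```python
import socketio

# Illustrative Redis URL - point this at your shared Redis instance.
mgr = socketio.AsyncRedisManager('redis://localhost:6379/0')

# Every hub instance started with the same manager shares a message bus,
# so an emit on one hub reaches clients connected to another.
sio = socketio.AsyncServer(client_manager=mgr, async_mode='asgi')
```

With this in place, adding hub capacity is the same motion as adding workers: start another instance pointed at the same Redis.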
Apache routes clients to consistent hub instances using cookies or IP hashing. Combined with Redis, this enables true horizontal scaling of the hub itself.
With multiple hubs, there's no single point of failure. If one hub goes down, clients reconnect to another and services remain available.
For implementation details and code templates, see the Integration Guide.
Explore the dashboard, check the API, or read the full documentation.