Welcome to Resultity
Resultity is a decentralized inference network that executes language model requests across a distributed set of nodes — without centralized control, vendor lock-in, or data leakage.
This documentation describes every component of the system, from node architecture and routing protocols to API usage and reward mechanics.
What is Resultity?
Resultity transforms GPU hardware into a global inference fabric.
- Run LLMs anywhere, trustlessly and verifiably.
- Earn RCP points for uptime, execution, and orchestration.
- Connect via an OpenAI-compatible API, with no lock-in.
- Deploy subclouds to host custom agents, models, or automations.
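Because the network speaks the OpenAI-compatible request format, an existing client can switch over by changing only its base URL. The sketch below builds a standard chat-completion payload with the Python standard library; the gateway URL, API key handling, and model name are illustrative assumptions, not the actual Resultity endpoints — consult the rest of this documentation for the real values.

```python
import json

# Hypothetical gateway URL -- replace with the real Resultity endpoint
# from the API section of these docs.
RESULTITY_API_URL = "https://gateway.example.invalid/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion payload.

    Any tool that already produces this request shape (official OpenAI
    SDKs, LangChain, curl scripts, ...) can target the network without
    code changes beyond the base URL and credentials.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


# Serialize the payload exactly as it would be POSTed to the gateway.
payload = build_chat_request("llama-3-8b", "Hello, Resultity!")
print(json.dumps(payload, indent=2))
```

To actually send the request, attach your credentials as a standard `Authorization: Bearer <key>` header and POST the JSON body to the gateway, just as you would with any OpenAI-compatible provider.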
Whether you're an individual contributor, an enterprise client, or a protocol integrator, this documentation is your portal into the decentralized AI layer.