LLM Inference in an instant.
Native execution environments run directly alongside the world's fastest chips, built specifically for LLM inference.
- Quickstart: Get started with the 8080 API
- Guides: Examples of how to use 8080
- API Reference: Build directly with the 8080 API
- Edge: Learn how to deploy to 8080 Edge