Powered by Hetzner Infrastructure

The Easiest Way to Run AI
on Hetzner GPUs

Don't manage bare-metal drivers. SUPA runs on Hetzner GEX44 servers but gives you a serverless API. Deploy LLMs in minutes, not days.

The Problem

Hetzner GPUs are powerful, but setup is painful

  • Hours spent configuring CUDA drivers and dependencies
  • Ongoing server maintenance and security updates
  • Managing model deployments, scaling, and failures
  • Waiting weeks for GPU availability

The SUPA Solution

Serverless AI on Hetzner infrastructure

  • No bare-metal driver management
  • No CUDA setup or GPU configuration
  • No server maintenance overhead
  • Instant API access to powerful models
  • Pay-per-use pricing
  • German data residency guaranteed

Infrastructure Specs

Enterprise-grade hardware, zero configuration

GPU

NVIDIA L4 (Hetzner GEX44)

Location

Falkenstein & Nuremberg, Germany

API

OpenAI-compatible

Latency

< 100ms TTFB

One API Call Away

Use the same OpenAI SDK you already know

curl https://api.supa.works/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -d '{
    "model": "supa:instant",
    "messages": [{"role": "user", "content": "Hello from Hetzner!"}]
  }'
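The same call from Python, sketched with only the standard library (the endpoint and model name come from the curl example above; the `SUPA_API_KEY` environment variable is an assumption for illustration):

```python
import json
import os
import urllib.request

API_URL = "https://api.supa.works/openai/v1/chat/completions"


def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build the same chat-completions request as the curl example."""
    payload = {
        "model": "supa:instant",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )


if __name__ == "__main__":
    # Needs a real key in SUPA_API_KEY (hypothetical variable name).
    key = os.environ.get("SUPA_API_KEY")
    if key:
        with urllib.request.urlopen(build_request(key, "Hello from Hetzner!")) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the API is OpenAI-compatible, the official `openai` SDK should also work by pointing its `base_url` at `https://api.supa.works/openai/v1` and passing your SUPA key as the API key.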

Start running AI on Hetzner today

Free sandbox tier. No credit card required. German data residency guaranteed.

Get Your API Key

Frequently Asked Questions

Everything you need to know about running AI on Hetzner with SUPA