Live hourly pricing
Auto-bills every hour while running. Stop the pod to pause; terminate to release the GPU.
| GPU | VRAM | Price / hr (USD) |
|---|---|---|
| RTX A4000 | 16 GB | $0.34 |
| RTX 4090 | 24 GB | $0.68 |
| L40S | 48 GB | $1.58 |
| A100 80 GB | 80 GB | $2.78 |
| H100 80 GB | 80 GB | $5.38 |
| H200 | 141 GB | $7.18 |
Prices shown are list prices. Live availability and any spot discounts are surfaced at request time on /infra/pods/new.
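The table above implies a simple cost model. Here is a minimal sketch of estimating a charge from the list prices, assuming billing rounds usage up to whole hours (one plausible reading of "auto-bills every hour"; verify the actual metering granularity). The function name and rounding rule are illustrative, not part of any documented API.

```python
import math

# List prices per GPU hour, copied from the pricing table above (USD).
LIST_PRICE_PER_HOUR = {
    "RTX A4000": 0.34,
    "RTX 4090": 0.68,
    "L40S": 1.58,
    "A100 80 GB": 2.78,
    "H100 80 GB": 5.38,
    "H200": 7.18,
}

def estimated_cost(gpu: str, minutes_running: float) -> float:
    """Estimate the charge for a pod that ran for `minutes_running`.

    Assumes usage is rounded up to whole billed hours -- an
    assumption for illustration, not a statement of the actual
    metering policy.
    """
    billed_hours = math.ceil(minutes_running / 60)
    return round(billed_hours * LIST_PRICE_PER_HOUR[gpu], 2)
```

For example, under this whole-hour assumption, 90 minutes on an RTX 4090 would bill as two hours at $0.68 each.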
Why teams choose Hypereal
SSH in 60 seconds
Provision a pod, copy the SSH command, and you are in. PyTorch, CUDA, and Jupyter come pre-installed on every image.
Persistent volumes survive Stop
Stop your pod to pause the meter without losing your weights, datasets, or virtual env. Resume anytime.
Auto-bill hourly, terminate to stop the meter
Billed per hour while running. Terminate to fully release the GPU and stop all charges. No minimum commitment.
How it works
Pick a GPU
Choose from A4000 up to H200 — live availability shown at request time.
Set image (PyTorch default)
Default PyTorch 2.x + CUDA image, or paste any public Docker image URL.
Get SSH host + key
Your private key is generated on creation. Copy the one-line SSH command and connect.
Stop or Terminate anytime
Stop to pause billing while keeping the disk. Terminate to fully release.
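The Stop/Terminate lifecycle above can be sketched as a toy state machine. This is a hypothetical illustration, not the Hypereal API: it only encodes the stated rules that the meter runs while the pod is running, Stop pauses billing but keeps the disk, and Terminate releases everything.

```python
class Pod:
    """Toy model of the pod lifecycle (illustrative, not a real API)."""

    def __init__(self, hourly_rate: float):
        self.hourly_rate = hourly_rate
        self.state = "RUNNING"
        self.disk_attached = True
        self.billed = 0.0

    def tick_hour(self):
        # One metered hour passes; charges accrue only while running.
        if self.state == "RUNNING":
            self.billed += self.hourly_rate

    def stop(self):
        # Pause the meter; weights and datasets stay on the volume.
        if self.state == "RUNNING":
            self.state = "STOPPED"

    def resume(self):
        # A stopped pod can be resumed anytime.
        if self.state == "STOPPED":
            self.state = "RUNNING"

    def terminate(self):
        # Fully release the GPU and disk; no further charges.
        self.state = "TERMINATED"
        self.disk_attached = False
```

Walking through it: run one hour, stop for one hour (no charge, disk kept), resume for one hour, then terminate and accrue nothing further.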

