# Quickstart

Sign up, install the CLI, and run your first GPU job in under five minutes.
## 1. Create an account

Go to account.yarn.prosodylabs.com.au and sign up. Every new account starts with a $5 free compute balance.
## 2. Install the CLI

```shell
pip install yarn-au
```
## 3. Authenticate

```shell
yarn auth login
```

This opens your browser for login. Once authenticated, your credentials are saved locally.
## 4. Check available GPUs

```shell
yarn gpus
```
This lists each GPU type with its hourly price and current availability. Platform GPUs (Perth, AU) are free during the beta.
| GPU | VRAM | $/hr |
|---|---|---|
| RTX 4090 | 24 GB | $1.50 |
| A100 40GB | 40 GB | $3.50 |
| A100 80GB | 80 GB | $5.00 |
| H100 | 80 GB | $8.00 |
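Billing is the hourly rate multiplied by your job's runtime. A back-of-envelope sketch in plain Python, with rates copied from the table above (the GPU slugs other than `rtx-4090`, and the per-minute proration, are illustrative assumptions, not documented billing rules):

```python
# Hourly rates from the pricing table above. "rtx-4090" matches the
# --gpu flag used later in this guide; the other slugs are assumed.
RATES = {"rtx-4090": 1.50, "a100-40gb": 3.50, "a100-80gb": 5.00, "h100": 8.00}

def estimate_cost(gpu: str, minutes: float) -> float:
    """Rough cost estimate: hourly rate prorated by runtime in minutes."""
    return round(RATES[gpu] * minutes / 60, 2)

print(estimate_cost("rtx-4090", 30))  # 30 minutes on an RTX 4090
print(estimate_cost("h100", 90))      # 1.5 hours on an H100
```

So a half-hour run on an RTX 4090 comes to $0.75, comfortably inside the free starting balance.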
## 5. Submit your first job

Create a file called `train.py`:
```python
import yarn.train as yt
import torch
import torch.nn as nn

@yt.model
def net():
    return nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

@yt.dataset(batch_size=64)
def mnist():
    from torchvision import datasets, transforms
    t = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
    return {"train": datasets.MNIST("./data", train=True, download=True, transform=t)}

@yt.job(epochs=5)
def train(model, data):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(5):
        for x, y in data["train"]:
            loss = nn.functional.cross_entropy(model(x.view(-1, 784).cuda()), y.cuda())
            opt.zero_grad()
            loss.backward()
            opt.step()
        yt.report(loss=loss.item(), epoch=epoch)

train()
```
Check if it fits before submitting:

```shell
yarn job submit train.py --dry-run
```
This prints parameter count, memory breakdown, GPU fit verdict, and estimated cost -- without actually submitting.
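The dry-run numbers are easy to sanity-check by hand. For the model above, a rough fp32 estimate in plain Python (the 4 bytes per parameter, and the roughly 4x multiplier covering gradients plus Adam's two moment buffers, are standard rules of thumb, not output of the yarn CLI):

```python
def linear_params(n_in, n_out):
    # weight matrix (n_in * n_out) plus bias vector (n_out)
    return n_in * n_out + n_out

# nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
n_params = linear_params(784, 256) + linear_params(256, 10)
print(n_params)  # 203530 parameters

# fp32 training with Adam: weights + grads + 2 optimizer moments = ~4x weights
approx_training_mib = n_params * 4 * 4 / 2**20
print(f"{approx_training_mib:.1f} MiB")
```

At roughly 3 MiB of training state (before activations and framework overhead), this model fits any GPU in the table with room to spare.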
Submit it:

```shell
yarn job submit train.py --gpu rtx-4090
```
Watch the logs:

```shell
yarn job logs <job-id> --follow
```
## What next?
- Training jobs -- submit code, get results, pay only for execution time
- Interactive sessions -- a live GPU endpoint you SSH or Ray-connect into
- Jupyter notebooks -- JupyterLab on GPU, in your browser
- CLI reference -- every `yarn` command
- Python SDK -- programmatic access from Python
- REST API -- direct HTTP access, OpenAI-compatible