Getting Started
Get up and running with Substrate in minutes. This guide walks you through installing the CLI, provisioning your first GPU instance, and running a training job.
Install the CLI
The Substrate CLI is a single binary with no dependencies. Install it with one command:
```shell
curl -fsSL https://get.onsubstrate.run | sh
```

After installation, verify it works by checking the version:
```shell
substrate --version
```

Authenticate
Log in to your Substrate account. This opens your browser for OAuth authentication and stores a token locally.
```shell
substrate auth login
```

Your credentials are stored securely in `~/.substrate/credentials` and automatically refreshed when they expire.
Provision your first instance
Create a GPU compute instance by specifying the resources you need. Substrate composes hardware to match your exact requirements — no predefined instance types.
```shell
substrate compute create --gpu-cores 4 --vram 24 --ram 64 --storage 100
```

This provisions an instance with 4 GPU cores, 24 GB VRAM, 64 GB system RAM, and 100 GB NVMe storage. The instance will be ready within seconds, and you will see output like:
```
Instance created successfully.
ID: inst_abc123
Status: provisioning
Cores: 4 GPU cores
VRAM: 24 GB
RAM: 64 GB
Storage: 100 GB
Region: us-east-1
Endpoint: inst_abc123.compute.onsubstrate.run
```

Connect to your instance
Once the instance status changes to running, connect via SSH:
```shell
substrate compute ssh inst_abc123
```

This establishes a secure SSH tunnel to your instance. Your local SSH keys are automatically configured during authentication.
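If you are automating this step, you can wait for the instance to leave `provisioning` before connecting. A sketch of that polling loop, with the status lookup injected as a function (a stand-in for shelling out to the CLI or calling the API, neither of which is shown here):

```python
import time
from typing import Callable

def wait_until_running(get_status: Callable[[str], str],
                       instance_id: str,
                       timeout: float = 60.0,
                       interval: float = 2.0) -> bool:
    """Poll get_status until it reports "running" or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status(instance_id) == "running":
            return True
        time.sleep(interval)
    return False
```

Injecting `get_status` keeps the loop testable; in practice it would wrap whatever status lookup your tooling uses.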
Run a training job
With your instance running, you can execute training jobs directly. Here is a minimal PyTorch example:
```python
import torch
import torch.nn as nn
import torch.optim as optim

# Verify GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Training on: {device}")

# Define a simple model
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
).to(device)

# Set up optimizer and loss function
optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Training loop (random tensors stand in for a real dataset)
for epoch in range(10):
    inputs = torch.randn(64, 784).to(device)
    targets = torch.randint(0, 10, (64,)).to(device)

    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()
    print(f"Epoch {epoch + 1}/10 — Loss: {loss.item():.4f}")

print("Training complete.")
```

Clean up
When you are done, delete the instance to stop billing. This permanently removes the instance and all associated storage.
```shell
substrate compute delete inst_abc123
```

To stop an instance without deleting its storage, use `substrate compute stop inst_abc123` instead.
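Because deletion also removes the instance's storage, copy off anything you still need first, for example with `scp` or your own tooling. As one stdlib-only sketch, bundling a checkpoint directory into a tarball before tearing the instance down (the `checkpoints/` path is a hypothetical example, not something the CLI creates):

```python
import tarfile
from pathlib import Path

def archive_checkpoints(src: Path, dest: Path) -> Path:
    """Bundle a checkpoint directory into a gzip tarball for safekeeping."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    with tarfile.open(dest, "w:gz") as tar:
        tar.add(src, arcname=src.name)  # keep paths relative to the directory name
    return dest
```

Once the archive is safely off the instance, the delete command above is irreversible but harmless.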
Next steps
Now that you have provisioned and connected to your first instance, explore the rest of the documentation:
- API Reference — Manage instances programmatically via the REST API
- CLI Reference — Full command reference for the Substrate CLI
- Terraform Provider — Infrastructure as code for Substrate resources
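As a taste of the programmatic route, the flags passed to `substrate compute create` earlier map naturally onto a JSON request body. The endpoint and field names below are illustrative assumptions, not the documented API; consult the API Reference for the real schema:

```python
import json

# Hypothetical endpoint and field names -- see the API Reference for the real ones.
API_URL = "https://api.onsubstrate.run/v1/compute"

def create_payload(gpu_cores: int, vram: int, ram: int, storage: int) -> str:
    """Build a JSON body mirroring the --gpu-cores/--vram/--ram/--storage flags."""
    return json.dumps({
        "gpu_cores": gpu_cores,
        "vram_gb": vram,
        "ram_gb": ram,
        "storage_gb": storage,
    })
```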