Quickstart
Set up your account, install the SDK, and launch your first GPU instance in minutes.
Prerequisites
- A valid email address
- Python 3.8+ (for the SDK), or any HTTP client for direct API calls
- Funds for GPU rental (USDT, Stripe, or YooKassa)
1. Create Your Account
Sign Up
Visit gpuniq.com and click Sign Up. Use email or Google Sign-In.
Verify Email
Enter the 6-digit verification code sent to your inbox.
Add Funds
Go to Balance in your dashboard and deposit via USDT (TRC20), Stripe, or YooKassa.
2. Get Your API Key
Go to LLM API Keys in your dashboard and create a new key. Your key starts with gpuniq_ — save it securely.
API keys work across all endpoints: marketplace, instances, volumes, LLM, and the OpenAI-compatible proxy at /v1/openai (for Claude Code, Cursor, LiteLLM).
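Whichever endpoint you call, the key travels with the request. A minimal sketch (assuming the key is sent as a standard Bearer token, and using the proxy path mentioned above as the example URL):

```python
# Sketch: attaching a GPUniq API key to a raw HTTP request.
# Assumption: the API expects "Authorization: Bearer <key>".
import urllib.request

API_KEY = "gpuniq_your_key"  # placeholder — use your real key

req = urllib.request.Request(
    "https://gpuniq.com/v1/openai/chat/completions",  # assumed proxy URL
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(req.get_header("Authorization"))  # Bearer gpuniq_your_key
```

The same header works from curl, LiteLLM, or any OpenAI-compatible client configured with `/v1/openai` as its base URL.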
3. Install the SDK
pip install GPUniq
4. Your First GPU Rental
Option A: Dex-Cloud (Simplest)
Pick a GPU type and deploy — the platform finds the best machine automatically.
from gpuniq import GPUniq

client = GPUniq(api_key="gpuniq_your_key")

# See available GPU types
gpus = client.gpu_cloud.list_instances()
for gpu in gpus["featured"]:
    print(f"{gpu['gpu_name']}: ${gpu['gpu_price_per_gpu_hour_usd']}/hr")

# Deploy an RTX 4090
deploy = client.gpu_cloud.deploy(
    gpu_name="RTX_4090",
    docker_image="pytorch/pytorch:latest",
    disk_gb=100,
)
print(f"Deploying... Job ID: {deploy['job_id']}")
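Deployment is asynchronous, so you typically wait for the instance to come up before connecting. A generic polling sketch — `get_status` here is a stand-in for whatever status call the SDK exposes, not a real SDK method:

```python
# Hypothetical polling helper: wait for a deployment to reach "running".
# `get_status` is any zero-argument callable returning the current status string.
import time

def wait_until_ready(get_status, timeout_s=600, interval_s=5):
    """Poll get_status() until it returns 'running' or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == "running":
            return True
        time.sleep(interval_s)
    return False
```

Pass it a closure over your deployment's job ID and the status endpoint you use, and branch on the boolean result.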
Option B: Marketplace (Full Control)
Browse all available servers and pick a specific machine.
# Browse GPUs with filters
gpus = client.marketplace.list(
    gpu_model=["RTX 4090"],
    min_vram_gb=24,
    sort_by="price-low",
)
print(f"Found {gpus['total_count']} GPUs")

# Rent a specific machine
agent_id = gpus["agents"][0]["id"]
order = client.marketplace.create_order(
    agent_id=agent_id,
    pricing_type="hour",
    docker_image="pytorch/pytorch:latest",
)
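Since marketplace rentals bill by the hour, it's worth a back-of-envelope cost check before ordering. Illustrative numbers only — pull the real rate from the listing you selected:

```python
# Rough cost estimate for a planned run (illustrative price, not a real quote).
price_per_hour = 0.40   # USD/hr — read the actual rate from your chosen listing
hours = 12              # planned training run length
print(f"Estimated cost: ${price_per_hour * hours:.2f}")  # Estimated cost: $4.80
```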
5. Connect to Your Instance
After deployment, find SSH credentials in your dashboard or via API:
instances = client.instances.list()
for inst in instances["instances"]:
    print(f"Task {inst['task_id']}: {inst['status']}")
ssh root@<host> -p <port>
# Password shown in dashboard
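If you connect often, a standard OpenSSH config entry saves retyping the port each time (fill in the host and port from your dashboard; the alias name is arbitrary):

```
Host gpuniq-box
    HostName <host>
    Port <port>
    User root
```

After adding this to `~/.ssh/config`, `ssh gpuniq-box` is all you need.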
6. Verify GPU Access
nvidia-smi
python -c "import torch; print(torch.cuda.is_available())"
7. Set Up Command Checkpointing (Optional)
Use gg to checkpoint your commands — if your instance restarts, replay them instantly:
# Initialize gg with your instance token (shown in dashboard)
gg init <your-gg-token>
# Run commands with checkpointing
gg run pip install torch transformers
gg run python train.py --epochs 100
# After a restart, replay unfinished commands
gg replay
See the CLI documentation for full details.
What's Next