# GPU Dex-Cloud
Deploy GPU instances by type — pick a GPU model, click deploy, and the platform finds the best available server automatically.
## Overview
GPU Dex-Cloud is the simplest way to deploy GPU instances. Tell GPUniq which GPU type you want and how many — the platform automatically finds the best available machine, provisions it, and returns a ready instance.
No need to browse individual servers. Pick a GPU type, set your configuration, and deploy.
**Best for:** Quick experiments, prototyping, and deployments where you don't need to pick a specific server.
## How It Works

1. Choose a GPU type (e.g., RTX 4090)
2. Set GPU count, disk size, and Docker image
3. GPUniq finds the best available server across all providers
4. Your instance is deployed and ready for SSH access
## List Available GPU Types

See which GPU types are available and their pricing.

```python
from gpuniq import GPUniq

client = GPUniq(api_key="gpuniq_your_key")

gpus = client.gpu_cloud.list_instances()
for gpu in gpus["featured"]:
    print(f"{gpu['gpu_name']}: {gpu['available_count']} available")
    print(f"  ${gpu['gpu_price_per_gpu_hour_usd']}/GPU/hr")
    print(f"  VRAM: {gpu['vram_gb']}GB, RAM: {gpu['ram_gb']}GB")
```

```bash
curl "https://api.gpuniq.com/v1/gpu-cloud/instances" \
  -H "X-API-Key: gpuniq_your_key"
```
### Search by GPU Name

```python
# Find 4090 instances
gpus = client.gpu_cloud.list_instances(search="4090")

# Filter by specs
gpus = client.gpu_cloud.list_instances(
    min_gpu_count=2,
    min_vram_gb=24,
    min_ram_gb=32,
)
```
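When several GPU types match your filters, you may still want to rank them client-side, for example by price. Here is a small helper (illustrative, not part of the SDK) that assumes the response fields shown above (`gpu_name`, `available_count`, `gpu_price_per_gpu_hour_usd`):

```python
def cheapest_available(featured):
    """Return the cheapest GPU entry that has at least one unit available."""
    candidates = [g for g in featured if g["available_count"] > 0]
    if not candidates:
        return None
    return min(candidates, key=lambda g: g["gpu_price_per_gpu_hour_usd"])

# Sample entries shaped like the list_instances() response above:
sample = [
    {"gpu_name": "RTX 4090", "available_count": 3, "gpu_price_per_gpu_hour_usd": 0.40},
    {"gpu_name": "A100", "available_count": 0, "gpu_price_per_gpu_hour_usd": 1.10},
    {"gpu_name": "RTX 3090", "available_count": 5, "gpu_price_per_gpu_hour_usd": 0.25},
]
print(cheapest_available(sample)["gpu_name"])  # RTX 3090
```

In real use you would pass `gpus["featured"]` from the call above instead of the sample list.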
### Filter Parameters

- `search`: Search by GPU name (e.g., `"4090"`, `"A100"`).
- `secure_cloud`: Only show secure cloud instances.
- `min_gpu_count`: Minimum GPU count.
- `min_vram_gb`: Minimum VRAM per GPU.
- `min_ram_gb`: Minimum system RAM.
## Check Pricing

Get detailed pricing for a specific GPU configuration before deploying.

```python
pricing = client.gpu_cloud.pricing(
    "RTX_4090",
    gpu_count=2,
    disk_gb=100,
    secure_cloud=False,
)
print(f"Total: ${pricing['total_price_per_hour']}/hr")
```
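The hourly rate can be projected to a budget before you commit. A quick sketch (the `total_price_per_hour` value comes from the pricing call above; the helper itself and the example rate are illustrative):

```python
def projected_cost(price_per_hour, hours_per_day, days):
    """Estimate total spend in USD for a given usage pattern."""
    return round(price_per_hour * hours_per_day * days, 2)

# e.g. a $0.80/hr configuration, used 8 hours a day for 30 days:
print(projected_cost(0.80, 8, 30))  # 192.0
```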
## Deploy

Deploy a GPU instance with a single call.

```python
deploy = client.gpu_cloud.deploy(
    gpu_name="RTX_4090",
    gpu_count=1,
    docker_image="pytorch/pytorch:latest",
    disk_gb=100,
    volume_id=9,  # optional: attach persistent storage
    secure_cloud=False,
)
print(f"Job ID: {deploy['job_id']}")
```

```bash
curl -X POST "https://api.gpuniq.com/v1/gpu-cloud/deploy" \
  -H "X-API-Key: gpuniq_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "gpu_name": "RTX_4090",
    "gpu_count": 1,
    "docker_image": "pytorch/pytorch:latest",
    "disk_gb": 100
  }'
```
### Deploy Parameters

- `gpu_name`: GPU type to deploy (e.g., `RTX_4090`, `A100`, `H100`).
- `gpu_count`: Number of GPUs (1-8).
- `docker_image`: Docker image to use.
- `disk_gb`: Disk size in GB (20-2048).
- `volume_id`: Persistent volume to attach (optional).
- `secure_cloud`: Deploy on secure cloud infrastructure.
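The documented ranges can be checked client-side before calling the API. A hypothetical pre-flight validator, based only on the limits listed above (the API performs its own validation regardless):

```python
def validate_deploy(gpu_count, disk_gb):
    """Raise ValueError if a deploy request is outside the documented limits."""
    if not 1 <= gpu_count <= 8:
        raise ValueError(f"gpu_count must be 1-8, got {gpu_count}")
    if not 20 <= disk_gb <= 2048:
        raise ValueError(f"disk_gb must be 20-2048, got {disk_gb}")

validate_deploy(gpu_count=1, disk_gb=100)  # passes silently
```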
## After Deployment

Once deployed, manage your instance through the Instances module:

```python
# Check your instances
instances = client.instances.list()

# Get SSH credentials
details = client.instances.get(task_id=456)

# Stop / start / delete
client.instances.stop(task_id=456)
client.instances.start(task_id=456)
client.instances.delete(task_id=456)
```
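A deploy returns a `job_id` rather than a ready machine, so you typically poll the Instances module until the instance is up. Below is a sketch of that wait loop with an injectable fetch function; the `"running"` status value and the fetch signature are assumptions for illustration, not documented API fields:

```python
import time

def wait_until_running(fetch_status, task_id, timeout_s=300, poll_s=5):
    """Poll fetch_status(task_id) until it reports 'running' or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if fetch_status(task_id) == "running":
            return True
        time.sleep(poll_s)
    return False

# In real use, fetch_status might wrap client.instances.get, e.g.:
# wait_until_running(lambda t: client.instances.get(task_id=t)["status"], 456)
```

Injecting the fetch function keeps the retry logic independent of the SDK, so it can be unit-tested with a fake.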
## Marketplace vs Dex-Cloud

| | Marketplace | Dex-Cloud |
|---|---|---|
| Control | Pick exact server | Platform picks for you |
| Speed | Browse, compare, then rent | One API call |
| Filters | 15+ filter options | GPU type + count |
| Best for | Specific hardware needs | Quick deployments |
Last updated Feb 22, 2026