GPU Products

Access thousands of GPUs across our distributed network. When nodes fail, your workloads don't.

H100

Available

NVIDIA’s flagship Hopper datacenter GPU, with industry-leading AI performance and HBM3 memory.

80GB HBM3 · 1979 TFLOPS FP8 · Hopper · 200+ nodes
Large LLM inference (70B+)
Training
High-throughput AI
$1.20/hr
Deploy H100

H200

Available

The successor to the H100 with 1.8x the memory: 141GB of HBM3e for the largest models without compromise.

141GB HBM3e · 1979 TFLOPS FP8 · Hopper · 100+ nodes
Massive LLMs (100B+)
Long-context inference
Training at scale
$2.49/hr
Deploy H200

A100

High Availability

Industry-standard datacenter GPU for AI workloads. Excellent for large model inference and training.

80GB HBM2e · 312 TFLOPS FP16 · Ampere · 600+ nodes
LLM inference (13B-70B)
Fine-tuning
Production AI
$0.80/hr
Deploy A100
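For comparing the three priced cards above, total job cost is simply rate × hours × GPU count. A minimal sketch (the `run_cost` helper and the 8-GPU/100-hour job are illustrative assumptions, not part of any VectorLay API; rates are the listed on-demand prices):

```python
# Hourly on-demand rates as listed above (USD/hr per GPU).
RATES = {"H100": 1.20, "H200": 2.49, "A100": 0.80}

def run_cost(gpu: str, hours: float, num_gpus: int = 1) -> float:
    """Total cost in USD for a job running `hours` on `num_gpus` GPUs."""
    return RATES[gpu] * hours * num_gpus

# Example: a 100-hour fine-tuning job on 8 GPUs.
for gpu in RATES:
    print(f"{gpu}: ${run_cost(gpu, hours=100, num_gpus=8):,.2f}")
```

At these rates, 8x A100 for 100 hours comes to $640, versus $960 for 8x H100; whether the H100's higher FP8 throughput offsets the premium depends on how well the workload uses it.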

RTX 4090

High Availability

The fastest consumer GPU for inference. Ada Lovelace architecture with exceptional FP16/INT8 throughput.

24GB GDDR6X · 83 TFLOPS FP32 · Ada Lovelace · 2,400+ nodes
LLM inference (7B-13B)
Stable Diffusion
Real-time AI

RTX 3090

High Availability

Proven workhorse for AI inference. Excellent price-to-performance with massive VRAM for its class.

24GB GDDR6X · 36 TFLOPS FP32 · Ampere · 4,800+ nodes
Batch inference
Fine-tuning
Development

RTX 4080

Available

The sweet spot of performance and efficiency. Great for latency-sensitive workloads.

16GB GDDR6X · 49 TFLOPS FP32 · Ada Lovelace · 1,200+ nodes
Real-time inference
Smaller models
Edge deployment

RTX 3080

Available

Cost-effective option for development and testing. Solid performance at the lowest price point.

10GB/12GB GDDR6X · 30 TFLOPS FP32 · Ampere · 1,600+ nodes
Development
Testing
Small models

RTX 4070 Ti

Available

Excellent efficiency for inference workloads. Lower power draw with strong performance.

12GB GDDR6X · 40 TFLOPS FP32 · Ada Lovelace · 800+ nodes
Efficient inference
Smaller models
High volume

Have GPUs? Join the network.

Contribute your GPUs to the VectorLay network and earn revenue. We handle routing, failover, and billing.