Accelerate your data science workflows with an NVIDIA-Powered Mobile Data Science Workstation. NVIDIA RTX GPUs provide up to 16 GB of ultra-fast local memory, speeding up deep learning, AI, and other compute-intensive workloads by up to 100x compared with CPU-based solutions on moderately sized datasets.


The NVIDIA Data Science Workstation comes with a Quadro RTX 5000 GPU, which packs 16 GB of VRAM, 3,072 CUDA Cores, and 384 Tensor Cores. It is pre-loaded with the NVIDIA Data Science Stack, which removes complex software setup and gets you started within minutes. The stack includes NVIDIA drivers, CUDA-X, and 200+ common GPU-optimized frameworks and SDKs used by data scientists, data engineers, and ML engineers -- among them GPU-optimized TensorFlow and PyTorch, NVIDIA's open-source RAPIDS (GPU-accelerated equivalents of pandas, scikit-learn, graph analytics libraries, and more), and XGBoost -- all hosted on the Ubuntu 20.04 operating system.
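As a sketch of how RAPIDS mirrors familiar CPU APIs, the pandas snippet below would run largely unchanged under cuDF by swapping the import; the column names and data are hypothetical, for illustration only.

```python
import pandas as pd  # on a RAPIDS system, `import cudf as pd` is a near drop-in swap

# Hypothetical sales data; cuDF supports the same DataFrame constructor.
df = pd.DataFrame({
    "region": ["east", "west", "east", "west"],
    "sales": [100, 250, 175, 300],
})

# Group-by aggregation -- same API in pandas and cuDF,
# but cuDF executes it on the GPU.
totals = df.groupby("region")["sales"].sum()
print(totals.to_dict())  # {'east': 275, 'west': 550}
```

The same swap applies to much of the pandas API surface, which is what lets existing ETL scripts move to the GPU with minimal rewrites.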


I use the Python programming language for data science tasks (PyTorch, TensorFlow, pandas, scikit-learn, NumPy, SciPy, Numba)
I use the RAPIDS GPU-accelerated Python libraries cuDF, cuML, and cuGraph with Dask
I would like to know more about speeding up ETL, AI, and ML with NVIDIA RAPIDS
How large is my database?
How large are my working datasets?
I would prefer to do GPU-accelerated data science tasks on:

My primary Data Science tasks are:

The GPUs I utilize for Data Science tasks are in:

I work on these use cases:

I already use NVIDIA Automatic Mixed Precision (AMP) training, leveraging the Tensor Cores in Volta and Turing (RTX) GPUs for faster training
I haven't used AMP; tell me more
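As a minimal sketch of AMP in PyTorch (the tiny model, batch sizes, and optimizer settings here are illustrative assumptions, not part of the workstation software): `torch.autocast` runs the forward pass in mixed precision, and `GradScaler` scales the loss so FP16 gradients don't underflow on CUDA devices.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical tiny regression model and batch, for illustration only.
model = nn.Linear(16, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.randn(32, 16, device=device)
y = torch.randn(32, 1, device=device)

# GradScaler only scales on CUDA; with enabled=False it is a pass-through.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for _ in range(3):
    optimizer.zero_grad()
    # autocast selects float16 on CUDA Tensor Cores, bfloat16 on CPU.
    with torch.autocast(device_type=device):
        loss = nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()   # scaled backward to avoid FP16 underflow
    scaler.step(optimizer)          # unscales gradients, then steps the optimizer
    scaler.update()
```

On Tensor Core GPUs these few lines are typically all that is needed to get mixed-precision speedups from an existing FP32 training loop.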

I would like to evaluate a system