Adding GPU compute support to Windows Subsystem for Linux (WSL) has been the #1 most requested feature since the first WSL release.
Learn how Windows and WSL 2 now support GPU Accelerated Machine Learning (GPU compute) using NVIDIA CUDA, including TensorFlow and PyTorch, as well as all the Docker and NVIDIA Container Toolkit support available in a native Linux environment.
Clark Rahig will explain what it means to use your GPU to accelerate training of Machine Learning (ML) models, introducing concepts like parallelism, and then show how to set up and run a full ML workflow (including GPU acceleration) with NVIDIA CUDA and TensorFlow in WSL 2.
Additionally, Clark will demonstrate how students and beginners can start building knowledge in the ML space on their existing hardware by using the TensorFlow with DirectML package.
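As a taste of the workflow covered in the session, here is a minimal sketch (not taken from the talk itself) that checks whether TensorFlow running inside WSL 2 can see the CUDA GPU, and runs a small op on it. It assumes the `tensorflow` package and the NVIDIA WSL driver are already installed; without a GPU it simply falls back to the CPU.

```python
# Minimal sketch: confirm GPU visibility in TensorFlow under WSL 2.
# Assumes TensorFlow 2.x is installed; falls back to CPU if no GPU is found.
import tensorflow as tf

# List every GPU that TensorFlow's CUDA backend has discovered.
gpus = tf.config.list_physical_devices('GPU')
print(f"GPUs visible to TensorFlow: {len(gpus)}")
for gpu in gpus:
    print(gpu.name)

# Place a small matrix multiply on the GPU if one is present,
# otherwise on the CPU, to exercise the selected device.
device = '/GPU:0' if gpus else '/CPU:0'
with tf.device(device):
    x = tf.random.uniform((1000, 1000))
    y = tf.matmul(x, x)
print("matmul result shape:", y.shape)
```

If the NVIDIA driver and CUDA support in WSL 2 are set up correctly, the GPU count printed here should be at least 1; a count of 0 usually means the driver or package installation needs attention.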
- Related Microsoft Windows Blog Posts: https://aka.ms/GPUinWSL
- GPU-Accelerated ML Training Docs: https://aka.ms/GPUinWSLdocs
- NVIDIA Docs: https://developer.nvidia.com/cuda/wsl
- DirectML repo (Get started, Samples, etc): https://aka.ms/DirectML
- Follow Clark Rahig on Twitter: @crahrig