Configuring CUDA on AWS for Deep Learning with GPUs
Objective: a no-frills tutorial showing you how to set up CUDA on AWS for Deep Learning with GPUs. Includes PyTorch configuration with CUDA 8.0.
Audience: anyone with basic command-line and AWS skills.
Note: you’ll need to request access to GPU instances on AWS before you can complete this tutorial.
Instance Setup
- On AWS, select Ubuntu Server 14.04 LTS (HVM), SSD Volume Type - ami-43391926 as your Amazon Machine Image (AMI).
- Choose p2.xlarge as your instance type.
- Configure storage, tags, and the security group however you like; the defaults are fine. The one exception is the security group: set the source to My IP so only you have access to the instance.
- Then launch the instance.
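If you prefer scripting to clicking through the console, the same launch can be done with the AWS CLI. This is just a sketch: the key pair name and security group ID below are placeholders for your own values, and the AMI ID is region-specific, so make sure it matches the region you launch in.
# Launch one p2.xlarge from the Ubuntu 14.04 AMI used in this tutorial.
# Replace my-key-pair and sg-xxxxxxxx with your own key pair and security group.
aws ec2 run-instances \
    --image-id ami-43391926 \
    --instance-type p2.xlarge \
    --count 1 \
    --key-name my-key-pair \
    --security-group-ids sg-xxxxxxxx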
CUDA Configuration
- SSH into your AWS instance.
- Type:
sudo apt-get update
- Type:
sudo apt-get install dkms
- Download CUDA by typing:
wget https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda_8.0.61_375.26_linux-run
- Run CUDA script:
sudo sh cuda_8.0.61_375.26_linux-run
- An agreement will follow. Use the spacebar to page to the end, then type accept.
- Type yes to install the NVIDIA Graphics Driver.
- Type yes to install the OpenGL libraries.
- Type yes to run nvidia-xconfig.
- Type yes to install the CUDA 8.0 Toolkit.
- Hit enter to accept the default Toolkit location.
- Type yes to install a symbolic link at /usr/local/cuda.
- Type no when asked to install the CUDA 8.0 Samples.
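Before moving on, it's worth checking that the driver and toolkit actually landed. The commands below are a quick smoke test; the PATH and LD_LIBRARY_PATH exports follow the installer's usual post-install advice and assume the default /usr/local/cuda location accepted above.
# Make the CUDA 8.0 toolkit visible to your shell.
echo -e '\nexport PATH=/usr/local/cuda/bin:$PATH' >> $HOME/.bashrc
echo -e 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> $HOME/.bashrc
source $HOME/.bashrc
# The driver should list the instance's Tesla K80.
nvidia-smi
# The compiler should report release 8.0.
nvcc --version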
PyTorch Configuration
- Download key libraries:
sudo apt-get install -y cmake zlib1g-dev libjpeg-dev xvfb libav-tools xorg-dev python-opengl libboost-all-dev libsdl2-dev swig libgl1-mesa-dev libglu1-mesa freeglut3 build-essential g++
- Download Miniconda:
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/anaconda.sh
- Install Miniconda:
bash ~/anaconda.sh -b -p $HOME/anaconda
- Update path:
echo -e '\nexport PATH=$HOME/anaconda/bin:$PATH' >> $HOME/.bashrc && source $HOME/.bashrc
- Install PyTorch w/CUDA 8.0 support:
conda install pytorch torchvision cuda80 -c soumith
- Check that CUDA is configured properly by opening a Python shell, importing torch, and typing:
torch.cuda.is_available()
The result should be True.
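For a slightly stronger check than is_available(), the snippet below actually allocates a tensor on the GPU and multiplies it, which is where a broken driver or toolkit setup usually surfaces. It's a sketch you can paste straight into your SSH session.
python - <<'EOF'
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())

# Move a tensor to the GPU and run a matrix multiply as a smoke test.
x = torch.randn(1000, 1000).cuda()
y = torch.mm(x, x)
print("Result on GPU:", y.is_cuda)
EOF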
That’s it. Happy Deep Learning!