Is running the Docker image on a fresh standard AMI [1] all it takes to get a working TensorFlow backed by the GPU? Is there nothing you need to install on the host OS?
[1] for example: Ubuntu 14.04 (HVM) public ami, ami-06116566
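For what it's worth, a rough sketch of what this looked like at the time, assuming an Ubuntu 14.04 host: the host still needs the NVIDIA kernel driver, because containers share the host kernel, and the GPU device nodes have to be passed through to the container (which is exactly what nvidia-docker automates). The driver package version and image tag below are illustrative, not taken from this thread:

```shell
# Host side: install the NVIDIA kernel driver (version is illustrative).
sudo apt-get update
sudo apt-get install -y nvidia-352

# Container side: expose the GPU device nodes explicitly.
# The image tag is a placeholder for whichever GPU-enabled
# TensorFlow image you are using.
docker run -it \
  --device /dev/nvidiactl \
  --device /dev/nvidia-uvm \
  --device /dev/nvidia0 \
  tensorflow-gpu-image
```

So the short answer is no: a stock AMI is not quite enough on its own, although an AMI with the driver pre-baked plus this `docker run` incantation gets you most of the way.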
This is a great guide for starting out, but how do I get TensorFlow on EC2 GPU instances in a more production-ready, reproducible way? Even the results of things like
Or, if you think they are too old, you can rebuild them yourself; it will still be easier than installing all the dependencies by hand.
There is also an easier way of downloading cuDNN v2 (there is no such thing as cuDNN v6.5, by the way): https://github.com/NVIDIA/nvidia-docker/blob/master/ubuntu-1...