https://alliseesolutions.wordpress.com/2016/09/08/install-gpu-tensorflow-from-sources-w-ubuntu-16-04-and-cuda-8-0/

Install Required Packages

sudo apt-get install openjdk-8-jdk git python-dev python3-dev python-numpy python3-numpy python-six python3-six build-essential python-pip python3-pip python-virtualenv swig python-wheel python3-wheel libcurl3-dev libcupti-dev

Update & Install Nvidia Drivers

You must also have the 367 (or later) NVIDIA driver installed. This can easily be done from Ubuntu’s built-in Additional Drivers tool once you have added the graphics-drivers PPA and updated your package lists:

$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt update
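
With the PPA added you can install the driver either from the Additional Drivers tool or directly with apt. As a rough sketch (the exact package name depends on which driver version you pick; nvidia-375 is only an example):

$ sudo apt install nvidia-375
$ nvidia-smi   # after a reboot, this should list your GPU and the driver version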

Once the driver is installed, restart your computer. If you have any trouble booting Linux or logging in afterwards, try disabling fast boot and secure boot in your BIOS and adding nomodeset to your GRUB boot options.
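
If you do need nomodeset, one way to add it (a sketch, assuming the stock GRUB configuration file) is:

$ sudo gedit /etc/default/grub
# change the line to: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
$ sudo update-grub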

Install NVIDIA CUDA Toolkit 8.0 & cuDNN

Skip if not installing with GPU support

To install the CUDA toolkit, download the base installer .run file from the NVIDIA website. If you run the installer interactively, MAKE SURE YOU SAY NO TO INSTALLING THE NVIDIA DRIVER, and say yes to creating the symbolic link to your CUDA directory. The --silent --toolkit flags used below install just the toolkit (no driver) non-interactively.

$ cd ~/Downloads
$ wget https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda_8.0.61_375.26_linux-run
$ sudo sh cuda_8.0.61_375.26_linux-run --override --silent --toolkit

This will install cuda into: /usr/local/cuda
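
You can confirm the toolkit installed correctly by asking its compiler for the version (the PATH and library variables are set later, so use the full path for now):

$ /usr/local/cuda/bin/nvcc --version
# should report "Cuda compilation tools, release 8.0"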

To install cuDNN, download cuDNN v5.1 for CUDA 8.0 from the NVIDIA website (this is the version TensorFlow r1.2 expects; see the configure step below) and extract it into /usr/local/cuda via:

$ tar -xzvf cudnn-8.0-linux-x64-v5.1.tgz
$ sudo cp cuda/include/cudnn.h /usr/local/cuda/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
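
To double-check which cuDNN version ended up in /usr/local/cuda, grep the version macros out of the header:

$ grep -A 2 "define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h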

Then update your ~/.bashrc:

$ gedit ~/.bashrc

This opens your .bashrc in a text editor. Scroll to the bottom and add these lines:

export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda

Once you save and close the text file you can return to your original terminal and type this command to reload your .bashrc file:

$ source ~/.bashrc
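
A quick sanity check that the variables are now set:

$ echo $CUDA_HOME
/usr/local/cuda
$ echo $LD_LIBRARY_PATH
# should include /usr/local/cuda/lib64 and /usr/local/cuda/extras/CUPTI/lib64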

Install Bazel

Instructions are also available on the Bazel website.

$ echo "deb [arch=amd64] 
http://storage.googleapis.com/bazel-apt
 stable jdk1.8" | sudo tee /etc/apt/
sources.list.d/bazel.list

$ curl 
https://bazel.build/bazel-release.pub.gpg
 | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install bazel
$ sudo apt-get upgrade bazel
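
Verify Bazel is on your PATH and check which version apt installed:

$ bazel version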

Option 2: Building from source

In case you prefer building from source, it's unfortunately not as easy as cloning the Git repository and typing make. Recent versions of Bazel can only be built with Bazel, unless you download a distribution archive, which contains some pre-generated files. With one such installation in place you could then build Bazel straight from the repository source, but that's probably not necessary.

So we will build from a distribution archive, which is reasonably straightforward:

  • Download a distribution package from the releases page. The current version at the time of writing was 0.5.3.

    $ mkdir bazel && cd bazel
    $ wget https://github.com/bazelbuild/bazel/releases/download/0.5.3/bazel-0.5.3-dist.zip
    
  • Unzip the sources. This being a zip file, the files are stored without a containing folder. Glad we already put it in its own directory...

    $ unzip bazel-0.5.3-dist.zip
    
  • Compile Bazel

    $ bash ./compile.sh
    
  • The output executable is now located in output/bazel. Add a PATH entry to your .bashrc, or just export it in your current shell:

    $ export PATH=`pwd`/output:$PATH
    

You should now be able to call the bazel executable from anywhere on your filesystem.
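
To make that PATH change permanent, append it to your .bashrc as well (a sketch, assuming you created the bazel directory in your home folder as above):

$ echo 'export PATH="$HOME/bazel/output:$PATH"' >> ~/.bashrc
$ source ~/.bashrc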

Clone TensorFlow

$ cd ~
$ git clone https://github.com/tensorflow/tensorflow

Unless you want the absolute bleeding edge, I highly recommend checking out the latest release branch rather than master.

$ cd ~/tensorflow
$ git checkout r1.2
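
If you want to see which other release branches are available before picking one, list them from inside the repository:

$ git branch -r | grep "origin/r"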

Configure TensorFlow Installation

$ cd ~/tensorflow
$ ./configure
Please specify the location of python. [Default is /usr/bin/python]: [enter]
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] n
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with GPU support? [y/N] y
GPU support will be enabled for TensorFlow
Please specify which gcc nvcc should use as the host compiler. [Default is /usr/bin/gcc]: [enter]
Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 8.0
Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: [enter]
Please specify the Cudnn version you want to use. [Leave empty to use system default]: 5
Please specify the location where cuDNN 5 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: [enter]
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 5.2,6.1 [see https://developer.nvidia.com/cuda-gpus]
Setting up Cuda include
Setting up Cuda lib64
Setting up Cuda bin
Setting up Cuda nvvm
Setting up CUPTI include
Setting up CUPTI lib64
Configuration finished

Use defaults by pressing enter for all except:

Please specify the location of python. [Default is /usr/bin/python]:
For Python 2 use the default, or if you wish to build for Python 3 enter:

$ /usr/bin/python3.5

Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages]:
For Python 2 use the default, or if you wish to build for Python 3 enter:

$ /usr/local/lib/python3.5/dist-packages

Unless you have a Radeon graphics card you can say no to OpenCL support. (Has anyone tested this? Ping me if so!)

Do you wish to build TensorFlow with CUDA support?

$ Y

You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus
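
If you'd rather query the card locally, the CUDA 8.0 toolkit ships a prebuilt deviceQuery binary (the demo_suite path below is where the installer normally puts it; adjust if yours differs):

$ /usr/local/cuda/extras/demo_suite/deviceQuery | grep "CUDA Capability"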

If all was done correctly you should see:

INFO: All external dependencies fetched successfully.
Configuration finished

Build TensorFlow

Warning: the build is resource intensive; I recommend having at least 8 GB of RAM.

If you want to build TensorFlow with GPU support enter:

$ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

# or, to also enable CPU optimizations (AVX, AVX2, FMA, SSE4.2) in the build:
$ bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

For CPU only enter:

$ bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
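
If the build exhausts your RAM (see the warning above), you can cap Bazel's resource usage with --local_resources, which takes available RAM in MB, CPU cores, and I/O; the numbers below are just an example:

$ bazel build --config=opt --config=cuda --local_resources 4096,2.0,1.0 //tensorflow/tools/pip_package:build_pip_package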

Build & Install Pip Package

This will build the pip wheel for TensorFlow and place it in /tmp/tensorflow_pkg:

$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
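
When it finishes you should see a .whl file in that folder:

$ ls /tmp/tensorflow_pkg/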

To Install Using Python 3 (remove sudo if using a virtualenv)

$ sudo pip3 install /tmp/tensorflow_pkg/tensorflow-*.whl

# the exact wheel filename varies with the version and platform; either use the glob above or tab-complete the name

For Python 2 (remove sudo if using a virtualenv)

$ sudo pip install /tmp/tensorflow_pkg/tensorflow-*.whl

# the exact wheel filename varies with the version and platform; either use the glob above or tab-complete the name
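
Finally, verify the installation from a directory outside the TensorFlow source tree (importing from inside ~/tensorflow will fail); for a GPU build the device list should include your GPU:

$ cd ~
$ python3 -c "import tensorflow as tf; print(tf.__version__)"
$ python3 -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
# use python instead of python3 if you built for Python 2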
