$ docker login nvcr.io
Username: $oauthtoken
Password: <Your Key>
$ docker pull nvcr.io/nvidia/tensorflow:18.12-py3
This is the latest version as of 2019-01-14.
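As a quick sanity check (not part of the original log), the pulled image can be listed locally with the standard Docker CLI:

$ docker images nvcr.io/nvidia/tensorflow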
atsushi@ai-chan:~$ docker run --runtime=nvidia --rm -ti nvcr.io/nvidia/tensorflow:18.12-py3 /bin/bash

================
== TensorFlow ==
================

NVIDIA Release 18.12 (build 879479)

Container image Copyright (c) 2018, NVIDIA CORPORATION.  All rights reserved.
Copyright 2017-2018 The TensorFlow Authors.  All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION.  All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying project or file.

NOTE: MOFED driver for multi-node communication was not detected.
      Multi-node communication performance may be reduced.

NOTE: The SHMEM allocation limit is set to the default of 64MB.  This may be
      insufficient for TensorFlow.  NVIDIA recommends the use of the following flags:
      nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 ...

root@8926a61e0715:/workspace# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
root@8926a61e0715:/workspace# python --version
Python 3.5.2
root@8926a61e0715:/workspace# nvidia-smi
Mon Jan 14 13:18:40 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.93       Driver Version: 410.93       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 2070    Off  | 00000000:01:00.0 Off |                  N/A |
| 19%   20C    P8    19W / 175W |      8MiB /  7952MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
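Incidentally, following the SHMEM NOTE above, the container can also be launched with the recommended flags. This is a sketch adapted from the NOTE, applied to `docker run --runtime=nvidia` instead of `nvidia-docker run`:

$ docker run --runtime=nvidia --rm -ti \
    --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
    nvcr.io/nvidia/tensorflow:18.12-py3 /bin/bash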
root@8926a61e0715:/workspace# pip3 install keras
root@8926a61e0715:/workspace# wget https://raw.githubusercontent.com/fchollet/keras/master/examples/mnist_cnn.py
root@8926a61e0715:/workspace# python3 mnist_cnn.py
Using TensorFlow backend.
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 308s 27us/step
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
Train on 60000 samples, validate on 10000 samples
Epoch 1/12
2019-01-14 13:28:38.105766: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:957] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-01-14 13:28:38.106272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce RTX 2070 major: 7 minor: 5 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
totalMemory: 7.77GiB freeMemory: 7.64GiB
2019-01-14 13:28:38.106290: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-01-14 13:28:38.657944: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-01-14 13:28:38.657973: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
2019-01-14 13:28:38.657995: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2019-01-14 13:28:38.658160: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7353 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0, compute capability: 7.5)
60000/60000 [==============================] - 7s 123us/step - loss: 0.2643 - acc: 0.9182 - val_loss: 0.0753 - val_acc: 0.9776
Epoch 2/12
60000/60000 [==============================] - 5s 90us/step - loss: 0.0857 - acc: 0.9748 - val_loss: 0.0405 - val_acc: 0.9875
Epoch 3/12
60000/60000 [==============================] - 5s 82us/step - loss: 0.0647 - acc: 0.9806 - val_loss: 0.0325 - val_acc: 0.9893
Epoch 4/12
60000/60000 [==============================] - 4s 74us/step - loss: 0.0518 - acc: 0.9843 - val_loss: 0.0293 - val_acc: 0.9903
Epoch 5/12
60000/60000 [==============================] - 5s 76us/step - loss: 0.0455 - acc: 0.9864 - val_loss: 0.0286 - val_acc: 0.9911
Epoch 6/12
60000/60000 [==============================] - 5s 76us/step - loss: 0.0407 - acc: 0.9873 - val_loss: 0.0298 - val_acc: 0.9901
Epoch 7/12
60000/60000 [==============================] - 5s 80us/step - loss: 0.0361 - acc: 0.9889 - val_loss: 0.0272 - val_acc: 0.9906
Epoch 8/12
60000/60000 [==============================] - 5s 90us/step - loss: 0.0333 - acc: 0.9897 - val_loss: 0.0273 - val_acc: 0.9918
Epoch 9/12
60000/60000 [==============================] - 5s 90us/step - loss: 0.0296 - acc: 0.9911 - val_loss: 0.0275 - val_acc: 0.9916
Epoch 10/12
60000/60000 [==============================] - 5s 90us/step - loss: 0.0295 - acc: 0.9913 - val_loss: 0.0269 - val_acc: 0.9911
Epoch 11/12
60000/60000 [==============================] - 5s 81us/step - loss: 0.0255 - acc: 0.9918 - val_loss: 0.0265 - val_acc: 0.9906
Epoch 12/12
60000/60000 [==============================] - 4s 73us/step - loss: 0.0245 - acc: 0.9920 - val_loss: 0.0245 - val_acc: 0.9920
Test loss: 0.02450078019701341
Test accuracy: 0.992
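The log above already shows the GPU being picked up, but it can also be checked directly. This is a minimal sketch assuming the TensorFlow 1.x API shipped in the 18.12 image, where tf.test.is_gpu_available exists:

# Prints True if TensorFlow inside the container can use a CUDA GPU
root@8926a61e0715:/workspace# python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available(cuda_only=True))"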
Fast!