
Merge pull request #349 from zhangboyang/master

Update nvidia-docker commands in README.md
commit d5171bcba9 by nagadomi, 2020-06-18 09:02:24 +09:00 (committed by GitHub)

@@ -243,17 +243,17 @@ You can check the performance of model with `models/my_model/noise1_scale2.0x_be
( Docker image is available at https://hub.docker.com/r/nagadomi/waifu2x )
-Requires `nvidia-docker`.
+Requires [nvidia-docker](https://github.com/NVIDIA/nvidia-docker).
```
docker build -t waifu2x .
-nvidia-docker run -p 8812:8812 waifu2x th web.lua
-nvidia-docker run -v `pwd`/images:/images waifu2x th waifu2x.lua -force_cudnn 1 -m scale -scale 2 -i /images/miku_small.png -o /images/output.png
+docker run --gpus all -p 8812:8812 waifu2x th web.lua
+docker run --gpus all -v `pwd`/images:/images waifu2x th waifu2x.lua -force_cudnn 1 -m scale -scale 2 -i /images/miku_small.png -o /images/output.png
```
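If `docker run --gpus all` fails, the NVIDIA container runtime is probably not set up. As a quick sanity check (a sketch, not part of this change; the `nvidia/cuda` image tag is only an example), the container should be able to see the host GPU:
```
# Sanity check (example image tag): this should print the host's GPU table.
docker run --gpus all nvidia/cuda:11.0-base nvidia-smi
```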
Note that running waifu2x without [JIT caching](https://devblogs.nvidia.com/parallelforall/cuda-pro-tip-understand-fat-binaries-jit-caching/) is very slow, which is what happens if you use Docker.
As a workaround, you can mount a host volume at the `CUDA_CACHE_PATH`, for instance:
```
-nvidia-docker run -v $PWD/ComputeCache:/root/.nv/ComputeCache waifu2x th waifu2x.lua --help
+docker run --gpus all -v $PWD/ComputeCache:/root/.nv/ComputeCache waifu2x th waifu2x.lua --help
```
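As an illustration (a sketch combining the commands above, not part of this change), the web UI can be started with the JIT cache persisted on the host so compiled kernels survive container restarts:
```
# Sketch: run the waifu2x web UI with a host-mounted CUDA JIT cache.
docker run --gpus all \
  -p 8812:8812 \
  -v $PWD/ComputeCache:/root/.nv/ComputeCache \
  waifu2x th web.lua
```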