Merge pull request #349 from zhangboyang/master
Update nvidia-docker commands in README.md
commit d5171bcba9

@@ -243,17 +243,17 @@ You can check the performance of model with `models/my_model/noise1_scale2.0x_be

( Docker image is available at https://hub.docker.com/r/nagadomi/waifu2x )

-Requires `nvidia-docker`.
+Requires [nvidia-docker](https://github.com/NVIDIA/nvidia-docker).

```
docker build -t waifu2x .
-nvidia-docker run -p 8812:8812 waifu2x th web.lua
-nvidia-docker run -v `pwd`/images:/images waifu2x th waifu2x.lua -force_cudnn 1 -m scale -scale 2 -i /images/miku_small.png -o /images/output.png
+docker run --gpus all -p 8812:8812 waifu2x th web.lua
+docker run --gpus all -v `pwd`/images:/images waifu2x th waifu2x.lua -force_cudnn 1 -m scale -scale 2 -i /images/miku_small.png -o /images/output.png
```
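
The change above replaces the deprecated `nvidia-docker` wrapper with Docker's native `--gpus` flag. As a quick sanity check (a sketch, not part of the README; it assumes the NVIDIA container runtime injects the driver utilities into the container), you can confirm the container actually sees the GPU before running waifu2x:

```
# Hypothetical check: list visible GPUs inside the container.
# Assumes nvidia-smi is made available by the NVIDIA container runtime.
docker run --gpus all waifu2x nvidia-smi
```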

Note that running waifu2x without [JIT caching](https://devblogs.nvidia.com/parallelforall/cuda-pro-tip-understand-fat-binaries-jit-caching/) is very slow, which is what happens if you use docker.
As a workaround, you can mount a host volume at the `CUDA_CACHE_PATH`, for instance:

```
-nvidia-docker run -v $PWD/ComputeCache:/root/.nv/ComputeCache waifu2x th waifu2x.lua --help
+docker run --gpus all -v $PWD/ComputeCache:/root/.nv/ComputeCache waifu2x th waifu2x.lua --help
```
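
Putting the two pieces together, here is a hedged sketch (not from the README) that keeps the CUDA JIT cache on the host across container runs while performing the 2x upscale shown earlier; the `ComputeCache` directory name follows the example above:

```
# Create the host-side cache directory once.
mkdir -p ComputeCache
# Reuse the JIT cache and process an image in one invocation.
docker run --gpus all \
  -v $PWD/ComputeCache:/root/.nv/ComputeCache \
  -v $PWD/images:/images \
  waifu2x th waifu2x.lua -force_cudnn 1 -m scale -scale 2 -i /images/miku_small.png -o /images/output.png
```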