Commit graph

383 commits

Author SHA1 Message Date
nagadomi eab65a1f20 Update assets; remove photo option 2016-06-13 12:44:06 +09:00
nagadomi 472215d7ee Add support for TTA mode on web.lua 2016-06-13 11:05:09 +09:00
nagadomi 6cca2f8488 Add -force_cudnn option to tools/benchmark.lua 2016-06-12 22:03:19 +09:00
nagadomi 67d36a1220 benchmark time 2016-06-12 21:56:49 +09:00
nagadomi 25e293202a Add support for tta_level=1; Add support for TTA to web.lua 2016-06-12 16:55:05 +09:00
nagadomi af74a67bd1 Add -force_cudnn option; support for cuDNN in waifu2x.lua/web.lua 2016-06-12 16:33:50 +09:00
nagadomi 599da6a665 refactor 2016-06-12 15:56:44 +09:00
nagadomi 6be1479710 Fix a block_size issue when using upconv_7 model 2016-06-12 05:15:24 +09:00
nagadomi 9103d393fe Fix a performance issue 2016-06-12 05:13:40 +09:00
nagadomi 83188c5ab7 Add support for method=noise_scale to tools/benchmark.lua 2016-06-12 05:12:31 +09:00
nagadomi c16d0a07a2 Add -crop_size and -batch_size options to tools/benchmark.lua. Fix a bug in TTA mode. 2016-06-10 09:20:43 +09:00
nagadomi b8ff8c6787 Remove -gamma_correction option 2016-06-10 07:37:39 +09:00
nagadomi 01b2e6d441 Remove -upsampling_filter option 2016-06-10 07:34:11 +09:00
nagadomi afac4b52ab Add -batch_size option to waifu2x.lua/web.lua 2016-06-09 14:03:18 +09:00
nagadomi 0b949c05a7 Add support for TTA level 2016-06-09 13:09:28 +09:00
nagadomi 9514027f65 Fix a bug where -nr_rate was not used 2016-06-09 02:44:22 +09:00
nagadomi e5cfd3dfce Add -resume option 2016-06-09 02:39:52 +09:00
nagadomi 37bc7a5eea Add support for new noise_scale method 2016-06-08 09:32:27 +09:00
nagadomi 6c758ec5c0 Correct messages 2016-06-08 07:52:38 +09:00
nagadomi 51914b894a change weight initialization and upconv_7 2016-06-08 06:58:46 +09:00
nagadomi 307ae40883 Add noise_scale training 2016-06-08 06:39:36 +09:00
nagadomi 3b09bff8cf randomly swap fg/bg color in dots/gen 2016-06-08 06:38:26 +09:00
nagadomi d0630d3a20 individual filters and box-only support 2016-06-06 14:04:13 +09:00
nagadomi 5e222a3981 Add -tta and -resize_blur option to benchmark 2016-06-02 10:15:54 +09:00
nagadomi abae4cb855 Fix processing time 2016-06-02 10:13:07 +09:00
nagadomi 0349fc774c refactor 2016-06-02 10:12:04 +09:00
nagadomi c89fd7249a Add learning_rate_decay 2016-06-02 10:11:15 +09:00
nagadomi 70eb2b508f Fix a performance problem in resampling 2016-05-30 19:15:54 +09:00
nagadomi 70a2849e39 Fix missing file 2016-05-30 06:48:26 +09:00
nagadomi 634046d5f0 Fix training mode 2016-05-29 05:50:53 +09:00
nagadomi b96bc5d453 Use correct criterion 2016-05-28 10:56:15 +09:00
nagadomi 99e6dd1a57 Fix border removal 2016-05-28 10:25:08 +09:00
nagadomi 8a65db7bab Change the evaluation metric 2016-05-27 16:57:14 +09:00
nagadomi 8088460a20 Add oracle_rate option 2016-05-27 16:54:29 +09:00
nagadomi 8fec6f1b5a Change the learning rate decay rate 2016-05-21 09:56:26 +09:00
nagadomi 7814691cbf Add resize_blur parameter
the latest GraphicsMagick is required
2016-05-21 09:54:12 +09:00
nagadomi 145b47dbf5 Add use_transparent_png option 2016-05-19 23:02:02 +09:00
nagadomi db68eb208e Add caffe.prototxt example 2016-05-15 16:50:47 +09:00
nagadomi f6a37b66c3 Add support for Transparent PNG in convert_data.lua 2016-05-15 12:34:03 +09:00
nagadomi 8d3950b90a Change the default parameter (epoch, downsampling_filters) 2016-05-15 11:33:34 +09:00
nagadomi c028ce6e4f Fix a bug in reconstruct.scale() when inner_scale > 1 and a Y-only model is used 2016-05-15 11:31:14 +09:00
nagadomi a210090033 Convert model files; Add new pretrained model
- Add new pretrained model to ./models/upconv_7
- Move old models to ./models/vgg_7
- Use nn.LeakyReLU instead of w2nn.LeakyReLU
- Add useful attributes to .json

New JSON attributes:
The first layer has a `model_config` attribute.
It contains:
  model_arch: architecture name of the model; see `lib/srcnn.lua`
  scale_factor: if scale_factor > 1, model:forward() scales the image resolution by scale_factor.
  channels: number of input/output channels. If channels == 3, the model is an RGB model.
  offset: number of pixels removed from the output size.
          for example:
            (scale_factor=1, offset=7, input=100x100) => output=(100-7)x(100-7)
            (scale_factor=2, offset=12, input=100x100) => output=(100*2-12)x(100*2-12)
Each layer also has a `class_name` attribute.
2016-05-15 03:04:08 +09:00
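
A minimal sketch (not part of the repository) of reading the `model_config` attribute described above and computing the expected output size. The lua-cjson library, the file path, and the assumption that the .json file decodes to an array of layer tables are all illustrative.

-- Minimal sketch, assuming the .json decodes to an array of layers
-- whose first element carries `model_config` (see the commit message above).
local cjson = require("cjson")

local function load_model_config(json_path)
   local f = assert(io.open(json_path, "r"))
   local layers = cjson.decode(f:read("*a"))
   f:close()
   return layers[1].model_config
end

-- Per the examples above: output = input * scale_factor - offset
local function output_size(config, w, h)
   return w * config.scale_factor - config.offset,
          h * config.scale_factor - config.offset
end

-- Hypothetical path; actual model filenames may differ.
local config = load_model_config("models/upconv_7/scale2.0x_model.json")
print(config.model_arch, output_size(config, 100, 100))
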
nagadomi 48411a4dde refactor 2016-05-14 16:51:36 +09:00
nagadomi 51ae485cd1 Add new models
upconv_7 is 2.3x faster than the previous model
2016-05-13 09:49:53 +09:00
nagadomi e62305377f Add compression.size() 2016-05-13 09:35:53 +09:00
nagadomi b8088ca209 Remove the limit of learning_rate_decay 2016-04-30 13:48:24 +09:00
nagadomi 8da52d5fb9 Merge from master 2016-04-23 12:48:24 +09:00
nagadomi 1464b0db3e Improve output file format
supported format variables:
  %s: basename of the source filename
  %d: sequence number

example:
   output/2x/%s.png
   output/%d.png
   output/%06d_%s.png
   output/%s_%d.png
2016-04-23 12:44:40 +09:00
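
A minimal sketch (not the repository's implementation) of how these format variables could be expanded for a single input file; the function and variable names here are hypothetical.

-- Expand %s to the source basename and pass numeric specifiers
-- such as %d or %06d through string.format.
local function expand_output_name(format, src_path, seq)
   local basename = src_path:match("([^/]+)%.[^.]+$") or src_path
   local with_name = format:gsub("%%s", basename)
   return string.format(with_name, seq)
end

print(expand_output_name("output/2x/%s.png", "images/cat.jpg", 1))
-- -> output/2x/cat.png
print(expand_output_name("output/%06d_%s.png", "images/cat.jpg", 7))
-- -> output/000007_cat.png
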
nagadomi 7af5c9443d Add model option and 12 layers net 2016-04-23 09:19:03 +09:00
nagadomi da03209d3e Remove PNG compression option 2016-04-20 18:53:31 +09:00