Commit graph

12 commits

Author SHA1 Message Date
nagadomi 29260ede24 fix batchwise psnr 2017-02-12 02:04:23 +09:00
nagadomi 451ee1407f Stop calculating the instance loss when oracle_rate=0 2016-09-11 20:59:32 +09:00
nagadomi c89fd7249a Add learning_rate_decay 2016-06-02 10:11:15 +09:00
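
The commit above adds learning rate decay. The log does not show the exact schedule the repository uses, so the sketch below illustrates one common form, inverse-time decay; `base_lr` and `decay` are hypothetical parameters, not values from the repo.

```lua
-- A minimal sketch of inverse-time learning rate decay (an assumption:
-- the repository's actual schedule is not shown in this log).
local function decayed_lr(base_lr, decay, step)
  return base_lr / (1 + decay * step)
end

-- The rate shrinks smoothly as training progresses.
for step = 0, 3 do
  print(step, decayed_lr(0.1, 0.5, step)) -- 0.1, 0.0667, 0.05, 0.04
end
```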
nagadomi 70a2849e39 Fix missing file 2016-05-30 06:48:26 +09:00
nagadomi 68a6d4cef5 Use MSE instead of PSNR
PSNR depends on the minibatch size and on how the examples are grouped into batches.
2016-04-17 02:08:38 +09:00
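
The batch dependence noted in this message follows from PSNR being a nonlinear (logarithmic) function of MSE: averaging per-batch PSNR values gives a different number than computing PSNR from the pooled MSE, whereas MSE itself averages consistently. A minimal plain-Lua illustration with made-up squared-error values (pixel range assumed to be [0, 1], so MAX = 1):

```lua
-- Hypothetical per-example squared errors, split into two minibatches.
local b1 = {0.01, 0.04}
local b2 = {0.09, 0.16}

local function mean(t)
  local s = 0
  for _, v in ipairs(t) do s = s + v end
  return s / #t
end

local function psnr(mse)
  return 10 * math.log(1.0 / mse) / math.log(10) -- MAX = 1
end

-- Averaging per-batch PSNR:
local psnr_avg = (psnr(mean(b1)) + psnr(mean(b2))) / 2    -- ~12.53 dB
-- PSNR of the MSE pooled over all four examples:
local psnr_pooled = psnr(mean({0.01, 0.04, 0.09, 0.16}))  -- ~11.25 dB
print(psnr_avg, psnr_pooled) -- differ: PSNR is nonlinear in MSE
-- MSE itself is grouping-independent (for equal-size batches):
print((mean(b1) + mean(b2)) / 2, mean({0.01, 0.04, 0.09, 0.16})) -- both 0.075
```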
nagadomi a938cd5994 Reduce draw calls 2016-04-10 23:30:23 +09:00
nagadomi 1900ac7500 Use PSNR for evaluation 2016-03-12 06:53:42 +09:00
nagadomi aaac6ed6e5 Refactor training loop
shuffles the training data more thoroughly
2015-11-30 17:18:52 +09:00
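
Shuffling the training data each epoch is typically done with an in-place Fisher-Yates shuffle; the sketch below is a generic plain-Lua version, not code taken from this repository.

```lua
-- Generic in-place Fisher-Yates shuffle (a sketch, not this repository's
-- code): every permutation of t is equally likely.
local function shuffle(t)
  for i = #t, 2, -1 do
    local j = math.random(i) -- uniform in [1, i]
    t[i], t[j] = t[j], t[i]
  end
end

math.randomseed(os.time())
local samples = {1, 2, 3, 4, 5}
shuffle(samples) -- e.g. {3, 1, 5, 2, 4}
```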
nagadomi b35a9ae7d7 tuning 2015-11-03 06:10:44 +09:00
nagadomi 490eb33a6b Minimize the weighted huber loss instead of the weighted mean square error
Huber loss is less sensitive to outliers (i.e., noise) in the data than the squared error loss.
2015-10-31 22:05:59 +09:00
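
For reference, the Huber loss is quadratic for small residuals and linear beyond a threshold delta, which is what damps the influence of outliers. Below is a minimal plain-Lua sketch of a weighted version; the weights and delta are hypothetical, and this is not the repository's implementation.

```lua
-- A sketch of a weighted Huber loss (hypothetical weights and delta;
-- not the repository's code).
local function weighted_huber(pred, target, weight, delta)
  local loss = 0
  for i = 1, #pred do
    local r = math.abs(pred[i] - target[i])
    if r <= delta then
      loss = loss + weight[i] * 0.5 * r * r               -- quadratic near zero
    else
      loss = loss + weight[i] * delta * (r - 0.5 * delta) -- linear tail damps outliers
    end
  end
  return loss / #pred
end

-- A large residual (outlier) grows the loss linearly, not quadratically:
print(weighted_huber({0.0, 10.0}, {0.1, 0.0}, {1.0, 1.0}, 1.0)) -- ~4.75 vs ~25 under MSE
```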
nagadomi 8dea362bed sync from internal repo
- Memory compression with snappy (lua-csnappy)
- Use RGB-wise weighted MSE (R*0.299, G*0.587, B*0.114) instead of plain MSE
- Aggressive cropping for edge regions
and some other changes.
2015-10-26 09:23:52 +09:00
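
The weights in the second item are the BT.601 luma coefficients, so errors in the green channel, to which the eye is most sensitive, are penalized the most. A minimal sketch of such a loss (an illustration, not the repository's actual code):

```lua
-- RGB-wise weighted MSE using the BT.601 luma coefficients from the
-- commit message (a sketch; not the repository's implementation).
local W = {0.299, 0.587, 0.114} -- weights for R, G, B

local function weighted_mse(pred, target)
  -- pred and target are arrays of {r, g, b} pixels in [0, 1]
  local sum, n = 0, 0
  for i = 1, #pred do
    for c = 1, 3 do
      local d = pred[i][c] - target[i][c]
      sum = sum + W[c] * d * d
      n = n + 1
    end
  end
  return sum / n
end

-- The same per-channel error costs more in G than in B:
print(weighted_mse({{0, 0.1, 0}}, {{0, 0, 0}})) -- 0.587 * 0.01 / 3
print(weighted_mse({{0, 0, 0.1}}, {{0, 0, 0}})) -- 0.114 * 0.01 / 3
```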
nagadomi 2231423056 update training script 2015-05-17 14:43:07 +09:00
Renamed from lib/minibatch_sgd.lua