What to do when loss becomes NaN in Chainer's RNN

One possible cause of NaN and inf is passing an x with a very large absolute value into softmax cross entropy.
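To see why, here is a tiny NumPy sketch (not Chainer's actual implementation, just an illustration) of a naive softmax cross entropy: exp() overflows to inf for large inputs, and inf divided by inf becomes nan.

import numpy as np

def naive_softmax_cross_entropy(x, t):
    # exp() overflows to inf when x is large; inf / inf then becomes nan
    e = np.exp(x)
    p = e / e.sum()
    return -np.log(p[t])

print(naive_softmax_cross_entropy(np.array([1.0, 2.0, 3.0]), 2))           # finite loss
print(naive_softmax_cross_entropy(np.array([1000.0, 2000.0, 3000.0]), 2))  # nan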

They say normalizing the hidden (intermediate) layers is a good idea.

https://groups.google.com/forum/#!topic/chainer/Ks0KpYjf6pU
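For reference, a minimal sketch of what normalizing a hidden layer could look like in Chainer; the model, layer sizes, and the choice of BatchNormalization below are my own assumptions for illustration, not code from that thread.

import chainer
import chainer.links as L

class RNNWithNorm(chainer.Chain):
    def __init__(self, n_in=256, n_hidden=128, n_out=10):
        super().__init__()
        with self.init_scope():
            self.rnn = L.LSTM(n_in, n_hidden)
            self.bn = L.BatchNormalization(n_hidden)  # normalize hidden activations
            self.out = L.Linear(n_hidden, n_out)

    def __call__(self, x):
        h = self.rnn(x)
        h = self.bn(h)  # keep hidden values in a reasonable range
        return self.out(h)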

It didn't work.

I've heard that sometimes a lower learning rate can fix it.
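A hedged example of how that would be done in Chainer; the optimizer, the placeholder model, and the values are assumptions, not the original training script.

import chainer
import chainer.links as L

model = L.Classifier(L.Linear(256, 10))          # placeholder model for illustration
optimizer = chainer.optimizers.Adam(alpha=1e-4)  # Adam's default alpha is 1e-3; lower it
optimizer.setup(model)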

Didn't work for me.

http://ai-kenkyujo.com/2017/07/07/chainer/


NaN solved!

If there are zeros in the data, log10(0) produces the divide-by-zero warning shown in the log below and returns -inf.

That -inf then propagates through the network and turns the loss into NaN.

Deleting all of the files containing zeros solved the problem!

919d3c0e_nohash_0.wav loaded
43fc47a7_nohash_2.wav loaded
f47d644e_nohash_0.wav loaded
39543cfd_nohash_0.wav loaded
98447c43_nohash_1.wav loaded
train_snack.py:77: RuntimeWarning: divide by zero encountered in log10
P[m, :] = np.log10(np.absolute(X[m, :N/2])) # convert to logarithmic power spectrum (256 points)
9db2bfe9_nohash_0.wav loaded
f0edc767_nohash_0.wav loaded
02746d24_nohash_0.wav loaded
531a5b8a_nohash_1.wav loaded
cd7f8c1b_nohash_0.wav loaded
d98f6043_nohash_0.wav loaded
32561e9e_nohash_0.wav loaded
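Deleting the silent files worked here, but an alternative (assuming a spectrogram routine like the one in train_snack.py above; the function below is hypothetical) is to clamp the magnitude away from zero before the log, so log10 never sees 0:

import numpy as np

def log_power_spectrum(frame, eps=1e-10):
    # Log magnitude spectrum that stays finite even for an all-zero (silent) frame
    spectrum = np.fft.fft(frame)
    magnitude = np.absolute(spectrum[:len(frame) // 2])
    return np.log10(np.maximum(magnitude, eps))  # floor avoids log10(0) -> -inf

print(log_power_spectrum(np.zeros(512)))  # every value is -10.0 instead of -inf / nan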