
assertion error #2

Open
ghost opened this issue Sep 19, 2019 · 8 comments
@ghost

ghost commented Sep 19, 2019

Training stops at the following point, both locally (my computer) and remotely (Google Colab):

/content/WGanSing/data_pipeline.py in data_gen(mode, sec_mode)
    128         feats_targs = np.array(feats_targs)
    129 
--> 130         assert feats_targs.max()<=1.0 and feats_targs.min()>=0.0
    131 
    132         yield feats_targs, targets_f0_1, np.array(pho_targs), np.array(targets_singers)

AssertionError: 

When I comment out the assert line, training starts, but I want to be sure this is not actually a problem.
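Rather than commenting the assert out entirely, one option is to replace it with a check that reports how far out of range the values are, which makes the failure easier to diagnose. This is a sketch; `check_range` is a hypothetical helper, not part of the repository:

```python
import numpy as np

def check_range(feats_targs):
    """Fail with a diagnostic message if features leave [0, 1]."""
    lo, hi = feats_targs.min(), feats_targs.max()
    if lo < 0.0 or hi > 1.0:
        bad = int(np.sum((feats_targs < 0.0) | (feats_targs > 1.0)))
        raise AssertionError(
            f"features outside [0, 1]: min={lo}, max={hi}, "
            f"{bad} of {feats_targs.size} values out of range"
        )

check_range(np.array([0.0, 0.5, 1.0]))  # in range: no exception
```

Calling it on out-of-range data raises with the min/max and the count of offending values, which points at which normalization statistics are off.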

@pc2752
Contributor

pc2752 commented Sep 19, 2019

Hi, this might be due to an error in normalisation. Could you try running the get_stats() function in data_pipeline.py and see if you still get the assertion error? Let me know if this problem persists.

@ghost
Author

ghost commented Sep 20, 2019

I needed to comment out the second line in the get_stats() function, which is unused and raises an error because config.backing_dir is not found:

back_list = [x for x in os.listdir(config.backing_dir) if x.endswith('.hdf5') and not x.startswith('._') and not x.startswith('mir') and not x.startswith('med')]

After that, the function runs fine.

I restarted training (locally) and got the same assertion error as before.

@ghost
Author

ghost commented Sep 20, 2019

Please note that when I run prep_data_nus.py before training, I get some warnings during the process:

Processing singer ADIZ
G:\AI\sing\utils.py:267: RuntimeWarning: divide by zero encountered in log2
  y=69+12*np.log2(f0/440)

I get 96 files with the following sizes (in bytes):

 8.170.864 - nus_ADIZ_read_01.hdf5
15.557.744 - nus_ADIZ_read_09.hdf5
34.567.824 - nus_ADIZ_read_13.hdf5
45.938.144 - nus_ADIZ_read_18.hdf5
25.436.304 - nus_ADIZ_sing_01.hdf5
54.844.624 - nus_ADIZ_sing_09.hdf5
53.756.544 - nus_ADIZ_sing_13.hdf5
80.155.824 - nus_ADIZ_sing_18.hdf5
24.383.024 - nus_JLEE_read_05.hdf5
20.998.144 - nus_JLEE_read_08.hdf5
28.484.784 - nus_JLEE_read_11.hdf5
28.380.384 - nus_JLEE_read_15.hdf5
74.274.624 - nus_JLEE_sing_05.hdf5
55.085.904 - nus_JLEE_sing_08.hdf5
66.161.584 - nus_JLEE_sing_11.hdf5
69.924.624 - nus_JLEE_sing_15.hdf5
25.721.664 - nus_JTAN_read_07.hdf5
33.025.024 - nus_JTAN_read_15.hdf5
40.766.864 - nus_JTAN_read_16.hdf5
30.881.344 - nus_JTAN_read_20.hdf5
32.892.784 - nus_JTAN_sing_07.hdf5
68.263.504 - nus_JTAN_sing_15.hdf5
57.902.384 - nus_JTAN_sing_16.hdf5
66.187.104 - nus_JTAN_sing_20.hdf5
18.875.344 - nus_KENN_read_04.hdf5
32.224.624 - nus_KENN_read_10.hdf5
13.963.904 - nus_KENN_read_12.hdf5
48.288.304 - nus_KENN_read_17.hdf5
66.980.544 - nus_KENN_sing_04.hdf5
45.873.184 - nus_KENN_sing_10.hdf5
21.292.784 - nus_KENN_sing_12.hdf5
71.963.904 - nus_KENN_sing_17.hdf5
20.118.864 - nus_MCUR_read_04.hdf5
28.640.224 - nus_MCUR_read_10.hdf5
14.915.104 - nus_MCUR_read_12.hdf5
47.520.384 - nus_MCUR_read_17.hdf5
65.003.904 - nus_MCUR_sing_04.hdf5
46.302.384 - nus_MCUR_sing_10.hdf5
21.464.464 - nus_MCUR_sing_12.hdf5
69.822.544 - nus_MCUR_sing_17.hdf5
21.049.184 - nus_MPOL_read_05.hdf5
29.343.184 - nus_MPOL_read_11.hdf5
19.086.464 - nus_MPOL_read_19.hdf5
27.257.504 - nus_MPOL_read_20.hdf5
69.344.624 - nus_MPOL_sing_05.hdf5
64.815.984 - nus_MPOL_sing_11.hdf5
68.808.704 - nus_MPOL_sing_19.hdf5
62.663.024 - nus_MPOL_sing_20.hdf5
20.121.184 - nus_MPUR_read_02.hdf5
32.505.344 - nus_MPUR_read_03.hdf5
21.260.304 - nus_MPUR_read_06.hdf5
41.764.464 - nus_MPUR_read_14.hdf5
28.637.904 - nus_MPUR_sing_02.hdf5
34.498.224 - nus_MPUR_sing_03.hdf5
63.417.024 - nus_MPUR_sing_06.hdf5
61.846.384 - nus_MPUR_sing_14.hdf5
19.427.504 - nus_NJAT_read_07.hdf5
23.615.104 - nus_NJAT_read_15.hdf5
29.282.864 - nus_NJAT_read_16.hdf5
21.837.984 - nus_NJAT_read_20.hdf5
34.069.024 - nus_NJAT_sing_07.hdf5
68.882.944 - nus_NJAT_sing_15.hdf5
59.178.384 - nus_NJAT_sing_16.hdf5
63.108.464 - nus_NJAT_sing_20.hdf5
23.951.504 - nus_PMAR_read_05.hdf5
20.682.624 - nus_PMAR_read_08.hdf5
34.718.624 - nus_PMAR_read_11.hdf5
29.050.864 - nus_PMAR_read_15.hdf5
75.808.144 - nus_PMAR_sing_05.hdf5
53.118.544 - nus_PMAR_sing_08.hdf5
67.117.424 - nus_PMAR_sing_11.hdf5
68.398.064 - nus_PMAR_sing_15.hdf5
 6.699.984 - nus_SAMF_read_01.hdf5
15.038.064 - nus_SAMF_read_09.hdf5
29.995.104 - nus_SAMF_read_13.hdf5
45.877.824 - nus_SAMF_read_18.hdf5
25.995.424 - nus_SAMF_sing_01.hdf5
55.842.224 - nus_SAMF_sing_09.hdf5
57.561.344 - nus_SAMF_sing_13.hdf5
82.429.424 - nus_SAMF_sing_18.hdf5
21.793.904 - nus_VKOW_read_05.hdf5
31.178.304 - nus_VKOW_read_11.hdf5
25.162.544 - nus_VKOW_read_19.hdf5
29.493.984 - nus_VKOW_read_20.hdf5
69.344.624 - nus_VKOW_sing_05.hdf5
65.409.904 - nus_VKOW_sing_11.hdf5
68.750.704 - nus_VKOW_sing_19.hdf5
65.038.704 - nus_VKOW_sing_20.hdf5
18.559.824 - nus_ZHIY_read_02.hdf5
32.512.304 - nus_ZHIY_read_03.hdf5
21.434.304 - nus_ZHIY_read_06.hdf5
47.211.824 - nus_ZHIY_read_14.hdf5
26.308.624 - nus_ZHIY_sing_02.hdf5
32.630.624 - nus_ZHIY_sing_03.hdf5
64.505.104 - nus_ZHIY_sing_06.hdf5
63.082.944 - nus_ZHIY_sing_14.hdf5

Hope this helps; thank you very much for your support.

@pc2752
Contributor

pc2752 commented Oct 28, 2019

Hi,

Thanks for the feedback. The warning

RuntimeWarning: divide by zero encountered in log2
  y=69+12*np.log2(f0/440)

isn't really a problem; it comes from taking the log of 0, which is corrected for later.
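The warning can be reproduced in isolation: for unvoiced frames f0 is 0, so np.log2 emits the RuntimeWarning and returns -inf rather than raising, and processing continues. A minimal sketch of the f0-to-MIDI-pitch formula from utils.py (the toy f0 values are made up):

```python
import numpy as np

# f0 = 0 marks an unvoiced frame; 220 Hz and 440 Hz are voiced examples.
f0 = np.array([0.0, 220.0, 440.0])

with np.errstate(divide='ignore'):      # silence the warning locally
    y = 69 + 12 * np.log2(f0 / 440)

# y[0] is -inf (the divide-by-zero case); the finite entries are 57 and 69,
# the MIDI note numbers for 220 Hz (A3) and 440 Hz (A4).
```

The -inf entries are what the later correction step has to replace before the features can be normalized into [0, 1].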

However, could you modify the get_stats() function to make sure you are not getting any NaN or inf values in the dataset?
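One way to do that check is a small scanner over each feature array; `find_bad_values` is a hypothetical helper for illustration (in practice the arrays would be loaded from the nus_*.hdf5 files, e.g. via h5py, rather than constructed inline):

```python
import numpy as np

def find_bad_values(name, arr):
    """Return a report string if the array contains NaN/inf, else None."""
    nans = int(np.isnan(arr).sum())
    infs = int(np.isinf(arr).sum())
    if nans or infs:
        return f"{name}: {nans} NaN, {infs} inf"
    return None

print(find_bad_values("clean", np.array([0.1, 0.9])))         # None
print(find_bad_values("broken", np.array([np.nan, np.inf])))  # broken: 1 NaN, 1 inf
```

Running such a check per file narrows down which .hdf5 files poison the statistics and hence trigger the assertion.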

@yuuSiVo

yuuSiVo commented Nov 7, 2019

Hello, I am getting NaN for Dis Loss, Final Loss, and Val Dis Loss.
My training dataset is 60 files.

@pc2752
Contributor

pc2752 commented Nov 7, 2019

Are you using the NUS dataset or your own dataset?
Can you check the get_stats() function in data_pipeline.py to see if there are files which have NaNs in them? These might be what causes the assertion error.

@yuuSiVo

yuuSiVo commented Nov 12, 2019

my own dataset

@bangbuken

I am using the NUS dataset at fe434ff and ran into the same problem. I found that "nus_KENN" is excluded when the stats are calculated:

voc_list = [x for x in os.listdir(config.voice_dir) if x.endswith('.hdf5') and x.startswith('nus') and not x.startswith('nus_KENN') ]

but it is still normalized during training. So, in my opinion, the reason is that "nus_KENN" must contain some unusually large or small values, which make the normalized data fall outside the interval [0, 1]. I modified the code

and not x == 'nus_JLEE_sing_05.hdf5' and not x == 'nus_JTAN_read_07.hdf5']

to

and not x == 'nus_JLEE_sing_05.hdf5' and not x == 'nus_JTAN_read_07.hdf5' and not x.startswith('nus_KENN')]

and training goes well.
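This failure mode can be reproduced with a toy min-max normalization: if the statistics are computed on a subset that excludes the file containing the extreme values, normalizing that file pushes it outside [0, 1]. A sketch with made-up numbers, not actual dataset values:

```python
import numpy as np

stats_files = np.array([0.2, 0.5, 0.8])  # values from files used by get_stats()
excluded    = np.array([0.0, 1.5])       # e.g. a nus_KENN file with extreme values

lo, hi = stats_files.min(), stats_files.max()  # stats miss the extremes

# Min-max normalize the excluded file with the subset's statistics.
normed = (excluded - lo) / (hi - lo)

# The result leaves [0, 1], which is exactly what the training-time
# assert feats_targs.max()<=1.0 and feats_targs.min()>=0.0 rejects.
assert normed.min() < 0.0 and normed.max() > 1.0
```

So the two consistent fixes are either to include every trained-on file in get_stats(), or (as above) to exclude the offending files from training as well.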
