gmmhmm cannot work #1
Hmm, I'm not sure how to reproduce this. Are you running one of the provided config files?
Edit: the question above appears to have been deleted. For anyone wondering, the commenter was asking whether they could use Python 3.6 instead of Python 2.7. I've only tried Python 2.7, so that's the only version I can endorse. Python 3.6 may force you to use different versions of the modules, and those versions may not behave the same in all regards.
(If time permits, I'll test the code with Python 3.6 later this week and get back to you.)
Thanks for your reply. I tested it on Python 3.6. It seems to work for feature extraction and model building. But my current problem is that some of the models, e.g. walking, always return NaN as the score. Therefore, after sorting, it always returns 0 for this activity. Would it be possible to share your models with me? I want to reproduce the accuracy you got. It should be around 50% for each activity, right? Thanks!
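The symptom above (a NaN score silently winning or losing the ranking) is a common pitfall: comparisons involving NaN are always false, so the selected "best" model depends on iteration order. A minimal sketch of a NaN-safe selection step, using hypothetical activity names and log-likelihood values for illustration:

```python
import math

def best_activity(scores):
    """Pick the activity whose model gave the highest log-likelihood,
    skipping models whose score came back as NaN.

    `scores` maps activity name -> log-likelihood (possibly NaN)."""
    valid = {name: ll for name, ll in scores.items() if not math.isnan(ll)}
    if not valid:
        return None  # every model failed; caller should handle this
    return max(valid, key=valid.get)

# Hypothetical scores: the 'walking' model degenerated to NaN.
scores = {"walking": float("nan"), "running": -1234.5, "sitting": -980.2}
print(best_activity(scores))  # -> sitting
```

Filtering before `max` is safer than relying on comparison semantics, since `nan > x` and `nan < x` are both false in Python.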
Sure, here's a ZIP file with some pre-trained models. If you extract the contents of the ZIP file into the project's root directory, you can evaluate the models' performance with
Using these models, I observe classification accuracies ranging from 58% to 80% on my validation split (the one generated by
Hey @fbiying87, I managed to run a couple of tests. Python 3.6 seems to work as long as you use the exact versions of the modules that are specified in requirements.txt. For the time being, I recommend you just set up a virtual environment with the supported dependency versions.
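For checking whether an environment actually matches the pinned versions, a small sketch that parses `==`-pinned requirement lines is shown below. The package names and version numbers in the example are hypothetical placeholders; the real pins live in the project's requirements.txt.

```python
def parse_requirements(lines):
    """Parse 'name==version' pins from requirements-style lines,
    ignoring blank lines and comments."""
    pins = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, sep, version = line.partition("==")
        if sep:  # only collect exact pins
            pins[name] = version
    return pins

# Hypothetical pins for illustration only:
example = ["hmmlearn==0.2.0", "", "# vision deps", "opencv-python==3.4.0.12"]
print(parse_requirements(example))
```

Comparing this mapping against `pip freeze` output (parsed the same way) is one way to confirm a virtual environment matches the supported versions.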
Hi @ohjay, thanks a lot for your reply. I figured it would be something like this. I used the latest hmmlearn and OpenCV versions. Some models were dropped due to NaN predictions, which is why the model counts differed for each activity. I will try to use the exact versions from requirements.txt to reproduce the results. Thanks again.
If the option 'm_type' is set to 'gmm', training does not converge, whereas with 'GaussianHMM' training does converge.
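A simple way to detect the non-convergence (and NaN degeneration) described above is to watch the per-iteration log-likelihood history that EM training produces. The sketch below is a crude, library-free illustration of that check; the tolerance value and history lists are hypothetical, not taken from the project.

```python
import math

def has_converged(loglik_history, tol=1e-2):
    """Crude EM-style convergence check: training is considered
    converged if no NaN appeared in the log-likelihood history and
    the last improvement is smaller than `tol`."""
    if any(math.isnan(ll) for ll in loglik_history):
        return False  # e.g. a mixture component degenerated
    if len(loglik_history) < 2:
        return False  # not enough iterations to judge
    return abs(loglik_history[-1] - loglik_history[-2]) < tol

# Hypothetical histories for illustration:
print(has_converged([-500.0, -480.0, -479.995]))  # True: improvement < tol
print(has_converged([-500.0, float("nan")]))      # False: NaN appeared
```

A GMM emission model has many more free parameters than a single Gaussian, so on limited training data it is plausible for 'gmm' to stall or degenerate where 'GaussianHMM' converges; pinning the dependency versions, as discussed above, rules out library behavior differences first.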