
gmmhmm cannot work #1

Open · LINYANWANG opened this issue Feb 20, 2019 · 7 comments

@LINYANWANG

If the 'm_type' option is set to 'gmm', training does not converge. With 'GaussianHMM', training works and converges.
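
For context, a minimal, hypothetical repro sketch (not the repo's code), assuming 'm_type' simply selects between hmmlearn's GaussianHMM and GMMHMM classes; the synthetic data and parameters are illustrative only:

```python
# Hypothetical repro: check EM convergence for both model types on dummy data.
import numpy as np
from hmmlearn.hmm import GaussianHMM, GMMHMM

rng = np.random.RandomState(0)
X = rng.randn(500, 6)  # stand-in feature sequence: 500 frames, 6-dim features

for cls, extra in [(GaussianHMM, {}), (GMMHMM, {"n_mix": 2})]:
    model = cls(n_components=4, covariance_type="diag", n_iter=50, **extra)
    model.fit(X)
    # hmmlearn's ConvergenceMonitor records whether EM reached its tolerance
    print(cls.__name__, "converged:", model.monitor_.converged)
```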

@ohjay (Owner) commented Feb 20, 2019

Hmm, I'm not sure how to reproduce this. Are you running one of the provided config files?
With Python 2.7, hmmlearn 0.2.0, and the owen.yaml config file, my models converge.

@ohjay (Owner) commented Sep 5, 2019

Edit: the question above appears to have been deleted. For anyone wondering, the commenter was asking whether they could use Python 3.6 instead of Python 2.7.

I've only tried Python 2.7, so I guess that's the only version I can endorse. Maybe Python 3.6 forces you to use different versions of the modules and they don't behave the same in all regards.

@ohjay (Owner) commented Sep 5, 2019

(If time permits, I'll test the code with Python 3.6 later this week and get back to you.)

@fbiying87

> (If time permits, I'll test the code with Python 3.6 later this week and get back to you.)

Thanks for your reply. I tested it with Python 3.6, and feature extraction and model building seem to work. My current problem is that some of the models, e.g. walking, always return NaN as the score, so after sorting, that activity always gets 0. Would it be possible to share your models with me? I want to reproduce the accuracy you got. It should be around 50% for each activity, right?

Thanks!
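
A hedged sketch of the failure mode described above, assuming a classify step that picks the highest-scoring model (the names and structure here are illustrative, not the repo's actual code): a NaN log-likelihood compares unpredictably during sorting, so guarding it explicitly avoids the degenerate result.

```python
# Illustrative only (not the repo's classify code): NaN log-likelihoods
# compare as False against everything, so a sort/argmax over raw scores
# can silently misrank. Mapping NaN to -inf makes the failure explicit.
import math

def pick_activity(models, features):
    scores = {}
    for name, model in models.items():  # name -> trained hmmlearn model
        s = model.score(features)       # log-likelihood of the feature sequence
        scores[name] = float("-inf") if math.isnan(s) else s
    return max(scores, key=scores.get)
```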

@ohjay (Owner) commented Sep 6, 2019

Sure, here's a ZIP file with some pre-trained models. If you extract its contents into the project's root directory, you can evaluate the models' performance with

python main.py classify config/quickstart.yaml

Using these models, I observe classification accuracies ranging from 58% to 80% on my validation split (the one generated by ./get_data.sh).

@ohjay (Owner) commented Sep 7, 2019

Hey @fbiying87, I managed to run a couple of tests. Python 3.6 seems to work as long as you use the exact versions of the modules that are specified in requirements.txt. It's when you switch those up that things get a little iffy. After I upgraded hmmlearn and scikit-learn to their latest versions, I started seeing NaN warnings, so there may be numerical issues somewhere.

For the time being, I recommend you just set up a virtual environment with the supported dependency versions. quickstart.sh might help you get started with that.
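
As a quick sanity check before training, one could print the installed versions and compare them against the pins (assuming requirements.txt pins hmmlearn 0.2.0, as mentioned earlier in the thread):

```python
# Hedged sanity check: confirm the active environment matches requirements.txt.
# The hmmlearn pin (0.2.0) comes from this thread; check the scikit-learn pin
# against requirements.txt itself.
import hmmlearn
import sklearn

print("hmmlearn:", hmmlearn.__version__)      # expected 0.2.0
print("scikit-learn:", sklearn.__version__)
```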

@fbiying87

Hi @ohjay, thanks a lot for your reply. I figured it would be something like this. I used the latest hmmlearn and OpenCV versions. Some models were dropped due to NaN predictions, which is why the model counts differed across activities. I will try the exact versions from requirements.txt to reproduce the results. Thanks again.
