Added Model for Time Series Classification #253
Conversation
I have also added a multivariate classification dataset and verified the training loop is working fine for it.
Couple of drive-by comments on the WIP here
Hey guys, @ToucheSir @darsnack @lorenzoh. I have updated the notebook. This PR essentially contains code to load regression datasets and the notebook for the classification task.
Looks good to go modulo a couple small things!
function blockmodel(inblock::TimeSeriesRow,
                    outblock::OneHotTensor{0},
                    backbone)
    data = rand(Float32, inblock.nfeatures, 32, inblock.obslength)
Should be using Flux.outputsize (I think we talked about this in a meeting). Changes required should be minimal :)
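For context, Flux.outputsize infers a model's output shape without running a real forward pass. A minimal sketch (the Chain here is made up for illustration, not the PR's actual backbone):

```julia
using Flux

# outputsize propagates only shapes through the model (via Nil-valued
# arrays), so no random input data or actual computation is needed.
backbone = Chain(Dense(3 => 16), Dense(16 => 8))
Flux.outputsize(backbone, (3, 32))  # -> (8, 32), i.e. (features, batch)
```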
I tried doing that, but it seems to give an error. Will try to debug it later.
https://pastebin.com/mB7aYvdD
I see, this is FluxML/Flux.jl#1755 (comment). Can you do the following then?
- Use zeros or ones instead of rand.
- Pass a batch size of 1 instead of 32 and a sequence length of 1 instead of inblock.obslength. This will save some time, and you don't need it because you only care about the first output dimension.
- reset! the backbone after you calculate output.
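The suggestion above can be sketched roughly as follows (hypothetical helper: nfeatures stands in for the PR's inblock.nfeatures, and it assumes Flux's pre-0.15 recurrent API, where reset! clears Recur state):

```julia
using Flux

# Hypothetical sketch of the review suggestion: a deterministic zeros
# input with batch size 1 and sequence length 1 (only the first output
# dimension matters), followed by reset! of the recurrent backbone.
function output_dim(nfeatures::Int, backbone)
    data = zeros(Float32, nfeatures, 1, 1)  # (features, batch, time)
    output = backbone(data)
    Flux.reset!(backbone)                   # clear leftover recurrent state
    return size(output, 1)
end
```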
Yes, sounds great.
InlineTest = "bd334432-b1e7-49c7-a2dc-dd9149e4ebd6"
MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
UnicodePlots = "b8865327-cd53-5732-bb35-84acbb429228"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
I don't see Zygote used anywhere, so one less dependency doesn't hurt.
I tried doing that, but it throws an error:
ERROR: LoadError: ArgumentError: Package FastTimeSeries does not have Zygote in its dependencies
This really could use a full stack trace, because AFAICT you're not using Zygote anywhere. Did you ] resolve after removing it from the project?
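For reference, the same cleanup can be done non-interactively via the Pkg API (a sketch; assumes the project's environment is the active one and Zygote is still listed as a dependency):

```julia
using Pkg

# Remove Zygote from the active project's dependencies, then re-resolve
# the manifest. Mirrors `] rm Zygote` followed by `] resolve` in the
# Pkg REPL.
Pkg.rm("Zygote")
Pkg.resolve()
```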
Seems to work now, might have missed the resolve. Thanks :)
Great progress in my absence, and cool to see the training working! Are the outstanding comments being addressed in #256, or could this be merged individually beforehand?
I am resolving the outstanding comments in #256; some of them have been resolved and a couple are left, which I will resolve by end of day. This can be merged then, I believe.
I think this PR is complete?
I wasn't aware #256 was also working on the StackedRNN model. It would be preferable to merge those changes here so that PR stays more focused on the InceptionTime. If not, we can just close this one and focus on that one (I'm interpreting your comment to mean that the changes in #256 completely subsume this PR).
PR #256 does contain the code changes in this PR, since I created that branch from this branch. Other than that, there's just some renaming done there, etc.
You'd likely have to rebase #256 after merging this PR if you've made changes to the same functions/types in that one. If that doesn't sound like a problem, we can merge this first.
Wouldn't be an issue. 👍🏻
Merging as noted to follow up in #256.
I have added the code for a basic RNN Model for the task of time series classification.
As I discussed with @darsnack, the idea is to add an encoding to do the reshaping to (features, batch, time) instead of doing it inside the RNN model. Working on that right now.
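The reshaping mentioned above can be sketched as follows (the sizes here are made up for illustration; it assumes the loader batches observations as (features, time, batch)):

```julia
# Flux's recurrent layers consume 3D arrays laid out as
# (features, batch, time); permutedims swaps the last two dimensions
# to move the batch dimension into the middle.
x = rand(Float32, 4, 100, 32)   # (features, time, batch)
x = permutedims(x, (1, 3, 2))   # -> (features, batch, time)
size(x)                          # (4, 32, 100)
```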
Have removed the type from StackedLSTM as it was redundant.