Feat/validation between classes #39
Conversation
Nice one @edwardxtg! I haven't reviewed the new validation method in detail, but looking at the files, I might suggest a file rename to start?
We discussed there would be four validation steps: field, composition, pre-solve, and then solve. The features implemented here are pre-solve validation, post-composition. So, first, can these validators be added with mode='after' on the pydantic model? (This might also make the inputs to these functions more descriptive, which will help readability.) Second, can we break these out into a separate file? Let's try grouping the model_presolve and model_composition validation together.
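To make the suggestion concrete, a minimal sketch of what an "after"-mode validator could look like in pydantic v2 (whose `model_validator` parameter is `mode='after'`). The `Technology` model and its capacity fields are hypothetical illustrations, not this repository's actual schema:

```python
from pydantic import BaseModel, model_validator


class Technology(BaseModel):
    # Hypothetical fields for illustration only.
    name: str
    capacity_lower: float
    capacity_upper: float

    @model_validator(mode="after")
    def check_capacity_bounds(self) -> "Technology":
        # Runs after all field validation, so both bounds are already
        # parsed and the check can refer to the whole model by name.
        if self.capacity_lower > self.capacity_upper:
            raise ValueError(
                f"{self.name}: lower capacity bound exceeds upper bound"
            )
        return self
```

Because the validator receives the fully constructed model, the inputs are self-describing attribute accesses rather than loose positional arguments, which is the readability gain mentioned above.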
Two more things: we've now moved to a proper pytest framework, as you can see in the GitHub Actions. These new features must be accompanied by a new test in the tests/ folder. You could have a think about a simple model specification that would be infeasible according to the validation rules, and write the test for this. Because our model composition isn't ready yet, you could mark this as pytest.mark.skip(reason=xyz) for now and give a compelling reason.
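The skip marker mentioned above could look like the following sketch; the test name and reason string are placeholders, not the PR's actual test:

```python
import pytest


@pytest.mark.skip(
    reason="model composition is not implemented yet; "
    "re-enable once composed models can be built"
)
def test_infeasible_model_is_rejected():
    # Placeholder body: would build a model that violates the
    # demand-feasibility rules and assert that validation raises.
    raise AssertionError("should not run while skipped")
```

When pytest collects this test it reports it as skipped with the given reason, so the suite stays green while still documenting the missing coverage.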
Finally, we should be naming our branches and PRs now based on tasks from the agile boards.
@Lkruitwagen I've made the suggested changes and added testing for the model validation. The failure tests work by reading the CSVs from the examples folder, making the necessary changes, then rewriting the CSVs to separate folders to be tested, after which they are deleted. I'm not sure I see the need to mark these tests as skipped for now? Grateful for your review once more. (I've also snuck in a test for the to_xr_ds function.)
Looks good, Ed! I don't think you need to implement the skipping in this PR.
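The copy-edit-delete workflow described above can be sketched as a small helper; the source folder layout, the limit file name, and the `VALUE` column are assumptions based on the otoole CSV format, not the PR's actual test code:

```python
import shutil
from pathlib import Path

import pandas as pd


def make_constrained_copy(src: Path, dst: Path, value: float = 0.0) -> Path:
    # Copy an example model folder, then tighten one limit file so that
    # validation on the copy should fail while the original stays intact.
    shutil.copytree(src, dst)
    limit_file = dst / "TotalAnnualMaxCapacity.csv"
    df = pd.read_csv(limit_file)
    df["VALUE"] = value  # constrain every technology's capacity
    df.to_csv(limit_file, index=False)
    return dst
```

In a pytest suite, `dst` would typically live under the built-in `tmp_path` fixture so the edited copies are cleaned up automatically, matching the write-then-delete approach described above.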
Description
I've added initial validation between classes to a new script, model_validation.py. I've added 4 functions so far, the simpler ones being:
The big function I've added is check_able_to_meet_demands(), which, for each commodity, region, and year, checks that there is enough allowed capacity and allowed activity to meet demands, for technologies that directly produce the final demand. This check only considers annual activity and capacity limits, not whole-model-period limits.
When this validation check does not pass, it raises an error giving the names of the commodity, year, region, and the technologies producing the commodity that are overly constrained.
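A hedged sketch of the check described above; all column names, the `producers` mapping, and the error message are assumptions for illustration, not the PR's actual implementation. Note this sketch treats a technology with no listed limit as contributing zero, whereas a real implementation would treat a missing limit as unbounded:

```python
import pandas as pd


def check_able_to_meet_demands(demand, max_activity, producers):
    # For each (region, commodity, year) demand, sum the annual activity
    # limits of the technologies that directly produce that commodity and
    # raise if the combined limit cannot cover the demand.
    for (region, fuel, year), rows in demand.groupby(["REGION", "FUEL", "YEAR"]):
        required = rows["VALUE"].sum()
        techs = producers.get(fuel, [])
        limits = max_activity[
            (max_activity["REGION"] == region)
            & (max_activity["YEAR"] == year)
            & (max_activity["TECHNOLOGY"].isin(techs))
        ]
        available = limits["VALUE"].sum()
        if available < required:
            raise ValueError(
                f"Commodity {fuel} in {region}/{year}: demand {required} "
                f"exceeds the combined activity limit {available} of {techs}"
            )
```

The per-commodity, per-region, per-year grouping mirrors the description above: only technologies that directly produce the final demand are summed, and only annual limits are considered.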
Testing instructions
test_otoole_roundtrip should work as normal.
If you want to see the error messages: in the model_three_edited data, the technology PWRTRNNPLXX (which produces the final electricity demand) can have its annual activity or capacity constrained via TotalTechnologyAnnualActivityUpperLimit.csv or TotalAnnualMaxCapacity.csv.
@abhishek0208 if you have time, I would appreciate it if you could have a look at the check_able_to_meet_demands() function and the logic of its tests, both whether my OSeMOSYS logic makes sense and whether the code is understandable.