Feat/validation between classes #39

Merged: 18 commits merged into main on Feb 23, 2024

Conversation

@edwardxtg (Contributor) commented Feb 15, 2024

Description

I've added initial validation between classes to a new script, model_validation.py. I've added 4 functions so far, the simpler ones being:

  • For each commodity, check there is a technology which produces it
  • For each impact, check there is a technology which produces it
  • For each commodity which isn't a final demand, check that it is an input to at least one technology

The big function I've added is check_able_to_meet_demands(), which, for each commodity, region, and year, checks that there is enough allowed capacity and allowed activity to meet the final demand, looking only at technologies that directly produce that demand. The check only considers annual activity and capacity limits, not whole-model-period limits.
When this validation check fails, it raises an error giving the commodity, year, region, and the technologies producing the commodity that are overly constrained.
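
For concreteness, here is a minimal sketch of the kind of check described above. This is not the PR's implementation: the column names, the `producers` mapping, and the omission of the capacity-to-activity unit conversion are all simplifying assumptions.

```python
# Sketch only; names and data layout are illustrative assumptions, not the PR's code.
import pandas as pd


def check_able_to_meet_demands(
    demand: pd.DataFrame,        # columns: commodity, region, year, demand
    max_capacity: pd.DataFrame,  # columns: technology, region, year, max_capacity
    max_activity: pd.DataFrame,  # columns: technology, region, year, max_activity
    producers: dict[str, list[str]],  # commodity -> technologies producing it directly
) -> None:
    """Raise if allowed annual capacity/activity cannot cover a final demand."""
    for _, row in demand.iterrows():
        techs = producers.get(row["commodity"], [])
        in_scope_cap = max_capacity[
            max_capacity["technology"].isin(techs)
            & (max_capacity["region"] == row["region"])
            & (max_capacity["year"] == row["year"])
        ]
        in_scope_act = max_activity[
            max_activity["technology"].isin(techs)
            & (max_activity["region"] == row["region"])
            & (max_activity["year"] == row["year"])
        ]
        # Note: technologies with no limit rows contribute zero headroom in this sketch;
        # a real check would treat missing limits as unconstrained.
        allowed = min(in_scope_cap["max_capacity"].sum(), in_scope_act["max_activity"].sum())
        if allowed < row["demand"]:
            raise ValueError(
                f"Demand for {row['commodity']} in {row['region']} ({row['year']}) "
                f"cannot be met: producing technologies {techs} are overly constrained."
            )
```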

Testing instructions

test_otoole_roundtrip should work as normal.

If you want to see the error messages, then in the model_three_edited data the technology PWRTRNNPLXX (which produces the final electricity demand) can have its annual activity or capacity constrained via
TotalTechnologyAnnualActivityUpperLimit.csv or TotalAnnualMaxCapacity.csv.
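
For illustration, a single row like the one below in TotalTechnologyAnnualActivityUpperLimit.csv should be enough to trip the check. The region and year are placeholders, and the column layout is assumed to follow the standard otoole CSV convention (REGION,TECHNOLOGY,YEAR,VALUE):

```
REGION,TECHNOLOGY,YEAR,VALUE
R1,PWRTRNNPLXX,2025,0.0
```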

@abhishek0208 if you have time, I'd appreciate it if you could have a look at the check_able_to_meet_demands() function to see if you can follow the logic of the tests: both whether my OSeMOSYS logic makes sense and whether the code is understandable.

@edwardxtg marked this pull request as ready for review on February 16, 2024 at 16:55
@Lkruitwagen (Contributor) left a comment

Nice one @edwardxtg! I haven't reviewed the new validation method in detail, but looking at the files, I might suggest a file rename to start?
We discussed there would be four validation steps: field, composition, pre-solve, and then solve. The features implemented here are pre-solve validation, post-composition. So, first, can these validators be added as mode='after' validators on the pydantic model? (This might also make the inputs to these functions more descriptive, which will help readability.) Second, can we break these out into a separate file? Let's try grouping the model_presolve and model_composition validation together.

Two more things: we've now moved to a proper pytest framework, as you can see in the GitHub Actions. These new features must be accompanied by a new test in the tests/ folder. You could have a think about a simple model that could be specified that would be infeasible according to the validation rules, and write the test for it. Because our model composition isn't ready yet, you could mark this as pytest.mark.skip(reason=xyz) for now and give a compelling reason.

Finally, we should be naming our branches and PRs based on tasks from the agile boards.
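
As an illustration of the mode='after' suggestion above, here is a minimal pydantic v2 sketch of one of the simpler checks (that every commodity is produced by some technology). The class, field, and validator names are illustrative, not taken from the PR.

```python
# Hypothetical model; only the validator pattern is the point here.
from pydantic import BaseModel, model_validator


class RunSpec(BaseModel):
    commodities: list[str]
    # technology -> commodities it produces (illustrative structure)
    technology_outputs: dict[str, list[str]]

    @model_validator(mode="after")
    def check_all_commodities_produced(self) -> "RunSpec":
        produced = {c for outputs in self.technology_outputs.values() for c in outputs}
        missing = [c for c in self.commodities if c not in produced]
        if missing:
            raise ValueError(f"No technology produces commodities: {missing}")
        return self
```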

@edwardxtg (Contributor, Author) commented Feb 21, 2024

> Nice one @edwardxtg! I haven't reviewed the new validation method in detail, but looking at the files, I might suggest a file rename to start? We discussed there would be four validation steps: field, composition, pre-solve, and then solve. The features implemented here are pre-solve validation, post-composition. So, first, can these validators be added as mode='after' validators on the pydantic model? (This might also make the inputs to these functions more descriptive, which will help readability.) Second, can we break these out into a separate file? Let's try grouping the model_presolve and model_composition validation together.
>
> Two more things: we've now moved to a proper pytest framework, as you can see in the GitHub Actions. These new features must be accompanied by a new test in the tests/ folder. You could have a think about a simple model that could be specified that would be infeasible according to the validation rules, and write the test for it. Because our model composition isn't ready yet, you could mark this as pytest.mark.skip(reason=xyz) for now and give a compelling reason.
>
> Finally, we should be naming our branches and PRs based on tasks from the agile boards.

@Lkruitwagen I've made the suggested changes and added testing for the model validation.

The failing-case tests work by reading the CSVs from the examples folder, making the necessary changes, and rewriting the CSVs into separate folders to be tested, after which they are deleted.

I'm not sure I see the need to mark these tests as skipped for now?

Grateful for your review once more.

(I've also snuck in a test for the to_xr_ds function)

Review thread on tests/test_construction/test_model.py (outdated, resolved)
@Lkruitwagen (Contributor) commented

Looks good Ed! I don't think you need to implement the skipping in this PR.
I've requested one more change, which is to package your CSV re-writing in a fixture.
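
A minimal sketch of such a fixture, assuming the example data lives under examples/model_three_edited and uses the standard otoole CSV layout (REGION,TECHNOLOGY,YEAR,VALUE); the fixture name and paths are illustrative, not the PR's.

```python
# Hypothetical fixture packaging the CSV re-writing described above.
import shutil
from pathlib import Path

import pandas as pd
import pytest


@pytest.fixture
def overconstrained_model(tmp_path: Path):
    """Copy the example CSVs, zero out PWRTRNNPLXX's annual activity limit, then clean up."""
    src = Path("examples") / "model_three_edited"  # assumed location of the example data
    dst = tmp_path / "model_three_overconstrained"
    shutil.copytree(src, dst)

    limit_file = dst / "TotalTechnologyAnnualActivityUpperLimit.csv"
    limits = pd.read_csv(limit_file)
    # If PWRTRNNPLXX has no existing row, one would need to be appended instead.
    limits.loc[limits["TECHNOLOGY"] == "PWRTRNNPLXX", "VALUE"] = 0.0
    limits.to_csv(limit_file, index=False)

    yield dst

    # pytest removes tmp_path automatically; explicit cleanup mirrors the deletion step.
    shutil.rmtree(dst, ignore_errors=True)
```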

@edwardxtg merged commit 5837ed7 into main on Feb 23, 2024
1 check passed
@edwardxtg deleted the feat/validation_between_classes branch on February 23, 2024 at 09:52