Update nf scaling. #1145
Conversation
Codecov Report: All modified lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #1145      +/-   ##
==========================================
+ Coverage   94.88%   94.89%   +0.01%
==========================================
  Files         341      341
  Lines       34481    34549      +68
==========================================
+ Hits        32716    32784      +68
  Misses       1765     1765

☔ View full report in Codecov by Sentry.
rejection_factor = (
    10 ** abs(self.permeate_side[t, x].charge_comp[j].value) - 1
) / 10 ** abs(self.permeate_side[t, x].charge_comp[j].value)
I am trying to bring these into my PR to see what the result will be. However, is rejection_factor making much difference? The comment says it will be 9, 99, 999 for valences 1, 2, 3, but as written it would be 0.9, 0.99, 0.999, leading me to believe that `rejection_factor` might not be critical. Sanity check?
Ah- it should actually make a difference, since you are doing (1 - rejection_factor) below.
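For reference, here is a quick standalone sketch of the values in question (an illustrative computation, not the model code), showing that the factor as written is 0.9/0.99/0.999, so the downstream (1 - rejection_factor) term is what actually varies by an order of magnitude per valence:

```python
# Standalone sketch of the rejection_factor expression from this PR,
# evaluated for valences 1-3 (illustrative only, not the model code).
def rejection_factor(charge):
    z = abs(charge)
    return (10 ** z - 1) / 10 ** z

for z in (1, 2, 3):
    rf = rejection_factor(z)
    # rf is 0.9, 0.99, 0.999 (i.e. 90/99/99.9 %), so the downstream
    # (1 - rejection_factor) passage term spans 0.1 down to 0.001.
    print(z, rf, 1 - rf)
```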
Yea, that was not the clearest note; I should have added the %. We also need to think about how to deal with "physical hindrance"... wait, don't we have an expression for that in the NF model already?
lambda_comp is the ratio between the Stokes radius of the solute and the membrane pore radius, so we technically could have used that in place of your manual writing of it, although it is an expression with a smooth_min function in it, so I didn't press for it. We do have convective and diffusive hindrance factors in the NF model, if that's what you were hinting at.
Yeah, I actually tested out using diffus_pore_comp, but it made only a minor change to unscaled tolerances, so I did not update it.
The issue still remains for me that if I add a low concentration of an ion like Sr or Ba, my dual infeasibility is at about 7e-6; ideally, we want to push that to 1e-8, like in the current NF flowsheet. But there is still something missing in the puzzle. I wonder if we are focusing on the wrong scaling. One other thought came to mind: we don't properly scale concentrations at the membrane-water interface, which would be higher than bulk... maybe that is worth looking at next for me.
I will keep this PR up so we can use it as a test bed; there must be a better way to do this scaling.
Sounds good. I will put this PR into draft mode to avoid confusion by other potential reviewers.
Great. Go ahead and push PR #1136 first. I want to keep this up so I can test Linux/Windows impacts and propose new scaling approaches if there is something I figure out that's cleaner/better.
Trying to merge PR #1136 first, and @avdudchenko wants to keep this up to try a few more mods after the fact.
@@ -275,7 +275,6 @@ def unfix_opt_vars(m):
    m.fs.NF.nfUnit.velocity.unfix()
    m.fs.NF.nfUnit.velocity.setub(0.25)
    m.fs.product.max_hardness.fix(500)
-   m.fs.NF.nfUnit.recovery_vol_phase.fix(0.90)
Does this still break if you fix to 90% after scale refinement (considering all the latest changes with all checks passing)?
I would still prefer fixing the recovery rate here, simply for the sake of avoiding future test failures due to the recovery rate riding up to the upper bound and the subtle variations that could occur from that, but I am going to approve this anyway and leave the choice to you.
"recovery": [
    m.fs.NF.nfUnit.recovery_vol_phase[0.0, "Liq"] * 100,
    90,
    94.64,
My only concern about letting recovery be "optimized" is that it will drive up to the upper bound, and at least previously the test solution would deviate a little every now and then with new changes in future PRs. It may be that the latest scaling changes have remedied the issue completely, but it might also be the case that future tests fail because the solution converged to another recovery value close to the 95% upper bound but outside the tolerance of the regression test.
-   iscale.set_scaling_factor(self.membrane_thickness_effective, 1e7)
+   iscale.set_scaling_factor(
+       self.membrane_thickness_effective,
+       1 / self.membrane_thickness_effective.value,
Generally, it's better to go with value(object) over object.value (after importing value from pyomo.environ), since the current approach would work with variables but fail with expressions, whereas value(object) should work across the board.
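A minimal stand-in illustration of that distinction (toy classes rather than Pyomo itself, so it is self-contained): a variable carries a .value attribute, an expression does not and must be evaluated, and a value() helper handles both:

```python
# Toy sketch of why value(obj) beats obj.value. ToyVar and ToyExpression
# are stand-ins for Pyomo's Var and Expression, not real Pyomo classes.
class ToyVar:
    def __init__(self, value):
        self.value = value

class ToyExpression:
    def __init__(self, fn):
        self._fn = fn  # no .value attribute, mirroring Pyomo expressions

    def evaluate(self):
        return self._fn()

def value(obj):
    # Works for both kinds of object: use the stored value if present,
    # otherwise evaluate the expression.
    if hasattr(obj, "value"):
        return obj.value
    return obj.evaluate()

v = ToyVar(2.0)
e = ToyExpression(lambda: v.value * 3)
print(value(v), value(e))  # e.value would raise AttributeError
```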
LGTM - just a couple of questions/comments before we merge this
        iscale.set_scaling_factor(v, sf)

    for (t, p, j), v in self.flux_mol_phase_comp_avg.items():
This is an Expression and I never add scaling for them. Did this make a difference?
I see- you use it later on... though it looks like you are just using the scaling factor of flow_mol_phase_comp.
@@ -1904,6 +1926,7 @@ def calculate_scaling_factors(self):

    # these variables do not typically require user input,
    # will not override if the user does provide the scaling factor
+   # make sure we scale solvent first!
Good catch!
hfd = value(self.hindrance_factor_diffusive_comp[t, j])
bs = value(self.partition_factor_born_solvation_comp[t, j])
fdc = value(self.partition_factor_donnan_comp_feed[t, x, j])
hinderence_factor = hfd * bs * fdc
Suggested change:
-   hinderence_factor = hfd * bs * fdc
+   hindrance_factor = hfd * bs * fdc
if (
    membrane_charge > 0
    and value(self.permeate_side[t, x].charge_comp[j]) > 0
):
    charge_reflection_multiplier = 10 ** -abs(
        value(self.permeate_side[t, x].charge_comp[j])
    )
elif (
    membrane_charge < 0
    and value(self.permeate_side[t, x].charge_comp[j]) < 0
):
    charge_reflection_multiplier = 10 ** -abs(
        value(self.permeate_side[t, x].charge_comp[j])
    )
else:
    charge_reflection_multiplier = 10 ** abs(
        value(self.permeate_side[t, x].charge_comp[j])
    )
iscale.constraint_scaling_transform(
    con, 1 / (hinderence_factor * charge_reflection_multiplier)
)
Same suggestion for condensing as above
will do
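As a sanity check on the condensing suggestion, the three branches collapse to a sign comparison, since the two co-signed cases produce the same 10**-|z| multiplier. A hypothetical standalone sketch using plain floats (not the model code):

```python
# Condensed form of the charge_reflection_multiplier branching above,
# as a plain function (illustrative sketch, not the model code).
def charge_reflection_multiplier(membrane_charge, ion_charge):
    # Same nonzero sign (counter-ion attracted, co-ion rejected):
    # 10**-|z|; opposite signs or a zero charge: 10**|z|.
    z = abs(ion_charge)
    if membrane_charge * ion_charge > 0:
        return 10.0 ** -z
    return 10.0 ** z

print(charge_reflection_multiplier(1, 2))    # both positive -> 0.01
print(charge_reflection_multiplier(-1, 2))   # opposite signs -> 100.0
```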
    ** -1
) ** -1
Hmm... this is all well and good, following that logic of equivalent resistance in parallel circuits, but do you have an extra power of -1 that should be removed here?
It's because we need to sum this with the scaling factor for membrane charge, which I do later down,
e.g. ((sum(x^-1)^-1)^-1+mem_charge^-1)^-1
Makes sense- we could then cancel these powers out, but it won't functionally change anything. I'll leave the choice to you.
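A sketch of the parallel-resistance-style combination being discussed, with made-up scaling-factor numbers (illustrative only, not WaterTAP code): the inner sum of reciprocals is kept in reciprocal form via the extra **-1 so it can later be combined with the membrane-charge term before the final **-1.

```python
# Equivalent-resistance-style combination of scaling factors
# (illustrative sketch with made-up numbers, not the model code).
def parallel_combine(values):
    # (sum of reciprocals)**-1: the smallest entry dominates the result.
    return sum(v ** -1 for v in values) ** -1

ion_sfs = [1e2, 1e3, 1e4]   # made-up per-ion scaling factors
mem_charge_sf = 1e2         # made-up membrane-charge scaling factor

# The extra **-1 discussed above undoes the inner combination so the
# membrane-charge reciprocal can be added in before the final **-1.
combined = (parallel_combine(ion_sfs) ** -1 + mem_charge_sf ** -1) ** -1
print(combined)
```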
Co-authored-by: Adam Atia <aatia@keylogic.com>
This reverts commit d18652b97e5816d1dad853ba3f587675ca8b9ddd.
LGTM!
Fixes/Resolves:
This PR is meant to improve NF stability when adding new ions at low concentrations, and it simplifies some of the scaling calculations.
Summary/Motivation:
When adding Sr and Ba to the NF flowsheet for external analysis, it was found that the dual infeasibility was 3e-3, while for the current NF flowsheet the dual infeasibility was 1e-7. This change fixes the estimates for ion flux scaling, which improves dual infeasibility both in the external flowsheet (to 7e-6) and in the WaterTAP flowsheet (reducing dual infeasibility to 1e-8), hopefully improving the stability of the flowsheets. Further improvements will need to be made for low ion concentrations, where dual infeasibility seems to grow.
Report from ipopt before the updated scaling, for the pseudo-box problem solve:
Output with updated scaling:
Changes proposed in this PR:
- Flux scaling is calculated based on water flux, feed ion concentration, and rejection estimated from ion valence and ion size relative to membrane pore size - this can use improvement in the future.
- Some small updates to other calculated scaling factors (they produce equivalent scaling factors as before, but simplify the code).
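The flux-scaling idea above can be sketched as a standalone function; the names, formula, and numbers here are illustrative assumptions, not the actual WaterTAP implementation:

```python
# Hypothetical sketch of valence-based ion-flux scaling (illustrative
# names and numbers, not WaterTAP's actual scaling code).
def flux_scaling_factor(water_flux, feed_conc, ion_charge):
    z = abs(ion_charge)
    # Valence-based rejection estimate: 0.9, 0.99, 0.999 for z = 1, 2, 3,
    # so the expected passage (1 - rejection) shrinks with valence.
    rejection = (10 ** z - 1) / 10 ** z
    expected_ion_flux = water_flux * feed_conc * (1 - rejection)
    # Scaling factor is the reciprocal of the expected flux magnitude.
    return 1 / expected_ion_flux

# A trace divalent ion: tiny expected flux -> large scaling factor
sf = flux_scaling_factor(water_flux=1e-5, feed_conc=1e-3, ion_charge=2)
print(sf)  # roughly 1e10
```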
Legal Acknowledgement
By contributing to this software project, I agree to the following terms and conditions for my contribution: