
allow external lattice to provide its own field-merge implementation #47910

Merged: 1 commit from avi/external-field-merge into master on Dec 20, 2022

Conversation

@aviatesk (Sponsor Member) commented Dec 16, 2022:

When merging `PartialStruct`s, we currently merge their fields somewhat aggressively (#44404) in order to accelerate convergence. However, when a `PartialStruct` wraps external lattice elements, this can be too aggressive, since the merge does not call `tmerge(𝕃, fields...)` recursively and thus the external lattice elements are not merged as expected.

This commit adds an additional lattice hook, `tmerge_field`, inside `tmerge(::PartialsLattice)` so that external lattice implementations can provide their own field-merge strategies.
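For illustration, here is a minimal sketch of what opting into this hook could look like from an external lattice's side. `IntervalLattice`, `Interval`, and the hull-based merge are hypothetical names invented for this example (they are not part of this PR or of `Core.Compiler`); only `AbstractLattice`, `widenlattice`, and the new `tmerge_field` hook are real:

```julia
# Hypothetical sketch: an external lattice opting into `tmerge_field`.
# `IntervalLattice` and `Interval` are made-up names for illustration only.
import Core.Compiler
using Core.Compiler: AbstractLattice

struct Interval              # made-up extended lattice element tracking a value range
    lo::Float64
    hi::Float64
end

struct IntervalLattice{L<:AbstractLattice} <: AbstractLattice
    parent::L                # the inner (native) lattice this one wraps
end
Core.Compiler.widenlattice(𝕃::IntervalLattice) = 𝕃.parent

# When both fields are `Interval`s, merge them by taking their hull; otherwise
# return `nothing` so that `tmerge(::PartialsLattice, ...)` keeps its default
# aggressive field merge.
function Core.Compiler.tmerge_field(𝕃::IntervalLattice, @nospecialize(a), @nospecialize(b))
    a isa Interval && b isa Interval || return nothing
    return Interval(min(a.lo, b.lo), max(a.hi, b.hi))
end
```

Returning `nothing` rather than erroring keeps the hook opt-in: a lattice that does not care about a particular pair of fields simply defers to the existing merge.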

@aviatesk requested a review from @Keno on December 16, 2022 06:11
Review thread on base/compiler/typelimits.jl (outdated):
This is an opt-in interface to allow external lattice implementation to provide its own
field-merge strategy. If it returns `nothing`, `tmerge(::PartialsLattice, ...)`
will use the default aggressive type merge implementation that does not use `tmerge`
recursively to accelerate convergence.
Sponsor Member:

"accelerate" is probably the wrong term here, since the later computations this skips were demonstrated to be necessary to reach convergence at all.

Also, confusingly, the `tmerge` function does not directly operate on the `AbstractLattice` subtypes, but rather on the complexity lattice, and merely widens using the `AbstractLattice`.

@aviatesk (Author) replied:

> Also, confusingly, the `tmerge` function does not directly operate on the `AbstractLattice` subtypes, but rather on the complexity lattice, and merely widens using the `AbstractLattice`.

Could you elaborate on this? Do you mean we want to set up another lattice for representing complexity, and that `tmerge` should work on it rather than on the `AbstractLattice`?

@Keno (Member) commented Dec 16, 2022:

For future reference, #44404 was the change that made this more aggressive.

Commit message:

When merging `PartialStruct`, we are currently merging their fields a
bit aggressively (#47307) in order to accelerate convergence.
However, when `PartialStruct` wraps external lattice elements, this can
be too aggressive since it does not use `tmerge(𝕃, fields...)` recursively
and thus the external lattice elements are not merged as expected.

This commit adds an additional lattice hook, `tmerge_field`, inside
`tmerge(::PartialsLattice)` so that external lattice implementation
can provide its own field-merge strategies.
@@ -515,8 +515,12 @@ function tmerge(lattice::PartialsLattice, @nospecialize(typea), @nospecialize(ty
             tyi = ai
         elseif is_lattice_equal(lattice, bi, ft)
             tyi = bi
+        elseif (tyi′ = tmerge_field(lattice, ai, bi); tyi′ !== nothing)
+            # allow external lattice implementation to provide a custom field-merge strategy
+            tyi = tyi′
Sponsor Member:

Suggested change:
-            tyi = tyi′
+            if issimplertype(tyi′, ai) && issimplertype(tyi′, bi)
+                tyi = tyi′
+            else
+                tyi = Any # the custom tmerge_field implementation violates the requirements of this lattice: abort
+            end

Member:

That doesn't seem right. The requirement is that the total result of repeated `tmerge`s is bounded, not that any individual `tmerge` produces something simpler than the things you're merging.

@aviatesk (Author):

I agree that an external `tmerge_field` should apply some complexity-limiting mechanism, such as limiting an internal `Union` type to at most 4 components, but currently we don't have a good way to detect whether such a limitation has been applied to individual operations.

Sponsor Member:

In practice, we have found that to be a poor heuristic for `Union`s, since it is too big for practical purposes when the elements are similar and too small when they are distinct. It also does not converge in total unless you have also applied some complexity bounding to each individual part of the `Union`.

Sponsor Member:

Note that everywhere else this algorithm mandates `issimplertype` of each result (though `||` instead of `&&` would also suffice here), a guarantee that is now lost in the external-call path.
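As a purely hypothetical illustration of the distinction drawn in this thread, an external `tmerge_field` can satisfy the "total merge is bounded" requirement without checking each individual result, for example with a classic widening operator. The sketch below is a variant of the earlier made-up `IntervalLattice` example and is not part of this PR:

```julia
# Hypothetical sketch: bounding the *total* merge via widening rather than
# via a per-result simplicity check. Treating `a` as the running result, any
# interval bound that is still moving jumps straight to +/-Inf, so each field
# can change at most twice before reaching a fixed point, no matter how often
# it is merged again.
function Core.Compiler.tmerge_field(𝕃::IntervalLattice, @nospecialize(a), @nospecialize(b))
    a isa Interval && b isa Interval || return nothing
    lo = b.lo >= a.lo ? a.lo : -Inf   # lower bound still decreasing => widen to -Inf
    hi = b.hi <= a.hi ? a.hi : Inf    # upper bound still increasing => widen to +Inf
    return Interval(lo, hi)           # an upper bound of both `a` and `b`
end
```

Here the convergence guarantee comes from the lattice element itself (each bound can widen at most once), which is the "total tmerge is bounded" property discussed above, rather than from an `issimplertype`-style check on every result.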

@aviatesk aviatesk merged commit e976206 into master Dec 20, 2022
@aviatesk aviatesk deleted the avi/external-field-merge branch December 20, 2022 07:33
"""
tmerge_field(𝕃::AbstractLattice, a, b) -> nothing or lattice element

Compute a lattice join of elements `a` and `b` over the lattice `𝕃`,
Sponsor Member:

This is still not quite the lattice that `tmerge` refers to. It is a complexity-reduction join, like `tmerge`, but it deviates from `tmerge` because it is a join of invariant parameters instead of a covariant join, so the requirements are even a bit stricter.
