
StackOverflow when inferring Base.vect with many (>1000) arguments #55341

Closed
topolarity opened this issue Aug 2, 2024 · 5 comments
Labels
compiler:inference Type inference

Comments

topolarity (Member) commented Aug 2, 2024

julia> x = 1.0
           # foo() = [x, 1.0, 2.0, 3.0, ...]
julia> @eval foo() = [x, $([Float64(i) for i=1:1000]...)]
foo (generic function with 1 method)

julia> foo()
Internal error: stack overflow in type inference of foo().
This might be caused by recursion over very long tuples or argument lists.

The problematic inference stack looks (roughly) like this:

1. Base.vect(::Any, ::Float64 (1000 times)...)
2. Base.promote_typeof(::Any, ::Float64, ::Float64, ::Float64, ::Float64 (1000 times)...)
3. Base.promote_typeof(::Any, ::Float64 (999 times)...)
4. Base.promote_typeof(::Any, ::Float64 (998 times)...)
5. Base.promote_typeof(::Any, ::Float64 (997 times)...)
   etc... (996 more times)
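
For context, the linear stack depth falls out of the varargs recursion in promote_typeof, which (roughly, modulo inlining annotations) is defined in Base like this:

promote_typeof(x) = typeof(x)
promote_typeof(x, xs...) = promote_type(typeof(x), promote_typeof(xs...))

Each call peels off a single argument, so inferring the 1001-argument call above pushes ~1000 nested inference frames.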

Code like this can apparently end up being generated by MTK (ModelingToolkit, via SymbolicUtils.Code.create_array), which is where I encountered it.

topolarity added the compiler:inference (Type inference) label Aug 2, 2024
topolarity (Member, Author) commented

I think the strategy to solve this downstream will be to:

  • inline this reduction (e.g. promote_type(promote_type(promote_type(A, B), C), D)), and/or
  • change the reduction strategy to minimize recursion depth (a parallel reduction would yield O(log N) recursion depth; see the sketch below)

Both approaches avoid the need for inference to descend deep recursive call chains (and in theory, runtime performance should also be better).
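
As a minimal illustrative sketch (not the actual Base implementation), a divide-and-conquer reduction keeps the inferred recursion depth logarithmic in the number of arguments:

# Hypothetical tree-shaped reduction: splitting the argument tuple in half
# at each step gives O(log N) recursion depth instead of O(N).
promote_typeof_tree(x) = typeof(x)
promote_typeof_tree(x, y) = promote_type(typeof(x), typeof(y))
function promote_typeof_tree(xs...)
    h = length(xs) ÷ 2
    promote_type(promote_typeof_tree(xs[1:h]...),
                 promote_typeof_tree(xs[h+1:end]...))
end

With this shape, the 1001-argument call from the repro needs a recursion depth of only about log2(1001) ≈ 10 (a real implementation would also want the tuple slicing to constant-fold so inference can resolve it fully).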

topolarity (Member, Author) commented

It also seems like inference should have better limits here, so that afoldl doesn't grow an arbitrarily long inference stack in the first place.

vtjnash (Sponsor, Member) commented Aug 2, 2024

What version are you using? This seems likely to have been fixed already by #53665, since there is no afoldl in your backtrace, but there is one on master.

oscardssmith (Member) commented

I can confirm this works on master.

topolarity (Member, Author) commented Aug 2, 2024

Yeah, I hit this on 1.10. Sorry about that; I should have checked on master, but it's been a lot of different Julia versions this week 😅
