
Latency & inference issues #706

Closed · timholy opened this issue on Sep 24, 2020 · 5 comments
Labels: Collection (contains multiple issues), Performance (also memory), planning (For discussion and planning development)


@timholy (Contributor) commented on Sep 24, 2020

My estimates suggest that about half the latency of Makie is due to type-inference and the other half to native code generation. While currently there's little short of PackageCompiler that can be done about native code, inference results can, under some circumstances, be cached in the precompile file. In recent conversations, a tentative plan has emerged:

  1. improve internal inferrability
  2. add precompiles (sketched after this list)
  3. profit
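
For reference, step 2 usually means emitting a precompile file like the ones SnoopCompile generates. A minimal sketch, with invented placeholder signatures (these are not actual Makie methods; real ones would come from SnoopCompile.parcel):

function _precompile_()
    # only run while generating the precompile cache
    ccall(:jl_generating_output, Cint, ()) == 1 || return nothing
    # placeholder signatures for illustration
    precompile(Tuple{typeof(Makie.scatter),Vector{Float64}})
    precompile(Tuple{typeof(Makie.plot!),Makie.Scene,Vector{Float64}})
end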

Here's a running list of things I'm finding to worry about in terms of inferrability:

A sometimes-useful trick for figuring out what types objects actually have in do-blocks: from @snoopi, the MethodInstance gives the specific types it was compiled with, and mi.def shows the method (including file/line number). Combining the two, you can annotate the arguments of the do-block. There's a risk, however: the instances you compiled may not be the full set of possible instances. So it's much better to proceed from actual knowledge of what the possible types are.
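
A minimal sketch of that workflow, assuming SnoopCompile's @snoopi and a made-up plotting call:

using SnoopCompile

# collect (inference time, MethodInstance) pairs while the code runs
timings = @snoopi tmin=0.01 begin
    using Makie
    scatter(rand(10))
end

t, mi = timings[end]    # the most expensive inference (sorted ascending)
mi.specTypes            # the concrete signature inference compiled
mi.def                  # the Method, including file/line number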

@jkrumbiegel (Member) commented:

Thanks a lot for looking into this! It will probably take some time for us to really understand what's going on and where the highest gains can be expected.

@timholy (Contributor, Author) commented on Sep 24, 2020

To be clear, it's not 100% clear even to me what will work and what won't. I have some experience from Revise & Julia itself, but that's not enough for me to say with confidence that I can see a complete path forward. In general, my experience has been that getting rid of inference problems helps in and of itself (better performance), gives you less vulnerability to invalidation, and perhaps makes precompilation more effective. It probably isn't strictly necessary to fix all inference problems to reduce inference time, but generally having fewer MethodInstances tends to correlate with less inference time and better reusability of previously-inferred (and hopefully cached) MethodInstances. And of course, if the user makes some plots (paying the latency cost) and then loads new packages that invalidate much of Makie's code, the next plot will pay the latency again. So on balance, I think it's probably worth doing even if there might be an easy shortcut that would kind of work without it.

If you're not aware, the main tools I've found useful for this are the following (a brief usage sketch follows the list):

  • solving inference problems: Cthulhu
  • measuring inference time: SnoopCompile
  • asking "how many different specializations of this method or function got inferred?": MethodAnalysis
  • general "what's taking time?" questions: Profiler
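
A minimal exploration session with these tools might look like this sketch (the calls being inspected are made up; package APIs as of late 2020):

using Cthulhu, MethodAnalysis, Makie

# interactively descend through inferred code, looking for non-concrete types
@descend scatter(rand(10))

# how many specializations of this function got inferred?
mis = methodinstances(Makie.convert_arguments)
length(mis)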

I should also say I have quite a few commitments over the next 1.5 months and so really should limit how much time I put into this.

@timholy (Contributor, Author) commented on Sep 25, 2020

Thinking a bit more about this, @extract actually could type-annotate things in a more fine-grained way. For example, if you know that alpha will always be a Float32 and color a Symbol (just making that up, I have no actual idea), then

@extract plot (color, colormap, colorrange, alpha)

could generate something like

color = plot.attributes[:color]::Observable{Symbol}
colormap = plot.attributes[:colormap]::Observable{Any}
colorrange = plot.attributes[:colorrange]::Observable{Any}
alpha = plot.attributes[:alpha]::Observable{Float32}

which would solve a lot of your type-inference problems very easily: all you'd need to do is change the @extract macro definition by adding some kind of Dict{Symbol,Type} that records the types based on name.

But I'd need someone who knows this package to generate that Dict for me.
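
To make the idea concrete, here is a rough sketch of such a table-driven @extract; the Dict entries and the macro body are invented for illustration and differ from Makie's actual definition:

# hypothetical lookup table; real entries must come from the package authors
const ATTRIBUTE_TYPES = Dict{Symbol,Type}(
    :color => Observable{Symbol},
    :alpha => Observable{Float32},
)

macro extract(plot, names)
    ex = Expr(:block)
    for name in names.args
        T = get(ATTRIBUTE_TYPES, name, Observable{Any})  # untyped fallback
        push!(ex.args,
              :($(esc(name)) = $(esc(plot)).attributes[$(QuoteNode(name))]::$T))
    end
    ex
end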

@SimonDanisch (Member) commented:

Yeah, the problem is that some of those have quite a large range of possible input types, and my philosophy has been to convert as late as possible, to let backends stay as close to the user input as desired (very helpful when a backend accepts some of the user input data directly, without conversions, and is faster that way).

But I'm sure we can improve this quite a bit. E.g., there are definitely some attributes we can convert early, and for the others we could probably use a wrapper type that always comes out with an exact type.

Btw, convert_attributes(input, ::key"attribute_name") should give us all the final converted types!
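
The wrapper idea could look something like this minimal sketch (name and design invented here, not Makie's implementation): the field type is always concrete, so attribute extraction infers cleanly, while the raw user value is still converted late.

using Observables

struct RawAttribute
    obs::Observable{Any}   # one concrete field type for any user input
    RawAttribute(@nospecialize(x)) = new(Observable{Any}(x))
end

# backends pull the raw value and convert it on demand
rawvalue(a::RawAttribute) = a.obs[]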

@timholy (Contributor, Author) commented on Sep 25, 2020

Yeah, it would be fine to leave some untyped. Generally, once type-specification starts causing feelings of stress, that's when I can tell I should just stop putting types on 😄.

@ffreyer added the planning, Collection, and Performance labels on Aug 21, 2024
@MakieOrg locked and limited conversation to collaborators on Aug 21, 2024
@ffreyer converted this issue into discussion #4163 on Aug 21, 2024
