
Improve efficiency of link_ancestors #2442

Closed
gtsambos opened this issue Jul 26, 2022 · 11 comments · Fixed by #2456
Labels
Performance This issue addresses performance, either runtime or memory

Comments

@gtsambos
Member

I'm noticing that link_ancestors is scaling much more poorly than I intuitively expected (memory seems to be super-linear with respect to sample size?! :( ). I'll think a bit more about why this might be, but there is one reasonably small 'edit' to the algorithm we could make that should improve things somewhat.

Currently, we're iterating through all of the edges in the edge table (see here), but we should actually just stop as soon as we reach the point in the edge table where the parent node IDs exceed the largest of the supplied ancestral node IDs.
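
As a rough sketch of what I mean (illustrative names only -- this isn't the actual implementation, and it assumes edges are visited in table order, i.e. sorted by parent time, with node IDs increasing with node time as in msprime output):

```python
# Rough sketch of the proposed early exit. Assumes the supplied ancestral node
# IDs are in `ancestors`, and that no edge after the break point can have one
# of them as its parent.
import tskit

def edges_to_process(tables: tskit.TableCollection, ancestors):
    """Yield edges until the parent ID exceeds the largest requested ancestor."""
    max_ancestor = max(ancestors)
    for edge in tables.edges:
        if edge.parent > max_ancestor:
            break  # all remaining parents have larger IDs, so stop here
        yield edge
```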

gtsambos added the Performance label on Jul 26, 2022
@gtsambos
Member Author

gtsambos commented Jul 26, 2022

Also, rough experiments suggest runtime is larger for a more recent set of census ancestors 😱 (edit: memory too, I think)

@jeromekelleher
Member

Hmm, where's the memory being used, do you think? What's the size of the actual output as you scale the sample size?

@gtsambos
Member Author

gtsambos commented Jul 26, 2022 via email

@gtsambos
Member Author

gtsambos commented Jul 26, 2022 via email

@jeromekelleher
Member

It would be interesting to see some data here, to see what we can dig up. I've found mprof really helpful for memory stuff.
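
Roughly speaking, you just decorate the function you care about and run the script under mprof -- a sketch only, the wrapper function here is illustrative:

```python
# Sketch of hooking a function into mprof: decorate it with memory_profiler's
# @profile, then run `mprof run script.py` followed by `mprof plot`. The plot
# shows memory over time, with the decorated function's entry/exit marked.
from memory_profiler import profile

@profile
def run_link_ancestors(tables, samples, ancestors):
    # tables is a tskit.TableCollection; returns the linking EdgeTable
    return tables.link_ancestors(samples, ancestors)
```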

@gtsambos
Member Author

Thanks Jerome, will do!

@gtsambos
Member Author

gtsambos commented Aug 3, 2022

okay, I used scalene instead of mprof, but it seems to confirm that this change would make a huge difference in practice:

> Currently, we're iterating through all of the edges in the edge table (see here) but we should actually just stop as soon as we reach a point in the edge table where the parent node IDs exceed the largest of the supplied ancestral node IDs.

I ran three simulations using the same demographic scenario which contained a census at time=500, and then applied the AncestorMap (the Python mockup of link_ancestors) to each of these using the census nodes as the 'ancestors' input:

  1. 15 individuals (30 chromosomes)
  2. 150 individuals (300 chromosomes)
  3. 150 individuals (300 chromosomes), with the simulation stopped at time=501 (just after the census)

I also used the same random seed. Note that the table output in (2) and (3) is identical, so any differences in memory/time/CPU etc. are due to computations happening after all the ancestral nodes have been processed.
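
For reference, the setup was along these lines (a sketch rather than the exact script -- the demographic parameters are placeholders, and this calls the library's link_ancestors entry point rather than the Python mockup):

```python
# Sketch of the experimental setup: a census at time 500, then ancestry linked
# back to the census nodes. Population size, rates and sequence length are
# placeholders, not the values used in the actual experiments.
import msprime
import numpy as np

demography = msprime.Demography()
demography.add_population(name="A", initial_size=10_000)
demography.add_census(time=500)

ts = msprime.sim_ancestry(
    samples={"A": 150},        # 150 diploid individuals = 300 sample chromosomes
    demography=demography,
    sequence_length=1e7,
    recombination_rate=1e-8,
    random_seed=42,
    # for scenario 3, additionally pass end_time=501 to stop just after the census
)

census_nodes = np.where(ts.tables.nodes.time == 500)[0].astype(np.int32)
linked_edges = ts.tables.link_ancestors(samples=ts.samples(), ancestors=census_nodes)
```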

@gtsambos
Member Author

gtsambos commented Aug 3, 2022

Here's the peak memory usage and runtime for each scenario:

  1. 30 MB (17 sec)
  2. 925 MB (1603 sec)
  3. 15 MB (13 sec)

@gtsambos
Member Author

gtsambos commented Aug 4, 2022

Here is the full profiling output for simulation 2, and here is the script I was profiling (just a stripped-down copy of python/tests/simplify.py).

If you scroll down to the function profile at the bottom, you see that the vast majority of the memory goes towards initialising new segments, which are fed into the cumulative list of ancestral segments via the add_ancestry method. This is an entirely internal object -- if we stopped building it once all the provided ancestral nodes have been processed, we'd save a huge number of these operations.

I didn't directly measure the size of the table output, but there's strong evidence to suggest that it's a negligible factor here. For simulations 2 and 3, this table must be at most 15 MB (and is probably a lot smaller), since simulation 3's entire run peaked at 15 MB and produces the identical table -- and given how big simulation 2 was, this looks to be pretty small compared to the size of the internal segment lists. All of this output is created in the record_edge method (here), which doesn't even make it into the function profile output because it's so minimal.
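
(For what it's worth, the output size is easy to check directly -- something along these lines, assuming you have the EdgeTable returned by link_ancestors, or the one built by the mockup, in hand:)

```python
import tskit

def report_output_size(edges: tskit.EdgeTable) -> None:
    # nbytes is the total memory footprint of the table's columns, in bytes
    size_mib = edges.nbytes / 1024**2
    print(f"output edge table: {size_mib:.1f} MiB across {edges.num_rows} edges")
```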

@gtsambos
Member Author

gtsambos commented Aug 4, 2022

so tldr -- I think this would be a good change to make!
