
Make sure the initial mark list size is capped #76083

Merged 1 commit into dotnet:main on Oct 5, 2022

Conversation

@cshung (Member) commented on Sep 23, 2022

Fixes #75538

Part of the reason we have a large working set when running on a machine with 64GB of memory is our mark list initial sizing logic.

The existing logic:

For workstation GC, the initial size is set proportional to soh_segment_size. The heuristic is that a large segment size probably means more surviving objects, and therefore a larger mark list is needed.

Later on, we may hit a mark list overflow, in which case we grow the mark list. Because the mark list has to be sorted, we cap its size to make sure we don't end up with a huge list and spend all our time sorting.
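A minimal sketch of the existing logic, under stated assumptions: soh_segment_size is a real quantity in gc.cpp, but mark_list_scale, max_mark_list_size, and both function names below are illustrative stand-ins, not the runtime's actual code.

```cpp
#include <algorithm>
#include <cstddef>

// Illustrative tuning values only; the real constants live in gc.cpp.
const size_t soh_segment_size   = 256 * 1024 * 1024; // grows with the heap/hard limit
const size_t mark_list_scale    = 64;                // hypothetical proportionality factor
const size_t max_mark_list_size = 4 * 1024 * 1024;   // hypothetical cap, in entries

// Existing logic (sketch): the initial size is proportional to the segment
// size, so a machine with big segments allocates a big mark list up front.
size_t initial_mark_list_size()
{
    return soh_segment_size / mark_list_scale;
}

// Existing logic (sketch): on overflow the list doubles, but the growth is
// capped so that sorting the list stays affordable.
size_t grown_mark_list_size(size_t current_size)
{
    return std::min(current_size * 2, max_mark_list_size);
}
```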

The flaw:

Since we have the capability to grow the mark list, there is no need to create a big one up front. Also, since grow_mark_list already has logic to cap the mark list size, it doesn't make sense for the initial size to be bigger than that cap.

The fix:

I extracted the logic that computes the cap into a helper function, so the same cap can be used to limit the initial size computation.
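A sketch of the fix, continuing the hypothetical names from the previous block (these are not the helper names in gc.cpp): the cap computation moves into one helper, and both the initial-size path and the growth path clamp against it.

```cpp
// Sketch of the fix: one helper owns the cap, and both callers consult it.
size_t get_mark_list_size_cap()
{
    return max_mark_list_size; // previously computed inline in the growth path
}

size_t initial_mark_list_size_capped()
{
    size_t proportional = soh_segment_size / mark_list_scale;
    return std::min(proportional, get_mark_list_size_cap());
}

size_t grown_mark_list_size_capped(size_t current_size)
{
    return std::min(current_size * 2, get_mark_list_size_cap());
}
```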

The benefits:

By reducing the mark list size, we always use less memory. The cap also bounds the sorting time: without this fix we could end up sorting very large lists, and now that cannot happen.

The risk:

A reduced mark list may not be able to hold all of the surviving objects; in that case, plan_phase has to walk all the objects, surviving or not, and that could be a regression. For that to happen, a segment would need quite a few surviving objects (enough to overflow the capped list), yet a low enough survival rate that walking the free objects actually costs something. IMO, this should be quite unlikely.
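To illustrate the trade-off, here is a toy model (not gc.cpp's actual plan_phase): with a usable mark list, planning visits only the sorted survivors, while the overflow fallback has to step through every object in the range, free objects included.

```cpp
#include <cstddef>
#include <cstdint>

// Toy object layout; the real GC walks objects via their method tables.
struct obj_t
{
    size_t size;   // object size in bytes, used to step to the next object
    bool   marked; // survivor flag set during the mark phase
};

// Mark list path (sketch): cost is proportional to the number of survivors.
void plan_from_mark_list(obj_t** mark_list, size_t survivor_count)
{
    for (size_t i = 0; i < survivor_count; i++)
    {
        obj_t* survivor = mark_list[i];
        (void)survivor; // decide relocation/compaction for this survivor
    }
}

// Overflow fallback (sketch): cost is proportional to all objects in the
// range, free ones included; that extra walking is the potential regression.
void plan_by_walking(uint8_t* start, uint8_t* end)
{
    for (uint8_t* p = start; p < end; p += ((obj_t*)p)->size)
    {
        if (((obj_t*)p)->marked)
        {
            // decide relocation/compaction for this survivor
        }
    }
}
```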

@ghost commented Sep 23, 2022

Tagging subscribers to this area: @dotnet/gc
See info in area-owners.md if you want to be subscribed.


@cshung cshung changed the title [WIP] Make sure the initial mark list size is capped Make sure the initial mark list size is capped Sep 26, 2022
@cshung cshung force-pushed the public/limit-mark-list-initialization branch from 7873cac to bc67958 on October 4, 2022
@cshung cshung merged commit 0041da9 into dotnet:main Oct 5, 2022
@cshung cshung deleted the public/limit-mark-list-initialization branch October 5, 2022 16:19
cshung added a commit to cshung/runtime that referenced this pull request Oct 6, 2022
carlossanlop pushed a commit that referenced this pull request Oct 7, 2022
* Make special_sweep_p a per heap member (#74625)

* Low memory fixes

* Make sure the initial mark list size is capped (#76083)
@ghost ghost locked as resolved and limited conversation to collaborators Nov 4, 2022
Successfully merging this pull request may close: Extremely high and unexpected memory commit sizes using DOTNET_GCHeapHardLimit (#75538)