
Process memory keeps growing while the GC heap stays small; working set is large #50758

Closed
NlMay opened this issue Apr 6, 2021 · 6 comments
Labels: area-Diagnostics-coreclr, question

Comments

NlMay commented Apr 6, 2021

Description

The service handles many requests per second, and the gen 0, gen 1, and gen 2 GC heap sizes are all small. It is only the working set that keeps growing. What causes this?

Configuration

CentOS Linux release 7.8.2003 (Core)
.NET 5.0

Regression?

Other information

Line chart: (image attached)
Output from the global dotnet-counters tool for the dotnet process: (image attached)
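
One way to narrow down where the growth is coming from, from inside the process, is to log the managed heap size, the GC's committed bytes, and the OS working set side by side: if the working set climbs while the first two stay flat, the extra memory is outside the GC heap (native allocations, loader allocations, pinned buffers, memory-mapped files, and so on). Below is a minimal sketch, assuming .NET 5 or later (GCMemoryInfo.TotalCommittedBytes is available from .NET 5); the one-minute interval and console output are arbitrary choices.

```csharp
using System;
using System.Threading;

class MemoryProbe
{
    static void Main()
    {
        // Log managed heap size, GC committed bytes, and OS working set once per minute.
        // A growing working set with flat GC numbers points at memory outside the GC heap.
        while (true)
        {
            var gcInfo = GC.GetGCMemoryInfo();
            Console.WriteLine(
                $"{DateTime.UtcNow:O} " +
                $"heap={GC.GetTotalMemory(forceFullCollection: false) / 1024 / 1024} MB " +
                $"gcCommitted={gcInfo.TotalCommittedBytes / 1024 / 1024} MB " +
                $"workingSet={Environment.WorkingSet / 1024 / 1024} MB");
            Thread.Sleep(TimeSpan.FromMinutes(1));
        }
    }
}
```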

@mairaw transferred this issue from dotnet/core Apr 6, 2021
@dotnet-issue-labeler

I couldn't figure out the best area label to add to this issue. If you have write-permissions please help me learn by adding exactly one area label.

@dotnet-issue-labeler bot added the untriaged label Apr 6, 2021
@ghost

ghost commented Apr 6, 2021

Tagging subscribers to this area: @dotnet/gc
See info in area-owners.md if you want to be subscribed.


@Maoni0
Member

Maoni0 commented Apr 6, 2021

@sywhang recently made a change to expose committed bytes to counters. Including him to see if he can give you a build of dotnet-counters that shows the committed-bytes data. Of course, it may not have anything to do with the GC heap; the committed data would show whether or not that is the case.
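
In the meantime, the existing System.Runtime counters can also be read from inside the process with an EventListener, which is the same channel dotnet-counters consumes. Below is a minimal sketch, assuming the standard System.Runtime counter names gc-heap-size and working-set and an arbitrary one-second reporting interval; instantiate the listener once at startup and keep the reference alive for the lifetime of the process.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics.Tracing;

// Subscribes to the System.Runtime EventCounters from inside the process so the
// values can be written to the application's own logs.
sealed class RuntimeCounterListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "System.Runtime")
        {
            // Ask the runtime to publish counter values every second.
            EnableEvents(source, EventLevel.Informational, EventKeywords.All,
                new Dictionary<string, string?> { ["EventCounterIntervalSec"] = "1" });
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        if (eventData.EventName != "EventCounters" || eventData.Payload is null)
            return;

        foreach (var payload in eventData.Payload)
        {
            // Each counter sample arrives as a dictionary with "Name", "Mean", etc.
            if (payload is IDictionary<string, object> counter &&
                counter.TryGetValue("Name", out var name) &&
                (Equals(name, "gc-heap-size") || Equals(name, "working-set")))
            {
                counter.TryGetValue("Mean", out var value);
                Console.WriteLine($"{name}: {value}");
            }
        }
    }
}
```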

@Maoni0 added the area-Diagnostics-coreclr label and removed the untriaged label Apr 6, 2021
@ghost

ghost commented Apr 6, 2021

Tagging subscribers to this area: @tommcdon
See info in area-owners.md if you want to be subscribed.


@NlMay
Author

NlMay commented Apr 7, 2021

It is the working set where the growth was observed.

@tommcdon added this to the 7.0.0 milestone Jul 6, 2021
@eiriktsarpalis added the question label and removed the customer assistance label Oct 5, 2021
@tommcdon
Member

We added GC committed bytes in .NET 6, so hopefully this issue can be investigated further. Since the issue is older than a year, closing it. Feel free to comment or open a new issue if still blocking.
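
For anyone hitting the same symptom on .NET 6 or later: the committed-bytes data should show up when monitoring the process with `dotnet-counters monitor --process-id <pid> System.Runtime` (the exact counter display name in that set is an assumption here), alongside the existing GC heap size and working set counters, so the comparison sketched above is possible without a custom build.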

@ghost locked as resolved and limited conversation to collaborators Jun 17, 2022