Container status not updated after termination caused by pod deletion #106896
/sig node
@howardjohn so you are initiating a race condition between
I am not quite sure what you mean by the race? I simply run
Ah, I see, sorry, I thought you were killing the first container manually and then initiating a
/assign
/triage accepted
After digging into the source code, I found that the reason the pod status is not updated is:
I need some advice before fixing it
One approach is to make
One question I have: is this a bug or a performance optimization? I do have a (potential) use case for reading this status, but I'm not sure how much overhead adding more writes on each pod kill would introduce.
IMO, if the exit time interval between containers is long, then it is a bug. But in most cases this may just add some overhead.
Since we have another user's use case for reading the status during container shutdown (#107183)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@songjiaxun: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
/remove-lifecycle rotten
/reopen
@SergeyKanzhelev: Reopened this issue. In response to this:
What happened?
Reproduction steps below, but at a high level:
What did you expect to happen?
After container 1 terminates, the containerStatus is updated to indicate the terminated container is no longer running.
How can we reproduce it (as minimally and precisely as possible)?
Apply this pod:
The echo container exits immediately; the sleep container does not exit until it receives SIGKILL.
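The original manifest was not captured above; a minimal two-container pod matching the described shape might look like this (the names foo, echo, and sleep, and the busybox image, are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: echo           # exits immediately after printing
    image: busybox
    command: ["echo", "hello"]
  - name: sleep          # keeps running until it receives SIGKILL
    image: busybox
    command: ["sleep", "infinity"]
```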
Next, run kubectl delete foo. Observe that after a few seconds (up to 30s), the container status is not updated:
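One way to watch the reported statuses during deletion is a sketch like the following (assuming the pod is named foo in the default namespace; this requires a running cluster):

```shell
# Start the deletion without waiting for it to finish, so we can
# observe the pod's status while it terminates.
kubectl delete pod foo --wait=false

# Poll the per-container states. While the sleep container is still
# running, the echo container should already report a terminated
# state; this issue reports that it incorrectly stays "running".
kubectl get pod foo -o jsonpath='{.status.containerStatuses[*].state}'
```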
Kubelet logs, with -v6:
kubelet.txt
Anything else we need to know?
No response
Kubernetes version
Also tested on GKE 1.21
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)