1.5.10 causes memory leak in mysql container; regression from 1.4.13 #6707

Open
C-Higgins opened this issue Mar 21, 2022 · 28 comments

Comments

@C-Higgins

Description

I updated containerd.io on Fedora from 1.4.13 to 1.5.10. After this, all mysql docker containers across multiple projects began using 20+GB of memory where before they used <300MB. I restarted the computer and confirmed the problem persisted. I then rolled back to 1.4.13 and the problem was gone.

Steps to reproduce the issue

  1. Run a docker container with linux/amd64 arch with ENV MYSQL_VERSION=5.7.36-0ubuntu0.18.04.1
  2. Start mysqld inside the container
  3. Observe massive memory usage in 1.5.10

Describe the results you received and expected

No regression

What version of containerd are you using?

1.5.10

Any other relevant information

runc version 1.0.3
commit: v1.0.3-0-gf46b6ba
spec: 1.0.2-dev
go: go1.16.15
libseccomp: 2.5.3

Linux localhost.localdomain 5.16.7-200.fc35.x86_64 #1 SMP PREEMPT Sun Feb 6 19:53:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Show configuration if it is related to CRI plugin.

No response

@ssaaiidd

I have the same issue

runc version 1.0.3
commit: v1.0.3-0-gf46b6ba
spec: 1.0.2-dev
go: go1.16.15
libseccomp: 2.5.3

I implemented a temporary workaround using ulimit, as described in this comment: docker-library/mysql#579 (comment). It worked for me.
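
For anyone wanting the same workaround without touching service files, a minimal sketch of the docker run form (the nofile values and MYSQL_ROOT_PASSWORD here are illustrative; the linked comment may use different numbers):

# Cap the container's open-file limit at docker run time instead of relying on the
# (huge) limit inherited from containerd:
docker run -d --name mysql57 \
  --ulimit nofile=1024:524288 \
  -e MYSQL_ROOT_PASSWORD=example \
  mysql:5.7.36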

@ssaaiidd

The difference may be due to this PR: #4475

@C-Higgins
Author

I have the same issue

runc version 1.0.3
commit: v1.0.3-0-gf46b6ba
spec: 1.0.2-dev
go: go1.16.15
libseccomp: 2.5.3

I implemented a temp solution using ulimit like this comment. It's worked for me. docker-library/mysql#579 (comment)

Yes, I can confirm that this is a valid workaround. It seems very likely that raising the nofile limit to infinity is the problem on some systems.

@thaJeztah
Member

That's .... interesting. If that's causing an issue, that feels like a bug in systemd 🤔 (as infinity should be equivalent to the old value)
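
For anyone who wants to check what infinity actually resolves to on their host, a rough sketch (assuming containerd runs under systemd): infinity is capped by the kernel's fs.nr_open, which differs between distros and kernels, so it is not necessarily equivalent to the old 1048576 value everywhere.

sysctl fs.nr_open                                          # kernel ceiling for a per-process nofile limit
cat /proc/$(pidof containerd)/limits | grep 'open files'   # the limit containerd (and thus its containers) actually got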

@C-Higgins
Author

I will do a repro test later this week with 1.5 and 1.6, after modifying the service file, to see if the problem is isolated to just that line

@thaJeztah
Member

Thanks in advance for that!

Slightly orthogonal: I recall I once opened a ticket (more of a "reminder") to look into other options in this area. That ticket was about the systemd unit for docker-ce, but the same applies to the one for containerd (which was originally based on what we had in docker): docker/for-linux#73. If anyone arriving here is interested in researching those options (also taking into account whether they're supported on all the usual distros), that would be welcome.

@C-Higgins
Author

C-Higgins commented Mar 24, 2022

I installed 1.5 and confirmed the problem still persisted. I then modified /lib/systemd/system/containerd.service to read LimitNOFILE=1048576, then restarted the service and reloaded the daemon: sudo systemctl restart containerd, followed by sudo systemctl daemon-reload. The problem still persisted. I then restarted the service once more, because I thought the daemon might have cached the old unit file (so in total I restarted service, daemon, then service again), and the problem was gone. The Docker stable repos don't have 1.6 available, so I didn't test that.

@fuweid
Member

fuweid commented Mar 25, 2022

I think you should run daemon-reload first and then restart containerd. Also note that this only takes effect for new containers, not for existing ones.
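
For reference, a minimal sketch of that sequence with the reload done first (assuming the unit file lives at /lib/systemd/system/containerd.service, as on Fedora):

sudo sed -i 's/^LimitNOFILE=.*/LimitNOFILE=1048576/' /lib/systemd/system/containerd.service   # or edit it by hand
sudo systemctl daemon-reload        # pick up the changed unit file first
sudo systemctl restart containerd   # then restart so new containers inherit the limit
# Existing containers keep the old limit until they are recreated.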

@CryptoRodeo

CryptoRodeo commented Mar 28, 2022

I applied the same fix as @C-Higgins and it also got rid of the issue on my machine.
(containerd version 1.5.11 )

@forestuser

forestuser commented Apr 11, 2022

I installed 1.5 and confirmed problem still persisted. I then modified /lib/systemd/system/containerd.service to read LimitNOFILE=1048576. I then restarted the service and the daemon: sudo systemctl restart service containerd sudo systemctl daemon-reload. Problem still persisted. I then restarted the service once more, because I thought the daemon might have cached the service (so in total I restarted service, daemon, service) and the problem was gone. Docker stable repos don't have 1.6 available so I didn't test that.

This did not work for me.

docker-library/mysql#840

апр 11 08:40:09 tk-kv.localhost kernel: eth0: renamed from vethe040851
апр 11 08:40:09 tk-kv.localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf1fa9b5: link becomes ready
апр 11 08:40:09 tk-kv.localhost kernel: docker0: port 1(vethf1fa9b5) entered blocking state
апр 11 08:40:09 tk-kv.localhost kernel: docker0: port 1(vethf1fa9b5) entered forwarding state
апр 11 08:40:09 tk-kv.localhost NetworkManager[702]: [1649648409.8858] device (vethf1fa9b5): carrier: link connected
апр 11 08:40:09 tk-kv.localhost NetworkManager[702]: [1649648409.8861] device (docker0): carrier: link connected
апр 11 08:40:09 tk-kv.localhost systemd-resolved[62431]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
апр 11 08:40:11 tk-kv.localhost avahi-daemon[704]: Joining mDNS multicast group on interface vethf1fa9b5.IPv6 with address fe80::440a:aff:fee3:4278.
апр 11 08:40:11 tk-kv.localhost avahi-daemon[704]: New relevant interface vethf1fa9b5.IPv6 for mDNS.
апр 11 08:40:11 tk-kv.localhost avahi-daemon[704]: Registering new address record for fe80::440a:aff:fee3:4278 on vethf1fa9b5.*.
апр 11 08:40:27 tk-kv.localhost kernel: mysqld invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=0
апр 11 08:40:27 tk-kv.localhost kernel: CPU: 0 PID: 294221 Comm: mysqld Tainted: P OE 5.16.18-200.fc35.x86_64 #1
апр 11 08:40:27 tk-kv.localhost kernel: Hardware name: Gigabyte Tecohnology Co., Ltd. H61M-DS2/H61M-DS2, BIOS F4 12/21/2011
апр 11 08:40:27 tk-kv.localhost kernel: Call Trace:
апр 11 08:40:27 tk-kv.localhost kernel:
апр 11 08:40:27 tk-kv.localhost kernel: dump_stack_lvl+0x48/0x5e
апр 11 08:40:27 tk-kv.localhost kernel: dump_header+0x4a/0x1fd
апр 11 08:40:27 tk-kv.localhost kernel: oom_kill_process.cold+0xb/0x10
апр 11 08:40:27 tk-kv.localhost kernel: out_of_memory+0x229/0x4d0
апр 11 08:40:27 tk-kv.localhost kernel: mem_cgroup_out_of_memory+0x120/0x140
апр 11 08:40:27 tk-kv.localhost kernel: try_charge_memcg+0x6a6/0x760
апр 11 08:40:27 tk-kv.localhost kernel: ? __alloc_pages+0xd6/0x210
апр 11 08:40:27 tk-kv.localhost kernel: charge_memcg+0x36/0x130
апр 11 08:40:27 tk-kv.localhost kernel: __mem_cgroup_charge+0x29/0x80
апр 11 08:40:27 tk-kv.localhost kernel: __handle_mm_fault+0xb56/0x1470
апр 11 08:40:27 tk-kv.localhost kernel: handle_mm_fault+0xb2/0x280
апр 11 08:40:27 tk-kv.localhost kernel: do_user_addr_fault+0x1ce/0x690
апр 11 08:40:27 tk-kv.localhost kernel: exc_page_fault+0x72/0x170
апр 11 08:40:27 tk-kv.localhost kernel: ? asm_exc_page_fault+0x8/0x30
апр 11 08:40:27 tk-kv.localhost kernel: asm_exc_page_fault+0x1e/0x30
апр 11 08:40:27 tk-kv.localhost kernel: RIP: 0033:0x7f9f975fa2b3
апр 11 08:40:27 tk-kv.localhost kernel: Code: 47 10 f3 0f 7f 44 17 e0 f3 0f 7f 47 20 f3 0f 7f 44 17 d0 f3 0f 7f 47 30 f3 0f 7f 44 17 c0 48 01 fa 48 83 e2 c0 48 39 d1 74 c0 <66> 0f 7f 01 66 0f 7f 41 10 66 0f 7f 41 20 66 0f 7f 41 30>
апр 11 08:40:27 tk-kv.localhost kernel: RSP: 002b:00007ffc90812a38 EFLAGS: 00010202
апр 11 08:40:27 tk-kv.localhost kernel: RAX: 00007f9b97553430 RBX: 000000003ffffff8 RCX: 00007f9c9693b000
апр 11 08:40:27 tk-kv.localhost kernel: RDX: 00007f9f97552f80 RSI: 0000000000000000 RDI: 00007f9b97553430
апр 11 08:40:27 tk-kv.localhost kernel: RBP: 00007ffc90812a80 R08: 0000000000000000 R09: 00007f9b97553060
апр 11 08:40:27 tk-kv.localhost kernel: R10: 0000000000000022 R11: 00007f9b97553420 R12: 00007f9b97553030
апр 11 08:40:27 tk-kv.localhost kernel: R13: 0000564afa5858a8 R14: 0000564afa5858a0 R15: 0000000000000000
апр 11 08:40:27 tk-kv.localhost kernel:
апр 11 08:40:27 tk-kv.localhost kernel: memory: usage 2097152kB, limit 2097152kB, failcnt 223331
апр 11 08:40:27 tk-kv.localhost kernel: swap: usage 2097152kB, limit 2097152kB, failcnt 2
апр 11 08:40:27 tk-kv.localhost kernel: Memory cgroup stats for /system.slice/docker-39dbb45552a10932a86eafc687fbae48d5bec97eb6dac316249cb868a3c33d50.scope:
апр 11 08:40:27 tk-kv.localhost kernel: anon 2137669632
file 0
kernel_stack 49152
pagetables 8548352
percpu 144
sock 0
shmem 0
file_mapped 0
file_dirty 0
file_writeback 876544
swapcached 909312
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 2138464256
active_anon 102400
inactive_file 0
active_file 0
unevictable 0
slab_reclaimable 78664
slab_unreclaimable 168320
slab 246984
workingset_refault_anon 0
workingset_refault_file 1047
workingset_activate_anon 0
workingset_activate_file 0
workingset_restore_anon 0
workingset_restore_file 0
workingset_nodereclaim 0
pgfault 1048185
pgmajfault 37
pgrefill 25
pgscan 2092313
pgsteal 525113
pgactivate 45
pgdeactivate 22
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0
апр 11 08:40:27 tk-kv.localhost kernel: Tasks state (memory values in pages):
апр 11 08:40:27 tk-kv.localhost kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
апр 11 08:40:27 tk-kv.localhost kernel: [ 294189] 0 294189 967 658 40960 93 0 docker-entrypoi
апр 11 08:40:27 tk-kv.localhost kernel: [ 294220] 0 294220 967 499 40960 93 0 docker-entrypoi
апр 11 08:40:27 tk-kv.localhost kernel: [ 294221] 0 294221 4203297 524054 8491008 524160 0 mysqld
апр 11 08:40:27 tk-kv.localhost kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=docker-39dbb45552a10932a86eafc687fbae48d5bec97eb6dac316249cb868a3c33d50.scope,mems_allowed=0,oom_memcg=/system.slice/docker-39dbb4>
апр 11 08:40:27 tk-kv.localhost kernel: Memory cgroup out of memory: Killed process 294221 (mysqld) total-vm:16813188kB, anon-rss:2087228kB, file-rss:8988kB, shmem-rss:0kB, UID:0 pgtables:8292kB oom_score_adj:0
апр 11 08:40:27 tk-kv.localhost systemd[1]: docker-39dbb45552a10932a86eafc687fbae48d5bec97eb6dac316249cb868a3c33d50.scope: A process of this unit has been killed by the OOM killer.
апр 11 08:40:27 tk-kv.localhost kernel: oom_reaper: reaped process 294221 (mysqld), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
апр 11 08:40:27 tk-kv.localhost systemd[1]: docker-39dbb45552a10932a86eafc687fbae48d5bec97eb6dac316249cb868a3c33d50.scope: Deactivated successfully.

@forestuser

forestuser commented Apr 12, 2022

Hi, this worked for me for MySQL:

docker system prune -a
nano /lib/systemd/system/containerd.service
LimitNOFILE=1048576
systemctl daemon-reload
systemctl restart containerd

but it did not work for:

gitlab-runner -> docker -> containerd 1.5.11

yarn install
апр 12 08:40:52 debug3.local gitlab-runner[1068]: Checking for jobs... nothing runner=rccAVBWo
апр 12 08:40:54 debug3.local kernel: webpack invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
апр 12 08:40:54 debug3.local kernel: CPU: 15 PID: 3018610 Comm: webpack Not tainted 5.16.18-200.fc35.x86_64 #1
апр 12 08:40:54 debug3.local kernel: Hardware name: Micro-Star International Co., Ltd. MS-7A38/B450M PRO-VDH MAX (MS-7A38), BIOS B.70 06/10/2020

debug3.local kernel: Memory cgroup out of memory: Killed process 3017931 (webpack) total-vm:38547452kB, anon-rss:3423448kB, file-rss:2896kB, shmem-rss:0kB, UID:1000 pgtables:85972kB oom_score_adj:0

@kangclzjc
Contributor

It seems there is a bug in systemd: if you need more than 65535 open files, you need to change LimitNOFILE manually to a bigger number, such as 1048576.

systemd/systemd#6559

Maybe we need to revert PR #4475.
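
If you would rather not edit the packaged unit file (package updates can overwrite it), a drop-in override achieves the same thing; a minimal sketch using the value mentioned above:

sudo mkdir -p /etc/systemd/system/containerd.service.d
printf '[Service]\nLimitNOFILE=1048576\n' | sudo tee /etc/systemd/system/containerd.service.d/override.conf
sudo systemctl daemon-reload && sudo systemctl restart containerd
systemctl show containerd -p LimitNOFILE   # should now report 1048576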

@jsalgado78

It works fine on Ubuntu 18.04.6 but fails on Fedora 35 running the same containerd and runc versions:

go1.16.15
containerd: 1.5.11
runc: 1.0.3

These commands crash on Fedora 35 because the container uses too much memory:
docker run -it --rm mysql:5.7.36
docker run -it --rm mysql:5.5.62

This command works fine on Fedora 35:
docker run -it --rm mysql:8.0.29

mysql:5.x works fine on Fedora 35 after setting LimitNOFILE to 1048576 in containerd.service, as @kangclzjc said.

@AlexAtkinson

AlexAtkinson commented Jun 7, 2022

Hello. I'm experiencing the same issue.

Launching mysql:8.* works great, but mysql:5.7.* causes immediate 100% memory consumption (htop), and results in the following in /var/log/messages:

2022-06-06T17:23:24.094275-04:00 laptop kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=user.slice,mems_allowed=0,global_oom,task_memcg=/system.slice/docker-xxx.scope,task=mysqld,pid=38421,uid=0
2022-06-06T17:23:24.094288-04:00 laptop kernel: Out of memory: Killed process 38421 (mysqld) total-vm:16829404kB, anon-rss:12304300kB, file-rss:108kB, shmem-rss:0kB, UID:0 pgtables:28428kB oom_score_adj:0
2022-06-06T17:23:24.094313-04:00 laptop systemd[1]: docker-xxx.scope: A process of this unit has been killed by the OOM killer.
2022-06-06T17:23:24.856029-04:00 laptop systemd[1]: docker-xxx.scope: Deactivated successfully.

Versions:

Docker version 20.10.16, build aa7e414
containerd containerd.io 1.6.4 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
runc version 1.1.1
commit: v1.1.1-0-g52de29d
spec: 1.0.2-dev
go: go1.17.9
libseccomp: 2.5.3
Linux laptop1138 5.17.12-200.fc35.x86_64 #1 SMP PREEMPT Mon May 30 16:58:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Fedora release 35 (Thirty Five)

Limits:

cat /proc/$(pidof dockerd)/limits
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        unlimited            unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             unlimited            unlimited            processes 
Max open files            1073741816           1073741816           files     
Max locked memory         8388608              8388608              bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       62780                62780                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us

ulimit -a
real-time non-blocking time  (microseconds, -R) unlimited
core file size              (blocks, -c) unlimited
data seg size               (kbytes, -d) unlimited
scheduling priority                 (-e) 0
file size                   (blocks, -f) unlimited
pending signals                     (-i) 62780
max locked memory           (kbytes, -l) 8192
max memory size             (kbytes, -m) unlimited
open files                          (-n) 64000
pipe size                (512 bytes, -p) 8
POSIX message queues         (bytes, -q) 819200
real-time priority                  (-r) 0
stack size                  (kbytes, -s) 8192
cpu time                   (seconds, -t) unlimited
max user processes                  (-u) 62780
virtual memory              (kbytes, -v) unlimited
file locks                          (-x) unlimited

systemctl show containerd | grep LimitNOFILE
LimitNOFILE=infinity
LimitNOFILESoft=infinity

Creating the following custom limit for containerd does not resolve the issue.
/etc/systemd/system/containerd.service.d/custom.conf

[Service]
LimitNOFILE=1048576

Note: prior to this change, systemctl show containerd | grep LimitNOFILE reported infinity (as shown above).

systemctl show containerd | grep LimitNOFILE
LimitNOFILE=1048576
LimitNOFILESoft=1048576

This change, which was suggested in #3201, did not resolve the issue.

@AlexAtkinson

Removing all the Docker components and reinstalling resolved the issue (a new containerd version was released 22 hours ago).

 sudo dnf remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine

 sudo dnf -y install dnf-plugins-core

 sudo dnf config-manager \
    --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo

sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin
# REF: https://docs.docker.com/engine/install/fedora/

containerd --version
containerd containerd.io 1.6.6 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1

Thanks!

@VGerris

VGerris commented Oct 10, 2022

What worked for me on Fedora 36:
sudo vim /lib/systemd/system/containerd.service ->

#LimitNOFILE=infinity
LimitNOFILE=1048576

Then:

systemctl daemon-reload
sudo systemctl restart containerd
docker ps -a # (find id and then docker rm id)

run again:
docker run --name=mysql1 -d mysql/mysql-server:5.7

After that it worked for me, whereas with the value set to unlimited (infinity) the container crashed within seconds.
This change does not seem to be needed on Ubuntu and derivatives; is something else interfering here?
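
One way to compare hosts is to check the limit that processes inside a container actually inherit; a quick sketch (mirroring the alpine check used later in this thread):

docker run --rm alpine ash -c 'ulimit -Sn; ulimit -Hn'
# On an affected host these can be in the hundreds of millions or higher;
# on an unaffected host (or with the unit edited as above) they are far lower.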

@kzys
Member

kzys commented Oct 20, 2022

Launching mysql:8.* works great, but mysql:5.7.* causes immediate 100% memory consumption (htop), and results in the following in /var/log/messages:

MySQL has this bug: https://bugs.mysql.com/bug.php?id=96525.
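
That bug report appears to describe mysqld sizing internal structures based on the open-files limit it sees at startup. A quick way to inspect what a running container actually picked up (the container name is a placeholder):

docker exec <mysql-container> sh -c 'grep "open files" /proc/1/limits'
docker exec <mysql-container> mysql -uroot -p -e "SHOW VARIABLES LIKE 'open_files_limit';"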

@lgoebkes

Just to give this topic a little push here.
I had the same problem.
In our docker-compose file we need 3 different mysql servers, and since they had no memory limits set, they killed my whole system (Fedora 37).
A simple docker-compose up led to a system freeze and only a hard reset helped.

Setting LimitNOFILE=1048576 in /lib/systemd/system/containerd.service helped!

@Imartinn

Imartinn commented Jan 5, 2023

Having the same problem running mysql 5.5 on a Kind cluster on Fedora 36.
I haven't found a way to set the ulimit in Kubernetes, but I saw that MariaDB fixed this bug in 5.5, so I'm just using MariaDB now and it's working fine.

@creightonfrance

I am also having the same issue on Fedora. I have to change LimitNOFILE every time I run updates.

@giovannicimolin

I'm having the same issue on Manjaro (kernel 6.1.12-1), and changing to LimitNOFILE=1048576 made it work again.

@ThatCoffeeGuy

ThatCoffeeGuy commented Mar 14, 2023

So to summarize, this is what seemingly helped me on Fedora 37:

sudo nano /lib/systemd/system/containerd.service
LimitNOFILE=INFINITY --> LimitNOFILE=1048576 
docker stop <db>
docker rm <db>
sudo systemctl daemon-reload
sudo systemctl restart containerd
docker-compose up -d

@paralin

paralin commented Mar 24, 2023

FYI, this also breaks CUPS.

@Wesleyotio

@ThatCoffeeGuy thank you! This solved the same problem in my distro ( Manjaro linux )!

@polarathene
Contributor

polarathene commented Oct 7, 2023

Just a heads-up that the LimitNOFILE=infinity setting in both docker.service and containerd.service files has finally been removed from their respective projects.

Possibly those changes will be part of new releases before the end of the year:

* v25 for Docker (with backports planned to 20.10)

* v1.7.8 for containerd?

After that happens, this issue can be considered resolved?

@creightonfrance

Just a heads-up that the LimitNOFILE=infinity setting in both docker.service and containerd.service files has finally been removed from their respective projects.

Possibly those changes will be part of new releases before the end of the year:

* v25 for Docker (with backports planned to 20.10)

* v1.7.8 for containerd?

After that happens, this issue can be considered resolved?

I suppose so. FYI: I'm not very familiar with the Linux kernel or developing Linux apps.

If this is an issue of resource allocation, I suppose that would resolve it. If this is actually a memory leak or some bit of bad code, it might be worthwhile to dive into the two versions noted here and see if the code causing the issue can be identified.

I ran into issues trying to revert my containerd binaries, but I might try again.

@polarathene
Contributor

polarathene commented Oct 10, 2023

If this is an issue of resource allocation, I suppose that would resolve it.

I haven't tried to reproduce the mysql one, but this has been a problem affecting a lot of software, since outside of containers the expected limits are considerably lower.

The default soft limit is 1024. docker.service and containerd.service have overridden that to infinity, which can be as high as over 1 billion. Usually that just stalls the software for a much longer duration (sometimes hours), but some software affected by this also allocates memory proportionally, and as you can guess 1e3 vs 1e9 is quite the delta: a million times more. So if software that would normally allocate 1 MB in this scenario had its allocation code scale with this much larger limit, it would now use 1 TB.

Normally the cost is just CPU, from tasks like iterating through the whole limit range and closing potentially (though highly unlikely to be) open file descriptors.
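
Back-of-the-envelope arithmetic for that scaling, purely illustrative and assuming roughly 1 KiB of bookkeeping per possible file descriptor:

echo $(( 1024       * 1024 / 1024**2 )) MiB   # 1024 fds -> about 1 MiB
echo $(( 1000000000 * 1024 / 1024**3 )) GiB   # 1e9 fds  -> about 953 GiB (~1 TiB)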

If this is actually a memory leak or some bit of bad code, it might be worthwhile to dive into the two versions noted here and see if the code causing the issue can be identified.

True. In some affected projects that I used, I helped track down the problem area and implement a workaround. However, outside of container environments, where limits are sane and typically only raised when necessary, you're unlikely to encounter this sort of problem. The bug is more a misconfiguration: LimitNOFILE=infinity being applied to the containers, so every process in those containers runs with a soft limit well above the expected 1024.


In the meantime, you can start the container with the extra option docker run --ulimit "nofile=1024:524288", or, for the systemd services, set a drop-in override via systemctl edit containerd.service with the following:

[Service]
LimitNOFILE=1024:524288
  • You may need to do the same for docker.service until a release is out with that updated too.
  • After applying the edit for these files, restart Docker: systemctl restart docker.
  • You can verify these are the new limits applied with docker run --rm -it alpine ash -c 'ulimit -Sn && ulimit -Hn', you should get 1024 and 524288.

@Syndelis

Syndelis commented Nov 9, 2023

I wanted to add my own experience with this issue: apparently I can only reproduce it if swap is enabled. Starting the container with no swap (i.e. running swapoff <device>) makes the container work just fine, and I can then re-enable swap later.
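
A minimal sketch of that test (assuming it is acceptable to disable all swap on the machine for a moment):

sudo swapoff -a                   # temporarily disable all swap devices/files
docker start <mysql-container>    # or docker run / docker compose up, as appropriate
sudo swapon -a                    # re-enable swap once the container is running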
