Wallabag mariadb: Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.29.0.33' (This connection closed normally without authentication) #380
I see the same thing, and the container is listed as unhealthy.
I think this might be related to #381. When I fixed the healthcheck, the connection error went away. Try changing your docker-compose.yml to look like this:
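For reference, the healthcheck fix from #381 looked roughly like this; a sketch only, since the original snippet is missing here, and the `--connect` flag is assumed from the later compose file in this thread:

```yaml
  db:
    image: mariadb
    healthcheck:
      # 'mysqladmin ping' fails unauthenticated on recent mariadb images;
      # the healthcheck.sh script shipped in the image works without credentials
      test: ['CMD', '/usr/local/bin/healthcheck.sh', '--connect', '--innodb_initialized']
      interval: 20s
      timeout: 3s
```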
Doesn't work for me; I still get the error. Also, the main wallabag container apparently can't connect to the database anymore, even though I'm able to open a shell in there and reach the database manually.
Same here :(
I had the same problem and managed to solve it by using docker volumes instead of bind mounts. Here's my docker-compose; remember to set the values to match your setup.

```yaml
version: '3'
services:
  wallabag:
    image: wallabag/wallabag
    environment:
      - MYSQL_ROOT_PASSWORD=wallaroot
      - SYMFONY__ENV__DATABASE_DRIVER=pdo_mysql
      - SYMFONY__ENV__DATABASE_HOST=db
      - SYMFONY__ENV__DATABASE_PORT=3306
      - SYMFONY__ENV__DATABASE_NAME=wallabag
      - SYMFONY__ENV__DATABASE_USER=wallabag
      - SYMFONY__ENV__DATABASE_PASSWORD=wallapass
      - SYMFONY__ENV__DATABASE_CHARSET=utf8mb4
      - SYMFONY__ENV__DATABASE_TABLE_PREFIX="wallabag_"
      - SYMFONY__ENV__MAILER_DSN=smtp://127.0.0.1
      - SYMFONY__ENV__FROM_EMAIL=wallabag@example.com
      - SYMFONY__ENV__DOMAIN_NAME=http://192.168.1.10:8082
      - SYMFONY__ENV__SERVER_NAME="Your wallabag instance"
    ports:
      - 8082:80
    volumes:
      - wallabag-images:/var/www/wallabag/web/assets/images
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost"]
      interval: 1m
      timeout: 3s
    depends_on:
      - db
      - redis
  db:
    image: mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=wallaroot
    volumes:
      - wallabag-data:/var/lib/mysql
    healthcheck:
      test: ['CMD', '/usr/local/bin/healthcheck.sh', '--innodb_initialized']
      interval: 20s
      timeout: 3s
  redis:
    image: redis:alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 20s
      timeout: 3s

volumes:
  wallabag-images:
  wallabag-data:
```
I finally found a solution... at least for me. The README of https://github.com/wallabag/docker says to do this, and after I did that, everything worked fine again!
@cboergermann That works for me as well! 🎉 Funny that I didn't notice that particular section earlier. Git says the last change to the README file was five months ago, but I'm pretty positive that this fix wasn't there the last time I looked. 🙈 Or maybe it was merged just recently, I'm too lazy to look it up right now.
Same problem (as described in the original issue) for me: `[Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.27.0.4' (This connection closed normally without authentication)`. Thanks to @jeff47 for the hint on #381; that solved at least the additional health-status issue I had.
There is a problem with the latest mariadb image; go back to version 11.1 (GA) and all works for me. I also did the healthcheck fix in the mariadb part:
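A sketch of what that looks like in the compose file; the `11.1` tag is taken from the comment above, and the healthcheck flags are assumed from the earlier compose example in this thread:

```yaml
  db:
    # pin to the 11.1 GA series instead of :latest / :11
    image: mariadb:11.1
    healthcheck:
      test: ['CMD', '/usr/local/bin/healthcheck.sh', '--connect', '--innodb_initialized']
      interval: 20s
      timeout: 3s
```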
@jeff47 this issue can be closed. @mariobraun This is unrelated to this issue. If you were using mariadb:11 (or :latest), you updated to 11.3, which […]
Image updates have happened; a repull should be sufficient. Apologies. I recommend integrating the health check into the service dependency:
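The snippet is missing from the quoted comment; a minimal sketch of that idea, assuming the service names from the compose file earlier in this thread:

```yaml
  wallabag:
    depends_on:
      # wait until the db healthcheck reports healthy before starting wallabag
      db:
        condition: service_healthy
      redis:
        condition: service_started
```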
Container orchestration doesn't handle such conditions well. Kubernetes does not support this at all, and docker compose often fails, most frequently by timeout; when you reboot, the service likely won't come up. Creating dependency chains is a bad idea; instead, the application should wait for and manage its connection dependencies itself. Even if depends_on..condition is used, it only applies on first start. Currently, the wallabag container restarts if it doesn't have a database, which works and is imo preferable to your proposal. A better change would be to make the wallabag container itself wait while the db is unavailable. If you want to track that change, create a new issue, as it is unrelated to this one. More reading (pay attention to why there are multiple probes): https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ If, however, you feel that your proposal should be considered by others as well, make a PR.
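The "wait for the database in the application" approach can be sketched as a small entrypoint wrapper. This is purely illustrative, not wallabag's actual entrypoint; `wait_for` and the `mariadb-admin` probe are hypothetical names for this sketch:

```shell
#!/bin/sh
# Retry a probe command until it succeeds or attempts run out.
# Usage: wait_for "<command>" <max_attempts> <delay_seconds>
wait_for() {
    _cmd="$1"; _max="$2"; _delay="$3"
    _i=0
    while [ "$_i" -lt "$_max" ]; do
        if eval "$_cmd" >/dev/null 2>&1; then
            return 0            # probe succeeded
        fi
        _i=$((_i + 1))
        sleep "$_delay"
    done
    return 1                    # gave up after _max attempts
}

# Example (assumed probe, not part of the wallabag image):
# wait_for "mariadb-admin ping -h db --silent" 30 2 || exit 1
# exec /entrypoint.sh "$@"
```

The same pattern is what `depends_on..condition` approximates at the orchestrator level, but done here it also covers restarts and reboots, not just first start.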
Unfortunately I still find the warning message described in the first entry:
I am seeing the following error/warning in the logs of the wallabag-db docker container. I saw this error coming up and decided to reinstall everything, since I had only a few things saved. I removed the whole wallabag volume and reinstalled, but the warning is still there.
Any clues on how to troubleshoot?