
[BUG] Container exits when starting #283

Open
gooseleggs opened this issue Jun 27, 2024 · 46 comments
Assignees
Labels
bug Something isn't working

Comments

@gooseleggs

Describe the bug
I am trying to use the 22.4.47 container. I think it worked the first time. However, on each subsequent start it exits as soon as it tries to start the scanner, and then it just loops. I can get it to start by enabling SKIPSYNC=true.

This is the output of the issue. Note that this has been previously run, so the NVTs are up to date.

root@scanner:/opt/container/prod/openvas# docker compose up -d
root@scanner:/opt/container/prod/openvas# docker logs -f openvas
starting container at: Wed Jun 26 23:57:39 UTC 2024
Setting up container filesystem
/data/database/base already exists ...
 NOT moving data from image to /data
cp: cannot stat '/usr/local/var/lib/*': No such file or directory
chown: invalid user: ‘gvm:gvm’
cp: cannot stat '/var/lib/gvm/*': No such file or directory
cp: cannot stat '/var/lib/notus/*': No such file or directory
cp: cannot stat '/var/lib/openvas/*': No such file or directory
cp: cannot stat '/etc/gvm/*': No such file or directory
cp: cannot stat '/usr/local/etc/openvas/*': No such file or directory
Choosing container start method from:

Starting gvmd & openvas in a single container !!
Wait for redis socket to be created...
Testing redis status...
Redis ready.
Creating postgresql.conf and pg_hba.conf
Starting PostgreSQL...
waiting for server to start....2024-06-26 23:57:42.727 UTC [104] LOG:  redirecting log output to logging collector process
2024-06-26 23:57:42.727 UTC [104] HINT:  Future log output will appear in directory "/data/var-log/postgresql".
 done
server started
pg exit with 0 .
Checking for existing DB
Running first start configuration...
NEWDB=false
LOADDEFAULT=false
Current GVMd database version is 250
Migrate the database if needed.
Updating NVTs and other data
This could take a while if you are not using persistent storage for your NVTs
 or this is the first time pulling to your persistent storage.
 the time will be mostly dependent on your available bandwidth.
Checking age of current data feeds from Greenbone.
ImageFeeds=1717727332
InstalledFeeds=1717727332
Syncing all feeds from GB
Synchronizing the Notus feed from Immauss Cybersecurity
And all others from the GB Community feed
Running as root. Switching to user 'gvm' and group 'gvm'.
Trying to acquire lock on /var/lib/openvas/feed-update.lock
Acquired lock on /var/lib/openvas/feed-update.lock
⠋ Downloading Notus files from rsync://rsync.immauss.com/feeds/notus/ to /var/lib/notus
⠋ Downloading NASL files from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/vt-data/nasl/ to /var/lib/openvas/plugins
Releasing lock on /var/lib/openvas/feed-update.lock
Trying to acquire lock on /var/lib/gvm/feed-update.lock
Acquired lock on /var/lib/gvm/feed-update.lock
⠋ Downloading SCAP data from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/scap-data/ to /var/lib/gvm/scap-data
⠋ Downloading CERT-Bund data from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/cert-data/ to /var/lib/gvm/cert-data
⠋ Downloading gvmd data from rsync://feed.community.greenbone.net/community/data-feed/22.04/ to /var/lib/gvm/data-objects/gvmd/22.04
Releasing lock on /var/lib/gvm/feed-update.lock
Starting Greenbone Vulnerability Manager...
root@scanner:/opt/container/prod/openvas#

To Reproduce
Steps to reproduce the behavior:

  1. I have a docker compose file
services:
  openvas:
    image: immauss/openvas:22.4.47
#    image: immauss/openvas:22.4.40
    container_name: openvas
    ports:
      - "127.0.0.1:80:9392"
    restart: always
    networks:
      - default
    volumes:
       - openvas:/data
#    environment:
#      SKIPSYNC: true

volumes:
  openvas:
  2. When did the issue occur?
  • Whenever the container is started again after the first successful start, prior to running a scan.

Expected behavior
GVM to start after feeds finished updating

Environment (please complete the following information):

  • OS: Debian 12.5
  • Memory available to OS: 8G
  • Container environment used with version: docker

logs ( commands assume the container name is 'openvas' )
Please attach the output from one of the following commands:

docker-compose

docker-compose logs > logfile.log
docker-compose.txt

@fashberg

I have the same problem. The Docker image was a week old; pulling the latest doesn't fix it. It seems the rsync feed breaks it.

@immauss
Owner

immauss commented Jun 27, 2024

Can you run the sync manually after starting with SKIPSYNC and send the output?

Also, if you could copy out the gvmd.log, that might help as well.

Sorry for brevity, only have my phone.

-Scott
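
(For reference, the manual sync and log copy would look something like this; the gvmd.log path is an assumption based on the /data/var-log directory shown in the startup output above:)

docker exec -it openvas /scripts/sync.sh
docker cp openvas:/data/var-log/gvm/gvmd.log .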

@grandaor

I have the same issue.

$ docker exec -it openvas /scripts/sync.sh
Synchronizing the Notus feed from Immauss Cybersecurity
And all others from the GB Community feed
Running as root. Switching to user 'gvm' and group 'gvm'.
Trying to acquire lock on /var/lib/openvas/feed-update.lock
Acquired lock on /var/lib/openvas/feed-update.lock
⠧ Downloading Notus files from rsync://rsync.immauss.com/feeds/notus/ to /var/lib/notus
⠏ Downloading NASL files from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/vt-data/nasl/ to 
/var/lib/openvas/plugins
Releasing lock on /var/lib/openvas/feed-update.lock
Trying to acquire lock on /var/lib/gvm/feed-update.lock
Acquired lock on /var/lib/gvm/feed-update.lock
⠸ Downloading SCAP data from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/scap-data/ to 
/var/lib/gvm/scap-data
⠸ Downloading CERT-Bund data from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/cert-data/ to 
/var/lib/gvm/cert-data
⠸ Downloading gvmd data from rsync://feed.community.greenbone.net/community/data-feed/22.04/ to 
/var/lib/gvm/data-objects/gvmd/22.04
Releasing lock on /var/lib/gvm/feed-update.lock

@rolemee

rolemee commented Jul 3, 2024

I have the same issue.

$ docker exec -it openvas  /scripts/sync.sh
Synchronizing the Notus feed from Immauss Cybersecurity
And all others from the GB Community feed
Running as root. Switching to user 'gvm' and group 'gvm'.
Trying to acquire lock on /var/lib/openvas/feed-update.lock
Acquired lock on /var/lib/openvas/feed-update.lock
⠇ Downloading Notus files from rsync://rsync.immauss.com/feeds/notus/ to /var/lib/notus
rsync: did not see server greeting
rsync error: error starting client-server protocol (code 5) at main.c(1863) [Receiver=3.2.7]

⠦ Downloading NASL files from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/vt-data/nasl/ to /var/lib/openvas/plugins
rsync: did not see server greeting
rsync error: error starting client-server protocol (code 5) at main.c(1863) [Receiver=3.2.7]

Releasing lock on /var/lib/openvas/feed-update.lock
Trying to acquire lock on /var/lib/gvm/feed-update.lock
Acquired lock on /var/lib/gvm/feed-update.lock
⠦ Downloading SCAP data from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/scap-data/ to /var/lib/gvm/scap-data
rsync: did not see server greeting
rsync error: error starting client-server protocol (code 5) at main.c(1863) [Receiver=3.2.7]

⠦ Downloading CERT-Bund data from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/cert-data/ to /var/lib/gvm/cert-data
rsync: did not see server greeting
rsync error: error starting client-server protocol (code 5) at main.c(1863) [Receiver=3.2.7]

⠧ Downloading gvmd data from rsync://feed.community.greenbone.net/community/data-feed/22.04/ to /var/lib/gvm/data-objects/gvmd/22.04
rsync: did not see server greeting
rsync error: error starting client-server protocol (code 5) at main.c(1863) [Receiver=3.2.7]

Releasing lock on /var/lib/gvm/feed-update.lock

@immauss
Owner

immauss commented Jul 3, 2024

OK ... It looks like it was an issue with the feeds updating properly. . . . .

I think I've fixed it.

Well ... it works for me now. :)

Please let me know.

No new image needed, the fix was on the feed sync server for the notus files.

-Scott

@grandaor

grandaor commented Jul 3, 2024

It's ok for me
Thanks a lot @immauss as always :)

@immauss
Owner

immauss commented Jul 5, 2024

thanks @grandaor

@gooseleggs, if all is OK for you, I'll go ahead and close this one.

Thanks,
-Scott

@gooseleggs
Author

Haven't tested yet, but close as complete.

@immauss
Owner

immauss commented Jul 8, 2024

Thanks @gooseleggs !

@immauss immauss closed this as completed Jul 8, 2024
@grandaor

grandaor commented Jul 8, 2024

Sorry @immauss ...
That happened again...
Exactly the same issue xD
SKIPSYNC: true lets it start without the restart loop.

IMO, you can reopen this issue
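
(For reference, the workaround maps onto the compose file from the original report by uncommenting the environment block under the openvas service:)

    environment:
      SKIPSYNC: "true"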

@immauss immauss reopened this Jul 9, 2024
@immauss
Owner

immauss commented Jul 9, 2024

Ugh ... sorry...
OK .. I'm going to start from scratch on this again. But it might be a day or so before I can get the time to dedicate to it. Already full throttle on a number of issues around here.

-Scott

@immauss immauss added the bug Something isn't working label Jul 9, 2024
@cbrunnkvist

cbrunnkvist commented Jul 11, 2024

Oh my gosh, my port forwarding rules got me confused: looking back, I was experimenting with different dockerized releases, and I'm frankly not sure anymore where this message originated (possibly a version 23.1.1 instance). Please disregard, and apologies for the confusion. FWIW, I have seen the current official Docker Compose version fail into some unrecoverable state during the first sync run; pruning all data volumes and recreating the stack eventually got it back on track though.

I probably am pulling a 22.5.* (:latest ???) image, and I'm seeing a slightly different message on sync during container start:

[...]

Have you raised this issue at https://community.greenbone.net ? Rsync seems to be working fine here at the moment, but I did see some failures to start containers a few days ago; unfortunately I didn't have time to look into it and have pruned those logs already. 🤷‍♂️

@immauss
Owner

immauss commented Jul 13, 2024

@cbrunnkvist
Thanks. But in the case of the OP, the rsync actually completed, but then gvmd is failing to start.

@gooseleggs / @grandaor
I'm having trouble reproducing this issue. ( AKA ... works fine for me. 😉 )
Can either of you please do the following:

  1. With a clean empty volume, reproduce the issue.
  2. Restart with "SKIPSYNC=true"
  3. After restart, run the following: ( change "openvas" to the correct container name if needed)
docker exec -it openvas tar cJvf /all-logs.tar.xz /data/var-log
docker cp openvas:/all-logs.tar.xz .

When done, please attach the all-logs.tar.xz.

As it looks like gvmd is dying, I want to see if anything is making it into the logs. Because the tail dies with the container, things often get written to the log file that never make it to 'docker logs' when something crashes.

@grandaor

Hello @immauss
See my attachment:
all-logs.zip
Thanks for taking a look.

@immauss
Owner

immauss commented Jul 14, 2024

@gooseleggs Can you do the same?
There is an unusual error in the postgresql log about one of the files being processed. I'd like to know if yours is exactly the same.

Thanks,
-Scott

@gooseleggs
Author

gooseleggs commented Jul 14, 2024 via email

@immauss
Owner

immauss commented Jul 28, 2024

@gooseleggs
Any luck ?

@gooseleggs
Author

Thanks for the bump. Hopefully this has captured what you want.

all-logs-seoncdrun.tar.gz

I ran through it a couple of times. With SKIPSYNC=false, the first run succeeded once but also failed once, so it now seems to be random rather than a hard fail.

@irene-romero

Hello,

I am experiencing the same error: after starting the container, it immediately exits. However, after attempting to start the container multiple times (10+), I noticed that the issue seems to occur randomly. I managed to start it successfully 3 times out of 10 attempts, but it fails again after a restart.

Logs after trying to start the container:

starting container at: Mon Aug  5 10:58:02 UTC 2024
Setting up container filesystem
cp: cannot stat '/usr/local/var/lib/*': No such file or directory
chown: invalid user: ‘gvm:gvm’
cp: cannot stat '/var/lib/gvm/*': No such file or directory
cp: cannot stat '/var/lib/notus/*': No such file or directory
cp: cannot stat '/var/lib/openvas/*': No such file or directory
cp: cannot stat '/etc/gvm/*': No such file or directory
cp: cannot stat '/usr/local/etc/openvas/*': No such file or directory
Choosing container start method from:

Starting gvmd & openvas in a single container !!
Wait for redis socket to be created...
Testing redis status...
Redis ready.
Creating postgresql.conf and pg_hba.conf
Starting PostgreSQL...
waiting for server to start....2024-08-05 10:58:03.715 UTC [107] LOG:  redirecting log output to logging collector process
2024-08-05 10:58:03.715 UTC [107] HINT:  Future log output will appear in directory "/data/var-log/postgresql".
 done
server started
pg exit with 0 .
Checking for existing DB
Loading Default Database
Running first start configuration...
Generating certs...
Using /tmp/tmp.I7LBVdmPgO to temporarily store files.
Creating new certificate infrastructure in automatic mode.
Generating private key.
Generated private key in /tmp/tmp.I7LBVdmPgO/cakey.pem.
Generating certificate.
  Generating self signed certificate.
Generated self signed certificate in /tmp/tmp.I7LBVdmPgO/cacert.pem.
  CA certificate generated.
Installing certificate and key.
Install destinations do not exist as directories, attempting to create them.
Setting up directories
Installed private key to /var/lib/gvm/private/CA/cakey.pem.
Installed certificate to /var/lib/gvm/CA/cacert.pem.
  CA certificate and key installed.
Generating private key.
Generated private key in /tmp/tmp.I7LBVdmPgO/serverkey.pem.
Generating certificate.
  Generating certificate request.
Generated certificate request in /tmp/tmp.I7LBVdmPgO/serverrequest.pem.
Signing certificate request.
Signed certificate request in /tmp/tmp.I7LBVdmPgO/serverrequest.pem with CA certificate in /var/lib/gvm/CA/cacert.pem to generate certificate in /tmp/tmp.I7LBVdmPgO/servercert.pem
  Server certificate generated.
Installing certificate and key.
Installed private key to /var/lib/gvm/private/CA/serverkey.pem.
Installed certificate to /var/lib/gvm/CA/servercert.pem.
  Server certificate and key installed.
Generating private key.
Generated private key in /tmp/tmp.I7LBVdmPgO/clientkey.pem.
Generating certificate.
  Generating certificate request.
Generated certificate request in /tmp/tmp.I7LBVdmPgO/clientrequest.pem.
Signing certificate request.
Signed certificate request in /tmp/tmp.I7LBVdmPgO/clientrequest.pem with CA certificate in /var/lib/gvm/CA/cacert.pem to generate certificate in /tmp/tmp.I7LBVdmPgO/clientcert.pem
  Client certificate generated.
Installing certificate and key.
Installed private key to /var/lib/gvm/private/CA/clientkey.pem.
Installed certificate to /var/lib/gvm/CA/clientcert.pem.
  Client certificate and key installed.
Removing temporary directory /tmp/tmp.I7LBVdmPgO.
NEWDB=false
LOADDEFAULT=true
########################################
Creating a base DB from /usr/lib/base-db.xz
########################################
ERROR:  relation "config_preferences_by_config" already exists
ERROR:  relation "host_details_by_host" already exists
ERROR:  relation "host_identifiers_by_host" already exists
ERROR:  relation "host_identifiers_by_value" already exists
ERROR:  relation "host_max_severities_by_host" already exists
ERROR:  relation "host_oss_by_host" already exists
ERROR:  relation "nvt_selectors_by_family_or_nvt" already exists
ERROR:  relation "nvt_selectors_by_name" already exists
ERROR:  relation "nvts_by_creation_time" already exists
ERROR:  relation "nvts_by_cvss_base" already exists
ERROR:  relation "nvts_by_family" already exists
ERROR:  relation "nvts_by_modification_time" already exists
ERROR:  relation "nvts_by_name" already exists
ERROR:  relation "nvts_by_solution_type" already exists
ERROR:  relation "permissions_by_name" already exists
ERROR:  relation "permissions_by_resource" already exists
ERROR:  relation "report_counts_by_report_and_override" already exists
ERROR:  relation "report_host_details_by_report_host_and_name" already exists
ERROR:  relation "report_hosts_by_report_and_host" already exists
ERROR:  relation "reports_by_task" already exists
ERROR:  relation "result_nvt_reports_by_report" already exists
ERROR:  relation "results_by_date" already exists
ERROR:  relation "results_by_host_and_qod" already exists
ERROR:  relation "results_by_nvt" already exists
ERROR:  relation "results_by_report" already exists
ERROR:  relation "results_by_task" already exists
ERROR:  relation "tag_resources_by_resource" already exists
ERROR:  relation "tag_resources_by_resource_uuid" already exists
ERROR:  relation "tag_resources_by_tag" already exists
ERROR:  relation "tag_resources_trash_by_tag" already exists
ERROR:  relation "tls_certificate_locations_by_host_ip" already exists
ERROR:  relation "tls_certificate_origins_by_origin_id_and_type" already exists
ERROR:  relation "vt_refs_by_vt_oid" already exists
ERROR:  relation "vt_severities_by_vt_oid" already exists
NOTICE:  relation "vt_severities" already exists, skipping
Unpacking base feeds data from /usr/lib/var-lib.tar.xz
Base DB and feeds collected on:
Wed Jul 24 02:21:33 UTC 2024
Checking DB Version
Current GVMd database version is  255
Migrate the database if needed.
Updating NVTs and other data
This could take a while if you are not using persistent storage for your NVTs
 or this is the first time pulling to your persistent storage.
 the time will be mostly dependent on your available bandwidth.
Checking age of current data feeds from Greenbone.
ImageFeeds=1721788095
InstalledFeeds=1721787693
Updating local feeds with newer image feeds.
Syncing all feeds from GB
Synchronizing the Notus feed from Immauss Cybersecurity
And all others from the GB Community feed
Running as root. Switching to user 'gvm' and group 'gvm'.
Trying to acquire lock on /var/lib/openvas/feed-update.lock
Acquired lock on /var/lib/openvas/feed-update.lock
⠋ Downloading Notus files from rsync://rsync.immauss.com/feeds/notus/ to /var/lib/notus
⠋ Downloading NASL files from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/vt-data/nasl/ to /var/lib/openvas/plugins
Releasing lock on /var/lib/openvas/feed-update.lock
Trying to acquire lock on /var/lib/gvm/feed-update.lock
Acquired lock on /var/lib/gvm/feed-update.lock
⠋ Downloading SCAP data from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/scap-data/ to /var/lib/gvm/scap-data
⠋ Downloading CERT-Bund data from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/cert-data/ to /var/lib/gvm/cert-data
⠋ Downloading gvmd data from rsync://feed.community.greenbone.net/community/data-feed/22.04/ to /var/lib/gvm/data-objects/gvmd/22.04
Releasing lock on /var/lib/gvm/feed-update.lock
Starting Greenbone Vulnerability Manager...

@immauss
Owner

immauss commented Aug 28, 2024

Sorry for the long delays. . . .
Can anyone on this thread please test the latest versions (22.4.49) and let me know if that resolves the issue?

Thank you,
Scott

@gooseleggs
Author

I removed the openvas volume on Debian 12. Same docker compose file as per original issue (except of course for the container version).

This is the log that I got on startup:
openvas-2.4.49_initial.txt

It did exit out, but either restarted and continued, or just continued. I didn't retry, as I got this error (repeating) when it got to 'Waiting for gvmd':

md manage:WARNING:2024-09-02 04h11.03 utc:2952: sql_exec_internal: SQL: SELECT count(*) FROM permissions WHERE subject_type = 'role' AND subject = (SELECT id FROM roles                WHERE uuid = 'cc9cac5e-39a3-11e4-abae-406186ea4fc5') AND resource = 0;
md manage:WARNING:2024-09-02 04h11.03 utc:2952: sql_x: sql_exec_internal failed
md   main:MESSAGE:2024-09-02 04h11.04 utc:2958:    Greenbone Vulnerability Manager version 23.8.1 (DB revision 256)
md manage:   INFO:2024-09-02 04h11.04 utc:2958:    Getting users.
md manage:WARNING:2024-09-02 04h11.05 utc:2958: sql_exec_internal: PQexec failed: ERROR:  more than one row returned by a subquery used as an expression
 (7)

@immauss
Owner

immauss commented Sep 14, 2024

This is absolutely baffling me......

Every now and then, I've been taking a look at this hoping some time and distance will give me some perspective ... but I've got nothing.....

I just built a Debian 12 fully updated, used your compose file, and it booted up just fine.

The errors you are getting look like there is already an existing database, which there should not be if you started with an empty volume.

New check . . .

Can you try with the command line?

docker run -d -p 127.0.0.1:80:9392 --name openvas immauss/openvas:22.4.49

This will also run without a volume ...

If this fails out the same way, please send me the logs ...

Thanks,
Scott

@gooseleggs
Author

Scott

Sorry - still no dice

Here is a console session output:
openvasv3.txt

Here are the files from the logs directory: gvm.tar.gz

kelvins@debian:~/var/log/gvm$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

I am running this test from a VirtualBox VM. I could ship it to you to try in your environment if you like, although that should not make a difference.

@immauss
Owner

immauss commented Sep 14, 2024

Actually ... that would be awesome

What I'm seeing makes no sense ....

Thanks,
-Scott

@immauss
Owner

immauss commented Sep 15, 2024

So .... I have a new theory ...
Looking at your logs and comparing with mine ...
On mine ... there is a point where I see this in the postgresql.log:

2024-09-15 22:04:41.115 UTC [111] LOG:  checkpoints are occurring too frequently (8 seconds apart)
2024-09-15 22:04:41.115 UTC [111] HINT:  Consider increasing the configuration parameter "max_wal_size".

In yours I see

2024-09-14 22:54:57.415 UTC [224] gvm@gvmd FATAL:  role "gvm" does not exist
2024-09-14 22:54:57.430 UTC [229] gvm@gvmd FATAL:  role "gvm" does not exist

There's more, but it just goes downhill from there. The "gvm" role creation is one of the first things in the base-db.sql.

So.... I suspect it has something to do with how the DB is being restored. It's a standard method, pg_dumpall then redirect to psql.
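
(A minimal sketch of that pattern; apart from /usr/lib/base-db.xz, which appears in the startup logs above, the option flags and paths here are assumptions:)

# at image build time: dump the whole cluster, roles included
pg_dumpall -U postgres | xz > /usr/lib/base-db.xz
# at first container start: restore it in one stream
xzcat /usr/lib/base-db.xz | psql -U postgres -d postgres

Since pg_dumpall carries the role creation, a restore that fails early would explain the role "gvm" does not exist errors above.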

I'm actually wondering now if it is something much lower in the stack ... the disk, or how VirtualBox handles the high rate of writes during the DB restore ...
So ....

I'm going to try re-writing the DB restore ....
but that means I need to re-write the DB creation as well. If what I'm reading is right, then overall the restoration will be faster, resulting in a faster initial container start ... so I'm all for that, but it will take me some time....

I would still love to have that VM if you could ship it over though ... It could provide more insight.

Thanks,
-Scott

@gooseleggs
Author

Scott - check out your linked in message from Kelvin Smith.

@Talanor

Talanor commented Sep 18, 2024

I have the exact same issue.

I can send you more logs if needed.

@immauss
Owner

immauss commented Sep 18, 2024

Is anyone else here using VirtualBox for the VM with issues?
Just curious . . . . .

@gooseleggs .... Got it!
Can you tell me about your install process for the VM?
I couldn't convert the OVA for my VMware Fusion, so I installed VirtualBox.
FWIW .... VMware Workstation (and Fusion) are free now. (Which is only slightly annoying considering how much I have paid for them over the years.) At least on my Mac, it is MUCH faster than VirtualBox.

@gooseleggs
Author

Quite vanilla actually. Debian basic install, no GUI. Installed Docker from Docker itself (not the distribution). I may have added my user account to the docker group to run commands. Copied the docker-compose file and ran it.

Did you get the VM running? I could convert the disk to VMDK if that would help.

I did not know those VM engines are now free. Surprised, since Broadcom has hiked prices on other products and removed the free ESXi.

@immauss
Owner

immauss commented Sep 19, 2024

I did get the VM running in VBox. It happily fails just as you described. :/
Failure starts at 22.4.46, so I'm going to focus there. I think we worked that out before, but without a way to reliably see the failure, it's tough to troubleshoot. Now that I have that .... maybe I can figure this out.

Strange thing though ... I installed Debian 12.5 in VBox, ran `apt update && apt install docker`, and ran the latest container and it worked like a charm. So .......

I'll try again with Docker from Docker... but it should be the same.

-Scott

@Talanor

Talanor commented Sep 19, 2024

Is anyone else here using VirtualBox for the VM with issues? Just curious . . . . .

No virtualization here. Host is EndeavourOS, x86_64. Latest packages as of yesterday.
I could provide detailed versions of the Docker environment if it helps.

@immauss
Owner

immauss commented Sep 20, 2024

Someone please try:
immauss/openvas:beta

Thank you,
Scott

@immauss
Owner

immauss commented Sep 20, 2024

nvm ....
😦

@immauss
Owner

immauss commented Sep 20, 2024

OK ...
I've been focused on the scripts in the container .... (The bits I wrote ....)
But there is nothing in the diffs between the working and non-working versions that made any sense.
So I thought maybe it was an updated dependent package ...
Nope ... that wasn't it ...

Then I realized ... between those times, I changed the script that generates the weekly image .....

Somehow ... (I've not yet figured out why ...) that is causing the problem.

So ... please try adding:

-e NEWDB

to your start command
or

NEWDB=true

in your docker-compose.

This will cause it to create a brand new DB from scratch, which will take a bit of time. BUT .... it "should" work. It will also be a very valuable piece of troubleshooting info for me.
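
(Concretely, using the run command from earlier in this thread; the explicit =true value is an assumption about how the variable is read:)

docker run -d -e NEWDB=true -p 127.0.0.1:80:9392 --name openvas immauss/openvas:22.4.49

or in the compose file:

    environment:
      NEWDB: "true"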

I'm working on rolling back that change and rebuilding some images .. but it might take a few days for me . . . .

Thanks,
Scott

@immauss
Owner

immauss commented Sep 20, 2024

OK ...
My Friday evening plans got canceled. :(

But ...

I rebuilt the latest image with a fresh DB, and it is now working in the VM provided by @gooseleggs !!
So ... Maybe it will work for you as well.

Please let me know.

Thanks,
Scott

@gooseleggs
Author

gooseleggs commented Sep 20, 2024 via email

@immauss
Owner

immauss commented Sep 20, 2024

It should be in both now.

-Scott

@gooseleggs
Author

gooseleggs commented Sep 20, 2024

Just re-downloaded 22.4.49 with docker compose and it all started as expected with a clean volume. However, looking on Docker Hub, it says the image is a month old.
Great stuff.

@immauss
Owner

immauss commented Sep 22, 2024

22.4.49 did not get the update ...

"latest" did as the update pushes to "latest"

I'll push a new 22.4.50 later today with a note for clarity.
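
(In practice, that means re-pulling the moving tag picks up the fix now, e.g.:)

docker pull immauss/openvas:latest
docker compose up -d

while pinned tags will get it as of 22.4.50.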

Thanks,
-Scott

@gooseleggs
Author

OK - ran up the latest. Looks OK; however, would you be expecting all the error lines?

Generating certificate.
  Generating certificate request.
Generated certificate request in /tmp/tmp.1pCLrGA5yw/clientrequest.pem.
Signing certificate request.
Signed certificate request in /tmp/tmp.1pCLrGA5yw/clientrequest.pem with CA certificate in /var/lib/gvm/CA/cacert.pem to generate certificate in /tmp/tmp.1pCLrGA5yw/clientcert.pem
  Client certificate generated.
Installing certificate and key.
Installed private key to /var/lib/gvm/private/CA/clientkey.pem.
Installed certificate to /var/lib/gvm/CA/clientcert.pem.
  Client certificate and key installed.
Removing temporary directory /tmp/tmp.1pCLrGA5yw.
NEWDB=false
LOADDEFAULT=true
########################################
Creating a base DB from /usr/lib/base-db.xz
########################################
ERROR:  relation "config_preferences_by_config" already exists
ERROR:  relation "host_details_by_host" already exists
ERROR:  relation "host_identifiers_by_host" already exists
ERROR:  relation "host_identifiers_by_value" already exists
ERROR:  relation "host_max_severities_by_host" already exists
ERROR:  relation "host_oss_by_host" already exists
ERROR:  relation "nvt_selectors_by_family_or_nvt" already exists
ERROR:  relation "nvt_selectors_by_name" already exists
ERROR:  relation "nvts_by_creation_time" already exists
ERROR:  relation "nvts_by_cvss_base" already exists
ERROR:  relation "nvts_by_family" already exists
ERROR:  relation "nvts_by_modification_time" already exists
ERROR:  relation "nvts_by_name" already exists
ERROR:  relation "nvts_by_solution_type" already exists
ERROR:  relation "permissions_by_name" already exists
ERROR:  relation "permissions_by_resource" already exists
ERROR:  relation "report_counts_by_report_and_override" already exists
ERROR:  relation "report_host_details_by_report_host_and_name" already exists
ERROR:  relation "report_hosts_by_report_and_host" already exists
ERROR:  relation "reports_by_task" already exists
ERROR:  relation "result_nvt_reports_by_report" already exists
ERROR:  relation "result_vt_epss_by_vt_id" already exists
ERROR:  relation "results_by_date" already exists
ERROR:  relation "results_by_host_and_qod" already exists
ERROR:  relation "results_by_nvt" already exists
ERROR:  relation "results_by_report" already exists
ERROR:  relation "results_by_task" already exists
ERROR:  relation "tag_resources_by_resource" already exists
ERROR:  relation "tag_resources_by_resource_uuid" already exists
ERROR:  relation "tag_resources_by_tag" already exists
ERROR:  relation "tag_resources_trash_by_tag" already exists
ERROR:  relation "tls_certificate_locations_by_host_ip" already exists
ERROR:  relation "tls_certificate_origins_by_origin_id_and_type" already exists
ERROR:  relation "vt_refs_by_vt_oid" already exists
ERROR:  relation "vt_severities_by_vt_oid" already exists
NOTICE:  relation "vt_severities" already exists, skipping
Unpacking base feeds data from /usr/lib/var-lib.tar.xz
Base DB and feeds collected on:
Fri Sep 20 02:21:49 UTC 2024
Checking DB Version

@immauss
Owner

immauss commented Sep 23, 2024

Yes ...
That will be removed in the next version. It's part of the start-up that makes sure the DB is in the proper state during an upgrade from an older version and is no longer needed, so I've already pulled it from the current (not yet published) code base.

There is a 22.4.50 now. It is identical to 22.4.49, but includes the most recent (better working) database.

I'm going to close this out in a day or so, but I would really like to hear from anyone else who has been experiencing this issue. While I seem to have found how to fix it, I still don't know the root cause of "WHY" it happens to some installs and not others.

Thanks
-Scott

@SparKiiRQ

Hey there,

I was having the same problem using Docker Desktop on a Windows 11 host; pulling the latest image seems to fix the issue.

Thank you so much!!

Should you need any logs please let me know.

  • Adri

@immauss
Owner

immauss commented Sep 23, 2024

@SparKiiRQ Outstanding!!
Thank you

-Scott

@SparKiiRQ

Sorry, as soon as the container restarts it seems to crash again; after the last log entry, a status of Exited (1) is returned:

Let me know if some more info is needed.

2024-09-24 11:48:15 starting container at: Tue Sep 24 09:48:15 UTC 2024
2024-09-24 11:48:15 Looks like this container has already been started once.
2024-09-24 11:48:15 Just doing a little cleanup instead of the whole fs-setup.
2024-09-24 11:48:16 Choosing container start method from:
2024-09-24 11:48:16 
2024-09-24 11:48:16 Starting gvmd & openvas in a single container !!
2024-09-24 11:48:16 Wait for redis socket to be created...
2024-09-24 11:48:17 Testing redis status...
2024-09-24 11:48:17 Redis not yet ready...
2024-09-24 11:48:18 Redis ready.
2024-09-24 11:48:18 Starting PostgreSQL...
2024-09-24 11:48:18 pg_ctl: another server might be running; trying to start server anyway
2024-09-24 11:48:18 waiting for server to start....
2024-09-24 09:48:18.441 UTC [26] LOG:  redirecting log output to logging collector process
2024-09-24 11:48:18 
2024-09-24 09:48:18.441 UTC [26] HINT:  Future log output will appear in directory "/data/var-log/postgresql".
2024-09-24 11:48:21 .. done
2024-09-24 11:48:21 server started
2024-09-24 11:48:21 pg exit with 0 .
2024-09-24 11:48:21 Checking for existing DB
2024-09-24 11:48:21 Running first start configuration...
2024-09-24 11:48:21 NEWDB=false
2024-09-24 11:48:21 LOADDEFAULT=false
2024-09-24 11:48:21 Checking DB Version
2024-09-24 11:48:21 Current GVMd database version is  256
2024-09-24 11:48:21 Migrate the database if needed.
2024-09-24 11:48:21 Updating NVTs and other data
2024-09-24 11:48:21 This could take a while if you are not using persistent storage for your NVTs
2024-09-24 11:48:21  or this is the first time pulling to your persistent storage.
2024-09-24 11:48:21  the time will be mostly dependent on your available bandwidth.
2024-09-24 11:48:21 Checking age of current data feeds from Greenbone.
2024-09-24 11:48:21 ImageFeeds=1726799365
2024-09-24 11:48:21 InstalledFeeds=1726799365
2024-09-24 11:48:21 Syncing all feeds from GB
2024-09-24 11:48:21 Synchronizing the Notus feed from Immauss Cybersecurity
2024-09-24 11:48:21 And all others from the GB Community feed
2024-09-24 11:48:21 Running as root. Switching to user 'gvm' and group 'gvm'.
2024-09-24 11:48:21 Trying to acquire lock on /var/lib/openvas/feed-update.lock
2024-09-24 11:48:21 Acquired lock on /var/lib/openvas/feed-update.lock
2024-09-24 11:48:22 ⠋ Downloading Notus files from rsync://rsync.immauss.com/feeds/notus/ to /var/lib/notus
2024-09-24 11:48:28 ⠋ Downloading NASL files from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/vt-data/nasl/ to /var/lib/openvas/plugins
2024-09-24 11:48:28 Releasing lock on /var/lib/openvas/feed-update.lock
2024-09-24 11:48:28 Trying to acquire lock on /var/lib/gvm/feed-update.lock
2024-09-24 11:48:28 Acquired lock on /var/lib/gvm/feed-update.lock
2024-09-24 11:48:29 ⠋ Downloading SCAP data from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/scap-data/ to /var/lib/gvm/scap-data
2024-09-24 11:48:30 ⠋ Downloading CERT-Bund data from rsync://feed.community.greenbone.net/community/vulnerability-feed/22.04/cert-data/ to /var/lib/gvm/cert-data
2024-09-24 11:48:30 ⠋ Downloading gvmd data from rsync://feed.community.greenbone.net/community/data-feed/22.04/ to /var/lib/gvm/data-objects/gvmd/22.04
2024-09-24 11:48:30 Releasing lock on /var/lib/gvm/feed-update.lock
2024-09-24 11:48:30 Starting Greenbone Vulnerability Manager...

@immauss
Owner

immauss commented Sep 24, 2024

@SparKiiRQ

Welllllllll
SHIT!

The VM I got from @gooseleggs does the same thing.

Thank you.

I'll keep working on it.

-Scott

@immauss
Owner

immauss commented Sep 26, 2024

Huzzaahhh !!

OK ... I think I figured it out ... (Really this time.)
It's the health checks.
The health check script uses gvmd to validate the scanners for openvas.
Normally, the healthcheck waits 5 minutes before the first execution. On a startup with sync.sh running, this means the health check can run before gvmd is started. That causes the gvmd command checking for openvas scanners to hang for a bit. If the timing is bad, and it frequently is, the gvmd start command runs while the hung gvmd is still there; it sees a gvmd process already running and refuses to start, which causes the container to fail.
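
(A sketch of the race in Dockerfile healthcheck terms; the 5-minute start period matches the description above, but the command and interval are assumptions, not the actual script:)

# hypothetical healthcheck: asks gvmd to list the configured scanners
HEALTHCHECK --start-period=5m --interval=60s \
  CMD gvmd --get-scanners || exit 1

If such a probe fires while sync.sh is still running, its gvmd invocation can hang around long enough that the real gvmd startup sees an existing gvmd process and refuses to run.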

Now that I know the Root Cause, I can fix this in one of a few different ways.

I should have time to get this worked out before the end of the weekend.

Thanks again to @gooseleggs. I would not have been able to find this without that VM!!

And to @SparKiiRQ for being a buzzkill 😁

-Scott
