dist/tools/ci: also print doxygen and flake8 versions #8745
Conversation
Murdock reports flake8 is missing, but the flake8 check script succeeds. What is happening there?
I don't understand this either. I thought it was installed; at least it should be in the riotdocker docker image. Does that mean the Murdock workers are not in sync with the riotdocker image? Question: why not 'just' derive the Docker image used in Murdock workers from riotdocker?
Some CI workers have not been restarted since flake8 was introduced, hence they did not pull the latest riotdocker image. Nevertheless, flake8 is part of the docker image, which is the base for the CI-worker image plus some extra packages for Murdock.
It would be great to have them defined in the same repo (riotdocker); the Dockerfile of the image used for Murdock workers could simply be put in a …
you're right, we need to clarify and structure this better, i.e., which is the user docker image and which is the CI-worker one. However, that's not the problem here; the problem is that updating and rolling out a new CI-worker image did not work reliably using …
The Murdock worker is now using riotdocker as base.
The problem is nodes sharing a name (e.g., inria-ci), as they share control channels. The only way to restart all of them (using dwq) is to issue the restart command to "inria-ci" often enough, which, as @smlng rightly points out, is unreliable.
@kaspar030 actually the node missing flake8 was the …
The INRIA worker currently is a mere 2- or 4-core machine, at least that's what I guess from the low number of tasks per build that this worker completes.
That explains why restarting inria didn't reliably fix the problem. |
I just re-triggered Murdock and flake8 is still missing, this time on mobi1.inet.haw-hamburg.de |
I'll restart mobi1 |
@aabadie the problem is there is no … I manually installed them on mobi1 now, but we may need to update the workers again.
How? Using …
Is there something like update-alternatives which needs to be run to set up the flake8 -> python3-flake8 symlink?
I don't know, I was just comparing to how it's installed with Travis |
So far I read that the Ubuntu package …
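Since distro packages such as Ubuntu's python3-flake8 may ship the Python module without a `flake8` executable on PATH, one workaround is to fall back to the module form. A minimal sketch, assuming the check only needs to invoke the tool somehow (the `run_tool` helper is hypothetical, not part of the actual CI scripts):

```shell
#!/bin/sh
# run_tool TOOL MODULE [ARGS...]: use the TOOL executable when it is on
# PATH, otherwise fall back to `python3 -m MODULE`, which only requires
# the Python package (e.g. python3-flake8) to be importable.
run_tool() {
    tool="$1"; module="$2"; shift 2
    if command -v "$tool" >/dev/null 2>&1; then
        "$tool" "$@"
    else
        python3 -m "$module" "$@"
    fi
}

# e.g.: run_tool flake8 flake8 --version
```

This avoids depending on a symlink or update-alternatives entry existing on each worker.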
Displaying them does not hurt, it even showed that not all workers have them.
makes perfect sense to print those, too. ACK!
Contribution description
This PR adds the possibility to display the doxygen and flake8 versions in the `print_toolchain` script. I wanted to comment in #8727 about this, but it was already merged.

Issues/PRs references
#8727
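The gist of the change can be sketched like this (the `version_of` helper is hypothetical; the actual `print_toolchain` script may format its output differently):

```shell
#!/bin/sh
# version_of TOOL: print the first line of TOOL's --version output,
# or "missing" when the tool is not installed on the worker. Printing
# "missing" instead of failing makes out-of-sync workers visible in
# the build log, which is exactly what surfaced the flake8 problem.
version_of() {
    if command -v "$1" >/dev/null 2>&1; then
        printf '%s: %s\n' "$1" "$("$1" --version 2>&1 | head -n 1)"
    else
        printf '%s: missing\n' "$1"
    fi
}

version_of doxygen
version_of flake8
```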