[BUG]: Self-hosted agent fails to mount whole work directory for second run in container with checkout path #4479
asaril changed the title from
"[BUG]: Self-hosted agent fails to mount whole working directory for second run in container with checkout path" to
"[BUG]: Self-hosted agent fails to mount whole work directory for second run in container with checkout path"
on Oct 19, 2023
Hi @asaril, thanks for the feedback, we'll take a look.

I have tried three more cases:

Merged

@asaril can you confirm that this issue is fixed?
@asaril I have used this pipeline to verify that this issue is fixed:

```yaml
trigger: none

pool: <Self-Hosted Agent Pool>

resources:
  containers:
  - container: box
    image: <image:tag>

steps:
- checkout: self
  path: sources/project/repo
- bash: |
    cd ../..
    touch ./$(build.buildid).label
  target: box
```

This issue is fixed. I am closing it.
What happened?
Hi,
I am trying to build a project using the west build system in a container-based pipeline on self-hosted agents.
Due to the way west operates, the sources need to be cloned one level deeper, as west will create additional files next to the repository.
For this, I use a `checkout` step with a custom `path`. In this case, only one repository is checked out for this job (no multi-repo checkout). There are other repository resources defined in the pipeline, but this is the only checkout step in this job.
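A minimal sketch of such a checkout step; the path `west_stage/sources` is an assumption inferred from the container paths reported in this issue, not quoted from the actual pipeline:

```yaml
steps:
# Assumed checkout step: path is relative to $(Pipeline.Workspace)/s,
# so the repo lands one level deeper than the default source directory.
- checkout: self
  path: west_stage/sources
```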
The resulting directory structure is correct (sources located at /__w/20/s/west_stage/sources in the container, and in /home/azure_cloud/adc_agent_03/_work/20/s/west_stage on the host).
However, this only works for the first run. If the work directory already exists on the agent host, the agent detects `west_stage/sources` as the default working directory and mounts only `west_stage` into the container.
The paths are still correct, but the level that is mounted seems to be tied to `$(Build.SourcesDirectory)/..` instead of `$(Pipeline.Workspace)`. This leads to the binary and artifact staging directories being unavailable in the container, breaking the build.
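To make the mismatch concrete, here is a small Python sketch (using the hypothetical host paths from this report) showing why mounting `$(Build.SourcesDirectory)/..` exposes less of the host filesystem than mounting `$(Pipeline.Workspace)` when the checkout path is nested:

```python
import os.path

# Host-side layout as described in this report (work directory "20").
pipeline_workspace = "/home/azure_cloud/adc_agent_03/_work/20"
sources_directory = os.path.join(pipeline_workspace, "s/west_stage/sources")
binaries_directory = os.path.join(pipeline_workspace, "b")

# The level the agent appears to mount on the second run:
# Build.SourcesDirectory/.. resolves to .../20/s/west_stage,
# not to the workspace root .../20.
mounted = os.path.normpath(os.path.join(sources_directory, ".."))
print(mounted)  # /home/azure_cloud/adc_agent_03/_work/20/s/west_stage

# The binaries directory is not under the mounted path, so it is
# unavailable inside the container.
print(binaries_directory.startswith(mounted + "/"))  # False
```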
[edit: this was one level too deep still, as a one-repo clone will use `/s` directly]

As a final step, I retested with a checkout path of just `s/sources` (I would like to avoid that, as west will pollute the work dir's `s` directory with this). This setup still results in the same issue: for the second run, only the `s/` directory will be mounted, instead of the whole work directory.

[/edit]
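A diagnostic step (my suggestion, not part of the original report) could be added to the container job to confirm which level of the work directory is actually bound on the second run:

```yaml
# Hypothetical diagnostic step: print the directory variables and check
# whether the workspace root is visible inside the container.
- bash: |
    echo "Pipeline.Workspace:      $(Pipeline.Workspace)"
    echo "Build.SourcesDirectory:  $(Build.SourcesDirectory)"
    echo "Build.BinariesDirectory: $(Build.BinariesDirectory)"
    ls -la "$(Build.BinariesDirectory)" || echo "binaries dir not mounted"
  target: box
```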
Versions
Agent Version: 3.227.2
Agent OS: Linux
Environment type (Please select at least one environment where you face this issue)
Azure DevOps Server type
dev.azure.com (formerly visualstudio.com)
Azure DevOps Server Version (if applicable)
No response
Operating system
No response
Version control system
git
Relevant log output