
Setting up Azure pipelines #1

Closed

aminya opened this issue Jul 2, 2020 · 76 comments · Fixed by #3, #9, #13, #30 or #46

aminya commented Jul 2, 2020

We need to set up Azure pipelines to get CI similar to upstream.

DeeDeeG commented Jul 3, 2020

Checking in, that all sounds good. 👍

aminya commented Jul 3, 2020

@DeeDeeG Could you set up the CI? I saw you had Azure Pipelines running on your personal account. How did you do this?

DeeDeeG commented Jul 3, 2020

Someone at upstream was kind enough to put my branches up at upstream and run it themselves. I'll look into whether there is a free tier of Azure Pipelines, though, so we can set it up here without paying for some sort of subscription.

Edit: This looks like the place to do it. "Start free with GitHub" https://azure.microsoft.com/en-us/services/devops/pipelines/

Kinda makes sense now that Microsoft owns GitHub...

aminya commented Jul 3, 2020

> Someone at upstream was kind enough to put my branches up at upstream and run it themselves. I'll look into if there is a free tier of Azure Pipelines though, so we can set it up here without paying some sort of subscription.
>
> Edit: This looks like the place to do it. "Start free with GitHub" https://azure.microsoft.com/en-us/services/devops/pipelines/

There is a free tier, but how do we get the same configuration that upstream is using? Is there a duplicate button or something, or is there a config file we need to use somewhere? I could not find anything in the repository itself.

DeeDeeG commented Jul 3, 2020

Ah yeah, the config is in a pretty obscure place. I only found it because it mentions Python, and I did some PRs about being ready for Python 3.

https://github.com/atom/atom/tree/master/script/vsts

(VSTS is short for Visual Studio Team System, which got renamed to Azure DevOps Server... according to Wikipedia.)

According to upstream's nice README.md for the CI, VSTS = Visual Studio Team Services, which got renamed to Azure DevOps.

DeeDeeG commented Jul 3, 2020

Particularly here: https://github.com/atom/atom/tree/master/script/vsts/platforms

DeeDeeG commented Jul 3, 2020

Editing this as I go to be as complete as I can manage:

  • Visit here https://azure.microsoft.com/services/devops/pipelines/ and press "Start free with GitHub"
    • If you already have a Microsoft account, and you don't want to make one linked to your GitHub account, you can just press "Start free".
  • Visit https://dev.azure.com
  • Click the "+ New project" button (it's blue for me)
    • Fill in some details (name, description) and press "Create"
  • In the new project, click "Pipelines" in the side nav area
  • Press "New pipeline"
  • After adding the three pipelines ("Atom Nightly", "Atom Production Branches", and "Atom Pull Requests"), manual tweaking might be required.
    • They all seem to have "Pull Request" and "Continuous Integration" triggers enabled by default, even though they shouldn't.
    • Visit the Pipelines view, from the Pipelines sidebar nav entry
    • Press the specific pipeline you want to adjust
    • Press "Edit"
    • Press the "vertical three dots" button (this appears gray for me with black dots)
    • Press "Triggers" from the dropdown menu
    • Depending on which pipeline you are editing, use the "Triggers" tab to:
      • disable the "Pull request validation" trigger (for the "Atom Production Branches" pipeline)
      • disable the "Continuous integration" trigger (for the "Atom Pull requests" pipeline)
      • disable both of these triggers and add a scheduled trigger for every night at midnight (for the "Atom Nightly" pipeline); the YAML sketch after this list shows the equivalent in-file settings
    • Optionally use the "YAML" tab to rename the pipeline and make sure the correct yaml file is selected.
      • Don't change the "Default agent pool for YAML" setting for now, or it might break subsequent runs.
    • Press the "Save & queue" button, then "Save" or "Save & queue" to save these changes and update the pipeline.
    • Optionally enter a message explaining the edits you made to the pipeline settings.
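
For reference, the trigger tweaks above can also be expressed in the pipeline's YAML file instead of through the UI (keeping in mind that trigger overrides set in the UI take precedence over the YAML). A minimal sketch for the "Atom Nightly" case; the cron time, UTC time zone, and branch name are my assumptions:

trigger: none  # disables the "Continuous integration" trigger
pr: none       # disables the "Pull request validation" trigger

schedules:
- cron: "0 0 * * *"  # every night at midnight (UTC)
  displayName: Nightly build
  branches:
    include:
    - master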

aminya commented Jul 3, 2020

I have created the repository: https://dev.azure.com/atomcommunity/atomcommunity
I am now trying to see how I can set up builds using the files in script/vsts 🤔

DeeDeeG commented Jul 3, 2020

I'm curious what happens if you add a pipeline from the side nav bar area, after authenticating to GitHub with OAuth so it can officially link up with/access this repo. (I'm trying to write out the instructions in my comment above, but that's where I get stuck.)

DeeDeeG commented Jul 3, 2020

I'm trying it on my own personal fork now to see how far I can get, and I'll update my instructions/steps above.

aminya commented Jul 3, 2020

I added pull requests and release builds for now: https://dev.azure.com/atomcommunity/atomcommunity/_build.
We might have to edit the release configuration once we want to release things under the new name.

DeeDeeG commented Jul 3, 2020

I'm a bit confused about what this means, but there are "Missing tasks" required to run the CI; according to the error message, these are installable via https://marketplace.visualstudio.com/.

DeeDeeG commented Jul 3, 2020

Hmm, maybe we need to get some VMs running? In the Pipelines sidebar there is a subsection "Environments", which is empty by default.

Disregard, I'll update if I figure something new out.

DeeDeeG commented Jul 3, 2020

I created the "starter pipeline" with no errors.

Full starter pipeline YAML:
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'

I can't tell if it's stalled waiting on an Azure Pipelines worker/VM, or because there are no actual steps in the script, so it never finishes... https://dev.azure.com/DeeDeeG/b/_build/results?buildId=3&view=logs&j=12f1170f-54f2-53f3-20dd-22fc7dff55f9

Now to read the docs and see if I can make one that actually does something. Working toward eventually running the Linux/Windows/macOS tests from upstream Atom.

aminya commented Jul 3, 2020

Did you see #3? I need to look into the error message.

DeeDeeG commented Jul 3, 2020

I think this needs to be installed to the Azure DevOps org: https://marketplace.visualstudio.com/items?itemName=1ESLighthouseEng.PipelineArtifactCaching

It's referenced in all three platforms' pipeline (CI) steps.
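
For context, once that extension is installed to the Azure DevOps org, the platform YAML can resolve its tasks. A rough sketch of what such caching steps look like; the keyfile and targetfolder values are illustrative rather than upstream's exact inputs, and vstsFeed must point at an Azure Artifacts feed the project can write to:

- task: RestoreCache@1
  displayName: Restore cached node_modules
  inputs:
    keyfile: 'package.json'       # file whose contents fingerprint the cache
    targetfolder: 'node_modules'  # folder to restore on a cache hit
    vstsFeed: '$(vstsFeed)'       # Azure Artifacts feed that stores the cache

# ... bootstrap/build steps go here ...

- task: SaveCache@1
  displayName: Save node_modules to the cache
  inputs:
    keyfile: 'package.json'
    targetfolder: 'node_modules'
    vstsFeed: '$(vstsFeed)'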

aminya changed the title from "atom-community discussion" to "Setting up Azure pipelines" on Jul 3, 2020
aminya added the CI label and removed the discussion label on Jul 3, 2020
DeeDeeG commented Jul 3, 2020

Now I'm getting "The pipeline must contain at least one job with no dependencies" (for the platforms/linux.yml pipeline).
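
If I'm reading the error right, it fits pointing a pipeline directly at a platform file: the platform jobs depend on a job defined in the top-level pipeline YAML, so when run standalone there is no dependency-free job. An illustrative, partly assumed shape of the problem:

# platforms/linux.yml, simplified: every job in this file depends on something external
jobs:
- job: Linux
  dependsOn: GetReleaseVersion  # defined in the top-level pipeline YAML, not in this file
  pool:
    vmImage: 'ubuntu-latest'    # illustrative image
  steps:
  - script: script/build        # illustrative step
    displayName: Build Atom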

DeeDeeG commented Jul 3, 2020

I started the "Pull Requests" pipeline (script/vsts/pull-requests.yml) and it seems to be running now! (I had to install that Lighthouse artifact caching extension first.)

Running CI on my personal fork: https://dev.azure.com/DeeDeeG/b/_build/results?buildId=7&view=results

DeeDeeG commented Jul 3, 2020

Now I'm getting a node-gyp rebuild error for @atom/watcher.

Good news: CI is basically up and running for me on my fork.

I'm not sure why that package doesn't build at my fork at the moment, but it's still progress, I suppose.

Full error:
watcher.target.mk:145: recipe for target 'Release/obj.target/watcher/src/binding.o' failed
make: *** [Release/obj.target/watcher/src/binding.o] Error 1
make: Leaving directory '/home/vsts/work/1/s/node_modules/@atom/watcher/build'
gyp ERR! build error 
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack     at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack     at ChildProcess.emit (events.js:315:20)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:275:12)
gyp ERR! System Linux 4.15.0-1089-azure
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/vsts/work/1/s/node_modules/@atom/watcher
gyp ERR! node -v v12.18.1
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok 
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! @atom/watcher@1.3.1 install: `prebuild-install || node-gyp rebuild`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the @atom/watcher@1.3.1 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/vsts/.npm/_logs/2020-07-03T02_37_00_415Z-debug.log

##[error]Bash exited with code '1'.
Finishing: npm install

https://dev.azure.com/DeeDeeG/b/_build/results?buildId=7&view=logs&j=0d2f351d-5899-57e2-0cb5-b37eb91cc930&t=4bcffbf6-507a-564a-0cc4-d6754c0b024f&l=1722

(Sorry my Azure Pipelines project was set to "private" before, should be "public" now.)

aminya linked a pull request on Jul 3, 2020 that will close this issue
aminya commented Jul 3, 2020

Using Azure is pretty cool. It allows directly editing the code on its website. I was wondering if we could use the same setup with a Docker image or something to streamline development. Building Atom locally takes time.

At least we should upload the bootstrapped repo for people to use. If it is already bootstrapped, they can run it using the atom-dev tooling.

DeeDeeG commented Jul 3, 2020

I am kind of reluctant to use a lot of platforms and CI stuff, because I like to verify what I'm seeing locally. If you can get what you describe working, though, I don't see why not to try it. 🤷

aminya commented Jul 3, 2020

Uploading the repo after bootstrapping allows fast development; doing it locally is hard and takes time. For example, on Windows, bootstrapping requires an installation of Visual Studio 15, which is not desirable in 2020.

DeeDeeG commented Jul 3, 2020

I'm not sure where we could upload that many files. The bootstrapped repo can be multiple gigabytes, and it would be a bit slow to download. (Would it just be a zip/tarball hosted somewhere?)

Also, keep in mind that there are some native packages, so at minimum it would be one version of the bootstrapped repo per OS, maybe more than that if other stuff in the platform changes within a given OS.

So I have trouble picturing it, but I am open to seeing it happen.

DeeDeeG commented Jul 11, 2020

I set the Pull Request pipeline to run every night on master (if it's been updated since that pipeline was last run on master), so pull request runs can have fewer cache misses.

I didn't want to trigger the Pull Request pipeline every time we merge into master, because during those moments we often need the runners for actual PRs and for running the Release Branch Build pipeline on master. I expect we'd cancel a lot of those runs. No problem if we want to change it; I just wanted to do something before we forget.
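
(For the record, in scheduled-trigger YAML terms, the "if it's been updated" behavior is just the schedule's default of always: false, which skips a scheduled run when the branch has no new commits. A sketch, with the time assumed:)

schedules:
- cron: "0 0 * * *"  # nightly at midnight UTC (assumed time)
  displayName: Nightly cache warm-up
  branches:
    include:
    - master
  always: false  # the default: skip the run if master has not changed since the last scheduled run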

DeeDeeG commented Jul 14, 2020

We can optimize the implementation of Cache@2 by making sure that the bootstrap step is the last step of a job.

(Basically split "bootstrap" and "build" into two jobs.)

This should cause the cache to be saved immediately after bootstrapping. It will then be available even if building fails for whatever reason.

> After all steps in the job have run and assuming a successful job status, a special "save cache" step is run for each "restore cache" step that was not skipped.

This eliminates the main downside of Cache@2, as we have implemented it, versus SaveCache@1 as seen at upstream. (It is still more complex due to separate caches for each of the node_modules folders, but hopefully we can debug and manage that complexity.)

The alternative: Revert Cache@2 for roughly the same effect and less complexity. (Just have to make sure our vstsFeed variable is correct.)
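
A minimal sketch of that two-job split, assuming a Cache@2 keyed on package-lock.json and a single node_modules folder (the real setup caches several node_modules folders, one per location):

jobs:
- job: Bootstrap
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - task: Cache@2
    displayName: Restore bootstrap cache
    inputs:
      key: 'bootstrap | "$(Agent.OS)" | package-lock.json'
      path: node_modules
  - script: script/bootstrap
    displayName: Bootstrap
    # last step of the job, so the automatic post-job "save cache" runs as soon as it succeeds

- job: Build
  dependsOn: Bootstrap
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - task: Cache@2
    displayName: Restore bootstrap cache
    inputs:
      key: 'bootstrap | "$(Agent.OS)" | package-lock.json'
      path: node_modules
  - script: script/build
    displayName: Build Atom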

aminya commented Jul 15, 2020

Caching a bootstrap that fails building does not make sense. There is a good reason behind this approach: we don't want to cache a faulty configuration. However, using #31 and #46, we can put the tests in a separate job very easily. In #31, tests run separately from the build step, which allows caching to happen if building passes.

DeeDeeG commented Jul 15, 2020

I respectfully disagree. There is more than one reason for a build failure, and only one of them is problems with dependencies. The project has its own code which can introduce errors/CI failures.

> Caching a bootstrap that fails building does not make sense.

It saves about 15 minutes per OS/arch on the next run.

If the bootstrap process finishes with no errors, it is a valid bootstrap result. It can be re-used for testing code tweaks that are outside of, and a separate concern from, the dependencies.

(If the dependencies in the package.json/package-lock.json files are updated, then the fingerprint of the dependencies changes, which causes a bootstrap cache miss on the next run. Fresh bootstrapping is still done when needed.)

Recall that at the time I'm writing this, nothing from the build beyond the bootstrap process is saved in the cache.

Upstream has had this all figured out for some time, and deviating from it (without providing some enhanced use of the cache to justify it, e.g. restoring artifacts from after a successful, finished build in a reasonable way) would introduce a regression in the availability of our cache without a concrete benefit.

Edit to add: Theoretically, there could be some dependency tweaking in the scripts or VSTS templates that would change the build dependencies but not cause cache misses. If so, we should add these files to the cache identifier.

aminya commented Jul 16, 2020

This might sound possible in theory, but it does not work in practice. Other than having a cache that might not be able to build Atom, we will waste a lot of time.

Imagine we have a suitable bootstrap cache. It takes ~5min to prepare a system. Then we spend ~2-3min to restore the cache. Then the bootstrap step is skipped. Now we have wasted ~8min to fire up a system and restore a cache that is not used anywhere. Considering 4 operating systems, this becomes wasting half an hour (4*8 = 32min). Now in the build job, again we spend another ~8min to prepare the systems for building. This becomes 1 hour in total!

Now, if we instead use this 8min that we have spent so far to build atom, we save 1 hour for each CI run.

DeeDeeG commented Jul 16, 2020

You may be right.

In terms of Cache@2: if there is a way to save the cache without having to close the agent and spin up a new one, then that's more what I was thinking. I can't think of a way to do that at the moment, but I am keeping a close eye on the documentation in case something makes this possible.

It is what upstream was doing with SaveCache@1, though. I think SaveCache@1 might be a bit slower to read/write the cache, by a very small amount, but I like that you can determine exactly when the save occurs. I am not going to start a long discussion about that here, because it's not a huge thing. But I have come to prefer the old caching strategy by a bit.

As a minor point to clear up (some build-time numbers): I don't think the 4*8-minute figure is the relevant amount of time, because we do not wait for the runs serially; they happen in parallel, so we do not experience the wait that way. Back to parallel times: indeed, 8 minutes * 2 for a total of 16 minutes is still a meaningful amount of time, so adding that time to builds would be bad. I agree this is bad; I just think the numbers are not as extreme as one hour.

aminya linked a pull request on Jul 16, 2020 that will close this issue
DeeDeeG commented Jul 20, 2020

Now that #46 is merged, and the node/npm install parts of CI aren't in platforms/$(os), we should probably add platforms/templates/preparation.yml and bootstrap.yml to the cache identifier.

This is for the hypothetical case where we have updated Node, NPM, or some environment variables or config relevant to how the bootstrap and build should proceed.
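
With Cache@2, adding files to the cache identifier just means listing them as extra key segments, so their contents get hashed into the key. Roughly, with the paths assumed to match the layout described above:

- task: Cache@2
  inputs:
    key: 'bootstrap | "$(Agent.OS)" | package-lock.json | script/vsts/platforms/templates/preparation.yml | script/vsts/platforms/templates/bootstrap.yml'
    path: node_modules  # illustrative; the real setup caches several folders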

DeeDeeG commented Jul 20, 2020

CI isn't passing on master. Hmm. Not sure why.

aminya commented Jul 25, 2020

> CI isn't passing on master. Hmm. Not sure why.

This is fixed in #63.

DeeDeeG commented Jul 25, 2020

Glad that CI is working.

We should disable or delete the CI steps that try to publish artifacts to Amazon S3 buckets, since we don't own an Amazon S3 account, and that step has been erroring out at the very end, making our "Release Branch Build" pipeline always look red.

Other than that, CI is functionally working (tests passing) so that is great, thank you!

Edit: This is a PR now: #66
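
One low-risk way to disable such steps, short of deleting them, is to gate them behind a condition that never fires. A sketch only; the script name below is a placeholder, not the actual upload step:

- script: node script/vsts/upload-artifacts.js  # placeholder name for the S3 upload step
  displayName: Upload artifacts to S3 (disabled)
  condition: false  # skipped on every run, so the missing S3 account can no longer turn the pipeline red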

DeeDeeG commented Jul 25, 2020

Doing npm install is slower on Windows, due to NTFS's poor performance writing lots of small files versus ext4.

Interesting discussion here: https://github.com/rust-lang/rustup/issues/1540

If possible/if it's okay with you, I'd like to revert #3 (or explicitly set those jobs to use Linux) for the GetReleaseVersion and Release CI jobs, and use forward slashes for cross-platform/Linux compatibility: DeeDeeG@fbc3742

Status: Will make a PR once I'm done with the stuff from my comment directly above this one.
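
The "explicitly set to use Linux" option would amount to pinning those jobs' pools, roughly like this (the image name and placeholder steps are assumptions):

jobs:
- job: GetReleaseVersion
  pool:
    vmImage: 'ubuntu-latest'  # pin to Linux: faster npm installs, and forward-slash paths work natively
  steps:
  - script: echo "compute the release version here"  # placeholder

- job: Release
  dependsOn: GetReleaseVersion
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: echo "package and publish here"  # placeholder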

aminya commented Jul 25, 2020

That sounds like a PR. We will not revert things on master anymore. It is OK if you want to do it in your PR.

DeeDeeG commented Aug 13, 2020

The Linux build in CI has been failing, exiting out after the two package formats have apparently been built successfully.

I also had a similar experience outside of CI on my personal machine, where the dpkg-deb command building the .deb package errors out.

I think this is flakiness, not a hard "100% of the time" issue, but it's still weird.

DeeDeeG commented Aug 13, 2020

Also, just a heads-up that there are some more hard-coded URLs pointing to github.com/atom/atom in various parts of the repo. Semi-relatedly, if we fall behind with which tags are on our atom-nightly-releases repo versus upstream's atom-nightly-releases repo, it can cause the Windows build to fail when building the differential/"update from previous version" partial .nupkg files.

I do think that if we fix the hard-coded URLs like we did in #70, pointing them at our own repos instead, this build-failure scenario should also go away.

aminya commented Aug 17, 2020

I will close this, as it is mostly done.
