
Application descriptor #44

Closed

jkutner opened this issue Feb 20, 2019 · 5 comments

@jkutner (Member) commented Feb 20, 2019

Buildpack consumers often need to customize the commands used to run their images, run the same image with multiple different commands, or define the buildpacks they want to run on their app. It's also common to want to keep this information under source control with the application code (which helps when forking an app).

To solve this, we may need to introduce an application descriptor file to the v3 spec. In v2, this exists in the form of the manifest.yml, Procfile, .buildpacks, and app.json files.

The possible elements included in a v3 application descriptor file might be:

  • A list or groups of buildpacks (used by the lifecycle to override the default groups)
  • Process types and commands (used to override launch.toml)
  • Environment variables used during the build (for example MAVEN_OPTS, which is honored by mvn).
  • Arbitrary key-value pairs for use by a buildpack

The application descriptor might be named something like:

  • app.toml
  • cnb.toml
  • manifest.toml
  • launch.toml (we would need to reconcile this with the existing launch.toml)
  • cnb.xml
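
For illustration only, here is a hedged sketch of what such a descriptor might contain, using cnb.toml as a placeholder name. The schema, keys, and values below are all invented for this example and are not part of any spec:

```toml
# Hypothetical application descriptor (cnb.toml) -- schema invented
# for illustration, not taken from the v3 spec.

# An ordered group of buildpacks, overriding the builder's default groups.
[[buildpacks]]
id = "example/jvm"

[[buildpacks]]
id = "example/maven"

# Process types and commands, overriding what buildpacks write to launch.toml.
[[processes]]
type = "web"
command = "java -jar target/app.jar"

# Environment variables made available during the build.
[build.env]
MAVEN_OPTS = "-Xmx1g"

# Arbitrary key-value pairs for use by a buildpack.
[metadata.example]
some-key = "some-value"
```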

Alternatives

  • Instead of making the application descriptor a part of the spec (and thus the lifecycle), it could be something the pack CLI uses/parses and passes to the lifecycle as values.
    • This has the drawback of requiring that all lifecycle clients/consumers be aware of this file format if we want consistency.
  • We could leave the application descriptor up to the platform(s). For example, Heroku could support a heroku.yml file that contains a list of buildpacks or env vars that are interpreted by Heroku.
    • But this means that an app with a heroku.yml would not work with pack or another platform.
  • Do nothing.
    • The image can be run with the --entrypoint option.
    • It is possible to customize image commands with a Dockerfile that contains only FROM and ENTRYPOINT lines (see the sketch after this list).
    • Also, many platforms, including GCP, have platform-specific mechanisms for overriding the entrypoint of an image.
    • However, the above alternatives don't solve the need for a serialized list of buildpacks.
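
To make that concrete, here is a hedged sketch of the Dockerfile approach described above; the image name and command are hypothetical:

```dockerfile
# Rebase the buildpack-built image, changing only the entrypoint.
FROM myapp:latest
ENTRYPOINT ["bin/worker"]
```

The same override can be applied at runtime without a Dockerfile, e.g. `docker run --entrypoint bin/worker myapp:latest`.
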
@nebhale (Contributor) commented Feb 20, 2019

To kick things off, I’m gently leaning towards alternative #2. I think there’s a strong overarching theme of Cloud Native Buildpacks building an image that can run on pretty much any platform; this is why creating OCI images was such an important technical detail. However, I’ve never seen building (identical) source code on any platform as a goal. From the Cloud Foundry side, we’re looking at a completely decoupled build component (decoupled even from our own platforms) that creates images that can run in any of our PaaS, FaaS, or KaaS abstractions, and so I view the inputs as tied to the builder, not to the platform the image runs on.

Beyond the good engineering of decoupling at this level, one of the larger requests we’ve received from customers is for promotion of built artifacts. Many customers have multi-tiered deployment architectures (dev, QA, prod, etc.), and today they are required to promote the source code (or, in the Java case, compiled JARs) between those environments. The downside of this design is that the application is restaged in every environment, making it susceptible to environment-specific variations of the buildpacks. What they’ve asked for is a way to promote a single built droplet from environment to environment, ensuring that staging happens only once and the same artifact progresses through each phase. CNB encapsulates this directly, not by giving CF customers a way to promote droplets, but rather by building a portable artifact (the OCI image) that is their primary artifact for testing.

To drag this back to the issue at hand, the strong requirement I see is portability of those created images: between deployment environments (dev, QA, prod), between abstractions (PaaS, FaaS, KaaS), and between vendors (PCF, GCE, Heroku). But I haven’t seen evidence that there’s a strong desire for that same portability for source code.

All that being said, I’ve got a laser focus on Enterprise on-prem use-cases and would love to hear the broader view.

@josegonzalez commented

I'm torn here.

I'd love for there to be a single, common manifest file across all PaaS. Many of our users move from Heroku to Dokku, decide there is a crucial limitation in Dokku or that they miss a feature, and move back to Heroku (or some other PaaS). It would be great to have a common, buildpack-related artifact such that users don't have to waste time getting started on a given platform.

As an alternative, I would hope the code for handling platform-specific files could be shared, similar to what @nebhale did with procfile-buildpack, but it makes sense that this would remain the "secret sauce" for a given platform.

I'm also hesitant to add yet another format to the mix. For figuring out which buildpacks to run, Dokku tries to emulate Heroku, which supports the following:

  • buildpacks:set command
  • app.json
  • .buildpacks
  • heroku.yml
  • BUILDPACK_URL env var
  • fallback to bin/detect

It's not clear to me what order these take precedence in, and the various ways in which they interact are confusing to users of the platform. Adding yet another file makes it even more confusing, potentially without any real benefit if other PaaSes don't support it. The same problem exists for Procfile and other such manifests.


I like the idea of having a defined "hook" to manipulate buildpacks. I think manipulating the processes in a launch.toml to match a Procfile or heroku.yml can be done separately, but a clear way of manipulating the list of buildpacks to run would be ideal. I'm not sure what form that should take, though.

@nebhale (Contributor) commented Feb 21, 2019

> It would be great to have a common, buildpack-related artifact

I think that, even in the best of outcomes, we can't solve the problem as extensively as you'd like. It's a complete non-starter to define a standard file and then explicitly limit it to that standard. In all reasonable scenarios this file becomes the required kernel, but platforms would use it, in lieu of the 8 other files we all have (😜), as the one true place to add additional information. In other words, the file would be open for extension.

If this is the case, I think the problem of hopping platforms, finding that your feature isn't there, and hopping back continues to exist. And this specific point, that I explicitly do not think source code can hop platforms without modification, is behind my (loosely-held) desire to not standardize it at all.

@sclevine (Member) commented

Proposal: introduce a standard config file that all platforms should support (cnb.toml?), but allow platforms to accept the same configuration from their own existing configuration files (manifest.yml, app.json, Procfile, etc.). That way an app can be maximally portable, but it doesn’t need to be.

@ekcasey (Member) commented Jan 22, 2020

Discussion moved to buildpacks/rfcs#25 and buildpacks/rfcs#32.

ekcasey closed this as completed Jan 22, 2020