
How to skip one stage from multi-stage docker build #1134

Closed
venkat-koppisetti opened this issue Jun 20, 2018 · 18 comments

venkat-koppisetti commented Jun 20, 2018

Description

We have a multi-stage Docker build that creates an RPM in each stage. If an RPM is already present in the folder, I don't want to build that particular stage.
How can I skip one stage from a multi-stage Docker build?
Is there any conditional in the Dockerfile, like an if, to skip one stage or a few Docker commands?

Describe the results you expected:

Skip one stage from multi-stage docker build

Output of docker version:

[root@localhost consul]# docker -v
Docker version 18.03.0-ce, build 0520e24

Output of docker info:

Containers: 5
Running: 5
Paused: 0
Stopped: 0
Images: 105
Server Version: 18.03.0-ce
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-693.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.859GiB
Name: localhost.localdomain
ID: QSFU:ODKS:LJGZ:GC34:KXTP:6B7Y:5UMB:Q7WT:V2X3:4K6M:DFLQ:I7WS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):
Running docker in Centos7 VirtualBox

@cpuguy83 (Collaborator)

There is no way to skip stages per se.
You can build specific stages: docker build --target=<stage>.

Why are you creating the same thing in multiple stages? You can have a stage just for creating that thing and then copy it into the stage that requires it.
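A minimal sketch of that pattern (the base image, paths, and make-rpm.sh script are hypothetical, not from this issue):

# Stage that only builds the RPM
FROM centos:7 as rpm-builder
COPY . /src
RUN /src/make-rpm.sh /out/app.rpm    # hypothetical build script

# Final stage copies just the artifact it needs
FROM centos:7 as app
COPY --from=rpm-builder /out/app.rpm /tmp/app.rpm
RUN yum install -y /tmp/app.rpm

The RPM-building stage can also be built on its own with docker build --target=rpm-builder -t app-rpm .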

venkat-koppisetti (Author) commented Jun 21, 2018

Thanks for your quick response.
Each stage performs a different build. Suppose in one stage we are creating an RPM; we don't want to recreate the RPM if it is already there, in which case we want to skip that stage.

Apart from that, I have a couple of questions:

  1. Can we include one Dockerfile in another Dockerfile?
  2. I am starting a container with systemd as the init process and running a couple of services through docker run. If a service fails, I don't get any log info for that service through the docker logs command, only the systemd logs.
    What is the best way to view the logs for all services through the docker logs command?

@thaJeztah (Member)

we don't want to recreate the RPM if it is already there, in which case we want to skip that stage.

Not sure I understand the "if rpm is already there" part. Docker build will cache Dockerfile steps where possible.

BuildKit (which will be included as an experimental feature in the next Docker release) will further optimize that, and skip stages that are not needed for the --target stage.
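As a rough sketch of what that looks like once BuildKit is available (the stage name builder and image tag are hypothetical):

DOCKER_BUILDKIT=1 docker build --target=builder -t myapp:build .

With BuildKit enabled, only builder and the stages it depends on are executed; unrelated stages in the same Dockerfile are skipped.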

marcellodesales commented Jul 5, 2018

  • @venkat-koppisetti , @cpuguy83
  • We have a similar need to skip stages
  • All our Dockerfiles are also multi-stage, and we build using a Jenkinsfile

Requirement

  • Each stage builds an image used to run unit and integration tests
  • Each stage is run in parallel in a Jenkins pipeline
  • When all the different test containers pass, we need to build an image with the binary (war, react, etc.)
    • The stage that is meant for publishing a binary is still needed to cover extra requirements such as reports, older monolith support, etc.
    • Skipping the stages that are not part of the build is important. We want to prevent docker build from running a specific stage that's not meant to be run.
    • We'd like to be able to specify in the Dockerfile that a stage is optional

@mauricios

I have the following Dockerfile with multiple stages. For efficiency, I'm running different tasks in different stages, so I want to skip some stages depending on the needs of the final stage.

# Defining environment
ARG APP_ENV=dev

# Building the base image
FROM alpine as base
RUN echo "running BASE commands"

# Building the pre-install Prod image
FROM base as prod-preinstall
RUN echo "running PROD pre-install commands"

# Building the Dev image
FROM base as dev-preinstall
RUN echo "running DEV pre-install commands"

# Installing the app files
FROM ${APP_ENV}-preinstall as install
COPY app ./app
RUN echo "running install commands"

FROM install as prod-postinstall
RUN echo "running PROD post-install commands"

FROM install as dev-postinstall
RUN echo "running DEV post-install commands"

FROM ${APP_ENV}-postinstall as final
RUN echo "running final commands"

If APP_ENV=prod, I want to skip all the stages that the PROD stages do not depend on, not only for efficiency but because the DEV stages will break at some point due to missing stages. Now, I know that skipping stages is not currently supported in Dockerfiles, and this scenario can be achieved using multiple Dockerfiles (with duplicated code) or by adding a wrapper script that checks the APP_ENV argument (see the sketch at the end of this comment).

This would be a great feature for building more complex and efficient Docker images.
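A minimal sketch of the wrapper-script workaround mentioned above (the script name and image tag are hypothetical):

#!/bin/sh
# build.sh -- select the stage chain via a build argument
APP_ENV="${1:-dev}"
docker build --build-arg APP_ENV="$APP_ENV" -t "myapp:$APP_ENV" .

Note that with the classic builder this still executes every stage in the Dockerfile; only BuildKit (discussed below) actually skips the stages the final stage doesn't depend on.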

@thaJeztah (Member)

@mauricios yes; BuildKit does exactly that; it will only build stages that are needed to build the final stage (or stage that's specified through --target). BuildKit is integrated in Docker 18.06 as an experimental feature (release candidates for 18.06 are already available; Docker for Mac/Windows releases will be available soon).

thaJeztah (Member) commented Jul 20, 2018

Docker 18.06 has been released, and includes experimental BuildKit support; to use it:

  • enable experimental features on the daemon ({"experimental": true} in the daemon.json configuration file)
  • set the DOCKER_BUILDKIT=1 environment variable to use BuildKit (remove the environment variable, or set it to 0, to disable it)

Here's what it looks like:

with APP_ENV=prod:

DOCKER_BUILDKIT=1 docker build -t prod --build-arg APP_ENV=prod -<<'EOF'
# Defining environment
ARG APP_ENV=dev

# Building the base image
FROM alpine as base
RUN echo "running BASE commands"

# Building the pre-install Prod image
FROM base as prod-preinstall
RUN echo "running PROD pre-install commands"

# Building the Dev image
FROM base as dev-preinstall
RUN echo "running DEV pre-install commands"

# Installing the app files
FROM ${APP_ENV}-preinstall as install
RUN echo "running install commands"

FROM install as prod-postinstall
RUN echo "running PROD post-install commands"

FROM install as dev-postinstall
RUN echo "running DEV post-install commands"

FROM ${APP_ENV}-postinstall as final
RUN echo "running final commands"
EOF

Only the PROD build-stages are executed:

[+] Building 2.1s (9/9) FINISHED                                       
 => local://context (.dockerignore)                               0.0s
 => => transferring context: 02B                                  0.0s
 => local://dockerfile (Dockerfile)                               0.0s
 => => transferring dockerfile: 705B                              0.0s
 => docker-image://docker.io/library/alpine:latest                0.0s
 => => resolve docker.io/library/alpine:latest                    0.0s
 => /bin/sh -c echo "running BASE commands"                       0.5s
 => /bin/sh -c echo "running PROD pre-install commands"           0.4s
 => /bin/sh -c echo "running install commands"                    0.4s
 => /bin/sh -c echo "running PROD post-install commands"          0.4s
 => /bin/sh -c echo "running final commands"                      0.4s
 => exporting to image                                            0.0s
 => => exporting layers                                           0.0s
 => => writing image sha256:57048e9c9845084931f74490091563aadc50  0.0s
 => => naming to docker.io/library/prod                            0.0s

with APP_ENV=dev:

DOCKER_BUILDKIT=1 docker build -t dev -<<'EOF'
# Defining environment
ARG APP_ENV=dev

# Building the base image
FROM alpine as base
RUN echo "running BASE commands"

# Building the pre-install Prod image
FROM base as prod-preinstall
RUN echo "running PROD pre-install commands"

# Building the Dev image
FROM base as dev-preinstall
RUN echo "running DEV pre-install commands"

# Installing the app files
FROM ${APP_ENV}-preinstall as install
RUN echo "running install commands"

FROM install as prod-postinstall
RUN echo "running PROD post-install commands"

FROM install as dev-postinstall
RUN echo "running DEV post-install commands"

FROM ${APP_ENV}-postinstall as final
RUN echo "running final commands"
EOF

Only the DEV build-stages are executed:

[+] Building 1.8s (9/9) FINISHED                                         
 => local://dockerfile (Dockerfile)                                 0.0s
 => => transferring dockerfile: 705B                                0.0s
 => local://context (.dockerignore)                                 0.0s
 => => transferring context: 02B                                    0.0s
 => docker-image://docker.io/library/alpine:latest                  0.0s
 => CACHED /bin/sh -c echo "running BASE commands"                  0.0s
 => /bin/sh -c echo "running DEV pre-install commands"              0.4s
 => /bin/sh -c echo "running install commands"                      0.5s
 => /bin/sh -c echo "running DEV post-install commands"             0.4s
 => /bin/sh -c echo "running final commands"                        0.4s
 => exporting to image                                              0.0s
 => => exporting layers                                             0.0s
 => => writing image sha256:1cf6caaaab4689f6b11ea4d61b5b6175e54074  0.0s
 => => naming to docker.io/library/dev                              0.0s

@volkyeth

Being able to interpolate variables in the build directives is great, but maybe we could have something more seamless, like building only the depended-upon stages for each target.

The dependency could be inferred both from COPY --from=<stage> directives and FROM <stage> directives. So for example, with this Dockerfile:

# Compile module Foo
FROM gcc as foo
RUN compile-foo.sh

# Install runtime dependencies
FROM alpine as base
RUN apk add <dependencies>

# Building img flavor A
FROM base as versionA
RUN [...]

# Building img flavor B using module Foo
FROM versionA as versionB
COPY --from=foo /lib/foo /lib/foo

We could build the dependency graph based on the target:

versionB --> versionA --> base
        `--> foo

So building with versionA as the target would trigger the base and versionA stages.
Likewise, building with versionB as the target would also trigger the base and versionA stages, but it would additionally trigger the foo stage, as that dependency is implied by the COPY directive.
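A sketch of how that would look with the stages above (the image tags are hypothetical):

docker build --target=versionA -t app:versionA .   # under this proposal: triggers base, then versionA
docker build --target=versionB -t app:versionB .   # additionally triggers foo (via COPY --from), then versionB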

@cpuguy83 (Collaborator)

@bwowk this is exactly what buildkit does.

volkyeth commented Oct 19, 2018

@cpuguy83
Oh, my bad, I completely misinterpreted the example.

I guess @thaJeztah meant to write ARG APP_ENV=prod in the first sample Dockerfile:

with APP_ENV=prod:

DOCKER_BUILDKIT=1 docker build -t prod --build-arg APP_ENV=prod -<<'EOF'
# Defining environment
ARG APP_ENV=dev

That's great :)
Can't wait to see it included as a stable feature.

@thaJeztah (Member)

I guess @thaJeztah meant to write ARG APP_ENV=prod in the first sample Dockerfile:

No, that's not needed; the ARG APP_ENV=dev line sets the default value for APP_ENV. That means that someone building the Dockerfile without setting a --build-arg will build a dev image by default. To override the default, provide a --build-arg flag.
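For example (a sketch reusing the Dockerfile from the earlier comment; the image tags are hypothetical):

docker build -t myapp:dev .                             # APP_ENV keeps its default value (dev)
docker build --build-arg APP_ENV=prod -t myapp:prod .   # overrides the default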

@romafederico

Hey @thaJeztah ... Thanks for that example. Quick question: Is it possible that you mean --target instead of -t in the following command?

DOCKER_BUILDKIT=1 docker build -t prod --build-arg APP_ENV=prod -<<'EOF'

I thought -t was for --tag...

@thaJeztah (Member)

That's correct; -t is shorthand for --tag. In my example, both the prod and dev images build the final stage (target), but the --build-arg sets the value for APP_ENV, so the final stage uses a different stage as its base image.

@keithmattix

Did this ever become a stable feature? I noticed the issue is still open.

@thaJeztah (Member)

BuildKit is available on current versions of Docker (but it does still require you to enable it with the DOCKER_BUILDKIT=1 environment variable).

@thaJeztah (Member)

Let me close this issue, because this is addressed in BuildKit, and it is not something that can be addressed in the classic builder.

DominicRoyStang added a commit to DominicRoyStang/uvindex that referenced this issue Apr 10, 2020
This saves time by skipping the build steps that aren't required for a prod build.
More information here: docker/cli#1134
@njleonzhang

@thaJeztah Waiting for this feature to be released formally; it's really helpful.

@thaJeztah (Member)

@njleonzhang the feature has been released; BuildKit is included in Docker 18.09 and up (including the current 19.03 release). It's not enabled by default (see moby/moby#40379), but it's ready to be used for building images, and is generally recommended over the classic builder. Enable BuildKit by setting DOCKER_BUILDKIT=1 in your environment.
