How to skip one stage from multi-stage docker build #1134
There is no way to skip stages per se. Why are you creating the same thing in multiple stages? You can have a stage just for creating that thing and then copy it into the stage that requires it.
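As a sketch of that suggestion (the stage names, paths, and `myapp.rpm` are hypothetical, not from this thread), a single stage can produce the artifact once, and any stage that needs it copies it in with `COPY --from`:

```dockerfile
# Hypothetical builder stage: produce the rpm exactly once.
FROM centos:7 AS rpm-builder
# Real rpmbuild steps would go here; touch stands in for the output.
RUN mkdir -p /out && touch /out/myapp.rpm

# Final stage: reuse the artifact instead of rebuilding it.
FROM centos:7 AS final
COPY --from=rpm-builder /out/myapp.rpm /tmp/myapp.rpm
# ... install /tmp/myapp.rpm here
```

Because the builder stage's layers are cached, rebuilding the final stage doesn't re-run the rpm build unless its inputs change.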
Thanks for your quick response. Apart from that, I have a couple of issues.
Not sure I understand the "if rpm is already there" part. Docker build will cache Dockerfile steps where possible. BuildKit (which will be included as an experimental feature in the next docker release) will further optimize that, and skip stages that are not needed for the target stage.
Requirement

I have the following Dockerfile with multiple stages. For efficiency I'm running different tasks in different stages, so I want to skip some stages depending on the needs of the final stage.

```dockerfile
# Defining environment
ARG APP_ENV=dev
# Building the base image
FROM alpine as base
RUN echo "running BASE commands"
# Building the pre-install Prod image
FROM base as prod-preinstall
RUN echo "running PROD pre-install commands"
# Building the Dev image
FROM base as dev-preinstall
RUN echo "running DEV pre-install commands"
# Installing the app files
FROM ${APP_ENV}-preinstall as install
COPY app ./app
RUN echo "running install commands"
FROM install as prod-postinstall
RUN echo "running PROD post-install commands"
FROM install as dev-postinstall
RUN echo "running DEV post-install commands"
FROM ${APP_ENV}-postinstall as final
RUN echo "running final commands"
```

This could be a pretty good feature for building more complex and efficient Docker images.
@mauricios yes; BuildKit does exactly that; it will only build stages that are needed to build the final stage (or the stage that's specified through `--target`).
Docker 18.06 has been released, and includes experimental BuildKit support; to use it, set `DOCKER_BUILDKIT=1`. Here's what it looks like:

```shell
DOCKER_BUILDKIT=1 docker build -t prod --build-arg APP_ENV=prod -<<'EOF'
# Defining environment
ARG APP_ENV=dev
# Building the base image
FROM alpine as base
RUN echo "running BASE commands"
# Building the pre-install Prod image
FROM base as prod-preinstall
RUN echo "running PROD pre-install commands"
# Building the Dev image
FROM base as dev-preinstall
RUN echo "running DEV pre-install commands"
# Installing the app files
FROM ${APP_ENV}-preinstall as install
RUN echo "running install commands"
FROM install as prod-postinstall
RUN echo "running PROD post-install commands"
FROM install as dev-postinstall
RUN echo "running DEV post-install commands"
FROM ${APP_ENV}-postinstall as final
RUN echo "running final commands"
EOF
```

Only the PROD build-stages are executed.
And with:

```shell
DOCKER_BUILDKIT=1 docker build -t dev -<<'EOF'
# Defining environment
ARG APP_ENV=dev
# Building the base image
FROM alpine as base
RUN echo "running BASE commands"
# Building the pre-install Prod image
FROM base as prod-preinstall
RUN echo "running PROD pre-install commands"
# Building the Dev image
FROM base as dev-preinstall
RUN echo "running DEV pre-install commands"
# Installing the app files
FROM ${APP_ENV}-preinstall as install
RUN echo "running install commands"
FROM install as prod-postinstall
RUN echo "running PROD post-install commands"
FROM install as dev-postinstall
RUN echo "running DEV post-install commands"
FROM ${APP_ENV}-postinstall as final
RUN echo "running final commands"
EOF
```

Only the DEV build-stages are executed.
Being able to interpolate vars in the build directives is great, but maybe we could have something more seamless, like building only the depended-upon stages for each target. The dependency could be inferred both from `FROM` and `COPY --from` references.

We could build the dependency graph based on the target, so building with a given target would only execute the stages that target depends on.
@bwowk this is exactly what buildkit does.
@cpuguy83 I guess @thaJeztah meant to write
That's great :)
No, that's not needed.
Hey @thaJeztah ... Thanks for that example. Quick question: Is it possible that you mean
I thought |
That's correct.
Did this ever become a stable feature? I noticed the issue is still open.
BuildKit is available in current versions of Docker (but still requires you to enable it with the `DOCKER_BUILDKIT=1` environment variable).
Let me close this issue, because this is addressed in BuildKit, and not something that can be addressed in the classic builder.
This saves time by skipping the build steps that aren't required for a prod build. More information here: docker/cli#1134
@thaJeztah Waiting for this feature to be released formally; it's really helpful.
@njleonzhang the feature has been released; BuildKit is included in Docker 18.09 and up (including the current 19.03 release). It's not enabled by default (see moby/moby#40379), but it's ready to be used for building images (and generally recommended over the classic builder); enable BuildKit by setting `DOCKER_BUILDKIT=1`.
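For reference, the two usual ways to enable it (the image name below is a placeholder):

```shell
# Per invocation: prefix the build with the environment variable.
DOCKER_BUILDKIT=1 docker build -t myimage .

# Daemon-wide: add the following to /etc/docker/daemon.json and restart dockerd:
#   { "features": { "buildkit": true } }
```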
Description

We have a multi-stage docker build that creates an rpm in each stage. If one rpm is already there in a folder, I don't want to build that particular stage.

So how do I skip one stage from a multi-stage docker build?

Is there any condition in the Dockerfile, like an if, to skip one stage or a few docker commands?

Describe the results you expected:

Skip one stage from the multi-stage docker build.
Output of `docker version`:

[root@localhost consul]# docker -v
Docker version 18.03.0-ce, build 0520e24
Output of `docker info`:

Containers: 5
Running: 5
Paused: 0
Stopped: 0
Images: 105
Server Version: 18.03.0-ce
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-693.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.859GiB
Name: localhost.localdomain
ID: QSFU:ODKS:LJGZ:GC34:KXTP:6B7Y:5UMB:Q7WT:V2X3:4K6M:DFLQ:I7WS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Additional environment details (AWS, VirtualBox, physical, etc.):
Running Docker in a CentOS 7 VirtualBox VM.