Developer Experience with Spread #9

Open · ethernetdan opened this issue Mar 12, 2016 · 9 comments

@ethernetdan (Member)

The primary goal of localkube is to provide a streamlined development experience for people working with Kubernetes. This involves abstracting the complexities of operating a Kubernetes cluster in a development context.

Here is the workflow I imagine for localkube:

Developing Docker images/Kubernetes objects

  • Run a single command to set up the cluster
    • Uses docker-machine to bring up a VM (non-Linux only)
    • Starts container(s) running Kubernetes
    • Sets up kubectl credentials for you
  • docker build the image that you want to work with
    • spread will soon integrate building
  • Create Kubernetes objects that use the image built above
  • Run spread build . to deploy to the cluster
    • Since localkube shares a Docker daemon with your host, there is no need to push images :)
  • Iterate on your application, updating images and objects and running spread build . each time you want to deploy changes (a concrete sketch of this loop follows the list)
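
To make that loop concrete, a rough sketch on OS X (the machine name "localkube" and image name "myapp" are made up; spread build . is the only spread command assumed):

# Point the Docker client at the VM's shared daemon (non-Linux only).
eval $(docker-machine env localkube)

# Build against the shared daemon -- the image never needs a push.
docker build -t myapp:dev .

# Deploy the Kubernetes objects that reference myapp:dev; rerun after
# each change.
spread build .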

Developing code

If you are developing code, I would follow the same process as above but mount the code being changed using a HostPathVolumeSource. This way changes are immediately available within the container.

Linux users can simply mount the path storing the code. For OS X and Windows users, docker-machine mounts /Users and C:\Users, respectively, inside the root of the VM. For more information, see this page.
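
As an illustration (every name and path here is invented), a pod mounting local code via a hostPath volume could look like this; on OS X the path must live under /Users for the docker-machine mount to apply:

# Create a dev pod whose /app directory is the code on your host.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-dev
spec:
  containers:
  - name: myapp
    image: myapp:dev          # built against the shared daemon, never pushed
    volumeMounts:
    - name: src
      mountPath: /app         # where the container expects the code
  volumes:
  - name: src
    hostPath:
      path: /Users/me/myapp   # edits here show up in the container immediately
EOF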

@rata commented Mar 14, 2016

Cool, this seems great! One important and nice detail: if the image already has content at the path where the host volume is mounted, that content is shadowed by the contents of the host directory. So you can mount over a path even if your image ships the code there too.

How can I help with this?

@jsquirrelz

Overall sounds good. I do have some philosophical questions about the developer/command workflow, though. In redspread/spread#59, @mfburnett mentions the "build-push-deploy" paradigm, and I agree that being explicit with commands is important.

From what I understand, spread build . will build images, spread push will push the images to a registry, and spread deploy will push updated Kubernetes objects to a cluster.

So even though the images are available to the shared daemon after building, does that mean they should automatically be pushed and deployed locally? Seems like a single responsibility principle violation.

In this situation, I'd expect to follow a workflow like spread build ., spread push <local_registry_name>, spread deploy <localkube_cluster_name>. No right or wrong answer. Just curious what you think.

@ethernetdan (Member, Author)

@rata: How does this compare to your current workflows? I would love any feedback on how to make this as dev-friendly as possible, and if you are feeling ambitious, feel free to contribute features you think would help enhance the workflow. I think there could be a whole suite of these implementing common Kubernetes development tasks that are currently cumbersome.

@jsquirrelz: My thinking with spread build was to have single-command iteration with localkube, functioning similarly to docker-compose up, which builds images (if necessary) and launches them.

Something I've found gets tiresome in development with Kube is the cycle of building images, pushing images, and updating objects. Since the duration of building and pushing is variable (and can take a while), it leads to significant disruption in my development "flow". I also like not having to push because it allows offline work.

Maybe we keep spread build solely building images, to maintain the workflow you outlined above, and dedicate a new command for iteration which builds images and deploys objects using the local daemon without pushing. What do you think?
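
Roughly, the split would be (spread build . is from the thread; the name of the iteration command is purely a placeholder):

spread build .   # only builds images
spread up .      # placeholder name: builds if needed, then deploys via the local daemon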

@rata commented Mar 15, 2016

@ethernetdan: I agree that a command named "build" should only build. But another command that "builds and deploys" should be there too, IMHO.

I think that:

  • Development should work just fine while offline, using a local Kubernetes
  • You should develop "as always", using code on your local machine
  • You should be able to specify multiple HostPathVolumeSources to mount at different locations in the image

Then:

  • Code you develop is local and is mounted into the Docker image with HostPathVolumeSource options (you can specify several if your app layout needs it; see the sketch after this list)
  • The image doesn't change often, as I expect most changes to be code changes, so most of the time the image doesn't need to be rebuilt and the HostPathVolumeSource makes the code on the local machine "just work"
  • When the image does change (a Dockerfile edit, for example), the new command I mentioned just deletes the old container and creates a new one with the same HostPathVolumeSources mounted (there can be many)
  • All of this also works offline, as there is no need to push the image to a public repository and everything else is local (Kubernetes and the HostPathVolumeSources)
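
As a sketch of that multi-mount case (all names and paths invented), one pod can attach several hostPath volumes at different locations:

# Two local directories mounted at separate points in the same container.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-dev
spec:
  containers:
  - name: myapp
    image: myapp:dev
    volumeMounts:
    - name: app-src
      mountPath: /srv/app       # application code
    - name: app-conf
      mountPath: /etc/myapp     # configuration
  volumes:
  - name: app-src
    hostPath:
      path: /Users/me/myapp/src
  - name: app-conf
    hostPath:
      path: /Users/me/myapp/conf
EOF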

And I think this addresses everyone's concerns so far. What do you think? Am I missing something?

@jsquirrelz

@ethernetdan totally get you on the significant disruption to your development flow. I haven't used a local registry before, so I'm not familiar with how long a local push takes, but I'd imagine it'd be much faster than pushing to a public registry on Docker Hub or a private one on another server? If not, pardon my ignorance. But like @rata said:

"Development should work just fine being offline and using a local kubernetes".

And pushing to a local registry (e.g. localhost:5000) shouldn't affect offline development, right?
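
For reference, that local-registry variant would be something like the following (image names invented; registry:2 is Docker's stock registry image). Everything stays on the machine, so it still works offline:

docker run -d -p 5000:5000 --name registry registry:2   # throwaway local registry
docker tag myapp:dev localhost:5000/myapp:dev
docker push localhost:5000/myapp:dev                    # no network beyond localhost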

Generally I'm imagining an easy plug & play for different environments where you deploy to development/staging/production (which all could have different registries/clusters defined in a config file).

I agree, there should be a single command that builds/deploys. I just can't decide whether requiring build/push/deploy for local development is unnecessary extra work, or whether skipping the push introduces inconsistency with the practices you need when pushing to staging/production. Nonetheless, a local registry isn't necessary to deploy locally, so my vote would be to not include one.

But since it's really an isolated service, it could probably be incorporated pretty easily down the line if there was ever a true need for it. I'm down to open an issue and lead that investigation if you think it could help.

Overall I think @ethernetdan and @rata have described a solid development flow 👏

The key developer experiences I'm seeing are:

  • Utilizing HostPathVolumeSource to minimize image building and pushing for accelerated development when working on code.
  • Ability to work completely offline.
  • Single command to build/(push)/deploy Docker images/Kubernetes objects.

I do think one of the key points @rata made was:

"When the image changes (changing the Dockerfile, for example), this new command I talked about just deletes the old container and creates a new one with the same HostPathVolumeSource mounted (can be many)"

Is old container/image cleanup currently supported by spread? If so, awesome. If not, how can I help? Because that's definitely a priority for me. Also, would having a local registry make that easier?

P.S. When I want to start with a fresh docker daemon, I usually run a script like:

#!/bin/bash
# Stop every container, running or not
docker stop $(docker ps -a -q)
# Remove all exited containers
docker rm $(docker ps -a -q -f status=exited)
# Remove all dangling volumes
docker volume rm $(docker volume ls -q -f dangling=true)

There has got to be a better way, right?
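
(For what it's worth, later Docker releases, 1.13 and up, newer than this thread, fold that cleanup into dedicated prune subcommands:)

docker container prune   # remove all stopped containers
docker volume prune      # remove all dangling volumes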

@rata commented Mar 15, 2016

@jsquirrelz: Yes, with Kubernetes you can update a container and that will delete the old one and create a new one. No need for that script that kills everything; Kubernetes can handle this for us :)

@zoidyzoidzoid (Contributor)

@ethernetdan: I agree that a command named "build" should only build. But another command that "builds and deploys" should be there too, IMHO.

I agree here too. A build should just build a Docker image, especially since it mirrors the Docker engine's command of the same name. But deploy/run should do a build by default unless specified otherwise, similar to --no-recreate for docker-compose up.

The key developer experiences I'm seeing are:

  • Utilizing HostPathVolumeSource to minimize image building and pushing for accelerated development when working on code.
  • Ability to work completely offline.
  • Single command to build/(push)/deploy Docker images/Kubernetes objects.

I think something like docker-compose up/wercker dev that used HostPathVolumeSource and automatically built/deployed when necessary while developing locally would be great.

totally get you on the significant disruption to your development flow. I haven't used a local registry before, so I'm not familiar with how long a local push takes, but I'd imagine it'd be much faster than pushing to a public registry on Docker Hub or a private one on another server? If not, pardon my ignorance.

So doing a docker build . on an OS X machine builds the Docker image inside the docker-machine VM, which should make it available to k8s running in that machine? The only issue is that the default k8s config uses imagePullPolicy: IfNotPresent, so we can't just tag an image with :latest without changing the default to Always. See "Updating images" in the Kubernetes docs.

Is old container/image cleanup currently supported by spread? If so, awesome. If not, how can I help? Because that's definitely a priority for me. Also, would having a local registry make that easier?

Kubernetes does container and image cleanup, so it should be okay, though by default it only cleans up images when free space falls below a certain threshold.

From what I understand, spread build . will build images, spread push will push the images to a registry, and spread deploy will push updated Kubernetes objects to a cluster.
So even though the images are available to the shared daemon after building, does that mean they should automatically be pushed and deployed locally? Seems like a single responsibility principle violation.

Isn't building them enough, so we don't have to push them? And just maybe manually deploy them again?

I feel like a command similar to docker-compose up, but with shades of wercker's dev pipeline, would be useful. @rata's suggestion of using HostPathVolumeSource sounds great; a command that used it and rebuilt and redeployed on local changes when necessary would be awesome for local development, so people don't end up needing to write their own Makefiles/shell scripts to do spread build . && spread update/deploy/localdev.

For all our k8s work at the moment we use bash scripts (for "building + pushing" and "deploying"), and then a Makefile to automate all the docker-compose stuff. Having to build && up is a bit tedious when we basically always want both, and a kubectl rolling-update failing because our deploy script doesn't automatically build + push is frustrating.

@ethernetdan (Member, Author)

@rata @zoidbergwill @jsquirrelz sorry for taking so long to respond

I agree, there should be a single command that builds/deploys. I just can't decide whether requiring build/push/deploy for local development is unnecessary extra work, or whether skipping the push introduces inconsistency with the practices you need when pushing to staging/production. Nonetheless, a local registry isn't necessary to deploy locally, so my vote would be to not include one.

I definitely see value in consistency; it would be nice if deploying to a remote cluster followed a similar workflow to deploying locally. An unfortunate side effect of running a registry within localkube is that we would end up storing the image twice (registry + daemon). This might be something we could simplify with versioning (redspread/spread#122): instead of pushing images to other clusters, we'd push versioned references to images, leaving the actual transport up to Docker/Kubernetes.

@ethernetdan: I agree that a command named "build" should only build. But another command that "builds and deploys" should be there too, IMHO.

I agree here too. A build should just build a Docker image, especially since it mirrors the Docker engine's command of the same name. But deploy/run should do a build by default unless specified otherwise, similar to --no-recreate for docker-compose up.

+1, I like that interface. Any preference between deploy and run? (or another name)

I think something like docker-compose up/wercker dev that used HostPathVolumeSource and automatically built/deployed when necessary while developing locally would be great.

Should we just have users create Volume objects themselves, or is there some way we could better facilitate that?

So doing a docker build . on an OS X machine builds the Docker image inside the docker-machine VM, which should make it available to k8s running in that machine? The only issue is that the default k8s config uses imagePullPolicy: IfNotPresent, so we can't just tag an image with :latest without changing the default to Always. See "Updating images" in the Kubernetes docs.

Not sure what to do with this one; changing that many fields seems invasive but at the same time would bring convenience.

@rata commented Apr 3, 2016

On Sun, Apr 03, 2016 at 02:57:25PM -0700, Dan Gillespie wrote:

@rata @zoidbergwill @jsquirrelz sorry for taking so long to respond

(sorry, will reply to the rest tomorrow)

So doing a docker build . on an OS X machine builds the Docker image inside the docker-machine VM, which should make it available to k8s running in that machine? The only issue is that the default k8s config uses imagePullPolicy: IfNotPresent, so we can't just tag an image with :latest without changing the default to Always. See "Updating images" in the Kubernetes docs.

Not sure what to do with this one; changing that many fields seems invasive but at the same time would bring convenience.

No, this is wrong. Using ":latest" on Kubernetes automatically sets the policy to Always. However, it is not possible (or at least not straightforward) to redeploy an unchanged pod/Deployment/RC YAML definition file.
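
To illustrate that default (pod and image names invented): with a :latest tag and imagePullPolicy omitted, Kubernetes fills in Always; any other tag defaults to IfNotPresent.

# Pull policy defaults to Always here purely because the tag is :latest.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest   # implies imagePullPolicy: Always when unset
EOF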
