Simplify the cluster provisioning process for a cluster with one master and multiple worker nodes. It should be secured with SSL and have all the default add-ons. There should not be significant differences in the provisioning process across deployment targets (cloud provider + OS distribution) once machines meet the node specification.
Cluster provisioning can be broken into a number of phases, each with its own exit criteria. In some cases, multiple phases will be combined together to more seamlessly automate the cluster setup, but in all cases the phases can be run sequentially to provision a functional cluster.
It is possible that for some platforms we will provide an optimized flow that combines some of the steps together, but that is out of scope of this document.
Note: Exit criteria in the following sections are not intended to list all tests that should pass, but rather those that must pass.
Objective: Create a set of machines (master + nodes) where we will deploy Kubernetes.
For this phase to be completed successfully, the following requirements must be met on all nodes:
- Basic connectivity between nodes (i.e. nodes can all ping each other)
- Docker installed (and, in production setups, monitored so that it is always running)
- One of the supported OS distributions
We will provide a node specification conformance test that will verify if provisioning has been successful.
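Such a conformance check could start as simple shell probes of the requirements above; this is only a sketch, and the `PEERS` variable and check names are assumptions, not part of the planned test suite:

```shell
# Minimal sketch of a node-spec conformance check, run on each machine.
PEERS="${PEERS:-}"   # space-separated IPs of the other nodes (assumption)
status=ok

# 1. Docker installed and the daemon responding
if command -v docker >/dev/null 2>&1; then
  docker info >/dev/null 2>&1 || status="docker-not-running"
else
  status="docker-missing"
fi

# 2. Basic connectivity: every peer answers a ping
for p in $PEERS; do
  ping -c 1 -W 2 "$p" >/dev/null 2>&1 || status="unreachable:$p"
done

echo "node-check: $status"
```

A real conformance test would report each requirement separately rather than a single status string, but the probes themselves are the same.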
This step is provider specific and will be implemented separately for each cloud provider + OS distribution using provider-specific technology (CloudFormation, Deployment Manager, PXE boot, etc.). Some OS distributions may meet the provisioning criteria without needing to run any post-boot steps, as they ship with everything required by the node specification by default.
Substeps (using GCE as an example):
- `ssh` to all machines
- `ssh` to all machines and run a test docker image
- `ssh` to master and nodes and ping the other machines
Objective: Generate the security certificates used to configure secure communication between the client, master, and nodes.
TODO: Enumerate certificates which have to be generated.
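Until that enumeration exists, here is a minimal sketch of minting a cluster CA and a single apiserver serving certificate with `openssl`; the common names and lifetimes are illustrative, and subject alternative names (master IPs/DNS names) are omitted for brevity:

```shell
# Self-signed cluster CA (CN is illustrative)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=kubernetes-ca"

# Key + CSR for the apiserver (SANs omitted for brevity; required in practice)
openssl req -newkey rsa:2048 -nodes \
  -keyout apiserver.key -out apiserver.csr -subj "/CN=kube-apiserver"

# Sign the CSR with the CA
openssl x509 -req -days 365 -in apiserver.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver.crt
```

The same pattern (key, CSR, CA-signed certificate) repeats for each component that needs its own identity.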
Objective: Run kubelet and all the required components (e.g. etcd, apiserver, scheduler, controllers) on the master machine.
`nsenter` to work around problems with mount propagation
Objective: Start kubelet on all nodes and configure the Kubernetes network. Each node can be deployed separately, and the implementation should make it ~impossible to break this assumption.
Objective: Configure the Kubernetes networking to allow routing requests to pods and services.
To keep the default setup consistent across open source deployments, we will use Flannel to configure Kubernetes networking. However, the implementation of this step will make it easy to plug in a different network solution.
`--configure-cbr0=false` on the node and
`--allocate-node-cidrs=false` on the master), which breaks encapsulation between nodes
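For reference, Flannel reads its network configuration from a well-known etcd key; the CIDR and backend below are illustrative values, not something mandated by this proposal:

```
# Stored under Flannel's default etcd key (etcd v2 API)
etcdctl set /coreos.com/network/config \
  '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
```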
Objective: Start all system daemons (e.g. kube-proxy)
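On systemd-based distributions this can be as simple as one unit file per daemon; the binary path and flag below are assumptions for illustration, not the proposal's prescribed configuration:

```
[Unit]
Description=Kubernetes kube-proxy
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy --master=http://127.0.0.1:8080
Restart=always

[Install]
WantedBy=multi-user.target
```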
Objective: Add default add-ons (e.g. dns, dashboard)
We will use Ansible as the default technology for deployment orchestration. It places few requirements on the cluster machines and is popular in the Kubernetes community, which will help us maintain it.
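As a sketch, a node playbook might enforce parts of the node specification; the host group and package name below are assumptions and will differ per OS distribution:

```yaml
# Hypothetical excerpt of a node playbook
- hosts: nodes
  become: yes
  tasks:
    - name: Ensure Docker is installed
      package:
        name: docker.io        # package name differs per distribution
        state: present
    - name: Ensure Docker is running
      service:
        name: docker
        state: started
        enabled: yes
```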
For a simpler UX we will provide simple bash scripts that wrap all basic deployment commands (e.g.
One disadvantage of using Ansible is that it adds a dependency on the machine that runs the deployment scripts. We will work around this by distributing the deployment scripts via a docker image, so that the user runs the following command to create a cluster:
docker run k8s.gcr.io/deploy_kubernetes:v1.2 up --num-nodes=3 --provider=aws
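A minimal sketch of such a wrapper, built around the command above; the `DRY_RUN` knob and environment-variable defaults are assumptions added for illustration:

```shell
#!/bin/bash
set -euo pipefail

NUM_NODES="${NUM_NODES:-3}"
PROVIDER="${PROVIDER:-aws}"
IMAGE="k8s.gcr.io/deploy_kubernetes:v1.2"
DRY_RUN="${DRY_RUN:-1}"   # default to printing the command; set to 0 to run it

CMD=(docker run "$IMAGE" up "--num-nodes=${NUM_NODES}" "--provider=${PROVIDER}")

if [ "$DRY_RUN" = "1" ]; then
  echo "${CMD[@]}"
else
  "${CMD[@]}"
fi
```

A real wrapper would also forward provider credentials and any extra flags through to the container.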