With a Kubernetes cluster set up and an introductory deployment, what is the configuration actually defining? What are the components in the YAML file? This article will summarize the components within the introductory YAML file and explain their relationships. For more detailed explanations of the components, see the links below in the official Kubernetes documentation:
There are 3 components within our test deployment. Only 2 were actually defined, because we used a shortcut. This article lays out the expanded and more detailed definition of all 3. (It’s still just a summary, so refer to the official documentation links above for more detail.)
Common YAML Keys
Each section of a Kubernetes component contains 4 top level keys:
apiVersion - exactly what it says, but the proper value depends on the kind. The official Kubernetes documentation lists the correct values for each kind.
kind - the type of resource described, such as the Service used in the previous article.
metadata - provides the name of the component and the labels used to match and group related components.
spec - the details of the resource to use in the cluster. Highly specific to the kind of resource.
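Putting the four keys together, a minimal skeleton looks like the following (the angle-bracket values are placeholders, not a runnable manifest):

```yaml
apiVersion: <depends on kind>   # e.g. v1 for Pod and Service
kind: <resource type>           # e.g. Pod, Service, Deployment
metadata:
  name: <unique name>
  labels:
    <key>: <value>              # used to group and select components
spec:
  # details specific to the kind
```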
Pods
A Pod is the smallest unit of work in a cluster. It consists of one or more containers. The control plane can schedule pods on any node in the cluster. Each pod receives a unique IP address.
Here is what a Pod YAML file might look like: a single container running the whoami app on port 80:
apiVersion: v1
kind: Pod
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  containers:
  - name: whoami
    image: traefik/whoami
    ports:
    - containerPort: 80
This should look a lot like the bottom section of the Deployment in the previous article. And it is.
The spec for this pod shows the container image to use and the ports it exposes, similar to what a Dockerfile does with the EXPOSE statement. It also provides a label, a key/value pair used to associate this pod with higher level components. (Names must be unique, so labels are used to group like components together.)
Finally, pods are rarely defined by themselves, but as a subset of a higher level component, such as a Deployment.
Services
The pod and its IP address are not constant. The control plane deploys them at its discretion throughout the cluster. Any pod can appear on any node and be assigned any IP address (within the cluster’s range).
But an app must have a way to consistently communicate with its components; enter Services. A Service is a consistent endpoint which Kubernetes maps to the pod(s) that match the label of the selector attribute in the Service spec.
Below is the Service defined in the test deployment. The selector in the spec is app: whoami, which matches the metadata label in the pod above and in the Deployment from the previous article.
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 31000
  selector:
    app: whoami
The spec for the Service also defines the ports. Normally these are internal to the cluster, but the NodePort type exposes the port on every node, outside the cluster (by default in the 30000–32767 range).
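As a side note on the ports: when the Service port and the container's port differ, the spec can map between them with targetPort (when omitted, it defaults to the same value as port). The variant below is hypothetical, a sketch assuming a whoami container listening on 8080 rather than the actual test deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: NodePort
  ports:
  - port: 80          # in-cluster port of the Service
    targetPort: 8080  # port the container actually listens on (assumed here)
    nodePort: 31000   # port opened on every node (30000-32767 by default)
  selector:
    app: whoami
```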
(How do apps in the cluster find the IP address of the service? In-cluster DNS, to be discussed later.)
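As a small preview of that in-cluster DNS: assuming the whoami Service lives in the default namespace, another pod can reach it by its DNS name. The pod below is a hypothetical one-off test (the busybox image and pod name are assumptions, not part of the test deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: dns-test
    image: busybox
    # fetch the Service by its in-cluster DNS name, then exit
    command: ["wget", "-qO-", "http://whoami.default.svc.cluster.local"]
  restartPolicy: Never
```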
Deployments
A Deployment is a high level component that ties together several components.
As mentioned above, a Deployment can create pods rather than rely on the Pod kind to create them.
It does this via a lower level component called a ReplicaSet.
The ReplicaSet is responsible for maintaining a, wait for it… set of replicas of the pods described in the Deployment spec.
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami
        ports:
        - name: web
          containerPort: 80
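The snippet above shows only the spec. A full manifest needs the same four top-level keys as every other component; here is a sketch of the complete Deployment, assuming the whoami names used throughout this series:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami    # must match the template's labels below
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami
        ports:
        - name: web
          containerPort: 80
```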
The Deployment creates the ReplicaSet automatically. It’s possible to create a ReplicaSet by itself, but it’s easier to let the higher level Deployment sort it out.
The above Deployment spec contains a template which is used to create the pods. To create more than a single pod, simply change the replicas value to the desired number. The Service will find all the pods that match the service:spec:selector and call each pod round-robin. (A ReplicaSet backed by a Service can function as a load balancer.)
A Deployment and a ReplicaSet are known as Controllers. A controller in Kubernetes is a control loop that watches the state of the cluster and works to move the current state toward the desired state.
The pattern these articles have established so far seems good: one implementation and one explanation.
The next article will begin the custom app implementation.