### Creating Deployments

This section covers:
– creating a Deployment
– kubectl and Deployments
– Pods, Containers, and Deployments
### Deployment Core Concepts
A Deployment is a declarative way to manage Pods using a ReplicaSet.

Pods:
– the basic resource of Kubernetes
– can be created and destroyed, but are never recreated (a replacement is always a new Pod)
– Deployments and ReplicaSets ensure Pods stay running and can be used to scale Pods
A ReplicaSet controls Pods, each of which runs one or more Containers. A ReplicaSet:
– is a self-healing mechanism that ensures the requested number of Pods is available
– provides fault tolerance
– can be used to scale Pods horizontally
– relies on a Pod template, so there is no need to create Pods manually
– is normally used by Deployments rather than created directly
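As a sketch, a standalone ReplicaSet manifest looks very similar to a Deployment's YAML; the name `my-nginx-rs` here is illustrative, and in practice you would let a Deployment create the ReplicaSet for you:

```yaml
apiVersion: apps/v1
kind: ReplicaSet           # usually created indirectly by a Deployment
metadata:
  name: my-nginx-rs        # illustrative name
spec:
  replicas: 2              # desired number of Pods
  selector:
    matchLabels:
      app: my-nginx        # must match the Pod template labels below
  template:                # Pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx:alpine
```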
Result of Creating a ReplicaSet
– the ReplicaSet has a desired count of x Pods, so x Pods are created
– when Pods go down, the ReplicaSet ensures the Pod count stays at the desired level
A Deployment:
– is just a higher-level wrapper around ReplicaSets
– is responsible for managing Pods using ReplicaSets
– scales ReplicaSets, which in turn scale Pods
– supports zero-downtime updates by creating and destroying ReplicaSets
– provides rollback functionality
– creates a unique label that is assigned to the ReplicaSet and its generated Pods
– uses YAML that is very similar to a ReplicaSet's
### Creating a Deployment
– define the Deployment in YAML
– apply the YAML with kubectl
– Pods are linked to the Deployment by label matching: the selector property of the Deployment spec must match the labels in the Pod template
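The label matching can be seen in a trimmed Deployment excerpt (using the same `app: my-nginx` label as the full manifest in these notes):

```yaml
spec:
  selector:
    matchLabels:
      app: my-nginx     # the selector label...
  template:
    metadata:
      labels:
        app: my-nginx   # ...must match the Pod template label
```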
### kubectl and Deployments

```
kubectl create -f file.deployment.yml
kubectl apply -f file.deployment.yml                    # preferred approach
kubectl get deployments
kubectl get deployments --show-labels
kubectl get deployments -l app=nginx                    # filter by a specific label
kubectl delete deployment deployment-name
kubectl scale deployment deployment-name --replicas=5   # scale to 5 Pods
kubectl scale -f file.deployment.yml --replicas=5
```
You can also specify the replica count in the YAML file.
Note that the matchLabels app value under the selector matches the app value under template metadata labels (my-nginx). This ties the Deployment to the Pod template.
Setting resource constraints is very important; otherwise a container that keeps consuming unlimited resources could bring down the whole system.
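Constraints are set per container in the Pod template. The limits below mirror the values used elsewhere in these notes; the `requests` block is an added illustration of the companion field:

```yaml
containers:
  - name: my-nginx
    image: nginx:alpine
    resources:
      requests:           # scheduler guarantees at least this much
        memory: "64Mi"
        cpu: "100m"
      limits:             # container is throttled (CPU) or killed (memory) beyond this
        memory: "128Mi"   # 128 MiB of memory
        cpu: "200m"       # 200 millicores = 0.2 CPU
```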
```
kubectl create -f nginx.deployment.yml --save-config
deployment.apps/my-nginx created
```

Running `kubectl get all` shows the Deployment, its ReplicaSet, and a Pod whose name is derived from the ReplicaSet's name:

```
kubectl get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-5bb9b897c8-pw7hh   1/1     Running   0          60s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d2h

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   1/1     1            1           60s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-5bb9b897c8   1         1         1       60s
```
```
kubectl describe deployment my-nginx
Name:                   my-nginx
Namespace:              default
CreationTimestamp:      Sun, 25 Jul 2021 13:52:43 +0530
Labels:                 app=my-nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=my-nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=my-nginx
  Containers:
   my-nginx:
    Image:      nginx:alpine
    Port:       80/TCP
    Host Port:  0/TCP
    Limits:
      cpu:        200m
      memory:     128Mi
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   my-nginx-5bb9b897c8 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  7m3s  deployment-controller  Scaled up replica set my-nginx-5bb9b897c8 to 1
```

```
kubectl get deployments --show-labels
NAME       READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
my-nginx   1/1     1            1           11m   app=my-nginx
```

To show only Deployments with the label app=my-nginx:

```
kubectl get deployments -l app=my-nginx
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx   1/1     1            1           15m
```
```
kubectl scale -f nginx.deployment.yml --replicas=4
deployment.apps/my-nginx scaled
```

```
kubectl get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-5bb9b897c8-9t2nq   1/1     Running   0          36s
pod/my-nginx-5bb9b897c8-cwr2c   1/1     Running   0          36s
pod/my-nginx-5bb9b897c8-g2m5h   1/1     Running   0          36s
pod/my-nginx-5bb9b897c8-pw7hh   1/1     Running   0          18m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d2h

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   4/4     4            4           18m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-5bb9b897c8   4         4         4       18m
```
```
kubectl delete -f nginx.deployment.yml
deployment.apps "my-nginx" deleted
```

```
kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d2h
```
```
kubectl apply -f nginx.deployment.yml
deployment.apps/my-nginx created
```

```
kubectl get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-5bb9b897c8-5ss65   1/1     Running   0          90s
pod/my-nginx-5bb9b897c8-h7vxg   1/1     Running   0          90s
pod/my-nginx-5bb9b897c8-jk6qd   1/1     Running   0          90s
pod/my-nginx-5bb9b897c8-skh4w   1/1     Running   0          90s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d2h

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   4/4     4            4           90s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-5bb9b897c8   4         4         4       90s
```
### Deployment Options
Suppose the current Pod runs a container with the image nginx:1.14.2-alpine, and you need to migrate to nginx:1.15.9-alpine. The traditional way (tear down the old version, then bring up the new one) involves some downtime.
### Zero-Downtime Deployments
Zero-downtime deployments allow software updates to be deployed to production without impacting end users.

### Kubernetes and Zero-Downtime Deployments
– Kubernetes brings up new Pods and, once they are running, brings down the old Pods
– Pods are updated without impacting end users
Options include:
– Rolling updates
– Blue-green deployments (multiple environments run at the same time; once the new environment checks out, traffic is switched over to it)
– Canary deployments (a very small share of traffic goes to the new environment; once it is proven, all traffic goes there)
– Rollbacks (if the new version doesn't work, go back to the previous working version)
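Rollbacks and update monitoring are driven by the `kubectl rollout` subcommands; as a quick sketch (using the Deployment name from these notes):

```
kubectl rollout status deployment my-nginx                 # watch an update progress
kubectl rollout history deployment my-nginx                # list recorded revisions
kubectl rollout undo deployment my-nginx                   # roll back to the previous revision
kubectl rollout undo deployment my-nginx --to-revision=2   # roll back to a specific revision
```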
Example: 3 Pods are running with version v1 of an app. New Pods are created one by one, and once each new Pod is ready (as determined by its readiness probe), an old Pod is taken down, one by one.
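The pace of this rollout can be tuned in the Deployment spec. As a sketch (these are standard Deployment fields; the values are illustrative):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most 1 extra Pod above the desired count during the update
      maxUnavailable: 0   # never drop below the desired count (true zero downtime)
```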
```
kubectl apply -f file.deployment.yml
```

Applying the updated YAML takes care of the rolling deployment automatically.
### Zero-Downtime Deployments in Action

```yaml
apiVersion: apps/v1        # Kubernetes API version and resource type
kind: Deployment           # because it is a Deployment
metadata:                  # name, labels, etc.
  name: my-nginx
  labels:
    app: my-nginx
spec:                      # Deployment spec
  replicas: 2              # number of replicas that need to run
  selector:                # selects the Pod template label(s)
    matchLabels:
      app: my-nginx
  template:                # template used to create the Pods
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:          # containers that will run in the Pod
        - name: my-nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "128Mi"   # 128 MiB of memory
              cpu: "200m"       # 200 millicpu (0.2 CPU, or 20% of one CPU)
```