Table of contents
- What is the difference between Docker and Kubernetes?
- What are the main components of Kubernetes architecture?
- What are the main differences between Docker Swarm and Kubernetes?
- What is the difference between a Docker container and a Kubernetes pod?
- What is a namespace in Kubernetes?
- What is the role of kube-proxy?
- What are the different types of services within Kubernetes?
- What is the difference between NodePort and LoadBalancer type services?
- What is the role of Kubelet?
- Day-to-Day Activities on Kubernetes
What is the difference between Docker and Kubernetes?
The differences between Docker and Kubernetes:
Docker:
Purpose: Docker is primarily a platform for containerization. It allows developers to package applications along with their dependencies into lightweight, portable containers.
Functionality:
Containerization: Docker enables you to create, distribute, and run containers. Containers encapsulate an application and its dependencies, ensuring consistency across different environments.
Single Node: Docker focuses on packaging applications on a single node (a single machine or server).
Key Points:
Container Images: Docker revolves around creating and managing container images.
Local Development: It’s commonly used for local development, testing, and building images.
Simplicity: Docker is straightforward and easy to get started with.
Less Complex: It’s less complex than Kubernetes.
Use Case: Ideal for small-scale applications or individual developers.
Kubernetes:
Purpose: Kubernetes is a container orchestration system designed to manage containerized applications across a cluster of nodes (multiple machines or servers).
Functionality:
Orchestration: Kubernetes automates tasks such as scaling, load balancing, self-healing, and rolling updates.
Multi-Node: It’s built to handle applications distributed across multiple nodes.
Key Points:
Cluster Management: Kubernetes manages clusters of containers.
Scaling and Load Balancing: It ensures efficient resource utilization by automatically distributing workloads.
Service Discovery: Containers can easily find and communicate with each other.
Self-Healing: If a container fails, Kubernetes replaces it.
Complexity: Kubernetes is more complex due to its rich feature set.
Use Case: Suitable for large-scale, production-grade applications.
In summary, Docker is about packaging applications into containers, while Kubernetes orchestrates and manages those containers in production.
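To make the orchestration side concrete, here is a minimal sketch of a Kubernetes Deployment. The names and image tag are illustrative; the point is that you declare a desired state (three replicas) and Kubernetes keeps the cluster in that state, replacing failed containers and rolling out updates gradually.

```yaml
# Illustrative Deployment: Kubernetes keeps three replicas of this
# nginx container running, restarting or rescheduling any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25 # any image tag works here
          ports:
            - containerPort: 80
```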
What are the main components of Kubernetes architecture?
The key components of Kubernetes architecture:
Control Plane (Master) Components:
The control plane consists of several components that manage the overall state of the cluster:
Kube-apiserver: The central hub that exposes the Kubernetes API, allowing communication with other components.
etcd: A distributed key-value store that stores the configuration data for the entire cluster.
Kube-scheduler: Responsible for distributing workloads across worker nodes based on resource availability and constraints.
Kube-controller-manager: Manages various controllers that regulate the state of resources (e.g., nodes, pods, services).
Cloud Controller Manager: Integrates with cloud provider APIs for managing external resources (e.g., load balancers, volumes).
Worker Nodes (Compute Plane):
These nodes host the actual workloads (containers) and execute tasks:
Kubelet: Ensures that containers are running in a Pod. Communicates with the control plane and manages container lifecycle.
Kube-proxy: Maintains network rules (such as load balancing and routing) for services within the cluster.
Container Runtime: The software responsible for running containers (e.g., Docker, containerd).
Networking Components:
CNI (Container Network Interface): Provides networking capabilities for containers.
Pod Network: Facilitates communication between pods within the same cluster.
Service Network: Enables communication between services and external clients.
Add-Ons and Extensions:
Ingress Controllers: Handle external access to services within the cluster.
DNS (CoreDNS): Provides DNS resolution for service discovery.
Metrics Server: Collects resource utilization metrics.
Dashboard: Web-based UI for managing the cluster.
Persistent Storage: Integrates with storage solutions (e.g., CSI, Rook) for persistent data storage.
What are the main differences between Docker Swarm and Kubernetes?
The key differences between Docker Swarm and Kubernetes:
Docker Swarm:
Purpose: Docker Swarm is an open-source platform for container orchestration, known for its quick setup and ease of use.
Container Management:
Dockerized Containers: Swarm manages Dockerized containers (containers created using Docker).
Native Mode: It’s a native mode of Docker, allowing seamless integration with existing Docker commands.
Cluster Components:
Manager Nodes: Swarm clusters consist of Docker Engine-deployed manager nodes that oversee the cluster.
Worker Nodes: Worker nodes execute tasks assigned by the manager.
Ideal Use Case:
Well-suited for smaller applications with fewer containers.
Great for users already familiar with Docker commands.
Kubernetes:
Purpose: Kubernetes (often abbreviated as K8s) is the most popular open-source platform for managing containers and their workloads.
Cluster Architecture:
Master/Worker Nodes: Kubernetes clusters have a more complex architecture, with master nodes controlling worker nodes.
Pods: Pods are the smallest deployable units in Kubernetes and can contain one or more containers.
Features:
Scaling and Auto-Healing: Kubernetes excels in automatic scaling and self-healing.
Resource Management: It efficiently manages resources across various IT systems (on-premises, virtual machines, public cloud, etc.).
Ideal Use Case:
Ideal for complex applications that benefit from automatic scaling and robust features.
Offers monitoring, security, high availability, and flexibility.
What is the difference between a Docker container and a Kubernetes pod?
The difference between a Docker container and a Kubernetes pod:
Docker Container:
Definition: A Docker container is a standardized, executable unit of software that packages application source code together with the operating-system libraries and dependencies required to run it.
Isolation: Each container runs in isolation from other processes on a computer, physical server, or virtual machine.
Lightweight and Portable: Containers are lightweight, fast, and portable. Unlike virtual machines, they don’t need a separate operating system instance for each container; they leverage the host machine’s OS resources.
Use Case: Docker containers are commonly used for packaging applications, such as databases, web applications, or backend services.
Kubernetes Pod:
Definition: A Kubernetes pod is the basic execution unit within a Kubernetes application. It represents a set of processes running on a cluster node.
Composition: A pod typically includes one or more containers that work together as a functional unit. For example, you might have a pod with a web server container and a sidecar container for logging.
Shared Resources: Pods provide shared resources for their containers, including network and storage.
Shared Network Namespace: All containers within a pod share the same network namespace, including IP address and network ports. They can communicate with each other using localhost within the pod.
Almost Same Environment: Containers within a pod run closely related processes and share almost the same environment, as if they were all running in a single container.
Purpose: Pods allow us to group containers together and manage them as a single unit, providing the best of both worlds: container functionality and process collaboration.
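The web server plus logging sidecar pattern mentioned above can be sketched as a single pod manifest. Names and images are illustrative; the key point is that both containers share the pod's network namespace, so the sidecar could reach the web server on localhost:80.

```yaml
# Illustrative two-container pod: app plus log-shipping sidecar
# sharing one network namespace and one pod lifecycle.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger     # illustrative name
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
    - name: log-sidecar
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]  # placeholder for a real log shipper
```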
What is a namespace in Kubernetes?
In Kubernetes, a namespace provides a mechanism for isolating groups of resources within a single cluster. Here are the key points about namespaces:
Definition:
A namespace is like a virtual cluster inside your Kubernetes cluster.
You can have multiple namespaces within a single Kubernetes cluster, and they are all logically isolated from each other.
Namespaces help with organization, security, and even performance.
Purpose and Benefits:
Resource Isolation: Resources within a namespace are isolated from resources in other namespaces.
Unique Names: Resources must have unique names within a namespace, but the same name can exist across different namespaces.
Logical Segmentation: Namespaces allow you to logically segment your applications, teams, or projects.
Authorization and Policy: You can attach authorization and policy rules to specific namespaces.
Resource Quotas: Namespaces help enforce resource quotas for specific groups of resources.
Use Cases:
Multi-Tenancy: Namespaces enable different projects, teams, or customers to share a Kubernetes cluster while maintaining isolation.
Scalability: As your cluster grows, namespaces help manage complexity and prevent resource conflicts.
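A namespace and an attached resource quota can be declared together. The namespace name and quota values below are illustrative; the quota shows how limits are scoped to one group of resources without affecting the rest of the cluster.

```yaml
# Illustrative namespace with a ResourceQuota attached to it.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a              # illustrative namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a         # quota applies only inside this namespace
spec:
  hard:
    pods: "10"              # at most 10 pods in this namespace
    requests.cpu: "4"       # total CPU requests capped at 4 cores
    requests.memory: 8Gi
```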
What is the role of kube-proxy?
The role of kube-proxy in a Kubernetes cluster:
Kube-Proxy:
Kube-Proxy is a critical networking component within Kubernetes, installed on every node in the cluster.
Its primary role is to facilitate communication between services and pods.
Here’s how it works:
Service Discovery and Load Balancing:
Kube-Proxy reflects services defined in the Kubernetes API on each node.
It ensures that service requests are properly routed to the correct pods.
When a service is created, Kube-Proxy sets up network rules to allow traffic to reach the appropriate pods.
Network Rules and Translation:
Kube-Proxy translates service IPs and ports to actual network rules inside the node.
It can perform simple TCP, UDP, and SCTP stream forwarding or round-robin forwarding across a set of backends (pods).
For example, if a service exposes port 80, Kube-Proxy ensures that requests to that port reach the corresponding pods.
Modes of Operation:
Kube-Proxy operates in different modes:
iptables Mode: The default mode; Kube-Proxy programs iptables rules that redirect service traffic to backend pods.
IPVS Mode: Uses IPVS (IP Virtual Server) for more efficient load balancing in large clusters.
Userspace Mode: A legacy mode in which Kube-Proxy itself proxies traffic in user space; it is slower and rarely used today.
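The port-translation behavior described above is driven by the Service definition. In this illustrative manifest (the name and selector are placeholders), kube-proxy on every node programs rules so that traffic to the service's cluster IP on port 80 is forwarded to port 8080 on the matching pods.

```yaml
# Illustrative Service: kube-proxy translates service port 80
# into targetPort 8080 on pods labeled app=backend.
apiVersion: v1
kind: Service
metadata:
  name: backend             # illustrative name
spec:
  selector:
    app: backend            # pods that back this service
  ports:
    - port: 80              # port exposed on the service's cluster IP
      targetPort: 8080      # container port on the pods
```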
What are the different types of services within Kubernetes?
There are several types of services that serve different purposes. Let’s explore them:
ClusterIP:
A ClusterIP service is the default type.
It exposes the service on a cluster-internal IP address.
Useful for internal communication between pods within the cluster.
Not accessible from outside the cluster.
NodePort:
A NodePort service exposes the service on a static port on each node.
It allows external access to the service via the node’s IP address and the specified port.
Typically used for development and testing.
LoadBalancer:
A LoadBalancer service provisions an external load balancer (e.g., AWS ELB, GCP Load Balancer).
Distributes traffic across multiple nodes.
Useful for publicly accessible services.
Ingress:
While not a service type itself, Ingress acts as an entry point for your cluster.
It consolidates routing rules for multiple components behind a single listener.
Allows you to expose multiple services via a single external IP.
Often used for HTTP-based routing.
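As a sketch of the Ingress idea, the manifest below routes two paths on one host to two different Services behind a single entry point. The host, paths, and service names are illustrative, and an ingress controller (e.g., ingress-nginx) must be installed for the rules to take effect.

```yaml
# Illustrative Ingress: one external listener, two backend services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site-ingress        # illustrative name
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc   # hypothetical API service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc   # hypothetical web front end
                port:
                  number: 80
```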
What is the difference between NodePort and LoadBalancer type services?
The differences between NodePort and LoadBalancer service types in Kubernetes:
NodePort:
A NodePort service is a straightforward way to expose your service to external traffic.
Here’s how it works:
A specific port (e.g., 30036) is opened on all nodes (VMs) in the cluster.
Any traffic sent to this port is forwarded to the service.
Key Points:
Direct Node Access: Clients connect directly to a specific node (client → node).
Port Range: You need to open firewall rules to allow access to the NodePort range, 30000 to 32767 by default.
Node IPs: You must know the IPs of individual worker nodes.
Simplicity: NodePort is simple but lacks load balancing sophistication.
Use Case: Useful for development, testing, or scenarios where simplicity suffices.
LoadBalancer:
A LoadBalancer service provisions an external load balancer (e.g., AWS ELB, GCP Load Balancer).
How it works:
The client connects to the cloud platform’s load balancer (client → load balancer).
The load balancer then picks a node and connects the client to it.
Key Points:
Stable VIP: LoadBalancer provides a single stable VIP (Virtual IP) for your service.
Public Cloud: Works well with public cloud providers.
Exact Port Control: The service can control the exact port it wants to use.
Use Case: Ideal for publicly accessible services in production.
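In manifest form, the two service types differ mainly in the `type` field. The names, selector, and nodePort value below are illustrative; on a cloud provider, the second service would trigger provisioning of an external load balancer.

```yaml
# Illustrative NodePort service: reachable on every node's IP at port 30036.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport        # illustrative name
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      nodePort: 30036       # must fall in the default 30000-32767 range
---
# Illustrative LoadBalancer service: the cloud platform provisions
# an external load balancer with a single stable virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: web-lb              # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
```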
What is the role of Kubelet?
The Kubelet is a critical component of Kubernetes architecture, responsible for managing and coordinating the pods and containers on its node. Let’s dive into its key roles:
Pod Deployment:
The Kubelet ensures that the containers specified in a PodSpec (a YAML or JSON object describing a pod) are running and healthy.
It monitors the desired state of pods and takes actions to achieve that state.
Resource Management:
Kubelet manages resources (CPU, memory, etc.) for each pod on the node.
It enforces resource limits and allocates resources based on pod requirements.
Health Monitoring:
Kubelet continuously checks the health of containers within pods.
If a container fails or becomes unhealthy, Kubelet takes corrective actions (e.g., restarting the container).
Communication with Control Plane:
Kubelet communicates with the Kubernetes control plane (which includes the API server, scheduler, and controller manager).
It registers the node with the API server, ensuring the node’s presence in the cluster.
Container Manifests:
Kubelet receives container manifests (descriptions of containers and their properties) from various sources:
API server: Manifests provided through the API server.
Files: Manifests read from files on the node.
HTTP endpoints: Manifests fetched from specified HTTP endpoints.
Node Agent:
- Kubelet acts as the node agent, bridging the gap between the control plane and the actual workloads running on nodes.
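The file-based manifest source mentioned above is how static pods work: a manifest placed in the kubelet's static-pod directory (commonly /etc/kubernetes/manifests, though the path is configurable) is started by the kubelet directly, without going through the API server. A minimal illustrative example:

```yaml
# Illustrative static pod manifest: saved as a file on the node,
# the kubelet runs it directly and mirrors it to the API server.
apiVersion: v1
kind: Pod
metadata:
  name: static-web          # illustrative name
spec:
  containers:
    - name: web
      image: nginx
```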
Day-to-Day Activities on Kubernetes
Managing a Kubernetes cluster involves various day-to-day activities. Here’s a concise cheat sheet for some common Kubernetes operations:
Viewing Pods:
List all pods in a namespace:
kubectl get pods -n <namespace>
View a specific pod in watch mode:
kubectl get pod <pod-name> --watch
Creating Pods:
Create a pod from an image (e.g., Nginx):
kubectl run nginx --image=nginx
Interacting with Pods:
Run a pod in interactive shell mode:
kubectl run -i --tty nginx --image=nginx -- sh
Execute a command after creating a pod:
kubectl run busybox --image=busybox -- sleep 100000
Formatting Output:
Customize output format (e.g., JSON, YAML, wide):
kubectl get pods -o json
kubectl get pods -o wide
Other Useful Commands:
Describe a resource (e.g., pod, service):
kubectl describe pod <pod-name>
Edit a resource:
kubectl edit pod <pod-name>
Delete a resource:
kubectl delete pod <pod-name>
If you have made it this far, great job: you have covered the core Kubernetes concept interview questions and answers.
If you liked what you read, do follow, and any feedback for further improvement will be highly appreciated!
Thank you and Happy Learning!👏