DevOps Most Asked Real Time Interview Question And Answer - Set 3

 "In the world of DevOps, success is not measured by how quickly you can deploy, but by how seamlessly you can integrate, automate, and collaborate across the entire software lifecycle."{alertInfo}


{tocify} $title={Table of Contents}

Question 51: What is the use of Jenkins slave server?

Jenkins slave servers (referred to as agents in current Jenkins terminology) are auxiliary nodes used to offload work from the master (controller) Jenkins server. They execute Jenkins jobs and builds, thereby distributing the workload and enhancing the overall performance and scalability of the Jenkins environment. Agents can be configured on different machines or environments, allowing concurrent execution of multiple jobs and supporting parallel builds. This setup enables better resource utilization and efficient handling of continuous integration and delivery pipelines.

By utilizing Jenkins slave servers, organizations can effectively manage their build infrastructure, accommodate diverse project requirements, and optimize the utilization of computing resources across their development ecosystem.
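
For illustration, a declarative pipeline can pin work to a particular agent (slave) by label. This is a minimal sketch; the linux-build label and the Maven command are placeholders:

    pipeline {
        agent none
        stages {
            stage('Build') {
                // Run this stage on any agent carrying the hypothetical "linux-build" label
                agent { label 'linux-build' }
                steps {
                    sh 'mvn -B clean package'   // example build command
                }
            }
        }
    }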

Question 52: How to maintain Jenkins failover or high availability?

Maintaining Jenkins failover or high availability involves implementing strategies to ensure continuous operation and minimal downtime in the event of server failures or system disruptions. Here are some key practices:

  1. Master-Slave Setup: Deploy Jenkins in a master-slave configuration where the master node orchestrates the builds and distributes tasks to multiple slave nodes. This setup ensures redundancy and fault tolerance.
  2. Load Balancing: Use load balancers to distribute incoming traffic across multiple Jenkins master instances. This helps in distributing the load and providing failover capabilities.
  3. Data Replication: Keep Jenkins data consistent and available on a standby instance. Jenkins stores its state on disk under JENKINS_HOME (not in a database), so replicate or share that directory using networked or replicated storage; if any plugins rely on external databases, replicate those as well.
  4. Automated Backups: Regularly backup Jenkins configuration, job configurations, and build artifacts. Automate the backup process to ensure that critical data is protected and can be restored quickly in case of failures.
  5. Monitoring and Alerting: Implement monitoring tools to track the health and performance of Jenkins infrastructure. Set up alerts to notify administrators about potential issues or failures proactively.
  6. Disaster Recovery Plan: Develop and test a comprehensive disaster recovery plan to recover from catastrophic failures. This plan should include procedures for restoring Jenkins instances, databases, and associated infrastructure components.

By implementing these practices, organizations can ensure high availability and resilience of their Jenkins infrastructure, thereby minimizing disruptions and maintaining continuous delivery pipelines.

Question 53: How to secure credentials in Jenkins pipeline?

In Jenkins pipeline scripts, it's crucial to securely manage and use credentials to access sensitive resources such as repositories, servers, or external services. Here's how you can secure credentials in Jenkins pipeline:

  1. Use Jenkins Credentials Plugin: Jenkins provides a Credentials Plugin that allows you to securely store and manage credentials within Jenkins. You can store credentials such as usernames, passwords, API tokens, SSH keys, etc., in Jenkins' credential store.
  2. Secret Text and Secret File: Jenkins pipeline supports the use of secret text and secret file types to store sensitive information securely. These credentials can be defined in Jenkins and accessed within pipeline scripts using the withCredentials block.
  3. Credential Binding: Use the withCredentials step in Jenkins pipeline to bind credentials to environment variables or files securely. This step ensures that credentials are masked in logs and protected from exposure.
  4. Jenkins Credential Providers: Jenkins supports various credential providers such as Username and Password, SSH Username with Private Key, Secret text, Secret file, etc. Choose the appropriate credential type based on your use case and configure them securely within Jenkins.
  5. Restrict Access: Limit access to Jenkins credentials by configuring appropriate permissions and access controls. Only authorized users or jobs should have access to sensitive credentials stored in Jenkins.
  6. Avoid Hardcoding Credentials: Refrain from hardcoding credentials directly in pipeline scripts or configuration files. Instead, leverage Jenkins' credential store and reference them dynamically in your pipeline scripts.

By following these best practices, you can effectively secure credentials in Jenkins pipeline and mitigate the risk of unauthorized access or exposure of sensitive information.
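
As a minimal sketch, the withCredentials step can bind a stored username/password credential to environment variables inside a declarative pipeline. The credential ID deploy-server-creds and the deployment URL are hypothetical:

    pipeline {
        agent any
        stages {
            stage('Deploy') {
                steps {
                    // Bind the Jenkins credential to environment variables; values are masked in the log
                    withCredentials([usernamePassword(credentialsId: 'deploy-server-creds',
                                                      usernameVariable: 'DEPLOY_USER',
                                                      passwordVariable: 'DEPLOY_PASS')]) {
                        // Single quotes keep the secret out of Groovy string interpolation
                        sh 'curl -u "$DEPLOY_USER:$DEPLOY_PASS" https://deploy.example.com/trigger'
                    }
                }
            }
        }
    }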

Question 54: How to roll back a deployment in Jenkins?

To rollback a deployment in Jenkins, you can follow these steps:

  1. Identify the previous stable version or release of your application.
  2. Access Jenkins and navigate to the job or pipeline that performed the deployment.
  3. Select the build or deployment that needs to be rolled back.
  4. Re-run the job to trigger the rollback, for example via "Build with Parameters", the "Rebuild" option (provided by the Rebuild plugin), or "Replay" for pipelines.
  5. When triggering the job or pipeline, supply the version or commit ID of the previous stable release to deploy (see the parameterized sketch below).
  6. Monitor the rollback process and verify that the deployment has been successfully reverted to the desired state.
  7. If necessary, perform any additional validation or testing to ensure the rollback hasn't introduced any issues or regressions.
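
One common way to support this is a parameterized pipeline that redeploys whichever version is supplied at build time. The sketch below assumes a hypothetical deploy.sh script in the workspace:

    pipeline {
        agent any
        parameters {
            // Tag or commit ID of the last known stable release, supplied when the job is triggered
            string(name: 'ROLLBACK_VERSION', defaultValue: '', description: 'Stable version to redeploy')
        }
        stages {
            stage('Rollback') {
                steps {
                    // deploy.sh is a placeholder for whatever deployment script or tool you use
                    sh "./deploy.sh ${params.ROLLBACK_VERSION}"
                }
            }
        }
    }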

Question 55: Steps to create a freestyle project and a declarative pipeline

To create a freestyle project in Jenkins:

  1. Log in to Jenkins and navigate to the Jenkins dashboard.
  2. Click on "New Item" to create a new project.
  3. Enter a name for your project and select "Freestyle project" as the project type.
  4. Configure the project by specifying details such as source code repository, build triggers, build steps, post-build actions, etc.
  5. Save the configuration and click on "Build Now" to trigger the first build of the project.

To create a declarative pipeline in Jenkins:

  1. Navigate to the Jenkins dashboard and click on "New Item".
  2. Enter a name for your pipeline and select "Pipeline" as the project type.
  3. In the pipeline definition, choose "Pipeline script from SCM" if your pipeline configuration is stored in a source code repository, or select "Pipeline script" to define the pipeline directly in Jenkins.
  4. Configure the pipeline script according to the declarative pipeline syntax, including stages, steps, triggers, and post-actions.
  5. Save the configuration and run the pipeline; with SCM polling or webhooks configured, Jenkins will pick up subsequent changes to the pipeline script and trigger builds accordingly (a minimal example follows this list).
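
A minimal declarative Jenkinsfile, with a placeholder repository URL and build command, might look like this:

    pipeline {
        agent any
        stages {
            stage('Checkout') {
                steps {
                    // Repository URL and branch are placeholders
                    git url: 'https://github.com/example/sample-app.git', branch: 'main'
                }
            }
            stage('Build') {
                steps {
                    sh 'mvn -B clean package'
                }
            }
        }
        post {
            always {
                echo 'Pipeline finished'
            }
        }
    }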

Question 56: How to integrate SonarQube in Jenkins

To integrate SonarQube in Jenkins:

  1. Install the SonarQube Scanner plugin in Jenkins.
  2. Configure SonarQube server details in Jenkins global configuration.
  3. In your Jenkins job or pipeline, add a build step to execute SonarQube analysis.
  4. Specify the project key, name, and version to be analyzed by SonarQube.
  5. Optionally, customize additional parameters such as source code directories, exclusions, language settings, etc.
  6. Trigger the Jenkins job or pipeline, and SonarQube analysis will be executed as part of the build process.
  7. View the analysis results in the SonarQube dashboard to identify code quality issues, bugs, vulnerabilities, and code smells.
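
For a Maven project, the analysis step in a declarative pipeline typically looks like the sketch below; 'MySonarQube' must match the server name configured in Jenkins global configuration, and the project key is a placeholder:

    pipeline {
        agent any
        stages {
            stage('SonarQube Analysis') {
                steps {
                    // Injects the server URL and authentication token configured in Jenkins
                    withSonarQubeEnv('MySonarQube') {
                        sh 'mvn -B sonar:sonar -Dsonar.projectKey=sample-app'
                    }
                }
            }
        }
    }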

Question 57: What is poll scm?

Poll SCM is a feature in Jenkins that allows users to configure periodic checks for changes in the source code repository. When enabled, Jenkins will poll the specified SCM (Source Control Management) repository at regular intervals to detect any new commits or changes. If changes are detected since the last build, Jenkins will trigger a new build of the job or pipeline to incorporate those changes. Poll SCM is useful for triggering builds automatically based on changes in the source code repository without relying on external webhooks or triggers.
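
In a declarative pipeline, Poll SCM is configured with the pollSCM trigger using Jenkins cron syntax, for example:

    pipeline {
        agent any
        triggers {
            // Poll the repository roughly every 15 minutes; "H" spreads the load across the hour
            pollSCM('H/15 * * * *')
        }
        stages {
            stage('Build') {
                steps {
                    sh 'make build'   // placeholder build step
                }
            }
        }
    }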

Question 58: How do you create a backup of Jenkins?

To create a backup of Jenkins:

  1. Log in to the Jenkins server.
  2. Navigate to the Jenkins home directory where Jenkins configuration and data are stored.
  3. Stop the Jenkins service to ensure data consistency during backup.
  4. Archive the entire Jenkins home directory, including configuration files, job configurations, build logs, and any additional data.
  5. Optionally, compress the archive to reduce storage space.
  6. Store the backup archive in a secure location, preferably offsite or in a redundant storage system.
  7. Restart the Jenkins service once the backup process is complete to resume normal operation.
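
A minimal backup sketch along these lines, assuming a default JENKINS_HOME of /var/lib/jenkins and a systemd-managed Jenkins service (adjust paths and service names for your setup):

    #!/usr/bin/env bash
    # Hypothetical paths -- adjust for your environment
    JENKINS_HOME=/var/lib/jenkins
    BACKUP_DIR=/backups/jenkins
    STAMP=$(date +%F)

    sudo systemctl stop jenkins                                            # stop Jenkins for a consistent snapshot
    sudo tar -czf "$BACKUP_DIR/jenkins-home-$STAMP.tar.gz" -C "$JENKINS_HOME" .
    sudo systemctl start jenkins                                           # resume normal operation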

Question 59: How to integrate AWS with Jenkins

To integrate AWS with Jenkins:

  1. Install the AWS-related plugins in Jenkins (for example, the CloudBees AWS Credentials plugin and, optionally, the Pipeline: AWS Steps plugin).
  2. Configure AWS credentials in Jenkins global configuration using AWS access key ID and secret access key.
  3. Create a new Jenkins job or pipeline to automate AWS tasks.
  4. Add build steps or pipeline stages to execute AWS CLI commands or SDK operations, such as creating EC2 instances, managing S3 buckets, deploying Lambda functions, etc.
  5. Ensure that the IAM (Identity and Access Management) role associated with Jenkins credentials has the necessary permissions to perform AWS operations.
  6. Trigger the Jenkins job or pipeline, and Jenkins will interact with AWS services using the configured credentials and execute the specified tasks.
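
As a sketch, a pipeline stage can bind a stored AWS credential to the standard environment variables read by the AWS CLI; the credential ID aws-jenkins-creds and the commands are illustrative:

    pipeline {
        agent any
        stages {
            stage('AWS Tasks') {
                steps {
                    // Requires the CloudBees AWS Credentials plugin; exports AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
                    withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                                      credentialsId: 'aws-jenkins-creds']]) {
                        sh 'aws s3 ls'                                       // example: list S3 buckets
                        sh 'aws ec2 describe-instances --region us-east-1'   // example: query EC2
                    }
                }
            }
        }
    }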

Question 60: Explain Dockerfile and how it works

A Dockerfile is a text file that contains instructions for building a Docker image. It defines the environment and configuration of the containerized application, including dependencies, runtime settings, and commands to run when the container starts. Dockerfile follows a simple syntax and consists of a series of directives and commands that are executed sequentially to build the Docker image.

The basic structure of a Dockerfile includes:

  • Base Image: Specifies the base image to be used as the starting point for building the new image.
  • Instructions: Defines various instructions such as RUN, COPY, ADD, CMD, ENTRYPOINT, etc., to install dependencies, copy files, set environment variables, and configure the container.
  • Build Context: Refers to the directory or context from which the Dockerfile is being built. All files and directories referenced in the Dockerfile are relative to this context.

When building a Docker image using a Dockerfile, Docker Engine reads the instructions from the Dockerfile and executes them in a temporary container environment. Each instruction in the Dockerfile results in a new layer being added to the image. Once all instructions are executed successfully, Docker Engine creates a final immutable image that encapsulates the application and its dependencies.
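
A minimal example Dockerfile for a hypothetical Node.js application illustrates the flow:

    # Base image
    FROM node:20-alpine
    # Working directory inside the image
    WORKDIR /app
    # Copy dependency manifests first so this layer is cached between builds
    COPY package*.json ./
    RUN npm ci --omit=dev
    # Copy the rest of the application source
    COPY . .
    # Document the port the application listens on
    EXPOSE 3000
    # Default command when a container starts
    CMD ["node", "server.js"]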

Question 61: Explain any 5 Docker commands

  1. docker build: Builds a Docker image from a Dockerfile and a specified build context.
  2. docker run: Creates and starts a new container instance based on a specified Docker image.
  3. docker push: Pushes a Docker image to a specified Docker registry, making it available for use by others.
  4. docker pull: Pulls a Docker image from a specified Docker registry or repository to the local machine.
  5. docker-compose: Manages multi-container Docker applications, defining services, networks, and volumes in a single YAML file.

Question 62: Explain the end-to-end process to build an image and push it to a registry

To build an image and push it into a registry:

  1. Write a Dockerfile defining the application environment and dependencies.
  2. Place the Dockerfile in a directory along with any necessary application files.
  3. Navigate to the directory containing the Dockerfile.
  4. Run the command docker build -t <image-name> . to build the Docker image.
  5. Once the image is built successfully, tag it with the registry URL using the command docker tag <image-name> <registry-url>/<image-name>:<tag>.
  6. Log in to the Docker registry using docker login command.
  7. Push the tagged image to the registry using docker push <registry-url>/<image-name>:<tag>.
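
Putting the steps together for a hypothetical application named myapp and a private registry at registry.example.com:

    docker build -t myapp:1.0 .                                # step 4: build the image
    docker tag myapp:1.0 registry.example.com/myapp:1.0        # step 5: tag for the registry
    docker login registry.example.com                          # step 6: authenticate
    docker push registry.example.com/myapp:1.0                 # step 7: push the image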

Question 63: Explain the COPY and ADD commands

Both COPY and ADD are used in a Dockerfile to copy files and directories from the build context on the host machine into the image's filesystem. COPY performs a plain copy and is the recommended choice for most cases. ADD does the same but adds two extra behaviors: it automatically extracts local tar archives into the destination directory, and it can fetch files from remote URLs (a practice that is generally discouraged in favor of RUN with curl or wget). Prefer COPY unless you specifically need ADD's archive extraction.
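
Illustrative instructions inside a Dockerfile (paths and archive name are placeholders):

    # COPY: plain copy from the build context into the image
    COPY src/ /app/src/

    # ADD: same as COPY, but local tar archives are automatically extracted at the destination
    ADD vendor-libs.tar.gz /opt/vendor/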

Question 64: Dockerfile structure

A Dockerfile typically consists of:

  1. Base Image: FROM instruction.
  2. Environment Setup: RUN, ENV, WORKDIR instructions.
  3. Application Installation: Using package managers or direct downloads (RUN).
  4. Copy Files: COPY or ADD instructions.
  5. Exposing Ports: EXPOSE instruction.
  6. Command Execution: CMD or ENTRYPOINT instruction.
  7. Additional Instructions: VOLUME, USER, LABEL, etc.

Question 65: Dockerfile format

By default, Docker looks for a file named "Dockerfile" (no extension, case-sensitive on most systems) in the root of the build context, which is often the project root. It is a plain text file; use UTF-8 encoding and readable file permissions. A differently named or located file can be used by passing the -f flag to docker build.
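
For example (image names and paths are placeholders):

    # Default: docker build looks for a file named "Dockerfile" in the build context
    docker build -t myapp:1.0 .

    # A differently named or located Dockerfile can be selected with -f
    docker build -f docker/Dockerfile.prod -t myapp:1.0-prod .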

Question 66: Explain k8s architecture

Kubernetes, often abbreviated as k8s, follows a distributed architecture designed to manage containerized workloads and services. The key components of Kubernetes architecture include:

  1. Master Node: The master node serves as the control plane for the Kubernetes cluster. It manages the cluster state, schedules workloads, and orchestrates communication between various components. The master node comprises several components:
  • kube-apiserver: Exposes the Kubernetes API, which is used by other components to interact with the cluster.
  • etcd: Consistent and highly available key-value store used to store cluster configuration and state information.
  • kube-scheduler: Responsible for scheduling pods onto nodes based on resource requirements and constraints.
  • kube-controller-manager: Manages various controller processes responsible for maintaining the desired state of the cluster.
  2. Node(s): Nodes, also known as worker nodes, are the compute resources where containerized applications, called pods, are deployed. Each node runs several components:
  • kubelet: Agent running on each node, responsible for managing containers, pods, and their lifecycle.
  • kube-proxy: Network proxy that maintains network rules and performs load balancing across services.
  • Container Runtime: Software responsible for running containers, such as Docker or containerd.
  3. Pods: Pods are the smallest deployable units in Kubernetes, consisting of one or more containers that share network and storage resources. They represent the application or microservice that Kubernetes manages. Pods are scheduled onto nodes, and Kubernetes ensures their desired state is maintained.
  4. Controller: Kubernetes controllers are control loops that watch the state of the cluster and make necessary changes to move the current state towards the desired state. Examples include ReplicaSet, Deployment, StatefulSet, etc., which manage the lifecycle of pods and ensure high availability and scalability.
  5. Service: Services provide a consistent way to access pods running within the cluster, regardless of their underlying IP addresses. They enable load balancing, service discovery, and routing traffic to the appropriate pods based on labels and selectors.

This distributed architecture of Kubernetes provides scalability, fault tolerance, and automation for deploying, scaling, and managing containerized applications in a production environment.
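
On most clusters these components can be inspected with kubectl, for example:

    kubectl get nodes -o wide          # worker nodes registered with the control plane
    kubectl get pods -n kube-system    # system components such as kube-proxy, CoreDNS, etc.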

Question 67: Why is Kubernetes called k8s?

Kubernetes is often abbreviated as k8s, where "8" stands for the eight letters between "k" and "s" in "Kubernetes." This abbreviation is commonly used due to the complexity and length of the word "Kubernetes." Using "k8s" as a shorthand makes it easier and quicker to reference Kubernetes in written communication, command-line usage, and naming conventions. It has become a widely accepted convention within the Kubernetes community and ecosystem.

Question 68: Explain the complete end-to-end process to create an EKS cluster in AWS

  1. Log in to the AWS Management Console.
  2. Navigate to the Amazon EKS service.
  3. Click on "Create cluster" and choose a cluster name.
  4. Configure networking, including VPC settings and subnets.
  5. Select the Kubernetes version and desired IAM role for the cluster.
  6. Configure logging and monitoring options.
  7. Review and create the cluster.
  8. Wait for the cluster control plane to be provisioned, add a managed node group (or Fargate profile) so the cluster has worker capacity, and then configure kubectl to interact with the cluster.
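
An equivalent and commonly used alternative to the console is the eksctl CLI; the names, region, and instance sizes below are placeholders:

    # Create the cluster and a managed node group
    eksctl create cluster --name demo-cluster --region us-east-1 \
      --nodegroup-name demo-nodes --nodes 2 --node-type t3.medium

    # Point kubectl at the new cluster and verify
    aws eks update-kubeconfig --region us-east-1 --name demo-cluster
    kubectl get nodes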

Question 69: What is a pod, how many containers can we run in a single pod, and are there any limitations?

A pod is the smallest deployable unit in Kubernetes, representing one or more containers that share network and storage resources. In a single pod, you can run multiple containers that are tightly coupled and need to share resources. However, it's recommended to keep each pod focused on a single application or process to maintain simplicity and flexibility. There are no hard limits on the number of containers per pod, but excessive coupling can lead to management complexities.
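
A sketch of a two-container pod (a main container plus a sidecar) with placeholder names and images:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar
    spec:
      containers:
        - name: web
          image: nginx:1.25                  # main application container
          ports:
            - containerPort: 80
        - name: log-agent                    # sidecar sharing the pod's network and volumes
          image: busybox:1.36
          command: ["sh", "-c", "tail -f /dev/null"]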

Question 70: Explain the complete end-to-end process to deploy pods in an EKS cluster

  1. Define the application deployment configuration in a Kubernetes manifest file.
  2. Connect to the EKS cluster using kubectl.
  3. Apply the deployment manifest using kubectl apply -f <manifest-file>.
  4. Monitor the deployment status using kubectl get pods.
  5. Optionally, expose the deployment using a service definition.
  6. Verify the deployment and service connectivity.
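
A minimal Deployment manifest for step 1 might look like the following (the application name and image are placeholders); applying it with kubectl apply -f deployment.yaml and checking kubectl get pods covers steps 3 and 4:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sample-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: sample-app
      template:
        metadata:
          labels:
            app: sample-app
        spec:
          containers:
            - name: sample-app
              image: registry.example.com/sample-app:1.0   # placeholder image
              ports:
                - containerPort: 8080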

Question 71: What is Helm?

Helm is a package manager for Kubernetes, allowing users to define, install, and manage Kubernetes applications. It simplifies the deployment and management of complex Kubernetes applications by providing templating, dependency management, and versioning capabilities. Helm packages are called charts, which consist of pre-configured Kubernetes resources bundled together for easy installation.
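
Typical Helm usage, with an illustrative release name and a public chart repository:

    helm repo add bitnami https://charts.bitnami.com/bitnami    # register a chart repository
    helm repo update
    helm install my-nginx bitnami/nginx                         # install a chart as a release
    helm upgrade my-nginx bitnami/nginx                         # upgrade the release later
    helm rollback my-nginx 1                                    # roll back to a previous revision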

Question 72: Any major troubleshooting you have done in Kubernetes?

Troubleshooting in Kubernetes often involves diagnosing issues related to pod scheduling, networking, resource constraints, and application failures. Some common troubleshooting tasks include:

  • Analyzing pod status and events using kubectl.
  • Checking container logs for errors.
  • Investigating networking issues with services and ingresses.
  • Monitoring resource utilization and adjusting resource requests/limits.
  • Debugging application code and dependencies.
  • Investigating cluster component failures.
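
Typical first-pass commands for these investigations (names and namespace are placeholders):

    kubectl get pods -n my-namespace                                   # overall pod status
    kubectl describe pod my-pod -n my-namespace                        # events, scheduling and image-pull errors
    kubectl logs my-pod -c my-container -n my-namespace --previous     # logs from a crashed container
    kubectl get events -n my-namespace --sort-by=.metadata.creationTimestamp
    kubectl top pods -n my-namespace                                   # resource usage (requires metrics-server)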

Question 73: What actions do we need to perform post deployment?

After deployment, it's important to:

  • Monitor the application and infrastructure for any issues.
  • Set up logging and monitoring to track performance and detect anomalies.
  • Configure alerts to notify about critical events.
  • Perform periodic backups of data and configurations.
  • Continuously optimize resource usage and application performance.

Question 74: How to roll back to the last stable version if a prod deployment fails or the application is not working post deployment?

To rollback to the last stable version:

  1. Identify the previous stable version or deployment.
  2. Use the kubectl rollout undo command to roll back to the previous revision (see the example after this list).
  3. Monitor the rollout status using kubectl rollout status.
  4. Verify the application functionality and stability post-rollback.
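
For a Kubernetes Deployment the rollback usually looks like this (the Deployment name is a placeholder):

    kubectl rollout history deployment/sample-app                  # inspect available revisions
    kubectl rollout undo deployment/sample-app                     # revert to the previous revision
    kubectl rollout undo deployment/sample-app --to-revision=2     # or target a specific revision
    kubectl rollout status deployment/sample-app                   # watch the rollback complete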

Question 75: What is a DaemonSet?

A DaemonSet ensures that all or some nodes run a copy of a pod. As nodes are added or removed from the cluster, pods are added or removed accordingly. It's commonly used for system daemons like log collectors, monitoring agents, or storage drivers, ensuring these services run on every node in the cluster.
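
A sketch of a DaemonSet that runs a log-collection agent on every node (name and image are placeholders):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: log-collector
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: log-collector
      template:
        metadata:
          labels:
            app: log-collector
        spec:
          containers:
            - name: log-collector
              image: example.com/log-collector:1.0   # placeholder log-agent image
              resources:
                limits:
                  memory: 200Mi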

Read QnA Set 1  (1-25 ) -  DevOps Most Asked Real Time Interview Question And Answer - Set 1{alertSuccess}

Read QnA Set 2  (26 - 50 )  - DevOps Most Asked Real Time Interview Question And Answer - Set 2{alertSuccess}

Read QnA Set 4  (76 - 100 ) -  DevOps Most Asked Real Time Interview Question And Answer - Set 4{alertSuccess}
