
DevOps or SRE interview Questions


1. List Down Kubernetes Objects


Kubernetes Objects are persistent entities in the cluster. These objects are used to represent the state of the cluster.
The following are some of the Kubernetes Objects:
· Pods
· Namespaces
· ReplicationControllers (manage Pods)
· Deployments (manage ReplicaSets, which in turn manage Pods)
· StatefulSets
· DaemonSets
· Services
· ConfigMaps
· Volumes
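To see the full list of object kinds your cluster supports (assuming kubectl is already configured against the cluster), you can run:
kubectl api-resources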


Docker World:
Before starting, let's first look at what a Docker container is.

Any application requires software to run, some sort of configuration (which differs between environments) and dependent libraries. Docker makes it easy to package all of them together and ship them as one standalone unit. A Docker container image is a lightweight, executable software package that bundles the application, its environment and dependent libraries. A container image becomes a container at runtime, and we can create any number of containers from one container image. In Java terms, the container image is the class and the running containers are the objects instantiated from that class.
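A rough sketch of that image-to-container relationship (the image name and Dockerfile are hypothetical, not from this post):
docker build -t demo/springapp:1.0 .            # package app, config and libraries into an image (the "class")
docker run -d -p 8080:8080 demo/springapp:1.0   # start a container from the image (an "object")
docker run -d -p 8081:8080 demo/springapp:1.0   # any number of containers can run from the same image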
Before starting with Kubernetes Objects, I would encourage you to go through the Kubernetes Architecture (K8s journey) to understand the K8s components and how they interact with each other.
Let’s start our discussion with basic building block of K8s…
A Kubernetes cluster consists of two main components:
1. Master (Control Plane)
2. Worker Nodes.
Control Plane has following components. These components are responsible for maintaining the state of the cluster:
1. etcd (a distributed key-value store)
2. API Server.
3. Controller Manager
4. Scheduler

Every worker node consists of the following components. These components are responsible for deploying and running the application containers.
1. Kubelet
2. Container Runtime (Docker)
There are a few more components required for the cluster, like kube-dns, kube-proxy, Ingress and the Dashboard; we will discuss them in another post.
Let’s discuss more about Master components.
etcd:
Kubernetes uses etcd for storing the cluster state and metadata, which includes every object that gets created (pods, deployments, replication controllers, ingresses, etc.). etcd is a distributed key-value store that provides a reliable way of storing data across a cluster of machines. The API Server is the single entry point that talks to the etcd store directly; every other control plane component goes through the API Server. K8s stores all its data under the /registry directory in etcd.
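As a quick illustration (a minimal sketch, assuming etcdctl v3 and direct access to an etcd member; a real cluster will also need --endpoints and client certificates):
ETCDCTL_API=3 etcdctl get /registry --prefix --keys-only | head -20   # list some of the keys Kubernetes keeps under /registry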
Api Server:
The K8s API Server is the central component that all other components talk to. The API Server takes care of validating an object before saving its information to etcd.

The client for the Api Server can be either kubectl (command line tool) or a Rest Api client.
There are several plugins that are invoked by the API Server before creating/deleting/updating an object in etcd.
When we send an object creation request to the API Server, it first needs to authenticate the client. This is performed by one or more authentication plugins. The authentication mechanism can be based on the client's certificate or on Basic authentication using the HTTP "Authorization" header.
Once authentication passes in any of the plugins, the request is passed to the authorization plugins. These validate whether the user has access to perform the requested action on the object. For example, developers are not supposed to create cluster role bindings or security policies; those are controlled at the cluster level by the ops team only. Once authorization passes, the request is sent to the Admission Control Plugins (ACP).
Admission Control Plugins are responsible for initializing any missing fields or default values. For example, if we didn't specify any Service Account information during object creation, one of the plugins takes care of adding the default service account to the resource specification. Finally, the API Server validates the object and stores it in etcd.
The API Server won't initiate any requests for creating pods/services; that's the responsibility of the controllers. In fact, it's the responsibility of every control plane component to register for the changes it is interested in. A control plane component can request to be notified when a resource is created, modified or deleted. Clients watch for changes by opening an HTTP connection to the API Server; every time an object is updated, the API Server uses this connection to send the new version of the object.
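The same watch mechanism is available from the client side too; for example (a simple illustration using kubectl):
kubectl get pods --watch   # keeps the HTTP connection open and streams changes as pods are created/updated/deleted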


Scheduler:
The scheduler's main job is to decide which node each newly created pod should run on. It registers with the API Server to be notified about newly created objects/resources.
The scheduler figures out which node the pod should run on using a scheduling algorithm: it checks whether a worker node has the required capacity, and whether the resource specification targets specific nodes via labels or affinity rules, or requires specific volumes such as SSDs. After selecting a node, the scheduler simply updates the resource specification and sends it to the API Server, which stores it in etcd. The API Server then notifies (using the watch mechanism) the kubelet on the worker node selected by the scheduler.
Controller Manager:
The Controller Manager is responsible for making sure that the actual state of the system converges towards the desired state specified in the resource specification. There are several different controllers running inside the controller manager, such as the Deployment controller, StatefulSet controller, Namespace controller and PersistentVolume controller.
All controllers watch the API Server for changes to resources/objects and perform necessary actions like create/update/delete of the resource.
Worker Node components:
Kubelet:
The kubelet registers the node it is running on with the API Server. It monitors the API Server for pods that are scheduled to its node and then starts the pod's containers by instructing the container runtime (Docker).
The kubelet monitors the status of running containers and reports their status, events and resource consumption to the API Server. It also performs health checks on the containers and restarts them if needed.
Docker:
Docker is the container runtime used here by the kubelet for spinning up containers. Docker is a platform for packaging, distributing and running applications. Docker container images contain the application code, the file system it requires and application metadata. A Docker registry is a repository that stores Docker images and allows us to share them; a registry can be public (shared over the internet) or private (accessible only within the organization).

Pods:
In the Docker world, every microservice is deployed as a container. In the K8s world, a Pod is the basic building block among K8s objects. A pod is a colocated group of containers; it can also contain a single container. When it contains multiple containers, all of them run on a single worker node; a pod is never spread across multiple worker nodes.
Now, the question of why do we need colocated containers in the same pod?
· There are certain cross-cutting concerns which you need to take care of before bringing up the actual application container. For example, say your application stores all its secret/password configuration in Vault. To integrate with Vault you need a short-lived Vault token so that you can retrieve the passwords at runtime. You can bring up a first container that integrates with Vault, queries for the token and writes it to a shared volume; the actual app container can then read the token from the shared volume and get the secrets/passwords it needs.
· Another example, Let’s say our application is responsible for serving traffic for web content. We can divide the app into two containers, One container is responsible for serving the content, and another container is responsible for loading the content from external systems.
· Another example, Our app container is responsible for handling the ecommerce store, another container is responsible for collecting the logs and sending it to a central location.
It's not necessary to have multiple containers running in one pod, but when there is a need, K8s supports colocating (grouping) containers in one pod, where they share the same Linux namespaces. We can mount a shared volume that is available to all the containers in the pod, and each of them can read/write data on that volume. Containers running in one pod also share the allocated IP address and port space, which means the IP address is allocated to the pod, not to each container inside it.
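A minimal, hypothetical sketch of such a pod, with two colocated containers sharing an emptyDir volume (names and images are illustrative, not from this post):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: app              # main application container reading from the shared volume
    image: busybox
    command: ['sh', '-c', 'while true; do cat /data/out.txt 2>/dev/null; sleep 5; done']
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: sidecar          # colocated helper writing to the shared volume
    image: busybox
    command: ['sh', '-c', 'while true; do date > /data/out.txt; sleep 5; done']
    volumeMounts:
    - name: shared-data
      mountPath: /data
EOF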
Assumption: You have a kubernetes cluster running, either on google cloud, on-prem or as minikube for experimenting. You have installed kubectl, which is a command line tool for accessing k8s cluster.
Enough of theory, Let’s look at how to create a Pod?
Create Pod:
We can use kubectl or the REST API (against the API server) to submit the pod manifest to K8s to create a resource (object). Ultimately kubectl also connects to the API server and submits the pod manifest using the REST API.
The commands are pretty simple to use, and you write the manifest in either YAML or JSON notation. To create a Pod:
kubectl create -f <pod-manifest>
Pod manifest sample: spring-app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: spring-pod
spec:
  containers:
  - image: chkrishna/springdemoapp:latest
    name: spring-demoapp
    ports:
    - containerPort: 8080
      protocol: TCP

1. kind: Represents the type of k8s object created. It can be a Pod, DaemonSet, Deployment, Service, etc.
2. apiVersion: The K8s API version used to create the resource. It can be v1, v1beta1, etc.; some resources are first released under a beta version before becoming generally available.
3. metadata: Provides information about the resource, like the name of the pod, the namespace under which the pod will be running, labels and annotations.
4. spec: Consists of the core information about the pod. Here we tell k8s what the expected state of the resource should be, like the container image, number of replicas, environment variables and volumes.
5. status: Consists of information about the running pod, such as the status of each container. The status field is supplied and updated by Kubernetes after creation.
When we create an object in K8s, we need to provide the object spec (manifest) which describes the object desired state. The status describes the actual state of the object.
It should now make sense to read back the pod manifest we submitted to the k8s master: the type of resource we are creating is a Pod, it has only one container as mentioned in the spec, we are referring to a sample image springdemoapp (a Spring Boot Java application), and the container listens on port 8080.
You can get the running pod information using:
kubectl get pods
Now you should see:
NAME READY STATUS RESTARTS AGE
spring-pod 1/1 Running 0 10m
You can get the complete YAML by specifying the -o (output format) option. The returned pod manifest will consist of the additional fields added by K8s, such as status, hostIP (the k8s worker node) and the pod IP address.
kubectl get pod spring-pod -o yaml
You can check the logs for your container using:
kubectl logs -f spring-pod --tail 200
You should see something like:
s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
c.m.main.SpringDemoAppApplication : Started SpringDemoAppApplication in 6.997 seconds (JVM running for 7.762)
If we have multiple containers running under one pod, to look at the logs for specific container we can use -c option as:
kubectl logs -f spring-pod --tail 200 -c spring-demoapp
Now that our pod is running successfully, how do we access our application? There are several different ways to access an application running on K8s: we can use hostPort, port-forward, or a Service-based approach. We will discuss the Service-based approach in the next post; let's start with how to access the pod using hostPort.
Access Pod Using hostPort:
We need to tweak our spring-app.yaml and add the hostPort parameter to the ports section.
spring-app-expose.yaml
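The tweaked manifest isn't reproduced in this post; a sketch of what it likely looks like (the hostPort value 9090 is inferred from the curl call below):
cat > spring-app-expose.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: spring-pod
spec:
  containers:
  - image: chkrishna/springdemoapp:latest
    name: spring-demoapp
    ports:
    - containerPort: 8080
      hostPort: 9090
      protocol: TCP
EOF
kubectl create -f spring-app-expose.yaml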
With this change our application is exposed on the host network using the host IP (on gcloud or on-prem this is the worker node running the pod; on minikube you can get it with minikube ip) and hostPort 9090.
We can validate by running spring boot health call at:
curl -v http://192.168.99.100:9090/health
The output should be:
Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 9090 (#0)
> GET /health HTTP/1.1
< HTTP/1.1 200
{"status":"UP"}
Access Pod Using port-forward:
We can also use port-forward to access the application; I would recommend this option only for local testing.
kubectl port-forward spring-pod 10000:8080
After that you will see the following output in the console, and accessing the app works just like the hostPort option.
Forwarding from 127.0.0.1:10000 -> 8080
Forwarding from [::1]:10000 -> 8080
Now we can try the same request with 10000 port using:
curl -v http://127.0.0.1:10000/health
Now we see:
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 10000 (#0)
> GET /health HTTP/1.1
> Host: 127.0.0.1:10000
< HTTP/1.1 200
{"status":"UP"}
View Pod Details:
We can get the running pod details, events, container status by using describe command:
kubectl describe pod spring-pod
Now view the details:
Name: spring-labels
Namespace: default
Node: minikube/192.168.99.100
Labels: <none>
Annotations: <none>
Status: Running
IP: 172.17.0.6
Containers:
spring-demoapp:
Image: chkrishna/springdemoapp:latest
Port: 8080/TCP
State: Running
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31s default-scheduler Successfully assigned spring-labels to minikube
Normal SuccessfulMountVolume 30s kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-5fvlr"
Normal Pulling 30s kubelet, minikube pulling image "chkrishna/springdemoapp:latest"
Normal Pulled 29s kubelet, minikube Successfully pulled image "chkrishna/springdemoapp:latest"
Normal Created 29s kubelet, minikube Created container
Normal Started 29s kubelet, minikube Started container
Delete Pod:
We can delete the pod based on the name of the pod or by passing the same manifest that we have used to create the resource.
kubectl delete pod spring-pod
# OR
kubectl delete -f spring-app.yaml
Using Labels:
A label is a key/value pair which can be associated with a k8s object (resource). We can use these labels when selecting resources via label selectors. We can associate any number of labels with a k8s object and add/delete/modify them even after the resource is created.
For example, we can tag all the API services with appType=api and all the web applications with appType=web. Now let's create one more pod with labels.
Pod manifest: spring-labels.yaml
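The manifest isn't reproduced in this post; a sketch of what spring-labels.yaml likely looks like (the label appType=api and the pod name are taken from the output below; the image is assumed to be the same one used earlier):
cat > spring-labels.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: spring-labels
  labels:
    appType: api
spec:
  containers:
  - image: chkrishna/springdemoapp:latest
    name: spring-demoapp
    ports:
    - containerPort: 8080
      protocol: TCP
EOF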
kubectl create -f spring-labels.yaml
Now to list all the pods along with labels, we can use --show-labels option.
kubectl get pods --show-labels
The output would be like this:
NAME READY STATUS RESTARTS AGE LABELS
spring-labels 1/1 Running 0 35s appType=api
spring-pod 1/1 Running 0 1h <none>
We can add the labels after creating the pod as well. We can label our spring-pod also with appType label using:
kubectl label pod spring-pod appType=api
kubectl label pod spring-labels dept=billing
Now you should see:
pod "spring-pod" labeled
Now get all the pods again with --show-labels option:
kubectl get pods --show-labels
Your output should be:
NAME READY STATUS RESTARTS AGE LABELS
spring-labels 1/1 Running 0 16m appType=api,dept=billing
spring-pod 1/1 Running 0 1h appType=api
To delete a label from running pod:
kubectl label pod spring-pod appType-
Now if we run the get pods with --show-labels option we will see the earlier status for spring-pod with label as <none>.
Similarly we can update the existing label of a pod using:
kubectl label pod spring-labels dept=customer_service --overwrite
To update existing label we have to use --overwrite option. Now get all the pods again with show-labels option
kubectl get pods --show-labels
The output should be:
NAME READY STATUS RESTARTS AGE LABELS
spring-labels 1/1 Running 0 18m appType=api,dept=customer_service
spring-pod 1/1 Running 0 1h <none>
We can also list pods based on label selectors, For that we can use -l option.
kubectl get pods -l dept
The output should be:
NAME READY STATUS RESTARTS AGE LABELS
spring-labels 1/1 Running 0 22m appType=api,dept=customer_service
We can narrow down the list based on specific value of label as well using:
kubectl get pods -l appType=api
The output should be:
NAME READY STATUS RESTARTS AGE LABELS
spring-labels 1/1 Running 0 22m appType=api,dept=customer_service
We can also delete the pods using label selectors like:
kubectl delete pod -l appType=api
Now you should see:
pod "spring-labels" deleted
Experimenting Kubernetes:
If an on-premise or gcloud k8s cluster is not an option for you, don't worry. You can experiment with Kubernetes using the following interactive terminals:
1. Kubernetes interactive tutorials
2. Katacoda
That's a lot of information for now with only one resource (the Pod); I will cover the remaining objects in the next post (link to be added soon). Please let me know your feedback in the comments section.

Saltstack Windows Patching

CONFIGURING SALT

Salt configuration is very simple. The default configuration for the master will work for most installations and the only requirement for setting up a minion is to set the location of the master in the minion configuration file.
The configuration files will be installed to /etc/salt and are named after the respective components, /etc/salt/master, and /etc/salt/minion.

MASTER CONFIGURATION

By default the Salt master listens on ports 4505 and 4506 on all interfaces (0.0.0.0). To bind Salt to a specific IP, redefine the "interface" directive in the master configuration file, typically /etc/salt/master, as follows:
- #interface: 0.0.0.0
+ interface: 10.0.0.1
After updating the configuration file, restart the Salt master. See the master configuration reference for more details about other configurable options.
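For example, on a systemd-based distribution (older init systems use "service salt-master restart" instead):
sudo systemctl restart salt-master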

MINION CONFIGURATION

Although there are many Salt Minion configuration options, configuring a Salt Minion is very simple. By default a Salt Minion will try to connect to the DNS name "salt"; if the Minion is able to resolve that name correctly, no configuration is needed.
If the DNS name "salt" does not resolve to point to the correct location of the Master, redefine the "master" directive in the minion configuration file, typically /etc/salt/minion, as follows:
- #master: salt
+ master: 10.0.0.1
After updating the configuration file, restart the Salt minion. See the minion configuration reference for more details about other configurable options.

PROXY MINION CONFIGURATION

A proxy minion emulates the behaviour of a regular minion and inherits its options.
Similarly, the configuration file is /etc/salt/proxy and the proxy tries to connect to the DNS name "salt".
In addition to the regular minion options, there are several proxy-specific ones - see the proxy minion configuration reference.

RUNNING SALT

  1. Start the master in the foreground (to daemonize the process, pass the -d flag):
    salt-master
    
  2. Start the minion in the foreground (to daemonize the process, pass the -d flag):
    salt-minion
    
Having trouble?
The simplest way to troubleshoot Salt is to run the master and minion in the foreground with log level set to debug:
salt-master --log-level=debug
For information on salt's logging system please see the logging document.
Run as an unprivileged (non-root) user
To run Salt as another user, set the user parameter in the master config file.
Additionally, ownership, and permissions need to be set such that the desired user can read from and write to the following directories (and their subdirectories, where applicable):
  • /etc/salt
  • /var/cache/salt
  • /var/log/salt
  • /var/run/salt
More information about running salt as a non-privileged user can be found here.
There is also a full troubleshooting guide available.

KEY IDENTITY

Salt provides commands to validate the identity of your Salt master and Salt minions before the initial key exchange. Validating key identity helps avoid inadvertently connecting to the wrong Salt master, and helps prevent a potential MiTM attack when establishing the initial connection.
MASTER KEY FINGERPRINT
Print the master key fingerprint by running the following command on the Salt master:
salt-key -F master
Copy the master.pub fingerprint from the Local Keys section, and then set this value as the master_finger in the minion configuration file. Save the configuration file and then restart the Salt minion.
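The resulting minion configuration entry looks something like this (the fingerprint value shown here is only a placeholder; use the one printed by salt-key -F master):
# /etc/salt/minion
master_finger: 'ba:ab:cd:ef:...'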
MINION KEY FINGERPRINT
Run the following command on each Salt minion to view the minion key fingerprint:
salt-call --local key.finger
Compare this value to the value that is displayed when you run the salt-key --finger <MINION_ID> command on the Salt master.

KEY MANAGEMENT

Salt uses AES encryption for all communication between the Master and the Minion. This ensures that the commands sent to the Minions cannot be tampered with, and that communication between Master and Minion is authenticated through trusted, accepted keys.
Before commands can be sent to a Minion, its key must be accepted on the Master. Run the salt-key command to list the keys known to the Salt Master:
[root@master ~]# salt-key -L
Unaccepted Keys:
alpha
bravo
charlie
delta
Accepted Keys:
This example shows that the Salt Master is aware of four Minions, but none of the keys has been accepted. To accept the keys and allow the Minions to be controlled by the Master, again use the salt-key command:
[root@master ~]# salt-key -A
[root@master ~]# salt-key -L
Unaccepted Keys:
Accepted Keys:
alpha
bravo
charlie
delta
The salt-key command allows for signing keys individually or in bulk. The example above, using -A bulk-accepts all pending keys. To accept keys individually use the lowercase of the same option, -a keyname.
See also
salt-key manpage

SENDING COMMANDS

Communication between the Master and a Minion may be verified by running the test.version command:
[root@master ~]# salt alpha test.version
alpha:
    2018.3.4
Communication between the Master and all Minions may be tested in a similar way:
[root@master ~]# salt '*' test.version
alpha:
    2018.3.4
bravo:
    2018.3.4
charlie:
    2018.3.4
delta:
    2018.3.4
Each of the Minions should send a 2018.3.4 response as shown above (or whichever Salt version is installed).



I was going through a few DevOps tools for automation work out of personal interest and comparing them against our in-house environments. While comparing Puppet vs Chef vs Ansible vs SaltStack, I found Salt to be a very good open-source tool: it can manage cloud environments without buying the enterprise edition, can easily manage 1000+ servers with a single master, and is also capable of handling Docker images.
Another plus is that it is marginally faster than the other configuration management tools for large deployments of 1000+ minions connected to and managed by one master. It can save a lot of man-hours of manual work, especially in hosted environments as well as in-house environments.
As an immediate demo in this article, we will try to show how we can eliminate pain points like:
  1. Patching Windows security fixes (KBs) to 100+ servers takes a huge amount of time, including weekends, when you have to log in manually to each server and patch it.
  2. Monitoring service status and ensuring services are running fine, checking network status, etc. eats time out of the daily work hours.
  3. Environment configuration states are not under control, and reverting or tracking a single change is not possible, which leads to many issues raised by users.
To quickly summarize the article:
As an initial step towards Salt's capabilities, I have set up SaltStack (master and minion/slave) and successfully performed basic Windows security patching. We can easily add any Windows, Linux or Solaris boxes as minions to the same Salt master for any kind of patching and configuration management tasks.
We can centralize the Salt master to make all kinds of patching and maintenance activities risk-free and automated, saving many hours of manual work. Salt is well capable of managing data centers as well as cloud environments and Amazon VMs, including our local VM environments.
Patching windows security fixes to all slaves from a single master server:
Objectives of example:
– Installation of the Salt master and a SaltStack minion (slave). I have taken my laptop as the minion/slave for testing; we can add 1k+ minions to one Salt master.
– Sample steps to change configuration on all minions automatically from one master.
– Sample steps to patch security fixes to all the minions (for example, cloud servers).

SALT BOOTSTRAP

The Salt Bootstrap Script allows a user to install the Salt Minion or Master on a variety of system distributions and versions.
The Salt Bootstrap Script is a shell script known as bootstrap-salt.sh. It runs through a series of checks to determine the operating system type and version, and then installs the Salt binaries using the appropriate method.
The Salt Bootstrap Script installs the minimum number of packages required to run Salt. This means that in the event you run the bootstrap to install via package, Git will not be installed. Installing the minimum number of packages helps ensure the script stays as lightweight as possible, assuming the user will install any other required packages after the Salt binaries are present on the system.
The Salt Bootstrap Script is maintained in a separate repo from Salt, complete with its own issues, pull requests, contributing guidelines, release protocol, etc.
To learn more, please see the Salt Bootstrap repo.
Note
The Salt Bootstrap script can be found in the Salt repo under the salt/cloud/deploy/bootstrap-salt.sh path. Any changes to this file will be overwritten! Bug fixes and feature additions must be submitted via the Salt Bootstrap repo. Please see the Salt Bootstrap Script's Release Process for more information.


Setup of Salt (Master):
Note: I have used a RHEL 6 server to set up the salt master.
  1. Run the below command to download the script used to install the master.
curl -L https://bootstrap.saltstack.com -o install_salt.sh 
  2. Run the below command to start the install of the salt master:
install_salt.sh -M
[root@saltmaster opt]# sudo sh install_salt.sh -M

*  INFO: sh install_salt.sh -- Version 2015.05.07
*  INFO: System Information:
*  INFO:   CPU:          GenuineIntel
*  INFO:   CPU Arch:     x86_64
*  INFO:   OS Name:      Linux
*  INFO:   OS Version:   2.6.18-348.el5
*  INFO:   Distribution: Red Hat Enterprise Server 5.9
*  INFO: Installing minion
*  INFO: Installing master
*  INFO: Found function install_red_hat_enterprise_server_stable_deps
*  INFO: Found function install_red_hat_enterprise_server_stable
*  INFO: Found function install_red_hat_enterprise_server_stable_post
*  INFO: Found function install_red_hat_enterprise_server_restart_daemons
*  INFO: Found function daemons_running
*  INFO: Running install_red_hat_enterprise_server_stable_deps()
*  INFO: Adding SaltStack's COPR repository

Loaded plugins: product-id, security, subscription-manager

This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.

Setting up Install Process
Package python26-PyYAML-3.08-4.el5.x86_64 already installed and latest version
Package python26-m2crypto-0.21.1-5.el5.x86_64 already installed and latest version
Package python26-2.6.8-2.el5.x86_64 already installed and latest version
Package python26-requests-1.1.0-5.el5.noarch already installed and latest version
Package python26-crypto-2.3-6.el5.x86_64 already installed and latest version
Package python26-msgpack-0.4.5-1.el5.x86_64 already installed and latest version
Package python26-zmq-14.5.0-1.x86_64 already installed and latest version
Package python26-jinja2-2.5.5-6.el5.noarch already installed and latest version

Nothing to do
*  INFO: Running install_red_hat_enterprise_server_stable()
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Install Process
Package salt-minion-2014.7.5-2.noarch already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package salt-master.noarch 0:2014.7.5-2 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved
================================================================================
 Package           Arch         Version          Repository                Size
================================================================================
Installing:
 salt-master       noarch       2014.7.5-2       saltstack-salt-el5       765 k

Transaction Summary
================================================================================
Install       1 Package(s)
Upgrade       0 Package(s)

Total download size: 765 k
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : salt-master                                              1/1

Installed:
  salt-master.noarch 0:2014.7.5-2

Complete!
*  INFO: Running install_red_hat_enterprise_server_stable_post()
*  INFO: Running install_red_hat_enterprise_server_restart_daemons()
Starting salt-minion daemon:                               [  OK  ]
Starting salt-master daemon:                               [  OK  ]
*  INFO: Running daemons_running()
-------------------------------------------------------
  3. Once the installation has completed, run the below command to check the versions installed.
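The command itself only appeared in a screenshot in the original post; it is presumably the Salt version report, e.g.:
salt --versions-report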
  4. Run the below command to restart the salt master service.
service salt-master restart
  5. Now download the SaltStack Windows minion (slave) software for the servers to manage from the below URL:
SaltStack Windows Installation
  6. Install the SaltStack Windows minion on the client node with default settings, and provide the below details while installing:
                 the IP address of the master and the FQDN of the client node.
  7. Make sure the SaltStack Windows salt-minion service is up and running on the client node which you want to manage.
  8. Now go back to the master server and run the below command to see the authentication keys sent by all minions.
[root@saltmaster opt]# sudo salt-key
Accepted Keys:
Unaccepted Keys:
SaltMinion-WindowsLaptop1
Rejected Keys:
  9. Now accept the key to authenticate the SaltStack Windows minion to the master.
[root@saltmaster opt]# salt-key -a SaltMinion-WindowsLaptop1
The following keys are going to be accepted:
Unaccepted Keys:
SaltMinion-WindowsLaptop1
Proceed? [n/Y] Y
Key for minion SaltMinion-WindowsLaptop1 accepted.
[root@saltmaster opt]# sudo salt-key
Accepted Keys:
SaltMinion-WindowsLaptop1
Unaccepted Keys:
Rejected Keys:
  10. Run the below command to check the status of all SaltStack Windows minions added to the salt master. (Here we have added only one minion – my laptop.)
Note : you can add 1k+ minions to a single salt master.
salt '*' test.ping
  11. We can check whether the salt master is able to control the SaltStack Windows minions by running the below command to change the minion computer's description.
11.a. Check the computer's description before running the command below to change it.
11.b. Now run the below command to change the description of the saltstack windows minion/slave computer from master.
       salt '*' system.set_computer_desc 'Office Laptop Of Ramakanta'
11.c. Check the SaltStack Windows minion's computer description to make sure the master is able to control the minion.
  12. We are now sure that the master is able to control the SaltStack Windows minion, so we will go ahead with one more example: automatically downloading and installing a security fix.
Steps to Auto patch windows KB3011780 security update from master to all the salt minion servers.
  1. Before starting the installation, check the latest KB installed on the minion and note it down, so you can later verify whether the new KB got installed successfully.
Latest KB = KB3010788
  2. Run the below command from the salt master to download the KB from the Microsoft website on all the SaltStack Windows minion servers.
It will download and store the file in C:\TEMP as mentioned in the command below:
salt '*' cp.get_url 'http://download.microsoft.com/download/C/C/3/CC36FA0C-974B-444A-B2C6-8E368250E37F/Windows6.1-KB3011780-x64.msu' 'C:\TEMP\Windows6.1-KB3011780-x64.msu'  -l debug
Note: Here "*" is used to tell salt to order all registered SaltStack Windows minions to download the file. It can be customized further to suit your requirements, for example patching only a few servers or servers whose names match particular patterns.
On the SaltStack Windows minion, the file got downloaded automatically to C:\TEMP.
  3. To install the downloaded KB from C:\TEMP on all SaltStack Windows minions, run the below command.
salt -t 900 '*' cmd.run 'wusa.exe C:\TEMP\Windows6.1-KB3011780-x64.msu /quiet /norestart' -l debug
Now, to check whether the KB got installed successfully, run the below command.
The output (entry number 226 in the systeminfo hotfix list) shows the KB got installed successfully on my laptop (minion/slave):
salt -t 200 '*' cmd.run 'systeminfo | find "KB3011780"'
You can see the security fix KB3011780 is now installed successfully on my laptop (saltstack windows minion) without any manual intervention.
So if we implement SaltStack we can manage configurations, patching and upgrades in any hosted or local environment easily and automatically, saving a lot of man-hours of manual work, and a single person can handle many environments from one master.


Some Important Commands:
Extract PowerShell modules here (on the Windows minion):
C:\Windows\System32\WindowsPowerShell\v1.0\Modules

Set-ExecutionPolicy Unrestricted          # allow imported PowerShell modules/scripts to run
Import-Module PSWindowsUpdate             # load the PSWindowsUpdate module
salt-call state.highstate -l debug        # apply the highstate locally on a minion, with debug logging
telnet 172.19.8.25 4505                   # verify a minion can reach the master's publish port
telnet 172.19.8.25 4506                   # verify a minion can reach the master's return port
install-windowsfeature "telnet-client"    # install the telnet client on Windows Server (PowerShell)
# query the last shutdown event (event id 6006) on all minions via PowerShell:
sudo salt '*' cmd.run 'get-eventlog system | where-object {$_.eventid -eq 6006} | select -first 1' shell=powershell
# check whether a specific KB (here KB4512506) is installed:
get-hotfix | select-object -property PScomputername,hotfixid,installedon | findstr -i KB4512506

https://github.com/jborean93/ansible-windows/blob/master/scripts/Upgrade-PowerShell.ps1
https://docs.ansible.com/ansible/latest/user_guide/windows_setup.html


Activity: Create an EC2 instance & Install packages using Ansible


Install PIP and BOTO & then EC2
  • sudo apt-get install python-pip
  • pip install boto

- name: Create a new Demo EC2 instance
  hosts: 127.0.0.1
  gather_facts: False

  vars:
      region: us-east-1
      instance_type: t2.micro
      ami: ami-07d0cf3af28718ef8  # Ubuntu
      keypair: n # pem file name

  tasks:

    - name: Create an ec2 instance
      ec2:
         key_name: "{{ keypair }}"
         aws_access_key: "AKIAJ33XMSCSCPBL7RB5ZA"
         aws_secret_key: "pIa38SDaJYSSDCSDc9DGORKyTS5sQSchJiqu/F"
         group: launch-wizard-1  # security group name
         instance_type: "{{ instance_type}}"
         image: "{{ ami }}"
         wait: true
         region: "{{ region }}"
         exact_count: 1  # used together with count_tag to ensure exactly one instance tagged Name=Demo exists
         count_tag:
            Name: "Demo"
         instance_tags:
            Name: "Demo"
         vpc_subnet_id: "subnet-afeab7f3"
         assign_public_ip: yes
      register: ec2
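To run the play from the control machine (assuming the playbook above is saved under a hypothetical name such as create-ec2-demo.yml):
ansible-playbook create-ec2-demo.yml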



Chef DevOps



In the DevOps model, developers and system operators work closely together throughout the software development process to deploy software more frequently and more reliably. Many new third party and proprietary tools have been developed to support automation, measurement and sharing.

Chef: "IT automation for speed and awesomeness"

Chef is a configuration management tool for dealing with machine setup on physical servers, virtual machines and in the cloud. Many companies use Chef software to control and manage their infrastructure including Facebook, Etsy, Cheezburger, and Indiegogo.

But what does that really mean?

Configuration management is all about trying to ensure that the files and software you are expecting to be on a machine are present, configured correctly, and working as intended.

When you have only a single machine this is fairly simple. When you have five or ten servers, it is still possible to do this manually, but it may take all day. However, when your infrastructure scales up into the thousands we need a better way of doing things.

Infrastructure as code

Chef helps solve this problem by treating infrastructure as code. Rather than manually changing anything, the machine setup is described in a Chef recipe.

Collections of recipes are stored in a cookbook. One cookbook should relate to a single task, but can have a number of different server configurations involved (for example a web application with a database, will have two recipes, one for each part, stored together in a cookbook).

There is a Chef server which stores each of these cookbooks and as a new chef client node checks in with the server, recipes are sent to tell the node how to configure itself.

The client will then check in every now and again to make sure that no changes have occurred, and nothing needs to change. If it does, then the client deals with it. Patches and updates can be rolled out over your entire infrastructure by changing the recipe. No need to interact with each machine individually.

Chef configuration

Chef Configuration from https://www.chef.io/solutions/cloud-management/

Figure 1: Chef configuration

Recipes and cookbooks are the heart of the configuration management. They are written using the Ruby programming language, however, the domain specific language used by Chef is designed to be able to be understood by everyone. As the configuration is just code it can be tested and it can be version controlled. This means that there is less downtime, more reliable services and less stressed people on both the dev and ops sides.

Chef config files to install Apache


Figure 2: Chef config files to install Apache and add a hello world html page

So why is it awesome?

You want more? How about Chef Analytics - the ability to visualise everything going on in real time. It will check if something is going wrong and notify you before the problem becomes noticeable to your clients.

The Chef development kit allows you to write and manage your chef infrastructure from any machine and any operating system.

Chef's Knife allows you to manage the interface between your Chef bookshelf (the repository) and your chef server. The high availability and replication feature allows you to ensure that even if something goes wrong, the chef server is able to adapt and recreate your infrastructure as required, without outside help.




Define Chef and its architecture in DevOps?

Chef is a powerful automation tool that transforms a company's infrastructure into well-structured code. With the help of Chef, you can write scripts that automate business processes, typically IT-related ones.



Chef Interview Questions


CHEF INFRASTRUCTURE

The three major components of any Chef architecture are the Chef server, the Chef workstation, and Chef nodes. They are arranged as described below.

  • Chef Server – This is the central storage house that stores the data necessary to configure the nodes.
  • Chef Node – A node runs the chef-client; nodes are the clients that the server configures, and they share data with the server across the network.
  • Chef Workstation – This is the host where you modify configuration data and cookbooks, which are then forwarded to the Chef Server.
Define Chef Resource and its functions in brief?

A Resource represents a piece of the infrastructure and its desired state, such as a package that should be installed, a service that should be running, or a file you plan to create. Now, let us see the functions of resources in brief –

  • It helps you to describe the desired state of a configuration item.
  • You will know the process or steps that will be followed to bring a particular item in the desirable state.
  • You can specify the type of resources like template, package, or service etc.
  • It helps to list the resource properties and the additional details that are necessary.
  • Further, resources can be grouped into recipes to describe the working configurations.
Define Chef Recipe and its functions too?

When resources are grouped together, it becomes a Recipe that describes the working configurations and policy. With a Recipe, you will get to know everything necessary to configure a particular system. Let us have a quick look at functions of Recipe –

  • Software components can be installed or configured with Chef Recipe.
  • It is used to manage files and apps deployments too.
  • With one recipe, the other related recipes can also be executed.
Define a Chef Node and why is it important?

A node is a virtual machine or a physical server and is an important part of the Chef architecture; it is where Chef resources are actually executed.

What is a Cookbook and how is it different from the Recipe in Chef?

When resources are grouped together, they become a Recipe that describes the working configuration and policy. In turn, when recipes are combined together, they become a cookbook, which is easier to manage than individual recipes.

If the action for a Chef Resource is not defined then what will happen?

If an action for a Chef Resource is not defined, it will use the default action. For example, the two resources shown below (originally screenshots) are effectively the same: for Resource 1, no action is defined, so it takes the default action; for Resource 2, the action is defined explicitly with :create, which is also the default action for this resource.

[Screenshot: Resource 1 – no action defined]

[Screenshot: Resource 2 – action :create defined explicitly]

Is the code for both of these Chef Recipes the same?

No, they are not the same. Remember that code is executed in the same order as it is written. In Recipe 1, the package is installed first and then the service is created. In Recipe 2, the service is configured first and then the package is installed.


When the system boots, do you know a command that can be used to stop or disable the 'httpd' service?

Yes. You can use a service resource with action [:stop, :disable], as shown in the code later in this post.

What is DK in Chef?

ChefDK (the Chef Development Kit) is installed on the workstation and allows users to interact with Chef. It ships with special tools (such as knife and Test Kitchen) that make that interaction even better.

What is a Chef Repository and how it works?

A repository is a storehouse in Chef that is used to hold cookbooks, environments, roles, data bags, etc. The Chef repository is typically synchronized with Git, a version control system, so that all changes to it are versioned and tracked.

Why are SSL certificates used in Chef?

How can you be sure that the right data is exchanged between the Chef server and the chef-client? To make sure, you need to establish a secure SSL connection between them, and SSL certificates are used for exactly that.

Have you heard of the Signed header? Explain the concept in brief?

To validate the interaction between the node and the Chef server, signed header authentication is used.

How will you define the run-list in Chef?

With the help of run-list in chef, you can specify which Recipes needs to run and what should be the order of execution for Recipes.

  • A run-list makes sure that recipes are executed in the same order as defined by you. If some recipe is added twice by mistake, the run-list will still run it only once.
  • A run-list is always specific to the node on which it runs; it is stored as part of the node object on the Chef server.
  • It is maintained using knife and then uploaded from the workstation to the Chef server, or maintained using the Chef management console. (A sample knife command follows this list.)
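For example, from the workstation (the node and recipe names here are hypothetical):
knife node run_list add web01 'recipe[apache], recipe[monitoring]'
knife node show web01 -r     # display the node's run-list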
Why are starter kits needed in the Chef?

Starter kits are used to create the required configuration files in Chef. A starter kit provides the configuration files needed for easy interaction with the server, and it is easy to download the starter kit and place it in the desired location on the workstation where you want to use it.

How do you update a cookbook in Chef? Give an answer based on your experience.

It is easy to update a cookbook in Chef, and you can use any of the three methods given below based on your convenience (a sample of the first option follows this list) –

  • Run knife ssh from the workstation.
  • SSH into your server directly and run the chef-client there.
  • Run the chef-client as a service or daemon so that it checks in at a regular interval, say every 15 or 20 minutes.
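For example, the first option might look like this from the workstation (the cookbook name and node search query are hypothetical):
knife cookbook upload apache                 # push the updated cookbook to the Chef server
knife ssh 'name:web*' 'sudo chef-client'     # trigger a run on all matching nodes over SSH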
How can you bootstrap in Chef and tell me the required information needed for the same purpose?

To bootstrap in Chef, you need the following information (a sample bootstrap command follows this list) –

  • The public IP address or the hostname of your node.
  • Credentials, such as a username and password, to log in to that node.
  • Alternatively, you may choose key-based authentication instead of login credentials.
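A typical bootstrap from the workstation might look like this (the IP, credentials, node name and run-list are hypothetical, and the exact flags vary by knife version):
knife bootstrap 203.0.113.10 -x ubuntu -P 'secret' --sudo -N web01 -r 'recipe[apache]'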
Explain your understanding of Test Kitchen in Chef?

With this answer, the interviewer can get a clear idea of your understanding of Test Kitchen in Chef.

Test Kitchen lets you test cookbooks before they are uploaded to the server and speeds up the development lifecycle. It helps you create a variety of virtual machines, locally or in the cloud, to run your cookbooks against.


What is a Node in Chef?

A Node represents a server and is typically a virtual machine, container instance, or physical server – basically any compute resource in your infrastructure that is managed by Chef.


What information do you need in order to bootstrap in Chef?

Just mention the information you need in order to bootstrap:

Your node’s host name or public IP address.

A user name and password you can log on to your node with.

Alternatively, you can use key-based authentication instead of providing a user name and password.

What is the command you use to upload a cookbook to the Chef server?

You can directly mention the command used to upload a cookbook to the Chef server: knife cookbook upload.

What is run-list in Chef?

run-list lets you specify which Recipes to run, and the order in which to run them. The run-list is important when you have multiple Cookbooks, and the order in which they run matters.

Depending on the discussion if you think more explanation is required just mention the below points

A run-list is:

An ordered list of roles and/or recipes that are run in the exact order defined in the run-list; if a recipe appears more than once in the run-list, the chef-client will not run it twice.

Always specific to the node on which it runs; nodes may have a run-list that is identical to the run-list used by other nodes.

Stored as part of the node object on the Chef server.

Maintained using knife, and then uploaded from the workstation to the Chef server, or is maintained using the Chef management console.

How do you apply an updated Cookbook to your node in Chef?

There are three ways to apply an updated Cookbook to a node you can mention all or any one, I will suggest you to mention all three:

-Run knife ssh from your workstation.

-SSH directly into your server and run chef-client.

-You can also run chef-client as a daemon, or service, to check in with the Chef server on a regular interval, say every 15 or 30 minutes.

Write a service Resource that stops and then disables the httpd service from starting when the system boots in Chef?

Use the below Resource to stop and disable the httpd service from starting when system boots.

service 'httpd' do

action [:stop, :disable]

end

Are these two Chef recipes the same?

package 'httpd'

service 'httpd' do

action [:enable, :start]

end


Chef Interview Questions And Answers

1) What is chef in devops?

A) Chef is a configuration management tool for dealing with machine setup on physical servers, virtual machines and in the cloud.
Many companies use Chef software to control and manage their infrastructure including Facebook, Etsy, Cheezburger, and Indiegogo.

2) What is chef in automation?

A) Chef is a powerful automation platform that transforms infrastructure into code. The Chef server acts as a hub for configuration data.
The Chef server stores cookbooks, the policies that are applied to nodes, and metadata that describes each registered node that is being managed by the chef-client.

3) What is chef DK?

A) The Chef DK workstation is the location where users interact with Chef. On the workstation users author and test cookbooks using tools such as Test Kitchen and interact with the Chef server using the knife and chef command line tools.

4) What are chef client nodes?

A) Chef client nodes are the machines that are managed by Chef. The Chef client is installed on each node and is used to configure the node to its desired state.

5) What is a chef server?

A) The Chef server acts as a hub for configuration data. The Chef server stores cookbooks, the policies that are applied to nodes, and metadata that describes each registered node that is being managed by Chef. Nodes use the Chef client to ask the Chef server for configuration details, such as recipes, templates, and file distributions.

6) What are work stations in chef?

A) A workstation is a computer running the Chef Development Kit (ChefDK) that is used to author cookbooks, interact with the Chef server, and interact with nodes.

The workstation is the location from which most users do most of their work, including:

Developing and testing cookbooks and recipes
Testing Chef code
Keeping the chef-repo synchronized with version source control
Configuring organizational policy, including defining roles and environments, and ensuring that critical data is stored in data bags
Interacting with nodes, as (or when) required, such as performing a bootstrap operation

7) What are Cookbooks in chef?

A) A cookbook is the fundamental unit of configuration and policy distribution. A cookbook defines a scenario and contains everything that is required to support that scenario:

Recipes that specify the resources to use and the order in which they are to be applied
Attribute values
File distributions
Templates
Extensions to Chef, such as custom resources and libraries

8) What is chef repo?

A) The chef-repo is a directory on your workstation that stores:

Cookbooks (including recipes, attributes, custom resources, libraries, and templates)
Roles
Data bags
Environments
The chef-repo directory should be synchronized with a version control system, such as git. All of the data in the chef-repo should be treated like source code.

9) What is chef-client Run?

A) A “chef-client run” is the term used to describe a series of steps that are taken by the chef-client when it is configuring a node.

10) What is chef validator?

A) chef-validator – Every request made by the chef-client to the Chef server must be an authenticated request using the Chef server API and a private key. When the chef-client makes a request to the Chef server, it authenticates each request using a private key located in /etc/chef/client.pem. During the very first run, before client.pem exists, the chef-client uses the chef-validator key to register with the Chef server and obtain its own client key.


11) Why do we use SSL Certificates in chef?

A) An SSL certificate is used between the chef-client and the Chef server to ensure that each node has access to the right data.

12) What are Signed Headers in chef?

A) Signed header authentication is used to validate communications between the Chef server and any node that is being managed by the Chef server.

13) What is SSL_CERT_FILE in chef?

A) Use the SSL_CERT_FILE environment variable to specify the location for the SSL certificate authority (CA) bundle that is used by the chef-client.

14) What are Knife Subcommands in chef?

A) The chef-client includes two knife commands for managing SSL certificates:

Use knife ssl check to troubleshoot SSL certificate issues
Use knife ssl fetch to pull down a certificate from the Chef server to the /.chef/trusted_certs directory on the workstation.

15) What is knife ssl check command in chef?

A) Run the knife ssl check subcommand to verify the state of the SSL certificate, and then use the response to help troubleshoot any issues that may be present.

16) What is knife ssl fetch command in chef?

A) Run the knife ssl fetch to download the self-signed certificate from the Chef server to the /.chef/trusted_certs directory on a workstation.

17) What are Data Bags?

A) A data bag is a global variable that is stored as JSON data and is accessible from a Chef server. A data bag is indexed for searching and can be loaded by a recipe or accessed during a search.
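Typical knife commands for working with data bags look like this (the bag and item names are hypothetical):
knife data bag create credentials                 # create an empty data bag on the Chef server
knife data bag from file credentials mysql.json   # upload an item from a local JSON file
knife data bag show credentials mysql             # view the stored item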

18) What are recipes in chef?

A) A recipe is the most fundamental configuration element within the organization. A recipe:

Is authored using Ruby, which is a programming language designed to read and behave in a predictable manner
Is mostly a collection of resources, defined using patterns (resource names, attribute-value pairs, and actions); helper code is added around this using Ruby, when needed
Must define everything that is required to configure part of a system
Must be stored in a cookbook
May be included in another recipe
May use the results of a search query and read the contents of a data bag (including an encrypted data bag)
May have a dependency on one (or more) recipes
May tag a node to facilitate the creation of arbitrary groupings
Must be added to a run-list before it can be used by the chef-client
Is always executed in the same order as listed in a run-list
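
As a minimal sketch (the package and service names are illustrative), a recipe that installs and starts a web server could look like:

package 'httpd' do
  action :install
end

service 'httpd' do
  action [:enable, :start]
end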

19) What is chef resources file?

A) A file resource is used to manage files directly on a node.

A file resource block manages files that exist on nodes. For example, to write the home page for an Apache website:

file '/var/www/customers/public_html/index.php' do
  content '<html>This is a placeholder for the home page.</html>'
  mode '0755'
  owner 'web_admin'
  group 'web_admin'
end

20) What is apt_package resource in chef?

Answer) Use the apt_package resource to manage packages on Debian and Ubuntu platforms.

apt_package Syntax:

An apt_package resource block manages a package on a node, typically by installing it. The simplest use of the apt_package resource is:

apt_package 'package_name'

Chef Interview Questions Magazine

21) What is apt_preference resource in chef?

A) The apt_preference resource allows for the creation of APT preference files. Preference files are used to control which package versions and sources are prioritized during installation. (New in Chef Client 13.3.)

Syntax:

apt_preference 'package_name' do
  action :add
end
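
A fuller sketch that pins a specific package version (the package name and version are illustrative):

apt_preference 'libmysqlclient16' do
  pin          'version 5.1.49-3'
  pin_priority '700'
end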

22) What is apt_repository resource?

A) Use the apt_repository resource to specify additional APT repositories. Adding a new repository will update the APT package cache immediately.

apt_repository 'nginx' do
  uri        'http://nginx.org/packages/ubuntu/'
  components ['nginx']
end

23) What is apt_update resource in chef?

A) Use the apt_update resource to manage APT repository updates on Debian and Ubuntu platforms.
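
For example, a hedged sketch that refreshes the cache at most once a day (the 86,400-second frequency is just an illustration):

apt_update 'periodic apt update' do
  frequency 86_400   # seconds between cache refreshes
  action :periodic
end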

24) what is bff_package resource in chef?

A) Use the bff_package resource to manage packages for the AIX platform using the installp utility. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources.

25) What is cab_package resource in chef?

A) Use the cab_package resource to install or remove Microsoft Windows cabinet (.cab) packages.

26) What is chef_gem?

A) Use the chef_gem resource to install a gem only for the instance of Ruby that is dedicated to the chef-client. When a gem is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources.
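
A minimal sketch (the gem name is illustrative):

chef_gem 'aws-sdk' do
  compile_time false  # install during the converge phase rather than at compile time
  action :install
end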

27) What is chef_acl resource in chef?

A) Use the chef_acl resource to interact with access control lists (ACLs) that exist on the Chef server.

Syntax: The syntax for using the chef_acl resource in a recipe is as follows:

chef_acl 'name' do
  attribute 'value' # see properties section below
  action :action    # see actions section below
end

28) What is chef_client resource?

A) A chef-client is an agent that runs locally on every node that is under management by Chef. When a chef-client is run, it will perform all of the steps that are required to bring the node into the expected state, including:

Registering and authenticating the node with the Chef server
Building the node object
Synchronizing cookbooks
Compiling the resource collection by loading each of the required cookbooks, including recipes, attributes, and all other dependencies
Taking the appropriate and required actions to configure the node
Looking for exceptions and notifications, handling each as required

29) What is chef_container resource?

A) The chef_container resource is used to interact with container objects that exist on the Chef server.

30) What is chef_data_bag_item?

A) A data bag is a container of related data bag items, where each individual data bag item is a JSON file. knife can load a data bag item by specifying the name of the data bag to which the item belongs and then the filename of the data bag item.

Use the chef_data_bag_item resource to manage data bag items.

Syntax – The syntax for using the chef_data_bag_item resource in a recipe is as follows:

chef_data_bag_item 'name' do
  attribute 'value'
  action :action
end

Chef Tool Interview Questions

31) What is chef_data_bag resource?

A) A data bag is a global variable that is stored as JSON data and is accessible from a Chef server. A data bag is indexed for searching and can be loaded by a recipe or accessed during a search.

Use the chef_data_bag resource to manage data bags.

32) What is chef_environment resource?

A) Use the chef_environment resource to manage environments. An environment is a way to map an organization’s real-life workflow to what can be configured and managed when using Chef server. Every organization begins with a single environment called the _default environment, which cannot be modified (or deleted). Additional environments can be created to reflect each organization’s patterns and workflow.

33) What is chef_group resource?

A) The chef_group resource is used to interact with group objects that exist on the Chef server.

34) What is chef_handler resource?

A) The chef_handler resource is used to enable handlers during a chef-client run. The resource allows arguments to be passed to the chef-client, which then applies the conditions defined by the custom handler to the node attribute data collected during the chef-client run, and then processes the handler based on that data.

35) What is the chef_mirror resource?

A) Use the chef_mirror resource to mirror objects in the chef-repo to a specified location.

36) What is chef_node resource?

A) A node is any machine—physical, virtual, cloud, network device, etc.—that is under management by Chef. The chef_node resource is used to manage nodes.

37) What is chef_organization resource?

A) Use the chef_organization resource to interact with organization objects that exist on the Chef server.

38) What is chef_role resource?

A) Use the chef_role resource to manage roles. A role is a way to define certain patterns and processes that exist across nodes in an organization as belonging to a single job function. Each role consists of zero (or more) attributes and a run-list. Each node can have zero (or more) roles assigned to it.

39) What is chef_user resource?

A) The chef_user resource is used to manage users.

40) What is chocolatey_package resource?

A) A chocolatey_package resource manages packages using Chocolatey on the Microsoft Windows platform. The simplest use of the chocolatey_package resource is:

chocolatey_package 'package_name'
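
A slightly fuller sketch that pins a version (the package name and version are illustrative):

chocolatey_package 'git' do
  version '2.20.1'
  action :install
end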

Chef Tool Interview Questions And Answers

41) What is cookbook_file resource?

A) Use the cookbook_file resource to transfer files from a sub-directory of COOKBOOK_NAME/files/ to a specified path on a host that is running the chef-client.

Syntax – A cookbook_file resource block manages files by using files that exist within a cookbook’s /files directory. For example, to write the home page for an Apache website:

cookbook_file '/var/www/customers/public_html/index.php' do
  source 'index.php'
  owner 'web_admin'
  group 'web_admin'
  mode '0755'
  action :create
end

42) What is cron resource?

A) The cron resource is used to manage cron entries for time-based job scheduling.
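
A short sketch of a cron entry (the schedule and command are illustrative):

cron 'nightly_backup' do
  minute  '0'
  hour    '2'
  user    'root'
  command '/usr/local/bin/backup.sh'
end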

43) What is dnf_package resource?

A) Use the dnf_package resource to install, upgrade, and remove packages with DNF for Fedora platforms. The dnf_package resource is able to resolve provides data for packages much like DNF can do when it is run from the command line. This allows a variety of options for installing packages, like minimum versions, virtual provides, and library names.

44) What is dpkg_package resource?

A) Use the dpkg_package resource to manage packages for the dpkg platform. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources.

45) What is metadata.rb in chef?

A) Every cookbook requires a small amount of metadata. A file named metadata.rb is located at the top of every cookbook directory structure. The contents of the metadata.rb file provide hints to the Chef server to help ensure that cookbooks are deployed to each node correctly.

46) What information stored in metadata.rb file?

A) A metadata.rb file is:

Located at the top level of a cookbook’s directory structure.
Compiled whenever a cookbook is uploaded to the Chef server or when the knife cookbook metadata subcommand is run, and then stored as JSON data.
Created automatically by knife whenever the knife cookbook create subcommand is run.
Edited using a text editor, and then re-uploaded to the Chef server as part of a cookbook upload.
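
A minimal metadata.rb sketch (the names, email address, and version constraints are illustrative):

name             'my_cookbook'
maintainer       'Ops Team'
maintainer_email 'ops@example.com'
version          '0.1.0'
depends          'apt', '>= 2.0'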

47) What is Berkshelf in chef?

A) Berkshelf is a dependency manager for Chef cookbooks. With it, you can easily depend on community cookbooks and have them safely included in your workflow.

48) What is Berksfile in chef?

A) A Berksfile describes the set of sources and dependencies needed to use a cookbook. It is used in conjunction with the berks command.

49) What is Cookbook Keyword in chef?

A) The cookbook keyword allows the user to define where a cookbook is installed from, or to set additional version constraints. It can also be used to install additional cookbooks, for example to use during testing.
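
A minimal Berksfile sketch showing the cookbook keyword (the cookbook names, version constraint, and git URL are illustrative):

source 'https://supermarket.chef.io'

metadata

cookbook 'nginx', '~> 2.7'
cookbook 'internal_base', git: 'https://github.com/example/internal_base.git'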

50) What is kitchen (executable) in chef?

A) kitchen is the command-line tool for Kitchen, an integration testing tool used by the chef-client. Kitchen runs tests against any combination of platforms using any combination of test suites.
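
A minimal .kitchen.yml sketch (the driver, platform, and cookbook names are assumptions):

driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-18.04

suites:
  - name: default
    run_list:
      - recipe[my_cookbook::default]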

Chef Interview Questions And Answers PDF

51) What is kitchen converge in chef?

A) Use the converge subcommand to converge one (or more) instances. Instances are based on the list of platforms in the .kitchen.yml file. This process will install the chef-client on an instance using the omnibus installer, upload cookbook files and minimal configuration to the instance, and then start a chef-client run using the run-list and attributes specified in the .kitchen.yml file.

Syntax – $ kitchen converge PLATFORMS (options)

52) What is kitchen create in chef?

A) Use the create subcommand to create one (or more) instances. Instances are based on the list of platforms and suites in the .kitchen.yml file.

Syntax – This subcommand has the following syntax:

$ kitchen create PLATFORMS (options)

53) What is kitchen destroy in chef?

A) Use the destroy subcommand to delete one (or more) instances. Instances are based on the list of platforms and suites in the .kitchen.yml file.

Syntax – This subcommand has the following syntax:

$ kitchen destroy PLATFORMS (options)

54) What is kitchen diagnose in chef?

A) Use the diagnose subcommand to show a computed diagnostic configuration for one (or more) instances. This subcommand will make all implicit configuration settings explicit because it echoes back all of the configuration data as YAML.

Syntax – This subcommand has the following syntax:

$ kitchen diagnose PLATFORMS (options)

55) What is kitchen driver create in chef?

A) Use the driver create subcommand to create a new Kitchen driver in the RubyGems project.

Syntax – This subcommand has the following syntax:

$ kitchen driver create NAME

56) What is kitchen driver discover?

A) Use the driver discover subcommand to discover Kitchen drivers that have been published to RubyGems. This subcommand returns all RubyGems whose names match kitchen-*.

Syntax – This subcommand has the following syntax:

$ kitchen driver discover

57) What is kitchen exec in chef?

A) Use the exec subcommand to execute a command on a remote instance.

Syntax – This subcommand has the following syntax:

$ kitchen exec PLATFORMS (options)

58) What is kitchen init command in chef?

A) Use the init subcommand to create an initial Kitchen environment, including:

Creating a .kitchen.yml file
Appending Kitchen to the RubyGems file, .gitignore, and .thor
Creating the test/integration/default directory

Syntax – This subcommand has the following syntax:

$ kitchen init

59) What is kitchen list in chef?

A) Use the list subcommand to view the list of instances. Instances are based on the list of platforms in the .kitchen.yml file. Kitchen will auto-name instances by combining a suite name with a platform name. For example, if a suite is named default and a platform is named ubuntu-10.04, then the instance would be default-ubuntu-10.04. This ensures that Kitchen instances have safe DNS and hostname records.

Syntax – This subcommand has the following syntax:

$ kitchen list PLATFORMS (options)

60) What is kitchen login command in chef?

A) Use the login subcommand to log in to a single instance. Instances are based on the list of platforms and suites in the .kitchen.yml file. After logging in successfully, the instance can be interacted with just like any other virtual machine, including adding or removing packages, starting or stopping services, and so on. It’s a sandbox. Make any change necessary to help improve the coverage for cookbook testing.

Syntax – This subcommand has the following syntax:

$ kitchen login PLATFORM (options)

Chef Interview Questions And Answers For Experienced

61) What is kitchen setup command in chef?

A) Use the setup subcommand to set up one (or more) instances. Instances are based on the list of platforms in the .kitchen.yml file.

Syntax – This subcommand has the following syntax:

$ kitchen setup PLATFORMS (options)

62) What is kitchen test command in chef?

A) Use the test subcommand to test one (or more) verified instances. Instances are based on the list of platforms and suites in the .kitchen.yml file. This subcommand will create a new instance (cleaning up a previous instance, if necessary), converge that instance, set up the test harness, verify the instance using that test harness, and then destroy the instance.

In general, use the test subcommand to verify the end-to-end quality of a cookbook. Use the converge and verify subcommands during the normal day-to-day development of a cookbook.

Syntax – This subcommand has the following syntax:

$ kitchen test PLATFORMS (options)

63) What is kitchen verify command in chef?

A) Use the verify subcommand to verify one (or more) instances. Instances are based on the list of platforms and suites in the .kitchen.yml file.

In general, use the test subcommand to verify the end-to-end quality of a cookbook. Use the converge and verify subcommands during the normal day-to-day development of a cookbook.

Syntax – This subcommand has the following syntax:

$ kitchen verify PLATFORMS (options)

64) What is kitchen version command in chef?

A) Use the version subcommand to print the version of Kitchen.

Syntax – This subcommand has the following syntax:

$ kitchen version

65) What are handlers in chef?

A) Handlers are used to identify situations that arise during a chef-client run, and then tell the chef-client how to handle these situations when they occur.

66) How many types of handlers are there in chef? What are they?

A) In Chef there are three types of handlers. They are:
Exception Handler
Report Handler
Start Handler

67) What is exception handler in chef?

A) An exception handler is used to identify situations that have caused a chef-client run to fail. An exception handler can be loaded at the start of a chef-client run by adding a recipe that contains the chef_handler resource to a node’s run-list. An exception handler runs when the failed? property for the run_status object returns true.

68) What is a report handler in chef?

A) A report handler is used when a chef-client run succeeds and reports back on certain details about that chef-client run. A report handler can be loaded at the start of a chef-client run by adding a recipe that contains the chef_handler resource to a node’s run-list. A report handler runs when the success? property for the run_status object returns true.

69) What is start handler in chef?

A) A start handler is used to run events at the beginning of the chef-client run. A start handler can be loaded at the start of a chef-client run by adding the start handler to the start_handlers setting in the client.rb file or by installing the gem that contains the start handler by using the chef_gem resource in a recipe in the chef-client cookbook.

70) What is Handler DSL in chef?

A) Use the Handler DSL to attach a callback to an event. If the event occurs during the chef-client run, the associated callback is executed. For example:

Sending email if a chef-client run fails
Sending a notification to a chat application if an audit run fails
Aggregating statistics about resources updated during a chef-client run to StatsD
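
A hedged sketch of the Handler DSL in a recipe or client.rb (the notification itself is left as a comment placeholder):

Chef.event_handler do
  on :run_failed do
    # send an email or chat notification here
  end
end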

Chef Devops Interview Questions

71) What is Knife and what is the purpose of using Knife in chef?

A) Knife is a command-line tool that provides an interface between a local chef-repo and the Chef server. knife helps users to manage:

Nodes
Cookbooks and recipes
Roles, Environments, and Data Bags
Resources within various cloud environments
The installation of the chef-client onto nodes
Searching of indexed data on the Chef server
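
A few everyday knife commands, in the same style as the kitchen examples elsewhere in this post (the cookbook and data bag names are illustrative):

$ knife node list
$ knife cookbook upload my_cookbook
$ knife data bag show users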

72) What are the different Knife plugins for cloud hosting platforms?

A) There are different knife plugins available for cloud hosting platforms:
knife azure, knife bluebox, knife ec2, knife eucalyptus, knife google, knife linode, knife openstack, and knife rackspace

73) What is Ohai in chef?

A) Ohai is a tool that is used to collect system configuration data, which is provided to the chef-client for use within cookbooks. Ohai is run by the chef-client at the beginning of every Chef run to determine system state. Ohai includes many built-in plugins to detect common configuration details as well as a plugin model for writing custom plugins.
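
Inside a recipe, Ohai data is exposed through node attributes; for example, a hedged one-liner that logs the detected platform:

log "Running on #{node['platform']} #{node['platform_version']}"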

74) Why do we use chef-jenkins plugin in chef?

A) chef-jenkins adds the ability to use Jenkins to drive continuous deployment and synchronization of environments from a git repository.

75) Why do we use jclouds-chef plugin in chef?

A) The jclouds-chef plugin adds Java and Clojure components to the Chef server REST API.

76) Why do we use chef-hatch-repo in chef?

A) chef-hatch-repo plugin adds a knife plugin and a Vagrant provisioner that can launch a self-managed Chef server in a virtual machine or Amazon EC2.

Real-Time Chef Interview Questions

77) Why do we use chef-trac-hacks in chef?

A) chef-trac-hacks adds the ability to fill a coordination gap between Amazon Web Services (AWS) and the chef-client.

78) What is chef-deploy plugin in chef and what is the purpose of using it?

A) chef-deploy adds a gem that contains resources and providers for deploying Ruby web applications from recipes.

79) What is kitchenplan in chef?

A) Kitchenplan is a utility for automating the installation and configuration of a workstation on macOS.

80) What is stove in chef?

A) Stove is a utility for releasing and managing cookbooks.

81) What are the benefits of Devops?

A) There are many benefits of using DevOps; be ready to explain them in terms of your own DevOps experience.

Technical benefits:

Continuous software delivery
Less complex problems to fix
Faster resolution of problems
Business benefits:

Faster delivery of features
More stable operating environments
More time available to add value (rather than fix/maintain)

82) What is Vagrant in chef?

A) Vagrant helps Test Kitchen communicate with VirtualBox and configures things like available memory and network settings.


Powerful Excel Tricks for Analyzing Data

Excel is one of the most important and powerful tools in search marketing. In PPC management, Excel is a must-have tool in addition to the AdWords (adCenter) Editors. Building keyword lists, writing ad copy, analyzing data and preparing reports – all these tasks are mostly done using Excel. Most of our optimization time is spent cranking Excel spreadsheets; we log in to the AdWords or adCenter web UI only to get status updates or make some minor changes. Don’t get me wrong: the same tasks can be performed in the web UI, but Excel allows us to streamline the process and make better use of our time. So, here are some tips to help you work with your PPC data more efficiently.

Both AdWords and adCenter allow you to download different kinds of reports in spreadsheet format. But in order to get them, you have to log in to the UI and go through several steps to download your report. I find it more efficient to simply copy or export data from the AdWords Editor or adCenter Desktop tools. It is quick, and I can paste changes back into the desktop tool and upload them to the account right away.

Working in Excel involves a lot of copy-pasting, cleaning, reformatting and calculating. You can do it all manually, or you can use Excel’s powerful functionality to get everything done more efficiently.


Microsoft Excel is one of the most widely used tools in any industry. While some enjoy playing with pivot tables and histograms, others limit themselves to simple pie charts and conditional formatting. Some may create artwork out of the dull monochrome Excel, while others may be satisfied with its data analysis. In this discussion, we will take a deeper look at Microsoft Excel and its utility. We will focus on how to analyze data in Excel, and the various tricks and techniques for it. The discussion will also explore the various ways to analyze data in Excel.

We will discuss the different features of Excel (many of which are unfamiliar to most users), functions, and best practices.

Our discussion will include, but not be limited to:

  • Best Way to Analyze Data in Excel
  • How to Analyze Sales Data in Excel
  • Analyzing Data Sets with Excel
  • Data Segregation with Excel
  • The Importance of Data Reporting

How to Analyze Sales Data in Excel: Make Pivot Table your Best Friend

A pivot table helps us summarize huge amounts of data. It is one of the best ways to analyze data in Excel and is mostly used to understand and recognize patterns in a data set. Recognizing patterns in a small dataset is pretty simple. But the enormity of larger datasets often calls for additional effort to find the patterns. In such cases, a pivot table can be a huge advantage, as it takes only a few minutes to summarize groups of data.

Say, for example, you have a dataset consisting of regions and number of sales. You may want to know the number of sales by region, which can be used to determine why a region is lagging and how to improve in that area. Using a pivot table, you can create a report in Excel within a few minutes and save it for future analysis.

A Pivot Table allows you to summarize data as averages, sums, or counts in Excel from data that is stored in another Spreadsheet, or table. It is great for quickly building reports because you can sort and visualize the data quickly.

For example, you may have put together a spreadsheet, which you can copy, and paste into Excel, or use in Google Docs if you would prefer (just click File > Make a Copy). The spreadsheet contains data with a mock company’s customer purchase information. Since companies purchase at different dates, a pivot table will help us to consolidate this data to allow us to see total buys per company, as well as to compare purchases across companies, for quick analysis.

The pivot table allows you to take a table with a lot of data in it and rearrange it so that you look at only what matters to you.

  1. a) Whether you are using a Mac or a PC, you can select the whole dataset that you want to look at and select: “Data” -> “Pivot Table”. When you hit that, a new tab should be opened with a table.


  1. b) Once you have your table in front of you, you can drag and drop the Column Labels, Row Labels, and Report Filter
  • Column Labels go across the top row of your table (for example Date, Month, Company Name)
  • Row Labels go down the left-hand side of your table [for example Date, Month, Company Name (same as with column labels, it depends on how you would prefer to look at the data, vertically or horizontally)]
  • The Values section is where you put the data you would like calculated (for example Purchases, Revenue)
  • Report Filter helps you refine your results. Add anything you would like to Filter by (for example you want to look at Lead Referral Sources, but exclude Google and Direct)

Pivot tables are a great way to manage the data from your reports. You can copy and paste the data into your own Excel file, or create a copy in Google Apps (File > Make a Copy).

How to Analyze Data in Excel: Analyzing Data Sets with Excel

You can instantly create different types of charts, including line and column charts, or add miniature graphs. You can also apply a table style, create PivotTables, quickly insert totals, and apply conditional formatting. Analyzing large data sets with Excel makes work easier if you follow a few simple rules:

  • Select the cells that contain the data you want to analyze.
  • Click the Quick Analysis button that appears at the bottom right of your selected data (or press Ctrl + Q).
  • In the Quick Analysis gallery, select a tab you want. For example, choose Charts to see your data in a chart.
  • Pick an option, or just point to each one to see a preview.
  • You might notice that the options you can choose are not always the same. That is often because the options change based on the type of data you have selected in your workbook.

You might want to know which analysis option is suitable for you. Here we offer you a basic overview of some of the best options to choose from.

  • Formatting: Formatting lets you highlight parts of your data by adding things like data bars and colors. This lets you quickly see high and low values, among other things.
  • Charts: Excel recommends different charts based on the type of data you have selected. If you do not see the chart you want, click More Charts.
  • Totals: Totals let you calculate the numbers in columns and rows. For example, Running Total inserts a total that grows as you add items to your data. Click the little black arrows on the right and left to see additional options.
  • Tables: Tables make it easy to filter and sort your data. If you do not see the table style you want, click More.
  • Sparklines: Sparklines are like tiny graphs that you can show alongside your data. They provide a quick way to see trends.

Ways to Analyze Data in Excel: Tips and Tricks

It is fun to analyze data in MS Excel if you play it right. Here, we offer some quick hacks for success.

  • How to Analyze Data in Excel: Data Cleaning

Data Cleaning, one of the very basic excel functions, becomes simpler with a few tips and tricks. You may learn how to use a native Excel feature and how to accomplish the same goal with Power Query. Power Query is a built-in feature in Excel 2016 and an Add-in for Excel 2010/2013. It helps you to extract, transform, and load your data with just a few clicks.

    1. Change the format of numbers from text to numeric

Sometimes when you import data from an external source other than Excel, numbers are imported as text. Excel will alert you by showing a small green triangle in the top-left corner of the cell.

Depending on the number of values in the range, you can quickly convert the values to numbers by clicking on ‘Convert to a number’ within the tooltip options.

However, if you have more than 1000 values, you will have to wait a couple of seconds while Excel finishes the conversion.

Another way to convert the values to number format is to use Text to Columns, with the following steps:

  • Select the range with the values to be converted.
  • Go to Data > Text to Columns.
  • Select Delimited and click Next.
  • Uncheck all the delimiter checkboxes and click Next.

    2. Select General and click on Finish

When you have lots of numbers to convert this tip will be much faster than waiting for all the numbers to be converted. In Power Query, you just have to right click on the column header of the column you want to convert.

  • Then go to Change Type.
  • Then select the type of number you want (such as Decimal or Whole Number)
How to Analyze Data in Excel: Data Analysis

Data Analysis is simpler and faster with Excel. Here, we offer some tips for work:

  • Create auto expandable ranges with Excel tables: One of the most underused features of MS Excel is Excel Tables. Excel Tables have wonderful properties that allow you to work more efficiently. Some of these features include:
  • Formula Auto Fill: Once you enter a formula in a table column, it will automatically be copied to the rest of that column.
  • Auto Expansion: New items typed below or at the right of the table become part of the table.
  • Visible headers: Regardless of your position within the table, your headers will always be visible.
  • Automatic Total Row: To calculate the total of a column, you just have to select the desired formula in the Total Row.
  • Use Excel Tables as part of a formula: Like in dropdown lists, if you have a formula that depends on a Table, when you add new items to the Table, the reference in the formula will be automatically updated.
  • Use Excel Tables as a source for a chart: Charts will be updated automatically as well if you use an Excel Table as a source. As you can see, Excel Tables allow you to create data sources that do not have to be updated when new data is included.
How to Analyze Data in Excel: Data Visualization

Quickly visualize trends with sparklines: Sparklines are a visualization feature of MS Excel that allows you to quickly visualize the overall trend of a set of values. Sparklines are mini-graphs located inside of cells. You may want to visualize the overall trend of monthly sales by a group of salesmen.

To create the sparklines, follow these steps below:

  1. Select the range that contains the data that you will plot (This step is recommended but not required, you can select the data range later).
  2. Go to Insert > Sparklines > Select the type of sparkline you want (Line, Column, or Win/Loss). For this specific example, I will choose Lines.
  3. Click on the range selection button to browse for the location of the sparklines, press Enter and click OK. Make sure you select a location that is proportional to the data source. For example, if the data source range contains 6 rows then the location of the sparklines must contain 6 rows.

To format the sparkline you may try the following:

To change the color of markers:

  1. Click on any cell within the sparkline to show the Sparkline Tools menu.
  2. In the Sparkline tools menu, go to Marker Color and change the color for the specific markers you want.

For example, high points in green, low points in red, and the remaining markers in blue.

To change the width of the lines:

  1. Click on any cell within the sparkline to show the Sparkline Tools menu.
  2. In the Sparkline tools contextual menu, go to Sparkline Color > Weight and change the width of the line as you desire.

Save Time with Quick Analysis: One of the major improvements introduced back in Excel 2013 was the Quick Analysis feature. This feature allows you to quickly create graphs, sparklines, PivotTables, PivotCharts, and summary functions by just clicking on a button.

When you select data in Excel 2013 or later, you will see the Quick Analysis button in the bottom-right corner of the selected range. If you click on the Quick Analysis button you will see the following options:

  • Formatting
  • Charts
  • Totals
  • Tables
  • Sparklines

When you click on any of the options, Excel will show a preview of the possible results you could obtain given the data you selected.

  • If you click on the Quick Analysis button and go to Charts, you can quickly create a chart just by clicking a button.
  • If you go to Totals, you can quickly insert a row with the average for each column:
  • If you click on Sparklines, you can quickly insert Sparklines:
  • As you can see, the Quick Analysis feature really allows you to quickly perform different visualizations and analysis with almost no effort.





Commonly used functions

1. VLOOKUP(): It helps to search for a value in a table and returns a corresponding value. Let’s look at the table below (Policy and Customer). In the Policy table, we want to map the city name from the Customer table based on the common key “Customer id”. Here, the function VLOOKUP() helps to perform this task.

Syntax: =VLOOKUP(key to look up, source table, column of the source table to return, match type – 0/FALSE for an exact match or 1/TRUE for an approximate match)

For the above problem, we can write the formula in cell “F4” as =VLOOKUP(B4, $H$4:$L$15, 5, 0). This will return the city name for Customer id 1; after that, copy the formula down for all Customer ids.

Tip: Do not forget to lock the range of the second table using the “$” sign – forgetting it is a common error when copying this formula down. The “$” sign makes the reference absolute; without it, the reference shifts as the formula is copied (relative referencing).

2. CONCATENATE(): It is very useful to combine text from two or more cells into one cell. For example: we want to create a URL based on input of host name and request path.

Syntax: =CONCATENATE(text1, text2, ..., textN)

The above problem can be solved using the formula =CONCATENATE(B3,C3), and then copying it down.

Tip: I prefer using the “&” symbol because it is shorter than typing a full CONCATENATE formula and does exactly the same thing. The formula can be written as “=B3&C3”.

3. LEN() – This function tells you the length of the text in a cell, i.e. the number of characters including spaces and special characters.

Syntax: =Len(Text)

Example: =Len(B3) = 23

4. LOWER(), UPPER() and PROPER() – These three functions change the text to lower case, upper case, and proper case respectively (proper case capitalizes the first letter of each word).

Syntax: =Upper(Text)/ Lower(Text) / Proper(Text)

In a data analysis project, these are helpful for converting values with different casing to a single case; otherwise they are treated as different classes of the given feature. Look at the below snapshot: column A has five classes (labels) whereas column B has only two, because we have converted the content to lower case.

5. TRIM(): This is a handy function used to clean text that has leading and trailing white space. Often when you get a dump of data from a database the text you’re dealing with is padded with blanks. And if you don’t deal with them, they are also treated as unique entries in a list, which is certainly not helpful.

Syntax: =Trim(Text)

6. IF(): I find it one of the most useful functions in Excel. It lets you use conditional formulas which calculate one way when a certain condition is true, and another way when it is false. For example, you want to mark each sale as “High” or “Low”: if the sale is greater than or equal to $5000 then “High”, else “Low”.

Syntax: =IF(condition, True Statement, False Statement)
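
For the sales example above, assuming the sales figure sits in cell B2, the formula could be:

=IF(B2>=5000, "High", "Low")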

Generating inference from Data

1. Pivot Table: Whenever you are working with company data, you seek answers for questions like “How much revenue is contributed by branches of North region?” or “What was the average number of customers for product A?” and many others.

Excel’s PivotTable helps you to answer these questions effortlessly. A pivot table is a summary table that lets you count, average, sum, and perform other calculations according to the reference feature you have selected; i.e. it converts a data table into an inference table which helps us to make decisions. Look at the below snapshot:
Above, you can see that the table on the left has sales detail against each customer with region and product mapping. In the table to the right, we have summarized the information at the region level, which helps us to generate the inference that the South region has the highest sales.

Methods to create Pivot table:
Step-1: Click somewhere in the list of data. Choose the Insert tab, and click PivotTable. Excel will automatically select the area containing data, including the headings. If it does not select the area correctly, drag over the area to select it manually. Placing the PivotTable on a new sheet is best, so click New Worksheet for the location and then click OK.
Step-2: Now, you can see the PivotTable Field List panel, which contains the fields from your list; all you need to do is arrange them in the boxes at the foot of the panel. Once you have done that, the diagram on the left becomes your PivotTable.
Above, you can see that we have arranged “Region” in rows, “Product id” in columns, and the sum of “Premium” is taken as the value. Now you have a pivot table which shows the region- and product-wise sum of premium. You can also use count, average, min, max and other summary metrics. For more detail on pivot tables, I would suggest you refer to this link.

2. Creating Charts: Building a chart/graph in Excel requires nothing more than selecting the range of data you wish to chart and pressing F11. This will create an Excel chart in the default chart style, but you can change it by selecting a different chart style. If you prefer the chart to be on the same worksheet as the data, instead of pressing F11, press ALT + F1.

Of course, in either case, once you have created the chart, you can customize it to your particular needs to communicate your desired message. To know about different properties of charts, I would recommend referring to this link.

Data Cleaning

1. Remove duplicate values: Excel has an inbuilt feature to remove duplicate values from a table. It removes duplicate values based on the selected columns, i.e. if you have selected two columns then it searches for duplicate values having the same combination of data in both columns.
Above, you can see that A001 and A002 have duplicate values, but if we select both the “ID” and “Name” columns then we have only one duplicate value (A002, 2).
Follow these steps to remove duplicate values: Select the data –> Go to the Data ribbon –> Remove Duplicates.

2. Text to Columns: Let’s say you have data stored in a column as shown in the below snapshot.
Above, you can see that the values are separated by a semicolon “;”. Now, to split these values into different columns, I recommend using the “Text to Columns” feature in Excel. Follow the below steps to convert it into different columns:

  1. Select the range A1:A6
  2. Go to “Data” ribbon –> “Text to Columns”
    Above, we have two options, “Delimited” and “Fixed width”. I have selected Delimited because the values are separated by a delimiter (;). If we wanted to split the data based on width, such as the first four characters into the first column and characters 5 to 10 into the second column, then we would choose Fixed width.
  3. Click on Next –> check the box for “Semicolon”, then click Next and Finish.

Essential keyboard shortcuts

Keyboard shortcuts are the best way to navigate cells or enter formulas more quickly. We’ve listed our favorites below.

  1. Ctrl + [Down|Up Arrow]: Moves to the bottom or top cell of the current column; similarly, Ctrl with the Left|Right Arrow key moves to the cell furthest left or right in the current row
  2. Ctrl + Shift + Down/Up Arrow: Selects all the cells above or below the current cell
  3. Ctrl+ Home: Navigates to cell A1
  4. Ctrl+End: Navigates to the last cell that contains data
  5. Alt+F1: Creates a chart based on selected data set.
  6. Ctrl+Shift+L: Activate auto filter to data table
  7. Alt+Down Arrow: Opens the drop-down menu of the auto filter for the selected header cell
  8. Alt+D+S: To sort the data set
  9. Ctrl+O: Open an existing workbook
  10. Ctrl+N: Create a new workbook
  11. F4: While editing a formula, select the reference and press the F4 key; it cycles the reference between absolute, mixed, and relative.

Note: This isn’t an exhaustive list. Feel free to share your favorite Excel keyboard shortcuts in the comments section below. Literally, I do 80% of my Excel tasks using shortcuts.
