Saturday, 19 December 2015

The Reset Button !!!!



Anyone who has recently used Google Compute Engine to create VM instances will be aware of the Reset button it provides.

Since I wasn't very sure what it did, I just clicked it without much know-how. This reset all the servers to their original state, as if they were freshly built, which is certainly a very bad thing for us.

But we had Puppet, which we had used to create the whole infrastructure in the first place. All the modules we had used and the changes we had made were committed to a GitHub repo, and this certainly was a boon to us; otherwise we would have had to sit all day long redoing those changes on the servers.

In just a couple of minutes the new instances were created using the Compute Engine "create instance group" feature. We installed Foreman and Git on one of the servers and set up the Puppet agents on the clients accordingly. This took around 15 more crucial minutes, and then we cloned our GitHub repo, which contains all the modules and configuration required for the rest of the infrastructure.
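
For reference, the agent-side setup on each client is only a handful of commands, roughly like the sketch below (the master hostname puppet-master.example.com is a placeholder, and package names can differ by OS version):

$ apt-get install -y puppet
# point the agent at the master: in /etc/puppet/puppet.conf, under [main] or [agent], add
#   server = puppet-master.example.com
$ puppet agent --test    # request a certificate from the master and apply the catalog once it is signed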

These are the situations where configuration management tools like Puppet come into the picture and help us get back on track in the shortest possible time.
It was a hectic day, but it definitely taught us several important lessons. Using Puppet to maintain infrastructure is really important nowadays: it is a reliable, efficient and fast way of deploying configuration to servers and making them ready for production workloads.

Thursday, 3 December 2015

Setup Jenkins using Ansible

In this document I’ll walk you through how you can set up Jenkins using Ansible.

Prerequisites
  •  OS - Ubuntu (at least two machines required for this setup)
  •  First machine for the Ansible installation
  •  Second machine where we will install the Jenkins server
  • You should have a basic understanding of the Ansible workflow.
Note: You should have passwordless SSH login enabled to the second machine; see this link:
http://www.linuxproblem.org/art_9.html

Ansible Installation
Before starting to install Jenkins using Ansible, you need to have Ansible installed on your system.

 $ curl https://raw.githubusercontent.com/OpsTree/AnsiblePOC/alok/scripts/Setup/setup_ansible.sh | sudo bash

Setup jenkins using Ansible

Install jenkins ansible roles

Once we have Ansible installed on our system, we can start installing Jenkins. To do so, we will use an already available Ansible role that sets up Jenkins:

$ ansible-galaxy install geerlingguy.jenkins
To know more about the Jenkins role, see https://galaxy.ansible.com/detail#/role/440

The default directory path for Ansible roles is /etc/ansible/roles.
Create the Ansible playbook file
 
Now the next step is to use the installed role to install Jenkins. For this purpose we will create a playbook and a hosts file with the content below.

$ cd ~/MyPlaybook/jenkins
Create a file named hosts and add the content below:
[jenkins_hosts]
192.168.33.15


Next, create a file named site.yml and add the content below:
---
- hosts: jenkins_hosts
  roles:
    - { role: geerlingguy.jenkins }


So the configuration files are done; the next step is to run the ansible-playbook command:
$ ansible-playbook -i hosts site.yml
Now that Jenkins is running, go to http://192.168.33.15:8080. You'll be welcomed by the default Jenkins screen.
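
You can also verify from the command line that Jenkins is answering (the IP is the example host used above); an HTTP response header coming back means the service is listening:

$ curl -I http://192.168.33.15:8080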

Friday, 27 November 2015

Opstree SHOA Part 1: Build & Release


At Opstree we have started a new initiative called SHOA, Saturday Hands On Activity. Under this program we pick up a concept, tool or technology and do a hands-on activity. At the end of the day, whatever we have understood is followed up with a blog or series of blogs.
Since this is the first Hands On Activity, we are starting with Build & Release.

 

What we intend to do 

 

Setup Build & Release for project under git repository https://github.com/OpsTree/ContinuousIntegration.

What all we will be doing to achieve it

  • Finalize the SCM tool that we are going to use: Puppet/Chef/Ansible.
  • Automated setup of Jenkins using SCM tool.
  • Automated setup of Nexus/Artifactory/Archiva using SCM tool.
  • Automated setup of Sonar using SCM tool.
  • Dev environment setup using the SCM tool: since this is a web app project, our Dev environment will have Nginx & Tomcat.
  • QA environment setup using the SCM tool: since this is a web app project, our QA environment will have Nginx & Tomcat.
  • Creation of various build jobs
    • Code Stability Job.
    • Code Quality Job.
    • Code Coverage Job.
    • Functional Test Job on dev environment.
  • Creation of release Job.
  • Creation of deployment job to do deployment on Dev & QA environment.

This activity is open to the public as well, so if you have any suggestions or you want to attend, you are most welcome.

Monday, 26 October 2015

Marrying Nginx with ELB

A few weeks back I got a requirement to set up a highly available API server. I said, not a big deal! I'll have Nginx as a reverse proxy (why we didn't expose the API directly via ELB is a different story), my auto-scaled API setup will sit behind an internal ELB, and things would be in place, TA DA.



Things worked perfectly fine for a few days, but one day the API consumers reported that they were not getting responses back. What? When I checked, the API URL was indeed returning a 502 error code. It was really strange for Nginx to be sending 502 responses back; did that mean the highly scalable setup was down? Well, I was proven wrong: the ELB was working perfectly fine, as a curl request to the internal ELB returned a proper response, so yes, the highly available API setup was in place. What next? Yes, the Nginx error logs. I did see Nginx reporting connection timeouts along with the 502 error code. The interesting thing to note was that it was trying to reach an IP (one of the IPs assigned to the ELB), and when I tried a curl hit on that IP for the API request, it failed too. EUREKA, EUREKA!! I had reproduced the problem.

Now I had to collect all this information and infer the logical cause of the problem, and yes, there are a lot of smart people out there who would already have found the solution to this problem, so I just had to ask the right question on Google :). The question was "Nginx using IP instead of domain name" and the answer was "Nginx resolves the domain name and caches the IP at startup, and since an ELB is elastic in nature, its IPs change over time". That was the reason Nginx was trying to talk to the older, no-longer-associated IPs of the internal ELB.

Finding the solution was not a big task, as it was just about making sure Nginx keeps talking to the ELB itself and not to the IPs once associated with it; that's why I said marrying Nginx with ELB :).

I'll not go into the actual solution in detail, as there are already solutions available on the web. I referred to this really good blog for the solution:

http://ghost.thekindof.me/nginx-aws-elb-dns-resolution-nginx-resolver-directive-and-black-magic/
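
For completeness, the commonly used fix is Nginx's resolver directive combined with a variable in proxy_pass, which forces Nginx to re-resolve the ELB's DNS name at request time instead of caching the IP at startup. A rough sketch only (the resolver address and the ELB DNS name below are placeholders; the linked post covers the details and caveats):

location /api/ {
    resolver 10.0.0.2 valid=30s;                                      # VPC DNS server; re-resolve every 30 seconds
    set $api_backend "internal-my-api-elb.example.amazonaws.com";     # using a variable forces runtime resolution
    proxy_pass http://$api_backend;
}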

Sunday, 22 February 2015

PERCONA STANDALONE SERVER

As a DevOps activist I am exploring Percona XtraDB. In a series of blogs I will share my learnings. This blog intends to capture the step-by-step details of installing Percona XtraDB in standalone mode.

Introduction:


Percona Server is an enhanced drop-in replacement for MySQL. It offers breakthrough performance, scalability, features, and instrumentation.
Percona focuses on providing a solution for the most demanding applications, empowering users to get the best performance and lowest downtime possible.

The Percona XtraDB Storage Engine:

  • Percona XtraDB is an enhanced version of the InnoDB storage engine, designed to better scale on modern hardware, and including a variety of other features useful in high performance environments. It is fully backwards compatible, and so can be used as a drop-in replacement for standard InnoDB. 
  • Percona XtraDB includes all of InnoDB's robust, reliable ACID-compliant design and advanced MVCC architecture, and builds on that solid foundation with more features, more tunability, more metrics, and more scalability.
  • It is designed to scale better on many cores, to use memory more efficiently, and to be more convenient and useful.

      Installation on ubuntu:

      STEP 1: Add Percona Software Repositories
      $ apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
      STEP 2: Add this to /etc/apt/sources.list:
      deb http://repo.percona.com/apt precise main
      deb-src http://repo.percona.com/apt precise main
      STEP 3: Update the local cache
      $ apt-get update
      STEP 4: Install the server and client packages
      $ apt-get install percona-server-server-5.6 percona-server-client-5.6
      STEP 5: Start Percona Server
      $ service mysql start
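      Once the service is up, a quick sanity check is to ask the server for its version string, which should identify the Percona Server build (you will be prompted for the root password chosen during installation):
      $ mysql -u root -p -e "SELECT VERSION();"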
      Let me know if you have any suggestions. You can also contact me at belwal.mohit@gmail.com.

      Understanding Percona XtraDB cluster

      As a DevOps activist I am exploring Percona XtraDB. In a series of blogs I will share my learnings. This blog intends to capture theoretical knowledge of Percona XtraDB Cluster.

      Prerequisites

      1. You should have basic knowledge of mysql. 
      2. OS - Ubuntu

      What is Percona XtraDB Cluster?

      Percona XtraDB Cluster is free, open-source MySQL high availability and scalability software.
      It provides:
      1. Synchronous Replication: Transaction either committed on all nodes or none.
      2. Multi-Master Replication: You can write to any node
      3. Parallel applying events on slave. Real “parallel replication”.
      4. Automatic node provisioning.
      5. Data consistency. No more unsynchronized slaves.

      Introduction

      1. The cluster consists of nodes. The cluster’s recommended configuration is to have 3 nodes, however 2 nodes can be used as well.
      2. Every node is a regular Mysql / Percona server setup. You can convert your existing MySQL / Percona Server into Node and roll Cluster using it as a base or you can detach Node from Cluster and use it as a regular server.
      3. Each node will contain full copy of data.



      Benefits of this approach:

      • Whenever you execute a query, it is executed locally. All data is available locally, so no remote access is required.
      • No central management. You can lose any node at any time, and the cluster will continue functioning.
      • It is a good solution for scaling read workload. You can put read queries to any of the nodes.

      Drawbacks:

      • Overhead of joining new node. New node will copy all data from an existing node. If it is 100 GB, it will copy 100 GB.
      • Not an effective write scaling solution. All writes have to go on all nodes.
      • Duplication of data. If you have 3 nodes, there will be 3 duplicates.

      Difference between Percona XtraDB Cluster and MySQL Replication

      For this we will have to look into the well known CAP theorem for distributed systems. According to this theorem, characteristics of Distributed systems are:
      C - Consistency (all your data is consistent on all nodes),
      A - Availability (your system is AVAILABLE to handle requests in case of failure of one or several nodes),
      P - Partitioning tolerance (in case of inter-node connection failure, each node is still available to handle requests).
      CAP theorem says that any Distributed system can have any two out of these three.
      • MySQL replication has: Availability and Partitioning tolerance.
      • Percona XtraDB Cluster has: Consistency and Availability.
      So MySQL replication does not guarantee consistency of data, while Percona XtraDB Cluster provides consistency but loses partitioning tolerance.

      Components 

      Percona XtraDB Cluster is based on:
      • Percona Server with XtraDB and includes Write Set Replication patches.
      It uses:
      • Galera Library: A generic synchronous Multi-Master replication plugin for transactional applications.
      • Galera supports:
        • Incremental State Transfer (IST), useful in WAN deployments.
        • RSU, Rolling Schema Update. Schema change does not block operations against table.

      Percona XtraDB cluster limitations

      • Currently replication works only with the InnoDB storage engine.
      That means writes to tables of other types, including (mysql.*) tables, are not replicated.
      DDL statements are replicated at the statement level, and changes to mysql.* tables will get replicated that way.
      So you can issue: CREATE USER …. , this will be replicated,
      but issuing: INSERT INTO mysql.user …. , will not be replicated.
      You can also enable experimental MyISAM replication support with wsrep_replicate_myisam.
      • Unsupported queries:
        • LOCK/UNLOCK tables
        • lock function (GET_LOCK(), RELEASE_LOCK()....)
      • Due to cluster-level concurrency control, a transaction issuing COMMIT may be aborted at that stage.
      There can be two transactions writing to the same rows and committing on separate Percona XtraDB Cluster nodes, and only one of them can successfully commit. The failing one will be aborted; for cluster-level aborts, Percona gives back a deadlock error code.
      • The write throughput of the whole cluster is limited by the weakest node. If one node becomes slow, the whole cluster becomes slow.

      FEATURES


      High Availability

      In a basic setup with 3 nodes, the Percona XtraDB cluster will continue to function if you take any of the nodes down. Even if a node crashes or becomes unavailable over the network, the cluster will continue to work, and queries can be issued on the working nodes.
      In case data changed while a node was down, there are two options that the node may use when it rejoins the cluster:
      1. State Snapshot Transfer (SST): The SST method performs a full copy of data from one node to another. It’s used when a new node joins the cluster; one of the existing nodes transfers the data to it.
         There are three available methods of SST:
        • mysqldump
        • rsync
        • xtrabackup
      The downside of “mysqldump” and “rsync” is that your cluster becomes READ-ONLY while data is copied from one node to the other, while the xtrabackup SST method does not require this for the entire syncing process.
      2. Incremental State Transfer (IST): If a node is down for a short period of time and then starts up, the node is able to fetch only those changes made during the period it was down.
      This is done using a caching mechanism on the nodes. Each node contains a cache, a ring-buffer of the last N changes, and the node is able to transfer part of this cache. IST can be done only if the amount of changes needed to transfer is less than N; if it exceeds N, the joining node has to perform SST.


      Multi-Master Replication

      • Multi-Master replication stands for the ability to write to any node in the cluster and not worry that it will get into an out-of-sync situation, as regularly happens with regular MySQL replication if you imprudently write to the wrong server.
      • With Percona XtraDB Cluster you can write to any node, and the Cluster guarantees consistency of writes. That is, the write is either committed on all the nodes or not committed at all.
      All queries are executed locally on the node, and there is special handling only on COMMIT. When the COMMIT is issued, the transaction has to pass certification on all the nodes. If it does not pass, you will receive an ERROR as a response to that query; if it passes, the transaction is applied on the local node.
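      In practice, the losing transaction of such a conflict sees a deadlock error on COMMIT, roughly like this (the exact message and code can vary by version):
      mysql> COMMIT;
      ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction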

      Let me know if you have any suggestions. You can also contact me at belwal.mohit@gmail.com.

      Monday, 16 February 2015

      Getting Started with Percona XtraDB Cluster

      Percona XtraDB Cluster

      As a DevOps activist I am exploring Percona XtraDB. In a series of blogs I will share my learnings. This blog intends to capture step by step details of installation of Percona XtraDB in Cluster mode. 

      Introduction: Why Cluster Mode?

      Percona XtraDB Cluster is a high availability and scalability solution for MySQL users, which provides:
                Synchronous replication : Transaction either committed on all nodes or none.
                Multi-master replication : You can write to any node.
                Parallel applying events on slave : parallel event application on all slave nodes
                Automatic node provisioning
                Data consistency

        Straight into the Act: Installing Percona XtraDB Cluster

        Pre-requisites/Assumptions
        1. OS - Ubuntu
        2. 3 Ubuntu nodes are available
        For the sake of this discussion, let's name the nodes as follows:
        node 1
        hostname: percona_xtradb_cluster1
        IP: 192.168.1.2

        node 2
        hostname: percona_xtradb_cluster2
        IP: 192.168.1.3

        node 3
        hostname: percona_xtradb_cluster3
        IP: 192.168.1.4
        Repeat the below steps on all nodes

        STEP 1 : Add the Percona repository
        $ echo "deb http://repo.percona.com/apt precise main" >> /etc/apt/sources.list.d/percona.list
        $ echo "deb-src http://repo.percona.com/apt precise main" >> /etc/apt/sources.list.d/percona.list
        $ apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
        STEP 2 : After adding the Percona repository, update the apt cache so that the new packages become available:
        $ apt-get update
        STEP 3 : Install Percona XtraDB Cluster :
        $ apt-get install -y percona-xtradb-cluster-56 qpress xtrabackup
        STEP 4 : Install additional packages for editing files, downloading, etc.:
        $ apt-get install -y python-software-properties vim wget curl netcat

        With the above steps we have installed Percona XtraDB Cluster on every node. Now we'll configure each node so that a cluster of three nodes can be formed.

        Node Configuration:

        Add/modify the file /etc/mysql/my.cnf on the first node:
        [MYSQLD] #This section is for mysql configuration
        user = mysql
        default_storage_engine = InnoDB
        basedir = /usr
        datadir = /var/lib/mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        innodb_autoinc_lock_mode = 2
        log_queries_not_using_indexes = 1
        max_allowed_packet = 128M
        binlog_format = ROW
        wsrep_provider = /usr/lib/libgalera_smm.so
        wsrep_node_address = 192.168.1.2
        wsrep_cluster_name="newcluster"
        wsrep_cluster_address = gcomm://192.168.1.2,192.168.1.3,192.168.1.4
        wsrep_node_name = cluster1
        wsrep_slave_threads = 4
        wsrep_sst_method = xtrabackup-v2
        wsrep_sst_auth = sst:secret

        [sst] #This section is for sst(state snapshot transfer) configuration
        streamfmt = xbstream

        [xtrabackup] #This section defines tuning configuration for xtrabackup
        compress
        compact
        parallel = 2
        compress_threads = 2
        rebuild_threads = 2
        Note :
                 wsrep_node_address = {IP of current node}
                 wsrep_cluster_name= {Name of cluster}
                 wsrep_cluster_address = gcomm://{Comma-separated IP addresses of the nodes in the cluster}
                 wsrep_node_name = {Name of the current node, used to identify it in the cluster}

        Now that the node configuration is done, start the services on the first node.
        Bootstrap the first node:
        $ service mysql bootstrap-pxc
        Create the sst user for authentication between cluster nodes:
        $ mysql -e "GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst'@'localhost' IDENTIFIED BY 'secret';"
        Check cluster status :
        $ mysql -e "show global status like 'wsrep%';"

        Configuration file for second node:
        [MYSQLD]
        user = mysql
        default_storage_engine = InnoDB
        basedir = /usr
        datadir = /var/lib/mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        innodb_autoinc_lock_mode = 2
        log_queries_not_using_indexes = 1
        max_allowed_packet = 128M
        binlog_format = ROW
        wsrep_provider = /usr/lib/libgalera_smm.so
        wsrep_node_address = 192.168.1.3
        wsrep_cluster_name="newcluster"
        wsrep_cluster_address = gcomm://192.168.1.2,192.168.1.3,192.168.1.4
        wsrep_node_name = cluster2
        wsrep_slave_threads = 4
        wsrep_sst_method = xtrabackup-v2
        wsrep_sst_auth = sst:secret

        [sst]
        streamfmt = xbstream

        [xtrabackup]
        compress
        compact
        parallel = 2
        After doing the configuration, start the services on node 2.
        Start node 2:
        $ service mysql start
        Check cluster status :
        $ mysql -e "show global status like 'wsrep%';"
        Now configure node 3 similarly. The changes are listed below.
        Changes in configuration for node 3 :
        wsrep_node_address = 192.168.1.4

        wsrep_node_name = cluster3
        Start node 3 :
        $ service mysql start
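        Once all three nodes are started, wsrep_cluster_size should report 3 on any node:
        $ mysql -e "show global status like 'wsrep_cluster_size';"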

        Test the Percona XtraDB cluster:

        Log in with the mysql client on any node:
        mysql>create database opstree;
        mysql>use opstree;
        mysql>create table nu113r(name varchar(50));
        mysql>insert into nu113r values("zukin");
        mysql>select * from nu113r;
        Check the database on another node with the mysql client:
        mysql>show databases;
        Note : There should be a database named “opstree”.
        mysql>use opstree;
        mysql>select * from nu113r; 
        Note : The data will be the same as on the previous node.

        Let me know if you have any suggestions. You can also contact me at belwal.mohit@gmail.com.

        Tuesday, 13 January 2015

        Kernel-based Virtual Machine

        What is virtualization?
        "Virtualization is a technology that combines or divides computing resources to present one or many operating environments using methodologies like hardware and software partitioning or aggregation, partial or complete machine simulation, emulation, time-sharing, and many others."

        This means that virtualization uses technology to abstract from the real hardware and provide isolated environments, so-called Virtual Machines. They are capable of running various applications or even a whole operating system. A goal not mentioned in the definition is to achieve near-native performance for running VMs. This is a very important point, because users always want to get the most out of their hardware, and most of them are not willing to introduce virtualization technology if a huge amount of CPU power is wasted on managing VMs.

        Kernel-based Virtual Machine :
        KVM is the first virtualization solution that has been integrated into the vanilla Linux kernel. KVM was initially developed by Qumranet, a small company located in Israel. Red Hat acquired Qumranet in September 2008, when KVM had become more production-ready, and sees KVM as the next generation of virtualization technology.

        KVM Architecture:
        Linux has all the mechanisms a VMM needs to operate several VMs. KVM is implemented as a kernel module that can be loaded to extend Linux with these capabilities.
        In a normal Linux environment each process runs either in user-mode or in kernel-mode. KVM introduces a third mode, guest-mode, and therefore relies on a virtualization-capable CPU with either the Intel VT or AMD-SVM extensions. A process in guest-mode has its own kernel-mode and user-mode; thus, it is able to run an operating system. Such processes represent the VMs running on a KVM host. From the host's point of view, the modes are used as follows:
        • user-mode: I/O when guest needs to access devices.
        • kernel-mode: switch into guest-mode and handle exits due to I/O operations
        • guest-mode: execute guest code, which is the guest OS except I/O
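        A quick way to check on Linux whether the CPU exposes these extensions (vmx is the Intel VT flag, svm the AMD one) is to grep /proc/cpuinfo; a count greater than zero means hardware virtualization is available:
        $ egrep -c '(vmx|svm)' /proc/cpuinfo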
        Resource Management:
        To reuse as much code as possible, the developers mainly modified the Linux memory management to allow mapping physical memory into the VMs' address space. For this they added shadow page tables, which were needed in the early days of x86 virtualization.
        The scheduler of an operating system computes an order in which each process is assigned to one of the available CPUs; in this way, all running processes share the computing time. Since the KVM developers wanted to reuse most of the mechanisms of Linux, they simply implemented each VM as a process, relying on the Linux scheduler to assign computing power to the VMs.

        The KVM control interface: 
        Once the KVM kernel module has been loaded, the /dev/kvm device node appears in the file system; it represents the interface to KVM. It allows the hypervisor to be controlled through a set of ioctls, commonly used in certain operating systems as an interface for processes running in user-mode to communicate with a driver. The ioctl() system call allows executing several operations: creating new virtual machines, assigning memory to a virtual machine, and assigning and starting virtual CPUs.
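        On a running host this is easy to see: once the module is loaded, both the module and the device node are visible (kvm_intel vs. kvm_amd depends on the CPU vendor):
        $ lsmod | grep kvm     # shows kvm plus kvm_intel or kvm_amd
        $ ls -l /dev/kvm       # the control device node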

        Emulation of Hardware:
        To provide hardware like hard disks, CD drives or network cards to the VMs, KVM uses a highly modified QEMU. This is a so-called platform virtualization tool, which allows emulating an entire PC platform including graphics, networking, disk drives and more.
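        As an illustration, launching a guest with KVM acceleration through QEMU typically looks something like this (the memory size and disk image path are placeholders):
        $ qemu-system-x86_64 -enable-kvm -m 1024 -hda /path/to/guest-disk.img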

        Execution-Model:
        The execution model of KVM is a loop of the actions used to operate the VMs. These actions are separated by the three modes.
        Let's see which tasks are done in which mode:
        • user-mode: The KVM module is called using ioctl() to execute guest code until an I/O operation initiated by the guest or an external event occurs. Such an event may be the arrival of a network packet, which could be the reply to a network packet sent by the host earlier. Such events are expressed as signals that lead to an interruption of guest code execution.
        • kernel-mode: The kernel causes the hardware to execute guest code natively. If the processor exits the guest due to pending memory or I/O operations, the kernel performs the necessary tasks and resumes the flow of execution. If external events such as signals or I/O operations initiated by the guest exist, it exits to user-mode.
        • guest-mode: This is on the hardware level, where the extended instruction set of the virtualization-capable CPU is used to execute the native guest code, until an instruction is encountered that needs assistance from KVM, a fault occurs, or an external interrupt arrives.
        While a VM runs, there are plenty of switches between these modes.