Monday, 24 December 2012

Efficiently Handling Code Merges in a Version Control System


One of the painful & mundane tasks that release engineers have to perform is merging the changes of one branch into another, & in case of code conflicts the release engineer has to coordinate with all the developers to resolve those merge conflicts.

In our current setup the problem is more critical as the development of two releases overlaps. We have a sprint cycle of 10 days, with 5 days of active development, after which a code freeze is implemented & the remaining 5 days are only for bug fixes. The next sprint starts just after the code freeze date of the previous release. In an ideal scenario this setup should work well, but the biggest assumption behind successful execution of the process is that there should be minimal code check-ins after the code freeze, & usually that doesn't happen. This results in parallel development in 2 branches & therefore, while merging the two branches, there are a lot of code conflicts.

The real problem starts when we start merging code, as currently there are close to 100 developers working on the same code-base, which means a huge list of files in conflict & you have to chase down each & every person to resolve those conflicts. To overcome this problem we are planning to do 2 things.

The first is that instead of merging after a long duration, we are planning to increase the frequency of merges from once in 5 days to twice a day, which should help us reduce the list of conflicting files.

As I always strive to automate things as much as possible, the second part is to at least create an automated tool that will perform a dummy merge of the two branches and list out all the files that would end up in conflict, along with the last users who modified those files in the respective branches.
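A rough sketch of what such a tool could look like, assuming Git (which, as mentioned at the end of this post, is what we use) and example branch names:

#!/bin/bash
# dry_run_merge.sh -- rough sketch of the "dummy merge" tool; branch names are
# just examples, adjust for your own branches and remote.
set -e

SOURCE_BRANCH="release_2_0"      # branch whose changes we want to merge
TARGET_BRANCH="release_2_1"      # branch we want to merge into

git fetch origin
git checkout "$TARGET_BRANCH"

# Attempt the merge without committing; don't stop on conflicts
git merge --no-commit --no-ff "origin/$SOURCE_BRANCH" || true

# For every conflicting file, print the last user who modified it on each branch
for f in $(git diff --name-only --diff-filter=U); do
    src_author=$(git log -1 --format='%an' "origin/$SOURCE_BRANCH" -- "$f")
    tgt_author=$(git log -1 --format='%an' "$TARGET_BRANCH" -- "$f")
    echo "$f | $SOURCE_BRANCH: $src_author | $TARGET_BRANCH: $tgt_author"
done

# Throw the dummy merge away so the working copy stays clean
git merge --abort 2>/dev/null || git reset --merge

Run twice a day, the output gives us the list of conflicting files & the people to chase, without actually touching the branches.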

We are expecting a 60-70% improvement in the efficiency of the code merge process; let's see how things go. Feel free to drop any ideas or concerns you have :).

Although I tried to be as generic as possible, just to let you know, we are using Git as our version control system.

Tuesday, 20 November 2012

Managing Application Logs

A major issue that people face in managing a big system is log file management. In our setup we were primarily facing two issues:
1.) We had around 10-15 different applications; it was messy to track the logs, as you had to log in to each of those systems to view them
2.) The other issue was cleaning up old log files

The resolution for the second issue was quite easy. One solution is to write a script that deletes log files older than, say, n days and add it to crontab to execute at some frequency, say daily. This approach has an issue though: with the addition of a new system you have to do this setup every time. As a one-time solution for this problem we created a job in our CI system (Jenkins) which can be configured to run at some frequency & which reads the details of the machines and the locations of the log files that need to be cleaned. This second approach gave us the flexibility to manage cleaning of the log files from a single place.
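A minimal sketch of such a cleanup script, assuming an example log directory, file pattern & retention period (adjust these for your own setup):

#!/bin/bash
# clean_logs.sh -- delete application log files older than a retention window.
# Directory, pattern and retention period below are illustrative defaults.

LOG_DIR="${1:-/var/log/myapp}"     # where the application writes its logs
RETENTION_DAYS="${2:-7}"           # keep only the last n days of logs

# Remove plain and rotated log files older than the retention window
find "$LOG_DIR" -type f -name '*.log*' -mtime +"$RETENTION_DAYS" -print -delete

A crontab entry like "0 2 * * * /opt/scripts/clean_logs.sh /var/log/myapp 7" would run it daily at 2 AM; our Jenkins job follows the same idea, reading the machine & log path details from its configuration and running a command along these lines on each host.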


For the first issue we obviously had to look out for some tool, & the first Google hit :) suggested log.io, which seemed to meet all our requirements. The one-line definition goes like this: log.io is a real-time log monitoring tool through which you can monitor multiple log files in a single browser window.

I referred to the link given below to configure log.io
http://linuxdo.blogspot.in/2012/05/install-logio-on-centos.html

I'm not going into the details of setting up log.io or how it works, but if you have any questions you can leave a comment.

For your reference, I'm attaching an image of the log.io instance we are using.




So, happy log tracking :)

Monday, 29 October 2012

Build & Release Challenges : Manual DB Updates Part 2


This blog was supposed to be about the new system I thought of building to solve the problem that I discussed in my previous blog. Well, to your disappointment, this blog will not be about that; the reason is that the scope of the problem changed. In this blog I'll be discussing the new scope, how the discussion around it moved forward & what the current state is, which means that I'm still not able to solve this problem & suggestions are welcome :).

I'll again state the problem, which is simple enough: "database updates were not automated in non-prod environments as the same db scripts were modified during development". You can refer to the previous blog for more details about this problem. To solve it I came up with an incremental db update approach. As per this approach all new modifications are done as new sql updates, which means that if you had a file 1.sql and you need to modify it, a new file 1'.sql should be committed instead. In this way the system doesn't have to track changes inside files; it just has to maintain which files have already been executed, work out which new files need to be executed & execute only those. This solution can work very well in a normal setup; in fact, in my last assignment I was using exactly this approach to have automated db updates across all environments.
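A minimal sketch of this approach, assuming MySQL, a db_scripts folder of .sql files named so that they sort in execution order, & a simple bookkeeping table (all names and connection details here are illustrative only):

#!/bin/bash
# incremental_update.sh -- apply only the db scripts that have not run yet.
set -e

MYSQL_CMD="mysql -u deploy myapp"    # add -p / --host options as needed

# Table that remembers which scripts have already been executed
$MYSQL_CMD -e "CREATE TABLE IF NOT EXISTS executed_scripts (
                 script_name VARCHAR(255) PRIMARY KEY,
                 executed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)"

# Apply only the new scripts, in name order
for script in $(ls db_scripts/*.sql | sort); do
    name=$(basename "$script")
    already=$($MYSQL_CMD -N -e "SELECT COUNT(*) FROM executed_scripts WHERE script_name='$name'")
    if [ "$already" -eq 0 ]; then
        echo "Applying $name"
        $MYSQL_CMD < "$script"
        $MYSQL_CMD -e "INSERT INTO executed_scripts (script_name) VALUES ('$name')"
    fi
done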

The incremental db updates can't be run in the current setup, the reason being that we have a very large database, of the order of 100GB; you can easily imagine that we can't afford to run the same script with slight modifications, i.e. a first script adding a column of size 20, then another script changing its size to 40 & finally one renaming it to some other name. Instead, a single script should be created after consolidating all these scripts.

The first solution that came to my mind after this new issue emerged was that for non-prod deployments we should already have a database dump of the previous release, preferably a cold dump. During deployment 3 steps would be performed: first load the previous release db dump, then run all the consolidated scripts & finally do the code deployment. Initially this solution looked fine enough, but the QA team raised a concern: loading the previous release dump meant that all the test data they had created on the QA server would be lost, and I was back at square one :).
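Just to make the idea concrete, the deployment part of this approach would have looked roughly like this (dump & script file names and connection details are made up for illustration):

#!/bin/bash
# Rough sketch of "restore previous dump, then run consolidated scripts".
set -e

MYSQL_CMD="mysql -u deploy myapp_qa"     # add -p / --host options as needed

# 1. Reload the cold dump taken at the end of the previous release
$MYSQL_CMD < dumps/release_2_0_cold.sql

# 2. Run the single consolidated db script for the current release
$MYSQL_CMD < db_scripts/release_2_1_consolidated.sql

# 3. The application code deployment would follow here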

Another solution that could be implemented was to have a rollback script for each & every script committed. This convention has the advantage of supporting incremental updates, i.e. whenever a script is updated, first its corresponding rollback script is executed & then the updated script is executed. This solution has its own challenges: the first is that it's really difficult to write a rollback script for each & every script; another is that you have to carefully manage the script files so that there is no tight coupling between them, as executing the rollback of one script can impact another script. A third issue, although less significant, is that you have to deal with data loss.

We could also have used a hybrid approach, i.e. a combination of incremental & full db updates. Till the QA phase we can use the incremental db update mechanism, in which all new script modifications are done as new scripts and executed incrementally, but for staging & production deployment the db update will be done as a full update, which requires human intervention, i.e. consolidation of the scripts. This approach has 2 challenges: the first & foremost is that it involves manual intervention, & the second major issue is that we would be duplicating the db scripts.

So these were the few approaches that we thought of, & none of them solves our problem completely, so we are still struggling to fully automate the db update process. Again, any suggestions are most welcome :)





Sunday, 30 September 2012

Build & Release Challenges : Manual DB Updates

The first problem that I'm gonna discuss is manual db updates. In our current application we do have automated DB update execution in the production environment, but not in the rest of the environments, i.e. dev, qa, stage, performance test ... etc.

The process that we use for automated script execution in the production environment is that we create a release folder; this release folder contains all the sql scripts for the release along with a meta file. The release meta file contains the list of all the scripts that need to be executed; the current system reads this meta file & executes all the scripts of the release. This process is fair enough for the production system since the release is deployed only once there. In production we don't have to track whether a script got executed or not, i.e. the scripts' execution is treated as atomic: either all the scripts are executed or none is.
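For illustration, the execution part of that system boils down to something like the following, assuming a plain-text meta file with one script name per line (folder layout, file names & connection details are examples, not our exact setup):

#!/bin/bash
# run_release_scripts.sh -- execute every script listed in the release meta file.
set -e

RELEASE_DIR="releases/Release2_0"
META_FILE="$RELEASE_DIR/release.meta"
MYSQL_CMD="mysql -u deploy myapp"        # add -p / --host options as needed

# Execute the scripts in the order they are listed in the meta file
while read -r script; do
    [ -z "$script" ] && continue         # skip blank lines
    echo "Executing $script"
    $MYSQL_CMD < "$RELEASE_DIR/$script"
done < "$META_FILE"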

This atomic execution is the reason why the approach cannot be applied to the rest of the environments, since the db updates there will always be incremental. In all environments apart from production, a release is deployed multiple times; with each release new db scripts can be added to the system, and only those new scripts need to be executed.

The new system that I'm trying to develop will have incremental db update capability. The system that I'm planning to develop will have the following characteristics (a rough sketch of the bookkeeping table behind it is given after the list):
  • It should keep track of each script's name for later reference.
  • It should store the release to which each script belongs.
  • It should store the sequence of each script to enforce the order of execution.
  • It should also maintain whether a script has already been executed or not.
  • It should be able to handle error scenarios, i.e. if a script execution fails a corrective action should be taken by the system.
  • It should be extensible enough that various kinds of reports can be generated from it.
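Purely as an illustration of how these characteristics might translate into a schema (assuming MySQL; table & column names are my own placeholders, not a finalized design):

#!/bin/bash
# create_script_log.sh -- create the bookkeeping table behind such a system.

mysql -u deploy myapp <<'SQL'
CREATE TABLE IF NOT EXISTS db_script_log (
    script_name   VARCHAR(255) NOT NULL,           -- script name for later reference
    release_name  VARCHAR(64)  NOT NULL,           -- release the script belongs to
    seq_no        INT          NOT NULL,           -- enforces the order of execution
    executed      TINYINT(1)   NOT NULL DEFAULT 0, -- has the script already run?
    status        VARCHAR(32)  DEFAULT NULL,       -- e.g. SUCCESS / FAILED, for error handling & reports
    executed_at   TIMESTAMP    NULL DEFAULT NULL,
    PRIMARY KEY (release_name, seq_no)
);
SQL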
In the next blog I'll be talking about the actual system & how it is built.


Friday, 28 September 2012

Build & Release Challenges : Problems

So here is the consolidated list of the problems that the current system has. I've categorized all the issues into different categories so that they can be managed properly.

• CI Builds
  • Code stability builds are not in place
  • Code quality builds are not in place
  • Code deployment builds for non-prod environments are not in place
  • There are a lot of manual steps in prod deployments
  • All the projects are performance critical, so builds to profile the projects are also needed
• Database updates are manual at all stages of deployment
• Automated smoke testing of all the applications is not in place
• There is no integration of the bug tracking system with the version control system
• A release server needs to be set up so that releases can be managed properly
• No documentation at all :)
• Version Control System : We are using git as our version control system
  • The branching strategy has some flaws, as a result of which merging takes a lot of time
  • We don't have a GUI to manage the git server

Build & Release Challenges

I was planning to write this blog 2-3 days back, in fact not a blog but a blog series, & this blog will be the start of that series. This blog will only give you an overview of what I'll be discussing in the coming blogs. First, the reason why I'm writing it :) well, the reason is that I've changed jobs :) and I'll be working as a Build & Release manager. The challenge that I have is that right now there are very few or no streamlined processes defined for build and release, and obviously very little automation, so I have to work on these things. This means that in the coming blogs I'll be discussing the various problems that I'll be facing & how to overcome them.

Monday, 14 May 2012

Automated Database Update Or Rollback

One of the important steps during a release is doing the database update, and a rollback in case something goes wrong; usually people perform this operation manually. In this blog I'll talk about how we can automate this process by following some conventions.

Here I'm taking a mysql database as an example; we can have the same conventions for other databases also.

Convention to manage rollback/updates of a release
• Each project codebase will have a folder database_scripts at its root
• The database_scripts folder will contain a folder for each release, i.e. Release1_1, Release2_0...
• Each release folder will in turn contain two folders, update & rollback, which will contain the update & rollback scripts for that release
Automating the rollback/update
• The update folder will have a source input file, FileSequencer.txt. This file will list, in the correct order, all the update scripts that need to be executed for the release
• In a similar manner the rollback folder will have a source input file, FileSequencer.txt, which will list, in the correct order, all the rollback scripts that need to be executed for the release
• Finally we will have a utility shell script; this script will take the db details and execute all the scripts referred to in FileSequencer.txt using the mysql command (a rough sketch is given below)
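A minimal sketch of that utility script, keeping only the essentials (argument handling & error reporting are simplified, and the folder layout follows the convention above):

#!/bin/bash
# apply_db_scripts.sh -- run the update or rollback scripts of a release in order.
# Usage: ./apply_db_scripts.sh <db_user> <db_name> <release_folder> <update|rollback>
set -e

DB_USER="$1"
DB_NAME="$2"
RELEASE="$3"        # e.g. Release2_0
MODE="$4"           # update or rollback

SCRIPT_DIR="database_scripts/$RELEASE/$MODE"
SEQUENCER="$SCRIPT_DIR/FileSequencer.txt"
MYSQL_CMD="mysql -u $DB_USER $DB_NAME"    # add -p / --host options as needed

# Execute each script listed in FileSequencer.txt, in the listed order
while read -r sql_file; do
    [ -z "$sql_file" ] && continue        # skip blank lines
    echo "Running $MODE script: $sql_file"
    $MYSQL_CMD < "$SCRIPT_DIR/$sql_file"
done < "$SEQUENCER"

It would be invoked like "./apply_db_scripts.sh deploy myapp Release2_0 update" for an update, or with "rollback" as the last argument to roll the release back.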

Wednesday, 14 March 2012

Release Strategy for Java Web based projects

In this post I'll be discussing the 2 strategies that we can follow for releasing a Java-based web project.

A project can be primarily released in two ways:
• Incremental Release
• Full Release

An incremental release is done in big projects which have multiple modules & where usually only a few modules get updated between two releases. It makes sense to include only the updated modules in the release archive and, during deployment, to update only those modules on the application server.

A full release is usually done in small projects; the release archive contains all the components, and the release archive is deployed to the application server as a whole.

Both the incremental & full release strategies have their pros & cons: where the full release strategy scores in simple release archive generation & deployment, the incremental release has the upper hand in space usage by including only the modified components, although it brings overhead when doing a rollback.


Release Steps in Incremental Release Strategy: If you are following the incremental strategy, in general you need to perform the following steps (a rough sketch of a deployment script for steps 4-7 is given after the list)

1.) Checkout the latest code for the release
2.) Generate the list of components which need to be deployed for the release
3.) Generate the release archive based on the list of components
4.) Stop the server (if hot deployment of components is not available)
5.) Take a backup of the existing application on the application server, as we may need to roll back in case of any issues
6.) Replace the components in the application server with the components in the release archive
7.) Start the server (if hot deployment of components is not available)
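A rough sketch of steps 4-7, assuming, purely for illustration, a Tomcat-style setup (paths, server commands & archive name are assumptions; steps 1-3 are assumed to have already produced the archive):

#!/bin/bash
# incremental_deploy.sh -- deploy an incremental release archive over the current app.
set -e

APP_DIR="/opt/tomcat/webapps/myapp"
RELEASE_ARCHIVE="release_incremental.tar.gz"     # contains only the changed components
BACKUP_DIR="/opt/backups/myapp_$(date +%Y%m%d%H%M)"

/opt/tomcat/bin/shutdown.sh                      # step 4: stop the server

mkdir -p "$BACKUP_DIR"                           # step 5: backup the current application
cp -a "$APP_DIR/." "$BACKUP_DIR/"

tar -xzf "$RELEASE_ARCHIVE" -C "$APP_DIR"        # step 6: overwrite the changed components

/opt/tomcat/bin/startup.sh                       # step 7: start the server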


Release steps in Full Release Strategy: As explained earlier, the full release strategy is fairly simple; the steps involved are:
1.) Checkout the latest code for the release
2.) Generate the release archive for the whole application
3.) Stop the server (if hot deployment of components is not available)
4.) Deploy the release archive to the application server
5.) Start the server (if hot deployment of components is not available)