GitLab hooks and disabling forced push

For people who do not know much about GitLab: it is a "GitHub" equivalent that can be deployed inside a local network. This means that if your organization is not comfortable pushing code to an external website like GitHub, you can get all the features GitHub provides within your own environment.

Installing GitLab is not an easy process. If you have an Ubuntu-type system, you can get a .deb package and simply install it; otherwise installation is a fairly tedious task.
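On Ubuntu, installing the omnibus .deb package and reconfiguring it is usually all that is needed (the package filename below is only an illustration):

sudo dpkg -i gitlab_x.y.z-omnibus_amd64.deb
sudo gitlab-ctl reconfigure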

After installation and deployment, one of the first tasks you will want to do is write some hooks to put restrictions in place and sanitize the code moving into the different git repos. The trick to enabling hooks in GitLab is to modify the "gitlab-shell/hooks/update" file.

On an Ubuntu installation, the file can be found in the following folder:

/opt/gitlab/embedded/service/gitlab-shell/hooks

But if you have followed the installation notes in the documentation (https://github.com/gitlabhq/gitlabhq/blob/master/doc/install/installation.md), the update file is found in

/home/git/gitlab-shell/hooks

The "update" file is written in ruby. In order to abort a push, we need to exit with status "1". Let us create a very simple hook that disables all deletes. Add the following lines to "update" just after the "require_relative" line:

require_relative '../lib/gitlab_update'

refname = ARGV[0]
oldhash = ARGV[1]
newhash = ARGV[2]

# a branch or tag delete arrives as a push whose new hash is all zeros
if newhash =~ /\A0{40}\z/
  puts "deletes are not allowed"
  exit 1
end

For each push, GitLab's update script receives the ref name, the old commit hash and the new commit hash as its arguments. A delete shows up as a push whose new hash is the all-zero hash, so checking for that tells us whether a branch or tag is being removed. Exiting with "1" aborts the push.

We could also write another hook to disable forced pushes, but it is easier to modify "gitlab-shell/lib/gitlab_update.rb" instead. There is already a function there that identifies whether a push is forced. To block forced pushes, simply call this function inside the "exec" function. Here is the code I added to "exec" to disable forced pushes.

def exec
  # reset GL_ID env since we already
  # got the value from it
  ENV['GL_ID'] = nil

  if forced_push?
    puts "Forced push not allowed!"
    exit 1
  end

  # ...
end
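For reference, a forced (non fast-forward) push can be detected by asking git whether the old revision contains commits that are no longer reachable from the new one. A minimal ruby sketch of such a check, not necessarily identical to gitlab-shell's own forced_push? implementation, looks like this:

def forced_push?(oldrev, newrev)
  # commits reachable from oldrev but not from newrev have been rewritten away,
  # which is exactly what a non fast-forward push does
  missed = IO.popen(%W(git rev-list #{oldrev} ^#{newrev})).read
  !missed.strip.empty?
end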

Deployments

Deployments are a critical phase of any project. In the "stone age", developers used to simply scp the required files onto the production machines, which caused problems when multiple http servers were involved: keeping all the servers in sync was always an issue.

Then capistrano came into the picture. It made deployment of ruby/rails apps very easy. A few other people and I went ahead and modified it to deploy php apps into production as well, but that was a tricky job, since capistrano was originally designed for ruby/rails apps. Here is a sample capistrano script that pulls code out of svn, pushes it to multiple web servers and then runs some specific post-deployment tasks on them.

deploy.rb

set :application, "app"
set :repository,  "http:///tags/TAG102"
set :imgpath, "/var/images"

# svn settings
set :deploy_via, :copy
set :scm, :subversion
set :scm_username, "svn_username"
set :scm_password, "svn_password"
set :scm_checkout, "export"
set :copy_cache, true
set :copy_exclude, [".svn", "**/.svn"]

# ssh settings
set :user, "server_username"
set :use_sudo, true
default_run_options[:pty] = true

# deployment settings
set :current_dir, "html"
set :deploy_to, ""
set :site_root, "/var/www/#{current_dir}"
set :keep_releases, 3

# web servers
role :web, "192.168.1.1", "192.168.1.2", "192.168.1.3"

# the actual script
namespace :deploy do
  desc <<-DESC
    deploy the app
  DESC
  task :update do
    transaction do
      update_code
      symlink
    end
  end

  task :finalize_update do
    transaction do
      sudo "chown -R apache.apache #{release_path}"
      sudo "ln -nfs #{imgpath}/images #{release_path}/images"
    end
  end

  task :symlink do
    transaction do
      puts "Symlinking #{current_path} to #{site_root}."
      sudo "ln -nfs #{release_path} #{site_root}"
    end
  end

  task :migrate do
    # do nothing
  end

  task :restart do
    # do nothing
  end
end

This exports the code from the svn repository, creates a tarball locally, scps it to the production web servers and untars it at the specified location. It then runs the tasks specified in finalize_update and finally points the symlink of the "html" directory at the newly deployed path. The good point about capistrano is that you are almost completely shielded from what happens behind the scenes. The bad point is that, because you are shielded, you do not know how to do what you want to do; it takes a fair amount of digging and tweaking to make this script fulfil your requirements.
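With capistrano 2, a script like this is typically kicked off with a single command from the project directory:

cap deploy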

Now let us check out fabric.

Installation is quite easy using pip.

sudo pip install fabric

In case you like the old-fashioned way, you can go ahead and download the source code and do a

sudo python setup.py install

To create a fabric script, you need to create a simple fab file with whatever you require. For example, if you need to run a simple command like 'uname -a' on all your servers, just create a script named fabfile.py with the following code:

from fabric.api import run

def host_type():
    run('uname -a')

And run the script using the following command

$ fab -f fabfile.py -H localhost,127.0.0.1 host_type

[localhost] Executing task 'host_type'
[localhost] run: uname -a
[localhost] Login password:
[localhost] out: Linux gamegeek 3.2.0-24-generic #37-Ubuntu SMP Wed Apr 25 08:43:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

[127.0.0.1] Executing task 'host_type'
[127.0.0.1] run: uname -a
[127.0.0.1] out: Linux gamegeek 3.2.0-24-generic #37-Ubuntu SMP Wed Apr 25 08:43:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

Done.
Disconnecting from localhost… done.
Disconnecting from 127.0.0.1… done.

Here is a simple fabric script that does everything the earlier capistrano script was doing.

fabfile.py

from __future__ import with_statement
from fabric.api import *
from fabric.operations import local, put

def production():
    env.user = 'server_username'
    env.hosts = ['192.168.1.1', '192.168.1.2', '192.168.1.3']
    env.deploy_to = ''
    env.site_root = '/var/www/html'
    env.tag_name = 'tag101'
    env.repository = {
        'url': 'http:///tags/TAG101',
        'username': 'svn_username',
        'password': 'svn_password',
        'command': 'svn export --force',
    }
    env.image_path = '/var/images'

def deploy():
    checkout()
    pack()
    unpack()
    symlinks()
    makelive()

def checkout():
    # export the tag from svn into /tmp on the local machine
    local('%s --username %s --password %s --no-auth-cache %s /tmp/%s' %
        (env.repository['command'], env.repository['username'], env.repository['password'], env.repository['url'], env.tag_name))

def pack():
    # -C /tmp keeps the paths inside the archive relative, so it extracts as <tag>/ under deploy_to
    local('tar -C /tmp -czf /tmp/%s.tar.gz %s' % (env.tag_name, env.tag_name))

def unpack():
    # copy the tarball to each remote host and extract it under deploy_to
    put('/tmp/%s.tar.gz' % (env.tag_name), '/tmp/')
    with cd('%s' % (env.deploy_to)):
        run('tar -xzf /tmp/%s.tar.gz' % (env.tag_name))

def symlinks():
    run('ln -nfs %s/images %s/%s/images' % (env.image_path, env.deploy_to, env.tag_name))

def makelive():
    run('ln -nfs %s/%s %s' % (env.deploy_to, env.tag_name, env.site_root))
The good point is that I have much more control over what I am doing with fabric as compared to capistrano. And it took me a lot less time to cook up the fabric recipe than the capistrano one.

To run this script simply do

fab production deploy

This will execute the tasks production and deploy, in that order. You can keep separate settings for staging and local environments in the same script, as shown below, and you can even go ahead and build your own deployment infrastructure and process to do whatever you want without running into any restrictions.
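For example, a staging configuration could sit right next to production in the same fabfile (the host address below is just a placeholder):

def staging():
    env.user = 'server_username'
    env.hosts = ['192.168.2.1']      # hypothetical staging box
    env.deploy_to = ''
    env.site_root = '/var/www/html'
    env.tag_name = 'tag101'
    env.repository = {
        'url': 'http:///tags/TAG101',
        'username': 'svn_username',
        'password': 'svn_password',
        'command': 'svn export --force',
    }
    env.image_path = '/var/images'

Running fab staging deploy would then push the same tag to the staging host instead.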
