Chevrolet Aveo ownership report

In 2006, when I was looking for a sedan to purchase, I had multiple options. The Ford Fiesta and Hyundai Verna had recently been released, and the Honda City was selling like hot cakes. I was leaning towards either the Ford or the Hyundai. But while discussing my options around, I came to know that the Chevrolet Aveo was also in the same segment and had some features which really impressed me – like the ground clearance and the engine performance. Surprisingly, as per the maintenance figures shared by all three dealers, the Chevy came out cheaper to maintain. So I went ahead and got it.

For 4 years, I was extremely happy with the car's performance. I used to get a good mileage of around 12-13 kmpl. Service was due every 5000 kms and cost me only around 3000/- per visit. The car was peppy and light to drive. My daily run was around 30-40 kms, so it was quite cheap to maintain.

Then my first accident happened – I was standing still while a truck chewed through the driver side of my car. The driver side door and fender were totally damaged. When I went to the GM workshop, I could see the "smile" on their faces – which I was able to decipher later. I got a quote of 60K from the GM workshop. I went to many other workshops, but they denied having parts for Chevrolet cars. Finally I found a workshop which did the repairs for around 20K – of which I had to pay around 10K.

This was the point where I should have realized that life with my Aveo would not be easy anymore. The 25K service was very expensive – it cost me around 15K. My shock absorbers had leaked. I could not feel any difference in the ride quality, but on GM's recommendation I got them changed. From then onwards, every service cost 8-10K.

My driving went up from 30 km per day to 100 km per day, and I would finish 5000 kms in less than 3 months. Each service would cost me 8-10K. So in addition to the rising petrol prices, I had to pay around 4K per month on servicing the car. The previous shock absorbers had a warranty of around 1 year, and immediately after the year was over they gave way. Got them changed again – paying from my own pocket.

At 34K kms, the stock Goodyear tyres that came with the car gave way. Once I had 3 punctures on the way to office on the same day. The next day, I took a day off from office and got new tyres – which again cost me 18K. Another hit came immediately after the 35K service. At around 37K, my battery gave way. Interestingly, the battery fitted in the Aveo has a cut in it, so only batteries from authorized workshops can be fitted. Also, the car's computer resets completely when the battery is changed and needs an authorization code before the car can be driven further. It is advanced technology, but a pain. A battery which normally costs around 6-7K in the market cost me 9K at the authorized workshop.

When my 40K service again cost me 10K for injector cleaning, oil change and other stuff, I decided to call it quits. I parked the car at home and got another one for the office commute – a Tata Safari, which is famous for issues cropping up now and then. I thought that a Chevy sitting at home, driven for around 200-300 kms a month, would be cheaper to maintain. But I was wrong.

After 41K kms, one fine day, I took the car to the GM workshop with the complaint that the car keeps racing – the engine RPM stays above 2000. They did a thorough cleanup of the engine, charged me 7K for it, and told me that the car's ECM had gone bad. For those who are not aware, the ECM is the computer sitting inside the car; it controls almost everything, including fuel, acceleration and other functions. It is supposed to be very rugged and able to withstand the extreme dust and temperatures that are common inside a car. It is said that if the ECM goes bad, the car should not start at all. Surprisingly, my car was starting and rolling – but rolling very fast.

They told me that it would cost another 30K to replace the ECM – as electronics are not covered by car insurance. I raised the problem with GM India on Facebook and they told me to bring the car back. This time they did not charge me, but after another thorough examination of the car they again told me that the ECM had to be replaced. They offered me a discount of 1000 on labour charges. I was furious. How can a car that has been driven only 41,000 kms develop a bad ECM? Fortunately, one of my friends knew a 3rd party agent who dealt in such stuff. With the agent's help I was able to get the car working for 18K. And I sold off the car as soon as it was healthy – before anything else went wrong.

Today, in less than 2 years, my Tata Safari has done 40K kms. With a service interval of 15K kms and a service cost of around 5-6K, it is a much cheaper car to maintain, with fewer parts going bad.

The problem with GM cars is that they give you a happy feeling for the first 3 years or around 30-35,000 kms, after which the maintenance cost shoots up. Once the free service from GM expires, and since parts are not available in the open market, you are solely dependent on the authorized service centers, which can charge you whatever they like. And since the cars are not very reliable, the resale value is very low. After 5 years of driving a GM, you may realize that the money you spent on getting and maintaining the car was too much.

Ford also follows a similar structure, where parts are not readily available outside the authorized service centers, but the availability is still much better than GM's, and Ford cars are more reliable. Hyundai, Maruti, Tata and Mahindra cars can easily be maintained outside the authorized service centers, and they are much cheaper to maintain as parts and service centers are easily available.

But I would clearly stay away from GM, and would advise the same.

Step by step: setting up SolrCloud

Here is a quick step by step guide to setting up SolrCloud on a few machines. We will set up SolrCloud to match a production environment – that is, ZooKeeper will run as a separate setup, and Solr will sit inside Tomcat instead of the default Jetty.

Apache ZooKeeper is a centralized service for maintaining configuration information. In the case of SolrCloud, the Solr configuration is maintained in ZooKeeper. Let's set up ZooKeeper first.

We will be setting up SolrCloud on 3 machines. Please make the following entries in the /etc/hosts file on all machines.

172.22.67.101 solr1
172.22.67.102 solr2
172.22.67.103 solr3

Let's set up the ZooKeeper ensemble on 2 machines.

Download and untar ZooKeeper into the /opt/zookeeper directory on both servers solr1 & solr2. On both servers, do the following:

root@solr1$ mkdir /opt/zookeeper/data
root@solr1$ cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
root@solr1$ vim /opt/zookeeper/conf/zoo.cfg

Make the following changes in the zoo.cfg file

dataDir=/opt/zookeeper/data
server.1=solr1:2888:3888
server.2=solr2:2888:3888

Save the zoo.cfg file.

server.x=[hostname]:nnnnn[:nnnnn] – here x should match the server id, which is stored in the myid file in the ZooKeeper data directory.

Assign different ids to the zookeeper servers

on solr1

root@solr1$ echo 1 > /opt/zookeeper/data/myid

on solr2

root@solr2$ echo 2 > /opt/zookeeper/data/myid

Start ZooKeeper on both servers:

root@solr1$ cd /opt/zookeeper
root@solr1$ ./bin/zkServer.sh start

Note : in the future, when you need to reset the cluster/shard information, do the following:

root@solr1$ ./bin/zkCli.sh -server solr1:2181
[zk: solr1:2181(CONNECTED) 0] rmr /clusterstate.json

Now let's set up Solr and start it with the external ZooKeeper.

Install Solr on all 3 machines. I installed it in the /opt/solr folder.
Start the first Solr instance and upload the Solr configuration into the ZooKeeper cluster:

root@solr1$ cd /opt/solr/example
root@solr1$ java -DzkHost=solr1:2181,solr2:2181 -Dbootstrap_confdir=solr/collection1/conf/ -DnumShards=2 -jar start.jar

Here the number of shards is specified as 2, so our cluster will have 2 shards and multiple replicas per shard.
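The shards split the 32-bit hash range of document ids between them, as the ranges in clusterstate.json later show. Here is a rough sketch of this kind of hash-range routing. It is illustrative only: the real compositeId router uses MurmurHash3, while this sketch uses CRC32.

```python
import zlib

# Shard hash ranges as reported in clusterstate.json for a 2-shard cluster.
SHARDS = {
    "shard1": (0x80000000, 0xffffffff),
    "shard2": (0x00000000, 0x7fffffff),
}

def shard_for(doc_id):
    # Hash the id into an unsigned 32-bit value (illustrative hash only;
    # Solr actually uses MurmurHash3 for compositeId routing).
    h = zlib.crc32(doc_id.encode()) & 0xffffffff
    for name, (lo, hi) in SHARDS.items():
        if lo <= h <= hi:
            return name

print(shard_for("doc-1"))
```

Every document id lands in exactly one shard's range, which is why the numShards parameter must stay consistent across restarts.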

Now start Solr on the remaining servers:

root@solr2$ cd /opt/solr/example
root@solr2$ java -DzkHost=solr1:2181,solr2:2181 -DnumShards=2 -jar start.jar

root@solr3$ cd /opt/solr/example
root@solr3$ java -DzkHost=solr1:2181,solr2:2181 -DnumShards=2 -jar start.jar

Note : it is important to pass the "numShards" parameter on every startup, else numShards gets reset to 1.

Point your browser to http://172.22.67.101:8983/solr/
Click on "cloud" -> "graph" in the left section and you can see that there are 2 nodes in shard 1 and 1 node in shard 2.

Let's feed some data and see how it is distributed across the shards.

root@solr3$ cd /opt/solr/example/exampledocs
root@solr3$ java -jar post.jar *.xml

Let's check how the data was distributed between the two shards. Head back to the Solr admin cloud graph page at http://172.22.67.101:8983/solr/.

Click on the first shard's collection1 – you may see 14 documents in this shard.
Click on the second shard's collection1 – you may see 18 documents in this shard.
Check the replicas for each shard – they should have the same counts.
At this point, we can start issuing some queries against the collection:

Get all documents in the collection:
http://172.22.67.101:8983/solr/collection1/select?q=*:*

Get all documents in the collection belonging to shard1:
http://172.22.67.101:8983/solr/collection1/select?q=*:*&shards=shard1

Get all documents in the collection belonging to shard2:
http://172.22.67.101:8983/solr/collection1/select?q=*:*&shards=shard2

Let's check what ZooKeeper has in its cluster state:

root@solr3$ cd /opt/zookeeper/
root@solr3$ ./bin/zkCli.sh -server solr1:2181
[zk: 172.22.67.101:2181(CONNECTED) 1] get /clusterstate.json
{"collection1":{
    "shards":{
      "shard1":{
        "range":"80000000-ffffffff",
        "state":"active",
        "replicas":{
          "172.22.67.101:8983_solr_collection1":{
            "shard":"shard1",
            "state":"active",
            "core":"collection1",
            "collection":"collection1",
            "node_name":"172.22.67.101:8983_solr",
            "base_url":"http://172.22.67.101:8983/solr",
            "leader":"true"},
          "172.22.67.102:8983_solr_collection1":{
            "shard":"shard1",
            "state":"active",
            "core":"collection1",
            "collection":"collection1",
            "node_name":"172.22.67.102:8983_solr",
            "base_url":"http://172.22.67.102:8983/solr"}}},
      "shard2":{
        "range":"0-7fffffff",
        "state":"active",
        "replicas":{"172.22.67.103:8983_solr_collection1":{
            "shard":"shard2",
            "state":"active",
            "core":"collection1",
            "collection":"collection1",
            "node_name":"172.22.67.103:8983_solr",
            "base_url":"http://172.22.67.103:8983/solr",
            "leader":"true"}}}},
    "router":"compositeId"}}

It can be seen that there are 2 shards. Shard 1 has 2 replicas and shard 2 has only 1 replica. As you keep adding more nodes, the number of replicas per shard will keep increasing.
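The replica counts can also be pulled out of such a snapshot programmatically. A small sketch, using a trimmed sample that mirrors the structure above (node names are placeholders):

```python
import json

# Trimmed clusterstate.json sample: 2 replicas in shard1, 1 in shard2.
snapshot = """
{"collection1": {"shards": {
  "shard1": {"replicas": {"node_a": {"leader": "true"}, "node_b": {}}},
  "shard2": {"replicas": {"node_c": {"leader": "true"}}}
}}}
"""

state = json.loads(snapshot)
for shard, info in sorted(state["collection1"]["shards"].items()):
    # Exactly one replica per shard carries the "leader" flag.
    leaders = [n for n, r in info["replicas"].items() if r.get("leader") == "true"]
    print(shard, "replicas:", len(info["replicas"]), "leader:", leaders[0])
```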

Now let's configure Solr to use Tomcat, and add the ZooKeeper and numShards configuration into Tomcat for Solr.

Install Tomcat on all machines in the /opt folder, and create the following file on all machines:

cat /opt/apache-tomcat-7.0.40/conf/Catalina/localhost/solr.xml
<?xml version="1.0" encoding="UTF-8"?>
<Context path="/solr"
     docBase="/opt/solr/example/webapps/solr.war"
     allowLinking="true"
     crossContext="true"
     debug="0"
     antiResourceLocking="false"
     privileged="true">

     <Environment name="solr/home" override="true" type="java.lang.String" value="/opt/solr/example/solr" />
</Context>

On solr1 & solr2, create the solr.xml file for shard1:

root@solr1$ cat /opt/solr/example/solr/solr.xml

<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true" zkHost="solr1:2181,solr2:2181">
  <cores defaultCoreName="collection1" adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:15000}" hostPort="8080" hostContext="solr">
    <core loadOnStartup="true" shard="shard1" instanceDir="collection1/" transient="false" name="collection1"/>
  </cores>
</solr>

On solr3, create the solr.xml file for shard2:

root@solr3$ cat /opt/solr/example/solr/solr.xml

<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true" zkHost="solr1:2181,solr2:2181">
  <cores defaultCoreName="collection1" adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:15000}" hostPort="8080" hostContext="solr">
    <core loadOnStartup="true" shard="shard2" instanceDir="collection1/" transient="false" name="collection1"/>
  </cores>
</solr>

Set the numShards variable as part of the Solr startup environment on all machines:

root@solr1$ cat /opt/apache-tomcat-7.0.40/bin/setenv.sh
export JAVA_OPTS=' -Xms4096M -Xmx8192M -DnumShards=2 '

To be on the safer side, clean up the clusterstate.json in ZooKeeper. Now start Tomcat on all machines and check the catalina.out file for errors, if any. Once all nodes are up, you should be able to point your browser to http://172.22.67.101:8080/solr -> cloud -> graph and see the 3 nodes which form the cloud.

Let's add a new node to shard 2. It will be added as a replica of the current node in shard 2.

http://172.22.67.101:8080/solr/admin/cores?action=CREATE&name=collection1_shard2_replica2&collection=collection1&shard=shard2
Now let's check the clusterstate.json:

root@solr3$ cd /opt/zookeeper/
root@solr3$ ./bin/zkCli.sh -server solr1:2181
[zk: 172.22.67.101:2181(CONNECTED) 2] get /clusterstate.json
{"collection1":{
    "shards":{
      "shard1":{
        "range":"80000000-ffffffff",
        "state":"active",
        "replicas":{
          "172.22.67.101:8080_solr_collection1":{
            "shard":"shard1",
            "state":"active",
            "core":"collection1",
            "collection":"collection1",
            "node_name":"172.22.67.101:8080_solr",
            "base_url":"http://172.22.67.101:8080/solr",
            "leader":"true"},
          "172.22.67.102:8080_solr_collection1":{
            "shard":"shard1",
            "state":"active",
            "core":"collection1",
            "collection":"collection1",
            "node_name":"172.22.67.102:8080_solr",
            "base_url":"http://172.22.67.102:8080/solr"}}},
      "shard2":{
        "range":"0-7fffffff",
        "state":"active",
        "replicas":{
          "172.22.67.103:8080_solr_collection1":{
            "shard":"shard2",
            "state":"active",
            "core":"collection1",
            "collection":"collection1",
            "node_name":"172.22.67.103:8080_solr",
            "base_url":"http://172.22.67.103:8080/solr",
            "leader":"true"},
          "172.22.67.103:8080_solr_collection1_shard2_replica2":{
            "shard":"shard2",
            "state":"active",
            "core":"collection1_shard2_replica2",
            "collection":"collection1",
            "node_name":"172.22.67.103:8080_solr",
            "base_url":"http://172.22.67.103:8080/solr"}}}},
    "router":"compositeId"}}

Similar to adding nodes, you can unload and delete a core in Solr:

http://172.22.67.101:8080/solr/admin/cores?action=UNLOAD&core=collection1_shard2_replica2&deleteIndex=true
More details can be obtained from

http://wiki.apache.org/solr/SolrCloud

How BigPipe works

BigPipe is a technique invented by Facebook to help speed up page load times. It parallelizes browser rendering and server processing to achieve maximum efficiency. To understand BigPipe, let's first see how a user request-response cycle is executed in the traditional scenario:

  • The browser sends an HTTP request to the web server.
  • The web server parses the request, pulls data from the storage tier, then formulates an HTML document and sends it to the client in an HTTP response.
  • The HTTP response is transferred over the Internet to the browser.
  • The browser parses the response from the web server, constructs a DOM tree representation of the HTML document, and downloads the CSS and JavaScript resources referenced by the document.
  • After downloading the CSS resources, the browser parses them and applies them to the DOM tree.
  • After downloading the JavaScript resources, the browser parses and executes them.

In this scenario, while the web server is processing and creating the HTML document, the browser is idle; and while the browser is rendering the HTML page, the web server remains idle.

BigPipe breaks the page into smaller chunks known as pagelets, and overlaps page rendering in the browser with processing on the server, speeding up the page load time.

The request-response cycle in the BigPipe scenario is as follows:

  • The browser sends an HTTP request to the web server.
  • The server quickly renders a page skeleton containing the head tags and a body with empty div elements which act as containers for the pagelets. The HTTP connection to the browser stays open, as the page is not yet finished.
  • The browser starts downloading the BigPipe JavaScript library, and after that it starts rendering the page.
  • The PHP server process is still executing, building the pagelets. Once a pagelet is completed, its results are sent to the browser inside a BigPipe.onArrive(...) JavaScript tag.
  • The browser injects the HTML code for the received pagelet into the correct place. If the pagelet needs any CSS resources, those are downloaded as well.
  • After all pagelets have been received, the browser starts loading all the external JavaScript files needed by those pagelets asynchronously.
  • After the JavaScript files are downloaded, the browser executes all inline JavaScript.

This results in a parallel system: while the pagelets are still being generated on the server, the browser is rendering the ones already received. From the user's perspective the page is rendered progressively. The initial page content becomes visible much earlier, which dramatically improves the user-perceived latency of the page.
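The server-side half of this flow can be sketched as a generator that flushes the skeleton first and then each pagelet as it completes. This is a minimal Python sketch with hypothetical pagelet names and markup, not Facebook's actual implementation:

```python
import json

def page_skeleton(pagelet_ids):
    # Skeleton flushed first: empty divs act as containers for the pagelets.
    divs = "".join('<div id="%s"></div>' % pid for pid in pagelet_ids)
    return '<html><head><script src="bigpipe.js"></script></head><body>' + divs

def pagelet_chunk(pid, html):
    # Each finished pagelet is flushed as a BigPipe.onArrive(...) call;
    # the browser-side library injects the content into the matching div.
    payload = json.dumps({"id": pid, "content": html})
    return "<script>BigPipe.onArrive(%s);</script>" % payload

def render(pagelets):
    yield page_skeleton(list(pagelets))
    for pid, html in pagelets.items():  # in reality, as each one completes
        yield pagelet_chunk(pid, html)
    yield "</body></html>"

chunks = list(render({"header": "<h1>Feed</h1>", "chat": "<ul></ul>"}))
```

Each yielded chunk would be flushed to the still-open HTTP connection, so the browser can paint the skeleton and early pagelets while later ones are still being computed.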

Source : https://www.facebook.com/note.php?note_id=389414033919
open bigpipe implementation : https://github.com/garo/bigpipe

Database library to handle multiple masters and multiple slaves

In a large scale MySQL deployment there could be multiple masters and multiple slaves. Masters are generally in circular replication, and are used for running all inserts, updates and deletes. Slaves are used to run selects.

When you are dealing with multiple MySQL instances running in a large scale environment, it is important to take care of the lag between masters and slaves. To handle such scenarios, the code should be capable of firing a query on a dynamically chosen server. Which means that for each query, I as a developer should have the flexibility to decide which server the query should go to.

A list of existing scenarios :

1. All registrations / username generation should happen on a single master. If you generate usernames on both masters, there may be scenarios where, due to the lag between the MySQL masters, a new user is not yet reflected on the other master. In such a case, the user may register again and land on the other master – creating the same username again and breaking the circular replication. So all registrations and "username exists" checks should happen on a single master.

2. For all other insert, update and delete operations, the user should be stuck to a single master. Why? Assume there is a lag of around 30 minutes between the masters and slaves. The user inserts a record and immediately wants to see what was inserted. If we fetch the record from another master or a slave, it will not be available, because it has not yet been replicated. To take care of this scenario, whenever a record is inserted, the immediate selects have to go to the same server.

3. All other selects can be fired on any of the slaves. For example, the user logs into the site and sees his own profile. We show him his profile using one of the slave servers. This can be cached as well. The point here is that for data which has not been updated recently, the query can be fired on any of the slaves.
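The sticky-master idea in scenario 2 can be sketched in a few lines of Python. This is a minimal sketch, not the library itself: the master IPs are illustrative and an in-process dict stands in for the memcache session store.

```python
import random
import time

MASTERS = ["10.20.1.1", "10.20.1.2"]  # illustrative master IPs
_pins = {}                            # stand-in for a memcache session store
PIN_TTL = 3600                        # keep the user on one master for an hour

def master_for_user(uid):
    # Reuse the pinned master while the pin is fresh, so reads-after-write
    # hit the server that actually has the row.
    pin = _pins.get(uid)
    if pin and pin[1] > time.time():
        return pin[0]
    # No fresh pin: pick a random master and pin the user to it.
    ip = random.choice(MASTERS)
    _pins[uid] = (ip, time.time() + PIN_TTL)
    return ip
```

The same routing decision is what the PHP library below implements with its StickyDbSelectionStrategy backed by memcache.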

The following piece of code/library handles most of the scenarios. Please feel free to suggest modifications or improvements.


<?php
/**
* Created by : Jayant Kumar
* Description : php database library to handle multiple masters & multiple slaves
**/
class DatabaseList // jk : Base class
{
    public $db = array();

    public function setDb($db)
    {
        $this->db = $db;
    }

    public function getDb()
    {
        return $this->db;
    }
}

class SDatabaseList extends DatabaseList // jk : Slave mysql servers
{
    function __construct()
    {
        $this->db[0] = array('ip'=>'10.20.1.11', 'u'=>'user11', 'p'=>'pass11', 'db'=>'database1');
        $this->db[1] = array('ip'=>'10.20.1.12', 'u'=>'user12', 'p'=>'pass12', 'db'=>'database1');
        $this->db[2] = array('ip'=>'10.20.1.13', 'u'=>'user13', 'p'=>'pass13', 'db'=>'database1');
    }
}

class MDatabaseList extends DatabaseList // jk : Master mysql servers
{
    function __construct()
    {
        $this->db[0] = array('ip'=>'10.20.1.1', 'u'=>'user1', 'p'=>'pass1', 'db'=>'database1');
        $this->db[1] = array('ip'=>'10.20.1.2', 'u'=>'user2', 'p'=>'pass2', 'db'=>'database2');
    }
}

class MemcacheList extends DatabaseList // jk : memcache servers
{
    function __construct()
    {
        $this->db[0] = array('ip'=>'localhost', 'port'=>11211);
    }
}

interface DatabaseSelectionStrategy // jk : Database selection interface
{
    public function getCurrentDb();
}

class StickyDbSelectionStrategy implements DatabaseSelectionStrategy // jk : sticky db. For update / delete / insert
{
    private $dblist;
    private $uid;
    private $sessionDb;
    private $sessionTimeout = 3600;

    function __construct(DatabaseList $dblist)
    {
        $this->dblist = $dblist;
    }

    public function setUserId($uid)
    {
        $this->uid = $uid;
    }

    public function setSessionDb($sessionDb)
    {
        $this->sessionDb = $sessionDb->db;
    }

    private function getDbForUser() // jk : get db for this user. If not found - assign him a random master db.
    {
        $memc = new Memcache;
        foreach ($this->sessionDb as $key => $value) {
            $memc->addServer($value['ip'], $value['port']);
        }
        $dbIp = $memc->get($this->uid);
        if ($dbIp == null) {
            $masterlist = new MDatabaseList();
            $randomdb = new RandomDbSelectionStrategy($masterlist);
            $mdb = $randomdb->getCurrentDb();
            $dbIp = $mdb['ip'];
            $memc->set($this->uid, $dbIp, false, $this->sessionTimeout);
        }
        return $dbIp;
    }

    public function getCurrentDb()
    {
        $dbIp = $this->getDbForUser();
        foreach ($this->dblist->db as $key => $value) {
            if ($value['ip'] == $dbIp) {
                return $value;
            }
        }
    }
}

class RandomDbSelectionStrategy implements DatabaseSelectionStrategy // jk : select random db from list
{
    private $dblist;

    function __construct(DatabaseList $dblist)
    {
        $this->dblist = $dblist;
    }

    public function getCurrentDb()
    {
        $cnt = sizeof($this->dblist->db);
        $rnd = rand(0, $cnt - 1);
        return $this->dblist->db[$rnd];
    }
}

class SingleDbSelectionStrategy implements DatabaseSelectionStrategy // jk : select one master db - to generate unique keys
{
    private $dblist;

    function __construct(DatabaseList $dblist)
    {
        $this->dblist = $dblist;
    }

    public function getCurrentDb()
    {
        return $this->dblist->db[0];
    }
}

interface Database
{
    public function getIp();
    public function getDbConnection();
}

class DatabaseFactory implements Database // cmt : database factory
{
    private $db;

    public function getIp()
    {
        return $this->db['ip'];
    }

    public function getDbConnection($type = 'slave', $uid = 0)
    {
        $dbStrategy = null;
        switch ($type) {
            case 'slave':
                $dblist = new SDatabaseList();
                $dbStrategy = new RandomDbSelectionStrategy($dblist);
                break;
            case 'master':
                $dblist = new MDatabaseList();
                $dbStrategy = new StickyDbSelectionStrategy($dblist);
                $dbStrategy->setSessionDb(new MemcacheList());
                $dbStrategy->setUserId($uid);
                break;
            case 'unique':
                $dblist = new MDatabaseList();
                $dbStrategy = new SingleDbSelectionStrategy($dblist);
                break;
        }
        $this->db = $dbStrategy->getCurrentDb();
        print_r($this->db);
        // return mysql_connect($this->db['ip'], $this->db['u'], $this->db['p']);
    }
}

// tst : test this out...
$factory = new DatabaseFactory();
echo 'Slave : '; $factory->getDbConnection('slave');
echo 'Slave2 : '; $factory->getDbConnection('slave');
echo 'Unique : '; $factory->getDbConnection('unique');
echo 'New Master 100: '; $factory->getDbConnection('master', 100);
echo 'New Master 101: '; $factory->getDbConnection('master', 101);
echo 'New Master 102: '; $factory->getDbConnection('master', 102);
echo 'old Master 100: '; $factory->getDbConnection('master', 100);
echo 'old Master 102: '; $factory->getDbConnection('master', 102);
?>

How to create a 3-node Riak cluster?

A very brief intro about Riak – http://basho.com/riak/. Riak is a distributed database written in Erlang. Each node in a Riak cluster contains a complete, independent copy of the Riak package. A Riak cluster does not have any "master". Data is distributed across nodes using consistent hashing, which ensures that the data is evenly distributed and that a new node can be added with minimum reshuffling. Each object in a Riak cluster has multiple copies distributed across multiple nodes. Hence the failure of a node does not necessarily result in data loss.
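The consistent-hashing idea can be sketched as follows. This is illustrative only: Riak's actual ring divides a 160-bit SHA-1 keyspace into a fixed number of partitions (64 by default, as the ring_num_partitions output later shows).

```python
import bisect
import hashlib

class Ring:
    """Toy consistent-hash ring: each node owns many virtual points."""
    def __init__(self, nodes, vnodes=64):
        self.ring = sorted(
            (int(hashlib.sha1(("%s-%d" % (n, v)).encode()).hexdigest(), 16), n)
            for n in nodes for v in range(vnodes)
        )
        self.points = [h for h, _ in self.ring]

    def node_for(self, key):
        # A key belongs to the first ring point at or after its hash,
        # wrapping around to the start of the ring if needed.
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        i = bisect.bisect(self.points, h) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["riak@10.20.220.2", "riak@10.20.220.3", "riak@10.20.220.4"])
```

Adding a fourth node to such a ring moves only the keys whose nearest point now belongs to the new node, which is why the rebalancing transfers reported by riak-admin stay small.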

To set up a 3-node Riak cluster, we first set up 3 machines with Riak installed. To install Riak on Ubuntu machines, all that needs to be done is to download the "deb" package and do a dpkg -i riak_x.x.x_amd64.deb. The version I used here was 1.3.1. 3 machines with IPs 10.20.220.2, 10.20.220.3 & 10.20.220.4 were set up.

To set up Riak on the 1st node, there are 3 config changes that need to be made:

1. Replace the http IP: in /etc/riak/app.config, replace the IP in {http, [ {"127.0.0.1", 8098 } ]} with 10.20.220.2
2. Replace pb_ip: in /etc/riak/app.config, replace the IP in {pb_ip, "127.0.0.1" } with 10.20.220.2
3. Change the name of the Riak node to match your IP: in /etc/riak/vm.args, change the name to riak@10.20.220.2

If you had started the Riak cluster earlier – before making the IP-related changes – you will need to clear the ring and the backend DB. Do the following:

rm -rf /var/lib/riak/bitcask/
rm -rf /var/lib/riak/ring/

To start the first node, run riak start.

To prepare the second node, replace the IPs with 10.20.220.3. Once done, do a "riak start". To join this node to the cluster, do the following:

root@riak2# riak-admin cluster join riak@10.20.220.2
Attempting to restart script through sudo -H -u riak
Success: staged join request for 'riak@10.20.220.3' to 'riak@10.20.220.2'

Check out the cluster plan:

root@riak2# riak-admin cluster plan
Attempting to restart script through sudo -H -u riak
=============================== Staged Changes ================================
Action         Nodes(s)
-------------------------------------------------------------------------------
join           'riak@10.20.220.3'
-------------------------------------------------------------------------------

NOTE: Applying these changes will result in 1 cluster transition

###############################################################################
                         After cluster transition 1/1
###############################################################################

================================= Membership ==================================
Status     Ring    Pending    Node
-------------------------------------------------------------------------------
valid     100.0%     50.0%    'riak@10.20.220.2'
valid       0.0%     50.0%    'riak@10.20.220.3'
-------------------------------------------------------------------------------
Valid:2 / Leaving:0 / Exiting:0 / Joining:0 / Down:0

WARNING: Not all replicas will be on distinct nodes

Transfers resulting from cluster changes: 32
  32 transfers from 'riak@10.20.220.2' to 'riak@10.20.220.3'

Commit the cluster changes:

root@riak2# riak-admin cluster commit
Attempting to restart script through sudo -H -u riak
Cluster changes committed

Add 1 more node

Prepare the 3rd node by replacing the IPs with 10.20.220.4, and add this node to the Riak cluster:

root@riak3# riak-admin cluster join riak@10.20.220.2
Attempting to restart script through sudo -H -u riak
Success: staged join request for 'riak@10.20.220.4' to 'riak@10.20.220.2'

Check the plan and commit the new node to the cluster:

root@riak3# riak-admin cluster plan
Attempting to restart script through sudo -H -u riak
=============================== Staged Changes ================================
Action         Nodes(s)
-------------------------------------------------------------------------------
join           'riak@10.20.220.4'
-------------------------------------------------------------------------------

NOTE: Applying these changes will result in 1 cluster transition

###############################################################################
                         After cluster transition 1/1
###############################################################################

================================= Membership ==================================
Status     Ring    Pending    Node
-------------------------------------------------------------------------------
valid      50.0%     34.4%    'riak@10.20.220.2'
valid      50.0%     32.8%    'riak@10.20.220.3'
valid       0.0%     32.8%    'riak@10.20.220.4'
-------------------------------------------------------------------------------
Valid:3 / Leaving:0 / Exiting:0 / Joining:0 / Down:0

WARNING: Not all replicas will be on distinct nodes

Transfers resulting from cluster changes: 21
  10 transfers from 'riak@10.20.220.2' to 'riak@10.20.220.4'
  11 transfers from 'riak@10.20.220.3' to 'riak@10.20.220.4'

root@riak3# riak-admin cluster commit
Attempting to restart script through sudo -H -u riak
Cluster changes committed
Check the status:

root@riak3# riak-admin status | grep ring
Attempting to restart script through sudo -H -u riak
ring_members : ['riak@10.20.220.2','riak@10.20.220.3','riak@10.20.220.4']
ring_num_partitions : 64
ring_ownership : <<"[{'riak@10.20.220.2',22},{'riak@10.20.220.3',21},{'riak@10.20.220.4',21}]">>
ring_creation_size : 64

For advanced configuration, refer to:

http://docs.basho.com/riak/latest/cookbooks/Adding-and-Removing-Nodes/

How to clean up a huge MongoDB collection?

As most MongoDB users know, MongoDB relies heavily on RAM. The more RAM you give the DB server, the happier MongoDB is. But if the data/index size exceeds the available RAM, you see increasing response times for all your queries.

Recently we had an issue where the DB size exceeded the RAM we had on our machine. Suddenly we saw query response times increase to 10-20 times the original. Luckily we had a cleanup strategy in place, but had never got the chance to execute it.

We were dealing with around 110 million entries and expected that the cleanup would remove around 50% of them. The problem was our setup.

We had multiple slaves in our replica set, so running a simple delete query on the master would send the deletes to the slaves as well. What we wanted to do was remove all entries older than, say, "n" days – for example, 6 months. The delete query for this would be:

db.test1.remove( { ts : { $lt : ISODate("2012-09-27T00:00:00.000Z") } } )

This fires 1 query on the master, but for each record deleted on the master, a delete entry is written to the oplog, which is then replicated to the slaves. So if this query is run on the master and we intend to remove 50 million of our 110 million entries, we would end up with 50 million entries in the oplog – which is a lot of IO.

Another solution that crossed our mind was to disable the oplog by creating a standalone instance of MongoDB and running our delete query there. This should theoretically have worked. But even with the oplog disabled, the deletions were terribly slow. After firing the query and waiting for around 3 hours, we knew that this would not work.

With this plan aborted, another small beam of light came through. Remember MySQL and how we used to move data across tables?

insert into table2 select * from table1 where ...

We tried replicating this statement in Mongo and were successful:

db.test1.find( { ts : { $gt : ISODate("2012-09-27T00:00:00.000Z") } } ).forEach( function(c){ db.test2.insert(c); } )


This query took approximately 30 minutes to execute. And we had a new collection col2 ready with data greater than 6 months. Now all we needed to do was to rename the collections. Prefer swapping to backup existing data – in case something went wrong.

db.test1.renameCollection("temp");
db.test2.renameCollection("test1");
db.temp.renameCollection("test2");


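The whole copy-then-swap sequence can be sketched as an in-memory toy (plain Node.js; arrays stand in for collections, reassignments for renames):

```javascript
// Toy in-memory model of the copy-then-swap cleanup: copy forward
// only documents newer than the cutoff, then swap names so the old
// collection survives as a backup.
const cutoff = new Date("2012-09-27T00:00:00.000Z");

const db = {
  test1: [
    { ts: new Date("2012-01-15T00:00:00Z"), v: "old" },
    { ts: new Date("2012-12-01T00:00:00Z"), v: "recent" },
  ],
  test2: [],
};

// The find(...).forEach(insert) step: keep only the recent documents.
db.test1.forEach(function (c) {
  if (c.ts > cutoff) db.test2.push(c);
});

// The three renames: test1 -> temp, test2 -> test1, temp -> test2.
db.temp = db.test1;
db.test1 = db.test2;
db.test2 = db.temp;
delete db.temp;
```

After the swap, test1 holds only the recent data that queries need, while test2 holds the full original set as a fallback.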
To keep the data trimmed going forward, we converted the collection into a TTL collection:

db.test1.ensureIndex( { "ts" : 1 }, { expireAfterSeconds : 15552000 } )

So any entry older than 6 months (15552000 seconds) is automatically deleted.
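The expireAfterSeconds value is just 6 months, approximated as 180 days, expressed in seconds:

```javascript
// 6 months ~ 180 days, converted to seconds for expireAfterSeconds.
const days = 180;
const expireAfterSeconds = days * 24 * 60 * 60;
console.log(expireAfterSeconds); // 15552000
```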

G-Shock

I have been a fan of G-Shock watches for quite some time, but this was my first experience of owning one. After much dilemma over whether to get one, I went ahead and bought the Mudman 9300.

Features:
Thermometer
Compass
Moon data
Dual time
5 alarms with snooze
Stopwatch
Countdown timer
Auto-backlight
Power-saving mode
Solar powered
Battery level indicator
Water resistant to 200 meters
Shock resistant
World clock
Hourly chime

And really good looking, too. Worth the money spent…

Microsoft Licences

Recently I got the opportunity to be part of the Windows team. We are (yes, still are) using a Microsoft (yes, that Microsoft) product to run one of our websites due to legacy bindings: user base, existing technology, backend team.

My first encounter with Microsoft on the enterprise side came when we were trying to use Microsoft Navision, a supply chain management solution, at one of my previous companies. I say we were "trying" to use it because it took us more than 6-8 months to put it into production, and another 3 months for training. Microsoft sucks the user in: I saw that if I purchase one product from Microsoft, the dependencies are so well built in that I eventually end up purchasing a lot of other Microsoft products.

Microsoft NAV cost us around 1 million INR. Now, I cannot use NAV as it is; it needs to be customized. And it cannot be customized by just any developer: only companies or developers who hold a licence to do so may customize it, and that licence is extremely expensive, maybe even more expensive than a licence for selling liquor in India. Once I pay for customization, I have to deploy the software somewhere, for which I need Microsoft licences: OS, web server, database server. And then, of course, plan for HA (high availability), which means at least two of each. So the strategy here is that once you purchase a product licence, you need the complete platform licence, and you eventually end up paying many times the actual product cost.

Another concept I became aware of recently was "software assurance". What is that? Well, have you heard of life insurance? Software Assurance (SA) is somewhat similar. It ensures that you get all the patches and version upgrades (which may or may not be free) as and when they are released. So if you purchase Windows 2012 and plan to shift to Windows 2014 when it is released, that is possible, though there may be some cost involved.

Among all Microsoft licences, I believe the DB licence is the killer. The Standard licence costs about 1/4th of the Enterprise licence. The difference is that Standard can utilize only up to 2 cores in a machine, while Enterprise can utilize any number of cores, with the licence cost charged in multiples of "dual cores". So if you have a dual quad-core machine (8 cores), you end up purchasing 4 Enterprise licences, which comes to 16 times the Standard licence cost.
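Using the numbers from this post (illustrative arithmetic only, not Microsoft's actual price list), the 16x figure works out like this:

```javascript
// Illustrative arithmetic using the post's own numbers,
// not an actual Microsoft price list.
const standardCost = 1;                      // normalize Standard licence = 1 unit
const enterprisePackCost = 4 * standardCost; // Enterprise pack ~4x Standard
const coresPerPack = 2;                      // Enterprise licensed per dual core

const cores = 8;                             // dual quad-core machine
const packsNeeded = cores / coresPerPack;    // 8 cores / 2 = 4 packs
const totalCost = packsNeeded * enterprisePackCost;
console.log(totalCost); // 16x the Standard licence cost
```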

And why should I pay Microsoft when there are so many technologies that are better and available free of cost? If I have to pay for support anyway, why should I pay for the product and then for the support? Why not get the product for free and pay only for the support?

My final assessment was that Microsoft is like a spider's web: once you get entangled, you keep getting more and more entangled, and there is no getting out without losing your own investment. Beware!!

Does this work ??

Recently I had a very unique experience with my Tata Safari. I own a 1-year-old Tata Safari Dicor 2.2.
It was parked for about 2-3 hours near Sector 30, Noida. When I came back to the car, the alarm was blaring, and there was no one near it. Assuming there was some malfunction, I locked and unlocked the car, but the alarm kept blaring.
When I finally got close, I saw that the driver-side door was open. I looked around and saw no one interested in either the car or the alarm. I climbed inside and saw my stereo still in place, so I figured I had left the door open "by mistake", started the engine, and headed home. Then I noticed that my door, the driver-side door, was unlocked. I tried pushing the lock, but it was stuck. It was then that I realized some "not so smart" thief had tried to break into my car and failed.
This is when the story starts.
I had a sleepless night because my car would not lock. I woke up 3-4 times just to check if my car was still standing. To add fuel to the fire, I googled about Tata Safari thefts and found that both the Safari and the Scorpio top the list of most-stolen vehicles. My Tata Safari has an engine immobilizer, a gear lock, and the now-broken central lock. But I read cases where even a Tata Safari with GPS was stolen.
The next morning I went to a nearby Tata service center and told them the complete tale. People were awed, but they told me a different tale: one of replacing the complete lock set and getting a new set of keys. I was like, "I would almost never use the key to open the door. Why spend almost 7000 to replace it? I am OK with the key not working as long as the central locking works."
The driver-side door lock was examined. The assumption was that the thief had inserted some sort of screwdriver to try to force the lock open. As a result, the key channel was damaged and the key would not go into the lock. The lock was stuck in a tilted position, because of which the central locking was open and "not movable". My simple idea was to bring the lock back to its original vertical position so that central locking would work again. But sadly, the TASS guys were not willing to comply.
I took it to another TASS and then to a local mechanic. I made both of them understand that it was a simple matter of bringing the lock back to its vertical position. Everyone was of the opinion that a complete new lock set was needed.
Even I was convinced that spending 7K was the only option. But as a last try, I approached another local mechanic and explained the complete story to him. This guy said he would try but could not promise anything. He opened up the door from the inside, took a look at the lock, and turned it anti-clockwise to make it vertical. And bingo, my central locking was back in place. He charged Rs. 200, but I was OK with that; he had saved me a lot.
If I had gone by the book, it would have cost me much more to put in place a solution I would hardly ever use.