Wednesday, February 15, 2012

Scaling using mongodb : HA and Automatic sharding (part 1)

MongoDB introduces the concepts of automatic sharding and replica sets. Let's see how both can be used to create a highly available database cluster that is horizontally scalable.

First, let's take a look at the HA capability of MongoDB. HA is handled in MongoDB by replica sets. A replica set is basically two or more mongod nodes that are copies/replicas of each other. The set automatically elects a primary and provides automated failover and disaster recovery.
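
From an application's point of view this failover is transparent as long as the driver knows about all the members of the set. Most drivers accept a standard replica-set connection string; here is a sketch using the set name and addresses we will configure below (exact option names can vary slightly between drivers):

mongodb://192.168.1.241:10000,192.168.1.242:10001/?replicaSet=set_a

The driver discovers the current primary from any reachable member and redirects writes when a new primary is elected.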

Let's create a replica set of 2 nodes first and proceed from there.

I have taken 2 machines, 192.168.1.241 and 192.168.1.242, for creating the replica set. Let's download the MongoDB 2.0.2 tarball and extract it on 241 and 242. We will create separate directories for the database and the replica db. Here is how we go about it.

241:/home/mongodb } mkdir -p data/db
241:/home/mongodb } mkdir -p repl/db
242:/home/mongodb } mkdir -p data/db
242:/home/mongodb } mkdir -p repl/db

Create the primary replica set server on 241

241:/home/mongodb } ./bin/mongod --shardsvr --replSet set_a --logappend --logpath ./data/log --pidfilepath ./data/mongodb.pid --fork --dbpath ./data/db --directoryperdb --bind_ip 192.168.1.241 --port 10000 &
Create the secondary replica set server on 242, running on port 10001

242:/home/mongodb } ./bin/mongod --shardsvr --replSet set_a --logappend --logpath ./data/log --pidfilepath ./data/mongodb.pid --fork --dbpath ./data/db --directoryperdb --bind_ip 192.168.1.242 --port 10001 &
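
Since mongod is started with --fork, it detaches from the terminal. A quick way to confirm that each instance came up is to check its log and pid file (same paths as above) on each box; mongod logs a "waiting for connections on port ..." line once it is ready to accept clients.

241:/home/mongodb } tail -n 20 ./data/log
241:/home/mongodb } cat ./data/mongodb.pid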
Check whether the replica set is working fine.

241:/home/mongodb } ./bin/mongo 192.168.1.241:10000
241: } > rs.status()
{
        "startupStatus" : 3,
        "info" : "run rs.initiate(...) if not yet done for the set",
        "errmsg" : "can't get local.system.replset config from self or any seed (EMPTYCONFIG)",
        "ok" : 0
}
241: } > rs.initiate()
{
        "info2" : "no configuration explicitly specified -- making one",
        "me" : "192.168.1.243:10000",
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}
241: } > rs.isMaster()
{
        "setName" : "set_a",
        "ismaster" : true,
        "secondary" : false,
        "hosts" : [
                "192.168.1.241:10000"
        ],
        "primary" : "192.168.1.241:10000",
        "me" : "192.168.1.241:10000",
        "maxBsonObjectSize" : 16777216,
        "ok" : 1
}
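
As an aside, rs.initiate() also accepts an explicit configuration document, which lets you name the set and list all the members up front instead of rs.add()-ing them one by one. A sketch using the same set name and hosts as above:

241: } > rs.initiate({ _id : "set_a", members : [ { _id : 0, host : "192.168.1.241:10000" }, { _id : 1, host : "192.168.1.242:10001" } ] })

Since we initiated with the defaults here, 242 still has to be added manually.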

We can see that 241 is the master, but 242 has not been added to the replica set yet. Let's add 242 as a secondary member of the set.

241: } >  rs.add("192.168.1.242:10001")
{ "ok" : 1 }

241: } > rs.conf()
{
        "_id" : "set_a",
        "version" : 2,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.1.241:10000"
                },
                {
                        "_id" : 1,
                        "host" : "192.168.1.242:10001"
                }
        ]
}
241: } > rs.status()
{
        "set" : "set_a",
        "date" : ISODate("2012-02-15T09:42:07Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.1.241:10000",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "optime" : {
                                "t" : 1329298881000,
                                "i" : 1
                        },
                        "optimeDate" : ISODate("2012-02-15T09:41:21Z"),
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.1.242:10001",
                        "health" : 1,
                        "state" : 3,
                        "stateStr" : "RECOVERING",
                        "uptime" : 46,
                        "optime" : {
                                "t" : 0,
                                "i" : 0
                        },
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2012-02-15T09:42:07Z"),
                        "pingMs" : 5339686
                }
        ],
        "ok" : 1
}

242 is in the RECOVERING state, replicating the data that is already there on 241. Once the initial sync is complete, the second node starts acting as a secondary; we will run a quick write/read check once that happens.

241: } > rs.status()
{
        "set" : "set_a",
        "date" : ISODate("2012-02-15T10:33:32Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.1.241:10000",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "optime" : {
                                "t" : 1329298881000,
                                "i" : 1
                        },
                        "optimeDate" : ISODate("2012-02-15T09:41:21Z"),
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.1.242:10001",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 3131,
                        "optime" : {
                                "t" : 1329298881000,
                                "i" : 1
                        },
                        "optimeDate" : ISODate("2012-02-15T09:41:21Z"),
                        "lastHeartbeat" : ISODate("2012-02-15T10:33:31Z"),
                        "pingMs" : 0
                }
        ],
        "ok" : 1
}
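
To sanity-check replication, we can write a document on the primary and read it back from the secondary. The database and collection names below (testdb, foo) are just examples; note that reads on a secondary require slaveOk in this version of the shell.

241: } > use testdb
241: } > db.foo.insert({ msg : "hello from primary" })

242:/home/mongodb } ./bin/mongo 192.168.1.242:10001
242: } > rs.slaveOk()
242: } > db.getSiblingDB("testdb").foo.findOne()

The findOne() on 242 should return the document that was just inserted on 241.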

Now the replica set is working fine, and the secondary will take over as primary when the current primary goes out of service, provided a majority of the set is still reachable. With only two members the lone survivor cannot form a majority on its own, which is why a replica set should ideally have 3 or more members (or at least an arbiter).
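
A minimal sketch of adding an arbiter on a third machine (the address 192.168.1.243 and port 10002 below are assumptions, not part of the setup above). The arbiter holds no data; it only votes in elections, so it can run on very modest hardware:

243:/home/mongodb } mkdir -p arb/db
243:/home/mongodb } ./bin/mongod --replSet set_a --logappend --logpath ./arb/log --pidfilepath ./arb/mongodb.pid --fork --dbpath ./arb/db --bind_ip 192.168.1.243 --port 10002 &

241: } > rs.addArb("192.168.1.243:10002")

After this, if 241 goes down, 242 and the arbiter still form a majority and 242 can be elected primary.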
