Monday, November 14, 2011

Storm Installation

New to Storm? My previous post could help you find your feet. In this post, we'll go the extra mile and install Storm. This has two aspects to it:
    - Setting up Storm locally
    - Setting up a Storm cluster
Let's begin with setting up Storm locally, which is little more than a two-step procedure.

Setting up Storm locally

This is pretty much mandatory!
That's because even if your aim is to get topologies working on a cluster, submitting topologies to that cluster requires a 'storm client', which in turn requires Storm to be set up on your system locally.
Moreover, it is always better to dry-run topologies on your local system before deploying them as a jar on the cluster; it saves you from exhaustive debugging on the cluster. So, moving forth, we'll be undertaking the following two tasks under this heading:
  1. Setting up Storm for running topologies on the local machine
  2. Setting up the Storm client
As an obvious prerequisite, you must be working on Linux with Java 6 installed.
Steps for accomplishing the first task :
  • Download a storm release from
cd to the directory where you unzipped the Storm release and check that bin/storm is executable by trying any of these
    - bin/storm ui
    - bin/storm supervisor
    - bin/storm nimbus
Next, to get the ball rolling on running topologies in Storm, the best starting point is the 'storm-starter' project using Eclipse. The steps for this are :
  1. Obtain the storm-starter project from the following location :
  2. Add the storm-0.5.*.jar and the other required jars present in the Storm setup to the build path of your Eclipse project.
  3. If you want to start with the simplest thing that could possibly work, the simplest topology in this project, i.e. the '', could do the trick.
  4. Since this topology uses the 'SplitSentence' bolt, which has been implemented in Python, here's a Java substitute for the 'SplitSentence' class in case your preference is Java.

public static class SplitSentence implements IRichBolt {
    OutputCollector _collector;

    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        _collector = collector;
    }

    public void execute(Tuple tuple) {
        String sentence = tuple.getString(0);
        for (String word : sentence.split(" ")) {
            // anchor each emitted word to the input tuple for reliability
            _collector.emit(tuple, new Values(word));
        }
        // acknowledge the input tuple so it is not replayed
        _collector.ack(tuple);
    }

    public void cleanup() {
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
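Since the bolt's core logic is just a string split, you can sanity-check it in plain Java before wiring it into a topology. A standalone sketch (no Storm classes involved; the class and method names here are my own, for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Standalone check of the splitting logic used by the SplitSentence bolt:
// each input sentence would be emitted as one tuple per word.
public class SplitSentenceCheck {
    static List<String> split(String sentence) {
        return new ArrayList<String>(Arrays.asList(sentence.split(" ")));
    }

    public static void main(String[] args) {
        System.out.println(split("the cow jumped over the moon"));
    }
}
```

Each element of the returned list corresponds to one emit() call the bolt would make for that sentence.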
Successfully accomplishing this leaves you with a verified environment for testing and running any Storm topology locally.

Setting up the Storm client

Communicating with a remote cluster and submitting topologies to it requires a Storm client on your system.
For this, configure the 'storm.yaml' file located in your Storm setup's conf folder by adding the following line to it, and place a copy of the file at '~/.storm/storm.yaml' :
    nimbus.host: "ip_of_your_remote_cluster's_nimbus"
As an important note, also check the permissions of this file so that it is accessible.
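Putting it together, a minimal client-side '~/.storm/storm.yaml' would look like this (a sketch using the same placeholder address as above):

```yaml
# Minimal Storm client configuration (~/.storm/storm.yaml):
# point the client at the cluster's nimbus node.
nimbus.host: "ip_of_your_remote_cluster's_nimbus"
```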
Now you should be able to deploy jars on any remote cluster (steps to set up a remote cluster are listed later in this post) using :
    cd /path_to_your_storm_setup
    bin/storm jar location_of_jar_on_your_system/WordCount.jar storm.starter.WordCountTopology
and kill running topologies using
    bin/storm kill wordcount

Setting up a Storm Cluster

Time to kick off setting up a Storm cluster. Here I am assuming a cluster of 3 machines, of which one is the master node, i.e. nimbus, and the other two are worker nodes.

Prerequisites :
  1. Java 6 and Python 2.6
  2. JAVA_HOME should be set; if it is not, set it in your bashrc
These should be installed on all the machines of the cluster.

Installation steps :
  • Setup the Zookeeper Cluster :
Zookeeper is the coordinator for a Storm cluster. The interaction between nimbus and the worker nodes happens through Zookeeper, so it is compulsory to set up a Zookeeper cluster first. You can follow the instructions from here :
  • Install native dependencies
In local mode, Storm uses a pure Java messaging system so that you don't need to install native dependencies on your development machine. But in the case of a cluster, ZeroMQ and JZMQ are prerequisites on all the nodes of the cluster, including nimbus.

Download and installation commands for ZeroMQ 2.1.7 :

  • Obtain ZeroMQ using
  •  tar -xzf zeromq-2.1.7.tar.gz
  •  cd zeromq-2.1.7
  •  ./configure
  •  make
  •  sudo make install

Download and installation commands for JZMQ :

  •  Obtain JZMQ using 
git clone                                   
  •  cd jzmq
  •  ./
  •  ./configure
  •  make 
  •  sudo make install

- Copy the Storm setup to all the machines in the cluster. Assuming the following IPs for clarity :
    nimbus IP : A.B.C.Nimbus
    supervisor node IPs : A.B.C.Sup1 and A.B.C.Sup2
Edit the conf/storm.yaml file as follows:

'storm.yaml' file for the master node/nimbus :

storm.zookeeper.servers:
     - "A.B.C.Sup1"
     - "A.B.C.Sup2"
storm.local.dir: "path_to_any_dir_for_temp_storage"
java.library.path: "/usr/local/lib/"
nimbus.host: "A.B.C.Nimbus"
nimbus.task.launch.secs: 240
supervisor.worker.start.timeout.secs: 240
supervisor.worker.timeout.secs: 240

'storm.yaml' file for all worker nodes :

storm.zookeeper.servers:
      - "A.B.C.Sup1"
      - "A.B.C.Sup2"
storm.local.dir: "path_to_any_dir_for_temp_storage"
java.library.path: "/usr/local/lib/"
nimbus.host: "A.B.C.Nimbus"
supervisor.slots.ports:
      - 6700
      - 6701
      - 6702
      - 6703

Note : Also copy this storm.yaml file to “~/.storm/” folder on the respective systems.
This completes the cluster setup, and you can now submit topologies to it from your system after creating a jar. For further assistance with this, follow :

That's all from my end . . .  Hope it was helpful !!!

Friday, November 11, 2011

Twitter's Storm : Real-time Hadoop

The data processing ecosystem started to experience a scarcity of solutions that could process the rising volumes of structured and unstructured data. Traditional database management systems could in no way attain the required level of performance.
This massively growing data necessitated two kinds of processing solutions due to the nature of their source. One was “ batch processing ” that could perform functions on enormous volumes of stored data and the other was “ realtime processing ” that could continuously query the incoming data and stream out the results.

In this scenario, Hadoop proved to be a savior and brilliantly covered the first aspect of processing, i.e. batch processing, but a reliable solution for realtime processing that could perform as well as Hadoop does in its own sphere was yet to be conceived.

STORM somewhere seems to put an end to this search.

About Storm

"Storm is a distributed, fault-tolerant stream processing system," as stated by its developers. It can be called the "Hadoop of Realtime" as it fulfills all the requirements of a realtime processing platform. Parallel realtime computation is now a lot easier with Storm in the picture. It is meant for :
  • Stream processing : process messages and update a variety of databases.
  • Continuous computation : do continuous computation and stream out the results as they're computed.
  • Distributed RPC : parallelize an intense query so that you can compute it in realtime.

Some basic terminology :

  • Topology : It is a graph of computation. All nodes have a processing role to play. Just as we submit jobs in Hadoop, in Storm we submit topologies, which continue executing until they are shut down.

  • Modes of Operation :
    - Local Mode : When topologies are developed and tested on local machine.
    - Remote Mode : When topologies are submitted to and executed on a remote cluster.

  • Nimbus : In the case of a cluster, the master node is called Nimbus. To run topologies on the cluster, our local machine communicates with nimbus, which in turn assigns jobs to all the cluster nodes.

  • Stream : It is an unbounded sequence of tuples which are processed in parallel in a distributed manner. Every stream has an id.

  • Spout : The source of streams in a topology. It generally obtains input from an external source and emits streams into the topology. It can emit multiple streams, each with a different definition. A spout can be reliable (capable of re-emitting a tuple if it has not been processed by the topology) or unreliable (the spout emits and forgets about the tuple).

  • Bolt : Consumes any number of streams from spouts or other bolts and processes them to generate output streams. In the case of complex computations, there can be multiple bolts.
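The classic example of a bolt is word counting: for each incoming word, increment a counter and pass the running count downstream. Stripped of the Storm plumbing, the core logic is just a map update. A standalone sketch (my own illustrative class, not the storm-starter code itself):

```java
import java.util.HashMap;
import java.util.Map;

// Core logic of a word-count bolt: a running count per word,
// updated the way each execute(tuple) call would update it.
public class WordCountLogic {
    private final Map<String, Integer> counts = new HashMap<String, Integer>();

    // Returns the updated count that the bolt would emit downstream.
    public int count(String word) {
        Integer c = counts.get(word);
        if (c == null) c = 0;
        counts.put(word, c + 1);
        return c + 1;
    }

    public static void main(String[] args) {
        WordCountLogic logic = new WordCountLogic();
        for (String w : "the cow jumped over the moon".split(" ")) {
            System.out.println(w + " -> " + logic.count(w));
        }
    }
}
```

In a real topology this state lives inside the bolt instance, and parallelism works because a fields grouping on "word" routes all tuples for the same word to the same bolt task.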

  • Storm client : Installing the Storm release locally gives you a storm client, which is used to communicate with remote clusters. It is run from /storm_setup/bin/storm.