Monday, June 15, 2015

Installing Spark MLlib on Linux and Running Spark MLlib Implementations

Spark MLlib is a machine learning library that ships with Apache Spark and can run on any Hadoop 2/YARN cluster without any extra installation. It is Spark's scalable machine learning library, consisting of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, and dimensionality reduction, as well as underlying optimization primitives.

The key features of Spark MLlib include:

1. Scalability
2. Performance
3. User-friendly APIs
4. Integration with Spark and its other components

There is nothing special about installing MLlib; it is already included in Spark. So if your machine already has Spark installed and running, there is nothing extra to do for Spark MLlib. You can follow this link to install Spark in standalone mode if not already done.

Running Logistic Regression on Spark MLlib


Logistic regression measures the relationship between the categorical dependent variable and one or more independent variables, which are usually continuous, by estimating probabilities. Logistic regression can be binomial or multinomial. Binomial or binary logistic regression deals with situations in which the observed outcome for a dependent variable can have only two possible types (for example, "dead" vs. "alive"). Multinomial logistic regression deals with situations where the outcome can have three or more possible types (e.g., "disease A" vs. "disease B" vs. "disease C").
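
Under the hood, the example driver we are about to run calls into MLlib's logistic regression API. For intuition, here is a minimal Java sketch of that call, assuming Spark 1.2's MLlib API and the sample LIBSVM-format data file (downloaded in Step-2 below) in the current directory. The class name is illustrative only, and the actual driver additionally configures L2 regularization through its command-line flags:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.classification.LogisticRegressionModel;
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.util.MLUtils;

public class LogisticRegressionSketch {
      public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("LogisticRegressionSketch").setMaster("local[2]");
            JavaSparkContext sc = new JavaSparkContext(conf);
            // Load the LIBSVM-format dataset as (label, features) pairs
            JavaRDD<LabeledPoint> data =
                  MLUtils.loadLibSVMFile(sc.sc(), "sample_binary_classification_data.txt").toJavaRDD();
            // Train a binary logistic regression model with the L-BFGS optimizer
            LogisticRegressionModel model =
                  new LogisticRegressionWithLBFGS().setNumClasses(2).run(data.rdd());
            // Sanity check: predict the label of the first data point
            LabeledPoint first = data.first();
            System.out.println("prediction = " + model.predict(first.features()) + ", label = " + first.label());
            sc.stop();
      }
}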

Spark provides the 'spark-submit' script to submit jobs to the Spark cluster. The MLlib algorithm implementations ship inside the Spark assembly jar (spark-assembly-*-cdh*-hadoop*-cdh*.jar), while the example drivers we run below are packaged in the spark-examples jar.

We shall now run Logistic Regression as below:

Step-1: Export the required environment variables



export JAVA_HOME='your_java_home'                                                                                            
export SPARK_HOME='your_spark_home'

Step-2: Gather the dataset to run the algorithm on



mkdir ~/SparkMLlib
cd ~/SparkMLlib/
wget https://sites.google.com/site/jayatiatblogs/attachments/sample_binary_classification_data.txt                       

Now that you have the data set, copy it to HDFS.


hdfs dfs -mkdir -p /user/${USER}/classification_data
hdfs dfs -put -f $HOME/SparkMLlib/sample_binary_classification_data.txt /user/${USER}/classification_data/                                                                             

Step-3: Submit the job to run Logistic Regression using the 'spark-submit' script



$SPARK_HOME/bin/spark-submit --class org.apache.spark.examples.mllib.BinaryClassification --master local[2] \
$SPARK_HOME/lib/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar --algorithm LR --regType L2 --regParam 1.0 /user/${USER}/classification_data/sample_binary_classification_data.txt

If all works fine, you should see the following at the end of a long log:


Test areaUnderPR = 1.0.
Test areaUnderROC = 1.0.                                                                                                                 

Finally, let's do some cleanup of HDFS:


hdfs dfs -rm -r -skipTrash /user/${USER}/classification_data                                                          

You can run the other Spark MLlib implementations in a similar fashion with the appropriate data.

Good luck.

Installing Sparkling Water and Running Sparkling Water's Deep Learning

Sparkling Water is designed to be executed as a regular Spark application. It provides a way to initialize H2O services on each node in the Spark cluster and access data stored in data structures of Spark and H2O.

Sparkling Water provides transparent integration for the H2O engine and its machine learning algorithms into the Spark platform, enabling:

1. Use of H2O algorithms in Spark workflow
2. Transformation between H2O and Spark data structures
3. Use of Spark RDDs as input for H2O algorithms
4. Transparent execution of Sparkling Water applications on top of Spark

To install Sparkling Water, Spark installation is a prerequisite. You can follow this link to install Spark in standalone mode if not already done.

Installing Sparkling Water


Create a working directory for Sparkling Water


mkdir $HOME/SparklingWater
cd $HOME/SparklingWater/                                                                                                                                                                     

Clone the Sparkling Water repository


git clone https://github.com/0xdata/sparkling-water.git                                                                                                                                                         

Running Deep Learning on Sparkling Water


Deep Learning is a new area of Machine Learning research that moves the field closer to Artificial Intelligence. Deep Learning algorithms are based on the (unsupervised) learning of multiple levels of features or representations of the data: higher-level features are derived from lower-level features, so the levels form a hierarchy of concepts at increasing degrees of abstraction. Deep Learning is part of the broader machine learning field of learning representations of data.

1. Download a prebuilt Spark package and extract it. This is needed since the system-wide Spark installation directory is read-only, and the examples we shall run need to write to the Spark folder.


wget http://archive.apache.org/dist/spark/spark-1.2.0/spark-1.2.0-bin-hadoop2.3.tgz
tar -xzf spark-1.2.0-bin-hadoop2.3.tgz

2. Export the Spark home


export SPARK_HOME="$HOME/SparklingWater/spark-1.2.0-bin-hadoop2.3"

3. Run the DeepLearningDemo example from Sparkling Water. It runs Deep Learning on a subset of the airlines dataset (shipped with the repository at sparkling-water/examples/smalldata/allyears2k_headers.csv.gz).


cd sparkling-water
bin/run-example.sh DeepLearningDemo

4. In the long logs of the running job, look for snippets like the following:


Sparkling Water started, status of context:
Sparkling Water Context:
 * number of executors: 3
 * list of used executors:
  (executorId, host, port)
  ------------------------
  (0,127.0.0.1,54325)
  (1,127.0.0.1,54327)
  (2,127.0.0.1,54321)
  ------------------------
Output of jobs

===> Number of all flights via RDD#count call: 43978
===> Number of all flights via H2O#Frame#count: 43978
===> Number of flights with destination in SFO: 1331
====>Running DeepLearning on the result of SQL query
                                                                                                                                                                

To stop the job, press Ctrl+C. Logs similar to the above provide a lot of information about the job. You can also try running the other algorithm implementations likewise.

Good Luck.

Running Naive Bayes Classification algorithm using Weka

Wikipedia says, "Naive Bayes is a simple technique for constructing classifiers: models that assign class labels to problem instances, represented as vectors of feature values, where the class labels are drawn from some finite set. It is not a single algorithm for training such classifiers, but a family of algorithms based on a common principle: all naive Bayes classifiers assume that the value of a particular feature is independent of the value of any other feature, given the class variable."
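
In symbols, that "common principle" is Bayes' rule combined with the conditional-independence assumption: for a class C and feature values x_1, ..., x_n,

P(C \mid x_1, \dots, x_n) \propto P(C) \prod_{i=1}^{n} P(x_i \mid C)

and the predicted label is the class C that maximizes this product.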

Weka also provides a Naive Bayes classification implementation. Running Weka's algorithms from the command line requires only a very simple setup: all you need is to download the latest release of Weka (3-6-12 being the latest stable one at the time of writing). Some useful links working at the time of writing this post are:

http://prdownloads.sourceforge.net/weka/weka-3-6-12.zip

or

http://sourceforge.net/projects/weka/files/weka-3-6/3.6.12/weka-3-6-12.zip/download

Next, you'll need to unzip this download, which gives you a directory named "weka-3-6-12". We shall call it WEKA_HOME for reference in this blog post.

We shall be proceeding step-by-step here onwards.


Step-1: Download a dataset to run the classification on


The data is related to direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required in order to assess whether the product (a bank term deposit) would be subscribed ('yes') or not ('no').
The classification goal is to predict whether the client will subscribe to a term deposit (variable y). You can read more about the dataset here: http://mlr.cs.umass.edu/ml/datasets/Bank+Marketing

So, first we shall create a folder to store our dataset and then download it.



mkdir ~/WekaDataSet
cd ~/WekaDataSet
wget http://mlr.cs.umass.edu/ml/machine-learning-databases/00222/bank.zip                                   
unzip bank.zip


Step-2: Convert the data from CSV format to ARFF


First we shall create a subset of the entire dataset so as to do a quick test. You can run the test on the entire dataset, or on other datasets, later on.



cd bank
head -1000 bank-full.csv > bank-subset.csv
java -cp $WEKA_HOME/weka.jar weka.core.converters.CSVLoader bank-subset.csv > bank-subset-preprocessed.arff

You should see a file called 'bank-subset-preprocessed.arff' in the 'bank' folder.


Step-3: Convert the Numeric data to Nominal using Weka's utility


Weka's filter 'NumericToNominal' is meant for turning numeric attributes into nominal ones. Unlike discretization, it simply takes all numeric values and adds them to the list of nominal values of that attribute. It is useful after CSV imports, to force certain attributes (e.g., a class attribute containing values from 1 to 5) to become nominal.



java -cp $WEKA_HOME/weka.jar weka.filters.unsupervised.attribute.NumericToNominal -i bank-subset-preprocessed.arff -o bank-subset-preprocessed.nominal.arff

Step-4: Divide a part of the data as train and test data


Let's keep all 1000 records in the training dataset. We shall use another Weka utility called RemovePercentage; its -P option specifies the percentage of instances we wish to remove.



java -cp $WEKA_HOME/weka.jar weka.filters.unsupervised.instance.RemovePercentage -P 0 -i bank-subset-preprocessed.nominal.arff  -o  bank-subset-preprocessed-train.nominal.arff

For the test dataset we shall use 40 percent of the data, so -P needs to be 60.



java -cp $WEKA_HOME/weka.jar weka.filters.unsupervised.instance.RemovePercentage -P 60 -i bank-subset-preprocessed.nominal.arff  -o  bank-subset-preprocessed-test.nominal.arff

Step-5: Train the model


Using Weka's Naive Bayes classifier, "weka.classifiers.bayes.NaiveBayes", we shall first train the model.
-t option: specifies the location of the training data file
-d option: specifies the name and location of the model file to be generated



java -cp $WEKA_HOME/weka.jar weka.classifiers.bayes.NaiveBayes -t bank-subset-preprocessed-train.nominal.arff -d bank-subset-nb.model

Step-6: Test the model


This is the final step. We shall test the model's accuracy using the same classifier, but with a different option set.
-T option: specifies the location of the test data file
-l option: specifies the location of the saved model file



java -cp $WEKA_HOME/weka.jar weka.classifiers.bayes.NaiveBayes -T bank-subset-preprocessed-test.nominal.arff -l bank-subset-nb.model

That's it. You can also try the same with different percentages and different datasets.
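
If you prefer driving Weka from Java code rather than the command line, here is a minimal sketch using Weka's public API. The class name NaiveBayesDemo is illustrative, and it assumes weka.jar on the classpath and the train/test ARFF files produced above:

import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class NaiveBayesDemo {
      public static void main(String[] args) throws Exception {
            // Load the train and test sets produced in the steps above
            Instances train = DataSource.read("bank-subset-preprocessed-train.nominal.arff");
            Instances test = DataSource.read("bank-subset-preprocessed-test.nominal.arff");
            // The class attribute (y) is the last column of the bank dataset
            train.setClassIndex(train.numAttributes() - 1);
            test.setClassIndex(test.numAttributes() - 1);
            // Train on the training set and evaluate on the held-out test set
            NaiveBayes nb = new NaiveBayes();
            nb.buildClassifier(train);
            Evaluation eval = new Evaluation(train);
            eval.evaluateModel(nb, test);
            System.out.println(eval.toSummaryString());
      }
}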

Hope it helped.

Installing H2O and Running ML Implementations of H2O

H2O is an open source predictive analytics platform. Unlike traditional analytics tools, H2O provides a combination of extraordinary math and high performance parallel processing with unrivaled ease of use.


As per its description, it intelligently combines unique features not currently found in other machine learning platforms, including:

1. Best of Breed Open Source Technology: H2O leverages the most popular open source products like Apache Hadoop and Spark to give customers the flexibility to solve their most challenging data problems.
2. Easy-to-use Web UI and Familiar Interfaces: Set up and get started quickly using either H2O's intuitive web-based user interface or familiar programming environments like R, Java, Scala, Python, JSON, and through our powerful APIs.
3. Data Agnostic Support for all Common Database and File Types: Easily explore and model big data from within Microsoft Excel, R Studio, Tableau and more. Connect to data from HDFS, S3, SQL and NoSQL data sources. Install and deploy anywhere.
4. Massively Scalable Big Data Analysis: Train a model on complete data sets, not just small samples, and iterate and develop models in real-time with H2O's rapid in-memory distributed parallel processing.
5. Real-time Data Scoring: Use the Nanofast Scoring Engine to score data against models for accurate predictions in just nanoseconds in any environment. Enjoy 10X faster scoring and predictions than the next nearest technology in the market.

Installing H2O on Linux


Installing H2O on your Linux machine (this section was tested with CentOS 6.6) is very straightforward. Follow the steps below:


#Create a local directory for installation
mkdir H2O
cd H2O
#Download the latest release of H2O
wget http://h2o-release.s3.amazonaws.com/h2o/rel-noether/4/h2o-2.8.4.4.zip
#Unzip the downloaded file
unzip h2o-2.8.4.4.zip
cd h2o-2.8.4.4
#Start H2O
java -jar h2o.jar
                                                                                                                                                             

You should see a log like the one below:


INFO WATER: ----- H2O started -----
INFO WATER: Build git branch: rel-noether
INFO WATER: Build git hash: 4089ab3911999c73dcb611ab2f51cfc9bb86898b
INFO WATER: Build git describe: jenkins-rel-noether-4
INFO WATER: Build project version: 2.8.4.4
INFO WATER: Built by: 'jenkins'
INFO WATER: Built on: 'Sat Feb  7 13:39:20 PST 2015'
INFO WATER: Java availableProcessors: 16
INFO WATER: Java heap totalMemory: 1.53 gb
INFO WATER: Java heap maxMemory: 22.75 gb
INFO WATER: Java version: Java 1.7.0_75 (from Oracle Corporation)
INFO WATER: OS   version: Linux 2.6.32-504.3.3.el6.x86_64 (amd64)
INFO WATER: Machine physical memory: 102.37 gb
                                                                                                                                                               

You can access the Web UI at http://localhost:54321

Running H2O's GLM function on R


We shall be running H2O's GLM on R here. We could also have done it without R using only the Linux command line. But I found it easier this way. 

GLM stands for Generalized Linear Model, a flexible generalization of ordinary linear regression that allows for response variables with error distributions other than the normal distribution.
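
In symbols (standard GLM notation, nothing H2O-specific), a link function g ties the mean of the response to a linear predictor in the features. The binomial family used below takes g to be the logit, which is exactly logistic regression:

g(E[y \mid x]) = \beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p, \qquad g(\mu) = \log\frac{\mu}{1-\mu}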

If you don't have R already installed on your linux box, follow this link.

So we shall perform a couple of tasks to get GLM running on H2O.

Install H2O on R


You have installed H2O, then R, and now we need to install the H2O package in R.


Open the R shell by typing "R" in your terminal and then enter the following commands there.   
install.packages("RCurl");
install.packages("rjson");
install.packages("statmod");
install.packages("survival");
q()

Now in your linux terminal type:


cd /location_of_your_H2O_setup/h2o-2.8.4.4
R
install.packages("location_of_your_H2O_setup/h2o-2.8.4.4/R/h2o_2.8.4.4.tar.gz", repos = NULL, type = "source")
library(h2o)
q()

If all went fine, congratulate yourself. You have H2O and R and H2O on R installed :-)

Running a Demo


H2O packages examples to demonstrate how its algorithm implementations work, and GLM is part of those demos. The demo downloads a dataset called prostate.csv from an authorized location on the web and uses it as input, performing logistic regression on prostate cancer data.

All you have to do is:


cd /location_of_your_H2O_setup/h2o-2.8.4.4
R                                                                                                                                                           
demo(h2o.glm)

You should see logs like the ones below:


demo(h2o.glm)

        demo(h2o.glm)
        ---- ~~~~~~~
> # This is a demo of H2O's GLM function
> # It imports a data set, parses it, and prints a summary
> # Then, it runs GLM with a binomial link function using 10-fold cross-validation
> # Note: This demo runs H2O on localhost:54321
> library(h2o)
> localH2O = h2o.init(ip = "localhost", port = 54321, startH2O = TRUE)
Successfully connected to http://localhost:54321
R is connected to H2O cluster:
    H2O cluster uptime:         1 hours 45 minutes
    H2O cluster version:        2.8.4.4
    H2O cluster name:           jayati.tiwari
    H2O cluster total nodes:    1
    H2O cluster total memory:   22.75 GB
    H2O cluster total cores:    16
    H2O cluster allowed cores:  16
    H2O cluster healthy:        TRUE
> prostate.hex = h2o.uploadFile(localH2O, path = system.file("extdata", "prostate.csv", package="h2o"), key = "prostate.hex")
  |======================================================================| 100%
> summary(prostate.hex)
 ID               CAPSULE          AGE             RACE        
 Min.   :  1.00   Min.   :0.0000   Min.   :43.00   Min.   :0.000
 1st Qu.: 95.75   1st Qu.:0.0000   1st Qu.:62.00   1st Qu.:1.000
 Median :190.50   Median :0.0000   Median :67.00   Median :1.000
 Mean   :190.50   Mean   :0.4026   Mean   :66.04   Mean   :1.087
 3rd Qu.:285.25   3rd Qu.:1.0000   3rd Qu.:71.00   3rd Qu.:1.000
 Max.   :380.00   Max.   :1.0000   Max.   :79.00   Max.   :2.000
 DPROS           DCAPS           PSA               VOL          
 Min.   :1.000   Min.   :1.000   Min.   :  0.300   Min.   : 0.00
 1st Qu.:1.000   1st Qu.:1.000   1st Qu.:  5.000   1st Qu.: 0.00
 Median :2.000   Median :1.000   Median :  8.725   Median :14.25
 Mean   :2.271   Mean   :1.108   Mean   : 15.409   Mean   :15.81
 3rd Qu.:3.000   3rd Qu.:1.000   3rd Qu.: 17.125   3rd Qu.:26.45
 Max.   :4.000   Max.   :2.000   Max.   :139.700   Max.   :97.60
 GLEASON      
 Min.   :0.000
 1st Qu.:6.000
 Median :6.000
 Mean   :6.384
 3rd Qu.:7.000
 Max.   :9.000

> prostate.glm = h2o.glm(x = c("AGE","RACE","PSA","DCAPS"), y = "CAPSULE", data = prostate.hex, family = "binomial", nfolds = 10, alpha = 0.5)
  |======================================================================| 100%
> print(prostate.glm)
IP Address: localhost
Port      : 54321
Parsed Data Key: prostate.hex
GLM2 Model Key: GLMModel__ba962660a263d41ab4531103562b4422
Coefficients:
      AGE      RACE     DCAPS       PSA Intercept
 -0.01104  -0.63136   1.31888   0.04713  -1.10896
Normalized Coefficients:
      AGE      RACE     DCAPS       PSA Intercept
 -0.07208  -0.19495   0.40972   0.94253  -0.33707
Degrees of Freedom: 379 Total (i.e. Null);  375 Residual
Null Deviance:     512.3
Residual Deviance: 461.3  AIC: 471.3
Deviance Explained: 0.09945
 Best Threshold: 0.328
Confusion Matrix:
        Predicted
Actual   false true   Error
  false    127  100 0.44053
  true      51  102 0.33333
  Totals   178  202 0.39737

AUC =  0.6887507 (on train)
Cross-Validation Models:
Nonzeros       AUC Deviance Explained
Model 1         4 0.6532738          0.8965221
Model 2         4 0.6316527          0.8752008
Model 3         4 0.7100840          0.8955293
Model 4         4 0.8268698          0.9099155
Model 5         4 0.6354167          0.9079152
Model 6         4 0.6888889          0.8881883
Model 7         4 0.7366071          0.9091687
Model 8         4 0.6711310          0.8917893
Model 9         4 0.7803571          0.9178481
Model 10        4 0.7435897          0.9065831
> myLabels = c(prostate.glm@model$x, "Intercept")
> plot(prostate.glm@model$coefficients, xaxt = "n", xlab = "Coefficients", ylab = "Values")
> axis(1, at = 1:length(myLabels), labels = myLabels)
> abline(h = 0, col = 2, lty = 2)
> title("Coefficients from Logistic Regression\n of Prostate Cancer Data")
> barplot(prostate.glm@model$coefficients, main = "Coefficients from Logistic Regression\n of Prostate Cancer Data")

Great! Your demo ran fine.

Starting H2O from R


Before we try running GLM from the R shell, we need to start H2O. We shall achieve this from within the R shell itself.


R                                                                                                                                                             
library(h2o)
localH2O <- h2o.init(ip = "localhost", port = 54321, startH2O = TRUE, max_mem_size = "4g")

You should see something like:


Successfully connected to http://localhost:54321
                                                                                                                                                                  

R is connected to H2O cluster:
    H2O cluster uptime:         2 hours 3 minutes 
    H2O cluster version:        2.8.4.4 
    H2O cluster name:           jayati.tiwari 
    H2O cluster total nodes:    1 
    H2O cluster total memory:   22.75 GB 
    H2O cluster total cores:    16 
    H2O cluster allowed cores:  16 
    H2O cluster healthy:        TRUE 

This starts H2O. 

Running H2O's GLM from R


In the same R shell, continue to run the GLM example now.


prostate.hex = h2o.importFile(localH2O, path = "https://raw.github.com/0xdata/h2o/master/smalldata/logreg/prostate.csv", key = "prostate.hex")

h2o.glm(y = "CAPSULE", x = c("AGE","RACE","PSA","DCAPS"), data = prostate.hex, family = "binomial", nfolds = 10, alpha = 0.5)

This command should output the following on your terminal


|======================================================================| 100%
IP Address: localhost 
Port      : 54321 
Parsed Data Key: prostate.hex 
GLM2 Model Key: GLMModel__8efb9141cab4671715fc8319eae54ca8
Coefficients:
      AGE      RACE     DCAPS       PSA Intercept 
 -0.01104  -0.63136   1.31888   0.04713  -1.10896 
Normalized Coefficients:
      AGE      RACE     DCAPS       PSA Intercept 
 -0.07208  -0.19495   0.40972   0.94253  -0.33707 
Degrees of Freedom: 379 Total (i.e. Null);  375 Residual
Null Deviance:     512.3
Residual Deviance: 461.3  AIC: 471.3
Deviance Explained: 0.09945 
 Best Threshold: 0.328
Confusion Matrix:
        Predicted
Actual   false true   Error
  false    127  100 0.44053
  true      51  102 0.33333
  Totals   178  202 0.39737
AUC =  0.6887507 (on train) 
Cross-Validation Models:
Nonzeros       AUC Deviance Explained
Model 1         4 0.6532738          0.8965221
Model 2         4 0.6316527          0.8752008
Model 3         4 0.7100840          0.8955293
Model 4         4 0.8268698          0.9099155
Model 5         4 0.6354167          0.9079152
Model 6         4 0.6888889          0.8881883
Model 7         4 0.7366071          0.9091687
Model 8         4 0.6711310          0.8917893
Model 9         4 0.7803571          0.9178481
Model 10        4 0.7435897          0.9065831

As you can see, you have the fitted coefficients in place, along with the AUC and confusion matrix as accuracy measures.

Hope it helped!


Socket Programming in Java

Literally, a socket is an electrical receptacle into which a plug or light bulb fits to make a connection. In computer programming, it is an endpoint for communication between two programs, one acting as the server (a.k.a. provider) and the other as the client (a.k.a. requester).

Why is Socket Programming required?

Socket programming is used when two programs need to exchange information, for example in web browsers, instant messaging applications, and peer-to-peer file sharing systems. The server opens a socket and listens on a port; the client connects to that port, and the two programs then exchange data over the established connection.

Socket Programming in Java

Socket programming needs two processes, the provider (server) and the requester (client). The following two Java files implement them. You can play with them to understand how a simple exchange of messages occurs between the two.

Provider.java



import java.io.*;
import java.net.*;
public class Provider{
      ServerSocket providerSocket;
      Socket connection = null;
      ObjectOutputStream out;
      ObjectInputStream in;
      String message;
      Provider(){}
      void run()
      {
            try{
                  //1. creating a server socket
                  providerSocket = new ServerSocket(2004, 10);
                  //2. Wait for connection
                  System.out.println("Waiting for connection");
                  connection = providerSocket.accept();
                  System.out.println("Connection received from " + connection.getInetAddress().getHostName());
                  //3. get Input and Output streams
                  out = new ObjectOutputStream(connection.getOutputStream());
                  out.flush();
                  in = new ObjectInputStream(connection.getInputStream());
                  sendMessage("Connection successful");
                  //4. The two parts communicate via the input and output streams
                  do{
                        try{
                              message = (String)in.readObject();
                              System.out.println("client>" + message);
                              if (message.equals("bye"))
                                    sendMessage("bye");
                        }
                        catch(ClassNotFoundException classnot){
                              System.err.println("Data received in unknown format");
                        }
                  }while(!message.equals("bye"));
            }
            catch(IOException ioException){
                  ioException.printStackTrace();
            }
            finally{
                  //5. Closing connection
                  try{
                        in.close();
                        out.close();
                        providerSocket.close();
                  }
                  catch(IOException ioException){
                        ioException.printStackTrace();
                  }
            }
      }
      void sendMessage(String msg)
      {
            try{
                  out.writeObject(msg);
                  out.flush();
                  System.out.println("server>" + msg);
            }
            catch(IOException ioException){
                  ioException.printStackTrace();
            }
      }
      public static void main(String args[])
      {
            Provider server = new Provider();
            while(true){
                  server.run();
            }
      }
}

Requester.java



import java.io.*;
import java.net.*;
public class Requester{
      Socket requestSocket;
      ObjectOutputStream out;
      ObjectInputStream in;
      String message;
      Requester(){}
      void run()
      {
            try{
                  //1. creating a socket to connect to the server
                  requestSocket = new Socket("localhost", 2004);
                  System.out.println("Connected to localhost in port 2004");
                  //2. get Input and Output streams
                  out = new ObjectOutputStream(requestSocket.getOutputStream());                        
                  out.flush();
                  in = new ObjectInputStream(requestSocket.getInputStream());
                  //3: Communicating with the server
                  do{
                        try{
                              message = (String)in.readObject();
                              System.out.println("server>" + message);
                              sendMessage("Hi my server");
                              message = "bye";
                              sendMessage(message);
                        }
                        catch(ClassNotFoundException classNot){
                              System.err.println("data received in unknown format");
                        }
                  }while(!message.equals("bye"));
            }
            catch(UnknownHostException unknownHost){
                  System.err.println("You are trying to connect to an unknown host!");                                                      
            }
            catch(IOException ioException){
                  ioException.printStackTrace();
            }
            finally{
                  //4: Closing connection
                  try{
                        in.close();
                        out.close();
                        requestSocket.close();
                  }
                  catch(IOException ioException){
                        ioException.printStackTrace();
                  }
            }
      }
      void sendMessage(String msg)
      {
            try{
                  out.writeObject(msg);
                  out.flush();
                  System.out.println("client>" + msg);
            }
            catch(IOException ioException){
                  ioException.printStackTrace();
            }
      }
      public static void main(String args[])
      {
            Requester client = new Requester();
            client.run();
      }
}
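
To try them out, compile both files and run the two programs in separate terminals. The Provider must be started first, since the Requester connects to it on port 2004:

javac Provider.java Requester.java
java Provider        # terminal 1: the server waits for a connection
java Requester       # terminal 2: the client connects and the message exchange runs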


You can extend these classes to much more complex use-cases. Hope this helped lay the foundation.

Installing R on Linux

I was a bit skeptical about writing this post due to the scarcity of content, but anyhow you know what I chose.

So, installing R on Ubuntu is all about the following two steps:


sudo apt-get install r-base
sudo apt-get install littler                                                                                                                        

To check your installation, type "R" in your terminal, which should open an R shell. Here you can try installing a package, for example:


> install.packages("randomForest")
or
> install.packages("stringr")                                                                                                                   

If all worked fine, your machine has R correctly installed. On CentOS, the package is named differently; install R from the EPEL repository using the following commands:


sudo yum install epel-release
sudo yum install R

The rest of the steps are the same as on Ubuntu.

Hope this small piece of information is helpful to some of you.