Getting started with MatConvNet

My goal is to use this toolbox to classify cars into four classes: Sedan, Minivan, SUV, and Pickup. I already have the data from my prior work.

My plan is to first use transfer learning, i.e., use one of the deep networks pre-trained on ImageNet data to extract features, and then use an SVM classifier to do the actual classification. I plan to do this with Matlab 2016a on a Windows 10 PC equipped with a GeForce GTX 960. I am aware that Matlab also has deep learning support in its Neural Network Toolbox, but I am going with MatConvNet in the hope that it will stay more cutting edge.

The code below is derived from a couple of existing MatConvNet examples.

clear; close all; clc;
%Load the pre-trained net
net = load('imagenet-vgg-f.mat');
net = vl_simplenn_tidy(net) ;

%Remove the last layer (softmax layer)
net.layers = net.layers(1 : end - 1);

%% This deals with reading the data and getting the ground truth class labels

%All files are inside the root folder
root = 'C:\Users\ninad\Dropbox\Ninad\Lasagne\data\c200x200\';
Files = dir(fullfile(root, '*.bmp'));

%Load the map which stores the class information
load MakeModels.mat
h = waitbar(0, 'Extracting features');
for i = 1 : length(Files)
    waitbar(i / length(Files), h);
    % Read class from the map
    Q = Files(i).name(end - 18 : end - 4);
    Qout = MakeModels(Q);
    Files(i).class = Qout.Type;
    % Preprocess the data and get it ready for the CNN
    im = imread(fullfile(root, Files(i).name));
    im_ = single(im); % note: 0-255 range
    im_ = imresize(im_, net.meta.normalization.imageSize(1:2));
    im_ = bsxfun(@minus, im_, net.meta.normalization.averageImage);

    % run the CNN to compute the features
    feats = vl_simplenn(net, im_) ;
    Files(i).feats = squeeze(feats(end).x);
end

%% Classifier training

%Select training data fraction

trainFraction = 0.5;
randomsort = randperm(length(Files));
numTrain = round(trainFraction * length(Files));
trainSamples = randomsort(1 : numTrain);
testSamples = randomsort(numTrain + 1 : end);

Labels = [Files.class];
Features = [Files.feats];

trainingFeatures = Features( :, trainSamples);
trainingLabels = Labels( :, trainSamples);

classifier = fitcecoc(trainingFeatures', trainingLabels);

%% Carry out the validation with rest of the data
testFeatures = Features( :, testSamples);
testLabels = Labels( :, testSamples);

predictedLabels = predict(classifier, testFeatures');

confMat = confusionmat(testLabels, predictedLabels);

% Convert confusion matrix into percentage form
confMat = bsxfun(@rdivide,confMat,sum(confMat,2))

% Display the mean accuracy
mean(diag(confMat))

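As a side note on what fitcecoc does: it trains one binary SVM per column of a coding matrix and labels a sample with the class whose codeword best matches the binary outputs. Below is a toy sketch of that decoding step in pure Python; it is my own illustration (fitcecoc's default coding is actually one-versus-one), with the class names borrowed from the four-class car problem.

```python
# Toy error-correcting output code (ECOC) decoding.
# Rows = classes, columns = binary learners; a +1/-1 entry says
# which side of each binary problem the class sits on.
codes = {
    "Sedan":   [+1, +1, +1],
    "Minivan": [+1, -1, -1],
    "SUV":     [-1, +1, -1],
    "Pickup":  [-1, -1, +1],
}

def classify(binary_outputs):
    # Pick the class whose codeword has the smallest Hamming
    # distance to the vector of binary-learner outputs.
    def hamming(codeword):
        return sum(a != b for a, b in zip(codeword, binary_outputs))
    return min(codes, key=lambda name: hamming(codes[name]))

label = classify([+1, -1, -1])  # matches the "Minivan" codeword exactly
```

The error-correcting property comes from the redundancy: even if one binary learner misfires, the nearest codeword is often still the right class.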

Getting started with Lasagne on Windows 10

Installing Lasagne on Windows 10 was straightforward following an existing step-by-step guide. No changes were needed apart from updating paths in the “C Compiler” section, as I already had Visual Studio 2012 installed.

After installation, I wanted to start with a simpler tutorial than the MNIST one on the Lasagne page, so I followed an older walkthrough. Since there have been interface changes to Lasagne since that tutorial was published, I had to make some changes to get it to work.
I had to change

net_output = l_output.get_output()

to

net_output = lasagne.layers.get_output(l_output, deterministic=True)

and

objective = lasagne.objectives.Objective(l_output,
    loss_function=lasagne.objectives.categorical_crossentropy)
loss = objective.get_loss(target=true_output)

to

loss = lasagne.objectives.aggregate(
   lasagne.objectives.categorical_crossentropy(net_output, true_output))

Also, I had to change “xrange” to “range” as I am using Python 3.
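For reference, the new loss expression computes the mean categorical cross-entropy between the predicted class probabilities and the integer targets. Here is a minimal pure-Python sketch of that computation; the function names mirror the Lasagne ones, but the implementation is my own illustration, not Lasagne code.

```python
import math

def categorical_crossentropy(predictions, targets):
    # Per-sample cross-entropy: -log of the probability assigned
    # to the true class (targets are integer class indices).
    return [-math.log(p[t]) for p, t in zip(predictions, targets)]

def aggregate(losses):
    # lasagne.objectives.aggregate defaults to the mean.
    return sum(losses) / len(losses)

preds = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1]]
targets = [0, 1]
loss = aggregate(categorical_crossentropy(preds, targets))
```

The `deterministic=True` flag in the other change simply disables stochastic layers such as dropout when computing validation outputs.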

Java vs Matlab code

Since Matlab uses the JRE pretty extensively, it is very easy to write Matlab code that is equivalent to Java code.

This is the original Java code.

int numInstances = 10000;
Classifier learner = new HoeffdingTree();
RandomRBFGenerator stream = new RandomRBFGenerator();
stream.prepareForUse();
learner.setModelContext(stream.getHeader());
learner.prepareForUse();
int numberSamplesCorrect = 0;
int numberSamples = 0;
boolean isTesting = true;
while (stream.hasMoreInstances() && numberSamples < numInstances) {
	Instance trainInst = stream.nextInstance();
	if (isTesting) {
		if (learner.correctlyClassifies(trainInst)) {
			numberSamplesCorrect++;
		}
	}
	numberSamples++;
	learner.trainOnInstance(trainInst);
}
double accuracy = 100.0 * (double) numberSamplesCorrect / (double) numberSamples;
System.out.println(numberSamples + " instances processed with " + accuracy + "% accuracy");

And here is the corresponding Matlab code.

clear; close all; clc;
javaclasspath('C:\Users\Ninad\Desktop\moa-release-2014.11\moa.jar');

import moa.classifiers.trees.HoeffdingTree;
import moa.streams.generators.RandomRBFGenerator;

numInstances = 10000;
learner = HoeffdingTree();
stream = RandomRBFGenerator();
stream.prepareForUse();
learner.setModelContext(stream.getHeader());
learner.prepareForUse();

numberSamplesCorrect = 0;
numberSamples = 0;
isTesting = true;
while (stream.hasMoreInstances && numberSamples < numInstances)
    trainInst = stream.nextInstance();
    if isTesting && learner.correctlyClassifies(trainInst)
        numberSamplesCorrect = numberSamplesCorrect + 1;
    end
    numberSamples = numberSamples + 1;
    learner.trainOnInstance(trainInst);
end
accuracy = 100.0 * numberSamplesCorrect / numberSamples;
fprintf('%d instances processed with %f%% accuracy\n', numberSamples, accuracy);

Flickr app issues

I have my own app, built with Matlab, to sync my photos with Flickr. The app suddenly stopped working a week or so ago. The failure was happening in urlread. After stepping through the code, I was able to get the following exception to show up: PKIX path building failed: unable to find valid certification path to requested target

This was happening because Flickr had updated its security certificate. The fix is essentially to add the new security certificate to the JRE’s keystore; a general keytool-based solution is widely documented.

Since Matlab ships with its own JRE, this can be a little tricky to achieve. However, I was able to locate a solution that achieves the same thing for Matlab.

So I downloaded the Flickr certificate using Chrome and proceeded to apply the fix. But alas, the problem persisted. One thing I noticed while installing the certificate was that it was issued by AVG, which seemed odd to me (I run AVG antivirus on my computer). So I dug around and found reports that AVG (Avast) installs a man-in-the-middle certificate.

So I had been installing the wrong certificate all along, which was causing the fix to fail. I went to an “AVG-free” computer, downloaded the security certificate for Flickr there, and proceeded with the fix. The app is now working smoothly again.

OpenCV and Java

This is adapted from an existing example.


<project name="Main" basedir="." default="rebuild-run">

   <property name="src.dir" value="src"/>

   <property name="lib.dir" value="${ocvJarDir}"/>
   <path id="classpath">
       <fileset dir="${lib.dir}" includes="**/*.jar"/>
   </path>

   <property name="build.dir" value="build"/>
   <property name="classes.dir" value="${build.dir}/classes"/>
   <property name="jar.dir" value="${build.dir}/jar"/>

   <property name="main-class" value="${ant.project.name}"/>

   <target name="clean">
       <delete dir="${build.dir}"/>
   </target>

   <target name="compile">
       <mkdir dir="${classes.dir}"/>
       <javac includeantruntime="false" srcdir="${src.dir}" destdir="${classes.dir}" classpathref="classpath"/>
   </target>

   <target name="jar" depends="compile">
       <mkdir dir="${jar.dir}"/>
       <jar destfile="${jar.dir}/${ant.project.name}.jar" basedir="${classes.dir}">
           <manifest>
               <attribute name="Main-Class" value="${main-class}"/>
           </manifest>
       </jar>
   </target>

   <target name="run" depends="jar">
       <java fork="true" classname="${main-class}">
           <sysproperty key="java.library.path" path="${ocvLibDir}"/>
           <classpath>
               <path refid="classpath"/>
               <path location="${jar.dir}/${ant.project.name}.jar"/>
           </classpath>
       </java>
   </target>

   <target name="rebuild" depends="clean,jar"/>

   <target name="rebuild-run" depends="clean,run"/>

</project>


import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.highgui.*;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.objdetect.CascadeClassifier;


import java.io.File;

/**
 * Created by dmalav on 4/30/15.
 */
public class DetectFaces {

   public void run(String imageFile) {
       System.out.println("\nRunning DetectFaceDemo");

       // Create a face detector from the cascade file in the resources
       // directory.
       String xmlPath = "/home/cloudera/project/opencv-examples/lbpcascade_frontalface.xml";
       CascadeClassifier faceDetector = new CascadeClassifier(xmlPath);
       Mat image = Highgui.imread(imageFile);

       // Detect faces in the image.
       // MatOfRect is a special container class for Rect.
       MatOfRect faceDetections = new MatOfRect();
       faceDetector.detectMultiScale(image, faceDetections);

       System.out.println(String.format("Detected %s faces", faceDetections.toArray().length));

       // Draw a bounding box around each face.
       for (Rect rect : faceDetections.toArray()) {
           Core.rectangle(image, new Point(rect.x, rect.y),
                   new Point(rect.x + rect.width, rect.y + rect.height),
                   new Scalar(0, 255, 0));
       }

       File f = new File(imageFile);
       // Save the visualized detection.
       String filename = f.getName();
       System.out.println(String.format("Writing %s", filename));
       Highgui.imwrite(filename, image);
   }
}

import java.io.File;

import org.opencv.core.Core;


public class Main {

   public static void main(String... args) {
       System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

       if (args.length == 0) {
           System.err.println("Usage Main /path/to/images");
           System.exit(1);
       }

       File[] files = new File(args[0]).listFiles();
       showFiles(files);
   }

   public static void showFiles(File[] files) {
       DetectFaces faces = new DetectFaces();
       for (File file : files) {
           if (file.isDirectory()) {
               System.out.println("Directory: " + file.getName());
               showFiles(file.listFiles()); // Calls same method again.
           } else {
               System.out.println("File: " + file.getAbsolutePath());
               faces.run(file.getAbsolutePath());
           }
       }
   }
}
Adding preview for Matlab source files in Windows 7

By default there is no preview available for Matlab source files in the Windows 7 preview pane (at least for Matlab version 2012a), which is silly because they are simply text files. After a little googling I found a tool called PreviewConfig. A few clicks and a restart of Explorer and I was all set.

Computer vision, cloud computing and big data

I have been researching the feasibility of running computer vision applications in the cloud and the costs involved. Here is an article I came across.

Here are a couple of quotes from the article. Regarding processing:
“I use m1.xlarge servers on Amazon EC2, which are beefy enough to process two million Instagram-size photos a day, and only cost $12.48! I’ve used some open source frameworks to distribute the work in a completely scalable way, so this works out to $624 for a 50-machine cluster that can process 100 million pictures in 24 hours. That’s just 0.000624 cents per photo!”

Regarding fetching the data:
“If you have three different external services you’re pulling from, with four servers assigned to each service, you end up taking about 30 days to download 100 million photos. That’s $4,492.80 for the initial pull, which is not chump change, but not wildly outside a startup budget, especially if you plan ahead and use reserved instances to reduce the cost.”
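The quoted figures are internally consistent; here is a quick back-of-the-envelope check, with every number taken from the quotes above:

```python
# All figures come from the quoted article.
cost_per_server_day = 12.48           # m1.xlarge, USD per day
photos_per_server_day = 2_000_000     # "two million Instagram-size photos a day"

# Processing: a 50-machine cluster running for 24 hours.
cluster_size = 50
cluster_cost = cluster_size * cost_per_server_day        # ~624 USD
photos_per_day = cluster_size * photos_per_server_day    # 100 million
cents_per_photo = 100 * cluster_cost / photos_per_day    # ~0.000624 cents

# Fetching: three services, four servers each, for 30 days.
fetch_cost = 3 * 4 * 30 * cost_per_server_day            # ~4492.80 USD

print(round(cluster_cost, 2), round(cents_per_photo, 6), round(fetch_cost, 2))
```

Both the $624 processing figure and the $4,492.80 fetching figure fall straight out of the per-server daily rate.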

UCR birding spots

  1. Botanical Gardens: The most obvious birding spot on campus is the Botanical Gardens. Diverse bird species, including easy-to-see California Towhees, vibrant hummingbirds, and secretive Bell’s and Rufous-crowned Sparrows, call the gardens home. Some notable birds in the gardens in recent times include Winter Wren, Verdin, and Green-tailed Towhee. Just outside the entrance of the gardens there is a small riparian strip which is worth exploring as well.
  2. Health center riparian area: This small patch near the campus health center often holds surprises. Recent notable birds include Summer Tanager, wintering Hammond’s and Gray Flycatchers, and a wintering Cassin’s Vireo.
  3. Bannockburn riparian area: This patch is at the intersection of University Ave. and Canyon Crest Dr. Last winter a White-throated Sparrow was seen here many times.
  4. Picnic Hill: This hill attracts a lot of migrants. This year a Clay-colored Sparrow and Summer Tanager were seen on the hill.
  5. Sage Hills: These hills overlooking the freeway have native sagebrush and rocky terrain, which attract Sage Sparrows, gnatcatchers, and Rock and Canyon Wrens. Notable birds on these hills this year were Black-throated Sparrow and Brewer’s Sparrow.
  6. Agricultural operations (AgOps): This is a restricted-access area of the campus. AgOps holds a number of ponds which attract a variety of waterfowl.

This map shows where these spots are located.

Installing Hadoop cluster on Linux

I am working on creating a Big Data platform in our lab. I managed to install Hadoop 2.5.0 with the help of two guides, on Ubuntu 14.04 LTS with Oracle JDK 7 (java version 1.7.0_65).

After successfully deploying on a single computer, I moved on to the multi-node setup.

I ran into a problem where one of the datanodes failed to start due to the following exception: Incompatible clusterIDs in /app/hadoop/tmp/dfs/data

The fix was to make the cluster IDs match: I followed the manual route and edited the clusterID in /app/hadoop/tmp/dfs/data/current/VERSION to match the one in /app/hadoop/tmp/dfs/name/current/VERSION.

After finishing the setup, the datanodes failed to find the namenode; a fix for this issue is available online.

After Hadoop was up with all the datanodes, I moved on to Yarn. However, the nodes failed to connect to the resource manager, with the logs repeatedly showing:
Retrying connect to server:

I had to modify yarn-site.xml according to an answer I found online.
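The gist of that change is telling every node manager explicitly where the resource manager lives. A sketch of the relevant yarn-site.xml fragment, assuming the resource manager runs on a host named node0 (substitute your master's hostname):

```xml
<!-- yarn-site.xml: point the node managers at the resource manager.
     "node0" is an assumed hostname for the master node. -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>node0</value>
</property>
```

Setting yarn.resourcemanager.hostname makes the derived resource manager addresses (scheduler, resource tracker, admin) default to that host instead of 0.0.0.0.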

After this, I moved on to running the wordcount example. An extra step is needed before copying the files:
hdfs dfs -mkdir -p /user/hduser/gutenberg
The example itself can be run as:
hduser@node0:/usr/local/hadoop/share/hadoop/mapreduce$ hadoop jar hadoop-mapreduce-examples-2.5.0.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output