Posts Tagged ‘recognition’

Image classification using SVMs in R

February 24, 2013

Recently I did some Support Vector Machine (SVM) tests in R (a statistical language with functional parts for rapid prototyping and data analysis, somewhat similar to Matlab, but open source ;)) for my current face recognition projects. To get my SVMs up and running in R, using image data as input and output, I wrote a small demo script for classifying images. As test data I used 2 classes of images (lines from top left to bottom right and lines from bottom left to top right), with 10 samples each, like these:

The complete image set is available here.
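If you want to reproduce the test data yourself, images of this kind can also be generated programmatically. A minimal sketch (the 32x32 size is my own choice, not from the post; file names follow the `<subjectid>_<imgnr>.png` convention that the demo script below parses):

```r
library(png)  # provides writePNG; install.packages("png") if missing

# Class 1: diagonal line from top left to bottom right
img1 <- matrix(1, 32, 32)   # white 32x32 grayscale image, values in [0,1]
diag(img1) <- 0             # black main diagonal
writePNG(img1, "1_01.png")

# Class 2: diagonal line from bottom left to top right
img2 <- matrix(1, 32, 32)
img2[cbind(32:1, 1:32)] <- 0  # black anti-diagonal
writePNG(img2, "2_01.png")
```

In practice you would generate 10 numbered samples per class (e.g. with slightly shifted or thickened lines) so that train and test sets are not identical.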

For SVM classification, simple train and test sets are used here; for more sophisticated problems, n-fold cross validation for finding good parameter settings is recommended instead. If you have not worked with SVMs yet, I recommend reading up on how to start with "good" SVM classification, such as the short and easy-to-read "A Practical Guide to Support Vector Classification" by Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin (the LIBSVM authors).
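For completeness, here is a hedged sketch of such a cross-validated parameter search using `tune.svm()` from the e1071 package. The data and parameter grids are made up for illustration (the post itself sticks to a plain train/test split with default parameters):

```r
library(e1071)

# Synthetic stand-in for the image features: two well-separated 2D classes
set.seed(42)
x <- rbind(matrix(rnorm(40, mean = 0), ncol = 2),
           matrix(rnorm(40, mean = 3), ncol = 2))
y <- factor(rep(c("c1", "c2"), each = 20))

# Grid search over gamma and cost with n-fold cross validation
# (tune.svm uses 10-fold cross validation by default)
tuned <- tune.svm(x, y, gamma = 10^(-2:0), cost = 10^(0:2))
print(tuned$best.parameters)

# Train the final model with the best parameters found
model <- svm(x, y, type = "C",
             gamma = tuned$best.parameters$gamma,
             cost  = tuned$best.parameters$cost)
```

The same pattern applies directly to the image data below: just substitute `train_in`/`train_out` for `x`/`y`.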

Update: added parallel processing using parallel and mclapply for loading image data (for demonstration purposes only; loading 10 images in parallel does not make a big difference ;)).

print('starting svm demo...')

# load required libraries: png for readPNG, parallel for mclapply, e1071 for svm
library(png)
library(parallel)
library(e1071)

# load img data (folder is expected to contain the 20 sample png files)
# note: on Windows, mclapply requires mc.cores=1
folder <- '.'
file_list <- dir(folder, pattern="png")
data <- mclapply(file.path(folder, file_list), readPNG, mc.cores=2)
# extract subject id + img nr from file names of the form <subjectid>_<imgnr>.png,
# renaming subject ids to c1 and c2 for clearer display of results
subject_ids <- sapply(file_list, function(file_name) paste0("c", unlist(strsplit(file_name, "_"))[1]))
img_ids <- sapply(file_list, function(file_name) as.numeric(unlist(strsplit(unlist(strsplit(file_name, "_"))[2], "\\."))[1]))

# specify which data should be used as test and train by the img nrs
train_test_border <- 7
# split data into train and test, and bring into array form to feed to svm
train_in <- t(array(unlist(data[img_ids < train_test_border]), dim=c(length(unlist(data[1])), sum(img_ids < train_test_border))))
train_out <- factor(subject_ids[img_ids < train_test_border])
test_in <- t(array(unlist(data[img_ids >= train_test_border]), dim=c(length(unlist(data[1])), sum(img_ids >= train_test_border))))
test_out <- factor(subject_ids[img_ids >= train_test_border])

# train svm - try out different kernels + settings here
svm_model <- svm(train_in, train_out, type='C', kernel='linear')

# evaluate svm
p <- predict(svm_model, train_in)
print(table(p, train_out))
p <- predict(svm_model, test_in)
print(table(p, test_out))

print('svm demo done!')

Face detection with JavaCV and different haarcascades on Android

November 5, 2011

UPDATE (2015)

The pan shot face recognition prototype from 2013 (see below) has been embedded in the prototypical face module of the mobilesec Android authentication framework. The face module uses 2D frontal-only face detection and authentication, but additionally showcases pan shot face detection and authentication. It currently uses Android 4.4 and OpenCV 2.4.10 for Android. In addition to the functionality of the old prototype, the module features (among others) KNN/SVM classification, with training and classification both done on the device, more detailed settings that can be changed and played with, and direct access to the authentication data stored on the file system in order to manage it (as the whole thing is still a demo/showcase).

Face module of the mobilesec Android authentication framework:

To cite the face authentication module please again use my master thesis:

Findling, R. D. Pan Shot Face Unlock: Towards Unlocking Personal Mobile Devices using Stereo Vision and Biometric Face Information from multiple Perspectives. Department of Mobile Computing, School of Informatics, Communication and Media, University of Applied Sciences Upper Austria, 2013

UPDATE (2014)

In 2013 I finished my master thesis about the pan shot face unlock. As part of the thesis I prototypically implemented several face detection and recognition prototypes, including a pan shot face recognition prototype for Android 4.3, using OpenCV 2.4.8 for Android. This prototype features the same functionality as the old face detection demo described in this post, but extends it by face recognition based on KNN or SVM, with training and classification both done on the device. For this reason you should stick to the new code available in the following repository:

Details on the background of the prototype are available in my master thesis:

Findling, R. D. Pan Shot Face Unlock: Towards Unlocking Personal Mobile Devices using Stereo Vision and Biometric Face Information from multiple Perspectives. Department of Mobile Computing, School of Informatics, Communication and Media, University of Applied Sciences Upper Austria, 2013

UPDATE (2013)

OpenCV now features Android support natively. Therefore you should start with OpenCV for Android and add other haar or LBP cascades there (as also done in this post).

What is HaarCascadeTypes supposed to do?
The Android app “HaarCascadeTypes” extends the “FacePreview” demo from the JavaCV project homepage. It is a very small app that demonstrates which of the standard OpenCV haarcascades detect which types of faces (frontal, profile, …). As it is only a demo, it is not optimized in any way (e.g. the apk is quite big).

The Application
The pictures below show which types of faces are detected by which haarcascades of OpenCV. The frame colors indicate which haarcascade detected the face:

  • Red: haarcascade_frontalface_alt.xml
  • Green: haarcascade_frontalface_alt2.xml
  • Blue: haarcascade_frontalface_alt_tree.xml
  • Yellow: haarcascade_frontalface_default.xml
  • White: haarcascade_profileface.xml

You can download either the final apk or the complete source code of the project. In the apk only two classifiers are enabled: one for frontal and one for profile face detection. Note that detection is rather slow, as these two face detections are done separately. The Android Java part of the source is also attached at the bottom of the post for quick review. Important: the OpenCV libraries delivered with the source and apk only work on Android < 4.x. If you want to use it on Android 4.x, you will have to get precompiled OpenCV libraries elsewhere or compile them yourself, which is obviously not the easiest task.
apk download, md5: 054292522a2062a3c6b9c6a4664a727e, sha1: 41759a699a2a1adf2e6ce3443ac427d32aae0aab
source download, md5: 78b67179e5e87ed6b1b2634c1b3f9d23, sha1: 71484d13f73ea37c0a73bd2c39aa5b30a3b27fe0

Compiling the source
There are several things to consider when compiling the source yourself, e.g. you need a working Android build environment. A detailed description of how to get JavaCV working on Android is available at the JavaCV project homepage.

The Android-Java part of the source (containing the JavaCV-API calls):

/*
 * Copyright (C) 2010,2011 Samuel Audet
 *
 * FacePreview - A fusion of OpenCV's facedetect and Android's CameraPreview
 * samples, with JavaCV + JavaCPP as the glue in between.
 *
 * This file was based on one that came with the samples for
 * Android SDK API 8, revision 1 and contained the following copyright notice:
 *
 * Copyright (C) 2007 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not
 * use this file except in compliance with the License. You may obtain a copy of
 * the License at
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations under
 * the License.
 *
 * IMPORTANT - Make sure your AndroidManifest.xml file includes the following:
 */

package com.googlecode.javacv.facepreview;

import static com.googlecode.javacv.cpp.opencv_core.IPL_DEPTH_8U;
import static com.googlecode.javacv.cpp.opencv_core.cvGetSeqElem;
import static com.googlecode.javacv.cpp.opencv_core.cvLoad;
import static com.googlecode.javacv.cpp.opencv_objdetect.CV_HAAR_DO_CANNY_PRUNING;
import static com.googlecode.javacv.cpp.opencv_objdetect.cvHaarDetectObjects;

import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.List;

import android.app.Activity;
import android.app.AlertDialog;
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.ImageFormat;
import android.graphics.Paint;
import android.hardware.Camera;
import android.hardware.Camera.Size;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import android.view.Window;
import android.view.WindowManager;
import android.widget.FrameLayout;

import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.cpp.opencv_core;
import com.googlecode.javacv.cpp.opencv_objdetect;
import com.googlecode.javacv.cpp.opencv_core.CvMemStorage;
import com.googlecode.javacv.cpp.opencv_core.CvRect;
import com.googlecode.javacv.cpp.opencv_core.CvSeq;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
import com.googlecode.javacv.cpp.opencv_objdetect.CvHaarClassifierCascade;

// ----------------------------------------------------------------------

public class FacePreview extends Activity {

	private FrameLayout	layout;
	private FaceView	faceView;
	private Preview		mPreview;

	@Override
	protected void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);

		// Hide the window title.
		requestWindowFeature(Window.FEATURE_NO_TITLE);
		getWindow().addFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN);

		// Create our Preview view and set it as the content of our activity.
		try {
			layout = new FrameLayout(this);
			faceView = new FaceView(this);
			mPreview = new Preview(this, faceView);
			layout.addView(mPreview);
			layout.addView(faceView);
			setContentView(layout);
		} catch (IOException e) {
			new AlertDialog.Builder(this).setMessage(e.getMessage()).create().show();
		}
	}
}

// ----------------------------------------------------------------------

class FaceView extends View implements Camera.PreviewCallback {
	public static final int	SUBSAMPLING_FACTOR	= 4;

	private IplImage		grayImage;

	public static enum Feature {
		FRONTALFACE_ALT, FRONTALFACE_ALT2, FRONTALFACE_ALT_TREE, FRONTALFACE_DEFAULT, PROFILEFACE
	}

	private static HashMap<Feature, String>	mClassifierFiles	= new HashMap<Feature, String>();
	private static String					mClassifierPrefix	= "/com/googlecode/javacv/facepreview/";
	static {
		mClassifierFiles.put(Feature.FRONTALFACE_ALT, mClassifierPrefix + "haarcascade_frontalface_alt.xml");
		mClassifierFiles.put(Feature.FRONTALFACE_ALT2, mClassifierPrefix + "haarcascade_frontalface_alt2.xml");
		mClassifierFiles.put(Feature.FRONTALFACE_ALT_TREE, mClassifierPrefix + "haarcascade_frontalface_alt_tree.xml");
		mClassifierFiles.put(Feature.FRONTALFACE_DEFAULT, mClassifierPrefix + "haarcascade_frontalface_default.xml");
		mClassifierFiles.put(Feature.PROFILEFACE, mClassifierPrefix + "haarcascade_profileface.xml");
	}

	private HashMap<Feature, CvSeq>						mFaces			= new HashMap<Feature, CvSeq>();
	private HashMap<Feature, CvMemStorage>				mStorages		= new HashMap<Feature, CvMemStorage>();
	private HashMap<Feature, CvHaarClassifierCascade>	mClassifiers	= new HashMap<Feature, CvHaarClassifierCascade>();

	public FaceView(FacePreview context) throws IOException {
		super(context);

		// Preload the opencv_objdetect module to work around a known bug.
		Loader.load(opencv_objdetect.class);

		for (Feature f : mClassifierFiles.keySet()) {
			File classifierFile = Loader.extractResource(getClass(), mClassifierFiles.get(f), context.getCacheDir(),
					"classifier", ".xml");
			if (classifierFile == null || classifierFile.length() <= 0) {
				throw new IOException("Could not extract the classifier file from Java resource.");
			}
			mClassifiers.put(f, new CvHaarClassifierCascade(cvLoad(classifierFile.getAbsolutePath())));
			if (mClassifiers.get(f).isNull()) {
				throw new IOException("Could not load the classifier file.");
			}
			mStorages.put(f, CvMemStorage.create());
		}
	}

	public void onPreviewFrame(final byte[] data, final Camera camera) {
		try {
			Camera.Size size = camera.getParameters().getPreviewSize();
			processImage(data, size.width, size.height);
		} catch (RuntimeException e) {
			// The camera has probably just been released, ignore.
		}
	}

	protected void processImage(byte[] data, int width, int height) {
		// First, downsample our image and convert it into a grayscale IplImage
		int f = SUBSAMPLING_FACTOR;
		if (grayImage == null || grayImage.width() != width / f || grayImage.height() != height / f) {
			grayImage = IplImage.create(width / f, height / f, IPL_DEPTH_8U, 1);
		}
		int imageWidth = grayImage.width();
		int imageHeight = grayImage.height();
		int dataStride = f * width;
		int imageStride = grayImage.widthStep();
		ByteBuffer imageBuffer = grayImage.getByteBuffer();
		for (int y = 0; y < imageHeight; y++) {
			int dataLine = y * dataStride;
			int imageLine = y * imageStride;
			for (int x = 0; x < imageWidth; x++) {
				imageBuffer.put(imageLine + x, data[dataLine + f * x]);
			}
		}

		// Run every enabled haarcascade on the downsampled gray image
		for (Feature feat : mClassifierFiles.keySet()) {
			mFaces.put(feat, cvHaarDetectObjects(grayImage, mClassifiers.get(feat), mStorages.get(feat), 1.1, 3,
					CV_HAAR_DO_CANNY_PRUNING));
		}
		postInvalidate();
	}

	@Override
	protected void onDraw(Canvas canvas) {
		Paint paint = new Paint();
		paint.setColor(Color.WHITE);
		paint.setTextSize(20);

		String s = "FacePreview - This side up.";
		float textWidth = paint.measureText(s);
		canvas.drawText(s, (getWidth() - textWidth) / 2, 20, paint);

		paint.setStyle(Paint.Style.STROKE);
		for (Feature f : mClassifierFiles.keySet()) {
			if (mFaces.get(f) != null) {
				paint.setColor(featureColor(f));
				float scaleX = (float) getWidth() / grayImage.width();
				float scaleY = (float) getHeight() / grayImage.height();
				int total = mFaces.get(f).total();
				for (int i = 0; i < total; i++) {
					CvRect r = new CvRect(cvGetSeqElem(mFaces.get(f), i));
					int x = r.x(), y = r.y(), w = r.width(), h = r.height();
					canvas.drawRect(x * scaleX, y * scaleY, (x + w) * scaleX, (y + h) * scaleY, paint);
				}
			}
		}
	}

	private int featureColor(Feature _f) {
		switch (_f) {
			case FRONTALFACE_ALT:
				return Color.RED;
			case FRONTALFACE_ALT2:
				return Color.GREEN;
			case FRONTALFACE_ALT_TREE:
				return Color.BLUE;
			case FRONTALFACE_DEFAULT:
				return Color.YELLOW;
			case PROFILEFACE:
				return Color.WHITE;
			default:
				throw new NullPointerException("no color defined for this feature type: " + _f);
		}
	}
}

// ----------------------------------------------------------------------

class Preview extends SurfaceView implements SurfaceHolder.Callback {
	SurfaceHolder			mHolder;
	Camera					mCamera;
	Camera.PreviewCallback	previewCallback;

	Preview(Context context, Camera.PreviewCallback previewCallback) {
		super(context);
		this.previewCallback = previewCallback;

		// Install a SurfaceHolder.Callback so we get notified when the
		// underlying surface is created and destroyed.
		mHolder = getHolder();
		mHolder.addCallback(this);
		mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
	}

	public void surfaceCreated(SurfaceHolder holder) {
		// The Surface has been created, acquire the camera and tell it where
		// to draw.
		mCamera = Camera.open();
		try {
			mCamera.setPreviewDisplay(holder);
		} catch (IOException exception) {
			mCamera.release();
			mCamera = null;
			// TODO: add more exception handling logic here
		}
	}

	public void surfaceDestroyed(SurfaceHolder holder) {
		// Surface will be destroyed when we return, so stop the preview.
		// Because the CameraDevice object is not a shared resource, it's very
		// important to release it when the activity is paused.
		mCamera.stopPreview();
		mCamera.release();
		mCamera = null;
	}

	private Size getOptimalPreviewSize(List<Size> sizes, int w, int h) {
		final double ASPECT_TOLERANCE = 0.05;
		double targetRatio = (double) w / h;
		if (sizes == null)
			return null;

		Size optimalSize = null;
		double minDiff = Double.MAX_VALUE;
		int targetHeight = h;

		// Try to find a size that matches both aspect ratio and height
		for (Size size : sizes) {
			double ratio = (double) size.width / size.height;
			if (Math.abs(ratio - targetRatio) > ASPECT_TOLERANCE)
				continue;
			if (Math.abs(size.height - targetHeight) < minDiff) {
				optimalSize = size;
				minDiff = Math.abs(size.height - targetHeight);
			}
		}

		// Cannot find one matching the aspect ratio, ignore the requirement
		if (optimalSize == null) {
			minDiff = Double.MAX_VALUE;
			for (Size size : sizes) {
				if (Math.abs(size.height - targetHeight) < minDiff) {
					optimalSize = size;
					minDiff = Math.abs(size.height - targetHeight);
				}
			}
		}
		return optimalSize;
	}

	public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
		// Now that the size is known, set up the camera parameters and begin
		// the preview.
		Camera.Parameters parameters = mCamera.getParameters();

		List<Size> sizes = parameters.getSupportedPreviewSizes();
		Size optimalSize = getOptimalPreviewSize(sizes, w, h);
		parameters.setPreviewSize(optimalSize.width, optimalSize.height);
		mCamera.setParameters(parameters);

		if (previewCallback != null) {
			mCamera.setPreviewCallbackWithBuffer(previewCallback);
			Camera.Size size = parameters.getPreviewSize();
			byte[] data = new byte[size.width * size.height * ImageFormat.getBitsPerPixel(parameters.getPreviewFormat()) / 8];
			mCamera.addCallbackBuffer(data);
		}
		mCamera.startPreview();
	}
}