When Google released its Face Detection library as part of the Mobile Vision API in Android, it was designed to detect faces at different orientations and to locate specific landmarks such as the eyes, the nose, and the edges of the lips.

However, that was some four or five years ago. The Mobile Vision API has since been folded into the Firebase ML Kit, and the old APIs are now deprecated.

Nonetheless, let me show you how to quickly build a Face Detector app in Android using the Face Detection API.

Note: Face Detection != Facial Recognition

First, let’s make sure our environment is ready: check that you have Android Studio with the latest SDK, and the Google Play Services SDK.

So, add these dependencies to your app-level build.gradle file:

implementation 'com.android.support:appcompat-v7:28.0.0'
implementation 'com.google.android.gms:play-services:11.0.4'
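As an aside, the full play-services dependency is quite large. If you only need the vision APIs, it should be possible (assuming you keep the same version as above) to depend on just the vision module instead:

```
implementation 'com.google.android.gms:play-services-vision:11.0.4'
```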

Our project is now fully configured. We’ll build a UI to take an image from the gallery, detect a face in the image, and then overlay that face with a bounding box. We’ll also check for cases where there’s more than one face, and prompt a dialog.

Now, navigate to “content_main” (or “activity_main”, depending on how your layout is set up) in the layout folder, and replace the “TextView” node with this:

<ImageView
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:id="@+id/imgview"/>

Next, inside the manifest file, add the following meta-data under the application tag to ensure the face detection library is available:

<meta-data
   android:name="com.google.android.gms.vision.DEPENDENCIES"
   android:value="face" />

Also, include the external storage permission (note that on Android 6.0 and above, dangerous permissions like this one must additionally be requested at runtime):

<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

Now, let’s build the app. Inside the MainActivity, first add the extra imports the app will be using:

import android.app.AlertDialog;
import android.content.Intent;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.RectF;
import android.graphics.drawable.BitmapDrawable;
import android.net.Uri;
import android.util.Log;
import android.util.SparseArray;
import android.widget.ImageView;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

import java.io.FileNotFoundException;
import java.io.InputStream;

Next, inside the onCreate method, replace the FloatingActionButton’s onClick method with the following:

imageView = findViewById(R.id.imgview); // declare "private ImageView imageView;" as a field

FloatingActionButton fab = findViewById(R.id.fab);
fab.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        Intent pickPhoto = new Intent(Intent.ACTION_PICK,
                android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
        startActivityForResult(pickPhoto, 0);
    }
});

This action fires an intent to the media store to select an image, and the selected image’s URI comes back through the onActivityResult method. See that below:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent imageReturnedIntent) {
   super.onActivityResult(requestCode, resultCode, imageReturnedIntent);
   switch (requestCode) {
       case 0:
           if (resultCode == RESULT_OK) {
               Uri selectedImage = imageReturnedIntent.getData();
               try {
                   setUpFaceDetector(selectedImage);
               } catch (FileNotFoundException e) {
                   e.printStackTrace();
               }
           }
           break;
   }
}

Notice the setUpFaceDetector method? Here it is:

private void setUpFaceDetector(Uri selectedImage) throws FileNotFoundException {
   FaceDetector faceDetector = new
           FaceDetector.Builder(getApplicationContext()).setTrackingEnabled(false)
           .build();
   if(!faceDetector.isOperational()){
       new AlertDialog.Builder(this).setMessage("Could not set up the face detector!").show();
       return;
   }

    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inMutable = true;

    InputStream ims = getContentResolver().openInputStream(selectedImage);

    // Pass the options along so the decoded bitmap is actually mutable
    Bitmap myBitmap = BitmapFactory.decodeStream(ims, null, options);

    Frame frame = new Frame.Builder().setBitmap(myBitmap).build();
    SparseArray<Face> faces = faceDetector.detect(frame);
    faceDetector.release(); // free the native detector once we have the results

    Log.d("TEST", "Num faces = " + faces.size());

    detectedResponse(myBitmap, faces);
}

The setUpFaceDetector method creates a new FaceDetector object using its builder, checks that the detector is operational, and then uses BitmapFactory to decode the image from the URI’s input stream into a mutable bitmap.

Now we’re ready to detect faces. We create a frame using the bitmap, then call the detect method on the FaceDetector, using this frame, to get back a SparseArray of Face objects. That’s it.

The detectedResponse method handles whatever was detected.

public void detectedResponse(Bitmap myBitmap, SparseArray<Face> faces) {
   Paint myRectPaint = new Paint();
   myRectPaint.setStrokeWidth(5);
   myRectPaint.setColor(Color.RED);
   myRectPaint.setStyle(Paint.Style.STROKE);

   Bitmap tempBitmap = Bitmap.createBitmap(myBitmap.getWidth(), myBitmap.getHeight(), Bitmap.Config.RGB_565);
   Canvas tempCanvas = new Canvas(tempBitmap);
   tempCanvas.drawBitmap(myBitmap, 0, 0, null);

   for(int i=0; i<faces.size(); i++) {
       Face thisFace = faces.valueAt(i);
       float x1 = thisFace.getPosition().x;
       float y1 = thisFace.getPosition().y;
       float x2 = x1 + thisFace.getWidth();
       float y2 = y1 + thisFace.getHeight();
       tempCanvas.drawRoundRect(new RectF(x1, y1, x2, y2), 2, 2, myRectPaint);
   }

   imageView.setImageDrawable(new BitmapDrawable(getResources(),tempBitmap));

   if (faces.size() < 1) {
       new AlertDialog.Builder(this).setMessage("Hey, there's no face in this photo. You think this is a joke?").show();
   }
   else if (faces.size() == 1) {
       new AlertDialog.Builder(this).setMessage("Okay. Thank you!").show();
   }
   else if (faces.size() > 1) {
        new AlertDialog.Builder(this).setMessage("Hey, there's more than one face in this photo. I wonder why?").show();
   }
}

First, we set up a Paint object for drawing on the image, with a stroke width of 5 pixels and a style of STROKE (so only the outline is drawn instead of filling the rectangle). Next, we create a temporary bitmap the same size as the original, and a canvas over it onto which we draw the original bitmap.

Next, we iterate through the SparseArray of Faces we passed as an argument to get the coordinates of each face’s bounding rectangle. Note that the API returns the x,y coordinates of the top-left corner along with the width and height, while a rectangle requires the top-left and bottom-right corners, so we compute the bottom-right from the top-left, width, and height.
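That corner arithmetic is easy to get wrong, so here it is on its own as a minimal sketch in plain Java (the class name and sample numbers are hypothetical, not part of the app):

```java
public class BoundsDemo {
    // Convert a top-left corner plus width/height into the four
    // values a RectF expects: left, top, right, bottom.
    static float[] toCorners(float x, float y, float width, float height) {
        return new float[] { x, y, x + width, y + height };
    }

    public static void main(String[] args) {
        // A face reported at (40, 60) that is 120 wide and 150 tall
        // spans from (40, 60) down to (160, 210).
        float[] r = toCorners(40f, 60f, 120f, 150f);
        System.out.println(r[0] + " " + r[1] + " " + r[2] + " " + r[3]);
    }
}
```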

Then, we set the imageView to our new bitmap.

And finally, we count the detected faces and show a dialog for each case: no face, one face, or multiple faces.
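If you prefer to keep that three-way check out of the UI code, it can be pulled into a small helper like the sketch below (plain Java; the class name and message strings are placeholders, not the app’s exact dialogs):

```java
public class FaceCountMessage {
    // Pick a dialog message based on how many faces were detected.
    static String messageFor(int faceCount) {
        if (faceCount < 1) {
            return "No face detected.";
        } else if (faceCount == 1) {
            return "One face detected.";
        } else {
            return "Multiple faces detected.";
        }
    }

    public static void main(String[] args) {
        System.out.println(messageFor(0));
        System.out.println(messageFor(1));
        System.out.println(messageFor(3));
    }
}
```

The activity would then call this helper with faces.size() and hand the result to the AlertDialog builder.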

See the full code on GitHub: github.com/OlayinkaPeter/FaceDetector