You’re cropping images wrong

Learn how to use face-detection-based cropping in your Android app in 5 minutes

It has happened to all of us. Your app needs to display third-party-generated images within a strictly defined UI layout. The easiest way to handle this is to center-fit the image and crop the sides that fall out of bounds. Sometimes it looks good; sometimes it does not.

Ugly thumbnails

The result of center-cropping images is ugly thumbnails. Below is the official screenshot of the Google News app, taken from the Google Play Store. Note how the heads are cut off midway.

Google News app official screenshot on Google Play Store

Another example comes from Yahoo News (Hong Kong edition). Not surprisingly, the app screenshot has the same ugly-thumbnail issue: a large portion of the face is cut off.

Yahoo News (Hong Kong edition) screenshot on Google Play Store

The proper way

Every image is different. Some contain faces; some contain animals or commercial products. Some “smart” content-aware processing is needed to understand the image contents and preserve the points of interest after cropping. Content-based intelligent cropping is what you might need.

Whenever possible, you should use an online service to handle image cropping for you, such as this one. However, there are situations where an on-device solution is a better choice. An obvious example is a camera app, which can automatically focus on detected faces by tracking them in real time.

Before building full-blown iOS, Android, and web applications with an online service of your choice, why not spend 5 minutes creating a mock-up to impress your boss?

We will be using this project to illustrate how simple it is to crop images by centering the detected faces:

Google offers a ready-to-use SDK for on-device face detection in real time. Detection is fast, ranging from a few tens of milliseconds for small images to about 100 ms for large ones.

The implementation

implementation 'com.google.android.gms:play-services-vision:11.6.2'

Here we use Google Play Services SDK version 11.6.2, the latest version as of this writing, but you should use the latest version available to you.

detector = new FaceDetector.Builder(context)
        .setMinFaceSize(0.1f) // example value: ignore faces narrower than 10% of the image width
        .setTrackingEnabled(false)
        .build();

There are multiple parameters we can use to customize our face detector, but the most useful ones are:

  • setMinFaceSize — the minimum ratio of a detectable face’s width to the width of the image
  • setTrackingEnabled — we disable face tracking, since we detect faces in independent still images; this also improves detection speed

private Collection<PointF> findFaceCenters(@NonNull final Bitmap bitmap) {
    if (this.detector.isOperational()) {
        final SparseArray<Face> faces = this.detector.detect(
                new Frame.Builder().setBitmap(bitmap).build());

        final Collection<PointF> centers = new ArrayList<>();
        for (int i = 0; i < faces.size(); i++) {
            // The face position is its top-left corner, so offset by half its size.
            final Face face = faces.get(faces.keyAt(i));
            centers.add(new PointF(face.getPosition().x + face.getWidth() / 2f,
                    face.getPosition().y + face.getHeight() / 2f));
        }

        return centers;
    }

    return Collections.emptyList();
}

The centers of all faces can be obtained as simply as passing an image to the detect() method. Note that this will likely take more than 16 ms, so you should do it on a background thread.
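One common pattern for keeping the detection off the main thread is to push the work onto a single-threaded executor. Here is a minimal plain-Java sketch of that pattern; detectFaces() is a hypothetical stand-in, since in the real app it would decode the file and call findFaceCenters() on the bitmap:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DetectOffMainThread {

    // Hypothetical stand-in for the expensive detection call.
    static String detectFaces(final String imagePath) {
        return "center-of:" + imagePath;
    }

    // Runs detection on a worker thread. Blocking on get() here is only
    // for demonstration; an app would post the result back to the main
    // thread instead.
    public static String runDetection(final String imagePath) {
        final ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            return executor.submit(() -> detectFaces(imagePath)).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            executor.shutdown();
        }
    }

    public static void main(final String[] args) {
        System.out.println(runDetection("photo.jpg")); // center-of:photo.jpg
    }
}
```

On Android you could equally use whatever async facility your app already standardizes on; the point is only that detect() never runs on the UI thread.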

private static int findDownSamplingScale(@NonNull final File file) {
    // Decode only the image bounds, not the pixels.
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeFile(file.getAbsolutePath(), options);

    int width = options.outWidth;
    int height = options.outHeight;
    int scale = 1;

    while (width * height > FACE_DETECTION_SIZE_MAX) {
        width /= 2;
        height /= 2;
        scale *= 2;
    }

    return scale;
}

To avoid getting OutOfMemoryError, you should always check if your application can handle the size of the input image before passing it to the face detector.

We limit the maximum area of the input image, e.g. to 640 × 640 pixels. Thanks to the high accuracy of the detector, a high-resolution image is not required. Using a smaller image not only prevents running out of memory but also improves detection speed.

findDownSamplingScale() returns the power-of-two down-sampling scale, or 1 if down-sampling is not required. We will need this calculated scale at a later stage.
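To see the arithmetic concretely, here is the same halving loop as a pure-Java function that takes the dimensions directly instead of decoding a file, using the 640 × 640 = 409,600-pixel cap from above. A 4000 × 3000 photo gets halved three times, so it would be decoded with inSampleSize = 8, at 500 × 375 pixels:

```java
public class DownSampling {

    // Example cap from the article: 640 x 640 pixels.
    static final int FACE_DETECTION_SIZE_MAX = 640 * 640;

    // Same halving loop as findDownSamplingScale(), minus the file decode.
    public static int scaleFor(int width, int height) {
        int scale = 1;
        while (width * height > FACE_DETECTION_SIZE_MAX) {
            width /= 2;
            height /= 2;
            scale *= 2;
        }
        return scale;
    }

    public static void main(final String[] args) {
        System.out.println(scaleFor(4000, 3000)); // 8: decoded at 500 x 375
        System.out.println(scaleFor(640, 640));   // 1: already small enough
    }
}
```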

PointF findFaceCenter(@NonNull final File file) {
    final int scale = findDownSamplingScale(file);
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inSampleSize = scale;
    final Bitmap bitmap = BitmapFactory.decodeFile(file.getAbsolutePath(), options);

    final Collection<PointF> centers = findFaceCenters(bitmap);
    if (centers.isEmpty()) {
        // No face found: fall back to the image center, mapped back to the
        // original (non-down-sampled) coordinates.
        return new PointF(scale * bitmap.getWidth() / 2f, scale * bitmap.getHeight() / 2f);
    }

    float sumX = 0f;
    float sumY = 0f;
    for (final PointF center : centers) {
        sumX += center.x;
        sumY += center.y;
    }

    // Map the averaged center back to the original image coordinates.
    return new PointF(scale * sumX / centers.size(), scale * sumY / centers.size());
}

In this example, we ignore the size of the detected faces and simply calculate the center of all face centers, which is good enough in most cases. A better, but slower, approach could be to calculate the area-weighted center of the faces.
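For completeness, the area-weighted variant boils down to a weighted average: larger faces pull the final center toward themselves. A sketch of just that arithmetic (the parallel-array signature is mine for illustration; in the real code you would read the positions and sizes off the Face objects):

```java
public class WeightedCenter {

    // Area-weighted average of face centers. xs/ys hold the face-center
    // coordinates and areas holds the corresponding face areas
    // (width * height).
    public static float[] areaWeightedCenter(final float[] xs, final float[] ys, final float[] areas) {
        float sumX = 0f;
        float sumY = 0f;
        float total = 0f;
        for (int i = 0; i < xs.length; i++) {
            sumX += xs[i] * areas[i];
            sumY += ys[i] * areas[i];
            total += areas[i];
        }
        return new float[] { sumX / total, sumY / total };
    }

    public static void main(final String[] args) {
        // A small face at (0, 0) and a face with three times its area at (10, 10):
        // the result is pulled toward the bigger face.
        final float[] center = areaWeightedCenter(
                new float[] { 0f, 10f }, new float[] { 0f, 10f }, new float[] { 1f, 3f });
        System.out.println(center[0] + ", " + center[1]); // 7.5, 7.5
    }
}
```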

With the center of centers obtained, we can translate the position of the image in your ImageView, or any other image UI component. In this example project, Subsampling Scale Image View is used, which lets you manipulate the image scale and the image center position, as well as many other useful methods.
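Whatever image component you use, the translation reduces to the same clamped arithmetic: center the visible window on the face center, then keep the window inside the image bounds. A sketch of that calculation for one axis (the names are mine, not part of any view's API; the same formula applies vertically):

```java
public class CropOffset {

    // Top-left x of a crop window of cropWidth pixels, centered on
    // faceCenterX but clamped so the window stays within [0, imageWidth].
    public static int cropLeft(final int imageWidth, final int cropWidth, final float faceCenterX) {
        final int desired = Math.round(faceCenterX - cropWidth / 2f);
        return Math.max(0, Math.min(desired, imageWidth - cropWidth));
    }

    public static void main(final String[] args) {
        System.out.println(cropLeft(1000, 400, 500f)); // 300: window centered on the face
        System.out.println(cropLeft(1000, 400, 100f)); // 0: clamped to the left edge
        System.out.println(cropLeft(1000, 400, 950f)); // 600: clamped to the right edge
    }
}
```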

When you are done, you should call FaceDetector.release(), or you will get a warning message about your application leaking resources. You can do this in your Activity’s onDestroy() rather than calling release() after every detection run.

What’s the point?

I hope you can now create beautiful, smart apps with this little trick I’ve just illustrated. Content-based image cropping is about not delivering ugly content to your users. Don’t be like Yahoo! Never create ugly apps again!