After detecting a face with CameraX and ML Kit I need to pass the image to a custom TFLite model (I'm using this one), which detects whether a face mask is worn. The model accepts images of 224x224 pixels, so I need to extract the part of ImageProxy#getImage() corresponding to Face#getBoundingBox() and resize it accordingly.
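In other words, I'm trying to end up with something like this (just a rough sketch of the crop-and-resize step, assuming the frame has already been converted to a full-size Bitmap; cropAndResize is only a name I made up, and the clamping is there because the bounding box can extend past the frame edges):

import android.graphics.Bitmap
import android.graphics.Rect

// Sketch: crop a full-frame Bitmap to the face bounding box,
// then scale the result to the 224x224 input size the model expects.
fun cropAndResize(frame: Bitmap, boundingBox: Rect): Bitmap {
    // Clamp the box to the frame bounds before cropping
    val left = boundingBox.left.coerceAtLeast(0)
    val top = boundingBox.top.coerceAtLeast(0)
    val right = boundingBox.right.coerceAtMost(frame.width)
    val bottom = boundingBox.bottom.coerceAtMost(frame.height)
    val cropped = Bitmap.createBitmap(frame, left, top, right - left, bottom - top)
    return Bitmap.createScaledBitmap(cropped, 224, 224, true)
}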
I've seen this answer, which could have worked, but ThumbnailUtils.extractThumbnail() doesn't take a Rect of 4 coordinates: it only crops relative to the center of the image, while the face's bounding box might be anywhere in the frame.
The TFLite model accepts inputs like this:
val inputFeature0 = TensorBuffer
    .createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
// loadBuffer() returns Unit, so it has to be called separately rather than chained into the assignment
inputFeature0.loadBuffer(/* the resized image as ByteBuffer */)
Note that the ByteBuffer will have a size of 224 * 224 * 3 * 4 bytes (where 4 is DataType.FLOAT32.byteSize()).
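Something like this is how I imagine that buffer gets filled (a rough sketch only: bitmapToFloatBuffer is just a name I made up, and it assumes the model wants RGB channels normalized to [0, 1], which I haven't confirmed for this particular model):

import android.graphics.Bitmap
import android.graphics.Color
import java.nio.ByteBuffer
import java.nio.ByteOrder
import org.tensorflow.lite.DataType

// Sketch: convert a 224x224 ARGB_8888 Bitmap into a FLOAT32 buffer,
// assuming the model expects RGB channels normalized to [0, 1]
fun bitmapToFloatBuffer(bitmap: Bitmap): ByteBuffer {
    val buffer = ByteBuffer
        .allocateDirect(224 * 224 * 3 * DataType.FLOAT32.byteSize())
        .order(ByteOrder.nativeOrder())
    val pixels = IntArray(224 * 224)
    bitmap.getPixels(pixels, 0, 224, 0, 0, 224, 224)
    for (pixel in pixels) {
        buffer.putFloat(Color.red(pixel) / 255f)
        buffer.putFloat(Color.green(pixel) / 255f)
        buffer.putFloat(Color.blue(pixel) / 255f)
    }
    buffer.rewind()
    return buffer
}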
Edit: I've cleaned up some of the old text because it was getting overwhelming. The code suggested below actually works: I had just forgotten to delete a piece of my own code that was already converting the same ImageProxy to a Bitmap. That leftover conversion must have read some internal buffer to the end, so I either had to rewind it manually or delete the redundant code altogether.
However, even though the cropRect is applied to both the ImageProxy and the underlying Image, the resulting Bitmap is still full size, so there must be something else to do. The model is also still returning NaN values, so I'm going to experiment with the raw output for a while.
fun hasMask(imageProxy: ImageProxy, boundingBox: Rect): Boolean {
    val model = MaskDetector.newInstance(context)
    val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)

    // The cropRect is now set correctly on both objects, but the image itself
    // isn't cropped before being converted to a Bitmap
    imageProxy.setCropRect(boundingBox)
    imageProxy.image?.cropRect = boundingBox
    val bitmap = BitmapUtils.getBitmap(imageProxy) ?: return false
    val resized = Bitmap.createScaledBitmap(bitmap, 224, 224, false)

    // Input for the model: a direct buffer in native byte order, sized for 224 * 224 * 3 floats
    val buffer = ByteBuffer
        .allocateDirect(224 * 224 * 3 * DataType.FLOAT32.byteSize())
        .order(ByteOrder.nativeOrder())
    resized.copyPixelsToBuffer(buffer)
    buffer.rewind() // copyPixelsToBuffer() advances the position, so rewind before loading
    inputFeature0.loadBuffer(buffer)

    // Use the model and get the result as 2 floats
    val outputFeature0 = model.process(inputFeature0).outputFeature0AsTensorBuffer
    val maskProbability = outputFeature0.floatArray[0]
    val noMaskProbability = outputFeature0.floatArray[1]
    model.close()
    return maskProbability > noMaskProbability
}