I am currently developing an app in Kotlin that uses the Azure Face API. To identify faces in images I need to send the image to the server. I use Retrofit 2.7.0 for the REST requests. Whenever I search for how to send an image with Retrofit, I come across the @Multipart annotation, for example here or here. None of those questions explain why they use it; as far as I can tell, multipart/form-data is the standard way to send files over HTTP.
However, I do not seem to need it for my request. The simple approach appears to work just fine. Since everyone else seems to use multipart, I am probably missing something. So my question is: why would I need to use @Multipart over the simple approach below? (For comparison, I have sketched what I assume the multipart version would look like at the end of this question.)
I currently use this approach:
interface FaceAPI {
    // Sends the raw image bytes as application/octet-stream in the request body
    @Headers(value = ["$CONTENT_TYPE_HEADER: $CONTENT_TYPE_OCTET_STREAM"])
    @POST("face/v1.0/detect")
    suspend fun detectFace(
        @Query("recognitionModel") recognitionModel: String = RECOGNITION_MODEL_2,
        @Query("detectionModel") detectionModel: String = DETECTION_MODEL_2,
        @Query("returnRecognitionModel") returnRecognitionModel: Boolean = false,
        @Query("returnFaceId") returnFaceId: Boolean = true,
        @Query("returnFaceLandmarks") returnFaceLandmarks: Boolean = false,
        @Header(HEADER_SUBSCRIPTION_KEY) subscriptionKey: String = SubscriptionKeyProvider.getSubscriptionKey(),
        @Body image: RequestBody
    ): Array<DetectResponse>
}
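For completeness, the constants referenced above are defined roughly like this (the header name and model identifiers follow the Azure docs; the constant names are from my project):

const val CONTENT_TYPE_HEADER = "Content-Type"
const val CONTENT_TYPE_OCTET_STREAM = "application/octet-stream"
const val HEADER_SUBSCRIPTION_KEY = "Ocp-Apim-Subscription-Key"
const val RECOGNITION_MODEL_2 = "recognition_02"
const val DETECTION_MODEL_2 = "detection_02"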
And then I call it like this:
import java.io.InputStream
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import okhttp3.MediaType.Companion.toMediaTypeOrNull
import okhttp3.RequestBody.Companion.toRequestBody

suspend fun detectFaces(image: InputStream): Array<DetectResponse> {
    return withContext(Dispatchers.IO) {
        // readAllBytes() requires Java 9+ (a recent API level on Android)
        val bytes = image.readAllBytes()
        val body = bytes.toRequestBody(CONTENT_TYPE_OCTET_STREAM.toMediaTypeOrNull(), 0, bytes.size)
        val faceApi = ApiFactory.createFaceAPI()
        faceApi.detectFace(image = body)
    }
}
This code works for images up to the 6 MB limit that Azure supports.
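For comparison, here is what I assume the multipart version would look like. This is only a sketch: the part name "image", the file name, and the interface/function names are my own placeholders, not something the Azure docs prescribe:

interface FaceAPIMultipart {
    @Multipart
    @POST("face/v1.0/detect")
    suspend fun detectFaceMultipart(
        @Header(HEADER_SUBSCRIPTION_KEY) subscriptionKey: String = SubscriptionKeyProvider.getSubscriptionKey(),
        @Part image: MultipartBody.Part
    ): Array<DetectResponse>
}

// Wrap the bytes in a named form-data part instead of sending them raw:
val part = MultipartBody.Part.createFormData(
    "image", "image.jpg",
    bytes.toRequestBody(CONTENT_TYPE_OCTET_STREAM.toMediaTypeOrNull())
)

Both variants end up transferring the same bytes; multipart just adds form-data framing (part name, file name, per-part headers) around them.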