I'm trying to run some deep learning experiments on Android with video samples, and I've gotten stuck on remuxing videos. I have a couple of questions to arrange the information in my head :) I have read some pages: https://vec.io/posts/android-hardware-decoding-with-mediacodec and https://bigflake.com/mediacodec/#ExtractMpegFramesTest but I'm still confused.
My questions:
- Can I read a video with `MediaExtractor` and then pass the data to `MediaMuxer` to save it into another file, without using `MediaCodec` at all?
- If I want to modify frames before saving, can I do that without using a `Surface`, just by modifying the `ByteBuffer`? I assume I need to decode the data from `MediaExtractor`, then modify the content, then encode it and write it out with `MediaMuxer`.
- Is a *sample* the same thing as a *frame* in the context of `MediaExtractor::readSampleData`?
- Do I need to decode the sample?
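To make the first question concrete, here is what I imagine a plain passthrough remux (extractor straight into muxer, no `MediaCodec`) would look like. This is only a sketch of my understanding, not tested code — the file paths are placeholders, and I'm assuming the muxer accepts the track formats as-is:

```java
import java.nio.ByteBuffer;
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.media.MediaMuxer;

public class Remuxer {
    // Copy every track from input to output without decoding anything.
    public static void remux(String inputPath, String outputPath) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(inputPath);

        MediaMuxer muxer = new MediaMuxer(outputPath,
                MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        // Map extractor track indices to muxer track indices.
        int trackCount = extractor.getTrackCount();
        int[] muxerTrack = new int[trackCount];
        for (int i = 0; i < trackCount; i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            muxerTrack[i] = muxer.addTrack(format); // may throw for unsupported formats
            extractor.selectTrack(i);
        }

        muxer.start();

        ByteBuffer buffer = ByteBuffer.allocate(1 << 20); // 1 MiB scratch buffer
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();

        // Each readSampleData() call returns one sample, i.e. one encoded
        // access unit (for video, one compressed frame).
        while (true) {
            int size = extractor.readSampleData(buffer, 0);
            if (size < 0) break; // end of stream

            info.offset = 0;
            info.size = size;
            info.presentationTimeUs = extractor.getSampleTime();
            info.flags = ((extractor.getSampleFlags()
                    & MediaExtractor.SAMPLE_FLAG_SYNC) != 0)
                    ? MediaCodec.BUFFER_FLAG_KEY_FRAME : 0;

            muxer.writeSampleData(muxerTrack[extractor.getSampleTrackIndex()],
                    buffer, info);
            extractor.advance();
        }

        muxer.stop();
        muxer.release();
        extractor.release();
    }
}
```

Is this roughly the right shape for a copy-without-reencoding pipeline, or am I missing a required decode step?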