I would like to cut videos using ffmpeg for machine learning.
How can I ensure that cutting, say, 1 s of a 25 fps video yields exactly 25 frames of video synchronized with the corresponding audio?
I have seen that ffmpeg seeks to the nearest keyframe when cutting with stream copy. This caused problems for me: the output had negative timestamps, and the end of the cut video was padded with duplicated frames.
I also understand that the container metadata does not necessarily report the real fps.
So what pipeline would give precise cuts with the exact number of frames, aligned with the audio stream?
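To make the question concrete, here is the kind of experiment I have been running with a synthetic clip (the filenames, timestamps, and codec choices are just placeholders for my real data); I am not sure the re-encoding step is the right approach:

```shell
# Generate a 5 s synthetic test clip: 25 fps video plus a 440 Hz tone.
ffmpeg -y -v error -f lavfi -i testsrc2=rate=25:duration=5 \
  -f lavfi -i sine=frequency=440:duration=5 \
  -c:v libx264 -pix_fmt yuv420p -g 50 -c:a aac -shortest input.mp4

# Stream copy (-c copy) can only cut on keyframes -- this is the kind of
# cut that gave me negative timestamps and padded frames:
ffmpeg -y -v error -ss 2 -i input.mp4 -t 1 -c copy cut_copy.mp4

# Re-encoding lets ffmpeg cut on any frame, not just keyframes:
ffmpeg -y -v error -ss 2 -i input.mp4 -t 1 \
  -c:v libx264 -c:a aac -avoid_negative_ts make_zero cut_exact.mp4

# Count the decoded video frames in the re-encoded cut
# (expecting 25 for 1 s @ 25 fps):
ffprobe -v error -count_frames -select_streams v:0 \
  -show_entries stream=nb_read_frames -of csv=p=0 cut_exact.mp4
```

Is re-encoding like this the only way to get a frame-exact cut, or is there a way to keep stream copy and still guarantee the frame count?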
Thanks!