I am making an MP4 file with two video streams, each from a different webcam, like this:
ffmpeg -f v4l2 -thread_queue_size 32 -video_size 1920x1080 -input_format mjpeg -i /dev/video6 \
-f v4l2 -thread_queue_size 32 -video_size 1920x1080 -input_format mjpeg -i /dev/video0 \
-map 0:v -map 1:v -c:v libx264 -preset superfast test.mp4
In the resulting file, the stream from the cam named first will always be about 0.5 seconds behind the one from the cam named second. So in the above example, the stream from /dev/video6 will be behind that of /dev/video0. If I swap the two devices in the command, the stream from /dev/video0 will be behind instead. So the order of the inputs on the command line matters.
The discrepancy can be roughly compensated for by adding -itsoffset 0.5 to the second input. But the value of 0.5 seconds is guesswork and may well depend on the actual hardware.
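For illustration, this is a sketch of the same capture command with the guessed offset applied to the second input. The 0.5 s value is an estimate from the source, not a measured constant:

```shell
# Sketch: delay the second input by an estimated 0.5 s so it lines up
# with the (lagging) first input. The value is guesswork and may need
# tuning per hardware setup.
ffmpeg -f v4l2 -thread_queue_size 32 -video_size 1920x1080 -input_format mjpeg -i /dev/video6 \
       -itsoffset 0.5 \
       -f v4l2 -thread_queue_size 32 -video_size 1920x1080 -input_format mjpeg -i /dev/video0 \
       -map 0:v -map 1:v -c:v libx264 -preset superfast test.mp4
```

Note that -itsoffset is an input option and therefore must appear before the -i it applies to.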
Is there a way to automatically synchronize the two streams?
Update 1 (and partial solution): -copyts yields synchronization, but then the video player controls stop working (I cannot seek back and forth in the video). With -copyts -start_at_zero, the controls are back, but the sync is gone. So synchronization is possible (using -copyts); I just need to get the playback controls working again. I actually found a way to do this: record using -copyts, then run the result through ffmpeg again, using only -start_at_zero:
ffmpeg -i test.mp4 -start_at_zero -map 0:v -c:v copy test2.mp4
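Since the 0.5 s figure is guesswork, one way to pin it down is to inspect the per-stream start timestamps that -copyts preserved in the recording. A sketch using ffprobe (assuming test.mp4 is the -copyts recording from above):

```shell
# Sketch: print the start_time of each video stream in the -copyts
# recording, so the real inter-camera offset can be measured instead
# of guessed. The difference between the two values is the offset to
# feed to -itsoffset.
ffprobe -v error -select_streams v:0 -show_entries stream=start_time -of default=nw=1 test.mp4
ffprobe -v error -select_streams v:1 -show_entries stream=start_time -of default=nw=1 test.mp4
```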
Unfortunately, the next problem is adding sound. The start timestamps reported by my audio devices are way off, and do not match those reported by the cameras. I opened a separate question for this: FFmpeg: audio start time way off
Update 2: This seems to be related to Capturing multiple RTSP streams simultaneously in sync. A solution using ffserver is suggested there, but no details are given, and ffserver is no longer maintained.
Update 3 (full solution): Please see FFmpeg: audio start time way off for a full solution that also explains how to add audio.