It seems like you are doing too much work to blend the frames; that should just be a natural part of the conversion, not a big, complex parameter in a command.
I mean, think about taking an image and scaling it down: you don't have to write a custom algorithm to change the pixels when going from 10 inches down to, say, 1 inch in size. It's just an inherent part of the process.
Looking at this answer here, I believe there is a simpler way to approach this using the fps video filter in FFmpeg. For your purposes, the command can probably be simplified to something like this:
ffmpeg -i input.mp4 -filter:v fps=fps=1/16 output.mp4
The key part is the -filter:v fps=fps=1/16 option: setting the output frame rate to 1/16 tells FFmpeg to emit one frame for every 16 seconds of video.
Look at this other answer for another explanation on how the fps filter works.
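If you would rather end up with individual image files than a re-encoded video, the same fps filter should work for that too. A quick sketch (the PNG output and the frame_%04d.png name pattern are my own assumptions, not something from your question):
ffmpeg -i input.mp4 -vf fps=1/16 frame_%04d.png
That writes one numbered PNG for roughly every 16 seconds of input.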
And here are some other ideas based on the advice in this answer here on Stack Overflow…
This one uses the FFmpeg minterpolate filter:
ffmpeg -i input.mp4 -vf minterpolate=fps=1/16 output.mp4
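Since blending was the whole point of your original command, it may be worth knowing that minterpolate has an mi_mode option, and mi_mode=blend averages the source frames instead of doing full motion-compensated interpolation. Something like this should do it (double-check the option names against the minterpolate documentation for your FFmpeg build):
ffmpeg -i input.mp4 -vf "minterpolate=fps=1/16:mi_mode=blend" output.mp4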
And this one uses the FFmpeg framerate filter:
ffmpeg -i input.mp4 -vf framerate=fps=1/16 output.mp4
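The framerate filter blends neighbouring source frames, and its interp_start / interp_end options control the luma range over which that blending is applied. If you want every output frame to be a blend rather than a plain pick, something like this should work (again, a sketch; verify the option names against the framerate filter docs for your version):
ffmpeg -i input.mp4 -vf "framerate=fps=1/16:interp_start=0:interp_end=255" output.mp4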