
This is a more up-to-date version of this question here, which it luckily avoids duplicating because that question is about VirtualDub rather than FFmpeg.

This is not a subjective question: it can be answered definitively by A/V experts familiar with the industry literature that measures such metrics, as the above question proves. There is also every reason to suspect that things have changed in the last decade, given the amount of development done on both FFmpeg and the scaling algorithms since then.


I'm in the process of using FFmpeg to automate my workflow for YouTube uploads, which usually consist of old, rare, low-resolution videos. These need to be upscaled in order to concatenate my own HD outro to them, and often they also have weird aspect ratios.

A suggestion on how to approach upscaling them that I got from Reddit was to do something like this:

ffmpeg ... -vf "scale=iw*6:ih*6:flags=neighbor,scale=1966:-2" ...

This uses the nearest-neighbour algorithm to upscale the video by an integer factor to as close to the desired resolution as possible (in this case from the original video's 320x176 to 1920x1056), and then uses the standard scale filter to cover the few remaining pixels. This is supposedly better at avoiding artifacts than making one massive jump with the scale filter on its own, and the same approach can be used to downscale by replacing the multiplications with divisions.
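To make the two-step approach reusable in a script, the integer factor can be computed from the source and target sizes instead of being hard-coded. The sketch below (a minimal sh example; the 320x176 source and 1920 target width are taken from the question, while the variable names and the placeholder input.mp4/output.mp4 filenames are my own) builds the same filter string as above:

```shell
#!/bin/sh
# Source dimensions and target width (example values from the question).
SRC_W=320
SRC_H=176
TARGET_W=1920

# Largest whole multiple of the source width that still fits the target:
# 1920 / 320 = 6, so the nearest-neighbour pass goes to 1920x1056.
MULT=$((TARGET_W / SRC_W))

# Nearest-neighbour integer upscale, then a final scale to the exact width;
# -2 keeps the height even and preserves the aspect ratio.
FILTER="scale=iw*${MULT}:ih*${MULT}:flags=neighbor,scale=${TARGET_W}:-2"
echo "$FILTER"

# Usage (placeholder filenames):
# ffmpeg -i input.mp4 -vf "$FILTER" output.mp4
```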

Is this the best general FFmpeg approach for upscaling or downscaling videos that I can use in my scripts, or does a better one exist? Or has the research pretty much stayed the same as in slhck's answer from 2012, where lanczos is best for downscaling and bicubic for upscaling?

Hashim Aziz