[FFmpeg-devel] ffv1enc: question about "Cannot allocate worst case packet size, the encoding could fail"

Jan Ekström jeebjp at gmail.com
Fri Oct 12 15:42:51 EEST 2018


On Fri, Oct 12, 2018 at 12:59 PM Jerome Martinez <jerome at mediaarea.net> wrote:
> I kindly request more details about how hard-coding 2 GB in the code
> helps, for both machines having 1 GB and machines having 8 GB. It
> looks like I am personally not smart enough to understand that alone.
>
> Jérôme
>

`git gui blame -- libavutil/mem.c` eventually led me to the progenitor
of this check, which dates from 2005 (the padding-related checks were
only added later): commit 0ecca7a49f8e254c12a3a1de048d738bfbb614c6,
aka "various security fixes and precautionary checks".
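For anyone without git gui at hand, plain git should reach the same
commit; something along these lines works (assuming the check
literally mentions INT_MAX, which it does in mem.c):

    # follow the allocator's history
    git log --oneline --follow -- libavutil/mem.c

    # or pickaxe-search for commits that touched the INT_MAX check
    git log -S INT_MAX --oneline -- libavutil/mem.c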

At the time it probably seemed like nothing valid could get larger
than INT_MAX, but unfortunately, with the image sizes and sample bit
depths we are handling these days, that is definitely not the case any
more.
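
To put a number on that (my own back-of-the-envelope case, not one
from the original report): a 16384x16384 frame with four 16-bit
components already needs 16384 * 16384 * 4 * 2 = 2^31 bytes, one past
INT_MAX, before any worst-case coding overhead is added on top:

    #include <stdio.h>
    #include <inttypes.h>
    #include <limits.h>

    int main(void)
    {
        /* hypothetical frame geometry, chosen only to show the limit */
        uint64_t w = 16384, h = 16384, comps = 4, bytes_per_comp = 2;
        uint64_t need = w * h * comps * bytes_per_comp;

        printf("need %" PRIu64 " bytes, INT_MAX is %d\n", need, INT_MAX);
        /* prints: need 2147483648 bytes, INT_MAX is 2147483647 */
        return 0;
    }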

Given the realities of 2018, either we look at the code around this
check and verify that it is actually OK to remove, or we start adding
separate parameters or functions for "large buffer allocation" - or
some other option. I do not know how other projects handle the
possibility that an overflow somewhere might cause memory usage to
balloon, or whether we are actively hiding issues in other parts of
our code because any allocation larger than INT_MAX simply fails. Do
other projects that take in user binary input even care, beyond trying
to validate their data on some level before allocating internal
structures? And if a code path ends up allocating a lot of memory, is
that a reason to limit the allocator rather than to rethink the parser
code?
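
As a strawman for the "separate functions" option (the name and
signature below are made up for this mail, not anything in the tree):
an entry point that takes the element count and size separately, so
the multiplication overflow check lives in one place instead of in
every caller, and the INT_MAX ceiling is dropped for callers that opt
in:

    #include <stdint.h>
    #include <stdlib.h>

    /*
     * Hypothetical helper: allocate nmemb * size bytes with an
     * explicit overflow check, but without the current INT_MAX cap.
     */
    static void *large_malloc_array(size_t nmemb, size_t size)
    {
        if (size && nmemb > SIZE_MAX / size)
            return NULL; /* nmemb * size would overflow size_t */
        return malloc(nmemb * size);
    }

Callers that can justify multi-gigabyte buffers - like the worst-case
packet buffer in ffv1enc that started this thread - would use this,
while everything else keeps the existing sanity ceiling.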

Best regards,
Jan

