[FFmpeg-devel] [PATCH 1/8] avcodec/cinepak: Require 1 bit per 4x4 block as minimum input

Tomas Härdin tjoppen at acc.umu.se
Sun Aug 18 15:49:39 EEST 2019


Sun 2019-08-18 at 14:18 +0200, Michael Niedermayer wrote:
> On Sun, Aug 18, 2019 at 01:40:01PM +0200, Tomas Härdin wrote:
> > Sun 2019-08-18 at 12:19 +0200, Michael Niedermayer wrote:
> > > On Sun, Aug 18, 2019 at 12:00:45PM +0200, Paul B Mahol wrote:
> > > > On Sun, Aug 18, 2019 at 11:44 AM Michael Niedermayer <michael at niedermayer.cc> wrote:
> > > > > And yes, I too wish there was a magic fix, but I think most things
> > > > > that look like magic fixes have a fatal flaw. But maybe I am missing
> > > > > something; in fact I hope that I am missing something and that there
> > > > > is a magic fix.
> > > > > 
> > > > 
> > > > The magic fix is enabling reference-counted frames in the fuzzer.
> > > 
> > > That is covered by the part below, which you maybe did not read:
> > > 
> > > > > PS: if you are thinking of changing the API, I don't think it's the
> > > > > API. I mean, every user application will read the frames it receives,
> > > > > so even if inside the decoder we just return the same frame with 2
> > > > > pixels different, the user doesn't know this and has to read the
> > > > > whole frame. The problem is moved around, but it's still there.
> > 
> > Copying is still slower than not copying. Enabling refcounting fixes
> > the timeout issue here, and will likely silence a whole bunch of false
> > positives for this class of files.
> 
> It probably makes sense to enable refcounting, but we should,
> immediately in the next or a preceding commit, make the fuzzer read the
> frames from the decoder. That's what basically every user app would do.
> Otherwise we would have a bunch of issues closed and then reopened
> later.

Why should we care how much work the user does? We're fuzzing lavc
here, not user programs. Certain use cases are decode-only (ffprobe
-show_frames, for example).
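
That said, what Michael is asking for would amount to roughly this in
the fuzzer (a hypothetical sketch, not the actual fuzzer code):

    #include <stdint.h>
    #include "libavutil/common.h"
    #include "libavutil/frame.h"

    /* Read every decoded byte, so the cost a real consumer pays is part
     * of the measurement. Plane heights are approximated here; a real
     * version would consult av_pix_fmt_desc_get() for the chroma shift. */
    static uint64_t checksum_frame(const AVFrame *frame)
    {
        uint64_t sum = 0;
        int p, x, y;
        for (p = 0; p < AV_NUM_DATA_POINTERS && frame->data[p]; p++) {
            int h = p ? AV_CEIL_RSHIFT(frame->height, 1) : frame->height;
            for (y = 0; y < h; y++)
                for (x = 0; x < FFABS(frame->linesize[p]); x++)
                    sum += frame->data[p][y * frame->linesize[p] + x];
        }
        return sum;
    }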

> An alternative viewpoint to this would be to set the refcounting flag
> from the input, so the fuzzer itself has control over it and we test
> both codepaths. This would improve coverage.

Not a bad idea. But as this example shows, not refcounting has
performance implications. The fuzzer should be more lenient with the
timeout in that case, IMO.
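
Sketch of what that could look like, with the last input byte acting as
a control byte (hypothetical; the real fuzzer consumes its parameters
differently):

    /* let one bit of the fuzz input pick the codepath, so both the
     * refcounted and the non-refcounted path get coverage */
    if (size > 0) {
        ctx->refcounted_frames = data[size - 1] & 1;
        size--; /* the control byte is not part of the bitstream */
    }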

> > It would be beneficial to have a consistent way of signalling that a
> > frame didn't change, since a bunch of codecs have that property.
> > Currently it's a mix of discontinuous timestamps, longer frame
> > durations and repeating identical frames.
> 
> Yes, I strongly agree.
>
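
With refcounted frames, one conceivable convention is to hand back a
new reference to the previous output, which costs no pixel copy
(a sketch; the names are made up, no such lavc API exists today, and
error/return conventions are glossed over):

    /* "nothing changed": hand out a new reference to the previous
     * output instead of copying or re-decoding it */
    if (!s->frame_changed)
        return av_frame_ref(frame, s->prev_frame);
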
> > > So feeding really crazy resolutions into a decoder requires some
> > > small but proportional amount of input bytes:
> > > double the width and the minimum input bytes roughly double.
> > 
> > Lavc is already very lenient with what it accepts. How do you tell
> > "this packet is too small to decode to an entire frame" from "this
> > packet is too small, but we could still get a few MBs out of it"?
> 
> In reality this is actually not hard, because for many codecs a frame
> that is smaller than the minimum valid size is generally so small that
> it really won't contain anything useful to decode.
> And we have discard_damaged_percentage, where the user can tune it, too.
> Patches using discard_damaged_percentage are sometimes objected to,
> though, so it is not consistently used.
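
For context, a discard_damaged_percentage check in a decoder amounts to
roughly this (a sketch; the counters around it are made up):

    /* drop the frame if a larger share of its macroblocks is damaged
     * than the user-tunable threshold allows */
    if (damaged_mbs * 100ULL >
        (uint64_t)avctx->discard_damaged_percentage * total_mbs)
        return AVERROR_INVALIDDATA;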

I remain skeptical of this, since it makes the parsing more complicated
and slower, and increases mental load. There's a sliding scale of
strictness here, and if I try to implement even something as basic as "the
length in the header should match the length of the actual data" then
FATE breaks. We can of course change FATE to accept that changed
strictness.
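
In Cinepak's case that basic check is just comparing the 24-bit length
field in the frame header against the actual packet size (sketch;
variable names are illustrative):

    /* Cinepak frame header: 1 flags byte, then a 24-bit coded length */
    uint32_t coded_size = AV_RB24(data + 1);
    if (coded_size != pkt_size)
        return AVERROR_INVALIDDATA; /* the check that makes FATE break */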

I get the feeling that whenever we put in checks like this, it's only a
matter of time before the fuzzers find some other way to trip up the
decoders. This is why I advocate maximum strictness unless there's a
good reason not to be strict.
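
The check in this patch is in that spirit: each 4x4 block costs at
least one bit of input, so a packet smaller than that cannot decode to
a full frame (a sketch; the actual patch may differ in detail):

    /* require at least 1 bit of input per 4x4 block */
    int64_t min_size = ((int64_t)((avctx->width  + 3) / 4) *
                                 ((avctx->height + 3) / 4) + 7) / 8;
    if (pkt_size < min_size)
        return AVERROR_INVALIDDATA;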

/Tomas


