[FFmpeg-devel] [PATCH] avcodec/flac_parser: Do not lose header count in find_headers_search()

Michael Niedermayer michael at niedermayer.cc
Tue Feb 4 19:34:47 EET 2020


On Tue, Feb 04, 2020 at 11:10:00AM +0000, Andreas Rheinhardt wrote:
> Michael Niedermayer:
> > Fixes: Timeout
> > Fixes: out of array access
> > Fixes: 20274/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_FLAC_fuzzer-5649631988154368
> > Fixes: 19275/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_FLAC_fuzzer-5757535722405888
> > 
> > Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
> > Signed-off-by: Michael Niedermayer <michael at niedermayer.cc>
> > ---
> >  libavcodec/flac_parser.c | 12 ++++++++----
> >  1 file changed, 8 insertions(+), 4 deletions(-)
> > 
> > diff --git a/libavcodec/flac_parser.c b/libavcodec/flac_parser.c
> > index 9ffa288548..3424583c49 100644
> > --- a/libavcodec/flac_parser.c
> > +++ b/libavcodec/flac_parser.c
> > @@ -208,16 +208,20 @@ static int find_headers_search(FLACParseContext *fpc, uint8_t *buf,
> >      uint32_t x;
> >  
> >      for (i = 0; i < mod_offset; i++) {
> > -        if ((AV_RB16(buf + i) & 0xFFFE) == 0xFFF8)
> > -            size = find_headers_search_validate(fpc, search_start + i);
> > +        if ((AV_RB16(buf + i) & 0xFFFE) == 0xFFF8) {
> > +            int ret = find_headers_search_validate(fpc, search_start + i);
> > +            size = FFMAX(size, ret);
> > +        }
> >      }
> >  
> >      for (; i < buf_size - 1; i += 4) {
> >          x = AV_RN32(buf + i);
> >          if (((x & ~(x + 0x01010101)) & 0x80808080)) {
> >              for (j = 0; j < 4; j++) {
> > -                if ((AV_RB16(buf + i + j) & 0xFFFE) == 0xFFF8)
> > -                    size = find_headers_search_validate(fpc, search_start + i + j);
> > +                if ((AV_RB16(buf + i + j) & 0xFFFE) == 0xFFF8) {
> > +                    int ret = find_headers_search_validate(fpc, search_start + i + j);
> > +                    size = FFMAX(size, ret);
> > +                }
> >              }
> >          }
> >      }
> > 
> 1. Actually, find_headers_search_validate() can return an error and
> these FFMAX (as well as the FFMAX already present in
> find_new_headers()) will make sure that they are overwritten.

yes

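For illustration, a minimal standalone sketch of that masking (FFMAX/AVERROR are redefined locally so it compiles on its own; validate_fail() is a made-up stand-in for find_headers_search_validate() hitting an allocation failure):

#include <errno.h>
#include <stdio.h>

#define FFMAX(a, b) ((a) > (b) ? (a) : (b))
#define AVERROR(e)  (-(e))   /* matches libavutil's definition for positive errno values */

/* made-up stand-in for find_headers_search_validate() failing */
static int validate_fail(void)
{
    return AVERROR(ENOMEM);
}

int main(void)
{
    int size = 0;
    int ret  = validate_fail();   /* negative error code */
    size = FFMAX(size, ret);      /* FFMAX keeps the old size, the error is silently dropped */
    printf("size = %d\n", size);  /* prints 0; the caller never sees ENOMEM */
    return 0;
}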

> 2. The linked list of buffered headers is currently traversed pretty
> often: E.g. if it seems that no new headers have been found in a call
> to find_new_headers(), the linked list will be traversed again in
> order to count the number of buffered headers. Maybe one should
> instead directly increment fpc->nb_headers_buffered after a new header
> has been added to the list in find_headers_search_validate() and only
> use the return value of find_headers_search_validate(),
> find_headers_search(), find_new_headers() to forward the allocation
> failure errors? (If one stores a pointer to the end of the linked
> list, one can also remove the while-loop in
> find_headers_search_validate(); don't know if this helps with the
> timeout.)

These are good ideas. The problem I am facing here is that I need to
(or rather I want to) fix this in the releases too, so one goal is to
keep the change simple, as that makes it more likely to backport
without too many issues.
We can clean up the code afterwards, and in fact there is probably
more that can be cleaned up than what you mention (just a gut feeling
from this code).
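As a rough sketch of the tail-pointer idea from point 2 (the structs are cut down to what the example needs; headers_tail and append_header() are hypothetical, not what flac_parser.c currently has):

#include <stdlib.h>

typedef struct FLACHeaderMarker {
    int offset;
    struct FLACHeaderMarker *next;
} FLACHeaderMarker;

typedef struct FLACParseContext {
    FLACHeaderMarker *headers;       /* head of the buffered-header list */
    FLACHeaderMarker *headers_tail;  /* hypothetical: remember the last node instead of re-walking */
    int nb_headers_buffered;         /* bumped on every successful append */
} FLACParseContext;

/* hypothetical helper: O(1) append, counter kept in sync, and the return
   value is only needed to report allocation failure */
static int append_header(FLACParseContext *fpc, int offset)
{
    FLACHeaderMarker *h = calloc(1, sizeof(*h));
    if (!h)
        return -1;
    h->offset = offset;

    if (fpc->headers_tail)
        fpc->headers_tail->next = h;
    else
        fpc->headers = h;
    fpc->headers_tail = h;
    fpc->nb_headers_buffered++;
    return 0;
}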


> There would be two scenarios in which this might lead to different
> outcomes than the current code: In the case that you are fixing right
> now where a prima-facie invalid header might mask a header that seems
> to be valid. And in the case where an allocation failure happens and
> the failure is currently forwarded. That's because nb_headers_buffered
> is currently only updated when no allocation error is reported
> (obviously, because the return value is used for both allocation
> failure and new number of buffered headers). I am actually unsure what
> to do in this scenario. Given that a parser is not allowed to return
> errors, wouldn't it make sense to score the headers that we do have
> (if there are at least FLAC_MIN_HEADERS)?

First thought (which may be missing something): don't allocate at all,
use a fixed-size array to keep track of items.
Having an unlimited number of items does not work very well, and when
we need an upper limit anyway, we can allocate it during init and have
one issue less to deal with.
If you want to work on something like this, you sure can, as I ATM have
too much to do already ...
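A sketch of that direction, assuming some compile-time cap (MAX_BUFFERED_HEADERS and the struct layout here are made up for illustration):

#include <stdlib.h>

#define MAX_BUFFERED_HEADERS 64   /* made-up cap; the real limit would need tuning */

typedef struct HeaderSlot {
    int offset;
    int max_score;
} HeaderSlot;

typedef struct ParserState {
    HeaderSlot *headers;   /* single allocation done at init */
    int nb_headers;
} ParserState;

/* the only place that can fail is init; the parse loop never allocates */
static int parser_init(ParserState *s)
{
    s->headers = calloc(MAX_BUFFERED_HEADERS, sizeof(*s->headers));
    if (!s->headers)
        return -1;
    s->nb_headers = 0;
    return 0;
}

/* adding a header can no longer fail; what to do when the table is full
   (drop the oldest, drop the worst-scoring, ...) is a policy decision */
static void add_header(ParserState *s, int offset)
{
    if (s->nb_headers < MAX_BUFFERED_HEADERS)
        s->headers[s->nb_headers++] = (HeaderSlot){ .offset = offset, .max_score = 0 };
}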


> 3. What array does this out-of-array-access affect? I presume it is
> the link_penalty-array, but I just don't see how this is possible.

The stack, through recursive calls, after things become inconsistent.

thanks

[...]
-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

The real ebay dictionary, page 1
"Used only once"    - "Some unspecified defect prevented a second use"
"In good condition" - "Can be repaird by experienced expert"
"As is" - "You wouldnt want it even if you were payed for it, if you knew ..."