[FFmpeg-devel] [PATCH] avcodec/mjpegdec: Adjustable discard threshold

Michael Niedermayer michael at niedermayer.cc
Mon Jun 28 19:37:47 EEST 2021


On Sun, Jun 27, 2021 at 01:38:01PM -0400, Andriy Gelman wrote:
> On Sun, 27. Jun 15:53, Michael Niedermayer wrote:
> > Fixes regression
> > Fixes: last frame of Ticket9287
> > 
> > Analysed-by: Andriy Gelman
> > Signed-off-by: Michael Niedermayer <michael at niedermayer.cc>
> > ---
> >  libavcodec/mjpegdec.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/libavcodec/mjpegdec.c b/libavcodec/mjpegdec.c
> > index 02a987fd0c..fbc94c46d7 100644
> > --- a/libavcodec/mjpegdec.c
> > +++ b/libavcodec/mjpegdec.c
> > @@ -338,7 +338,7 @@ int ff_mjpeg_decode_sof(MJpegDecodeContext *s)
> >      av_log(s->avctx, AV_LOG_DEBUG, "sof0: picture: %dx%d\n", width, height);
> >      if (av_image_check_size(width, height, 0, s->avctx) < 0)
> >          return AVERROR_INVALIDDATA;
> > -    if (s->buf_size && (width + 7) / 8 * ((height + 7) / 8) > s->buf_size * 4LL)
> > +    if (s->buf_size && (width + 7) / 8 * ((height + 7) / 8) * (100LL - s->avctx->discard_damaged_percentage) > s->buf_size * 4LL * 100)
> >          return AVERROR_INVALIDDATA;
> >  
> >      nb_components = get_bits(&s->gb, 8);
> 
> Would a check for discard_damaged_percentage be more accurate in the mjpeg_decode_scan() function?
> Mapping buf_size to the number of decoded pixels seems to be only an estimate at
> this point.

yes
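
To make the threshold from the patch concrete, here is a minimal standalone
sketch (the helper name is made up; the default of 95 for
discard_damaged_percentage is assumed from the current options table):

    #include <stdint.h>

    /* Reject a SOF whose declared dimensions imply far more 8x8 blocks
     * than the available input could plausibly code.  With the assumed
     * default discard_damaged_percentage of 95, this reduces to requiring
     * roughly one byte of input per 80 blocks:
     *   blocks * (100 - 95) > buf_size * 4 * 100
     *   blocks * 5          > buf_size * 400
     *   blocks              > buf_size * 80
     */
    static int sof_too_large_for_input(int width, int height,
                                       int64_t buf_size,
                                       int discard_damaged_percentage)
    {
        int64_t blocks = (int64_t)((width + 7) / 8) * ((height + 7) / 8);
        return buf_size &&
               blocks * (100LL - discard_damaged_percentage) >
               buf_size * 4LL * 100;
    }

Setting discard_damaged_percentage to 100 makes the left-hand side zero, so
the rejection is effectively disabled.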


> 
> As I understand it, the goal of this check was originally to prevent timeouts
> from the fuzzer. The timeouts were caused by lots of SOF markers declaring
> large frames (so there were many calls to ff_get_buffer()) without any SOS markers following.
> 
> If the goal is to prevent timeouts, would it be better to somehow delay calling ff_get_buffer()
> until we actually start to decode the pixels?
> I think this was actually done in c8197f73e684b0edc450f3dc2b2b4b3fb9dedd0d, but was reverted recently.

The timeout issue is a bit more annoying:
doing the allocation only once the first pixel is encountered doesn't really fix it.
It would not be wrong to do that if it were easy and clean to do, but it seems it
isn't so clean and easy.

The problem is if we consider a maximum-sized frame with one coded block and
then repeat that. I would guess that's not much better; there is still O(1) data
causing O(width*height) computations.
But when we limit the input data to width*height * constant, then at least in
theory we are no longer causing O(width*height) computations from O(1) input.
That's what that line of code is really trying to achieve:
it tries to ensure that there is at least a reasonable fragment of a valid
frame before spending the CPU and memory.
If we check later it is much more accurate, but by then we have already spent
the CPU and memory.
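
As a rough worked example (made up, and assuming the default
discard_damaged_percentage of 95): for a declared 8192x8192 frame,

    blocks = (8192/8) * (8192/8) = 1024 * 1024 = 1048576
    reject when  blocks * (100 - 95) > buf_size * 4 * 100
    i.e. reject when buf_size < 1048576 * 5 / 400, about 13108 bytes

so a stream that merely repeats a few-dozen-byte SOF header can no longer
trigger allocations and computation proportional to width*height; it has to
supply input roughly proportional to the declared frame area first.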

Thanks

[...]
-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Any man who breaks a law that conscience tells him is unjust and willingly 
accepts the penalty by staying in jail in order to arouse the conscience of 
the community on the injustice of the law is at that moment expressing the 
very highest respect for law. - Martin Luther King Jr