[FFmpeg-devel] [PATCH 6/6] avcodec/nuv: Avoid duplicating frames

Michael Niedermayer michael at niedermayer.cc
Mon Aug 26 12:29:12 EEST 2019


On Mon, Aug 26, 2019 at 09:53:46AM +0200, Paul B Mahol wrote:
> On Sun, Aug 25, 2019 at 11:35 PM Michael Niedermayer <michael at niedermayer.cc>
> wrote:
> 
> > On Sun, Aug 25, 2019 at 01:05:35PM -0300, James Almer wrote:
> > > On 8/24/2019 3:18 PM, Michael Niedermayer wrote:
> > > > Testcase:
> > 14843/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_NUV_fuzzer-5661969614372864
> > > > Testcase:
> > 16257/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_NUV_fuzzer-5769175464673280
> > > >
> > > > Found-by: continuous fuzzing process
> > https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
> > > > Signed-off-by: Michael Niedermayer <michael at niedermayer.cc>
> > > > ---
> > > >  libavcodec/nuv.c          | 25 +++++++++++++++++++++----
> > > >  tests/ref/fate/nuv-rtjpeg |  1 -
> > > >  2 files changed, 21 insertions(+), 5 deletions(-)
> > > >
> > > > diff --git a/libavcodec/nuv.c b/libavcodec/nuv.c
> > > > index 75b14bce5b..39479d2389 100644
> > > > --- a/libavcodec/nuv.c
> > > > +++ b/libavcodec/nuv.c
> > > > @@ -42,6 +42,7 @@ typedef struct NuvContext {
> > > >      unsigned char *decomp_buf;
> > > >      uint32_t lq[64], cq[64];
> > > >      RTJpegContext rtj;
> > > > +    AVPacket flush_pkt;
> > > >  } NuvContext;
> > > >
> > > >  static const uint8_t fallback_lquant[] = {
> > > > @@ -172,6 +173,20 @@ static int decode_frame(AVCodecContext *avctx,
> > void *data, int *got_frame,
> > > >          NUV_COPY_LAST     = 'L'
> > > >      } comptype;
> > > >
> > > > +    if (!avpkt->data) {
> > > > +        if (avctx->internal->need_flush) {
> > > > +            avctx->internal->need_flush = 0;
> > > > +            ret = ff_setup_buffered_frame_for_return(avctx, data,
> > c->pic, &c->flush_pkt);
> > > > +            if (ret < 0)
> > > > +                return ret;
> > > > +            *got_frame = 1;
> > > > +        }
> > > > +        return 0;
> > > > +    }
> > > > +    c->flush_pkt = *avpkt;
> > > > +    c->pic->pkt_dts = c->flush_pkt.dts;
> > > > +
> > > > +
> > > >      if (buf_size < 12) {
> > > >          av_log(avctx, AV_LOG_ERROR, "coded frame too small\n");
> > > >          return AVERROR_INVALIDDATA;
> > > > @@ -204,8 +219,8 @@ static int decode_frame(AVCodecContext *avctx,
> > void *data, int *got_frame,
> > > >          }
> > > >          break;
> > > >      case NUV_COPY_LAST:
> > > > -        keyframe = 0;
> > > > -        break;
> > > > +        avctx->internal->need_flush = 1;
> > > > +        return buf_size;
> > > >      default:
> > > >          keyframe = 1;
> > > >          break;
> > > > @@ -313,6 +328,7 @@ retry:
> > > >      if ((result = av_frame_ref(picture, c->pic)) < 0)
> > > >          return result;
> > > >
> > > > +    avctx->internal->need_flush = 0;
> > > >      *got_frame = 1;
> > > >      return orig_size;
> > > >  }
> > > > @@ -364,6 +380,7 @@ AVCodec ff_nuv_decoder = {
> > > >      .init           = decode_init,
> > > >      .close          = decode_end,
> > > >      .decode         = decode_frame,
> > > > -    .capabilities   = AV_CODEC_CAP_DR1,
> > > > -    .caps_internal  = FF_CODEC_CAP_INIT_CLEANUP,
> > > > +    .capabilities   = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY,
> > > > +    .caps_internal  = FF_CODEC_CAP_SETS_PKT_DTS |
> > FF_CODEC_CAP_SETS_PKT_POS |
> > > > +                      FF_CODEC_CAP_INIT_CLEANUP,
> > > >  };
> > > > diff --git a/tests/ref/fate/nuv-rtjpeg b/tests/ref/fate/nuv-rtjpeg
> > > > index b6f3b080dc..0914b985ec 100644
> > > > --- a/tests/ref/fate/nuv-rtjpeg
> > > > +++ b/tests/ref/fate/nuv-rtjpeg
> > > > @@ -6,7 +6,6 @@
> > > >  0,        118,        118,        0,   460800, 0x54aedafe
> > > >  0,        152,        152,        0,   460800, 0xb7aa8b56
> > > >  0,        177,        177,        0,   460800, 0x283ea3b5
> > > > -0,        202,        202,        0,   460800, 0x283ea3b5
> > >
> > > I haven't been following these cfr -> vfr patches too closely, but why
> > > remove the duplicate frames (thus changing the expected compliant
> > > behavior) instead of ensuring the duplicate ones are made with no
> > > performance penalty by reusing the buffer reference from the previous
> > frame?
> >
> > Because a filter or encoder or whatever follows the decoder would then
> > process them multiple times unnecessarily.
> >
> 
> Incorrect. This "extra" as you call processing is necessary.

no
if your final output is vfr and for that you drop duplicate frames (be
that in the decoder or in a filter), you get the exact same output
either way; dropping earlier, in the decoder, just avoids some
processing.
If the frames were marked as duplicates by some flag, or if they
referred to the same memory, then finding them would become easier and
there would be less processing. (this was suggested by James and
myself, but so far others have not confirmed they would agree to that)

Either a flag or using the same ref for the duplicate frames would
require changes as well. The current ref fixes only work with the
fuzzer, IIRC.

Thanks


[...]
-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Never trust a computer, one day, it may think you are the virus. -- Compn

