[FFmpeg-devel] [PATCHv6 4/4] libavcodec: v4l2: add support for v4l2 mem2mem codecs

wm4 nfxjfg at googlemail.com
Tue Aug 29 14:50:51 EEST 2017


On Tue, 29 Aug 2017 12:03:42 +0200
Jorge Ramirez <jorge.ramirez-ortiz at linaro.org> wrote:

> On 08/29/2017 10:56 AM, wm4 wrote:
> > On Mon, 28 Aug 2017 23:36:08 +0200
> > Jorge Ramirez <jorge.ramirez-ortiz at linaro.org> wrote:
> >  
> >> On 08/28/2017 09:53 PM, wm4 wrote:  
> >>> On Mon, 28 Aug 2017 21:24:26 +0200
> >>> Jorge Ramirez <jorge.ramirez-ortiz at linaro.org> wrote:
> >>>     
> >>>> On 08/28/2017 02:16 PM, Jorge Ramirez wrote:  
> >>>>> On 08/28/2017 12:47 PM, wm4 wrote:  
> >>>>>>> I guess that instead of polling for the AVBufferRef to be unreferenced,
> >>>>>>> I can associate a sync primitive (i.e. a semaphore) with each buffer,
> >>>>>>> take it on release and post it when the AVBufferRef is unreferenced.
> >>>>>>> That is actually pretty clean in terms of CPU usage.
> >>>>>> That would just freeze an API user calling avcodec_close(), when it
> >>>>>> keeps around decoded AVFrames for later use.  
> >>>>> Yes, I understand, but it does avoid using the CPU to poll for the
> >>>>> buffer release (an incremental improvement).
> >>>>>
> >>>>> But yes, I think the message is that even though this proposal
> >>>>> might suffice for simple video players (my tests), it is not good
> >>>>> enough for other users who need the decoded frames for post-processing.
> >>>>>
> >>>>> Is this a blocker to upstreaming, or could I continue working on it
> >>>>> with the encoder/decoder flagged as EXPERIMENTAL? The current
> >>>>> situation at least keeps video players happy.
> >>> I'd say yes, this is a blocker. We usually try to avoid committing
> >>> half-finished code, because it often means it will never be finished.
> >> Hi, I forgot to say earlier: thanks for all the review over the past
> >> couple of days (it has been of much help).
> >>
> >> On the half-finished matter: the issue I face is that the current
> >> code doesn't cover the use case where _all_ the processed frames have to
> >> be kept available indefinitely (this is why I thought that setting
> >> .capabilities to AV_CODEC_CAP_EXPERIMENTAL could be an option, to
> >> upstream the code and get more exposure to other users).
> >>
> >> I do plan to continue supporting v4l2 ffmpeg integration - mmapped
> >> filters, DRM and so on. Having invested this long, I do want to see this
> >> through; and since I can't guarantee that some "force majeure" won't
> >> happen, I think the sooner the code I have been working on gets
> >> exposure, the sooner we will start seeing contributions.
> >>
> >> Anyhow, the current code does support the typical use case of most video
> >> players, so it would benefit a considerable number of users.
> >>
> >> Does it have to be all or nothing at this point, or could we flag the
> >> v4l2 m2m codecs as experimental?
> > You could just copy the frames before returning them to the user to
> > avoid breaking refcounting.  
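> >
> > Something along these lines would do it (untested sketch, not taken
> > from the patch; plain libavutil calls):
> >
> > #include <libavutil/frame.h>
> >
> > /* copy a hardware-backed frame into freshly allocated heap buffers,
> >  * so the V4L2 buffer can be recycled immediately */
> > static int copy_to_heap(AVFrame *dst, const AVFrame *src)
> > {
> >     int ret;
> >
> >     dst->format = src->format;
> >     dst->width  = src->width;
> >     dst->height = src->height;
> >
> >     ret = av_frame_get_buffer(dst, 32); /* 32-byte aligned heap planes */
> >     if (ret < 0)
> >         return ret;
> >
> >     ret = av_frame_copy(dst, src);      /* pixel data */
> >     if (ret < 0)
> >         return ret;
> >
> >     return av_frame_copy_props(dst, src); /* pts, metadata, ... */
> > }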
> 
> Thinking again about this, I'd rather not do that (it would impact
> performance too much), and Hendrik gave me some pointers yesterday in
> line with what you said as well.
> I implemented reference counting, deferring the closing of _some_ of the
> resources needed to keep the buffers alive.
> 
> Closing the codec now doesn't wait or leave dangling buffers.
> 
> The AVBufferRef free callback looks like this:
> 
> static void free_v4l2buf_cb(void *opaque, uint8_t *unused)
> {
>     V4L2Buffer *avbuf = opaque;
>     V4L2m2mContext *s = container_of(avbuf->context, V4L2m2mContext, capture);
>
>     /* drop the reference this AVBufferRef held on the context */
>     atomic_fetch_sub_explicit(&s->refcount, 1, memory_order_acq_rel);
>
>     /* a reinit (resolution/format change) is waiting for all buffers */
>     if (s->reinit) {
>         if (!atomic_load(&s->refcount))
>             sem_post(&s->refsync);
>         return;
>     }
>
>     /* still streaming: give the buffer back to the driver */
>     if (avbuf->context->streamon) {
>         avbuf->context->ops.enqueue(avbuf);
>         return;
>     }
>
>     /* codec already closed: tear down once the last reference is gone */
>     if (!atomic_load(&s->refcount))
>         avpriv_v4l2m2m_end(s);
> }
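>
> and the dequeue side wires it up roughly like this (sketch only; the
> plane field names are illustrative, not the actual patch):
>
>     /* dequeued hardware buffer -> AVFrame with a custom free callback */
>     atomic_fetch_add_explicit(&s->refcount, 1, memory_order_relaxed);
>
>     frame->buf[0] = av_buffer_create(avbuf->plane[0].addr,   /* mmapped data */
>                                      avbuf->plane[0].length, /* plane size */
>                                      free_v4l2buf_cb, avbuf, 0);
>     if (!frame->buf[0])
>         return AVERROR(ENOMEM);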
> 
> The only case where I can't get away without waiting for the AVBufferRefs
> to be released is when re-initializing the frame dimensions (i.e. a
> resolution or format change) _during_ streaming, since I need to release
> _all_ hardware buffers and queue them again.
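>
> The waiting side of that reinit path is then just (again a sketch,
> reusing the names from the callback above):
>
>     s->reinit = 1;
>     /* the free callback posts refsync when the last reference is gone */
>     if (atomic_load(&s->refcount))
>         sem_wait(&s->refsync);
>     s->reinit = 0;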
> 
> Will this be acceptable?
> I have just tested these changes and they work as expected.

The implementation seems rather roundabout and complex - why not rely on
AVBufferRef refcounting alone instead of a parallel counter? But apart
from that, yes.

> >>>> Just wondering: if the AVBufferRefs must live forever (i.e. after the
> >>>> codecs have been closed), what do other codecs dequeuing from a limited
> >>>> number of re-usable hardware-allocated buffers do?
> >>>> Do they use the CPU to allocate new buffers and copy the data from the
> >>>> hardware buffers to the heap?
> >>>>     
> >>> Like I wrote before: hwaccels use AVHWFramesContext, which was made
> >>> more or less for this situation. If you want FD support later (for
> >>> something like zero-copy transcoding or playback), AVHWFramesContext
> >>> will probably be mandatory anyway. But I guess it's a big change for
> >>> someone not familiar with the codebase.  
> >> Yes, I had a look, and it does not seem an easy change to integrate.
> >>
> >> Still, I'd like to make sure we are talking about the same requirement,
> >> because if AVHWFramesContext works around the issue [1], I can probably
> >> do the same with a few more lines of code (including the FD support for
> >> v4l2, which is pretty straightforward).
> >>
> >> [1]  When:
> >> a) there is a limited number of buffers allocated by the hardware and
> >> b) these buffers are mapped to the process address space and
> >> c) the application can choose to keep _all_ decoded buffers for post
> >> processing,
> >>
> >> then there is no other option but to copy each of the processed buffers
> >> to newly allocated areas in the heap (there can be no magic here, since
> >> the hardware buffers are always limited in number and have to be reused).
> > The semantics of AVHWFramesContext are such that if the user keeps
> > enough AVFrames references to exhaust the frame pool, trying to continue
> > decoding will result in an error. The whole point is to make the
> > limited and fixed buffer allocation visible to the API user.  
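> >
> > I.e. something like this on the allocation path (illustrative only):
> >
> >     AVFrame *f = av_frame_alloc();
> >     if (!f)
> >         return AVERROR(ENOMEM);
> >     /* fails once the API user still holds references to every
> >      * frame in the fixed-size pool */
> >     ret = av_hwframe_get_buffer(avctx->hw_frames_ctx, f, 0);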
> 
> Makes sense, although I haven't found such an interface; in my view, the
> user should be able to register an observer to receive async events from
> codecs (be these from hardware or from codec state machines).
> Could you point me to where that is? The way I understand ffmpeg,
> everything seems to work synchronously, with no room for events like
> this (so the user would only be notified of an error after it tries to
> get a frame, for instance...).

That doesn't seem to have anything to do with our previous discussion.

Async behavior could be easily added to the API. But the current API is
a state machine that doesn't know about time.

> >> Notice that I insist on continuing to use V4L2_MEMORY_MMAP (instead of
> >> switching to V4L2_MEMORY_USERPTR) because it is easy to export the
> >> buffers as DMABUFs (~30 lines of code) and then pass these as FDs (which
> >> could be associated with short-lived AVBufferRefs for DRM).
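> >>
> >> The export itself is one VIDIOC_EXPBUF ioctl per plane, roughly
> >> (untested sketch; video_fd and the index variables are illustrative):
> >>
> >>     struct v4l2_exportbuffer expbuf = { 0 };
> >>
> >>     expbuf.type  = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
> >>     expbuf.index = buf_index;   /* buffer within the capture queue */
> >>     expbuf.plane = plane_index; /* plane within the buffer */
> >>
> >>     if (ioctl(video_fd, VIDIOC_EXPBUF, &expbuf) < 0)
> >>         return AVERROR(errno);
> >>
> >>     /* expbuf.fd is now a dmabuf fd that can be handed to DRM */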
> > No idea what the difference between those is.
> >
> > If you want to support direct FD/dmabuf export, adapting to
> > AVHWFramesContext now would probably be easier in total. Especially
> > because of the implied API change. But I'd wait for Mark Thompson's
> > comments on that before making any big changes. AFAIK he posted a
> > proposal patch for a DRM AVHWFramesContext too.  
> 
> Why not just add an FD to the AVBufferRef and let the user decide whether
> to use it or not?

Because there's too much framework-y stuff in ffmpeg that expects
AVHWFramesContext. It's basically needed to do anything with it, and
avoids ad-hoc approaches the old hwaccel API was full of.

