[FFmpeg-devel] [PATCH 2/2] lavd/pulse_audio_enc: allow non monotonic streams

Lukasz M lukasz.m.luki at gmail.com
Mon Jan 13 02:41:00 CET 2014


On 8 January 2014 14:34, Michael Niedermayer <michaelni at gmx.at> wrote:

> On Wed, Jan 08, 2014 at 01:51:36PM +0100, Lukasz M wrote:
> > >
> > >
> > > > OTOH, I am afraid this patchset is wrong for another reason: if a
> > > timestamp
> > > > discontinuity is feed to the device, then av_get_output_timestamp()
> will
> > > > return strange results while the samples around the discontinuity
> are in
> > > the
> > > > device buffer. I believe the application should not pass through the
> > > > timestamps from the input and rather synthesize its own monotonic
> > > > timestamps.
> > >
> >
> > I expected that a flush is done before writing a packet with a
> > discontinuous timestamp.
> > In the case of pulseaudio, pa_simple_drain() (which waits until the whole
> > buffer is played) can be used when a discontinuity is detected.
> >
>
> > Regarding synthesizing monotonic timestamps, ffmpeg does it itself, so the
> > application may just do nothing, but it is not easy for the application to
> > determine the current position in the audio stream. Imagine you have a 1 s
> > buffer and you seek every 0.1 s. Determining the current position in the
> > stream is quite hard.
>
> I am not sure it is easier if you pass timestamps through unchanged.
>
> Consider mpeg-ps, mpeg-ts or ogg: these all allow timestamp
> discontinuities, so knowing that timestamp 0 is displayed doesn't
> necessarily tell you which sample that is.
> You could of course "remove" these discontinuities, but then you
> do almost the same as if you removed the discontinuities from seeks too.
>

Unfortunately I have poor knowledge about codecs etc., so I miss things like
this one.
I assume you are talking about a case like this one:
https://trac.ffmpeg.org/ticket/1438
My code doesn't take care of that; I haven't tested it against such a case,
but I believe it will not work properly.
I agree with you that there is no point in it, so the patches are dropped.
Thanks for the valuable remarks.
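
For the record, what I had in mind with pa_simple_drain() was roughly the
following. This is only a sketch: write_with_drain() and the expected_pts
bookkeeping are invented names for illustration, not the actual
lavd/pulse_audio_enc code; only pa_simple_write() and pa_simple_drain() are
the real pulseaudio calls.

#include <pulse/simple.h>
#include <libavformat/avformat.h>

/* Sketch: drain the device before writing a packet whose pts does not
 * follow the previous one.  A real implementation would keep expected_pts
 * in the muxer context and allow some tolerance instead of an exact match. */
static int write_with_drain(pa_simple *pa, AVPacket *pkt, int64_t *expected_pts)
{
    int error;

    if (*expected_pts != AV_NOPTS_VALUE && pkt->pts != *expected_pts) {
        /* discontinuity detected: wait until the queued samples are played */
        if (pa_simple_drain(pa, &error) < 0)
            return AVERROR_EXTERNAL;
    }

    if (pa_simple_write(pa, pkt->data, pkt->size, &error) < 0)
        return AVERROR_EXTERNAL;

    *expected_pts = pkt->pts + pkt->duration;
    return 0;
}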


> a function like
>
> av_get_currently_presented_frame(AVOutputFormat *s, AVFrame *out, int
> *time_delta_out)
>
> that would give the user not just a timestamp but the whole currently
> presented AVFrame (or AVPacket). That could give you the actual timestamp
> in either API, the byte position in the file, and all the metadata/side
> data that's in it (might be useful to display something like the currently
> played artist / song title).
>
> Also, a function like av_get_currently_presented_frame() could be
> implemented in generic code, requiring no changes in the output devices,
> just information about the current latency of the device.


This may be a nice feature, but I don't think it would solve any issue with
pts discontinuity.
Querying the timestamp and the device latency is usually not atomic, so you
probably cannot return the currently presented frame with full accuracy.
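
To illustrate the race, a naive generic estimate based on the device latency
would look roughly like this. estimate_presented_pts() and last_pts are
invented names for illustration; only pa_simple_get_latency() is the real
pulseaudio call.

#include <stdint.h>
#include <pulse/simple.h>

/* Rough sketch: estimate the pts that is currently leaving the device.
 * 'last_pts' is the pts (in microseconds) of the most recently written
 * packet -- an invented parameter for illustration. */
static int64_t estimate_presented_pts(pa_simple *pa, int64_t last_pts)
{
    int error;
    pa_usec_t latency = pa_simple_get_latency(pa, &error);

    if (latency == (pa_usec_t)-1)
        return last_pts;            /* query failed, best effort */

    /* The latency query and the 'last_pts' snapshot happen at different
     * moments, and samples keep playing in between, so this is only an
     * approximation -- the two reads cannot be made atomic. */
    return last_pts - (int64_t)latency;
}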

