[FFmpeg-devel] [RFC] Seeking API

François Revol revol
Fri Jan 23 17:07:52 CET 2009


> On Thu, Jan 22, 2009 at 06:24:59PM +0100, François Revol wrote:
> > > * seeking in relation to a single specific stream makes little
> > >   sense; rather, seeking should happen relative to the set of
> > >   streams that is presented to the user (= the ones not disabled
> > >   by AVStream.discard)
> > 
> > I agree to disagree.
> 
> > For one, the BeOS API does allow seeking individual tracks, as does
> > Haiku.
> 
> BeOS is a video player?

It has a Media Kit API that exposes demuxer and codec add-ons to native 
apps, allowing one to build a media player from it, much like GStreamer 
on GNU/Linux.
But since it's a generic API, not only for media players, it puts the 
seek semantics at the track level for more flexibility. Data buffers 
are tagged with presentation time anyway.
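As a toy illustration (hypothetical C names, not the actual Media Kit
classes, which are C++ such as BMediaTrack), track-level seek semantics
just means each track carries its own position, and every buffer it
hands out is tagged with its presentation time:

```c
#include <assert.h>

/* Toy sketch of track-level seek semantics: each track keeps its own
 * position, so seeking one track leaves the others untouched, and every
 * buffer carries a presentation time the player can sync on.
 * All names here are hypothetical, not part of any real API. */
typedef struct {
    long pts;   /* presentation time of this buffer */
} Buffer;

typedef struct {
    long pos;   /* per-track read position */
} Track;

void track_seek(Track *t, long time)
{
    t->pos = time;          /* affects only this track */
}

Buffer track_read(Track *t)
{
    Buffer b = { t->pos };  /* tag the buffer with its presentation time */
    t->pos++;
    return b;
}
```

Seeking the video track here would not disturb the audio track's
position; synchronization is left to the player comparing pts values.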

> > I believe the rationale is that it allows processing of separate 
> > streams separately, even if the result is to render them at the 
> > same time.
> 
> I have difficulty seeing how this is related to the seeking API.
> I surely could understand if a user app wanted to pull a frame for
> a specific stream out of the demuxer, and surely could understand if
> it wanted to receive packets with specific delays to cancel buffering,
> but I cannot see why one would want to seek 2 streams to unnecessarily
> different times.

See forward refs 1 and 2.

> > 
> > For example, a player might want to buffer video or audio packets, 
> > or 
> > pre-render them, and in the end it's the player that handles 
> > synchronization.
> 
> of course

forward ref 1

> > Specifically, it might want to seek to keyframes on each track, 
> > start playing the track that is furthest ahead, and unpause the 
> > other ones as presentation times match.
> 
> I am still not seeing how this is related to the seeking API.
> If a demuxer supports outputting frames the way you describe, it
> surely could with the suggested API.
> And if there were an API by which the user app could indicate from
> which stream it wants the next packet, and the demuxer supported that,
> then it surely could pull frames the way it prefers.

forward ref 2

Except that it might want to interleave calls to get frames from both 
tracks, or even use separate threads which each call a decoder, so with 
a single global seek it would need to:

seek(t0+0) read(track0)
seek(t1+0) read(track1)
seek(t0+1) read(track0)
seek(t1+1) read(track1)
...

Very efficient.
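The cost of forcing that pattern through one shared seek position can
be sketched with a toy single-cursor demuxer (hypothetical names; a
real demuxer would additionally have to re-parse the container on every
jump, making each counted seek far more expensive than shown here):

```c
#include <assert.h>

/* Toy model of a demuxer with a single shared read cursor, to show why
 * interleaving per-track reads through a global seek is costly.
 * All names are hypothetical, not part of any real API. */
typedef struct {
    long cursor;      /* current read position (timestamp units) */
    long seek_count;  /* how many times the cursor had to move   */
} ToyDemuxer;

void toy_seek(ToyDemuxer *d, long t)
{
    if (d->cursor != t) {
        d->cursor = t;
        d->seek_count++;
    }
}

long toy_read(ToyDemuxer *d)
{
    return d->cursor++;   /* reading advances the shared cursor */
}

/* Alternate reads between two tracks at different offsets. */
long interleaved_seeks(int frames)
{
    ToyDemuxer d = {0, 0};
    long t0 = 0, t1 = 1000;
    for (int i = 0; i < frames; i++) {
        toy_seek(&d, t0 + i); toy_read(&d);  /* seek(t0+i) read(track0) */
        toy_seek(&d, t1 + i); toy_read(&d);  /* seek(t1+i) read(track1) */
    }
    return d.seek_count;
}

/* Read each track's frames in one pass instead. */
long sequential_seeks(int frames)
{
    ToyDemuxer d = {0, 0};
    long t0 = 0, t1 = 1000;
    toy_seek(&d, t0);
    for (int i = 0; i < frames; i++) toy_read(&d);
    toy_seek(&d, t1);
    for (int i = 0; i < frames; i++) toy_read(&d);
    return d.seek_count;
}
```

Reading each track in one pass costs a single cursor move, while
alternating tracks turns nearly every read into a seek, consistent with
the "abysmally slow" experience mentioned below.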

When I hit this problem in FFExtractor I tried instantiating one 
demuxer per track, but it was abysmally slow.

> > A video editing app might also want to extract specific frames to 
> > display them, and audio data at specific points, without having to 
> > close/reopen the file each time or care whether the other tracks 
> > have been seeked too.
> 
> Do you speak about seeking or something else?

Yes: separate parts of the program (2 threads or not) asking for things 
out of order.


François.