[MPlayer-users] There is no cache when playing m3u8 stream (hls-protocol)

wm4 nfxjfg at googlemail.com
Fri Sep 20 20:25:02 CEST 2013


On Fri, 20 Sep 2013 18:55:05 +0200
Reimar Döffinger <Reimar.Doeffinger at gmx.de> wrote:

> On Fri, Sep 20, 2013 at 01:13:59PM +0200, wm4 wrote:
> > On Fri, 20 Sep 2013 12:34:45 +0200
> > Reimar Döffinger <Reimar.Doeffinger at gmx.de> wrote:
> > 
> > > wm4 <nfxjfg at googlemail.com> wrote:
> > > >On Thu, 19 Sep 2013 19:46:07 +0200
> > > >Reimar Döffinger <Reimar.Doeffinger at gmx.de> wrote:
> > > >
> > > >> On Thu, Sep 19, 2013 at 11:19:45AM +0200, Waldemar wrote:
> > > >> > Hello list,
> > > >> > when playing m3u8 live streams (HLS protocol), MPlayer doesn't use
> > > >> > the cache. This leads to a delay when MPlayer jumps from one segment of
> > > >> > the playlist to the next, and the whole stream becomes stuttery.
> > > >> 
> > > >> Due to the somewhat misdesigned way this is handled by FFmpeg,
> > > >> caching HLS is not possible.
> > > >> Or rather, it is only possible after the demuxer, where you'd have
> > > >> to cache each stream (audio and video for example) separately.
> > > >> MPlayer currently doesn't do this.
> > > >> And there are more issues with the HLS demuxer I believe.
> > > >> In general, unless it works much better in ffplay (I don't think so)
> > > >> you'll have to ask FFmpeg about fixing it.
> > > >
> > > >It seems all complex network protocols are implemented as demuxers.
> > > >RTSP is another case. I can't get proper RTSP playback to work in my
> > > >mplayer fork, because it pretty much expects that you call
> > > >av_read_frame() all the time, and spending more than a few milliseconds
> > > >outside of it makes it drop packets. And of course, -cache will do
> > > >nothing because it's all implemented in the demuxer.
> > > >
> > > >ffplay doesn't have this problem: the demuxer runs in its own thread,
> > > >so it can read ahead by a number of packets. This also acts as a cache.
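To make that concrete, the read-ahead boils down to roughly this (a
simplified sketch, not actual ffplay code; error handling and the
consumer side are omitted):

/* Demuxer thread: keeps calling av_read_frame() and queues packets per
 * stream, so nothing outside this thread ever blocks inside the demuxer,
 * and the queues double as a read-ahead cache. */
#include <pthread.h>
#include <libavformat/avformat.h>
#include <libavutil/mem.h>

typedef struct PktNode {
    AVPacket pkt;
    struct PktNode *next;
} PktNode;

typedef struct PacketQueue {
    PktNode *first, *last;
    int nb_packets;
    pthread_mutex_t lock;
    pthread_cond_t cond;
} PacketQueue;

static PacketQueue *queues;   /* one queue per stream (audio, video, ...) */

static void *demux_thread(void *arg)
{
    AVFormatContext *fmt = arg;
    AVPacket pkt;

    while (av_read_frame(fmt, &pkt) >= 0) {
        PacketQueue *q = &queues[pkt.stream_index];
        PktNode *node = av_mallocz(sizeof(*node));
        node->pkt = pkt;                 /* the queue takes ownership */

        pthread_mutex_lock(&q->lock);
        if (q->last)
            q->last->next = node;
        else
            q->first = node;
        q->last = node;
        q->nb_packets++;
        pthread_cond_signal(&q->cond);   /* wake a decoder waiting for data */
        pthread_mutex_unlock(&q->lock);
    }
    return NULL;
}

The playback core then pops packets from these per-stream queues and only
ever waits on its own queue, never inside av_read_frame().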
> > > 
> > > That is exactly what my hackish patch makes MPlayer do. The code is mostly there, just by default MPlayer buffers as little data as possible after demuxing.
> > > Whether a separate thread is even necessary would be a different question.
> > > Note that buffering will cause some issues, so it should not be used if there is no real reason.
> > 
> > If it works for HLS, that's fine. I guess you don't care about ffmpeg
> > RTSP, because you have other RTSP implementations which work better.
> 
> Actually I think the main reason is that RTSP over UDP doesn't work
> for most people anyway and I am quite convinced that issue does not
> exist for RTSP over TCP.

Yes, the URLs I tested seemed to use UDP. I'm not sure what part of
the RTSP protocol decides whether to use UDP or TCP.
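For what it's worth, FFmpeg's RTSP demuxer has an "rtsp_transport" option,
so the caller can at least request TCP interleaving; whether the server
honours that is up to the server. A rough, untested sketch:

#include <libavformat/avformat.h>
#include <libavutil/dict.h>

static AVFormatContext *open_rtsp_over_tcp(const char *url)
{
    AVFormatContext *fmt = NULL;
    AVDictionary *opts = NULL;

    av_register_all();                    /* register demuxers/protocols */
    av_dict_set(&opts, "rtsp_transport", "tcp", 0);
    if (avformat_open_input(&fmt, url, NULL, &opts) < 0)
        fmt = NULL;
    av_dict_free(&opts);
    return fmt;
}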

> > If you want to use ffmpeg's RTSP implementation, the problem is that
> > even just waiting for video vsync will give you a ridiculous amount of
> > packet drops.
> 
> I suspect there might be a few things horribly broken in FFmpeg's
> receive buffer use; it seems to force a value smaller than the
> already ridiculously small default size.
> Otherwise if you'd set net.core.rmem_max and net.core.rmem_default
> to something ridiculous like 20 MB that issue should disappear
> completely.

Doesn't seem to help.
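Which would fit the "forces a smaller value" theory: an explicit
setsockopt(SO_RCVBUF) replaces net.core.rmem_default entirely, and
net.core.rmem_max only caps the request from above, so raising the sysctls
cannot enlarge a buffer the application shrank on purpose. A generic
illustration (plain socket code, nothing FFmpeg-specific):

#include <stdio.h>
#include <sys/socket.h>

static void show_rcvbuf(int fd, const char *when)
{
    int val = 0;
    socklen_t len = sizeof(val);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &val, &len);
    printf("%s: SO_RCVBUF = %d bytes\n", when, val);  /* kernel reports 2x */
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int want = 64 * 1024;                 /* a deliberately small request */

    show_rcvbuf(fd, "default (net.core.rmem_default)");
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want));
    show_rcvbuf(fd, "after explicit small request");
    return 0;
}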

> Do you know of some public URL to test?

rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov

rtsp://a1966.v1252936.c125293.g.vq.akamaistream.net/7/1966/125293/v0001/mp4.od.origin.zdf.de.gl-systemhaus.de/none/zdf/13/09/130914_1405_hko_1456k_p13v11.mp4

> > > It is a design that pretty thoroughly breaks layering.
> > > In the case of RTSP this is mostly just to the degree that the protocol itself does it.
> > > For HLS I have doubts it was really justified instead of just being the easy way out for those implementing it.
> > 
> > Well, ffmpeg demuxers are designed so that they have to read data in a
> > blocking manner. There is no way to "suspend" a read call, and resume
> > I/O at a later point.
> 
> Actually there are mechanisms for that. But it is not relevant if you
> already have a cache implementation.

Like what mechanisms? Some streaming formats can probably drop
packets easily, but typical "desktop" file formats (like mkv etc.)
don't easily allow this.
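The only related hook I can think of is the AVIOInterruptCB, but that only
lets another thread abort a blocking call, not suspend it and resume the
same read later:

#include <libavformat/avformat.h>

static volatile int abort_request;

static int interrupt_cb(void *opaque)
{
    /* returning non-zero makes FFmpeg abort the current blocking call */
    return abort_request;
}

static AVFormatContext *open_with_abort_hook(const char *url)
{
    AVFormatContext *fmt = avformat_alloc_context();

    fmt->interrupt_callback.callback = interrupt_cb;
    fmt->interrupt_callback.opaque   = NULL;
    if (avformat_open_input(&fmt, url, NULL, NULL) < 0)
        return NULL;          /* FFmpeg frees the context on failure */
    return fmt;
}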

> > So if a demuxer reads too "much" data, the best
> > stream layer cache won't help you, and the whole player freezes. From
> > this perspective, the demuxer should run in a separate thread anyway.
> 
> No, the part of the cache doing the reading must run in a separate
> thread, and it already does in basically all implementations.
> That is why the whole thing wouldn't be an issue if it wasn't
> implemented as a demuxer.

The cache can become empty. The MPlayer cache blocks in this situation,
blocking the playback core as well.
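For reference, the kind of cache being discussed looks roughly like this
(hypothetical sketch, not MPlayer's actual cache code): a filler thread
keeps a ring buffer topped up, and the consumer blocks as soon as the ring
runs dry, which is exactly the situation above.

#include <pthread.h>
#include <unistd.h>

#define RING_SIZE (8 * 1024 * 1024)

static struct {
    unsigned char buf[RING_SIZE];
    size_t rpos, wpos, fill;
    pthread_mutex_t lock;
    pthread_cond_t cond;
    int eof;
} ring = { .lock = PTHREAD_MUTEX_INITIALIZER, .cond = PTHREAD_COND_INITIALIZER };

/* Filler thread: keeps the ring as full as the network allows. */
static void *fill_thread(void *arg)
{
    int fd = *(int *)arg;

    for (;;) {
        pthread_mutex_lock(&ring.lock);
        while (ring.fill == RING_SIZE)              /* ring full: wait */
            pthread_cond_wait(&ring.cond, &ring.lock);
        size_t chunk = RING_SIZE - ring.wpos;       /* contiguous free space */
        if (chunk > RING_SIZE - ring.fill)
            chunk = RING_SIZE - ring.fill;
        pthread_mutex_unlock(&ring.lock);

        ssize_t n = read(fd, ring.buf + ring.wpos, chunk);  /* network read */

        pthread_mutex_lock(&ring.lock);
        if (n <= 0)
            ring.eof = 1;
        else {
            ring.wpos = (ring.wpos + n) % RING_SIZE;
            ring.fill += n;
        }
        pthread_cond_broadcast(&ring.cond);
        pthread_mutex_unlock(&ring.lock);
        if (n <= 0)
            return NULL;
    }
}

/* Consumer side: this is where the playback core stalls when fill == 0. */
static size_t cache_read(unsigned char *dst, size_t len)
{
    size_t got = 0;

    pthread_mutex_lock(&ring.lock);
    while (got < len) {
        while (ring.fill == 0 && !ring.eof)         /* empty cache: block */
            pthread_cond_wait(&ring.cond, &ring.lock);
        if (ring.fill == 0)
            break;                                   /* EOF, nothing left */
        dst[got++] = ring.buf[ring.rpos];
        ring.rpos = (ring.rpos + 1) % RING_SIZE;
        ring.fill--;
    }
    pthread_cond_broadcast(&ring.cond);
    pthread_mutex_unlock(&ring.lock);
    return got;
}

With the protocol implemented as a demuxer there is no byte stream to put
such a ring buffer under, which is the whole problem.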

> > > > As a relatively simple hack, one could open the demuxer in
> > > >stream_ffmpeg.c, and send the packets as a byte stream using some simple
> > > >encoding - then -cache would work.
> > > 
> > > I'd rather have FFmpeg do it. After all its cache:// protocol has the same issue. But there will be some issues with that, both practical and probably ideological objections.
> > 
> > I'm not sure how ffmpeg would even do this, other than by running parts
> > of the stream implementations in a thread, which would complicate its
> > implementation.
> 
> As said, any cache will already do this.
> "just" re-packing the RSTP data so it can be put into an ordinary
> cache pretty much would solve things for the vast majority of programs
> already implementing a cache.

Oh, I see. I think I misunderstood you at first. Yes, that'd perhaps be
nice, though somehow I doubt the developers would be open to this.
Also, the ffmpeg stream I/O interface is pretty limited and would
probably need new functions each time a new protocol is added. (I
wonder why ffmpeg doesn't do CTRLs like mplayer has them everywhere...
they're nice for opening loopholes, so that the general API isn't
littered with special cases.)
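To make the re-packing idea concrete: the "simple encoding" could be as
dumb as a fixed header per packet in front of the payload, written into
whatever byte cache sits in between. A hypothetical sketch (write_bytes()
stands in for the cache's input function; the layout is made up):

#include <stddef.h>
#include <stdint.h>
#include <libavcodec/avcodec.h>

struct pkt_header {
    uint32_t size;              /* payload bytes that follow */
    int32_t  stream_index;
    int64_t  pts, dts;
    int32_t  flags;             /* e.g. AV_PKT_FLAG_KEY */
};

/* Serialize one demuxed packet into the byte stream feeding the cache;
 * the reading side reconstructs AVPackets from the same layout. */
static void send_packet(const AVPacket *pkt,
                        void (*write_bytes)(const void *, size_t))
{
    struct pkt_header h = {
        .size         = pkt->size,
        .stream_index = pkt->stream_index,
        .pts          = pkt->pts,
        .dts          = pkt->dts,
        .flags        = pkt->flags,
    };

    write_bytes(&h, sizeof(h));
    write_bytes(pkt->data, pkt->size);
}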

