[MPlayer-users] There is no cache when playing m3u8 stream (hls-protocol)

wm4 nfxjfg at googlemail.com
Fri Sep 20 13:13:59 CEST 2013


On Fri, 20 Sep 2013 12:34:45 +0200
Reimar Döffinger <Reimar.Doeffinger at gmx.de> wrote:

> wm4 <nfxjfg at googlemail.com> wrote:
> >On Thu, 19 Sep 2013 19:46:07 +0200
> >Reimar Döffinger <Reimar.Doeffinger at gmx.de> wrote:
> >
> >> On Thu, Sep 19, 2013 at 11:19:45AM +0200, Waldemar wrote:
> >> > Hello list,
> >> > when playing m3u8 live streams (hls-protocol), MPlayer doesn't use
> >> > the cache. This leads to a delay when MPlayer jumps from one segment
> >> > of the playlist to the next, and the whole stream stutters.
> >> 
> >> Due to the somewhat misdesigned way this is handled by FFmpeg,
> >> caching HLS is not possible.
> >> Or rather, it is only possible after the demuxer, where you'd have
> >> to cache each stream (audio and video for example) separately.
> >> MPlayer currently doesn't do this.
> >> And there are more issues with the HLS demuxer I believe.
> >> In general, unless it works much better in ffplay (I don't think so)
> >> you'll have to ask FFmpeg about fixing it.
> >
> >It seems all complex network protocols are implemented as demuxers.
> >RTSP is another case. I can't get proper RTSP playback to work in my
> >mplayer fork, because the RTSP demuxer pretty much expects that you
> >call av_read_frame() all the time, and spending more than a few
> >milliseconds outside of it makes it drop packets. And of course,
> >-cache will do nothing, because it's all implemented in the demuxer.
> >
> >ffplay doesn't have this problem: the demuxer runs in its own thread,
> >so it can read ahead by a number of packets. This also acts as a cache.
> 
> That is exactly what my hackish patch makes MPlayer do. The code is mostly there; it's just that by default MPlayer buffers as little data as possible after demuxing.
> Whether a separate thread is even necessary would be a different question.
> Note that buffering will cause some issues, so it should not be used if there is no real reason.

If it works for HLS, that's fine. I guess you don't care about ffmpeg
RTSP, because you have other RTSP implementations that work better.
If you want to use ffmpeg's RTSP implementation, the problem is that
even just waiting for video vsync will give you a ridiculous number of
packet drops.
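
For illustration, this is roughly what the ffplay-style layout looks
like (not ffplay's actual code; the packet_queue_* helpers are
hypothetical): one thread does nothing but service av_read_frame(),
and the player consumes packets at its own pace.

#include <libavformat/avformat.h>
#include <pthread.h>

struct reader {
    AVFormatContext     *fmt;
    struct packet_queue *queue;   /* hypothetical bounded, thread-safe FIFO */
    volatile int         abort_request;
};

static void *read_thread(void *arg)
{
    struct reader *r = arg;
    AVPacket pkt;

    while (!r->abort_request && av_read_frame(r->fmt, &pkt) >= 0) {
        /* Blocks when the queue is full, so read-ahead stays bounded.
         * This read-ahead is what doubles as a cache for RTSP/HLS. */
        packet_queue_put(r->queue, &pkt);
    }
    return NULL;
}

The player side only ever pops from the queue, so it never blocks on
network I/O directly, and the input keeps getting drained even while
the player waits for vsync or the audio device.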

One could argue that ffmpeg shouldn't assume everything is designed
like ffplay, but on the other hand, most media players are heavily
multithreaded.

> >So I would expect this works much better in ffplay (for RTSP it
> >definitely does).
> >
> >So, the question is: is this really an FFmpeg misdesign, or a weak point
> >in mplayer's architecture?
> 
> It is a design that pretty thoroughly breaks layering.
> In the case of RTSP this is mostly just to the degree that the protocol itself does it.
> For HLS I have doubts it was really justified, instead of just being the easy way out for those implementing it.

Well, ffmpeg demuxers are designed in such a way that they have to read
data in a blocking manner. There is no way to "suspend" a read call and
resume the I/O at a later point. So if a demuxer tries to read more data
than is currently available, even the best stream-layer cache won't help
you, and the whole player freezes. From this perspective, the demuxer
should run in a separate thread anyway.
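
For reference, the only mechanism lavf offers here is the interrupt
callback, and that can only abort a blocking call, not pause and resume
it. A minimal sketch (the quit flag and open_input() wrapper are just
placeholders):

#include <libavformat/avformat.h>

static volatile int time_to_quit;   /* set from elsewhere (placeholder) */

/* Called by lavf inside blocking I/O; returning nonzero aborts the call. */
static int interrupt_cb(void *opaque)
{
    return time_to_quit;
}

static int open_input(AVFormatContext **fmt, const char *url)
{
    *fmt = avformat_alloc_context();
    if (!*fmt)
        return AVERROR(ENOMEM);
    (*fmt)->interrupt_callback.callback = interrupt_cb;
    (*fmt)->interrupt_callback.opaque   = NULL;
    /* If interrupt_cb() fires, the open/read is aborted for good; there
     * is no way to pick the same read up again later. */
    return avformat_open_input(fmt, url, NULL, NULL);
}

That is fine for shutting down, but it doesn't help with a read that
simply takes too long, which is why the read has to live in its own
thread.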

> > As a relatively simple hack, one could open the demuxer in
> >stream_ffmpeg.c and send the packets as a byte stream using some simple
> >encoding; then -cache would work.
> 
> I'd rather have FFmpeg do it. After all, its cache:// protocol has the same issue. But there will be some obstacles to that, both practical ones and probably ideological objections.

I'm not sure how ffmpeg would even do this, other than by running parts
of the stream implementations in a thread, which would complicate its
implementation. (The MPlayer-side byte-stream hack quoted above would
look roughly like the sketch below.)
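
To make the byte-stream idea a bit more concrete, a hypothetical
encoding could look like this (the record layout is made up for
illustration, it is not existing MPlayer code): stream_ffmpeg.c would
call av_read_frame() itself and emit one length-prefixed record per
packet, and a small matching demuxer on the other side of -cache would
turn the records back into demux packets.

#include <libavformat/avformat.h>
#include <libavutil/intreadwrite.h>
#include <string.h>

/* Hypothetical record layout:
 * [u32 payload size][u32 stream index][i64 pts][i64 dts][payload] */
static int write_packet_record(const AVPacket *pkt, uint8_t *buf, int buf_size)
{
    int need = 4 + 4 + 8 + 8 + pkt->size;

    if (buf_size < need)
        return -1;
    AV_WL32(buf + 0,  pkt->size);
    AV_WL32(buf + 4,  pkt->stream_index);
    AV_WL64(buf + 8,  pkt->pts);
    AV_WL64(buf + 16, pkt->dts);
    memcpy(buf + 24, pkt->data, pkt->size);
    return need;
}

Since everything crossing the stream layer is then plain bytes, the
existing -cache thread could buffer it without knowing anything about
HLS.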

