[Ffmpeg-devel] [RFC] mms support
Fri Dec 8 08:23:54 CET 2006
As I continue discussions with myself....
On Nov 24, 2006, at 9:37 PM, Ryan Martell wrote:
> On Nov 14, 2006, at 10:54 PM, Ryan Martell wrote:
>> I was looking at adding mms support to ffmpeg.
> So I have this somewhat working.
>> I have downloaded libmms from sourceforge, which claims to be
>> LGPL'd, with all permissions obtained. Unfortunately, it was still
>> linked against GLib's GNet, which is GPL. I have now ripped out
>> that dependency (it was used only for URI parsing and byte
>> swapping), so it should be entirely LGPL. With the VC-1 codec
>> stuff working as well as it does, this would allow ffmpeg to
>> natively stream WM files.
> I took libmms as mentioned, created an AVInputFormat for mms, and
> linked the two together. Unfortunately I had to duplicate a lot of
> code from asf.c. Furthermore, I currently preallocate a buffer for
> the entire video based on its size and read into that. Needless to
> say, that doesn't work for streaming, where the size isn't known.
> Also, although the asf packet size is fixed, the rate at which
> packets are consumed is variable, so knowing how much to read from
> the tcp port is somewhat problematic (you have to maintain a buffer
> ahead of your current position). This is also a problem for me with
> h264 streaming: since there is no notion of prebuffering, if the
> connection isn't fast enough I can stall when I run out of data.
>> I know I would need to add another protocol handler, but I'm
>> curious how to wire that up to work in conjunction with the asf
>> code (which is what I would need to use to parse the header).
> I'm wondering if I should have set up a URLProtocol for mms, which
> would then allow me to not use any of the asf code (it would handle
> it in place)?
Okay, so my first pass was using libmms and the AVInputFormat.
My second pass was libmms and URLProtocol. This works great, but it
doesn't allow for pausing the stream, seeking, etc.
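For anyone following along, the URLProtocol route amounts to filling in a table of callbacks that get dispatched for any mms:// URL. A simplified standalone stand-in for the idea (the real struct lives in avio.h and has more fields; every name here is illustrative, and the backend serves a canned buffer instead of opening a TCP socket):

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for libavformat's URLProtocol; the real struct
 * in avio.h also carries write/seek/close callbacks. */
typedef struct {
    const char *name;                                  /* "mms" */
    int (*url_open)(void **h, const char *uri);
    int (*url_read)(void *h, unsigned char *buf, int size);
} MockURLProtocol;

/* Toy backend: a canned buffer stands in for the network connection. */
typedef struct { const char *data; int pos, len; } MmsHandle;
static MmsHandle g_handle;  /* static for brevity; real code allocates */

static int mms_open(void **h, const char *uri)
{
    if (strncmp(uri, "mms://", 6) != 0)
        return -1;                      /* not our scheme */
    g_handle.data = "ASF data";         /* real code would connect via TCP */
    g_handle.pos  = 0;
    g_handle.len  = 8;
    *h = &g_handle;
    return 0;
}

static int mms_read(void *h, unsigned char *buf, int size)
{
    MmsHandle *m = h;
    int left = m->len - m->pos;
    if (size > left)
        size = left;                    /* short read at end of stream */
    memcpy(buf, m->data + m->pos, size);
    m->pos += size;
    return size;
}

static const MockURLProtocol mms_protocol = { "mms", mms_open, mms_read };
```

The upside of this layering is exactly what I hit: the demuxer above only sees a byte stream, so protocol-level operations like pause and seek have nowhere to plug in.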
So for my third attempt (you never get something right until you do
it at least three times, right?), I have completely rewritten the mms
code today from the spotty specification (TCP only, for now). My
code is totally different from libmms, and uses a state machine and
all the standard ffmpeg goodies. The bonus is that I'm going to
release it as LGPL into ffmpeg, so that we all get the benefits...
It works great, as far as I can tell. (I get all the data off the
net, pausing works, stopping works, seeking might work...)
Now I'm trying to bolt it into asf.c, and that's where the fun is.
Most of asf.c is designed to use a ByteIOContext pointing to a file,
and it does all sorts of wonderful seeks and url_ftells, etc. I
thought maybe I could just pass it an ASF packet (via a stack-based
ByteIOContext, with some magic on the positioning for the ftells) and
it would consume all of it (allowing me not to maintain state), but
it only uses a little at a time...
So my question is, what's the best way to do this: I have big packets
coming over the wire, they need to go into a ByteIOContext whose
ftells return proper values, and they have to be persistent across
calls to read_packet, because the codec doesn't consume an entire asf
packet at a time.
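In other words, what asf.c needs is a ByteIOContext whose read callback drains my persistent queue, with the position counter advanced only by bytes actually consumed, so url_ftell() stays truthful. A standalone mock of that callback-plus-position pattern (this is not the real init_put_byte() signature; all names are illustrative):

```c
#include <assert.h>
#include <string.h>

/* Mock of the ByteIOContext idea: an opaque source, a read callback,
 * and a running byte count standing in for what url_ftell() reports.
 * Purely illustrative; the real context lives in avio.h. */
typedef struct {
    void *opaque;
    int (*read_packet)(void *opaque, unsigned char *buf, int size);
    long pos;                 /* total bytes handed to the demuxer so far */
} MockByteIO;

static int mock_read(MockByteIO *s, unsigned char *buf, int size)
{
    int n = s->read_packet(s->opaque, buf, size);
    if (n > 0)
        s->pos += n;          /* the ftell value advances with consumption */
    return n;
}

/* Example source: a memory buffer standing in for queued ASF packets. */
typedef struct { const unsigned char *data; int pos, len; } MemSource;

static int mem_read(void *opaque, unsigned char *buf, int size)
{
    MemSource *m = opaque;
    int left = m->len - m->pos;
    if (size > left)
        size = left;          /* short read, like a socket */
    memcpy(buf, m->data + m->pos, size);
    m->pos += size;
    return size;
}
```

The point of the indirection is that the demuxer can take bytes in whatever sized nibbles it likes, while the source keeps its own state between calls.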
The simplest solution is the ByteIOContext for the entire file, but
that's wasteful (and impossible on live streams). Should I create an
internal ByteIOContext that I write to as I get the packets, then
another ByteIOContext that the asf code consumes, with the read proc
reading from my internal ByteIOContext? Can ByteIOContexts be used
as circular queues, where I can read and write from the same one, and
the ftell stuff gets incremented and all that jazz (it can't purge
unread data, of course)?
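The circular-queue behaviour I'm after can at least be sketched standalone: the storage wraps, but the read/write counters grow monotonically, so an ftell-style position stays correct as long as unread data is never overwritten. (Helper names are made up and the buffer is tiny for demonstration; real ASF packets are KB-sized.)

```c
#include <assert.h>
#include <string.h>

#define RING_SIZE 8   /* tiny for illustration */

/* Hypothetical ring buffer: write_pos/read_pos grow without bound,
 * so read_pos doubles as an ftell-style stream position. */
typedef struct {
    unsigned char buf[RING_SIZE];
    long write_pos;   /* total bytes ever written */
    long read_pos;    /* total bytes ever consumed */
} Ring;

static long ring_space(const Ring *r)
{
    return RING_SIZE - (r->write_pos - r->read_pos);
}

static int ring_write(Ring *r, const unsigned char *src, int len)
{
    int i;
    if (len > ring_space(r))
        return -1;            /* refuse to purge unread data */
    for (i = 0; i < len; i++)
        r->buf[(r->write_pos + i) % RING_SIZE] = src[i];
    r->write_pos += len;
    return len;
}

static int ring_read(Ring *r, unsigned char *dst, int len)
{
    long avail = r->write_pos - r->read_pos;
    int i;
    if (len > avail)
        len = (int)avail;     /* short read, like a network socket */
    for (i = 0; i < len; i++)
        dst[i] = r->buf[(r->read_pos + i) % RING_SIZE];
    r->read_pos += len;
    return len;
}
```

Whether a stock ByteIOContext can be driven this way, or whether it takes two contexts glued together by a read proc, is exactly what I'm asking.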
Or is there some way to use an AVParser?
Suggestions would be most welcome; I'm getting tired of talking to
myself here... ;-)