[NUT-devel] [nut]: r604 - docs/nutissues.txt
Rich Felker
dalias at aerifal.cx
Tue Feb 12 19:32:00 CET 2008
On Tue, Feb 12, 2008 at 01:42:33PM +0100, Michael Niedermayer wrote:
> On Tue, Feb 12, 2008 at 12:12:52AM -0500, Rich Felker wrote:
> > On Tue, Feb 12, 2008 at 05:51:44AM +0100, Michael Niedermayer wrote:
> > > > I'm
> > > > absolutely against any proposal that precludes implementations from
> > > > performing zero-copy decoding from the stream buffer.
> > >
> > > Well, if you have a buffer in which the frame is, just write the header
> > > before it. If not, just write the header and then (f)read() the rest.
> >
> > I'm envisioning a model where the demuxer directly exports frames from
> > their original location in the stream buffer. It's likely that this
> > buffer needs to be treated as read-only. It might be mmapped from a
> > file (in which case writing would invoke COW)
>
> COW would be invoked only if the kernel tries to keep the pages in some
> sort of cache; otherwise there is absolutely no need for COW.
Even if not a copy, there'd be a page fault, which is expensive in
itself.
> > or the previous frame
> > might not yet be released by the caller (in which case, writing the
> > header would clobber the contents of the previous frame) -- imagine
> > for instance that we need to decode the next audio frame to buffer it,
> > but aren't ready to decode the previous video frame yet.
>
> I wonder if anyone will ever implement such an mmap system; at the least
> it will fail with files larger than 2GB on x86 Linux, and at other sizes
> on other platforms.
Nope, you only map the section of the stream that's still in use.
Normally this will be several megs at most.
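To illustrate, here's a rough sketch of the sliding-window mapping I
have in mind (hypothetical names, plain POSIX mmap, no error handling):

#include <sys/mman.h>
#include <unistd.h>

/* Sketch: map only the live section of the stream, a few megs at a
 * time, rather than the whole (possibly >2GB) file. */
#define WINDOW_SIZE (8*1024*1024)

static void *map_window(int fd, off_t pos, size_t *maplen)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    off_t base = pos - pos % pagesize; /* mmap offset must be page-aligned */
    *maplen = WINDOW_SIZE + (size_t)(pos - base);
    return mmap(0, *maplen, PROT_READ, MAP_PRIVATE, fd, base);
}

/* As parsing advances past the window, munmap() the old window and map
 * the next one; address space use stays bounded regardless of file size. */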
> Also such a system would need careful use of madvise() to ensure
> the kernel doesn't read pages in through (slow) page faults or use random
> guessing of what pages can be discarded.
Indeed.
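Something along these lines, continuing the window sketch above
(assuming 4k pages for brevity; a real implementation would query the
page size):

#include <sys/mman.h>

/* Sketch: keep page faults off the fast path.  Prefetch the part of
 * the window we're about to parse, and drop pages behind frames that
 * have been released. */
static void stream_advise(unsigned char *map, size_t len, size_t done)
{
    /* start async read-in of the unparsed remainder */
    madvise(map + done, len - done, MADV_WILLNEED);
    /* discard fully-consumed pages; 'done' rounded down to a page */
    madvise(map, done & ~(size_t)4095, MADV_DONTNEED);
}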
> > The reason I'm against this whole header compression thing is that it
> > puts lots of nasty restraints on the demuxer implementation that
> > preclude efficient design.
>
> The header compression only puts restraints on an (AFAIK) hypothetical
> design, not one actually existing?
> And it's not just the demuxer but the whole surrounding API which would
> have to be designed for such mmap zero copy.
Precluding the optimal design is bad practice even if the optimal
design has not yet been implemented anywhere. Also keep in mind that
what I said does NOT apply only to the mmap design, but also to a nice
clean read-based design with low copying that probably IS similar to
various current implementations.
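To make it concrete, the read-based design I mean looks roughly like
this (a sketch, not any existing demuxer; names are made up):

/* Sketch: a read()-based stream buffer where frames are exported as
 * pointers into the buffer rather than copied out. */
struct streambuf {
    unsigned char *buf;   /* one large allocation, refilled via read() */
    size_t len;           /* bytes currently buffered */
    size_t pos;           /* current parse position */
    size_t locked;        /* start of the oldest frame the caller still
                             references; bytes before this are dead and
                             may be overwritten or discarded, bytes at
                             or after it must remain intact */
};

Writing a reconstructed header just in front of a frame means writing
into the bytes at buf+offset-hdrlen, which is only safe when those
bytes are already dead (see the second sketch further down).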
> > The nasty cases can be special-cased to
> > minimize their cost, but the tradeoff is implementation size and
> > complexity. The only options I'm willing to consider (and which I
> > still dislike) are:
> >
>
> > A. Only allow header elision (a more correct name than compression)
> > for frames below a very small size limit (e.g. small enough that the
> > compression makes at least a 1% difference to space requirements).
>
> That would be a compromise, though I am not happy about it at all; it's
> another one of these idiotic philosophical restrictions. If the person
> muxing decides that 0.5% overhead is more important than 0.5% time spent
> in a HYPOTHETICAL demuxer implementation
No, I already told you the issue here is not just time but buffer
size. Requiring an unboundedly large buffer in addition to the main
stream buffer is a potentially huge burden leading to all sorts of
robustness issues. As long as the buffer size is bounded and small, no
one really cares.
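With a small size limit, reconstruction needs only a tiny fixed buffer.
A sketch (the cap and header size here are made-up illustration values,
not proposed spec numbers):

#include <stddef.h>
#include <string.h>

#define ELIDE_MAX 256  /* hypothetical cap on elidable frame size */
#define HDR_MAX   8    /* hypothetical bound on elided header length */

struct frame { const unsigned char *data; size_t size; };

/* bounded, tiny; holds one reconstructed frame at a time */
static unsigned char scratch[HDR_MAX + ELIDE_MAX];

static void reconstruct(struct frame *f,
                        const unsigned char *hdr, size_t hdrlen)
{
    /* f->size <= ELIDE_MAX and hdrlen <= HDR_MAX hold by the proposed
     * rule, so this copy is cheap and the buffer size is fixed. */
    memcpy(scratch, hdr, hdrlen);
    memcpy(scratch + hdrlen, f->data, f->size);
    f->data = scratch;
    f->size += hdrlen;
}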
> in his specific situation, then this is his decision, and I wonder where
> you get the arrogance to think you know better what is best for everyone
> else. And these are
It's NOT the business of the person making the file to put
restrictions on the process reading the file. This has been one of my
key issues for NUT since the beginning. Just because
idiot_who_makes_file thinks the only purpose of the file is to watch
it with player_x doesn't mean the file should be exempt from the
features of NUT that make it nicely usable for editing, optimal
seeking, etc.
> > > > > +Issue small-frames
> > > > > +------------------
> > > > > +The original intent of nut frames was that 1 container frame == 1 codec
> > > > > +frame. Though this does not seem to be explicitly written in nut.txt.
> > > >
> > > > It is.
> > >
> > > I thought so too, but I searched and didn't find it, so where is it?
> >
> > It's at least in nut-english.txt :)
>
> It should be in nut.txt as well ...
If it's not, feel free to add it, but I think it belongs in the
information about codecs, since the definition of a frame is pretty
codec-specific. Expressing the abstract idea in the semantic
requirements section of nut.txt would be nice, though!
> > > > > +Also it is inefficient for very small frames, AMR-NB for example has 6-32
> > > > > +bytes per frame.
> > > >
> > > > I don't think anyone really cares that much about efficiency when
> > > > storing
> > >
> > > I do; others likely care about 10% overhead as well.
> >
> > Care about it for what applications? For storing a master file in some
> > odd barely-compressed audio form, I wouldn't care. For streaming radio
> > or movie rips, of course it would matter, but I think someone would
> > choose an appropriate codec for such applications.
>
> Which codec would that be? (we are talking about 5-10kbit/sec).
Vorbis.
> > > > shit codecs
> > >
> > > I think this applies more to audio codecs; if it's limited to coding
> > > shit I won't bother and will gladly leave it alone. ;)
> > >
> > > > in NUT. Obviously any good codec will use large
> > > > frame sizes
> > >
> > > no
> >
> > Is there any audio codec of real interest with such small frame sizes?
>
> AMR-NB has 6-32 byte frames, see libavcodec/libamr.c
> QCELP has 4-35 byte frames, see soc/qcelp/qcelp_parser.c
And is there any interest in these codecs aside from watching files
generated by camera phones and transcoding them to something sane?
> I am not vetoing the decision of 1frame=1frame for normal-sized frames,
> which was what we decided upon. I don't think tiny frames have been
> considered. And we never planned to enforce 1frame=1frame for PCM, which
> is maybe the reason why it's not in the spec? A carelessly written rule
> would have fatal consequences for PCM.
For PCM, there's no seeking issue because one always knows how to seek
to an arbitrary sample within PCM 'frames'. In this case I would just
recommend having a SHOULD clause that PCM frames SHOULD be of the same
or shorter duration than the typical frame durations of the other
streams in the file (as a way of keeping interleaving clean and aiding
players that don't want to do sample-resolution seeking inside PCM
frames).
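The arithmetic is trivial; a sketch for interleaved integer PCM:

#include <stddef.h>
#include <stdint.h>

/* Sketch: the byte offset of any sample within a PCM frame is pure
 * arithmetic, so no container frame boundary is needed to seek. */
static size_t pcm_offset(int64_t target_sample, int64_t frame_first_sample,
                         int channels, int bytes_per_sample)
{
    return (size_t)(target_sample - frame_first_sample)
           * channels * bytes_per_sample;
}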
> > I'm not worried about zero copy for such a low bitrate stream. My
> > concern is that idiots could use this nasty header compression for
> > 20mbit/sec h264 for no good reason, just because it "sounds good".
>
> And if they did, what bad would happen? COW copying every 20th page?
> What overhead does that cause?
> Also even if decoding such a file were 2% slower, it's one file which
COW is not the worst case. If you read my original post, the common
and worst case is NOT necessarily mmap, but rather when you can't
clobber the source stream because the calling process still holds a
valid reference to previous frames in it. This would require a memcpy
of the ENTIRE FRAME, which would easily make performance 10% slower or
much worse.
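In terms of the stream buffer sketched earlier, the reconstruction
path looks like this (again a sketch; 'locked' is the start of the
oldest still-referenced frame):

#include <stdlib.h>
#include <string.h>

/* Sketch: return a pointer to an elided frame at buf+offset with its
 * header restored.  Only when the bytes in front of the frame are
 * dead can the header be written in place. */
static unsigned char *get_elided_frame(unsigned char *buf, size_t locked,
                                       size_t offset, size_t size,
                                       const unsigned char *hdr,
                                       size_t hdrlen)
{
    if (offset >= hdrlen && offset - hdrlen >= locked) {
        /* fast path: scribble the header into dead buffer space */
        memcpy(buf + offset - hdrlen, hdr, hdrlen);
        return buf + offset - hdrlen;
    }
    /* slow path: a previous frame is still live there, so the header
     * AND the ENTIRE frame get copied -- for a 20mbit/sec stream this
     * is where the 10%-or-worse hit comes from */
    unsigned char *tmp = malloc(hdrlen + size);
    if (!tmp)
        return 0;
    memcpy(tmp, hdr, hdrlen);
    memcpy(tmp + hdrlen, buf + offset, size);
    return tmp; /* caller must free(); ownership glossed over here */
}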
If you really want this header elision, I'm willing to consider it for
SMALL frames. But please don't flame about wanting to support it for
large frames where it's absolutely useless and has lots of practical
problems for efficient implementations!
Rich