[NUT-devel] [nut]: r604 - docs/nutissues.txt

Michael Niedermayer michaelni at gmx.at
Tue Feb 12 13:42:33 CET 2008


On Tue, Feb 12, 2008 at 12:12:52AM -0500, Rich Felker wrote:
> On Tue, Feb 12, 2008 at 05:51:44AM +0100, Michael Niedermayer wrote:
> > > I'm
> > > absolutely against any proposal that precludes implementations from
> > > performing zero-copy decoding from the stream buffer.
> > 
> > Well if you have a buffer in which the frame is, just write the header
> > before it. If not just write the header and then (f)read() the rest.
> 
> I'm envisioning a model where the demuxer directly exports frames from
> their original location in the stream buffer. It's likely that this
> buffer needs to be treated as read-only. It might be mmapped from a
> file (in which case writing would invoke COW) 

COW would be invoked only if the kernel tries to keep the pages in some
sort of cache; otherwise there is absolutely no need for COW.


> or the previous frame
> might not yet be released by the caller (in which case, writing the
> header would clobber the contents of the previous frame) -- imagine
> for instance that we need to decode the next audio frame to buffer it,
> but aren't ready to decode the previous video frame yet.

I wonder if anyone will ever implement such an mmap system; at the very least
it will fail with files larger than 2 GB on x86 Linux, and at other size limits
on other platforms. Also, such a system would need careful use of madvise() to
ensure the kernel doesn't read pages in through (slow) page faults or rely on
random guessing about which pages can be discarded.
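
For reference, the kind of design being talked about would look roughly like
this (a sketch only, not an existing NUT demuxer; map_input() is a made-up
name and error handling is trimmed):

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map the whole input read-only; the demuxer would then hand out pointers
 * into this mapping instead of copying frames. */
static const unsigned char *map_input(const char *path, size_t *size)
{
    struct stat st;
    int fd = open(path, O_RDONLY);
    if (fd < 0 || fstat(fd, &st) < 0)
        return NULL;
    *size = st.st_size;  /* on 32 bit, a >2 GB file cannot be mapped in one piece anyway */
    /* MAP_PRIVATE: any write into a page (e.g. reconstructing a frame header
     * in place) triggers copy-on-write. */
    void *p = mmap(NULL, *size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    if (p == MAP_FAILED)
        return NULL;
    /* Without a hint the kernel pulls pages in through (slow) page faults
     * and guesses which pages it may discard. */
    madvise(p, *size, MADV_SEQUENTIAL);
    return p;
}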


> 
> The reason I'm against this whole header compression thing is that it
> puts lots of nasty restraints on the demuxer implementation that
> preclude efficient design. 

The header compression only puts restraints on an AFAIK hypothetical design,
not on one that actually exists?
And it's not just the demuxer: the whole surrounding API would have to be
designed for such mmap zero-copy.


> The nasty cases can be special-cased to
> minimize their cost, but the tradeoff is implementation size and
> complexity. The only options I'm willing to consider (and which I
> still dislike) are:
> 

> A. Only allow header elision (a more correct name than compression)
> for frames below a very small size limit (e.g. small enough that the
> compression makes at least a 1% difference to space requirements).

That would be a compromise, though I am not happy about it at all; it's another
one of these idiotic philosophical restrictions. If the person muxing decides
that 0.5% overhead is more important than 0.5% of time spent in a HYPOTHETICAL
demuxer implementation in his specific situation, then that is his decision,
and I wonder where you take the arrogance from to think you know better what is
best for everyone else. These are people who might want to apply NUT in
situations you haven't thought about. The 0.5% of time might be available while
the 0.5% of bandwidth is not, and again, that 0.5% of time is for an
implementation you have envisioned, not one that actually exists.


> 
> B. Require that the elided header fit in the same number of bytes (or
> fewer) than the minimal NUT frame header size for the corresponding
> framecode. This would allow overwrite-in-place without clobbering
> other frames, but still the COW issue remains which I don't like...
> 
> > > > +Issue small-frames
> > > > +------------------
> > > > +The original intent of nut frames was that 1 container frame == 1 codec
> > > > +frame. Though this does not seem to be explicitly written in nut.txt.
> > > 
> > > It is.
> > 
> > I thought so too, but I searched and didn't find it, so where is it?
> 
> It's at least in nut-english.txt :)

It should be in nut.txt as well ...


> 
> > > > +Also it is inefficient for very small frames, AMR-NB for example has 6-32
> > > > +bytes per frame.
> > > 
> > > I don't think anyone really cares that much about efficiency when
> > > storing 
> > 
> > I do, others likely care about 10% overhead as well.
> 
> Care about it for what applications? For storing a master file in some
> odd barely-compressed audio form, I wouldn't care. For streaming radio
> or movie rips, of course it would matter, but I think someone would
> choose an appropriate codec for such applications..

Which codec would that be? (We are talking about 5-10 kbit/sec.)


> 
> > > shit codecs
> > 
> > I think this applies more to audio codecs; if it is limited to coding shit
> > I won't bother and will gladly leave it alone. ;)
> > 
> > > in NUT. Obviously any good codec will use large
> > > frame sizes 
> > 
> > no
> 
> Is there any audio codec of real interest with such small frame sizes?

AMR-NB has 6-32 byte frames, see libavcodec/libamr.c.
QCELP has 4-35 byte frames, see soc/qcelp/qcelp_parser.c.
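
(For a rough sense of scale: even in the best case of a single framecode byte
per frame, a 6-byte frame pays 1/7 ≈ 14% of the stored data in container
overhead, and a 32-byte frame still about 3%; frames whose size has to be
coded explicitly in the frame header pay more.)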


> 
> > > > +Solutions:
> > > > +A. Enforce 1 container frame == 1 codec frame even if it causes 10% overhead.
> > > 
> > > Yes.
> > 
> > Vote noted. And my veto as well unless option D is selected which would
> > also include A.
> 
> It's frustrating how you intend to "veto" decisions that were made 4
> years ago now when we're trying to finally get NUT to see the light.

I am not vetoing the decision of 1 frame = 1 frame for normal-sized frames,
which is what we decided upon. I don't think tiny frames have been considered.
And we never planned to enforce 1 frame = 1 frame for PCM, which is maybe
the reason why it's not in the spec? A carelessly written rule would have
fatal consequences for PCM.


> 
> > > > +B. Allow multiple frames as long as the whole packet is less than some
> > > > +   fixed minimum in bytes (like 256byte)
> > > 
> > > Very very bad. Demuxing requires a codec-specific framer.
> > > 
> > > > +C. Allow multiple frames as long as the whole packet is less than some
> > > > +   fixed minimum in bytes (like 256byte) and the codec uses a constant
> > > > +   framesize in samples.
> > > 
> > > This does not help.
> > 
> > help what?
> 
> I meant that I don't see how it's any less offensive than option B.

So it is offensive.


> The fundamental problem is still there: inability to demux and seek to
> individual frames without codec incest. It's sad that we've gotten to
> the point where 1 byte of overhead per frame is too much.. Even OGG
> and MKV have at least that much!

try mpeg-ps ;)


> 
> > > > +D. Use header compression, that is allow to store the first (few) bytes
> > > > +   of a codec frame together with its size in the framecode table. This
> > > > +   would allow us to store the size of a frame without any redundancy.
> > > > +   Thus effectivly avoiding the overhead small frames cause.
> > > 
> > > At the cost of efficient decoding... If such a horrid proposal were
> > 
> > I'd guess the overhead of working with 6-byte frames and decoding
> > a framecode before each takes more time than copying 60 bytes. Also, depending
> > on the media this comes from, 10% extra disk IO should beat 1 copy pass in
> > slowness.
> > 
> > And I don't give a damn about zero copy for a 400-1000 byte/sec codec!
> > Also, so much for bad compression ...
> 
> I'm not worried about zero copy for such a low bitrate stream. My
> concern is that idiots could use this nasty header compression for
> 20mbit/sec h264 for no good reason, just because it "sounds good".

And if they did, what bad would happen? COW copying every 20th page?
What overhead does that cause?
Also, even if decoding such a file were 2% slower, it's one file which
has been muxed by someone intentionally ignoring recommendations
and defaults. Such a person will make many more such decisions and
will likely just use WMV or something. I think the header elision would be
your least concern with such a file.
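
To make concrete what option D asks of a plain copying demuxer (a sketch only;
the struct and field names are made up, this is neither spec wording nor an
existing implementation):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical framecode-table entry extended for header elision. */
typedef struct {
    uint8_t elided[8];   /* leading codec-frame bytes stored once, globally */
    int     elided_len;  /* how many of them this framecode elides (<= 8)   */
} elision_entry;

/* Rebuild the full codec frame: the elided bytes first, then the payload
 * actually present in the container.  One small memcpy per frame is the
 * entire extra cost for a copying demuxer. */
static int read_elided_frame(FILE *in, const elision_entry *fc,
                             size_t stored_size, uint8_t *dst)
{
    memcpy(dst, fc->elided, fc->elided_len);
    if (fread(dst + fc->elided_len, 1, stored_size, in) != stored_size)
        return -1;
    return (int)(fc->elided_len + stored_size);
}

A zero-copy demuxer would instead have to prepend those bytes into a side
buffer or overwrite them in place (option B above), which is where the COW
discussion comes from.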

[...]
-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

I have never wished to cater to the crowd; for what I know they do not
approve, and what they approve I do not know. -- Epicurus