
On Mon, Feb 11, 2008 at 10:41:51PM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 12:33:22AM +0100, michael wrote:
Author: michael
Date: Tue Feb 12 00:33:21 2008
New Revision: 604
Log: 3 more issues which have come up in the past but have IIRC never been resolved.
Modified: docs/nutissues.txt
Modified: docs/nutissues.txt
==============================================================================
--- docs/nutissues.txt	(original)
+++ docs/nutissues.txt	Tue Feb 12 00:33:21 2008
@@ -110,3 +110,42 @@
 Solutions:
 A. Store such alternative playlists of scenes in info packets somehow.
 B. Design a separate layer for it.
 C. Do not support this.
+
+
+Issue header-compression
+------------------------
+Headers of codec frames often contain low entropy information or things
+we already know like the frame size.
+
+A. Store header bytes and length in the framecode table.
+B. Leave things as they are.
I think this one was resolved strongly as a leave-it-alone.
resolved by whom? :)
I'm absolutely against any proposal that precludes implementations from performing zero-copy decoding from the stream buffer.
Well, if you already have a buffer the frame is in, just write the header in front of it. If not, just write the header and then (f)read() the rest.
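The prepend approach above can be sketched like this (illustrative C only; the function name and layout are invented, not from any real NUT demuxer):

```c
/* Sketch of "just write the header before it": the demuxer copies the
 * table-stored header bytes in front of the payload it read from the
 * stream, so the codec still sees one contiguous frame. */
#include <stdlib.h>
#include <string.h>

unsigned char *assemble_frame(const unsigned char *stored_hdr, size_t hdr_len,
                              const unsigned char *payload, size_t payload_len)
{
    unsigned char *frame = malloc(hdr_len + payload_len);
    if (!frame)
        return NULL;
    memcpy(frame, stored_hdr, hdr_len);            /* prepend the elided header */
    memcpy(frame + hdr_len, payload, payload_len); /* then the bytes from the stream */
    return frame;
}
```

The cost is exactly the one copy pass the zero-copy objection is about; a zero-copy decoder would instead have to accept a (header, payload) pair.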
+Issue small-frames
+------------------
+The original intent of nut frames was that 1 container frame == 1 codec
+frame. Though this does not seem to be explicitly written in nut.txt.
It is.
I thought so too, but I searched and didn't find it, so where is it?
+Also it is inefficient for very small frames, AMR-NB for example has 6-32
+bytes per frame.
I don't think anyone really cares that much about efficiency when storing
I do, others likely care about 10% overhead as well.
shit codecs
I think this applies more to audio codecs; if it were limited to coding shit I wouldn't bother and would gladly leave it alone. ;)
in NUT. Obviously any good codec will use large frame sizes
no
or compression will not be good.
also not true
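To make the 10% figure concrete, here is the back-of-the-envelope arithmetic (a sketch only; the real per-frame cost depends on which framecodes the muxer uses, and may include coded size and pts fields):

```c
/* Rough container overhead for 1 codec frame per container frame with tiny
 * frames.  Assumes roughly 1-2 bytes of per-frame overhead (framecode byte,
 * possibly a coded size); the exact number depends on the framecode table. */
double overhead_pct(unsigned per_frame_overhead, unsigned payload_bytes)
{
    return 100.0 * per_frame_overhead
                 / (per_frame_overhead + payload_bytes);
}
```

With AMR-NB's 6-32 byte frames, 1-2 bytes of overhead per frame lands between roughly 3% and 25%, so 10% is a realistic middle case, not a worst case.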
+Solutions:
+A. Enforce 1 container frame == 1 codec frame even if it causes 10% overhead.
Yes.
Vote noted. And my veto as well, unless option D is selected, which would also include A.
+B. Allow multiple frames as long as the whole packet is less than some
+   fixed minimum in bytes (like 256byte)
Very very bad. Demuxing requires a codec-specific framer.
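A codec-specific framer means carrying code like the following sketch for every packed codec. For AMR-NB, splitting a packed packet requires the codec's frame-type-to-length table (sizes here follow my reading of RFC 4867's octet-aligned storage format; treat the exact table as an assumption to check against the RFC):

```c
#include <stddef.h>

/* AMR-NB frame length in bytes, indexed by the frame-type field in the
 * first byte (length includes that 1-byte header; octet-aligned format). */
static const size_t amrnb_frame_size[16] = {
    13, 14, 16, 18, 20, 21, 27, 32, /* speech modes 4.75 - 12.2 kbit/s */
    6,  1,  1,  1,  1,  1,  1,  1   /* SID, reserved, NO_DATA */
};

/* Count codec frames packed into one buffer; -1 if a frame is truncated.
 * This is the per-codec framer the demuxer would be forced to carry. */
int count_amrnb_frames(const unsigned char *buf, size_t len)
{
    int n = 0;
    size_t pos = 0;
    while (pos < len) {
        size_t fs = amrnb_frame_size[(buf[pos] >> 3) & 0x0f];
        if (pos + fs > len)
            return -1;
        pos += fs;
        n++;
    }
    return n;
}
```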
+C. Allow multiple frames as long as the whole packet is less than some
+   fixed minimum in bytes (like 256byte) and the codec uses a constant
+   framesize in samples.
This does not help.
help what?
+D. Use header compression, that is allow to store the first (few) bytes
+   of a codec frame together with its size in the framecode table. This
+   would allow us to store the size of a frame without any redundancy,
+   thus effectively avoiding the overhead small frames cause.
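Option D can be pictured as follows. This is a hypothetical sketch, not the actual NUT structures; the struct, field names, and the MAX_ELIDED cap are all invented for illustration:

```c
#include <stddef.h>

#define MAX_ELIDED 8  /* assumed cap on elided header bytes */

/* Hypothetical shape of an option-D framecode table entry: the table fixes
 * both the frame size and the leading codec-frame bytes, so neither needs
 * to be repeated per frame. */
struct framecode_entry {
    size_t frame_size;                 /* implied codec-frame size */
    unsigned char header[MAX_ELIDED];  /* leading bytes elided from the stream */
    size_t header_len;
};

/* Bytes written to the container for one codec frame:
 * one framecode byte plus the payload minus the elided header. */
size_t stored_bytes(const struct framecode_entry *e)
{
    return 1 + e->frame_size - e->header_len;
}
```

For a 13-byte frame with one elided header byte this stores 13 bytes instead of 14, i.e. the framecode byte costs nothing net; that is the "without any redundancy" claim in D.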
At the cost of efficient decoding... If such a horrid proposal were
I'd guess the overhead of working with 6-byte frames and decoding a framecode before each one takes more time than copying 60 bytes. Also, depending on the media this comes from, 10% extra disk IO should beat one copy pass in slowness. And I don't give a damn about zero copy for a 400-1000 byte/sec codec! Also, so much for bad compression ...

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

No human being will ever know the Truth, for even if they happen to say
it by chance, they would not even know they had done so. -- Xenophanes