[nut]: r604 - docs/nutissues.txt

Author: michael
Date: Tue Feb 12 00:33:21 2008
New Revision: 604

Log:
3 more issues which have come up in the past but have IIRC never been resolved.

Modified:
   docs/nutissues.txt

Modified: docs/nutissues.txt
==============================================================================
--- docs/nutissues.txt	(original)
+++ docs/nutissues.txt	Tue Feb 12 00:33:21 2008
@@ -110,3 +110,42 @@ Solutions:
 A. Store such alternative playlists of scenes in info packets somehow.
 B. Design a separate layer for it.
 C. Do not support this.
+
+
+Issue header-compression
+------------------------
+Headers of codec frames often contain low entropy information or things
+we already know like the frame size.
+
+A. Store header bytes and length in the framecode table.
+B. Leave things as they are.
+
+
+Issue small-frames
+------------------
+The original intent of nut frames was that 1 container frame == 1 codec
+frame. Though this does not seem to be explicitly written in nut.txt.
+Also it is inefficient for very small frames, AMR-NB for example has 6-32
+bytes per frame.
+
+Solutions:
+A. Enforce 1 container frame == 1 codec frame even if it causes 10% overhead.
+B. Allow multiple frames as long as the whole packet is less than some
+   fixed minimum in bytes (like 256byte)
+C. Allow multiple frames as long as the whole packet is less than some
+   fixed minimum in bytes (like 256byte) and the codec uses a constant
+   framesize in samples.
+D. Use header compression, that is allow to store the first (few) bytes
+   of a codec frame together with its size in the framecode table. This
+   would allow us to store the size of a frame without any redundancy.
+   Thus effectivly avoiding the overhead small frames cause.
+
+
+Issue pcm-frames
+----------------
+No word is said about how many or few PCM samples should be in a frame.
+
+Solutions:
+A. Define an maximum number of samples (like 512)
+B. Define an maximum timespam (like 0.1 sec)
+C. Define an maximum number of bytes (like 1024)

On Tue, Feb 12, 2008 at 12:33:22AM +0100, michael wrote:
Author: michael Date: Tue Feb 12 00:33:21 2008 New Revision: 604
Log: 3 more issues which have come up in the past but have IIRC never been resolved.
Modified: docs/nutissues.txt
Modified: docs/nutissues.txt
==============================================================================
--- docs/nutissues.txt	(original)
+++ docs/nutissues.txt	Tue Feb 12 00:33:21 2008
@@ -110,3 +110,42 @@ Solutions:
 A. Store such alternative playlists of scenes in info packets somehow.
 B. Design a separate layer for it.
 C. Do not support this.
+
+
+Issue header-compression
+------------------------
+Headers of codec frames often contain low entropy information or things
+we already know like the frame size.
+
+A. Store header bytes and length in the framecode table.
+B. Leave things as they are.
I think this one was resolved strongly as a leave-it-alone. I'm absolutely against any proposal that precludes implementations from performing zero-copy decoding from the stream buffer.
+Issue small-frames +------------------ +The original intent of nut frames was that 1 container frame == 1 codec +frame. Though this does not seem to be explicitly written in nut.txt.
It is.
+Also it is inefficient for very small frames, AMR-NB for example has 6-32 +bytes per frame.
I don't think anyone really cares that much about efficiency when storing shit codecs in NUT. Obviously any good codec will use large frame sizes or compression will not be good.
+Solutions: +A. Enforce 1 container frame == 1 codec frame even if it causes 10% overhead.
Yes.
+B. Allow multiple frames as long as the whole packet is less than some + fixed minimum in bytes (like 256byte)
Very very bad. Demuxing requires a codec-specific framer.
+C. Allow multiple frames as long as the whole packet is less than some + fixed minimum in bytes (like 256byte) and the codec uses a constant + framesize in samples.
This does not help.
+D. Use header compression, that is allow to store the first (few) bytes + of a codec frame together with its size in the framecode table. This + would allow us to store the size of a frame without any redundancy. + Thus effectivly avoiding the overhead small frames cause.
At the cost of efficient decoding... If such a horrid proposal were accepted, it would have to be restricted to frames smaller than ~256 bytes. Otherwise the buffer requirements for reconstructing the complete frame would be troublesome and for very large frames it would make a huge performance difference.
+Issue pcm-frames
+----------------
+No word is said about how many or few PCM samples should be in a frame.
+
+Solutions:
+A. Define an maximum number of samples (like 512)
+B. Define an maximum timespam (like 0.1 sec)
+C. Define an maximum number of bytes (like 1024)
PCM is already a much bigger question due to sample format and interleaving issues. Should there be a new fourcc for each combination of PCM properties, or a single fourcc with extradata? If the latter, the number of samples per frame would probably be coded in the extradata or something. I'm open to ideas. Rich

On Mon, Feb 11, 2008 at 10:41:51PM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 12:33:22AM +0100, michael wrote:
Author: michael Date: Tue Feb 12 00:33:21 2008 New Revision: 604
Log: 3 more issues which have come up in the past but have IIRC never been resolved.
Modified: docs/nutissues.txt
Modified: docs/nutissues.txt
==============================================================================
--- docs/nutissues.txt	(original)
+++ docs/nutissues.txt	Tue Feb 12 00:33:21 2008
@@ -110,3 +110,42 @@ Solutions:
 A. Store such alternative playlists of scenes in info packets somehow.
 B. Design a separate layer for it.
 C. Do not support this.
+
+
+Issue header-compression
+------------------------
+Headers of codec frames often contain low entropy information or things
+we already know like the frame size.
+
+A. Store header bytes and length in the framecode table.
+B. Leave things as they are.
I think this one was resolved strongly as a leave-it-alone.
resolved by whom? :)
I'm absolutely against any proposal that precludes implementations from performing zero-copy decoding from the stream buffer.
Well if you have a buffer in which the frame is, just write the header before it. If not just write the header and then (f)read() the rest.
+Issue small-frames +------------------ +The original intent of nut frames was that 1 container frame == 1 codec +frame. Though this does not seem to be explicitly written in nut.txt.
It is.
I thought so too, but I searched and didn't find it, so where is it?
+Also it is inefficient for very small frames, AMR-NB for example has 6-32 +bytes per frame.
I don't think anyone really cares that much about efficiency when storing
I do, others likely care about 10% overhead as well.
shit codecs
I think this applies more to audio codecs; if it is limited to coding shit, I won't bother and will gladly leave it alone. ;)
in NUT. Obviously any good codec will use large frame sizes
no
or compression will not be good.
also not true
+Solutions: +A. Enforce 1 container frame == 1 codec frame even if it causes 10% overhead.
Yes.
Vote noted. And my veto as well unless option D is selected which would also include A.
+B. Allow multiple frames as long as the whole packet is less than some + fixed minimum in bytes (like 256byte)
Very very bad. Demuxing requires a codec-specific framer.
+C. Allow multiple frames as long as the whole packet is less than some + fixed minimum in bytes (like 256byte) and the codec uses a constant + framesize in samples.
This does not help.
help what?
+D. Use header compression, that is allow to store the first (few) bytes + of a codec frame together with its size in the framecode table. This + would allow us to store the size of a frame without any redundancy. + Thus effectivly avoiding the overhead small frames cause.
At the cost of efficient decoding... If such a horrid proposal were
I'd guess the overhead of working with 6-byte frames and decoding a framecode before each takes more time than copying 60 bytes. Also, depending on the media this comes from, 10% extra disk IO should beat one copy pass in slowness. And I don't give a damn about zero copy for a 400-1000 byte/sec codec! Also, so much for bad compression ...

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

No human being will ever know the Truth, for even if they happen to say it by chance, they would not even know they had done so. -- Xenophanes

On Tue, Feb 12, 2008 at 05:51:44AM +0100, Michael Niedermayer wrote:
I'm absolutely against any proposal that precludes implementations from performing zero-copy decoding from the stream buffer.
Well if you have a buffer in which the frame is, just write the header before it. If not just write the header and then (f)read() the rest.
I'm envisioning a model where the demuxer directly exports frames from their original location in the stream buffer. It's likely that this buffer needs to be treated as read-only. It might be mmapped from a file (in which case writing would invoke COW), or the previous frame might not yet be released by the caller (in which case writing the header would clobber the contents of the previous frame) -- imagine for instance that we need to decode the next audio frame to buffer it, but aren't ready to decode the previous video frame yet.

The reason I'm against this whole header compression thing is that it puts lots of nasty restraints on the demuxer implementation that preclude efficient design. The nasty cases can be special-cased to minimize their cost, but the tradeoff is implementation size and complexity. The only options I'm willing to consider (and which I still dislike) are:

A. Only allow header elision (a more correct name than compression) for frames below a very small size limit (e.g. small enough that the compression makes at least a 1% difference to space requirements).

B. Require that the elided header fit in the same number of bytes as (or fewer than) the minimal NUT frame header size for the corresponding framecode. This would allow overwrite-in-place without clobbering other frames, but the COW issue still remains, which I don't like...
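(To make option B concrete: a minimal sketch, assuming a writable buffer in which the stored payload immediately follows the NUT frame header; the function and parameter names are hypothetical, not taken from nut.txt or any existing demuxer.)

#include <stdint.h>
#include <string.h>

/* hdr_start points at the NUT frame header in a writable stream buffer,
 * hdr_len is its coded length, and the stored payload follows immediately.
 * Returns the start of the rebuilt frame, or NULL if the elided bytes would
 * spill backwards into the previous frame. */
static uint8_t *rebuild_in_place(uint8_t *hdr_start, size_t hdr_len,
                                 const uint8_t *elided, size_t elided_len)
{
    if (elided_len > hdr_len)
        return NULL;
    uint8_t *frame = hdr_start + hdr_len - elided_len;
    memcpy(frame, elided, elided_len);  /* overwrite the tail of the header */
    return frame;                       /* elided bytes + payload, contiguous */
}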
+Issue small-frames +------------------ +The original intent of nut frames was that 1 container frame == 1 codec +frame. Though this does not seem to be explicitly written in nut.txt.
It is.
I thought so too but i searched, and didnt find it so where is it?
It's at least in nut-english.txt :)
+Also it is inefficient for very small frames, AMR-NB for example has 6-32 +bytes per frame.
I don't think anyone really cares that much about efficiency when storing
I do, others likely care about 10% overhead as well.
Care about it for what applications? For storing a master file in some odd barely-compressed audio form, I wouldn't care. For streaming radio or movie rips, of course it would matter, but I think someone would choose an appropriate codec for such applications..
shit codecs
I think this applies more to audio codes, if it limited to coding shit i wont bother and gladly leave it alone. ;)
in NUT. Obviously any good codec will use large frame sizes
no
Is there any audio codec of real interest with such small frame sizes?
+Solutions: +A. Enforce 1 container frame == 1 codec frame even if it causes 10% overhead.
Yes.
Vote noted. And my veto as well unless option D is selected which would also include A.
It's frustrating how you intend to "veto" decisions that were made 4 years ago now when we're trying to finally get NUT to see the light.
+B. Allow multiple frames as long as the whole packet is less than some + fixed minimum in bytes (like 256byte)
Very very bad. Demuxing requires a codec-specific framer.
+C. Allow multiple frames as long as the whole packet is less than some + fixed minimum in bytes (like 256byte) and the codec uses a constant + framesize in samples.
This does not help.
help what?
I meant that I don't see how it's any less offensive than option B. The fundamental problem is still there: inability to demux and seek to individual frames without codec incest. It's sad that we've gotten to the point where 1 byte of overhead per frame is too much.. Even OGG and MKV have at least that much!
+D. Use header compression, that is allow to store the first (few) bytes + of a codec frame together with its size in the framecode table. This + would allow us to store the size of a frame without any redundancy. + Thus effectivly avoiding the overhead small frames cause.
At the cost of efficient decoding... If such a horrid proposal were
Id guess the overhead of working with 6 byte frames and decoding a framecode before each takes more time than copying 60 bytes. Also depending on the media this comes from 10% extra disk IO should beat 1 copy pass in slowness.
And i dont give a damn about zero copy for a 400-1000 byte / sec codec! also so much for bad compression ...
I'm not worried about zero copy for such a low bitrate stream. My concern is that idiots could use this nasty header compression for 20mbit/sec h264 for no good reason, just because it "sounds good". Rich

On Tue, Feb 12, 2008 at 12:12:52AM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 05:51:44AM +0100, Michael Niedermayer wrote:
I'm absolutely against any proposal that precludes implementations from performing zero-copy decoding from the stream buffer.
Well if you have a buffer in which the frame is, just write the header before it. If not just write the header and then (f)read() the rest.
I'm envisioning a model where the demuxer directly exports frames from their original location in the stream buffer. It's likely that this buffer needs to be treated as read-only. It might be mmapped from a file (in which case writing would invoke COW)
COW would be invoked only if the kernel tries to keep the pages in some sort of cache, otherwise there is absolutely no need for COW.
or the previous frame might not yet be released by the caller (in which case, writing the header would clobber the contents of the previous frame) -- imagine for instance that we need to decode the next audio frame to buffer it, but aren't ready to decode the previous video frame yet.
I wonder if anyone will ever implement such an mmap system; at the least it will fail with files larger than 2 GB on x86 Linux, and at other sizes on other platforms. Also, such a system would need careful use of madvise() to ensure the kernel doesn't read pages through (slow) page faults or use random guessing about which pages can be discarded.
The reason I'm against this whole header compression thing is that it puts lots of nasty restraints on the demuxer implementation that preclude efficient design.
The header compression only puts restraints on an AFAIK hypothetical design, not one that actually exists. And it's not just the demuxer but the whole surrounding API that would have to be designed for such mmap zero copy.
The nasty cases can be special-cased to minimize their cost, but the tradeoff is implementation size and complexity. The only options I'm willing to consider (and which I still dislike) are:
A. Only allow header elision (a more correct name than compression) for frames below a very small size limit (e.g. small enough that the compression makes at least a 1% difference to space requirements).
That would be a compromise, though I am not happy about it at all; it's another one of these idiotic philosophical restrictions. If the person muxing decides that 0.5% overhead is more important than 0.5% time spent in a HYPOTHETICAL demuxer implementation in his specific situation, then this is his decision, and I wonder where you take the arrogance from to think you know better what is best for everyone else. And these are people who might want to apply NUT in situations you haven't thought about. The 0.5% time might be available, the 0.5% bandwidth maybe not, and again this is 0.5% time for an implementation you have envisioned, not one that actually exists.
B. Require that the elided header fit in the same number of bytes (or fewer) than the minimal NUT frame header size for the corresponding framecode. This would allow overwrite-in-place without clobbering other frames, but still the COW issue remains which I don't like...
+Issue small-frames +------------------ +The original intent of nut frames was that 1 container frame == 1 codec +frame. Though this does not seem to be explicitly written in nut.txt.
It is.
I thought so too but i searched, and didnt find it so where is it?
It's at least in nut-english.txt :)
It should be in nut.txt as well ...
+Also it is inefficient for very small frames, AMR-NB for example has 6-32 +bytes per frame.
I don't think anyone really cares that much about efficiency when storing
I do, others likely care about 10% overhead as well.
Care about it for what applications? For storing a master file in some odd barely-compressed audio form, I wouldn't care. For streaming radio or movie rips, of course it would matter, but I think someone would choose an appropriate codec for such applications..
Which codec would that be? (we are talking about 5-10kbit/sec).
shit codecs
I think this applies more to audio codes, if it limited to coding shit i wont bother and gladly leave it alone. ;)
in NUT. Obviously any good codec will use large frame sizes
no
Is there any audio codec of real interest with such small frame sizes?
AMR-NB has 6-32 byte frames (see libavcodec/libamr.c); QCELP has 4-35 byte frames (see soc/qcelp/qcelp_parser.c).
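(For a rough sense of scale, arithmetic only: AMR-NB frames cover 20 ms, so a 32-byte frame corresponds to 32*8/0.02 = 12.8 kbit/s and a 6-byte frame to 2.4 kbit/s; at such sizes even 2-3 bytes of per-frame container overhead is roughly 6-50%, which is where an overall figure around 10% becomes plausible.)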
+Solutions: +A. Enforce 1 container frame == 1 codec frame even if it causes 10% overhead.
Yes.
Vote noted. And my veto as well unless option D is selected which would also include A.
It's frustrating how you intend to "veto" decisions that were made 4 years ago now when we're trying to finally get NUT to see the light.
I am not vetoing the decision of 1 frame = 1 frame for normal-sized frames, which is what we decided upon. I don't think tiny frames have been considered. And we never planned to enforce 1 frame = 1 frame for PCM, which is maybe the reason why it's not in the spec? A carelessly written rule would have fatal consequences for PCM.
+B. Allow multiple frames as long as the whole packet is less than some + fixed minimum in bytes (like 256byte)
Very very bad. Demuxing requires a codec-specific framer.
+C. Allow multiple frames as long as the whole packet is less than some + fixed minimum in bytes (like 256byte) and the codec uses a constant + framesize in samples.
This does not help.
help what?
I meant that I don't see how it's any less offensive than option B.
So it is offensive.
The fundamental problem is still there: inability to demux and seek to individual frames without codec incest. It's sad that we've gotten to the point where 1 byte of overhead per frame is too much.. Even OGG and MKV have at least that much!
try mpeg-ps ;)
+D. Use header compression, that is allow to store the first (few) bytes + of a codec frame together with its size in the framecode table. This + would allow us to store the size of a frame without any redundancy. + Thus effectivly avoiding the overhead small frames cause.
At the cost of efficient decoding... If such a horrid proposal were
Id guess the overhead of working with 6 byte frames and decoding a framecode before each takes more time than copying 60 bytes. Also depending on the media this comes from 10% extra disk IO should beat 1 copy pass in slowness.
And i dont give a damn about zero copy for a 400-1000 byte / sec codec! also so much for bad compression ...
I'm not worried about zero copy for such a low bitrate stream. My concern is that idiots could use this nasty header compression for 20mbit/sec h264 for no good reason, just because it "sounds good".
And if they did, what bad would happen? COW copying every 20th page? What overhead does that cause? Also, even if decoding such a file were 2% slower, it's one file which has been muxed by someone intentionally ignoring recommendations and defaults. Such a person will make many more such decisions and will likely just use WMV or something. I think the header elision would be your least concern with such a file.

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

I have never wished to cater to the crowd; for what I know they do not approve, and what they approve I do not know. -- Epicurus

On Tue, Feb 12, 2008 at 01:42:33PM +0100, Michael Niedermayer wrote:
On Tue, Feb 12, 2008 at 12:12:52AM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 05:51:44AM +0100, Michael Niedermayer wrote:
I'm absolutely against any proposal that precludes implementations from performing zero-copy decoding from the stream buffer.
Well if you have a buffer in which the frame is, just write the header before it. If not just write the header and then (f)read() the rest.
I'm envisioning a model where the demuxer directly exports frames from their original location in the stream buffer. It's likely that this buffer needs to be treated as read-only. It might be mmapped from a file (in which case writing would invoke COW)
COW would be invoked only if the kernel tries to keep the pages in some sort of cache, otherwise there is absolutely no need for COW.
Even if not a copy, there'd be a page fault, which is expensive in itself.
or the previous frame might not yet be released by the caller (in which case, writing the header would clobber the contents of the previous frame) -- imagine for instance that we need to decode the next audio frame to buffer it, but aren't ready to decode the previous video frame yet.
I wonder if anyone will ever implement such a mmap system, at least it will fail with files larger than 2gb on x86-linux and other sizes for other platforms.
Nope, you only map the section of the stream that's still in use. Normally this will be several megs at most.
Also such a system would need carefull use of madvise() to ensure the kernel doesnt read pages through (slow) page faults or uses random guessing of what pages can be discarded.
Indeed.
The reason I'm against this whole header compression thing is that it puts lots of nasty restraints on the demuxer implementation that preclude efficient design.
The header compression only puts restraints on a AFAIK hypothetic design not one actually existing? And its not just the demuxer but the whole surrounding API which would have to be designed for such mmap zero copy.
Precluding the optimal design is bad practice even if the optimal design has not yet been implemented anywhere. Also keep in mind that what I said does NOT only apply to the mmap design but to a nice clean read-based design with low copying that probably IS similar to various current implementations.
The nasty cases can be special-cased to minimize their cost, but the tradeoff is implementation size and complexity. The only options I'm willing to consider (and which I still dislike) are:
A. Only allow header elision (a more correct name than compression) for frames below a very small size limit (e.g. small enough that the compression makes at least a 1% difference to space requirements).
That would be a compromise though iam not happy about it at all, its another one of these idiotic philosophical restrictions. If the person muxing decides that 0.5% overhead is more important than 0.5% time spend in an HYPOHETICAL demuxer implementation
No, I already told you the issue here is not just time but buffer size. Requiring an unboundedly large buffer in addition to the main stream buffer is a potentially huge burden leading to all sorts of robustness issues. As long as the buffer size is bounded and small, no one really cares.
in his specific situation. Than this is his decission and i wonder from where you take the arogance to think you know better what is best for everyone else. And these are
It's NOT the business of the person making the file to put restrictions on the process reading the file. This has been one of my key issues for NUT since the beginning. Just because idiot_who_makes_file thinks the only purpose of the file is to watch it with player_x doesn't mean they're exempt from the features of NUT that make the file nicely usable for editing, optimal seeking, etc.
+Issue small-frames +------------------ +The original intent of nut frames was that 1 container frame == 1 codec +frame. Though this does not seem to be explicitly written in nut.txt.
It is.
I thought so too but i searched, and didnt find it so where is it?
It's at least in nut-english.txt :)
It should be in nut.txt as well ...
If it's not feel free to add it, but I think it belongs in the information about codecs, since the definition of a frame is pretty codec-specific. Expressing the abstract idea in the semantic requirements section of nut.txt would be nice though!
+Also it is inefficient for very small frames, AMR-NB for example has 6-32 +bytes per frame.
I don't think anyone really cares that much about efficiency when storing
I do, others likely care about 10% overhead as well.
Care about it for what applications? For storing a master file in some odd barely-compressed audio form, I wouldn't care. For streaming radio or movie rips, of course it would matter, but I think someone would choose an appropriate codec for such applications..
Which codec would that be? (we are talking about 5-10kbit/sec).
Vorbis.
shit codecs
I think this applies more to audio codes, if it limited to coding shit i wont bother and gladly leave it alone. ;)
in NUT. Obviously any good codec will use large frame sizes
no
Is there any audio codec of real interest with such small frame sizes?
AMR-NB has 6-32 byte frames see libavcodec/libamr.c qcelp has 4-35 byte frames see soc/qcelp/qcelp_parser.c
And is there any interest in these codecs aside from watching files generated by camera phones and transcoding them to something sane?
Iam not vetoing the decisson of 1frame=1frame for normal sized frames, which was what we decided upon. I dont think tiny frames have been considered. And we never planned to enforce 1frame=1frame for PCM. which is maybe the reason why its not in the spec? A carelessly written rule would have fatal consequences for PCM.
For PCM, there's no seeking issue because one always knows how to seek to an arbitrary sample in PCM 'frames'. In this case I would just recommend having a SHOULD clause that PCM frames SHOULD be the same or shorter duration than the typical frame durations for other streams in the file (as a way of keeping interleaving clean and aiding players that don't want to do sample-resolution seeking inside PCM frames).
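(A tiny sketch of the sample-resolution arithmetic, assuming interleaved integer PCM; nothing container-specific is involved.)

#include <stdint.h>
#include <stddef.h>

/* Byte offset of a given sample in an interleaved integer PCM stream: pure
 * arithmetic, so a player can land on any sample no matter where the
 * container placed the frame boundaries. */
static size_t pcm_byte_offset(uint64_t sample_index, unsigned channels,
                              unsigned bytes_per_sample)
{
    return (size_t)(sample_index * channels * bytes_per_sample);
}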
I'm not worried about zero copy for such a low bitrate stream. My concern is that idiots could use this nasty header compression for 20mbit/sec h264 for no good reason, just because it "sounds good".
And if they did, what bad would happen? COW copying every 20th page? what overhead does that cause? Also even if decoding such a file were 2% slower, its one file which
COW is not the worst-case. If you read my original post, the common and worst case is NOT necessarily mmap but rather when you can't clobber the source stream because the calling process still has a valid reference to previous frames in the stream which cannot be clobbered. This would require a memcpy of the ENTIRE FRAME which would easily make performance 10% slower or much worse. If you really want this header-elision, I'm willing to consider it for SMALL frames. But please don't flame about wanting to support it for large frames where it's absolutely useless and has lots of practical problems for efficient implementations! Rich

On Tue, Feb 12, 2008 at 01:32:00PM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 01:42:33PM +0100, Michael Niedermayer wrote:
On Tue, Feb 12, 2008 at 12:12:52AM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 05:51:44AM +0100, Michael Niedermayer wrote:
I'm absolutely against any proposal that precludes implementations from performing zero-copy decoding from the stream buffer.
Well if you have a buffer in which the frame is, just write the header before it. If not just write the header and then (f)read() the rest.
I'm envisioning a model where the demuxer directly exports frames from their original location in the stream buffer. It's likely that this buffer needs to be treated as read-only. It might be mmapped from a file (in which case writing would invoke COW)
COW would be invoked only if the kernel tries to keep the pages in some sort of cache, otherwise there is absolutely no need for COW.
Even if not a copy, there'd be a page fault, which is expensive in itself.
true
or the previous frame might not yet be released by the caller (in which case, writing the header would clobber the contents of the previous frame) -- imagine for instance that we need to decode the next audio frame to buffer it, but aren't ready to decode the previous video frame yet.
I wonder if anyone will ever implement such a mmap system, at least it will fail with files larger than 2gb on x86-linux and other sizes for other platforms.
Nope, you only map the section of the stream that's still in use. Normally this will be several megs at most.
Also such a system would need carefull use of madvise() to ensure the kernel doesnt read pages through (slow) page faults or uses random guessing of what pages can be discarded.
Indeed.
The reason I'm against this whole header compression thing is that it puts lots of nasty restraints on the demuxer implementation that preclude efficient design.
The header compression only puts restraints on a AFAIK hypothetic design not one actually existing? And its not just the demuxer but the whole surrounding API which would have to be designed for such mmap zero copy.
Precluding the optimal design is bad practice even if the optimal design has not yet been implemented anywhere.
Also keep in mind that what I said does NOT only apply to the mmap design but to a nice clean read-based design with low copying that probably IS similar to various current implementations.
I don't see how. It wouldn't affect libavformat, and it also wouldn't affect the simple:

    x = malloc(size + header);
    put header in x;
    fread() into x + header;
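(Spelled out as C, a sketch of that simple path; elision_header and elision_len stand for whatever bytes the framecode table entry would carry and are hypothetical names, not spec terms.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Allocate room for the elided header plus the stored payload, copy the
 * header bytes in front, then fread() the payload after them.  Returns the
 * complete codec frame or NULL on allocation/read failure. */
static uint8_t *read_frame(FILE *f, const uint8_t *elision_header,
                           size_t elision_len, size_t stored_size)
{
    uint8_t *frame = malloc(elision_len + stored_size);
    if (!frame)
        return NULL;
    memcpy(frame, elision_header, elision_len);            /* "put header in x" */
    if (fread(frame + elision_len, 1, stored_size, f) != stored_size) {
        free(frame);                                       /* truncated input */
        return NULL;
    }
    return frame;
}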
The nasty cases can be special-cased to minimize their cost, but the tradeoff is implementation size and complexity. The only options I'm willing to consider (and which I still dislike) are:
A. Only allow header elision (a more correct name than compression) for frames below a very small size limit (e.g. small enough that the compression makes at least a 1% difference to space requirements).
That would be a compromise though iam not happy about it at all, its another one of these idiotic philosophical restrictions. If the person muxing decides that 0.5% overhead is more important than 0.5% time spend in an HYPOHETICAL demuxer implementation
No, I already told you the issue here is not just time but buffer size. Requiring an unboundedly large buffer in addition to the main stream buffer is a potentially huge burden leading to all sorts of robustness issues. As long as the buffer size is bounded and small, no one really cares.
Seriously, I don't think that having 2 compressed frames instead of one in memory is a significant burden; the uncompressed ones are so much bigger ... But I am not that much interested in huge frames; a simple "header elision with frames > 1024 shall be considered an error" would probably be ok.
in his specific situation. Than this is his decission and i wonder from where you take the arogance to think you know better what is best for everyone else. And these are
It's NOT the business of the person making the file to put restrictions on the process reading the file. This has been one of my key issues for NUT since the beginning. Just because idiot_who_makes_file thinks the only purpose of the file is to watch it with player_x doesn't mean they're exempt from the features of NUT that make the file nicely usable for editing, optimal seeking, etc.
True if the world were just a bunch of warez kids leeching pr0n off BitTorrent. But let's take a big internet provider who wants to offer TV to their customers and for that purpose gives everyone a HW decoder (the PCs being too slow, maybe ...). Why should that ISP not be able to make the ideal decision during encoding for the decoders which he knows everyone will be using?
+Issue small-frames +------------------ +The original intent of nut frames was that 1 container frame == 1 codec +frame. Though this does not seem to be explicitly written in nut.txt.
It is.
I thought so too but i searched, and didnt find it so where is it?
It's at least in nut-english.txt :)
It should be in nut.txt as well ...
If it's not feel free to add it, but I think it belongs in the information about codecs, since the definition of a frame is pretty codec-specific. Expressing the abstract idea in the semantic requirements section of nut.txt would be nice though!
I will as soon as the RAW frame issue is decided, unless I forget, in which case please flame me!
+Also it is inefficient for very small frames, AMR-NB for example has 6-32 +bytes per frame.
I don't think anyone really cares that much about efficiency when storing
I do, others likely care about 10% overhead as well.
Care about it for what applications? For storing a master file in some odd barely-compressed audio form, I wouldn't care. For streaming radio or movie rips, of course it would matter, but I think someone would choose an appropriate codec for such applications..
Which codec would that be? (we are talking about 5-10kbit/sec).
Vorbis.
5kbit/sec ?
shit codecs
I think this applies more to audio codes, if it limited to coding shit i wont bother and gladly leave it alone. ;)
in NUT. Obviously any good codec will use large frame sizes
no
Is there any audio codec of real interest with such small frame sizes?
AMR-NB has 6-32 byte frames see libavcodec/libamr.c qcelp has 4-35 byte frames see soc/qcelp/qcelp_parser.c
And is there any interest in these codecs aside from watching files generated by camera phones and transcoding them to something sane?
There's no sense in transcoding the audio. It will just reduce the quality and very significantly increase the bitrate, because the codecs you call sane are VERY bad at such low bitrates.
Iam not vetoing the decisson of 1frame=1frame for normal sized frames, which was what we decided upon. I dont think tiny frames have been considered. And we never planned to enforce 1frame=1frame for PCM. which is maybe the reason why its not in the spec? A carelessly written rule would have fatal consequences for PCM.
For PCM, there's no seeking issue because one always knows how to seek to an arbitrary sample in PCM 'frames'. In this case I would just recommend having a SHOULD clause that PCM frames SHOULD be same or shorter duration than the typical frame durations for other streams in the file (as a way of keeping interleaving clean and aiding players that don't want to do sample-resolution seeking inside PCM frames).
Well ... I do have an AVI with all audio in a single chunk (something like 10 or 20 MB), do you want that .... in NUT? MPlayer's AVI demuxer can even seek in that AVI :) So honestly I don't think a SHOULD requirement alone would be a good idea.

[...]
If you really want this header-elision, I'm willing to consider it for SMALL frames. But please don't flame about wanting to support it for large frames where it's absolutely useless and has lots of practical problems for efficient implementations!
Fine, I'll add it for small frames only. Would a 1024-byte limit be ok?

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

While the State exists there can be no freedom; when there is freedom there will be no State. -- Vladimir Lenin

On Tue, Feb 12, 2008 at 08:24:21PM +0100, Michael Niedermayer wrote:
Also keep in mind that what I said does NOT only apply to the mmap design but to a nice clean read-based design with low copying that probably IS similar to various current implementations.
I dont see how It wouldnt affect libavformat, also it wouldnt affect the simple: x= malloc(size+header) put header in x fread into x+header
This design is highly inefficient for high bitrate streams. The efficient implementation is:

- fill large buffer directly with read()
- while (at least one whole frame in buffer) return pointer to the frame in-place in the buffer
- keep a lock on all sections of the buffer which the caller has not yet "released"; allow the initial segment of the buffer which has been released to be reused, copying any partial frame at the end of the buffer back to the beginning
- if there is insufficient released space at the beginning of the buffer, allocate a new buffer

Just because libavformat does not use the efficient implementation does not mean we should preclude it. Maybe in the future we'll even want to do that in lavf, but if not, still other implementations might. It should give at least a 5-10% performance boost on playback of high-bitrate video.
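(A rough sketch of such a buffer scheme, under simplifying assumptions: fixed-size buffer, frame size already known from the parsed frame header, and no fallback allocation of a second buffer.)

#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define BUF_SIZE (1 << 20)

struct stream_buf {
    uint8_t data[BUF_SIZE];
    size_t head;   /* start of the oldest frame the caller has not released */
    size_t pos;    /* start of the next frame to hand out */
    size_t end;    /* end of valid data read from the file */
    int fd;
};

/* Return a pointer to the next whole frame in-place in the buffer, or NULL.
 * The pointer stays valid until the caller releases it. */
static const uint8_t *next_frame(struct stream_buf *b, size_t frame_size)
{
    if (b->end - b->pos < frame_size) {
        if (b->head == b->pos) {
            /* everything handed out so far has been released: copy the
             * partial frame at the end of the buffer back to the beginning */
            memmove(b->data, b->data + b->pos, b->end - b->pos);
            b->end -= b->pos;
            b->head = b->pos = 0;
        }
        ssize_t n = read(b->fd, b->data + b->end, BUF_SIZE - b->end);
        if (n > 0)
            b->end += (size_t)n;
        if (b->end - b->pos < frame_size)
            return NULL;   /* a real implementation would allocate a new
                              buffer here rather than giving up */
    }
    const uint8_t *frame = b->data + b->pos;
    b->pos += frame_size;
    return frame;
}

/* The caller is done with everything before 'offset'; that space may be reused. */
static void release_up_to(struct stream_buf *b, size_t offset)
{
    b->head = offset;
}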
Seriously, i dont think that having 2 compressed frames instead of one in memory is a significant burden, the uncomperssed ones are so much bigger ...
Think of a possible hardware implementation where the uncompressed frame is only ever seen by the decoder chip. Then the demuxer buffers might be rather small. Obviously this would put limits on the _codec_ parameters, which the device manufacturer would publish as the supported profile, but it would be rather bad to also have limits on the container parameters. Remember, we limited framecodes to a single byte for a very good reason -- the possibility of small hardware implementations. If a device supports codec X, it should support ANY valid NUT file containing media encoded with codec X. The NUT parameters should not preclude support for the file.
But iam not that much interrested in huge frames, a simple "header elision with frames > 1024 shall be considered an error" would probably be ok.
That would make me happy. I would even be happy allowing sizes up to 4096 or so if you think it would help.
It's NOT the business of the person making the file to put restrictions on the process reading the file. This has been one of my key issues for NUT since the beginning. Just because idiot_who_makes_file thinks the only purpose of the file is to watch it with player_x doesn't mean they're exempt from the features of NUT that make the file nicely usable for editing, optimal seeking, etc.
True if the world was just a bunch of warez kids leeching pr0n of bittorrent.
But lets take a big internet provider who wants to offer TV to their customers and for that purpose gives everyone a HW decoder (the PCs being to slow maybe ...) Why should that ISP not be able to make the ideal decission during encoding for the decoders which he know everyone will be using?
But that's the thing -- he does not know what everyone will be using. He knows what he WANTS everyone to be using, but then hackers like us go and save the streams and want to use them directly for reencoding, archiving, warezing, etc.

Look how nasty the situation now is with DVDs and digital TV. The content producers just assume everyone is using a crappy TV to watch their content and not doing anything else with it. They fill the video with mixed telecine, incorrect interlacing flags, etc. And in the end, everyone loses. It's not just the people who want to rip/save the streams to clean files to watch on a computer, but also the people with nice high-resolution TVs that need to present a progressive picture. Due to the crap the content producers deliver, folks always end up with flicker and aliasing.

Obviously NUT can't solve these video processing issues. But we can stick to the philosophy that just because the content producer is too short-sighted to envision everything the content recipient might want to do with the content, that's not an excuse for delivering something broken with artificial limitations.
It should be in nut.txt as well ...
If it's not feel free to add it, but I think it belongs in the information about codecs, since the definition of a frame is pretty codec-specific. Expressing the abstract idea in the semantic requirements section of nut.txt would be nice though!
I will as soon as the RAW frame issue is decided, unless i forget, in which case please flame me!
PCM you mean? For video, RAW frames are quite obviously single pictures. :) Yes, that means even if the resolution is 1x1 pixel. :) :) :) For audio, I would be happy with a clean SHOULD like I suggested before, but even more happy if you have a clean technical requirement to ensure that frames not be too big without resorting to physical units (limit on number of samples would be okay, but I don't particularly like limit on number of seconds since it's not scale-invariant).
Which codec would that be? (we are talking about 5-10kbit/sec).
Vorbis.
5kbit/sec ?
I've gotten good results at 24kbit/sec with 32000 Hz sampling, mono. If we drop the sampling rate, I think 8-10kbit/sec is very realistic. I don't know about 5... would need to run some experiments. If the bitstream were optimized heavily to kill the overhead, it might work very well, while technically no longer being "vorbis".
AMR-NB has 6-32 byte frames see libavcodec/libamr.c qcelp has 4-35 byte frames see soc/qcelp/qcelp_parser.c
And is there any interest in these codecs aside from watching files generated by camera phones and transcoding them to something sane?
Theres no sense in transcoding the audio. It just will reduce the quality and very significantly increate the bitrate because the codecs you call sane are VERY bad at such low bitrate.
Well, until there's a free decoder there's a lot of interest in transcoding them, but maybe the free decoder will be done and ready for merge soon? :)
For PCM, there's no seeking issue because one always knows how to seek to an arbitrary sample in PCM 'frames'. In this case I would just recommend having a SHOULD clause that PCM frames SHOULD be same or shorter duration than the typical frame durations for other streams in the file (as a way of keeping interleaving clean and aiding players that don't want to do sample-resolution seeking inside PCM frames).
Well ... I do have a AVI with all audio in a single chunk (something like 10 or 20mb), do you want that .... in nut? mplayers avi demuxer can even seek in that avi :)
so honestly i dont think a should requirement alone would be a good idea.
Of course I don't want that in NUT. I suppose we should have a nice strong technical requirement to prevent it. How about just a limit of 4096 samples per frame? If one uses the maximum allowed frame size, then the overhead would be trivial (1 byte per 4096 bytes in the worst case, i.e. 0.025%, and much less with 16bit/stereo/etc.).
If you really want this header-elision, I'm willing to consider it for SMALL frames. But please don't flame about wanting to support it for large frames where it's absolutely useless and has lots of practical problems for efficient implementations!
Fine ill add it for small frames only. Would a 1024 byte limit be ok?
Yeah. As I said above, even a larger limit, maybe up to 4096, would be fine with me. The limit should be on the _total_ frame size, BTW, not the size with the elided header, so that the "extra buffer" needed for reassembly has a fixed max size. BTW, note that this also provides a cheap way of compressing silence with PCM audio: have a framecode for "all zero frame" and then use it to encode the whole frame. :) The same could potentially work with other codecs too. Rich

On Tue, Feb 12, 2008 at 08:37:21PM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 08:24:21PM +0100, Michael Niedermayer wrote:
Also keep in mind that what I said does NOT only apply to the mmap design but to a nice clean read-based design with low copying that probably IS similar to various current implementations.
I dont see how It wouldnt affect libavformat, also it wouldnt affect the simple: x= malloc(size+header) put header in x fread into x+header
This design is highly inefficient for high bitrate streams. The efficient implementation is:
And why exactly is the above inefficient? The kernel internally has its own cache and prereads an appropriate amount, at least it should. Duplicating the kernel cache seems rather like the inefficient one. Also it's much more complex ... Also, do you have some benchmarks? I am curious what the real difference is.

[...]
But iam not that much interrested in huge frames, a simple "header elision with frames > 1024 shall be considered an error" would probably be ok.
That would make me happy. I would even be happy allowing sizes up to 4096 or so if you think it would help.
I think it would, especially with MPEG headers ... you know, 00 00 01 blah startcode shit ;) Also see the svn log, I've just added elision headers, comments welcome ... Also, time for NUT to become a negative-overhead container :)

[...]
It should be in nut.txt as well ...
If it's not feel free to add it, but I think it belongs in the information about codecs, since the definition of a frame is pretty codec-specific. Expressing the abstract idea in the semantic requirements section of nut.txt would be nice though!
I will as soon as the RAW frame issue is decided, unless i forget, in which case please flame me!
PCM you mean?
yes [...]
AMR-NB has 6-32 byte frames see libavcodec/libamr.c qcelp has 4-35 byte frames see soc/qcelp/qcelp_parser.c
And is there any interest in these codecs aside from watching files generated by camera phones and transcoding them to something sane?
Theres no sense in transcoding the audio. It just will reduce the quality and very significantly increate the bitrate because the codecs you call sane are VERY bad at such low bitrate.
Well, until there's a free decoder there's a lot of interest in transcoding them, but maybe the free decoder will be done and ready for merge soon? :)
Well, there are AMR and QCELP decoders in SoC SVN; help finishing them is surely welcome ...
For PCM, there's no seeking issue because one always knows how to seek to an arbitrary sample in PCM 'frames'. In this case I would just recommend having a SHOULD clause that PCM frames SHOULD be same or shorter duration than the typical frame durations for other streams in the file (as a way of keeping interleaving clean and aiding players that don't want to do sample-resolution seeking inside PCM frames).
Well ... I do have a AVI with all audio in a single chunk (something like 10 or 20mb), do you want that .... in nut? mplayers avi demuxer can even seek in that avi :)
so honestly i dont think a should requirement alone would be a good idea.
Of course I don't want that in NUT. I suppose we should have a nice strong technical requirement to prevent it. How about just a limit to 4096 samples per frame? If one uses the maximum allowed frame size then, the overhead would be trivial (1 byte per 4096 bytes in the worst case, i.e. 0.025%, and much less with 16bit/stereo/etc.).
OK, I'll add a 4096-sample limit.

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Why not whip the teacher when the pupil misbehaves? -- Diogenes of Sinope

On Wed, Feb 13, 2008 at 03:54:44PM +0100, Michael Niedermayer wrote:
On Tue, Feb 12, 2008 at 08:37:21PM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 08:24:21PM +0100, Michael Niedermayer wrote:
Also keep in mind that what I said does NOT only apply to the mmap design but to a nice clean read-based design with low copying that probably IS similar to various current implementations.
I dont see how It wouldnt affect libavformat, also it wouldnt affect the simple: x= malloc(size+header) put header in x fread into x+header
This design is highly inefficient for high bitrate streams. The efficient implementation is:
And why exactly is above inefficient? The kernel internally has its own cache and prereads an appropriate amount, at least it should.
If it prereads, then there's an extra memcpy from the kernel buffer to the userspace buffer. However, preread and buffering can be disabled by posix_fadvise() on systems that fully support it. Then, the only activity on read() will be a single read from the disk directly to userspace buffers, or a single memcpy if some other process had already caused the file to be cached (but never a read into one buffer followed by a memcpy into another). Also, keep in mind not every platform/device will even have a kernel/user split. Data might directly move from the media reader device into a buffer that the demuxer has direct access to.
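(For reference, a minimal sketch of the hints being discussed; this is an illustration of posix_fadvise() usage, not code from any existing demuxer, and error handling is omitted.)

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <unistd.h>

static int open_stream(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    /* the stream will be read once, front to back */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    return fd;
}

static void release_consumed(int fd, off_t start, off_t len)
{
    /* frames up to start+len have been decoded and released;
     * the kernel may drop those pages from its cache */
    posix_fadvise(fd, start, len, POSIX_FADV_DONTNEED);
}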
Duplicating the kernel cache seems rather the inefficient one. Also its much more compelx ...
Mildly, but keep in mind that you can't _just_ use the kernel cache. Your example involved using the libc's stdio buffering too; otherwise the small reads for processing the container data would be incredibly expensive. On the old glibc I used to use, I measured fread performance being atrociously bad, always going through the stdio buffer and therefore double-copying. I don't know if it's as bad anymore, but my libc's implementation outperformed (old) glibc by several times on a loop-and-read-file benchmark with certain read-chunk sizes and fread(). I suspect this sort of badness is more the norm than the exception.
Also do you have some benchmarks, iam curious what the real difference is.
I suppose I could construct one. It might also be documented somewhere in the Austin Group/Open Group notes on why posix_fadvise and posix_madvise were adopted...
But iam not that much interrested in huge frames, a simple "header elision with frames > 1024 shall be considered an error" would probably be ok.
That would make me happy. I would even be happy allowing sizes up to 4096 or so if you think it would help.
I think it would, especially with mpeg headers ... you know 00 00 01 blah startcode shit ;) Also see svn log, ive just added elision headers, comments welcome ... Also time for nut to become a negative overhead container :)
=) =) =)
For PCM, there's no seeking issue because one always knows how to seek to an arbitrary sample in PCM 'frames'. In this case I would just recommend having a SHOULD clause that PCM frames SHOULD be same or shorter duration than the typical frame durations for other streams in the file (as a way of keeping interleaving clean and aiding players that don't want to do sample-resolution seeking inside PCM frames).
Well ... I do have a AVI with all audio in a single chunk (something like 10 or 20mb), do you want that .... in nut? mplayers avi demuxer can even seek in that avi :)
so honestly i dont think a should requirement alone would be a good idea.
Of course I don't want that in NUT. I suppose we should have a nice strong technical requirement to prevent it. How about just a limit to 4096 samples per frame? If one uses the maximum allowed frame size then, the overhead would be trivial (1 byte per 4096 bytes in the worst case, i.e. 0.025%, and much less with 16bit/stereo/etc.).
ok, ill add a 4096 sample limit
Sounds good. And for uncompressed video a frame must be exactly one picture regardless of size. Is that okay? Rich

On Mon, Feb 11, 2008 at 10:41:51PM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 12:33:22AM +0100, michael wrote:
Log: 3 more issues which have come up in the past but have IIRC never been resolved.
--- docs/nutissues.txt (original) +++ docs/nutissues.txt Tue Feb 12 00:33:21 2008 +Issue small-frames +------------------ +The original intent of nut frames was that 1 container frame == 1 codec +frame. Though this does not seem to be explicitly written in nut.txt.
It is.
+Also it is inefficient for very small frames, AMR-NB for example has 6-32 +bytes per frame.
I don't think anyone really cares that much about efficiency when storing shit codecs in NUT. Obviously any good codec will use large frame sizes or compression will not be good.
Somebody correct me if I am wrong, but I think AMR currently is the best quality speech codec... Diego

On Tue, Feb 12, 2008 at 09:01:04AM +0100, Diego Biurrun wrote:
+Also it is inefficient for very small frames, AMR-NB for example has 6-32 +bytes per frame.
I don't think anyone really cares that much about efficiency when storing shit codecs in NUT. Obviously any good codec will use large frame sizes or compression will not be good.
Somebody correct me if I am wrong, but I think AMR currently is the best quality speech codec...
No, PCM is the best quality speech codec. :) :) :) AMR is good for shit-quality applications like telephone, but not for anything where you want the audio to actually be recognizable. Rich

On Tue, Feb 12, 2008 at 01:17:50PM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 09:01:04AM +0100, Diego Biurrun wrote:
+Also it is inefficient for very small frames, AMR-NB for example has 6-32 +bytes per frame.
I don't think anyone really cares that much about efficiency when storing shit codecs in NUT. Obviously any good codec will use large frame sizes or compression will not be good.
Somebody correct me if I am wrong, but I think AMR currently is the best quality speech codec...
No, PCM is the best quality speech codec. :) :) :)
nonsense, live speech is much higher quality. ;)))
AMR is good for shit-quality applications like telephone, but not for anything where you want the audio to actually be recognizable.
Have you ever tested AMR and QCELP, or are you just guessing?

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Asymptotically faster algorithms should always be preferred if you have asymptotical amounts of data

On Tue, Feb 12, 2008 at 08:26:58PM +0100, Michael Niedermayer wrote:
On Tue, Feb 12, 2008 at 01:17:50PM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 09:01:04AM +0100, Diego Biurrun wrote:
+Also it is inefficient for very small frames, AMR-NB for example has 6-32 +bytes per frame.
I don't think anyone really cares that much about efficiency when storing shit codecs in NUT. Obviously any good codec will use large frame sizes or compression will not be good.
Somebody correct me if I am wrong, but I think AMR currently is the best quality speech codec...
No, PCM is the best quality speech codec. :) :) :)
nonsense, live speech is much higher quality. ;)))
Please inform me where I can download this "live speech" codec! It sounds amazing!!
AMR is good for shit-quality applications like telephone, but not for anything where you want the audio to actually be recognizable.
have you ever tested AMR and qcelp or are you just quessing?
I've listened to music over cellphone. :-P Rich

On Mon, 11 Feb 2008 22:41:51 -0500 Rich Felker <dalias@aerifal.cx> wrote:
I don't think anyone really cares that much about efficiency when storing shit codecs in NUT. Obviously any good codec will use large frame sizes or compression will not be good.
Ever heard of VoIP? In such applications it is essential to have as little latency as possible. Do the maths and you will see that you need a codec frame size between 10 and 30 ms to get a decent experience. At 8 kHz that's just a couple hundred samples.

Albeu

PS: I don't dispute the fact that putting such codecs in NUT doesn't make much sense, just that these codecs are not the useless shit you seem to imply.
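(Worked out: a 20 ms frame at 8000 Hz is 8000 * 0.02 = 160 samples and a 30 ms frame is 240, which matches AMR-NB's fixed 160-sample, 20 ms frames.)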

On Tue, Feb 12, 2008 at 03:24:34PM +0100, Alban Bedel wrote:
On Mon, 11 Feb 2008 22:41:51 -0500 Rich Felker <dalias@aerifal.cx> wrote:
I don't think anyone really cares that much about efficiency when storing shit codecs in NUT. Obviously any good codec will use large frame sizes or compression will not be good.
Ever heard of VoIP? In such applications it is essential to have as little latency as possible. Do the maths and you will see that you need a codec frame size between 10 and 30ms to get a decent experience. At 8KHz that's just a couple 100 samples.
At 8khz you might as well send the contents of /dev/urandom instead of a signal...
PS: I don't dispute the fact that putting such codec in nut doesn't make much sense, just that these codec are not useless shit as you seems to imply.
=P Rich

Rich Felker <dalias@aerifal.cx> writes:
On Tue, Feb 12, 2008 at 03:24:34PM +0100, Alban Bedel wrote:
On Mon, 11 Feb 2008 22:41:51 -0500 Rich Felker <dalias@aerifal.cx> wrote:
I don't think anyone really cares that much about efficiency when storing shit codecs in NUT. Obviously any good codec will use large frame sizes or compression will not be good.
Ever heard of VoIP? In such applications it is essential to have as little latency as possible. Do the maths and you will see that you need a codec frame size between 10 and 30ms to get a decent experience. At 8KHz that's just a couple 100 samples.
At 8khz you might as well send the contents of /dev/urandom instead of a signal...
Please stop this nonsense. Everybody who has ever used a telephone can certify that 8kHz is perfectly adequate for plain speech. Perhaps someone could let Rich make a test call on their phone so he can see for himself. -- Måns Rullgård mans@mansr.com

On Tue, Feb 12, 2008 at 06:42:30PM +0000, Måns Rullgård wrote:
Rich Felker <dalias@aerifal.cx> writes:
On Tue, Feb 12, 2008 at 03:24:34PM +0100, Alban Bedel wrote:
On Mon, 11 Feb 2008 22:41:51 -0500 Rich Felker <dalias@aerifal.cx> wrote:
I don't think anyone really cares that much about efficiency when storing shit codecs in NUT. Obviously any good codec will use large frame sizes or compression will not be good.
Ever heard of VoIP? In such applications it is essential to have as little latency as possible. Do the maths and you will see that you need a codec frame size between 10 and 30ms to get a decent experience. At 8KHz that's just a couple 100 samples.
At 8khz you might as well send the contents of /dev/urandom instead of a signal...
Please stop this nonsense. Everybody who has ever used a telephone can certify that 8kHz is perfectly adequate for plain speech.
Perhaps someone could let Rich make a test call on their phone so he can see for himself.
Sorry, I should have appended a smiley or something. But seriously, I've seen much better quality at higher samplerates using more general-purpose codecs (Vorbis) with similar or marginally higher bitrates. I suspect that the whole "speech compression" field is just an industry patent game rather than something really useful. Besides, there are plenty of times I'm talking on the phone and want someone to hear a non-vocal sound such as music that's playing or a bad sound coming from a broken machine, etc. Rich

On Tue, Feb 12, 2008 at 01:32:55PM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 03:24:34PM +0100, Alban Bedel wrote:
On Mon, 11 Feb 2008 22:41:51 -0500 Rich Felker <dalias@aerifal.cx> wrote:
I don't think anyone really cares that much about efficiency when storing shit codecs in NUT. Obviously any good codec will use large frame sizes or compression will not be good.
Ever heard of VoIP? In such applications it is essential to have as little latency as possible. Do the maths and you will see that you need a codec frame size between 10 and 30ms to get a decent experience. At 8KHz that's just a couple 100 samples.
At 8khz you might as well send the contents of /dev/urandom instead of a signal...
That's one way to save on phone costs ...

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Many that live deserve death. And some that die deserve life. Can you give it to them? Then do not be too eager to deal out death in judgement. For even the very wise cannot see all ends. -- Gandalf

On Tue, Feb 12, 2008 at 08:35:04PM +0100, Michael Niedermayer wrote:
On Tue, Feb 12, 2008 at 01:32:55PM -0500, Rich Felker wrote:
On Tue, Feb 12, 2008 at 03:24:34PM +0100, Alban Bedel wrote:
On Mon, 11 Feb 2008 22:41:51 -0500 Rich Felker <dalias@aerifal.cx> wrote:
I don't think anyone really cares that much about efficiency when storing shit codecs in NUT. Obviously any good codec will use large frame sizes or compression will not be good.
Ever heard of VoIP? In such applications it is essential to have as little latency as possible. Do the maths and you will see that you need a codec frame size between 10 and 30ms to get a decent experience. At 8KHz that's just a couple 100 samples.
At 8khz you might as well send the contents of /dev/urandom instead of a signal...
Thats one way to safe phone costs ...
With the crap most people talk about on their cellphones, I wonder if it might convey more useful information too... :) :) Rich
Participants (6): Alban Bedel, Diego Biurrun, michael, Michael Niedermayer, Måns Rullgård, Rich Felker