
Michael Niedermayer <michaelni@gmx.at> writes:
On Tue, Feb 12, 2008 at 07:37:53PM +0100, Alban Bedel wrote:
On Tue, 12 Feb 2008 17:57:03 +0100 Michael Niedermayer <michaelni@gmx.at> wrote:
On Tue, Feb 12, 2008 at 05:47:13PM +0100, Alban Bedel wrote:
On Tue, 12 Feb 2008 16:00:10 +0100 (CET) michael <subversion@mplayerhq.hu> wrote:
Modified: docs/nutissues.txt
==============================================================================
--- docs/nutissues.txt  (original)
+++ docs/nutissues.txt  Tue Feb 12 16:00:09 2008
@@ -162,3 +162,8 @@
How do we identify the interleaving
A. fourcc
B. extradata
I would vote for this, with a single fourcc for PCM and a single fourcc for raw video. Having information about the data format packed into the fourcc is ugly and useless; it just leads to inflexible lookup tables and the like.
Instead we should just define the format in a way similar to what mp_image provides for video (colorspace, packed or not, shift used for the subsampled planes, etc.). That would allow implementations to simply support all definable formats, instead of a selection of whatever happened to be the commonly used formats at the time the implementation was written.
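(For illustration only, a minimal C sketch of what such a self-describing record could look like, loosely modeled on mp_image; every name here is hypothetical, nothing from the NUT spec:)

#include <stdint.h>

/* Hypothetical self-describing raw video format, loosely modeled
 * on mp_image; illustrative only, not part of any spec. */
typedef struct RawVideoFormat {
    uint8_t num_planes;         /* 1 = packed, >1 = planar          */
    uint8_t bits_per_component; /* e.g. 8, 10, 16                   */
    uint8_t chroma_shift_x;     /* horizontal subsampling shift     */
    uint8_t chroma_shift_y;     /* vertical subsampling shift       */
    uint8_t component_order[4]; /* storage order of Y/U/V/A         */
    uint8_t big_endian;         /* byte order for >8-bit components */
} RawVideoFormat;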
The key points here are that
* colorspace/shift for subsampled planes, etc. is not specific to RAW; it's more like sample_rate or width/height
Sure, but when a "real" codec is used, it's the decoder's business to tell the app what output format it will use. NUT can provide information about the internal format used by the codec,
Only very few codecs have headers which store information about things like the shift for subsampled planes. Thus if this information is desired, it has to come from the container more often than not. If it's not desired, then we also don't need it for raw, IMHO.
With compressed video, the decoder informs the caller of the pixel format. With raw video, this information must come from the container, one way or another.
which would help when dealing with decoders that include slow colorspace conversions.
I have no interest in supporting or helping this case, and I suspect I am not alone here.
But that's definitely non-essential information; any player should be able to do without it.
However, for RAW data the "decoder" needs to know the exact format used, just like some other decoders need Huffman tables or whatever. And the logical place for such information is the extradata, IMHO.
See above. Also, there really are 2 things:
1. How things are stored (packed vs. planar, the precise byte packing, ...)
2. What is stored (colorspace details YUV BT123 vs BT567, chroma shift, channel positions)
1. defines the format, that is, the packing of raw bytes; this is somewhat similar to mpeg4 vs. h261, thus I think it should be specified by the fourcc.
2. is needed for non-raw as well, which makes fourcc and extradata unusable.
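(As a hedged illustration of that split, with made-up names and tags: point 1 would be keyed by the fourcc alone, while point 2 travels in ordinary stream-level fields that a coded stream needs just as much as a raw one:)

#include <stdint.h>

#define MKTAG(a, b, c, d) \
    ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
     ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

enum RawPacking { PACKING_UNKNOWN, PACKING_PACKED, PACKING_PLANAR };

struct StreamColorInfo {     /* point 2: raw and coded streams alike */
    int color_primaries;     /* some BT.xxx identifier               */
    int chroma_shift_x, chroma_shift_y;
};

/* Point 1: the fourcc alone selects the byte packing. The tags
 * below are invented for this example. */
static enum RawPacking packing_from_fourcc(uint32_t fourcc)
{
    switch (fourcc) {
    case MKTAG('P','L','N','R'): return PACKING_PLANAR;
    case MKTAG('P','C','K','D'): return PACKING_PACKED;
    default:                     return PACKING_UNKNOWN;
    }
}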
The colourspace and whatnot are only needed if the compressed data is actually decoded, and in this case the decoder should be extracting this information from whatever headers the format uses.
On a related subject, it might also be useful to define the channel disposition when there is more than one. Mono and stereo can get by with the classical default, but as soon as there are more channels it is really unclear. And IMHO such info could still be useful with 1 or 2 channels. Something like the position of each channel in polar coordinates (2D or 3D?) should be enough.
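(For reference, a sketch of how a 3D polar position would map to the Cartesian x/y/z proposed later in this thread; the axis convention, x right / y front / z up, is an assumption, not anything agreed on:)

#include <math.h>

/* Map a polar/spherical channel position (distance r, azimuth az,
 * elevation el, angles in radians) to Cartesian coordinates.
 * Assumed convention: x right, y front, z up, azimuth measured
 * clockwise from straight ahead. */
static void polar_to_xyz(double r, double az, double el,
                         double *x, double *y, double *z)
{
    *x = r * cos(el) * sin(az);
    *y = r * cos(el) * cos(az);
    *z = r * sin(el);
}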
I agree. What about that LFE channel thing?
I was thinking about simply setting the distance to 0; however, a flag for "non-directional" channels might be better.
This is wrong; LFE is not about direction but about the type of speaker. LFE stands for "low-frequency effects". If I moved some other random speaker to distance 0 and the LFE one out, and switched the channels, it would not sound correct ...
And where do we put this info? The stream header seems the logical place, if you ask me ...
I agree; this is essential information for proper presentation, so it definitely belongs there.
Good, now we just need to agree on some half-sane way to store it.

for(i=0; i<num_channels; i++){
    x_position      s
    y_position      s
    z_position      s
    channel_flags   v
}
CHANNEL_FLAG_LFE 1
seems ok?
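(A minimal C sketch of reading that record, assuming get_v()/get_s() readers for NUT's unsigned and signed vlc types; the context type and these signatures are assumptions for illustration:)

#include <stdint.h>

/* Assumed readers for NUT's vlc-coded types: "v" is an unsigned
 * variable-length value and "s" its signed variant. */
struct ByteStream;
uint64_t get_v(struct ByteStream *bs);
int64_t  get_s(struct ByteStream *bs);

#define CHANNEL_FLAG_LFE 1

struct ChannelPos {
    int64_t  x, y, z;   /* position relative to the listener */
    uint64_t flags;     /* CHANNEL_FLAG_LFE etc.             */
};

/* Read num_channels records laid out exactly as proposed above. */
static void read_channel_positions(struct ByteStream *bs,
                                   struct ChannelPos *ch,
                                   int num_channels)
{
    for (int i = 0; i < num_channels; i++) {
        ch[i].x     = get_s(bs);
        ch[i].y     = get_s(bs);
        ch[i].z     = get_s(bs);
        ch[i].flags = get_v(bs);
    }
}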
I'm not convinced this is the right way to go. Consider a recording made with several directional microphones in the same location. Using spherical coordinates could be a solution.

Whatever the coordinate system, the location and orientation of the listener must be specified, even if there is only one logical choice.

--
Måns Rullgård
mans@mansr.com