
A few things I came across reading it now: Keyframes - update definition from Michael? Representation of Time - "single contiguous interval of time" fails to mention EOR. It's mentioned later though. Should this be improved? Also it might be worth mentioning that multiple 'tracks' could be stored as chapters using info packets to label them, but if so they obviously need to be timed one after another rather than all starting at time=0. Rich

Hi On Sun, Feb 03, 2008 at 03:27:54PM -0500, Rich Felker wrote: [...]
Also it might be worth mentioning that multiple 'tracks' could be stored as chapters using info packets to label them, but if so they obviously need to be timed one after another rather than all starting at time=0.
Different content (compared to different viewpoints/encodings) of something should not overlap timewise to begin with. If that's unclear it should be clarified! (I assume you are thinking of recording several radio stations or something like that; IMHO that should be done to separate files, or the result should be remuxed at the end.) It would catastrophically break seeking with back_ptrs and violate the overlapping-chapters restriction for info packets. It would make generic info covering subsets of streams necessary (which you insisted be removed). And it would significantly increase the bandwidth needed for reading nut files, as all but one interleaved track would be discarded. Anyway, I still believe that info packets covering subsets of streams should be added back. If this is not done, people will just invent ad hoc solutions; this is the third time now that such info would be useful. First, some streams might simply share info. Second, attachments applying to several streams. Third is now tracks starting from 0, where a track might be a video+audio stream of a music video, for example. [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB It is dangerous to be right in matters on which the established authorities are wrong. -- Voltaire

On Sun, Feb 03, 2008 at 10:10:40PM +0100, Michael Niedermayer wrote:
Different content (compared to different viewpoints/encodings) of something should not overlap timewise to begin with. If that's unclear it should be
Obviously. I think this is reflected in the document; if not, please feel free to clarify it.
clarified! (I assume you are thinking of recording several radio stations or something like that; IMHO that should be done to separate files, or the result should be remuxed at the end)
I was thinking of storing a whole CD or VCD in one NUT file with chapters for each track, which is valid but not necessarily a good idea depending on the intended use. Rich

Michael Niedermayer <michaelni@gmx.at> writes:
Different content (compared to different viewpoints/encodings) of something should not overlap timewise to begin with. If that's unclear it should be clarified! (I assume you are thinking of recording several radio stations or something like that; IMHO that should be done to separate files, or the result should be remuxed at the end)
I guess this means NUT is not intended to be used in typical broadcast environments where each channel has far more bandwidth than a single program requires. This is why MPEG-TS supports multiple independent programs. -- Måns Rullgård mans@mansr.com

On Sun, Feb 03, 2008 at 09:47:24PM +0000, Måns Rullgård wrote:
I guess this means NUT is not intended to be used in typical broadcast environments where each channel has far more bandwidth than a single program requires. This is why MPEG-TS supports multiple independent programs.
You could use nut for that, but a few things won't work. Info packets aren't capable of describing such mixed content very well, because there's no way to have an info packet apply to a program; you would have to duplicate the info packet for each stream in a program. Chapters (which would differ between programs) can't overlap timewise. Seeking with back_ptrs is at the mercy of having frequent keyframes in all streams. I assume this would be true for your broadcast scenario; still, it's not optimal, but also not worse than mpeg-ts. Possible changes to nut which would improve this would be: * returning to the original info packets as I designed them, to store generic information (this would fix the first 2 issues) * multiple back_ptrs for such multi-program/track files/streams; that was suggested by Rich and would allow seeking within stored broadcasts even if some streams had only rare keyframes. [...] -- Michael
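The info-packet duplication Michael describes can be sketched in miniature. This is illustrative Python, not the NUT wire format; the dict-based "info packets" and the program map are invented stand-ins for the binary structures under discussion:

```python
# Illustrative only: NUT's real info packets are a binary format; these
# dicts are a stand-in to show the duplication being described.

def per_stream_info(programs, metadata):
    """Without program-scoped info, the same metadata must be emitted
    once per stream of each program."""
    packets = []
    for prog_id, stream_ids in programs.items():
        for sid in stream_ids:
            packets.append({"stream_id": sid, **metadata[prog_id]})
    return packets

def program_scoped_info(programs, metadata):
    """With a hypothetical program-scoped info packet, one packet
    covers all streams of a program."""
    return [{"stream_ids": sids, **metadata[pid]}
            for pid, sids in programs.items()]

programs = {0: [0, 1, 2], 1: [3, 4]}   # e.g. video + audio streams
metadata = {0: {"title": "News"}, 1: {"title": "Music"}}

assert len(per_stream_info(programs, metadata)) == 5      # one per stream
assert len(program_scoped_info(programs, metadata)) == 2  # one per program
```

The point of the sketch is only the packet count: five duplicated packets versus two program-scoped ones for the same metadata.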

On Mon, Feb 04, 2008 at 12:03:23AM +0100, Michael Niedermayer wrote:
You could use nut for that, but a few things won't work.
Info packets aren't capable of describing such mixed content very well, because there's no way to have an info packet apply to a program. You would have to duplicate the info packet for each stream in a program.
Chapters (which would differ between programs) can't overlap timewise.
Seeking with back_ptrs is at the mercy of having frequent keyframes in all streams. I assume this would be true for your broadcast scenario; still, it's not optimal, but also not worse than mpeg-ts.
Possible changes to nut which would improve this would be: * returning to the original info packets as I designed them, to store generic information (this would fix the first 2 issues) * multiple back_ptrs for such multi-program/track files/streams
I don't consider this an improvement. Muxing unrelated content is a HARMFUL activity (it makes the file more resource-intensive to process with no gain) and outside the scope of NUT. The whole purpose of interleaved muxing is to be able to retrieve and synchronize information that is meant for simultaneous processing and presentation; it's not just something pointlessly done for its own sake. I'm against any features that have no use except for helping people do stupid things like this.
that was suggested by Rich and would allow seeking within stored broadcasts even if some streams had only rare keyframes.
While I did at one point suggest multiple back_ptrs, it was never intended for such stupid use. We later discovered that we could get adequate results with just a single back_ptr, so I dropped the recommendation. Rich
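For readers unfamiliar with the mechanism being argued over, back_ptr seeking can be sketched roughly as follows. This is a simplified Python model; the tuple-based syncpoint list is an invented stand-in for the actual NUT syncpoint structure:

```python
# Simplified model of back_ptr seeking: each syncpoint records its file
# position, its timestamp, and back_ptr, the position to rewind to so
# that every stream has seen a keyframe by this point. Illustration of
# the mechanism only, not the NUT wire format.

from bisect import bisect_right

def seek(syncpoints, target_ts):
    """Return the file position to start demuxing from so that every
    stream is decodable at target_ts."""
    # syncpoints: sorted list of (position, timestamp, back_ptr)
    timestamps = [ts for _, ts, _ in syncpoints]
    i = bisect_right(timestamps, target_ts) - 1
    if i < 0:
        return syncpoints[0][0]   # target precedes the first syncpoint
    return syncpoints[i][2]       # follow the back_ptr

# One stream has rare keyframes, so back_ptrs reach far back:
sps = [(100, 0.0, 100), (200, 1.0, 100), (300, 2.0, 100), (400, 3.0, 400)]
assert seek(sps, 2.5) == 100   # must rewind to decode all streams
assert seek(sps, 3.5) == 400
```

This also shows the cost Michael mentions: with rare keyframes in any one stream, the single back_ptr forces every reader to rewind far, whereas per-stream (multiple) back_ptrs would let a reader that ignores that stream start later.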

On Sun, Feb 03, 2008 at 10:27:21PM -0500, Rich Felker wrote:
I don't consider this an improvement. Muxing unrelated content is a HARMFUL activity (it makes the file more resource-intensive to process with no gain) and outside the scope of NUT. The whole purpose of interleaved muxing is to be able to retrieve and synchronize information that is meant for simultaneous processing and presentation; it's not just something pointlessly done for its own sake. I'm against any features that have no use except for helping people do stupid things like this.
Or in other words, you don't want nut to be usable in broadcast environments.
that was suggested by Rich and would allow seeking within stored broadcasts even if some streams had only rare keyframes.
While I did at one point suggest multiple back_ptrs, it was never intended for such stupid use.
No, it was intended for an even more stupid one ;) Namely, seeking in files where streams have rare and randomly placed keyframes, where you otherwise can't seek while still demuxing and decoding all streams.
We later discovered that we could get adequate results with just a single back_ptr, so I dropped the recommendation.
Yes, and I fully agree that multiple back_ptrs are normally unneeded. The broadcast case is an exception though, where they could be useful. [...] -- Michael

On Sun, Feb 03, 2008 at 09:47:24PM +0000, Måns Rullgård wrote:
I guess this means NUT is not intended to be used in typical broadcast environments where each channel has far more bandwidth than a single program requires. This is why MPEG-TS supports multiple independent programs.
There's nothing keeping you from partitioning a physical channel via a separate protocol layer. For example, a DSL line has far more bandwidth than needed to watch youtube, but that doesn't mean that html, email, etc. are interleaved into the FLV container! Rather, you have TCP for multiplexing unrelated connections over a single physical network line. The same principle applies to broadcast channels as well. There is no excuse for container/transport incest! Rich

On Sun, Feb 03, 2008 at 10:30:22PM -0500, Rich Felker wrote:
There's nothing keeping you from partitioning a physical channel via a separate protocol layer. For example, a DSL line has far more bandwidth than needed to watch youtube, but that doesn't mean that html, email, etc. are interleaved into the FLV container! Rather, you have TCP for multiplexing unrelated connections over a single physical network line. The same principle applies to broadcast channels as well. There is no excuse for container/transport incest!
That's all nice and well, except that the reason why nut can't be used directly is maybe 2-3 lines in the spec. And as a solution you suggest an additional protocol layer. Which is equivalent to double-layer muxing. Which actually violates the spec ... And which I then have to somehow support in ffmpeg, in addition to mpeg-ts and everything else that implements things the other way around. Well, I won't implement it. Feel free to send a patch, but expect me to reject it if it's 1000+ lines. [...] -- Michael

On Mon, Feb 04, 2008 at 05:14:32AM +0100, Michael Niedermayer wrote:
That's all nice and well, except that the reason why nut can't be used directly is maybe 2-3 lines in the spec.
There are lots of things that could be added to NUT with just 2-3 lines in the spec, but which have no place in a media container... Ease of adding to the spec is not an argument for something. Rather, one must think of all the troubles it makes. Remember, perfection is reached not when there's nothing left to add but when there's nothing left to remove.
And as a solution you suggest an additional protocol layer. Which is equivalent to double-layer muxing. Which actually violates the spec ...
Storing other containers inside NUT violates the spec. Transmitting many NUT streams at once on a single physical link does not violate the spec any more than sending NUT over TCP, or storing it interleaved with other data on physical disk platters, does.
And which I then have to somehow support in ffmpeg, in addition to mpeg-ts and everything else that implements things the other way around.
FFmpeg does not need its own TCP stack or filesystem code, because the task of transmitting/storing multiple unrelated data streams on a single physical link/device belongs to the operating system and/or hardware, not to applications. Rich
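The kind of separate channel-partitioning layer Rich is arguing for can be sketched as a toy framing protocol. The (channel_id, length, payload) format below is invented purely for illustration; each channel's payload is opaque bytes, so a channel could carry an ordinary single-program NUT stream without the container knowing anything about its neighbors:

```python
# Toy transport-layer partitioning of one physical link into virtual
# channels. The framing format is invented for illustration and is not
# any real broadcast protocol; payloads are opaque to this layer.

import struct

HDR = struct.Struct(">HI")   # 2-byte channel id, 4-byte payload length

def mux(chunks):
    """Interleave (channel_id, payload) chunks onto one byte stream."""
    return b"".join(HDR.pack(cid, len(p)) + p for cid, p in chunks)

def demux(data, want_cid):
    """Recover the contiguous byte stream of one channel; chunks for
    other channels are skipped without being parsed."""
    out, off = bytearray(), 0
    while off < len(data):
        cid, n = HDR.unpack_from(data, off)
        off += HDR.size
        if cid == want_cid:
            out += data[off:off + n]
        off += n
    return bytes(out)

link = mux([(1, b"nut\x01"), (2, b"nut\x02"), (1, b"more")])
assert demux(link, 1) == b"nut\x01more"   # one plain stream comes out
assert demux(link, 2) == b"nut\x02"
```

This is the layering claim in miniature: the demuxer for channel 1 receives a plain byte stream, exactly as if it had been read from a file, and the partitioning logic never touches container internals.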

On Monday 04 February 2008 05:50:48 Rich Felker wrote:
Storing other containers inside NUT violates the spec. Transmitting many NUT streams at once on a single physical link does not violate the spec anymore than sending NUT over TCP or storing it interleaved with other data on physical disk platters does.
The intention is not to incest the container with the transport, but to: store N streams in a single NUT multiplex, and identify which of the N streams belong together (in what mpeg-ts calls a program: 1 video, 1+ audio, etc.) by means of some info packet. Requiring multiple NUT streams is simply not practical and out of the question in certain environments (e.g. broadcast channels). Simply look at how difficult it can be to receive crappy rt*p streams that require more than 1 socket. On the other side, having a program map shouldn't require much of an effort.

Nico Sabbi wrote:
Requiring multiple NUT streams is simply not practical and out of the question in certain environments (e.g. broadcast channels). Simply look at how difficult it can be to receive crappy rt*p streams that require more than 1 socket. On the other side, having a program map shouldn't require much of an effort.
What about postponing this issue and solving it with the net-nut that Michael drafted long ago? A one-size-fits-all solution isn't always the best. lu -- Luca Barbato Gentoo Council Member Gentoo/linux Gentoo/PPC http://dev.gentoo.org/~lu_zero

On Mon, Feb 04, 2008 at 10:24:27AM +0100, Nico Sabbi wrote:
the intention is not to incest the container with the transport,
Regardless of the intent, that's what it is.
but to: store N streams in a single NUT multiplex, and identify which of the N streams belong together (in what mpeg-ts calls a program: 1 video, 1+ audio, etc.) by means of some info packet.
It's easy to identify which ones belong together as a program: THEY ALL DO! If they don't, then they don't belong in the same container.
Requiring multiple NUT streams is simply not practical and out of the question in certain environments (e.g. broadcast channels).
How so?
Simply look at how difficult it can be to receive crappy rt*p streams that require more than 1 socket.
This is unrelated. For sockets, surely you would not want to send unrelated, irrelevant-to-the-recipient data; it's just a waste of bandwidth. Where the issue comes in is with broadcast channels over unidirectional links where all recipients receive the same content and the physical channel's bandwidth is large enough for multiple programs. It's really the fault of the hardware and protocol layers for not already partitioning such channels into multiple virtual channels. If they were going to switch to NUT (which I don't see them doing regardless of what idiotic tailored-to-their-broken-systems features we bloat NUT up with) they could just as well add a proper protocol layer for channel partitioning at the same time. Rich

Rich Felker <dalias@aerifal.cx> writes:
This is unrelated. For sockets, surely you would not want to send unrelated, irrelevant-to-the-recipient data; it's just a waste of bandwidth. Where the issue comes in is with broadcast channels over unidirectional links where all recipients receive the same content and the physical channel's bandwidth is large enough for multiple programs. It's really the fault of the hardware and protocol layers for not already partitioning such channels into multiple virtual channels. If they were going to switch to NUT (which I don't see them doing regardless of what idiotic tailored-to-their-broken-systems features we bloat NUT up with) they could just as well add a proper protocol layer for channel partitioning at the same time.
There are standards for IP over MPEG-TS. One could conceivably use that for NUT over RTSP over IP over MPEG-TS... -- Måns Rullgård mans@mansr.com

On Mon, Feb 04, 2008 at 09:39:04PM +0000, Måns Rullgård wrote:
There are standards for IP over MPEG-TS. One could supposedly use that for NUT over RTSP over IP over MPEG-TS...
I don't see how the extra layer of IP helps anything here. :) Really what I'm looking for is just a way that the _stream_ layer (byteio or whatever it's called in ffmpeg framework) would output a plain NUT stream for the selected program among whatever programs are being transmitted over the physical channel that the hardware is receiving. I don't care so much how it's done. My point all along has just been that I believe this is at a very different layer from multiplexing different parts of a single program which are meant to be synchronized and presented together. Rich

On Mon, Feb 04, 2008 at 05:43:01PM -0500, Rich Felker wrote:
I don't see how the extra layer of IP helps anything here. :) Really what I'm looking for is just a way that the _stream_ layer (byteio or whatever it's called in ffmpeg framework) would output a plain NUT stream for the selected program among whatever programs are being transmitted over the physical channel that the hardware is receiving. I don't care so much how it's done. My point all along has just been that I believe this is at a very different layer from multiplexing different parts of a single program which are meant to be synchronized and presented together.
I am still waiting for the patch for ffmpeg... It is either A. easy, so it should be only a few minutes' work for you, or B. not easy, in which case it's likely not such a good idea, as the alternative seems easier. It after all just has to pass the program info to AVProgram. And keep in mind, if it's a nightmare to implement in ffmpeg, it likely will be for most other applications as well. The same applies to it being easy. And the protocol "whatever it's called ;)" will give you a multi-nut in your case, like it or not. You will either need a double-layer protocol or a double-layer demuxer, because mpeg-ts comes out of it currently, and if you replace it with your protocol+nut, then your protocol+nut will come out. [...] -- Michael

On Tue, Feb 05, 2008 at 12:29:40AM +0100, Michael Niedermayer wrote:
I am still waiting for the patch for ffmpeg...
Code is only justified for implementing a solution that's already designed and specified. Throwing code at something new is a sign of amateurs.
It is either A. easy, so it should be only a few minutes' work for you, or B. not easy, in which case it's likely not such a good idea, as the alternative seems easier. It after all just has to pass the program info to AVProgram.
And keep in mind, if it's a nightmare to implement in ffmpeg, it likely will be for most other applications as well. The same applies to it being easy.
And the protocol "whatever it's called ;)" will give you a multi-nut in your case, like it or not. You will either need a double-layer protocol or a double-layer demuxer, because mpeg-ts comes out of it currently, and if you replace it with your protocol+nut, then your protocol+nut will come out.
Depends on what you're talking about it "coming out" of. No one says that an interleaved mess of video, html, email, pings, etc. comes out of the ethernet, because there's an appropriate layer delivering to the application only the data it's interested in (and which belongs to it). My intent was never for such monstrosities to be written to disk as a single file, but separated at the transport level. Of course even if they did remain on disk, it's like talking about zip or rar files. The possibility that someone might put two separate nut programs in some ugly wrapping structure on disk doesn't mean nut should support multiple programs internally any more than the possibility that someone might create a .rar file containing a nut file and windows codec binaries together means that nut should support embedding windows codec dlls in the headers... Rich

On Tue, Feb 05, 2008 at 01:21:33AM -0500, Rich Felker wrote:
Code is only justified for implementing a solution that's already designed and specified. Throwing code at something new is a sign of amateurs.
Designing things which cannot be implemented with reasonable complexity is plain and simple bad design. But then, you don't even provide any design; you just vaguely point toward some mysterious protocols. I can also describe a kernel, compiler and browser in a few vague and simple-sounding words; this doesn't make the implementation simple. Also, fact is, users won't give a damn about your incest theorems. If they want menus, multiple programs, ... in their files, they will use a container supporting that. No one will sit down and start designing and then implementing the protocols and other support structures vaguely pointed at by you. And in the end that just leads to a double container containing everything the users want. [...] -- Michael

Hi On Tue, Feb 05, 2008 at 01:21:33AM -0500, Rich Felker wrote: [...]
Depends on what you're talking about it "coming out" of. No one says that an interleaved mess of video, html, email, pings, etc. comes out of the ethernet, because there's an appropriate layer delivering to the application only the data it's interested in (and which belongs to it).
My intent was never for such monstrosities to be written to disk as a single file, but separated at the transport level. Of course even if they did remain on disk, it's like talking about zip or rar files. The possibility that someone might put two separate nut programs in some ugly wrapping structure on disk doesn't mean nut should support multiple programs internally any more than the possibility that someone might create a .rar file containing a nut file and windows codec binaries together means that nut should support embedding windows codec dlls in the headers...
I'll give a concrete example. A user has a DVD with menus and some alternative scenes/chapters. With my design he just transcodes this into a single nut file and can play it. With your design, he has to transcode this into maybe 50+ files somehow kept together in an archive, let's say a .tar, with some unspecified way to store menus and all the support structures. ffplay, mplayer, ffmpeg, xine, vlc, ... will then get a command line argument called mydvd.tar. There is no mysterious protocol between the file/http/ftp/... protocol and the demuxer unless such a new second-layer protocol or demuxer is implemented. It's not a natural part of file I/O to turn your single file into 50 streams easily usable and seekable by the demuxer. And I don't even want to start thinking about non-seekable input or what effect that would have on complexity. [...] -- Michael Good people do not need laws to tell them to act responsibly, while bad people will find a way around the laws. -- Plato

On Tue, Feb 05, 2008 at 02:33:10PM +0100, Michael Niedermayer wrote:
My intent was never for such monstrosities to be written to disk as a single file, but separated at the transport level. Of course even if they did remain on disk, it's like talking about zip or rar files. The possibility that someone might put two separate nut programs in some ugly wrapping structure on disk doesn't mean nut should support multiple programs internally any more than the possibility that someone might create a .rar file containing a nut file and windows codec binaries together means that nut should support embedding windows codec dlls in the headers...
I'll give a concrete example. A user has a DVD with menus and some alternative scenes/chapters. With my design he just transcodes this into a single nut file and can play it.
You speak of your design as if there is a design under discussion. There is not. The topic at hand is interleaving multiple unrelated programs, which does not in any way provide support for storing DVD menus. Nor does the current design of NUT preclude menus (though it certainly does not encourage them) given a proper spec for the metadata to describe them.
With your design,
There is not "my design" but the frozen design. Aside from fixing any minor details, which you and several others have been doing a very fine job of, NUT is finished.
he has to transcode this into maybe 50+ files somehow kept together in an archive, lets say a .tar.
The number would be more like 2. And they would not be in a .tar file except when distributed by idiotic warez-mentality folks (and then it would probably be .rar anyway).
With some unspecified way to store menus and all the support structures.
Feel free to propose a specification for menus. I believe this was on the agenda for a long time but considered sufficiently unimportant (and at a separate layer of specification) that it could be relegated to after NUT was completely finished.
ffplay, mplayer, ffmpeg, xine, vlc, ... will then get a command line argument called mydvd.tar
Absolutely not. One would extract the nonsensical archive with the normal archive tools, if such a thing were used.
There is no mysterious protocol between the file/http/ftp/... protocol and the demuxer unless such new second layer protocol or demuxer is implemented. Its
Again you are mixing unrelated issues. The topic at hand is partitioning of broadcast channels. No one in their right mind would transmit such a multi-program broadcast stream over http/ftp/etc. except perhaps as a link between devices involved in the actual broadcast. It would be wasting a HUGE amount of bw for no purpose, and if someone really did want all the programs, using a different http link for each is the appropriate way. Thinking they would be merged on disk and playable in that form is as ridiculous as thinking folks would expect to be able to run mplayer with a tcpdump packet capture file as its input... Rich

On Tue, Feb 05, 2008 at 12:14:12PM -0500, Rich Felker wrote: [...]
With some unspecified way to store menus and all the support structures.
Feel free to propose a specification for menus. I believe this was on the agenda for a long time but considered sufficiently unimportant (and at a separate layer of specification) that it could be relegated to after NUT was completely finished.
The generic info packets would have allowed storing menus.
ffplay, mplayer, ffmpeg, xine, vlc, ... will then get a command line argument called mydvd.tar
Absolutely not. One would extract the nonsensical archive with the normal archive tools, if such a thing were used.
You might; none of the users will. They will more likely just transcode it to Matroska and use the resulting single file. And if that happens not to work with a player they will choose a different player. It's a simple thing: nut either supports what people want, the way people want it, or people will use another container. Technical details have very little effect on user decisions.
There is no mysterious protocol between the file/http/ftp/... protocol and the demuxer unless such new second layer protocol or demuxer is implemented. Its
Again you are mixing unrelated issues. The topic at hand is partitioning of broadcast channels. No one in their right mind would transmit such a multi-program broadcast stream over http/ftp/etc.
Well people do up and download mpeg-ts to mphq :)
except perhaps as a link between devices involved in the actual broadcast.
Yes, you mux your mpeg-ts maybe in realtime, maybe offline, and then transmit it. Nut currently cannot be used as a replacement. It requires a second layer, which outweighs its advantages over mpeg-ts. Not a single person said they would even consider nut as a replacement; don't you think that's maybe an indication that you are moving in the wrong direction? And when you speak about partitioning, don't forget that all timing and buffering constraints must be met. All the packets must be transmitted so the decoder buffers neither overflow nor underflow. There's no feedback saying stop or saying "I want more packets". Splitting this over 2 layers is not going to make it much simpler. And that reminds me, for broadcast we might need an additional timestamp to synchronize the decoder. Otherwise clock drift between decoder and encoder can cause buffer over/underflows. This also affects single program per nut. But I assume the mystery protocol takes care of that as well. [...] -- Michael Breaking DRM is a little like attempting to break through a door even though the window is wide open and the only thing in the house is a bunch of things you don't want and which you would get tomorrow for free anyway

On Tue, Feb 05, 2008 at 07:57:48PM +0100, Michael Niedermayer wrote:
On Tue, Feb 05, 2008 at 12:14:12PM -0500, Rich Felker wrote: [...]
With some unspecified way to store menus and all the support structures.
Feel free to propose a specification for menus. I believe this was on the agenda for a long time but considered sufficiently unimportant (and at a separate layer of specification) that it could be relegated to after NUT was completely finished.
The generic info packets would have allowed storing menus.
As far as I can tell the existing info packet framework can do menus just fine, as long as there's a spec for menu markup. If you claim otherwise, please explain what the problem is and I'm interested in solving it. I do not intend to preclude use of menus, even though I think most users find it more of an annoyance than a feature.
ffplay, mplayer, ffmpeg, xine, vlc, ... will then get a command line argument called mydvd.tar
Absolutely not. One would extract the nonsensical archive with the normal archive tools, if such a thing were used.
You might; none of the users will. They will more likely just transcode it to Matroska and use the resulting single file. And if that happens not to work with a player they will choose a different player.
It's a simple thing: nut either supports what people want, the way people want it, or people will use another container. Technical details have very little effect on user decisions.
I think the degree to which menus are desired is strongly overestimated. I've never seen anyone use them in matroska. But nonetheless I'm fine with supporting them.
There is no mysterious protocol between the file/http/ftp/... protocol and the demuxer unless such new second layer protocol or demuxer is implemented. Its
Again you are mixing unrelated issues. The topic at hand is partitioning of broadcast channels. No one in their right mind would transmit such a multi-program broadcast stream over http/ftp/etc.
Well people do up and download mpeg-ts to mphq :)
except perhaps as a link between devices involved in the actual broadcast.
Yes, you mux your mpeg-ts maybe in realtime, maybe offline, and then transmit it. Nut currently cannot be used as a replacement. It requires a second layer, which outweighs its advantages over mpeg-ts. Not a single person said they would even consider nut as a replacement; don't you think that's maybe an indication that you are moving in the wrong direction?
No one's considering NUT as a replacement for .rar either because it's for a different purpose. Sadly MPEG-TS has an incestuous purpose, mixing multiple logical layers into one. That doesn't mean we should copy it. Even if we copied stupid MPEG-TS stuff in NUT, still no one would use it instead of MPEG-TS. They have lots of stupid legacy reasons for wanting backwards, ill-designed stuff from MPEG specs. Our target audience should not be people who lack any rational capabilities or else we have to make something lowered down to their intelligence level...
And when you speak about partitioning, don't forget that all timing and buffering constraints must be met. All the packets must be transmitted so the decoder buffers neither overflow nor underflow. There's no feedback saying stop or saying "I want more packets". Splitting this over 2 layers is not going to make it much simpler.
As long as the bitrate constraints are already met and streams are padded to occupy their allotted portion of the channel, just interleave according to bitrate. And again, NUT is NOT designed for meeting ridiculous buffering constraints of particular hardware. It's designed for device-independent media streams, data that's universally usable without gratuitous buffer requirements beyond what's naturally needed. This is why we don't have stupid things like preload. Now you're talking about all kinds of ridiculous device-dependent issues which do not belong in NUT. The days of tiny buffers will be over long before anyone adopts NUT in broadcast applications, regardless of what design decisions we make. The revolutionary thing is being a format that's oriented towards device-independence, as opposed to being oriented towards particular implementations.
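Rich's "just interleave according to bitrate" can be made concrete with a toy scheduler. This is a hypothetical editorial sketch, not anything from the NUT spec; the stream names, bitrates and packet sizes are invented for illustration:

```python
# Hypothetical sketch of bitrate-proportional interleaving: always emit
# a packet from the stream that is furthest behind its bitrate share.
# Stream names, bitrates and packet sizes are invented for illustration.

def interleave(streams, n_packets):
    """streams: name -> (bitrate_bps, packet_size_bytes); returns emission order."""
    sent = {name: 0 for name in streams}  # bytes emitted so far per stream
    order = []
    for _ in range(n_packets):
        # "virtual time" sent/bitrate says how far each stream has advanced;
        # serving the smallest keeps the output proportional to bitrate
        name = min(streams, key=lambda s: sent[s] / streams[s][0])
        sent[name] += streams[name][1]
        order.append(name)
    return order

# a 3 Mbit/s video stream and a 1 Mbit/s audio stream with equal packet
# sizes: video packets outnumber audio packets roughly 3:1
order = interleave({"video": (3_000_000, 1500), "audio": (1_000_000, 1500)}, 100)
```

With unequal packet sizes the same rule still works, since the comparison is on bytes divided by bitrate rather than on packet counts.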
And that reminds me, for broadcast we might need an additional timestamp to synchronize the decoder. Otherwise clock drift between decoder and encoder can cause buffer over/underflows. This also affects single program per nut. But I assume the mystery protocol takes care of that as well.
There's no use for additional timestamps. The decoder just needs to synchronize time to the timestamps in the stream, compensating for any drift. It makes no difference whether you use the audio timestamps or the video timestamps or some special additional out-of-band timestamp system as long as you do the compensation one way or another. Rich
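The drift compensation Rich describes, synchronizing the decoder's clock to the timestamps already in the stream, amounts to a small software phase-locked loop. A minimal sketch, with an invented gain and invented clock numbers (not part of any NUT spec):

```python
# Hypothetical sketch: slew a recovered clock toward the sender's stream
# timestamps instead of relying on an extra out-of-band timestamp.
# The gain and the 0.1% drift figure below are invented for illustration.

class ClockRecovery:
    def __init__(self, gain=0.05):
        self.offset = None      # estimated (sender time - local time)
        self.gain = gain        # small gain: slew gradually, never jump

    def update(self, stream_ts, local_ts):
        err = stream_ts - local_ts
        if self.offset is None:
            self.offset = err   # first timestamp: lock immediately
        else:
            # move a fraction of the way toward the observed error,
            # absorbing jitter while tracking slow encoder/decoder drift
            self.offset += self.gain * (err - self.offset)
        return local_ts + self.offset   # recovered presentation clock

clk = ClockRecovery()
# pretend the local oscillator runs 0.1% fast relative to the sender:
# the recovered clock stays close to the stream timestamps anyway
recovered = [clk.update(t, t * 1.001) for t in range(0, 10_000, 40)]
```

The same loop works whether the timestamps come from the audio stream, the video stream, or anywhere else, which is Rich's point: what matters is that compensation happens, not where the timestamps live.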

Rich Felker <dalias@aerifal.cx> writes:
No one's considering NUT as a replacement for .rar either because it's for a different purpose. Sadly MPEG-TS has an incestuous purpose, mixing multiple logical layers into one. That doesn't mean we should copy it.
Even if we copied stupid MPEG-TS stuff in NUT, still no one would use it instead of MPEG-TS. They have lots of stupid legacy reasons for wanting backwards, ill-designed stuff from MPEG specs. Our target audience should not be people who lack any rational capabilities or else we have to make something lowered down to their intelligence level...
It's nice to see the old Rich Felker is back, and that he is still the expert on everything he's never been near. -- Måns Rullgård mans@mansr.com

On Tue, Feb 05, 2008 at 11:50:29PM -0500, Rich Felker wrote:
On Tue, Feb 05, 2008 at 07:57:48PM +0100, Michael Niedermayer wrote:
On Tue, Feb 05, 2008 at 12:14:12PM -0500, Rich Felker wrote: [...]
With some unspecified way to store menus and all the support structures.
Feel free to propose a specification for menus. I believe this was on the agenda for a long time but considered sufficiently unimportant (and at a separate layer of specification) that it could be relegated to after NUT was completely finished.
The generic info packets would have allowed to store menus.
As far as I can tell the existing info packet framework can do menus just fine, as long as there's a spec for menu markup. If you claim otherwise, please explain what the problem is and I'm interested in solving it. I do not intend to preclude use of menus, even though I think most users find it more of an annoyance than a feature.
I have no interest in supporting some custom blinking and sparkling GUI menus. What I want, and what generic info packets fully supported until you decided that this functionality does not belong in nut but rather belongs in some other hypothetical (= non-existing) layer, was for example:
Info packet 1: Title "scene abcd", Start <timestamp>, End <timestamp>, Stream 1,2,3,4
...
Info packet 5: Title "scene defg", Start <timestamp>, End <timestamp>, Stream 1,2,3,4
Info packet 6: Title "scene defg without the nasty screams in the background", Start <timestamp>, End <timestamp>, Stream 1,2,3,5
...
Info packet 10: Title "Foobar with happy ending", Play 1, Play 3, Play 5, ...
Info packet 11: Title "Foobar with dramatic ending", Play 1, Play 4, Play 5, ...
Info packet 12: Title "Foobar with happy ending, the censored for conservatives edition", Play 1, Play 6, ...
Info packet 13: Title "Main movie menu", Alternative 10, Alternative 11, Alternative 12
[...] -- Michael I have often repented speaking, but never of holding my tongue. -- Xenocrates

On Thu, Feb 07, 2008 at 02:50:02AM +0100, Michael Niedermayer wrote:
I have no interest in supporting some custom blinking and sparkling GUI menus. What I want, and what generic info packets fully supported until you decided that this functionality does not belong in nut but rather belongs in some other hypothetical (= non-existing) layer, was for example:
Info packet 1 Title "scene abcd" Start <timestamp> End <timestamp> Stream 1,2,3,4
...
Info packet 5 Title "scene defg" Start <timestamp> End <timestamp> Stream 1,2,3,4
Info packet 6 Title "scene defg without the nasty screams in the background" Start <timestamp> End <timestamp> Stream 1,2,3,5
...
Info packet 10 Title "Foobar with happy ending" Play 1 Play 3 Play 5 ...
Info packet 11 Title "Foobar with dramatic ending" Play 1 Play 4 Play 5 ...
Info packet 12 Title "Foobar with happy ending, the censored for conservatives edition" Play 1 Play 6 ...
Info packet 13 Title "Main movie menu" Alternative 10 Alternative 11 Alternative 12
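The info-packet layout quoted above maps onto a small data model: chapters selecting a stream subset over a time range, playlists chaining chapters, and a menu offering alternative playlists. A hypothetical sketch; all field names, ids and timestamp values are invented for illustration and are not NUT syntax:

```python
# Purely illustrative model of the quoted info-packet menu scheme.
# Timestamps and packet ids are invented; NUT defines no such fields.
from dataclasses import dataclass

@dataclass
class Chapter:          # like "Info packet 1": timed range + stream subset
    title: str
    start: int
    end: int
    streams: list

@dataclass
class Playlist:         # like "Info packet 10": chapters played back to back
    title: str
    play: list          # ordered chapter ids

@dataclass
class Menu:             # like "Info packet 13": user picks one playlist
    title: str
    alternatives: list  # playlist ids

packets = {
    1: Chapter("scene abcd", 0, 1000, [1, 2, 3, 4]),
    5: Chapter("scene defg", 1000, 2000, [1, 2, 3, 4]),
    6: Chapter("scene defg without the nasty screams in the background",
               1000, 2000, [1, 2, 3, 5]),
    10: Playlist("Foobar with happy ending", [1, 5]),
    11: Playlist("Foobar with dramatic ending", [1, 6]),
    13: Menu("Main movie menu", [10, 11]),
}

def resolve(packets, menu_id, choice):
    """Expand one menu choice into the chapters to play, in order."""
    playlist = packets[packets[menu_id].alternatives[choice]]
    return [packets[cid] for cid in playlist.play]
```

A player honoring such packets would run `resolve` on the user's menu pick and then play each chapter's time range with only its listed streams enabled.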
This whole sort of "play X, play Y, ..." scripting begins to reek an awful lot of QuickTime... Look how hard it is to play QT files right (or even decide what it means to play them "right") due to ridiculous edit-type functionality in the container. I understand how stuff like this is useful for certain purposes, but I still tend to think it's better kept outside of NUT. (Not only does keeping it external preclude the creation of files which no player knows how to play "right"; it also makes it easier for someone working with menus to edit the menu structure without recreating the whole file.) Honestly I don't remember at this point what all the arguments involved in the info flamewar were, what the pros and cons were, etc. I know at one point I was in support of certain types of limited-coverage info packets that apply only to segments of the program, and my recollection is that it was more an issue of technical problems (locating them, semantics for them, etc.) than theoretical objection that led to my support of a proposal without them. In any case, my health would do well not to revisit this unpleasant part of the past... Rich
participants (5)
- Luca Barbato
- Michael Niedermayer
- Måns Rullgård
- Nico Sabbi
- Rich Felker