
On Tue, Feb 05, 2008 at 07:57:48PM +0100, Michael Niedermayer wrote:
On Tue, Feb 05, 2008 at 12:14:12PM -0500, Rich Felker wrote: [...]
With some unspecified way to store menus and all the support structures.
Feel free to propose a specification for menus. I believe this was on the agenda for a long time but considered sufficiently unimportant (and at a separate layer of specification) that it could be relegated to after NUT was completely finished.
The generic info packets would have allowed storing menus.
As far as I can tell the existing info packet framework can do menus just fine, as long as there's a spec for menu markup. If you claim otherwise, please explain what the problem is and I'm interested in solving it. I do not intend to preclude use of menus, even though I think most users find it more of an annoyance than a feature.
ffplay, mplayer, ffmpeg, xine, vlc, ... will then get a command line argument called mydvd.tar
Absolutely not. One would extract the nonsensical archive with the normal archive tools, if such a thing were used.
You might; none of the users will. They will more likely just transcode it to Matroska and use the resulting single file. And if that happens not to work with a player, they will choose a different player.
It's a simple thing: NUT either supports what people want, in the way people want it, or people will use another container. Technical details have very little effect on user decisions.
I think the degree to which menus are desired is strongly overestimated. I've never seen anyone use them in Matroska. But nonetheless I'm fine with supporting them.
There is no mysterious protocol between the file/http/ftp/... protocol and the demuxer unless such a new second-layer protocol or demuxer is implemented.
Again you are mixing unrelated issues. The topic at hand is partitioning of broadcast channels. No one in their right mind would transmit such a multi-program broadcast stream over http/ftp/etc.
Well, people do upload and download MPEG-TS files to mphq :)
except perhaps as a link between devices involved in the actual broadcast.
Yes, you mux your MPEG-TS, maybe in real time, maybe offline, and then transmit it. NUT currently cannot be used as a replacement; it requires a second layer, which outweighs its advantages over MPEG-TS. Not a single person has said they would even consider NUT as a replacement. Don't you think that's maybe an indication that you are moving in the wrong direction?
No one's considering NUT as a replacement for .rar either, because it's for a different purpose. Sadly MPEG-TS has an incestuous purpose, mixing multiple logical layers into one. That doesn't mean we should copy it. Even if we copied stupid MPEG-TS stuff in NUT, still no one would use it instead of MPEG-TS. They have lots of stupid legacy reasons for wanting backwards, ill-designed stuff from MPEG specs. Our target audience should not be people who lack any rational capabilities, or else we have to make something lowered to their intelligence level...
And when you speak about partitioning, don't forget that all timing and buffering constraints must be met. All the packets must be transmitted so that the decoder buffers neither overflow nor underflow. There's no feedback saying stop or saying "I want more packets". Splitting this over two layers is not going to make it much simpler.
As long as the bitrate constraints are already met and streams are padded to occupy their allotted portion of the channel, just interleave according to bitrate. And again, NUT is NOT designed for meeting ridiculous buffering constraints of particular hardware. It's designed for device-independent media streams, data that's universally usable without gratuitous buffer requirements beyond what's naturally needed. This is why we don't have stupid things like preload. Now you're talking about all kinds of ridiculous device-dependent issues which do not belong in NUT. The days of tiny buffers will be over long before anyone adopts NUT in broadcast applications, regardless of what design decisions we make. The revolutionary thing is being a format that's oriented towards device-independence, as opposed to being oriented towards particular implementations.
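As a sketch of the "interleave according to bitrate" rule above (all names are illustrative, and this assumes each stream's bitrate constraint is already met): the muxer always emits the next packet from the stream whose transmitted data covers the least channel time so far.

```c
#include <stddef.h>

/* Illustrative sketch of bitrate-proportional interleaving: pick the
 * stream whose output so far covers the least time (sent_bits/bitrate),
 * assuming each stream already meets its bitrate constraint. */
struct stream {
    unsigned long long bitrate;   /* bits per second, nonzero */
    unsigned long long sent_bits; /* bits emitted so far */
};

static size_t next_stream(const struct stream *s, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        /* s[i] lags if s[i].sent/s[i].rate < s[best].sent/s[best].rate;
         * compare cross-multiplied to avoid division and rounding */
        if (s[i].sent_bits * s[best].bitrate <
            s[best].sent_bits * s[i].bitrate)
            best = i;
    return best;
}
```

No buffer model or feedback appears here on purpose: the point is that once the per-stream rates are fixed, interleaving is a local decision of the muxer.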
And that reminds me: for broadcast, we might need an additional timestamp to synchronize the decoder. Otherwise clock drift between decoder and encoder can cause buffer over- or underflows. This also affects single-program NUT streams. But I assume the mystery protocol takes care of that as well.
There's no use for additional timestamps. The decoder just needs to synchronize time to the timestamps in the stream, compensating for any drift. It makes no difference whether you use the audio timestamps or the video timestamps or some special additional out-of-band timestamp system as long as you do the compensation one way or another.

Rich