
On Sun, Mar 12, 2006 at 10:53:03PM +0100, Michael Niedermayer wrote:
> > @@ -106,7 +106,7 @@
> >  t (v coded universal timestamp)
> >      tmp                                 v
> >      stream_id= tmp % stream_count
> > -    value= (tmp / stream_count) * stream[ stream_id ].timebase
> > +    value= (tmp / stream_count) * timebase[stream_id]
> > BTW an idea... instead of having the timebase stored in the stream
> > header, what if we stored a list of timebases in the main header, and
> > the stream header just had an index into that list? That way, if we
> > had 20 different streams but only 2 distinct timebases, the universal
> > timestamps would still be compact. It would also allow additional
> > non-stream timebases to be stored for use in chapter extents.
> ok
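
To make the proposal concrete, a minimal C sketch of that scheme; the
struct and field names (main_header_t, timebase, stream_tb_idx, the two
functions) are invented for illustration and are not spec syntax:

/* Main header carries one list of distinct timebases; each stream
 * header stores only an index into that list. */
#include <stdint.h>

typedef struct {
    uint32_t num, den;          /* timebase as a rational number of seconds */
} timebase_t;

typedef struct {
    int         stream_count;
    int         timebase_count;
    timebase_t *timebase;       /* shared list from the main header */
    int        *stream_tb_idx;  /* per-stream index into timebase[] */
} main_header_t;

/* Decode a v-coded universal timestamp per the patched formula:
 *   stream_id = tmp % stream_count
 *   value     = (tmp / stream_count) * timebase[stream_id]
 * "ticks" stays in timebase units; multiply by num/den for seconds. */
static void decode_universal_ts(const main_header_t *h, uint64_t tmp,
                                int *stream_id, uint64_t *ticks,
                                const timebase_t **tb)
{
    *stream_id = (int)(tmp % h->stream_count);
    *ticks     = tmp / h->stream_count;
    *tb        = &h->timebase[h->stream_tb_idx[*stream_id]];
}

/* The inverse mapping, for the muxer side. */
static uint64_t encode_universal_ts(const main_header_t *h,
                                    int stream_id, uint64_t ticks)
{
    return ticks * (uint64_t)h->stream_count + (uint64_t)stream_id;
}

With 20 streams sharing 2 distinct timebases, the file then stores only
2 rationals plus 20 small indices, which is the compactness win
described above.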
Hmm, one thing that needs consideration though is whether there's a
limit on the number of timebases. Up until now we never worried about
limits on the number of streams, since we assumed a demuxer could just
ignore streams it's not interested in. However, due to universal
timestamps, the demuxer needs to be aware of all timebases in the file.
Should we address this, or just assume that a super-limited demuxer can
reread the list from the main header if it needs to? Sorry for bringing
up a nasty issue -- I just realized it.

Rich
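
A hedged sketch of what the "super-limited demuxer" reread fallback
could look like, assuming for illustration a flat fixed-width on-disk
list of little-endian u32 num/den pairs (the real format would
presumably v-code these values, and all names here are invented):

#include <stdio.h>
#include <stdint.h>

#define MAX_TIMEBASES 16            /* arbitrary cache cap for this sketch */

typedef struct { uint32_t num, den; } timebase_t;

typedef struct {
    FILE      *f;
    long       tb_list_pos;         /* file offset of the timebase list */
    int        total_count;         /* entries the main header declares */
    int        cached_count;        /* min(total_count, MAX_TIMEBASES),
                                       filled in at header-parse time */
    timebase_t cache[MAX_TIMEBASES];
} tb_table_t;

/* Fetch timebase idx; entries beyond the cache are re-read from the
 * main header instead of failing outright. */
static int get_timebase(tb_table_t *t, int idx, timebase_t *out)
{
    uint8_t b[8];
    if (idx < 0 || idx >= t->total_count)
        return -1;                  /* corrupt index */
    if (idx < t->cached_count) {    /* fast path: cached entry */
        *out = t->cache[idx];
        return 0;
    }
    /* slow path: seek back into the main header and read one entry */
    if (fseek(t->f, t->tb_list_pos + 8L * idx, SEEK_SET) ||
        fread(b, 1, 8, t->f) != 8)
        return -1;
    out->num = b[0] | b[1] << 8 | (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
    out->den = b[4] | b[5] << 8 | (uint32_t)b[6] << 16 | (uint32_t)b[7] << 24;
    return 0;
}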