From dalias at aerifal.cx Wed Jun 11 06:14:46 2003
From: dalias at aerifal.cx (D Richard Felker III)
Date: Wed, 11 Jun 2003 00:14:46 -0400
Subject: [MPlayer-G2-dev] more on frame timing, framerate-changing filters
Message-ID: <20030611041446.GA27990@brightrain.aerifal.cx>

Hey. I'm working more on improving inverse-telecine stuff. Actually I think I'm going to test the new design on G1 (even though it's much more of a pain to deal with fields the way I want to in G1, it's easier to test with a fully functional player, and people will be able to benefit from it sooner).

Anyway, to get to the point, I'm hoping to make this code friendly for porting to G2 (I already have the timing-change stuff worked out for it), but I'd like to clear up how we want timing to work. Right now, it seems G2 has a duration field for the image, but I'm not sure whether that's preferred, or whether we should instead have a timestamp relative to the previous frame... IMO, for some filters, we may need to know *future* information to decide a duration, whereas we should already know the pts relative to the previous frame. And of course, for performance purposes (and coding simplicity), it's optimal not to have to grab future frames from up the chain before we output a given frame. Consider for example the case of vf_decimate (near-duplicate frame dropper). If it wants to output a duration, it has to grab frames into the future until it finds a non-duplicate. But with pts relative to the previous frame, it can defer the task of decoding those extra frames until the current frame has been displayed, and *then* decoding those extra frames takes place during the "sleep time" anyway, so it won't cause the a/v sync to stutter on slow systems.

Also, on another matter. I know G1's whole a/v sync system has been based on a lot of approximations and feedback measurement. This is great for mplayer, and probably also for encoding from variable-fps input to fixed-fps avi output with mencoder. However, especially with the new perfect-a/v-sync mpeg code in G2, I'd really like to see support for "exact" timing in G2. Maybe a way to use a time division base and specify all pts stuff in terms of those (exact) units. It would be much preferred for precision encoding and video processing work, and nice for output to containers like nut (mpcf) which will support such things. I'm envisioning my inverse telecine wanting to use a time base of 1/120 second, for flawless output of mixed telecine, 30fps progressive, and 60fps interlaced content as a single progressive output stream. (Such a perfect rip could *really* get the anime fansub groups interested in using mplayer/nut instead of the windows junk they use now....)

If you're worried about not wanting all framerate-changing filters (or apps) to have to support this stuff, it could be an optional feature, where any filter in the chain can ignore the exact info and just use "float pts" the rest of the way down the chain...

Rich
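[To make the idea concrete, here is a minimal C sketch of exact timestamps over a rational timebase. All names are hypothetical, not actual G2 API.]

#include <stdint.h>

typedef struct {
    uint32_t num, den;      /* one tick lasts num/den seconds */
} timebase_t;

typedef struct {
    int64_t pts;            /* presentation time, in ticks */
    int64_t rel_pts;        /* ticks since the previous frame */
} frame_time_t;

static double ticks_to_sec(int64_t ticks, timebase_t tb)
{
    return (double)ticks * tb.num / tb.den;
}

/* With timebase_t tb = { 1, 120 }: a telecined frame shown for 3
 * fields is exactly 6 ticks, one shown for 2 fields is 4 ticks, and
 * a 60fps field is 2 ticks - no rounding anywhere. */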
From ajh at watri.org.au Wed Jun 11 11:51:37 2003
From: ajh at watri.org.au (Anders Johansson)
Date: Wed, 11 Jun 2003 17:51:37 +0800
Subject: [MPlayer-G2-dev] more on frame timing, framerate-changing filters
In-Reply-To: <20030611041446.GA27990@brightrain.aerifal.cx>
References: <20030611041446.GA27990@brightrain.aerifal.cx>
Message-ID: <20030611095137.GC27352@watri.org.au>

Hi,

> Also, on another matter. I know G1's whole a/v sync system has been based on a lot of approximations and feedback measurement. This is great for mplayer, and probably also for encoding from variable-fps input to fixed-fps avi output with mencoder. However, especially with the new perfect-a/v-sync mpeg code in G2, I'd really like to see support for "exact" timing in G2. Maybe a way to use a time division base and specify all pts stuff in terms of those (exact) units. It would be much preferred for precision encoding and video processing work, and nice for output to containers like nut (mpcf) which will support such things.

I'd like to see this as well. The sync should not be based on audio but instead on the realtime clock or an external timebase (worldclock cards can be purchased for PCs). If the soundcard isn't synced to the RTC then one could fix it using sample stuffing or dropping (inaudible).

> Rich

//Anders

From andrej at lucky.net Wed Jun 11 14:45:46 2003
From: andrej at lucky.net (Andriy N. Gritsenko)
Date: Wed, 11 Jun 2003 15:45:46 +0300
Subject: [MPlayer-G2-dev] Re: more on frame timing, framerate-changing filters
In-Reply-To: <20030611041446.GA27990@brightrain.aerifal.cx>
References: <20030611041446.GA27990@brightrain.aerifal.cx>
Message-ID: <20030611124546.GA41940@lucky.net>

Hi, D Richard Felker III!

Sometime (on Wednesday, June 11 at 7:05) I received something...

> If you're worried about not wanting all framerate-changing filters (or apps) to have to support this stuff, it could be an optional feature, where any filter in the chain can ignore the exact info and just use "float pts" the rest of the way down the chain...

I'm sorry, I don't understand exactly everything you said, but I think any filter with time prediction (inverse telecine, video time-rendering for slower/faster play, etc.) should at least have as input for get_frame():

1) wanted pts (presentation time stamp of the currently pulled frame);
2) wanted duration (relative time stamp of the frame that will be requested next time).

That way a filter could know whether it must pull a frame from the previous filter or not, and whether it may return a duplicate or a dropped frame instead. That info will also let us have variable framelength/fps, because the filter will not be tied to fps anymore.

With best wishes.
Andriy.
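[Expressed as code, the pull interface being proposed might look roughly like this. A sketch only; the struct and function names are invented for illustration, not an actual API.]

typedef struct mp_image { double pts; /* ... */ } mp_image_t;

typedef struct vf_instance {
    struct vf_instance *prev;
    mp_image_t *(*get_frame)(struct vf_instance *vf,
                             double wanted_pts,   /* pts of the frame being pulled */
                             double wanted_dur);  /* time until the next request   */
    mp_image_t *held;                             /* last frame pulled upstream    */
} vf_instance_t;

/* A filter can then decide for itself whether to pull or to reuse: */
static mp_image_t *pull_or_duplicate(vf_instance_t *vf,
                                     double wanted_pts, double wanted_dur)
{
    if (vf->held && vf->held->pts >= wanted_pts)
        return vf->held;    /* still current: return a "duplicate" */
    vf->held = vf->prev->get_frame(vf->prev, wanted_pts, wanted_dur);
    return vf->held;        /* may be NULL, i.e. a dropped frame */
}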
From andrej at lucky.net Wed Jun 11 14:58:48 2003
From: andrej at lucky.net (Andriy N. Gritsenko)
Date: Wed, 11 Jun 2003 15:58:48 +0300
Subject: [MPlayer-G2-dev] Re: more on frame timing, framerate-changing filters
In-Reply-To: <20030611124546.GA41940@lucky.net>
References: <20030611041446.GA27990@brightrain.aerifal.cx> <20030611124546.GA41940@lucky.net>
Message-ID: <20030611125848.GB41940@lucky.net>

Hi!

Hmm, maybe I said that badly. Sorry for my bad English.

> I'm sorry, I don't understand exactly everything you said, but I think any

I didn't mean I understood nothing at all; I meant I didn't understand some of what you said, just because I haven't dug into the subject.

With best wishes.
Andriy.

From dalias at aerifal.cx Wed Jun 11 17:33:40 2003
From: dalias at aerifal.cx (D Richard Felker III)
Date: Wed, 11 Jun 2003 11:33:40 -0400
Subject: [MPlayer-G2-dev] more on frame timing, framerate-changing filters
In-Reply-To: <20030611095137.GC27352@watri.org.au>
References: <20030611041446.GA27990@brightrain.aerifal.cx> <20030611095137.GC27352@watri.org.au>
Message-ID: <20030611153340.GB30224@brightrain.aerifal.cx>

On Wed, Jun 11, 2003 at 05:51:37PM +0800, Anders Johansson wrote:
> Hi,
>
> > Also, on another matter. I know G1's whole a/v sync system has been based on a lot of approximations and feedback measurement. This is great for mplayer, and probably also for encoding from variable-fps input to fixed-fps avi output with mencoder. However, especially with the new perfect-a/v-sync mpeg code in G2, I'd really like to see support for "exact" timing in G2. Maybe a way to use a time division base and specify all pts stuff in terms of those (exact) units. It would be much preferred for precision encoding and video processing work, and nice for output to containers like nut (mpcf) which will support such things.
>
> I'd like to see this as well. The sync should not be based on audio but instead on the realtime clock or an external timebase (worldclock cards can be purchased for PCs). If the soundcard isn't synced to the RTC then one could fix it using sample stuffing or dropping (inaudible).

Well, what you said is a good thought too, but I don't think it's related to what I said... :)

Rich

From dalias at aerifal.cx Wed Jun 11 17:33:10 2003
From: dalias at aerifal.cx (D Richard Felker III)
Date: Wed, 11 Jun 2003 11:33:10 -0400
Subject: [MPlayer-G2-dev] Re: more on frame timing, framerate-changing filters
In-Reply-To: <20030611124546.GA41940@lucky.net>
References: <20030611041446.GA27990@brightrain.aerifal.cx> <20030611124546.GA41940@lucky.net>
Message-ID: <20030611153310.GA30224@brightrain.aerifal.cx>

On Wed, Jun 11, 2003 at 03:45:46PM +0300, Andriy N. Gritsenko wrote:
> Hi, D Richard Felker III!
>
> Sometime (on Wednesday, June 11 at 7:05) I received something...
>
> > If you're worried about not wanting all framerate-changing filters (or apps) to have to support this stuff, it could be an optional feature, where any filter in the chain can ignore the exact info and just use "float pts" the rest of the way down the chain...
>
> I'm sorry, I don't understand exactly everything you said, but I think any filter with time prediction (inverse telecine, video time-rendering for slower/faster play, etc.) should at least have as input for get_frame():
> 1) wanted pts (presentation time stamp of the currently pulled frame);
> 2) wanted duration (relative time stamp of the frame that will be requested next time).
> That way a filter could know whether it must pull a frame from the previous filter or not, and whether it may return a duplicate or a dropped frame instead. That info will also let us have variable framelength/fps, because the filter will not be tied to fps anymore.

Duplicate/dropped? You really misunderstand the whole issue here, I think... Some filters might drop frames, but duplication is near-useless. The much more common case will be rearrangement of data, where input and output frames don't necessarily correspond in any fixed pattern. And anyway, the paragraph you quoted above was about the use of exact absolute timestamps/timebase versus the current system with floats, where the player/encoder at the end keeps trying to correct/fudge the numbers to make it work.

Rich

From arpi at thot.banki.hu Sun Jun 15 00:46:51 2003
From: arpi at thot.banki.hu (Arpi)
Date: Sun, 15 Jun 2003 00:46:51 +0200
Subject: [MPlayer-G2-dev] more on frame timing, framerate-changing filters
In-Reply-To: <20030611041446.GA27990@brightrain.aerifal.cx>
Message-ID: <200306142246.h5EMkpfj007226@mail.mplayerhq.hu>

Hi,

> Anyway, to get to the point, I'm hoping to make this code friendly for porting to G2 (I already have the timing-change stuff worked out for it), but I'd like to clear up how we want timing to work. Right now, it seems G2 has a duration field for the image, but I'm not sure whether that's preferred, or whether we should instead have a timestamp relative to the previous frame... IMO, for some filters, we may need to know *future* information to decide a duration, whereas we should already know the pts relative to the previous frame. And of course, for performance purposes (and coding simplicity), it's optimal not to have to grab future frames from up the chain before we output a given frame.

Yes. Actually I spent a lot of time thinking about this, and came to the conclusion that no optimal (or near-optimal) solution exists. At least not in the mplayer world, where lots of containers and codecs with different behaviour and timing models are supported.

I've reduced this game to 2 basic types:
- absolute timestamps (when to display the frame)
- frame durations (how long to display the frame)

Both may be given for a frame, and the pts_flags tells you which ones are available and (!!!) how accurate they are. (Some containers have only inaccurate timestamps, but fixed fps/duration.) The main problem is that in several cases only codecs (or filters) know the final duration value, so it cannot be used at the demuxer level to calculate the accurate timestamp (as I wanted to do earlier).
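[As a sketch, per-frame timing like this could be carried as follows. The flag names are invented for illustration, not the actual G2 definitions.]

#define MP_PTS_VALID     0x01   /* absolute timestamp present */
#define MP_PTS_ACCURATE  0x02   /* ...and trustworthy         */
#define MP_DUR_VALID     0x04   /* duration present           */
#define MP_DUR_ACCURATE  0x08   /* ...and trustworthy         */

typedef struct {
    double pts;        /* when to display the frame (seconds) */
    double duration;   /* how long to display it (seconds)    */
    unsigned pts_flags;
} frame_timing_t;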
> Consider for example the case of vf_decimate (near-duplicate frame dropper). If it wants to output a duration, it has to grab frames into the future until it finds a non-duplicate. But with pts relative to the previous frame, it can defer the task of decoding those extra frames until the current frame has been displayed, and *then* decoding those extra frames takes place during the "sleep time" anyway, so it won't cause the a/v sync to stutter on slow systems.

Actually you should (have to) report dropped frames too, by returning NULL. It ensures that the next filters know that frames were dropped (some temporal filters, or for example field<->frame splitter/merger filters, require this), and the final a-v sync code also knows about the frame being dropped.

Imho vf_decimate should run one frame ahead of the playback pointer, so it always returns the previous frame if it is different enough, or returns NULL if it is similar enough.

Filters altering playback rate should modify the duration of incoming frames, and reset the timestamp of newly generated (== inserted) frames. See the tfields port for example.
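[As an illustration, a tfields-like splitter would adjust timing roughly like this. A simplified sketch, not the real filter code.]

typedef struct {
    double pts, duration;
} frame_t;

/* Emit two fields per input frame, each with half the input duration;
 * the second field is a generated frame, so its timestamp is
 * re-derived rather than copied from the input. */
static void split_fields(const frame_t *in, frame_t out[2])
{
    out[0].pts      = in->pts;
    out[0].duration = in->duration / 2;
    out[1].pts      = in->pts + in->duration / 2;  /* reset, not copied */
    out[1].duration = in->duration / 2;
}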
And, yes, I know it's not the optimal solution, so I'm open to better models, although I know there is no better way (within the given constraints).

> Also, on another matter. I know G1's whole a/v sync system has been based on a lot of approximations and feedback measurement.

Yes.

> This is great for mplayer, and probably also for encoding from variable-fps input to fixed-fps avi output with mencoder. However, especially with the new perfect-a/v-sync mpeg code in G2, I'd really like to see support for "exact" timing in G2. Maybe a way to use a time division base and specify all pts stuff in terms of those (exact) units.

I was thinking about this too, and as you see it's done that way in the demuxer layer (rate multiplier and divisor instead of float fps), although I don't think it's really worth the extra code and the worry about integer ranges everywhere (you never know if the base rate is 1 or 1/100000000000000, so even a long long may overflow). Kabi once did some calculations on ffmpeg-devel when this topic was discussed there (about the ticker code), and with double it's accurate enough to run for several thousands of hours without a single frame of delay. And the pts values may be (and should be, and usually are) calculated from integer counters (frameno/fps or integer_pts/pts_scale) by the demuxers.

> It would be much preferred for precision encoding and video processing work, and nice for output to containers like nut (mpcf) which will support such things. I'm envisioning my inverse telecine wanting to use a time base of 1/120 second, for flawless output of mixed telecine, 30fps progressive, and 60fps interlaced content as a single

you shouldn't rely on assuming any fps; think of someone doing -speed 1.356 and then using the telecine filter. You should use only rates, not absolute values, i.e. for inverse telecine multiply the incoming duration by 4/5...

> progressive output stream. (Such a perfect rip could *really* get the anime fansub groups interested in using mplayer/nut instead of the windows junk they use now....)

:)

> If you're worried about not wanting all framerate-changing filters (or apps) to have to support this stuff, it could be an optional feature, where any filter in the chain can ignore the exact info and just use "float pts" the rest of the way down the chain...

argh

A'rpi / Astral & ESP-team
--
Developer of MPlayer G2, the Movie Framework for all - http://www.MPlayerHQ.hu

From arpi at thot.banki.hu Sun Jun 15 00:50:31 2003
From: arpi at thot.banki.hu (Arpi)
Date: Sun, 15 Jun 2003 00:50:31 +0200
Subject: [MPlayer-G2-dev] more on frame timing, framerate-changing filters
In-Reply-To: <20030611095137.GC27352@watri.org.au>
Message-ID: <200306142250.h5EMoVPH007640@mail.mplayerhq.hu>

Hi,

> > Also, on another matter. I know G1's whole a/v sync system has been based on a lot of approximations and feedback measurement. This is great for mplayer, and probably also for encoding from variable-fps input to fixed-fps avi output with mencoder. However, especially with the new perfect-a/v-sync mpeg code in G2, I'd really like to see support for "exact" timing in G2. Maybe a way to use a time division base and specify all pts stuff in terms of those (exact) units. It would be much preferred for precision encoding and video processing work, and nice for output to containers like nut (mpcf) which will support such things.
>
> I'd like to see this as well. The sync should not be based on audio but instead on the realtime clock or an external timebase (worldclock

of course, it's planned as an option for the g2 sync core (sync to any audio or video stream, or RTC, or any external clock)

> cards can be purchased for PCs). If the soundcard isn't synced to the

imho it's far better to sync to the sound card than to resample audio to get in sync with the wall clock... except for some special uses when wall clock sync is more important than audio quality (like streaming media).

> RTC then one could fix it using sample stuffing or dropping (inaudible).

how do you do sample drop/insert without hearing it? afaik the ntv (?) video capture app does audio resampling (not sample insert/drop) to keep the sync.
A'rpi / Astral & ESP-team
--
Developer of MPlayer G2, the Movie Framework for all - http://www.MPlayerHQ.hu

From arpi at thot.banki.hu Sun Jun 15 00:54:35 2003
From: arpi at thot.banki.hu (Arpi)
Date: Sun, 15 Jun 2003 00:54:35 +0200
Subject: [MPlayer-G2-dev] [RFC] first draft of stream/demux-metadata support (+playlist-infos)
In-Reply-To: <200305262240.09477.FabianFranz@gmx.de>
Message-ID: <200306142254.h5EMsZJJ008015@mail.mplayerhq.hu>

Hi,

Any updates/news on this $SUBJ? Fabian, are you still working/thinking on this, or have you passed it to me for finishing??? I really want the stream and demuxer APIs to be finalized. (Yes, I know I need to design & implement the subtitle streams in demuxers, but it isn't a big issue; I already have it almost complete in my mind. Anyway, if anyone has any ideas on this topic, feel free to tell me ASAP.)

A'rpi / Astral & ESP-team
--
Developer of MPlayer G2, the Movie Framework for all - http://www.MPlayerHQ.hu

From arpi at thot.banki.hu Sun Jun 15 01:01:41 2003
From: arpi at thot.banki.hu (Arpi)
Date: Sun, 15 Jun 2003 01:01:41 +0200
Subject: [MPlayer-G2-dev] get_buffer/release_buffer vs. get_image/mpi
Message-ID: <200306142301.h5EN1fpR012455@mail.mplayerhq.hu>

Hi,

I've spent some time thinking on the $SUBJ, but I finally found a small issue that is a show-stopper: releasing the buffer. I don't really understand/know how it is done in ffmpeg (Michael?), but I see it as a big issue in our codec/filter layer: OK, you do get_buffer() at any time (init or decode), and when it is filled with content (decoded video) you return it to the caller. But when and where will it be released? The codec cannot release it, as it has to return it to the caller and the caller will use its content. The caller can't release it, as it doesn't know about the allocator's future plans (maybe it's used as a reference frame...). Maybe some kind of reference counting could help here, but it adds extra complexity, while my primary goal was to simplify the buffering code.

So I've decided to keep the mpi stuff for now; maybe we'll look at the get/release thingie again in g3 :)

But I still have some ideas for simplifying mpi a bit (remove the _IP and _IPB types, and the one-get_image-per-one-decode-call restriction) while keeping the current per-filter allocation and release-at-uninit implementation. But I'm not even sure it's worth the mess.

A'rpi / Astral & ESP-team
--
Developer of MPlayer G2, the Movie Framework for all - http://www.MPlayerHQ.hu
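[For reference, the reference-counting idea mentioned above would look roughly like this. A sketch only, not a proposal for the actual API.]

#include <stdlib.h>

/* The decoder holds a reference while the frame may still be used for
 * prediction, the caller holds one while displaying; the buffer is
 * freed only when both references are gone. */
typedef struct {
    int refcount;
    void *data;
} buffer_t;

static buffer_t *buf_ref(buffer_t *b)
{
    b->refcount++;
    return b;
}

static void buf_unref(buffer_t *b)
{
    if (--b->refcount == 0) {
        free(b->data);
        free(b);
    }
}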
From ivan at cacad.com Sun Jun 15 05:05:21 2003
From: ivan at cacad.com (Ivan Kalvachev)
Date: Sun, 15 Jun 2003 06:05:21 +0300 (EEST)
Subject: [MPlayer-G2-dev] get_buffer/release_buffer vs. get_image/mpi
In-Reply-To: <200306142301.h5EN1fpR012455@mail.mplayerhq.hu>
References: <200306142301.h5EN1fpR012455@mail.mplayerhq.hu>
Message-ID: <1099.10.100.0.14.1055646321.squirrel@mail.cacad.com>

Arpi said:
> Hi,
>
> I've spent some time thinking on the $SUBJ, but I finally found a small issue that is a show-stopper: releasing the buffer. I don't really understand/know how it is done in ffmpeg (Michael?), but I see it as a big issue in our codec/filter layer: OK, you do get_buffer() at any time (init or decode), and when it is filled with content (decoded video) you return it to the caller. But when and where will it be released? The codec cannot release it, as it has to return it to the caller and the caller will use its content. The caller can't release it, as it doesn't know about the allocator's future plans (maybe it's used as a reference frame...). Maybe some kind of reference counting could help here, but it adds extra complexity, while my primary goal was to simplify the buffering code.
>
> So I've decided to keep the mpi stuff for now; maybe we'll look at the get/release thingie again in g3 :)
>
> But I still have some ideas for simplifying mpi a bit (remove the _IP and _IPB types, and the one-get_image-per-one-decode-call restriction) while keeping the current per-filter allocation and release-at-uninit implementation. But I'm not even sure it's worth the mess.
>
> A'rpi / Astral & ESP-team

Sorry, I cannot understand what you are talking about. I think that get/release_buffer works this way:
- get_buffer when a new frame is needed; the codec gives the type (I, P or B);
- release_buffer when the frame is complete (drawn/skipped), displayed, and no longer used for prediction. (I think ffmpeg has some age thing; dunno why, probably this is how it counts how many prediction buffers it requires.)

The only problem is that the displayed frame for the decoder is not the displayed frame for mplayer, as mplayer should keep sync. The bad side is that it is possible to allocate all available buffers and get stuck waiting for flip_page. A workaround is to allocate a TEMP buffer and render into it. This also leads to buffering ahead.

Maybe I should send you a patch with my xvmc implementation. For it I have made full use of get/release. I used 2 flags: the first one set by the codec, the second set by the video system. If both flags are cleared then the buffer is free. I warn you, xvmc doesn't work yet, so I may have screwed things up completely :/

Best Regards
Ivan Kalvachev

From michaelni at gmx.at Sun Jun 15 10:53:20 2003
From: michaelni at gmx.at (Michael Niedermayer)
Date: Sun, 15 Jun 2003 10:53:20 +0200
Subject: [MPlayer-G2-dev] get_buffer/release_buffer vs. get_image/mpi
In-Reply-To: <200306142301.h5EN1fpR012455@mail.mplayerhq.hu>
References: <200306142301.h5EN1fpR012455@mail.mplayerhq.hu>
Message-ID: <200306151053.21689.michaelni@gmx.at>

Hi

On Sunday 15 June 2003 01:01, Arpi wrote:
> Hi,
>
> I've spent some time thinking on the $SUBJ, but I finally found a small issue that is a show-stopper: releasing the buffer. I don't really understand/know how it is done in ffmpeg (Michael?), but

very simple, it's released when it's not needed anymore:

decode_video()
    F1= get_buffer(IP-type)
    return 0
decode_video()
    F2= get_buffer(IP-type)
    return F1
decode_video()
    F3= get_buffer(B-type)
    return F3
decode_video()
    release_buffer(F3)
    F4= get_buffer(B-type)
    return F4
decode_video()
    release_buffer(F4)
    release_buffer(F1)
    F5= get_buffer(IP-type)
    return F2
...

[...]

--
Michael

level[i]= get_vlc(); i+=get_vlc(); (violates patent EP0266049)
median(mv[y-1][x], mv[y][x-1], mv[y+1][x+1]); (violates patent #5,905,535)
buf[i]= qp - buf[i-1]; (violates patent #?)
for more examples, see http://mplayerhq.hu/~michael/patent.html
stop it, see http://petition.eurolinux.org & http://petition.ffii.org/eubsa/en
From michaelni at gmx.at Sun Jun 15 11:03:49 2003
From: michaelni at gmx.at (Michael Niedermayer)
Date: Sun, 15 Jun 2003 11:03:49 +0200
Subject: [MPlayer-G2-dev] get_buffer/release_buffer vs. get_image/mpi
In-Reply-To: <1099.10.100.0.14.1055646321.squirrel@mail.cacad.com>
References: <200306142301.h5EN1fpR012455@mail.mplayerhq.hu> <1099.10.100.0.14.1055646321.squirrel@mail.cacad.com>
Message-ID: <200306151103.49694.michaelni@gmx.at>

Hi

On Sunday 15 June 2003 05:05, Ivan Kalvachev wrote:
[...]
> Sorry, I cannot understand what you are talking about. I think that get/release_buffer works this way:
> - get_buffer when a new frame is needed; the codec gives the type (I, P or B);
> - release_buffer when the frame is complete (drawn/skipped), displayed, and no longer used for prediction. (I think ffmpeg has some age thing; dunno why, probably this is how it counts how many prediction buffers it requires.)

age is used to skip the drawing of MBs. It is set by the get_buffer() implementation, and simply means the number of get_buffer() calls since this buffer was returned the last time. That's useful because if an MB has been skipped more often than the age of the frame, then its data is still in the buffer (assuming only simple IP frames are used; otherwise it's a bit more complex), and so it doesn't need to be copied from the previous frame.

[...]

--
Michael

level[i]= get_vlc(); i+=get_vlc(); (violates patent EP0266049)
median(mv[y-1][x], mv[y][x-1], mv[y+1][x+1]); (violates patent #5,905,535)
buf[i]= qp - buf[i-1]; (violates patent #?)
for more examples, see http://mplayerhq.hu/~michael/patent.html
stop it, see http://petition.eurolinux.org & http://petition.ffii.org/eubsa/en
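[A rough sketch of that test. Names invented; the real ffmpeg logic is more involved.]

/* A macroblock skipped in every frame since this buffer was last
 * returned by get_buffer() still holds valid pixels, so it need not
 * be copied from the previous frame. */
typedef struct {
    int age;   /* get_buffer() calls since this buffer was last used */
} picture_t;

static int mb_needs_copy(const picture_t *pic, int consecutive_skips)
{
    return consecutive_skips < pic->age;
}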
From dalias at aerifal.cx Sun Jun 15 23:55:33 2003
From: dalias at aerifal.cx (D Richard Felker III)
Date: Sun, 15 Jun 2003 17:55:33 -0400
Subject: [MPlayer-G2-dev] more on frame timing, framerate-changing filters
In-Reply-To: <200306142246.h5EMkpfj007226@mail.mplayerhq.hu>
References: <20030611041446.GA27990@brightrain.aerifal.cx> <200306142246.h5EMkpfj007226@mail.mplayerhq.hu>
Message-ID: <20030615215533.GA30224@brightrain.aerifal.cx>

On Sun, Jun 15, 2003 at 12:46:51AM +0200, Arpi wrote:
> Hi,
>
> > Anyway, to get to the point, I'm hoping to make this code friendly for porting to G2 (I already have the timing-change stuff worked out for it), but I'd like to clear up how we want timing to work. Right now, it seems G2 has a duration field for the image, but I'm not sure whether that's preferred, or whether we should instead have a timestamp relative to the previous frame... IMO, for some filters, we may need to know *future* information to decide a duration, whereas we should already know the pts relative to the previous frame. And of course, for performance purposes (and coding simplicity), it's optimal not to have to grab future frames from up the chain before we output a given frame.
>
> Yes. Actually I spent a lot of time thinking about this, and came to the conclusion that no optimal (or near-optimal) solution exists. At least not in the mplayer world, where lots of containers and codecs with different behaviour and timing models are supported.
>
> I've reduced this game to 2 basic types:
> - absolute timestamps (when to display the frame)
> - frame durations (how long to display the frame)

Yes, and I'm asking if we can change it to absolute timestamp and relative timestamp...

> > Consider for example the case of vf_decimate (near-duplicate frame dropper). If it wants to output a duration, it has to grab frames into the future until it finds a non-duplicate. But with pts relative to the previous frame, it can defer the task of decoding those extra frames until the current frame has been displayed, and *then* decoding those extra frames takes place during the "sleep time" anyway, so it won't cause the a/v sync to stutter on slow systems.
>
> Actually you should (have to) report dropped frames too, by returning NULL. It ensures that the next filters know that frames were dropped (some temporal filters, or for example field<->frame splitter/merger filters, require this), and the final a-v sync code also knows about the frame being dropped.

I disagree. You're right if the filter is just doing something silly like blindly dropping frames to lower the output framerate. But if your filter is actually adding or dropping frames for a good reason (inverse telecine, duplicate removal, etc.), then temporal (noise?) filters and field split/merge stuff should have no problem getting the output stream as-is, with no information about what was dropped. In fact, I don't even like thinking about the chain in terms of "drops" or "added frames", since from my point of view the whole idea is that there is no inherent 1-1 correspondence between input frames and output frames.

As for A/V sync being preserved, this does not require notifying the next filter about drops either. You just have to merge the duration of the frame you dropped into the durations of the other frames you're sending out, so that the sum duration is not changed.
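[A sketch of that bookkeeping in a vf_decimate-style filter. A simplified illustration; whether the carried time is folded into the previous or the next emitted frame is a design choice.]

typedef struct {
    double pts, duration;
} frame_t;

static double pending;   /* duration carried over from dropped frames */

/* Returns 1 if the frame should be sent downstream, 0 if dropped.
 * The summed duration of the output stays equal to that of the input,
 * so a/v sync holds without notifying the next filter of drops. */
static int emit_or_drop(frame_t *f, int is_duplicate)
{
    if (is_duplicate) {
        pending += f->duration;   /* drop: remember its screen time */
        return 0;
    }
    f->duration += pending;       /* merge carried time into this frame */
    pending = 0;
    return 1;
}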
> Imho vf_decimate should run one frame ahead of the playback pointer, so it always returns the previous frame if it is different enough, or returns NULL if it is similar enough.

Hmm, I'd have to think about it more, but I'm not sure this can be done right without shifting a/v sync by at least one frame... which would be bad.

> Filters altering playback rate should modify the duration of incoming frames, and reset the timestamp of newly generated (== inserted) frames. See the tfields port for example.

Let me explain how I'd prefer to do it. I'd like to have relative_pts rather than duration, which is exactly the same thing as the duration of the previous frame. Then what tfields would do is output both fields with a relative_pts of input.relative_pts/2. Here, the exact same thing could be done with duration as-is.

> And, yes, I know it's not the optimal solution, so I'm open to better models, although I know there is no better way (within the given constraints)

From the perspective of implementing the player loop, relative_pts and duration are pretty much exactly the same. However, as I was saying in my original mail, relative_pts is much better from the filters' perspective, since you can know it earlier.

Also, relative_pts seems to be a more natural quantity to work with when thinking of output frames as the basic unit rather than input frames. For instance, smoothing out frame times with inverse telecine requires adding 1/120 to the pts for frames that are shown for 3 fields, and subtracting 1/120 for frames that are shown for only 2 fields. To emulate this with durations, you'd need to know whether the *next* frame is shown for 3 fields or just 2, which requires buffering up even more -- bleh. (You can't just adjust the duration of the current frame based on whether it's shown for 2 fields or 3. That will work with perfect telecine, but in the wild perfect telecine does not exist, so in all practical settings it will break a/v sync!!) Keep in mind that I'm already having to buffer at least 6 fields for the new smart algorithm, so if I'm limited to duration rather than relative_pts, I'll also have to buffer an additional frame...
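[A sketch of the smoothing described above, in 1/120 s units. Illustration only, assuming plain NTSC material at 60 fields/s; the real filter must cope with broken cadences.]

#define TICK (1.0 / 120.0)

/* Raw telecined frames arrive spanning 3 fields (6 ticks) or 2 fields
 * (4 ticks); smoothed 24fps film frames should each span 5 ticks.
 * Adjusting each frame's relative pts by one tick achieves that
 * without buffering ahead to compute a duration. */
static double smoothed_rel_pts(int fields_shown)
{
    double raw = fields_shown * 2 * TICK;             /* 3 -> 6, 2 -> 4 */
    return raw + (fields_shown == 2 ? TICK : -TICK);  /* both -> 5     */
}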
> Kabi once did some calculations on ffmpeg-devel when this topic was discussed there (about the ticker code), and with double it's accurate enough to run for several thousands of hours without a single frame of delay.

OK, I guess it can be ignored then...

> > It would be much preferred for precision encoding and video processing work, and nice for output to containers like nut (mpcf) which will support such things. I'm envisioning my inverse telecine wanting to use a time base of 1/120 second, for flawless output of mixed telecine, 30fps progressive, and 60fps interlaced content as a single progressive output stream.
>
> you shouldn't rely on assuming any fps; think of someone doing -speed 1.356 and then using the telecine filter. You should use only rates, not absolute values, i.e. for inverse telecine multiply the incoming duration by 4/5...

Obviously I didn't mean having numbers like this hard-coded... :) Just one possible setup for the output muxer, which a user might configure. And anyway, you mean 5/4, not 4/5, but as I described above, that will NOT work for real-world content, just ideal 3:2 telecine.

Rich

From arpi at thot.banki.hu Mon Jun 16 01:41:09 2003
From: arpi at thot.banki.hu (Arpi)
Date: Mon, 16 Jun 2003 01:41:09 +0200
Subject: [MPlayer-G2-dev] more on frame timing, framerate-changing filters
In-Reply-To: <20030615215533.GA30224@brightrain.aerifal.cx>
Message-ID: <200306152341.h5FNf9Bh030341@mail.mplayerhq.hu>

Hi,

> > I've reduced this game to 2 basic types:
> > - absolute timestamps (when to display the frame)
> > - frame durations (how long to display the frame)
>
> Yes, and I'm asking if we can change it to absolute timestamp and relative timestamp...

why? btw, note that duration != relative timestamp:

    f(x-1)             f(x)               f(x+1)
      |  f(x).rel_pts    |    f(x).dur      |
      |<---------------->|<---------------->|
      ^                  ^                  ^
  f(x-1).pts         f(x).pts          f(x+1).pts

> > > Consider for example the case of vf_decimate (near-duplicate frame dropper). If it wants to output a duration, it has to grab frames into the future until it finds a non-duplicate. But with pts relative to the previous frame, it can defer the task of decoding those extra frames until the current frame has been displayed, and *then* decoding those extra frames takes place during the "sleep time" anyway, so it won't cause the a/v sync to stutter on slow systems.
> >
> > Actually you should (have to) report dropped frames too, by returning NULL. It ensures that the next filters know that frames were dropped (some temporal filters, or for example field<->frame splitter/merger filters, require this), and the final a-v sync code also knows about the frame being dropped.
>
> I disagree. You're right if the filter is just doing something silly like blindly dropping frames to lower the output framerate. But if your filter is actually adding or dropping frames for a good reason (inverse telecine, duplicate removal, etc.), then temporal (noise?) filters and field split/merge stuff should have no problem getting the output stream as-is, with no information about what was dropped. In fact, I don't even like thinking about the chain in terms of "drops" or "added frames", since from my point of view the whole idea is that there is no inherent 1-1 correspondence between input frames and output frames.

it depends. I wanted to keep the option of seeing dropped frames in filters. It isn't used yet, but I see several ideas for it. For example, think of a filter merging fields to frames: say you want to split the decoded frame into fields, apply a similar-frame-remover filter (on fields) and then merge the fields. The merge filter should know about the number of fields that were dropped. (OK, I know it's not a good example; tell me a better one.) But I can imagine other cases too, where it's useful for a filter to know about dropped frames. And since it's near-zero work to implement (it's already done in the core, see VFCAP_NOTIFY_DROPPED_FRAMES), why drop it?

> As for A/V sync being preserved, this does not require notifying the next filter about drops either. You just have to merge the duration of the frame you dropped into the durations of the other frames you're sending out, so that the sum duration is not changed.

this is a good idea

> > Imho vf_decimate should run one frame ahead of the playback pointer, so it always returns the previous frame if it is different enough, or returns NULL if it is similar enough.
>
> Hmm, I'd have to think about it more, but I'm not sure this can be done right without shifting a/v sync by at least one frame... which would be bad.

why?

> > Filters altering playback rate should modify the duration of incoming frames, and reset the timestamp of newly generated (== inserted) frames. See the tfields port for example.
>
> Let me explain how I'd prefer to do it. I'd like to have relative_pts rather than duration, which is exactly the same thing as the duration of the previous frame. Then what tfields would do is output both fields with a relative_pts of input.relative_pts/2. Here, the exact same thing could be done with duration as-is.

the problem is that the duration of the previous frame is not (always) available when decoding a frame. So again it needs extra complexity in the decoders, to keep it in their private area or so.

> > And, yes, I know it's not the optimal solution, so I'm open to better models, although I know there is no better way (within the given constraints)
>
> From the perspective of implementing the player loop, relative_pts and duration are pretty much exactly the same. However, as I was saying in my original mail, relative_pts is much better from the filters' perspective, since you can know it earlier.
>
> Also, relative_pts seems to be a more natural quantity to work with when thinking of output frames as the basic unit rather than input frames. For instance, smoothing out frame times with inverse telecine requires adding 1/120 to the pts for frames that are shown for 3 fields, and subtracting 1/120 for frames that are shown for only 2 fields. To emulate this with durations, you'd need to know whether the *next* frame is shown for 3 fields or just 2, which requires buffering up even more -- bleh. (You can't just adjust the duration of the current frame based on whether it's shown for 2 fields or 3. That will work with perfect telecine, but in the wild perfect telecine does not exist, so in all practical settings it will break a/v sync!!) Keep in mind that I'm already having to buffer at least 6 fields for the new smart algorithm, so if I'm limited to duration rather than relative_pts, I'll also have to buffer an additional frame...
>
> > Kabi once did some calculations on ffmpeg-devel when this topic was discussed there (about the ticker code), and with double it's accurate enough to run for several thousands of hours without a single frame of delay.
>
> OK, I guess it can be ignored then...
>
> > > It would be much preferred for precision encoding and video processing work, and nice for output to containers like nut (mpcf) which will support such things. I'm envisioning my inverse telecine wanting to use a time base of 1/120 second, for flawless output of mixed telecine, 30fps progressive, and 60fps interlaced content as a single progressive output stream.
> >
> > you shouldn't rely on assuming any fps; think of someone doing -speed 1.356 and then using the telecine filter. You should use only rates, not absolute values, i.e. for inverse telecine multiply the incoming duration by 4/5...
>
> Obviously I didn't mean having numbers like this hard-coded... :) Just one possible setup for the output muxer, which a user might configure.

ah ok

> And anyway, you mean 5/4, not 4/5, but as I described above, that will NOT work for real-world content, just ideal 3:2 telecine.

:)

A'rpi / Astral & ESP-team
--
Developer of MPlayer G2, the Movie Framework for all - http://www.MPlayerHQ.hu
From ajh at watri.org.au Mon Jun 16 03:19:19 2003
From: ajh at watri.org.au (Anders Johansson)
Date: Mon, 16 Jun 2003 09:19:19 +0800
Subject: [MPlayer-G2-dev] more on frame timing, framerate-changing filters
In-Reply-To: <200306142250.h5EMoVPH007640@mail.mplayerhq.hu>
References: <20030611095137.GC27352@watri.org.au> <200306142250.h5EMoVPH007640@mail.mplayerhq.hu>
Message-ID: <20030616011919.GD27352@watri.org.au>

Hi,

> Hi,
>
> > > Also, on another matter. I know G1's whole a/v sync system has been based on a lot of approximations and feedback measurement. This is great for mplayer, and probably also for encoding from variable-fps input to fixed-fps avi output with mencoder. However, especially with the new perfect-a/v-sync mpeg code in G2, I'd really like to see support for "exact" timing in G2. Maybe a way to use a time division base and specify all pts stuff in terms of those (exact) units. It would be much preferred for precision encoding and video processing work, and nice for output to containers like nut (mpcf) which will support such things.
> >
> > I'd like to see this as well. The sync should not be based on audio but instead on the realtime clock or an external timebase (worldclock
>
> of course, it's planned as an option for the g2 sync core (sync to any audio or video stream, or RTC, or any external clock)

Sounds good.

> > cards can be purchased for PCs). If the soundcard isn't synced to the
>
> imho it's far better to sync to the sound card than to resample audio to get in sync with the wall clock... except for some special uses when wall clock sync is more important than audio quality (like streaming media).
>
> > RTC then one could fix it using sample stuffing or dropping (inaudible).
>
> how do you do sample drop/insert without hearing it? afaik the ntv (?) video capture app does audio resampling (not sample insert/drop) to keep the sync.

Since the soundcard sample clock and the sync sample clock differ very little in frequency, the difference can be compensated for by duplicating or removing samples. I made some tests: even if one has to do it for every 100 samples it is impossible to hear, and if it has to happen more often, the distortion can be removed by applying a short lowpass filter with a cutoff of about 0.95 (relative frequency).

I will continue on the new sound library this week; I will allow for an external sample clock in the design.

> A'rpi / Astral & ESP-team

//Anders
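[A sketch of such sample stuffing/dropping. Illustration only; real code would live in an audio filter with proper buffer management, and out[] must have room for n + n/interval samples when stuffing.]

#include <stddef.h>

/* dir = +1 to stuff (duplicate one sample every `interval` samples),
 * dir = -1 to drop one instead. Returns the output sample count. */
static size_t drift_adjust(const short *in, size_t n, short *out,
                           size_t interval, int dir)
{
    size_t i, o = 0;
    for (i = 0; i < n; i++) {
        if (i % interval == interval - 1) {
            if (dir > 0) {            /* stuff: write the sample twice */
                out[o++] = in[i];
                out[o++] = in[i];
            }
            /* dir < 0: drop this sample, write nothing */
        } else {
            out[o++] = in[i];
        }
    }
    return o;
}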
From arpi at thot.banki.hu Thu Jun 26 01:43:10 2003
From: arpi at thot.banki.hu (Arpi)
Date: Thu, 26 Jun 2003 01:43:10 +0200
Subject: [MPlayer-G2-dev] gui - skin engine
Message-ID: <200306252343.h5PNhAVb018108@mail.mplayerhq.hu>

Hi,

I just checked the skin CVS of gmplayer, and there are a lot of nice skins. It makes me wonder whether we will support g1 skins in g2, or whether the g2 gui will have a new skin format. If we keep the g1 format (maybe extended), then there should be a common, multiplatform skin loader/renderer library. If it's well written, it's split into platform-independent and platform-dependent parts, i.e. there could be skin renderer/interface modules for Xlib, gtk, qt, win32 etc. Afaik Faust already ported the g1 skin loader to win32; dunno how much it differs from the linux code.

A'rpi / Astral & ESP-team
--
Developer of MPlayer G2, the Movie Framework for all - http://www.MPlayerHQ.hu

From space2 at atlastelecom.ro Thu Jun 26 06:48:31 2003
From: space2 at atlastelecom.ro (Szasz Pal)
Date: Thu, 26 Jun 2003 07:48:31 +0300
Subject: [MPlayer-G2-dev] gui - skin engine
In-Reply-To: <200306252343.h5PNhAVb018108@mail.mplayerhq.hu>
References: <200306252343.h5PNhAVb018108@mail.mplayerhq.hu>
Message-ID: <200306260748.31299.space2@atlastelecom.ro>

> Ie there could be skin renderer/interface modules for Xlib, gtk, qt, win32

Just an idea: check (e)FLTK, it works under linux, windows, mac,...

Pali

From saschasommer at freenet.de Thu Jun 26 08:49:29 2003
From: saschasommer at freenet.de (Sascha Sommer)
Date: Thu, 26 Jun 2003 08:49:29 +0200
Subject: [MPlayer-G2-dev] gui - skin engine
References: <200306252343.h5PNhAVb018108@mail.mplayerhq.hu>
Message-ID: <007a01c33baf$1852c8e0$656354d9@oemcomputer>

> Hi,
>
> I just checked the skin CVS of gmplayer, and there are a lot of nice skins.

agree ;)

> It makes me wonder whether we will support g1 skins in g2, or whether the g2 gui will have a new skin format.

I think it would be better to extend the existing one, if there is anything to extend at all. Another skin format would only bring incompatibility, without having real advantages.

> If we keep the g1 format (maybe extended), then there should be a common, multiplatform skin loader/renderer library. If it's well written, it's split into platform-independent and platform-dependent parts, i.e. there could be skin renderer/interface modules for Xlib, gtk, qt, win32 etc. Afaik Faust already ported the g1 skin loader to win32; dunno how much it differs from the linux code.

My skin loader is currently only one file, and not all info is loaded yet (fonts etc.). The main difference is that the G1 skin loader stores the images in png format, while I am converting them to the desktop pixel format to speed up rendering later. I also do not load already existing images twice. Attached is the header for reference; maybe you can find some useful design ideas there.

What are your ideas for the next layer? Make it platform-dependent right away, or implement some window-manager-like abstraction layer? If we design it well, even gui rendering inside the video window (think of fbdev and vesa) should be no big deal.

Sascha

-------------- next part --------------
A non-text attachment was scrubbed...
Name: skinload.h
Type: application/octet-stream
Size: 5164 bytes
Desc: not available
URL:
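[The don't-load-twice part could be as simple as a lookup keyed by filename before decoding. A sketch; the names are invented, since skinload.h itself is not preserved in the archive.]

#include <stdlib.h>
#include <string.h>

/* Cache of already-converted skin images, so each bitmap is decoded
 * and converted to the desktop pixel format only once. */
typedef struct skin_image {
    char *name;                 /* source filename (cache key)     */
    void *pixels;               /* already in desktop pixel format */
    struct skin_image *next;
} skin_image_t;

static skin_image_t *cache;

static skin_image_t *skin_image_get(const char *name,
                                    void *(*load_convert)(const char *))
{
    skin_image_t *img;
    for (img = cache; img; img = img->next)
        if (!strcmp(img->name, name))
            return img;                   /* already loaded once */
    img = malloc(sizeof(*img));
    img->name = strdup(name);
    img->pixels = load_convert(name);     /* decode png + convert */
    img->next = cache;
    cache = img;
    return img;
}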
From pontscho at kac.poliod.hu Thu Jun 26 10:16:00 2003
From: pontscho at kac.poliod.hu (Zoltan Ponekker)
Date: Thu, 26 Jun 2003 10:16:00 +0200 (CEST)
Subject: [MPlayer-G2-dev] gui - skin engine
In-Reply-To: <007a01c33baf$1852c8e0$656354d9@oemcomputer>
Message-ID:

Hali

> > I just checked the skin CVS of gmplayer, and there are a lot of nice skins.
>
> agree ;)

:)

> > It makes me wonder whether we will support g1 skins in g2, or whether the g2 gui will have a new skin format.
>
> I think it would be better to extend the existing one, if there is anything to extend at all. Another skin format would only bring incompatibility, without having real advantages.

I want to make the skin loader compatible with the old versions.

> > If we keep the g1 format (maybe extended), then there should be a common, multiplatform skin loader/renderer library. If it's well written, it's split into platform-independent and platform-dependent parts, i.e. there could be skin renderer/interface modules for Xlib, gtk, qt, win32 etc. Afaik Faust already ported the g1 skin loader to win32; dunno how much it differs from the linux code.
>
> My skin loader is currently only one file, and not all info is loaded yet (fonts etc.). The main difference is that the G1 skin loader stores the images in png format, while I am converting them to the desktop pixel format to speed up rendering later.

Convert during loading? It is on my todo list.

> I also do not load already existing images twice. Attached is the header for reference; maybe you can find some useful design ideas there.

Yes, it's nice!

> What are your ideas for the next layer? Make it platform-dependent right away, or implement some window-manager-like abstraction layer? If we design it well, even gui rendering

Yes.

> inside the video window (think of fbdev and vesa) should be no big deal.

Hm. vesa + X?

I've started to rewrite the loader; anyone interested?

Pontscho / fresh!mindworkz
--- MPlayer Core Team - www.MPlayerHQ.hu

PS: sorry for my bad english :)

From saschasommer at freenet.de Thu Jun 26 10:52:07 2003
From: saschasommer at freenet.de (Sascha Sommer)
Date: Thu, 26 Jun 2003 10:52:07 +0200
Subject: [MPlayer-G2-dev] gui - skin engine
References:
Message-ID: <003b01c33bc0$3a1f5f40$3eb0e3d9@oemcomputer>

> > inside the video window (think of fbdev and vesa) should be no big deal.
>
> Hm. vesa + X?

Imo you only need a mouse driver, no X. The gui could be rendered just like the OSD. vf_gui ;)

Sascha
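[In filter terms the idea might look roughly like this: a sketch modeled on the shape of G1's put_image chain, with simplified stand-in types, not a real implementation.]

typedef struct mp_image {
    unsigned char *planes[3];
    int stride[3];
    int w, h;
} mp_image_t;

typedef struct vf_instance {
    int (*put_image)(struct vf_instance *vf, mp_image_t *mpi);
    struct vf_instance *next;
} vf_instance_t;

static void gui_draw(mp_image_t *mpi)
{
    (void)mpi;  /* stub: real code would blit skin/menu graphics here */
}

/* vf_gui: overlay the gui onto each frame on its way to the vo, so it
 * works with any output driver, just like the OSD. */
static int gui_put_image(vf_instance_t *vf, mp_image_t *mpi)
{
    gui_draw(mpi);
    return vf->next->put_image(vf->next, mpi);  /* pass frame down */
}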
From kinali at gmx.net Sat Jun 28 11:23:38 2003
From: kinali at gmx.net (Attila Kinali)
Date: Sat, 28 Jun 2003 11:23:38 +0200
Subject: [MPlayer-G2-dev] gui - skin engine
In-Reply-To: <200306260748.31299.space2@atlastelecom.ro>
References: <200306252343.h5PNhAVb018108@mail.mplayerhq.hu> <200306260748.31299.space2@atlastelecom.ro>
Message-ID: <20030628112338.08b6cfcc.kinali@gmx.net>

On Thu, 26 Jun 2003 07:48:31 +0300 Szasz Pal wrote:

> > Ie there could be skin renderer/interface modules for Xlib, gtk, qt, win32
> Just an idea: check (e)FLTK, it works under linux, windows, mac,...

I don't think anyone here will like fltk because it's written in C++, although I like it very much for its clean code (I learned x11 programming from fltk).

Attila Kinali
--
Emacs is not an editor to me. To me it's exactly as if I asked for a bicycle (to fetch the Sunday rolls) and got a pan-galactic star cruiser 10 km long. I have no idea what to do with it.
-- Frank Klemm, de.comp.os.unix.discussion

From kinali at gmx.net Sat Jun 28 11:27:16 2003
From: kinali at gmx.net (Attila Kinali)
Date: Sat, 28 Jun 2003 11:27:16 +0200
Subject: [MPlayer-G2-dev] gui - skin engine
In-Reply-To: <003b01c33bc0$3a1f5f40$3eb0e3d9@oemcomputer>
References: <003b01c33bc0$3a1f5f40$3eb0e3d9@oemcomputer>
Message-ID: <20030628112716.7930da0d.kinali@gmx.net>

On Thu, 26 Jun 2003 10:52:07 +0200 "Sascha Sommer" wrote:

> Imo you only need a mouse driver, no X. The gui could be rendered just like the OSD. vf_gui ;)

BTW: what about a more general gui/osd scheme? I mean, what happens if we handle osd/gui just like another video window, which can be mapped separately into its own window or above the current video? This would enable us to use a gui system (i.e. the gui and menu stuff like dvdnav) over any -vo module without having two separate systems for gui and osd.

Attila Kinali
--
Emacs is not an editor to me. To me it's exactly as if I asked for a bicycle (to fetch the Sunday rolls) and got a pan-galactic star cruiser 10 km long. I have no idea what to do with it.
-- Frank Klemm, de.comp.os.unix.discussion

From alex at fsn.hu Sat Jun 28 12:57:20 2003
From: alex at fsn.hu (Alex Beregszaszi)
Date: Sat, 28 Jun 2003 12:57:20 +0200
Subject: [MPlayer-G2-dev] gui - skin engine
In-Reply-To: <003b01c33bc0$3a1f5f40$3eb0e3d9@oemcomputer>
References: <003b01c33bc0$3a1f5f40$3eb0e3d9@oemcomputer>
Message-ID: <20030628125720.66a670be.alex@fsn.hu>

Hi,

> Imo you only need a mouse driver, no X. The gui could be rendered just like the OSD. vf_gui ;)

very cool idea, imho

the vo-s should pass the mouse actions too (probably switchable), so vf_gui could eventually work with every vo.

but the main point here is that we should make the skin loader individual and smart ;)

--
Alex Beregszaszi (MPlayer Core Developer -- http://www.mplayerhq.hu/)

From FabianFranz at gmx.de Mon Jun 30 13:07:13 2003
From: FabianFranz at gmx.de (Fabian Franz)
Date: Mon, 30 Jun 2003 13:07:13 +0200
Subject: [MPlayer-G2-dev] gui - skin engine
In-Reply-To: <20030628125720.66a670be.alex@fsn.hu>
References: <003b01c33bc0$3a1f5f40$3eb0e3d9@oemcomputer> <20030628125720.66a670be.alex@fsn.hu>
Message-ID: <200306301307.13731.FabianFranz@gmx.de>

On Saturday, 28 June 2003 12:57, Alex Beregszaszi wrote:
> Hi,
>
> > Imo you only need a mouse driver, no X. The gui could be rendered just like the OSD. vf_gui ;)
>
> very cool idea, imho
>
> the vo-s should pass the mouse actions too (probably switchable), so vf_gui could eventually work with every vo

And if the gui is also controllable by keyboard, it can then work with every vo...

I think this could also solve many other issues, like:
- still frames
- a percentage display while caching (important for browser plugins & co)

So I would like the video driver to be initialised from the beginning, so that such status messages and the gui are possible.

Also important for a new gui, for me:
- the possibility to make one "bar" extendable over the whole size, so that it grows and shrinks with the parent window (important for a mini-gui in a browser plugin or similar)

> but the main point here is that we should make the skin loader individual and smart ;)

:-))

cu

Fabian