[FFmpeg-devel] [PATCH] Fix potential infinite discard loop.

Don Moir donmoir at comcast.net
Sat Feb 4 20:58:34 CET 2012


> On Sat, Feb 04, 2012 at 02:27:32PM -0500, Don Moir wrote:
>> 2) make sure os->keyframe_seek is set back to 0 always and don't depend
>> on any return value. Currently it is only set back to 0 if the return
>> value from ff_seek_frame_binary is < 0. This is not correct. This fixed
>> all the remaining weirdness.
>
> That is wrong, os->keyframe_seek must stay 1 when we want to seek to a
> keyframe, otherwise the code that checks it in ogg_read_packet becomes
> dead code.
> I am not 100% sure that that code in ogg_read_packet is necessary,
> though I would suspect it is when a non-keyframe and then a keyframe are
> packed together into a single ogg packet.

All I can tell you about that is that if I don't reset os->keyframe_seek 
back to 0, it breaks a lot of files. I will see if I can come up with more 
on that. By setting it back to 0, all my ogg files work perfectly.
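
To make that concrete, here is roughly what I am testing in ogg_read_seek. 
Treat it as a sketch against my local tree rather than a literal diff - the 
surrounding code may differ in yours - with the only change being that 
keyframe_seek is cleared unconditionally after the binary seek instead of 
only when it fails:

static int ogg_read_seek(AVFormatContext *s, int stream_index,
                         int64_t timestamp, int flags)
{
    struct ogg *ogg = s->priv_data;
    struct ogg_stream *os = ogg->streams + stream_index;
    int ret;

    /* as before: ask for a keyframe when seeking in a video stream */
    if (s->streams[stream_index]->codec->codec_type == AVMEDIA_TYPE_VIDEO
        && !(flags & AVSEEK_FLAG_ANY))
        os->keyframe_seek = 1;

    ret = ff_seek_frame_binary(s, stream_index, timestamp, flags);

    /* changed: clear the flag unconditionally instead of only when
     * ff_seek_frame_binary fails; leaving it at 1 here is what breaks
     * playback after seeking on a lot of my files */
    os->keyframe_seek = 0;

    return ret;
}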

>> 3) call ogg_reset after the seek - this is not perfect and there is
>> probably more to it, but without it the first packet read will contain
>> stale information as mentioned above. You will still get AV_NOPTS_VALUE
>> on the first read, which is not right either. I think calling ogg_reset
>> and setting the lastpts and lastdts in the ogg_stream may do it.
>
> ogg_read_timestamp calls ogg_reset, thus this _should_ not be necessary.
>

ogg_read_timestamp will never be reached if there are sufficient 
index_entries. Those entries are built on the fly during normal packet 
reading, so if read_timestamp is never reached, ogg_reset is never called 
either (see the sketch below).
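
For illustration, this is a compressed paraphrase of how I read the 
ff_seek_frame_binary / ff_gen_search path - from memory, with the local 
variable names (pos_min, ts_min, pos_max, pos_limit, target_ts) abbreviated, 
so don't take it as verbatim library code:

/* ff_seek_frame_binary(): when st->index_entries is populated, the index
 * supplies both search bounds before ff_gen_search() is called */
index = av_index_search_timestamp(st, target_ts, flags | AVSEEK_FLAG_BACKWARD);
if (index >= 0) {
    pos_min = st->index_entries[index].pos;       /* lower bound */
    ts_min  = st->index_entries[index].timestamp;
}
index = av_index_search_timestamp(st, target_ts, flags & ~AVSEEK_FLAG_BACKWARD);
if (index >= 0) {
    pos_max   = st->index_entries[index].pos;     /* upper bound */
    pos_limit = pos_max - st->index_entries[index].min_distance;
}
/* ff_gen_search() only calls the demuxer's read_timestamp callback (our
 * ogg_read_timestamp, the one place ogg_reset runs on the seek path) to
 * fill in a missing bound or to bisect between pos_min and pos_limit.
 * With both bounds taken from the index there is nothing left to probe,
 * so ogg_read_timestamp - and therefore ogg_reset - is never called. */

So once enough index entries have been built up by normal packet reading, 
the seek is resolved from the index alone, and the stale lastpts/lastdts 
described in point 3 is exactly what the next read returns.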


