[FFmpeg-devel] [PATCH v2] avformat/mpegts: set data broadcast streams as such

Jan Ekström jeebjp at gmail.com
Mon Apr 25 14:33:48 EEST 2022


On Mon, Apr 25, 2022 at 1:36 PM Jan Ekström <jeebjp at gmail.com> wrote:
>
> On Tue, Apr 19, 2022 at 11:06 PM Marton Balint <cus at passwd.hu> wrote:
> >
> >
> >
> > On Tue, 19 Apr 2022, Jan Ekström wrote:
> >
> > > On Tue, Apr 19, 2022 at 1:13 PM Jan Ekström <jeebjp at gmail.com> wrote:
> > >>
> > >> On Tue, Apr 19, 2022 at 3:00 AM Marton Balint <cus at passwd.hu> wrote:
> > >> >
> > >> >
> > >> >
> > >> > On Thu, 14 Apr 2022, Jan Ekström wrote:
> > >> >
> > >> > > On Mon, Apr 11, 2022 at 1:50 PM Jan Ekström <jeebjp at gmail.com> wrote:
> > >> > >>
> > >> > >> From: Jan Ekström <jan.ekstrom at 24i.com>
> > >> > >>
> > >> > >> Additionally, they should not be probed, as this is essentially
> > >> > >> various types of binary data.
> > >> > >>
> > >> > >> Signed-off-by: Jan Ekström <jan.ekstrom at 24i.com>
> > >> > >> ---
> > >> > >
> > >> > > Ping.
> > >> > >
> > >> > > Basically, this checks at the end of the per-stream loop in PMT
> > >> > > parsing whether we still have an unknown stream with a private
> > >> > > stream type, and if so cancels the stop of parsing that usually
> > >> > > occurs once a PMT is hit. Instead, the logic continues parsing
> > >> > > further. When an SDT is then found and a PMT for that program has
> > >> > > already been received, header reading stops at that point.
> > >> >
> > >> > But why does it matter when the initial parsing is stopped? It stops
> > >> > at the first PMT right now; nobody expects it to find all the
> > >> > programs and all the streams, or all the stream codecs/parameters.
> > >> >
> > >> > I am saying that, ideally, the ts->stop_parse magic should not need
> > >> > to be changed to fix your issue; when an SDT with the broadcast
> > >> > descriptor is detected, that should stop any existing data stream
> > >> > parsing. Do you know why it does not work like that?
> > >> >
> > >>
> > >> If the codec is unknown after header parsing, the general parsing
> > >> logic is utilized to probe which codec that unknown stream might
> > >> contain, so more data is read from the stream, which can cause
> > >> further delays.
> > >>
> > >> If the codec is already known to be data, no such probing occurs.
> > >>
> > >> Basically, the idea is to figure out during header parsing, if
> > >> possible, whether a stream is a data stream.
> > >>
> > >
> > > I just double-checked, and the difference is whether
> > > max_stream_analyze_duration gets utilized in
> > > libavformat/demux.c::avformat_find_stream_info.
> > >
> > > If a stream is marked as unknown, it will get analyzed for longer. If
> > > it is marked with a known "codec", it gets through quicker.
> >
> > The part of the patch which parses the SDT and sets the codec is fine.
> > But the fact that you have to set the codec during mpegts_read_header
> > to make it stop parsing definitely looks like a bug somewhere in the
> > code. It should be enough to set the codec sometime later, in
> > mpegts_read_packet() (which is called during avformat_find_stream_info()).
> >
> > Or, to make it even stranger: comment out handle_packets() in
> > mpegts_read_header, and it will work just fine. So if nothing is parsed
> > in mpegts_read_header, it works; if something is parsed, it does not.
> > Kind of unexpected...
> >
> > Regards,
> > Marton
>
> Hi,
>
> So I poked at this again and did the changes you requested. The result
> is that if the "continue parsing until the SDT if you have these sorts
> of streams in-band" logic is removed, the FATE test ends up with
> "codec_name=unknown". That makes the test rather useless, as it no
> longer actually shows that the stream is noted as a data stream.
> Additionally, I thought the logic made sense: as much as I dislike
> having information from outside the PMT utilized for PMT purposes,
> that is effectively what this SDT descriptor is utilized for. Given
> that I specifically limited this logic to unknown streams within a
> specific stream identifier range, the effect of this change would have
> been far from global as well.
>
> I have a hunch that information is copied from the lavf-internal codec
> context to codecpar at one point, and then no further synchronization
> is attempted.

So, when checking the following diff:

diff --git a/libavformat/demux.c b/libavformat/demux.c
index ad7b5dbf83..6e483acb10 100644
--- a/libavformat/demux.c
+++ b/libavformat/demux.c
@@ -2909,6 +2909,11 @@ int avformat_find_stream_info(AVFormatContext *ic, AVDictionary **options)
         FFStream *const sti = ffstream(st);
         const char *errmsg;
 
+        av_log(ic, AV_LOG_VERBOSE,
+               "stream %u (%sinitialized), codec context codec: %s, codecpar codec: %s\n",
+               i, sti->avctx_inited ? "" : "not ",
+               avcodec_get_name(sti->avctx->codec_id), avcodec_get_name(st->codecpar->codec_id));
+
         /* if no packet was ever seen, update context now for has_codec_parameters */
         if (!sti->avctx_inited) {
             if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO &&

...the result was that:
- the codec context was already noted as initialized,
- the codecpar codec was bin_data,
- the codec context codec was still none.

I then hacked it to also do the update when sti->need_context_update
is set, in the if statement visible in the diff, and that led to the
end result being updated correctly.
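
In pseudo-patch form, the hack amounts to something like this (a
minimal sketch, assuming the refresh sits next to the
!sti->avctx_inited branch shown in the diff; not the actual committed
change):

    /* If the demuxer requested a context update after the initial
     * initialization, re-copy codecpar into the internal codec
     * context so the new codec id becomes visible here as well. */
    if (sti->avctx_inited && sti->need_context_update) {
        int ret = avcodec_parameters_to_context(sti->avctx, st->codecpar);
        if (ret < 0)
            return ret;
        sti->need_context_update = 0;
    }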

Thus it seems like read_frame_internal - or some other function -
should probably check whether a context update was requested by the
demuxer. That way it might be possible to stop the processing earlier
as well.
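
For illustration, one place such a check could live (hypothetical
placement and untested, so treat it as a sketch rather than a patch):

    /* Hypothetical: somewhere early in the packet-reading path, pick
     * up a demuxer-requested context update for every stream, so that
     * has_codec_parameters() can succeed sooner and analysis does not
     * have to run all the way to max_stream_analyze_duration. */
    for (unsigned i = 0; i < ic->nb_streams; i++) {
        AVStream *const st = ic->streams[i];
        FFStream *const sti = ffstream(st);
        if (sti->need_context_update) {
            int ret = avcodec_parameters_to_context(sti->avctx, st->codecpar);
            if (ret < 0)
                return ret;
            sti->need_context_update = 0;
        }
    }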

Jan

