[FFmpeg-devel] Suggestions on best way to add capture timestamp functionality over RTSP

Rūdolfs Bundulis rudolfs.bundulis at gmail.com
Sat Mar 26 16:52:17 EET 2022


Hi,

I was looking for some advice on how to best add capture timestamp support
to FFmpeg (namely the RTP muxer) so that it would be accepted as a patch.
I'll try to explain the rationale and what I see as an MVP, and would like
to get some feedback.

Currently, a lot of live streaming stacks use RTMP onFI messages to signal
the mapping of capture timestamps to PTS, but with codecs like Opus / AV1 /
... becoming more and more popular, using RTMP becomes harder, since none
of them are supported in RTMP. Some vendors add extensions to RTMP to be
able to ingest these codecs, while RTSP already has muxers for most of
them; sadly, though, it lacks a good mechanism to send capture timestamps
that would help estimate A/V desync and end-to-end latency.

So, there is this RTP header extension (
https://github.com/webrtc/webrtc-org/blob/gh-pages/experiments/rtp-hdrext/abs-capture-time/index.md)
and I just found that FFmpeg now supports AV_PKT_DATA_S12M_TIMECODE side
data, which would allow passing the capture time all the way through from
the capture device to the muxer (the lack of which was the main obstacle
that kept me from evaluating this earlier).
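For reference, the extension data in that spec is a 64-bit NTP timestamp in
Q32.32 fixed-point format (optionally followed by a signed Q32.32 estimated
clock offset). A minimal sketch of converting a Unix wall-clock time in
microseconds into that format - the helper name is mine, not existing
FFmpeg code:

```c
#include <stdint.h>

/* Seconds between the NTP epoch (1900) and the Unix epoch (1970). */
#define NTP_UNIX_EPOCH_OFFSET 2208988800ULL

/* Hypothetical helper: convert a Unix timestamp in microseconds into the
 * Q32.32 NTP fixed-point value carried by abs-capture-time. The upper 32
 * bits are whole seconds since 1900, the lower 32 bits are the fraction
 * of a second. */
static uint64_t unix_us_to_ntp_q32(uint64_t unix_us)
{
    uint64_t sec  = unix_us / 1000000 + NTP_UNIX_EPOCH_OFFSET;
    uint64_t frac = ((unix_us % 1000000) << 32) / 1000000;
    return (sec << 32) | frac;
}
```

E.g. half a second past the Unix epoch maps to seconds 2208988800 with
fraction 0x80000000.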

So, what I'd like to do, and get accepted, is:
1) Add a feature to rtpenc.c, controlled by a flag which would be off by
default, which generates the abs-capture-time header extension if
AV_PKT_DATA_S12M_TIMECODE side data is present. The spec says that one
should not send the header on every packet, which makes sense, so I assume
I need an on/off flag and a frequency parameter, or a single parameter
which just does nothing when defaulted to -1? In that case, what would be a
good unit for the frequency of the header (since potentially this could be
used with any kind of media) - milliseconds? Or maybe leave the frequency
up to the capture device?

2) Add a way to actually embed this side data into a captured stream. As
far as I can see, decklink is the only device which currently generates
this kind of side data, but unfortunately only from the SMPTE timecode
embedded in the source signal. I have access to some decklink cards, but I
certainly am not able to set up a signal chain with embedded SMPTE
timecode. So I see a couple of options here:
    a) add a "wallclock" value to the decklink "timecode_format" option,
which would generate the side data value from the wall clock
    b) add a filter which just adds AV_PKT_DATA_S12M_TIMECODE side data to
any packet going through (possibly with a configurable frequency).
The latter of course suffers from the fact that all latencies up to the
filter are lost, but then again it could be combined with any type of
input device.
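For the rtpenc side of 1), the extension would presumably be emitted as an
RFC 8285 one-byte-header block in the RTP header, with the extension ID
negotiated via extmap in the SDP. A rough, self-contained sketch of what
the packing could look like - the function name and fixed buffer layout
are my assumptions, not existing FFmpeg code:

```c
#include <stdint.h>

/* Hypothetical sketch: write an RFC 8285 one-byte-header extension block
 * carrying the 8-byte abs-capture-time NTP timestamp under extension id
 * ext_id (1..14). Writes 16 bytes into buf and returns that length. */
static int write_abs_capture_time_ext(uint8_t *buf, int ext_id,
                                      uint64_t ntp_ts)
{
    buf[0] = 0xBE;                   /* "defined by profile" magic 0xBEDE */
    buf[1] = 0xDE;
    buf[2] = 0;
    buf[3] = 3;                      /* extension length: 3 32-bit words  */
    buf[4] = (ext_id << 4) | 7;      /* element header: id + (len - 1)    */
    for (int i = 0; i < 8; i++)      /* timestamp, big-endian             */
        buf[5 + i] = (uint8_t)(ntp_ts >> (56 - 8 * i));
    buf[13] = buf[14] = buf[15] = 0; /* pad the last 32-bit word          */
    return 16;
}
```

The padding at the end is required because the extension body must be a
multiple of 32 bits; 1 id/length byte + 8 data bytes leaves 3 bytes of pad.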

It would be great to get some feedback on this and to understand what would
make a good patch, functionality-wise, that would actually get accepted.

