[Ffmpeg-devel-irc] ffmpeg-devel.log.20191023

burek burek at teamnet.rs
Thu Oct 24 03:05:03 EEST 2019


[00:28:16 CEST] <taliho> :jamrial is it acceptable to add a sleep inside the internal send_frame of the encoder? 
[00:28:25 CEST] <BtbN> no
[00:29:09 CEST] <taliho> wouldn't you have this kind of behaviour if the encoder is working asynchronously? 
[00:29:32 CEST] <BtbN> A hard sleep? No
[00:29:33 CEST] <taliho> what if there is some internal initialization time
[00:29:47 CEST] <BtbN> That's what the init function is for
[00:30:08 CEST] <BtbN> But I can't think of any situation where sleeping in a hot function is ever acceptable
[00:30:19 CEST] <BtbN> Or really, ANY function in an encoder/decoder
[00:30:38 CEST] <taliho> ok
[00:31:41 CEST] <philipl> BtbN: These boards have the normal nvdec/nvenc hardware, but for them, nvidia are using completely different software. They don't release an equivalent of the desktop drivers. Instead they have these v4l2m2m drivers...
[00:31:45 CEST] <philipl> It's silly.
[00:32:08 CEST] <BtbN> Yeah, and from the sounds of it, the driver is pretty hot garbage as well.
[00:32:14 CEST] <philipl> yes.
[00:32:48 CEST] <philipl> the kernel part is open-source hot garbage though.
[00:32:50 CEST] <taliho> in their sample code, they dequeue the compressed packets in a separate thread 
[00:35:44 CEST] <taliho> in our code, we send a frame and then dequeue packets in the same thread. so I feel that if the encoder is working asynchronously it may return EAGAIN if the device is busy
[00:40:37 CEST] <BtbN> Why would it ever return EAGAIN after you took all frames out of it that are in there?
[00:40:47 CEST] <BtbN> When sending in new ones
[00:44:04 CEST] <taliho> in the send_frame call you check, are there any buffers to push this frame. if there is you push the frame. otherwise it returns EAGAIN
[00:44:36 CEST] <taliho> you only take out the frames from the buffers in the send_frame call 
[00:47:21 CEST] <taliho> in the avcodec_receive_packet call you remove compressed packets from a different buffer on the hardware device
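The buffer handling taliho describes can be sketched as a toy model (the struct and function names here are illustrative stand-ins, not the real v4l2_m2m code, and the "device tick" is a simplification): send_frame only succeeds while the device has a free output buffer and otherwise reports EAGAIN, while receive_packet drains compressed packets from a separate capture-side buffer.

```c
#include <assert.h>
#include <errno.h>

#define NUM_OUTPUT_BUFS 2   /* raw-frame buffers available on the device */

typedef struct ToyEnc {
    int frames_queued;   /* raw frames the device is still encoding */
    int packets_ready;   /* compressed packets waiting on the capture side */
} ToyEnc;

/* send_frame: queue the new frame if an output buffer is free;
 * otherwise report EAGAIN so the caller drains packets first. */
static int toy_send_frame(ToyEnc *e)
{
    if (e->frames_queued >= NUM_OUTPUT_BUFS)
        return -EAGAIN;      /* device busy: no free buffer */
    e->frames_queued++;
    return 0;
}

/* the device finishing a frame: a queued raw frame becomes a
 * compressed packet on the capture side, freeing its output buffer */
static void toy_device_tick(ToyEnc *e)
{
    if (e->frames_queued > 0) {
        e->frames_queued--;
        e->packets_ready++;
    }
}

/* receive_packet: drain one compressed packet from the capture side. */
static int toy_receive_packet(ToyEnc *e)
{
    if (e->packets_ready == 0)
        return -EAGAIN;      /* nothing encoded yet */
    e->packets_ready--;
    return 0;
}
```
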
[04:43:54 CEST] <taliho> :jamrial :BtbN to follow on send_frame returning EAGAIN... I suspect this was a bug in our v4l2_m2m code
[04:45:36 CEST] <taliho> some initialization code is being done after the first frame has been sent to the nano 
[04:46:26 CEST] <taliho> in receive_packet
[04:50:31 CEST] <taliho> moving this code to init so that it's called before send_frame solves the problem
[04:53:40 CEST] <jamrial> cool
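The ordering bug taliho found can be illustrated with a minimal sketch (hypothetical names; the real change is inside the v4l2_m2m code): setup work deferred into receive_packet leaves the device unconfigured when the first send_frame runs, so it looks busy and returns EAGAIN, whereas running the same setup in init avoids this.

```c
#include <assert.h>
#include <errno.h>

typedef struct ToyCtx {
    int configured;   /* has the device-side setup run yet? */
} ToyCtx;

/* the setup work that was previously deferred into receive_packet */
static void toy_setup(ToyCtx *c)
{
    c->configured = 1;
}

/* an unconfigured device is indistinguishable from a busy one */
static int toy_send_frame(ToyCtx *c)
{
    return c->configured ? 0 : -EAGAIN;
}

/* fixed ordering: setup runs in init, before any send_frame call */
static int toy_init(ToyCtx *c)
{
    toy_setup(c);
    return 0;
}
```
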
[09:32:52 CEST] <cone-199> ffmpeg 03Paul B Mahol 07master:dd239bdb65c6: avfilter/vf_vaguedenoiser: add more gray formats
[09:40:59 CEST] <cone-199> ffmpeg 03Paul B Mahol 07master:1f327f5d277c: avfilter/vf_bm3d: add gray14 format
[09:45:55 CEST] <cone-199> ffmpeg 03Paul B Mahol 07master:7832e05c35ee: avfilter/vf_lut2: fix typo, correctly support gray14
[09:50:43 CEST] <cone-199> ffmpeg 03Jun Zhao 07master:0e3d5bdc0802: lavfi/bilateral: Clean the option description and unused code
[09:55:26 CEST] <cone-199> ffmpeg 03Paul B Mahol 07master:ba7d55d3fc98: avfilter/vf_deband: add more gray formats
[10:22:57 CEST] <cone-199> ffmpeg 03Paul B Mahol 07master:1cdc805228c7: avfilter/vf_floodfill: add more gray formats
[10:22:59 CEST] <cone-199> ffmpeg 03Paul B Mahol 07master:8732eb124e56: avfilter/vf_floodfill: better fix for crash
[12:41:36 CEST] <cone-199> ffmpeg 03Paul B Mahol 07master:7df808ea8443: avfilter/settb: switch to activate
[14:59:40 CEST] <cone-199> ffmpeg 03Zhao Zhili 07master:eafc8afafcd6: avcodec/tests: add h265_levels to .gitignore
[14:59:41 CEST] <cone-199> ffmpeg 03Zhao Zhili 07master:11cfff04eda7: FATE/dnn: add .gitignore
[17:14:41 CEST] <taliho> anyone know if Marton is on irc? 
[17:18:28 CEST] <durandal_1707> jamrial: have any more comments to latest clamp patch?
[17:27:19 CEST] <jamrial> durandal_1707: no
[17:43:49 CEST] <cone-199> ffmpeg 03Paul B Mahol 07master:ac0f5f4c1717: avfilter/vf_maskedclamp: add x86 SIMD
[18:41:21 CEST] <durandal_1707> ubitux: found 144x144x144 cube file in the wild
[18:41:48 CEST] <ubitux> that's unfortunate
[18:42:08 CEST] <ubitux> downsample it to our max size and no one will notic
[18:42:10 CEST] <ubitux> +e
[18:58:39 CEST] <durandal_1707> ubitux: it's just 10MB more
[19:07:36 CEST] <taliho> :jamrial http://ffmpeg.org/pipermail/ffmpeg-devel/2019-October/252020.html
[19:07:49 CEST] <taliho> would appreciate if you could have a look when you have some time
[19:22:19 CEST] <durandal_1707> Lynne: what's status of vulkan and its filters? are you still working on it?
[19:24:37 CEST] <Lynne> waiting on jkqxz to okay the cuda interop
[19:40:47 CEST] <taliho> :jamrial thanks 
[19:51:31 CEST] <philipl> jkqxz: https://github.com/philipl/FFmpeg/pull/1/files
[19:51:49 CEST] <philipl> That's after addressing all the last feedback. The git history is a mess, but I will squash and adjust once it's finalised.
[19:56:06 CEST] <jkqxz> "if (src->hw_frames_ctx && dst->hw_frames_ctx) {" is true in the sw-map + upload to other device case (because the mapped sw frame has an associated hw_frames_ctx to keep track of the mapping).
[19:56:31 CEST] <jkqxz> Similarly downloading from one device to a frame mapped from another device.
[20:01:22 CEST] <philipl> That's why I added the additional checks on source_frames
[20:01:40 CEST] <philipl> Did I not achieve the result I wanted?
[20:02:00 CEST] <philipl> it should reject transfers if either src or dst is a derived frame context, meaning mapped frames. Yes?
[20:08:02 CEST] <philipl> jkqxz: ^
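The rejection logic philipl describes can be sketched like this (ToyFramesCtx and the check are illustrative stand-ins, not the actual patch code): a frames context derived from another one carries a non-NULL source_frames reference, which is how mapped frames are represented, and a transfer is refused when either side has one.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Toy stand-in for a hardware frames context: source_frames is
 * non-NULL when this context was derived from (mapped onto)
 * another one. */
typedef struct ToyFramesCtx {
    struct ToyFramesCtx *source_frames;
} ToyFramesCtx;

/* Reject device-to-device transfers involving mapped frames:
 * if either src or dst is a derived frames context, bail out. */
static int toy_transfer_check(const ToyFramesCtx *src,
                              const ToyFramesCtx *dst)
{
    if (src->source_frames || dst->source_frames)
        return -ENOSYS;   /* derived context: transfer not supported */
    return 0;
}
```
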
[20:11:45 CEST] <ubitux> durandal_1707: at this point we probably want to allocate dynamically according to the input
[20:12:46 CEST] <durandal_1707> ubitux: but that will make filtering much slower
[20:13:32 CEST] <ubitux> it's already on the heap, i'm just asking to make that alloc conditional on the input
[20:13:54 CEST] <ubitux> one heap alloc of N*N*N, but with N from the input instead of hardcoded
[20:16:52 CEST] <durandal_1707> as already said, it cannot be done without making the filter significantly slower, try it
[20:17:05 CEST] <durandal_1707> otherwise i would do it already
[20:26:41 CEST] <jkqxz> philipl:  But it should be able to upload them using the SW transfer on the other context.
[20:49:02 CEST] <jkqxz> Hmm, no.  I'm confusing it with the hw_frames_ctx carried through AVFilterLinks.  The frame itself carries it in the HWMapDescriptor, so it doesn't interfere.
[21:08:45 CEST] <philipl> jkqxz: I'm not sure what the conclusion of your line of thinking is. :-) I'd at least start with establishing if the logic I wrote is safe. And if there are scenarios where a frame from a derived context can be used, we could add support for it. Given the specific devices we support and the mapping and transfer combinations we support, I don't think any real case exists.
[21:10:40 CEST] <jkqxz> The case I was thinking of (single-copy cross-device via sw-mapping) is not a problem.
[21:11:08 CEST] <jkqxz> (For example: "./ffmpeg_g -y -init_hw_device vaapi:/dev/dri/renderD129 -init_hw_device vaapi:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_device vaapi0 -hwaccel_output_format vaapi -i in.mp4 -an -filter_hw_device vaapi1 -vf 'hwdownload,format=nv12,hwmap=mode=write+overwrite+direct:reverse=1' -c:v h264_vaapi out.mp4".)
[21:13:18 CEST] <philipl> OK.
[21:15:28 CEST] <jkqxz> doc/errno.txt suggests that ENOTSUP is not suitable for use everywhere.
[21:15:50 CEST] <philipl> OK. I can switch that to ENOSYS.
[21:16:23 CEST] <philipl> ENOTSUP seemed semantically closest but we use ENOSYS in other places for it.
[21:19:11 CEST] <jkqxz> Without an external query I'm not sure that transfer_data_hw_supported is doing anything any more?
[21:19:54 CEST] <philipl> It's still used.
[21:19:55 CEST] <jkqxz> Using the av_hwframe_map() approach of just calling the other direction function if the first one returns ENOSYS might be simpler.
[21:20:09 CEST] <philipl> Fair.
[21:20:16 CEST] <philipl> I can do that.
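The fallback pattern jkqxz suggests (modeled on how av_hwframe_map() tries both directions) could look roughly like this; the function names are hypothetical stand-ins for the per-device transfer implementations, each of which reports -ENOSYS when it cannot handle the combination:

```c
#include <assert.h>
#include <errno.h>

/* stand-ins for the src- and dst-side transfer implementations */
static int transfer_via_src(int supported) { return supported ? 0 : -ENOSYS; }
static int transfer_via_dst(int supported) { return supported ? 0 : -ENOSYS; }

/* Try one direction first; only if it reports ENOSYS (meaning
 * "not implemented for this combination") fall through to the
 * other side. Any other error is returned as-is. */
static int toy_transfer(int src_ok, int dst_ok)
{
    int ret = transfer_via_src(src_ok);
    if (ret != -ENOSYS)
        return ret;                  /* success, or a real error */
    return transfer_via_dst(dst_ok); /* other direction as fallback */
}
```

This removes the need for a separate "is this direction supported" query, since the implementation itself answers via ENOSYS.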
[21:27:19 CEST] <jkqxz> For the doc, I'm not sure it says quite the right thing.  A device /is/ required when the filter is initialised (coming directly from the user or on the inlink to derive from).
[21:28:27 CEST] <jkqxz> Dunno what the right phrasing is there.
[21:30:08 CEST] <jkqxz> Having thought about that more, I agree with your comment above about the derived frames contexts.
[21:30:44 CEST] <jkqxz> So generic parts LGTM after that.
[21:34:52 CEST] <philipl> jkqxz: thanks. I'll adjust the ENOSYS and remove the transfer direction supported function. For the docs, i tried to copy the phrasing from hwmap. If I've diverged, I'll correct it.
[21:41:21 CEST] <Lynne> https://paste.sr.ht/~sircmpwn/23e31a29f427066ef261b2ffa7fd9bf46530d904
[21:41:37 CEST] <Lynne> nice, gitlab started spying
[21:42:21 CEST] Action: ddevault stirs
[21:47:42 CEST] <ubitux> jamrial: http://coverage.ffmpeg.org/
[21:47:48 CEST] <ubitux> i think you asked me to fix that a while ago
[21:48:02 CEST] <ubitux> i moved to gcovr since lcov does not seem to work anymore
[21:48:06 CEST] <jamrial> ubitux: yes, awesome, thank you :D
[21:48:24 CEST] <ubitux> i'll check if it still works over time
[21:48:47 CEST] <ubitux> but i had to look into this for another project, and gcovr did the trick so...
[21:49:38 CEST] <ubitux> this is basically the result of gcovr -r . --html-details -o foo.html after a make fate on a --toolchain=gcov build
[21:49:54 CEST] <ubitux> (with debug on, and CPUFLAGS=none for the run)
[21:50:56 CEST] <ubitux> we should probably add a rule in the Makefile for that (maybe i'll send a patch) and drop the old wonky lcov rules
[21:51:04 CEST] <ubitux> anyway, gtg, cya
[21:51:35 CEST] <jamrial> later
[00:00:00 CEST] --- Thu Oct 24 2019


More information about the Ffmpeg-devel-irc mailing list