[FFmpeg-devel] [PATCH] ffv1enc_vulkan: remove arbitrary limitation of the number of slices
Lynne
dev at lynne.ee
Fri Nov 22 03:06:31 EET 2024
On 11/21/24 23:13, Jerome Martinez wrote:
> On 21/11/2024 at 20:02, Lynne via ffmpeg-devel wrote:
>> + if (f->num_h_slices <= 0 && f->num_v_slices <= 0) {
>> f->num_h_slices = 32;
>> - if (f->num_v_slices <= 0)
>> f->num_v_slices = 32;
>> + } else if (f->num_h_slices) {
>
> + } else if (f->num_h_slices && f->num_v_slices <= 0) {
>
>
> Without this addition, the total still ends up at 1024 when both
> -slices_h and -slices_v are used, which is not the expected result.
>
>> + f->num_v_slices = 1024 / f->num_h_slices;
>> + } else if (f->num_v_slices) {
>
> + } else if (f->num_v_slices && f->num_h_slices <= 0) {
>
>
> Without this addition, the total still ends up at 1024 when both
> -slices_h and -slices_v are used, which is not the expected result.
>
>> + f->num_h_slices = 1024 / f->num_v_slices;
>> + }
>
> While we are at it, both "1024" values above should be replaced by
> f->slices ? f->slices : 1024
> (or something similar, I am not sure; -slices comes from the default
> options)
>
> in order to properly handle the case where -slices and -slices_h, or
> -slices and -slices_v, are used together.
> (If I am not mistaken, when all three options are used, -slices_h and
> -slices_v take precedence, and I am fine with that.)
>
> Users of the software FFV1 encoder use -slices, and ignoring this
> option for the HW encoder would be misleading.
I'll think about it, but I dislike the -slices option.
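For reference, the combined selection you're describing would look roughly
like this (untested sketch; I'm assuming the -slices value ends up in
f->slices and stays 0 when unset, so the exact field name may be off):

    /* Untested sketch: -slices_h/-slices_v take precedence, and -slices
     * (if set) replaces the 1024 fallback when only one dimension is given. */
    int target = f->slices > 0 ? f->slices : 1024;

    if (f->num_h_slices <= 0 && f->num_v_slices <= 0) {
        f->num_h_slices = 32;
        f->num_v_slices = 32;
    } else if (f->num_h_slices > 0 && f->num_v_slices <= 0) {
        f->num_v_slices = target / f->num_h_slices;
    } else if (f->num_v_slices > 0 && f->num_h_slices <= 0) {
        f->num_h_slices = target / f->num_v_slices;
    }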
> Or maybe use the same algorithm as the one in the SW encoder when
> -slices alone is used, in order to get the same behavior.
That's simply not happening.
> Generally speaking, changing the number of slices has a huge impact,
> even if having several frames in flight could mean that there are
> still 1024 slices in the workflow:
> replacing -slices_h 32 -slices_v 32 with -slices_h 16 -slices_v 16
> divides the encoder performance by 3 with my 6K 16-bit test file and
> by 2 with my 2K 10-bit test file.
> Is it planned to mitigate that in a later patch?
> Having fewer slices may be important for performance; my guess is that
> the performance is limited (not proportional to the pixel count) with
> 2K content because slices of 64x64 are too small.
Mitigate what? It's a hardware/driver/compiler limitation, a bug, or
something else. We can't do anything about it.
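(For context on the numbers: the rough per-slice size simply follows from
dividing the frame by the slice grid. Back-of-the-envelope only; the real
encoder may round slice boundaries differently.)

    #include <stdio.h>

    /* Back-of-the-envelope slice size for a given grid; only meant to show
     * the order of magnitude, not the encoder's actual boundary rounding. */
    static void approx_slice_size(int w, int h, int nh, int nv)
    {
        printf("%dx%d / %dx%d grid -> ~%dx%d pixels per slice\n",
               w, h, nh, nv, (w + nh - 1) / nh, (h + nv - 1) / nv);
    }

    /* approx_slice_size(2048, 1080, 32, 32) -> ~64x34 pixels per slice */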
>> - f->num_h_slices = FFMIN(f->num_h_slices, avctx->width);
>> - f->num_v_slices = FFMIN(f->num_v_slices, avctx->height);
>> + if (f->num_h_slices * f->num_v_slices > 1024) {
>> + av_log(avctx, AV_LOG_ERROR, "Too many slices (%i), maximum supported "
>> + "by the standard is 1024\n",
>
> + av_log(avctx, AV_LOG_ERROR, "Too many slices (%i), maximum supported is 1024\n",
>
>
>
> 1024 is an arbitrary limitation of the current FFV1 encoder & decoder
> in FFmpeg, not of the standard. It was raised from 256 to 1024 in
> FFmpeg in 2017 without touching the (draft of the) spec, and it could
> be changed again in the future if there is interest. (That said, even
> with 8K, 1024 slices means slices of 256x256, so I am personally fine
> with keeping this limitation; I am just saying it does not come from
> the standard.)
In that case I'll leave 1024 as the default but allow users to specify a
greater number of slices.
The output will still be compatible with the standard, and with any
future decoders.
But someone else can update the software decoder to handle this.
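Concretely, I'd expect the hard error in the hunk above to turn into
something along these lines (sketch only; the exact wording, and whether an
extra opt-in switch is needed, is still open):

    /* Sketch: keep 1024 as the compatibility default, but only warn instead
     * of erroring out when the user explicitly asks for a larger grid. */
    if (f->num_h_slices * f->num_v_slices > 1024) {
        av_log(avctx, AV_LOG_WARNING,
               "%i slices is more than the 1024 currently handled by "
               "FFmpeg's own FFV1 decoder; the bitstream itself stays "
               "spec-compliant\n",
               f->num_h_slices * f->num_v_slices);
    }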