[FFmpeg-cvslog] doc/filters: Shift CUDA-based filters to own section.
Danil Iashchenko
git at videolan.org
Mon Mar 17 09:13:43 EET 2025
ffmpeg | branch: master | Danil Iashchenko <danyaschenko at gmail.com> | Sun Mar 16 19:15:08 2025 +0000| [a1c6ca1683708978c24ed8a632bb29fafc9dacdf] | committer: Gyan Doshi
doc/filters: Shift CUDA-based filters to own section.
> http://git.videolan.org/gitweb.cgi/ffmpeg.git/?a=commit;h=a1c6ca1683708978c24ed8a632bb29fafc9dacdf
---
doc/filters.texi | 3229 ++++++++++++++++++++++++++++--------------------------
1 file changed, 1651 insertions(+), 1578 deletions(-)
diff --git a/doc/filters.texi b/doc/filters.texi
index 0ba7d3035f..37b8674756 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -8619,45 +8619,6 @@ Set planes to filter. Default is first only.
This filter supports all the above options as @ref{commands}.
- at section bilateral_cuda
-CUDA accelerated bilateral filter, an edge preserving filter.
-This filter is mathematically accurate thanks to the use of GPU acceleration.
-For best output quality, use one to one chroma subsampling, i.e. yuv444p format.
-
-The filter accepts the following options:
- at table @option
- at item sigmaS
-Set sigma of gaussian function to calculate spatial weight, also called sigma space.
-Allowed range is 0.1 to 512. Default is 0.1.
-
- at item sigmaR
-Set sigma of gaussian function to calculate color range weight, also called sigma color.
-Allowed range is 0.1 to 512. Default is 0.1.
-
- at item window_size
-Set window size of the bilateral function to determine the number of neighbours to loop on.
-If the number entered is even, one will be added automatically.
-Allowed range is 1 to 255. Default is 1.
- at end table
- at subsection Examples
-
- at itemize
- at item
-Apply the bilateral filter on a video.
-
- at example
-./ffmpeg -v verbose \
--hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
--init_hw_device cuda \
--filter_complex \
-" \
-[0:v]scale_cuda=format=yuv444p[scaled_video];
-[scaled_video]bilateral_cuda=window_size=9:sigmaS=3.0:sigmaR=50.0" \
--an -sn -c:v h264_nvenc -cq 20 out.mp4
- at end example
-
- at end itemize
-
@section bitplanenoise
Show and measure bit plane noise.
@@ -9243,58 +9204,6 @@ Only deinterlace frames marked as interlaced.
The default value is @code{all}.
@end table
- at section bwdif_cuda
-
-Deinterlace the input video using the @ref{bwdif} algorithm, but implemented
-in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec
-and/or nvenc.
-
-It accepts the following parameters:
-
- at table @option
- at item mode
-The interlacing mode to adopt. It accepts one of the following values:
-
- at table @option
- at item 0, send_frame
-Output one frame for each frame.
- at item 1, send_field
-Output one frame for each field.
- at end table
-
-The default value is @code{send_field}.
-
- at item parity
-The picture field parity assumed for the input interlaced video. It accepts one
-of the following values:
-
- at table @option
- at item 0, tff
-Assume the top field is first.
- at item 1, bff
-Assume the bottom field is first.
- at item -1, auto
-Enable automatic detection of field parity.
- at end table
-
-The default value is @code{auto}.
-If the interlacing is unknown or the decoder does not export this information,
-top field first will be assumed.
-
- at item deint
-Specify which frames to deinterlace. Accepts one of the following
-values:
-
- at table @option
- at item 0, all
-Deinterlace all frames.
- at item 1, interlaced
-Only deinterlace frames marked as interlaced.
- at end table
-
-The default value is @code{all}.
- at end table
-
@section ccrepack
Repack CEA-708 closed captioning side data
@@ -9408,48 +9317,6 @@ ffmpeg -f lavfi -i color=c=black:s=1280x720 -i video.mp4 -shortest -filter_compl
@end example
@end itemize
- at section chromakey_cuda
-CUDA accelerated YUV colorspace color/chroma keying.
-
-This filter works like normal chromakey filter but operates on CUDA frames.
-for more details and parameters see @ref{chromakey}.
-
- at subsection Examples
-
- at itemize
- at item
-Make all the green pixels in the input video transparent and use it as an overlay for another video:
-
- at example
-./ffmpeg \
- -hwaccel cuda -hwaccel_output_format cuda -i input_green.mp4 \
- -hwaccel cuda -hwaccel_output_format cuda -i base_video.mp4 \
- -init_hw_device cuda \
- -filter_complex \
- " \
- [0:v]chromakey_cuda=0x25302D:0.1:0.12:1[overlay_video]; \
- [1:v]scale_cuda=format=yuv420p[base]; \
- [base][overlay_video]overlay_cuda" \
- -an -sn -c:v h264_nvenc -cq 20 output.mp4
- at end example
-
- at item
-Process two software sources, explicitly uploading the frames:
-
- at example
-./ffmpeg -init_hw_device cuda=cuda -filter_hw_device cuda \
- -f lavfi -i color=size=800x600:color=white,format=yuv420p \
- -f lavfi -i yuvtestsrc=size=200x200,format=yuv420p \
- -filter_complex \
- " \
- [0]hwupload[under]; \
- [1]hwupload,chromakey_cuda=green:0.1:0.12[over]; \
- [under][over]overlay_cuda" \
- -c:v hevc_nvenc -cq 18 -preset slow output.mp4
- at end example
-
- at end itemize
-
@section chromanr
Reduce chrominance noise.
@@ -10427,38 +10294,6 @@ For example to convert the input to SMPTE-240M, use the command:
colorspace=smpte240m
@end example
- at section colorspace_cuda
-
-CUDA accelerated implementation of the colorspace filter.
-
-It is by no means feature complete compared to the software colorspace filter,
-and at the current time only supports color range conversion between jpeg/full
-and mpeg/limited range.
-
-The filter accepts the following options:
-
- at table @option
- at item range
-Specify output color range.
-
-The accepted values are:
- at table @samp
- at item tv
-TV (restricted) range
-
- at item mpeg
-MPEG (restricted) range
-
- at item pc
-PC (full) range
-
- at item jpeg
-JPEG (full) range
-
- at end table
-
- at end table
-
@section colortemperature
Adjust color temperature in video to simulate variations in ambient color temperature.
@@ -18988,84 +18823,6 @@ testsrc=s=100x100, split=4 [in0][in1][in2][in3];
@end itemize
- at anchor{overlay_cuda}
- at section overlay_cuda
-
-Overlay one video on top of another.
-
-This is the CUDA variant of the @ref{overlay} filter.
-It only accepts CUDA frames. The underlying input pixel formats have to match.
-
-It takes two inputs and has one output. The first input is the "main"
-video on which the second input is overlaid.
-
-It accepts the following parameters:
-
- at table @option
- at item x
- at item y
-Set expressions for the x and y coordinates of the overlaid video
-on the main video.
-
-They can contain the following parameters:
-
- at table @option
-
- at item main_w, W
- at item main_h, H
-The main input width and height.
-
- at item overlay_w, w
- at item overlay_h, h
-The overlay input width and height.
-
- at item x
- at item y
-The computed values for @var{x} and @var{y}. They are evaluated for
-each new frame.
-
- at item n
-The ordinal index of the main input frame, starting from 0.
-
- at item pos
-The byte offset position in the file of the main input frame, NAN if unknown.
-Deprecated, do not use.
-
- at item t
-The timestamp of the main input frame, expressed in seconds, NAN if unknown.
-
- at end table
-
-Default value is "0" for both expressions.
-
- at item eval
-Set when the expressions for @option{x} and @option{y} are evaluated.
-
-It accepts the following values:
- at table @option
- at item init
-Evaluate expressions once during filter initialization or
-when a command is processed.
-
- at item frame
-Evaluate expressions for each incoming frame
- at end table
-
-Default value is @option{frame}.
-
- at item eof_action
-See @ref{framesync}.
-
- at item shortest
-See @ref{framesync}.
-
- at item repeatlast
-See @ref{framesync}.
-
- at end table
-
-This filter also supports the @ref{framesync} options.
-
@section owdenoise
Apply Overcomplete Wavelet denoiser.
@@ -21516,11 +21273,9 @@ If the specified expression is not valid, it is kept at its current
value.
@end table
- at anchor{scale_cuda}
- at section scale_cuda
+ at section scale_vt
-Scale (resize) and convert (pixel format) the input video, using accelerated CUDA kernels.
-Setting the output width and height works in the same way as for the @ref{scale} filter.
+Scale and convert the color parameters using VTPixelTransferSession.
The filter accepts the following options:
@table @option
@@ -21528,981 +21283,685 @@ The filter accepts the following options:
@item h
Set the output video dimension expression. Default value is the input dimension.
-Allows for the same expressions as the @ref{scale} filter.
+ at item color_matrix
+Set the output colorspace matrix.
- at item interp_algo
-Sets the algorithm used for scaling:
+ at item color_primaries
+Set the output color primaries.
- at table @var
- at item nearest
-Nearest neighbour
+ at item color_transfer
+Set the output transfer characteristics.
-Used by default if input parameters match the desired output.
+ at end table
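+
+A possible usage sketch, assuming an Apple platform with VideoToolbox hardware
+decoding and the @code{h264_videotoolbox} encoder available (file names are
+placeholders):
+ at example
+ffmpeg -hwaccel videotoolbox -hwaccel_output_format videotoolbox_vld -i INPUT \
+  -vf "scale_vt=w=1280:h=720:color_matrix=bt709" -c:v h264_videotoolbox OUTPUT
+ at end example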
- at item bilinear
-Bilinear
+ at section scharr
+Apply scharr operator to input video stream.
- at item bicubic
-Bicubic
+The filter accepts the following option:
-This is the default.
+ at table @option
+ at item planes
+Set which planes will be processed; unprocessed planes will be copied.
+By default the value is 0xf, so all planes will be processed.
- at item lanczos
-Lanczos
+ at item scale
+Set the value which will be multiplied with the filtered result.
+ at item delta
+Set the value which will be added to the filtered result.
@end table
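+
+For example, a minimal sketch that applies the operator to the first plane only
+and doubles the filtered result (INPUT and OUTPUT are placeholders):
+ at example
+ffmpeg -i INPUT -vf "scharr=planes=1:scale=2" OUTPUT
+ at end example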
- at item format
-Controls the output pixel format. By default, or if none is specified, the input
-pixel format is used.
-
-The filter does not support converting between YUV and RGB pixel formats.
-
- at item passthrough
-If set to 0, every frame is processed, even if no conversion is necessary.
-This mode can be useful to use the filter as a buffer for a downstream
-frame-consumer that exhausts the limited decoder frame pool.
+ at subsection Commands
-If set to 1, frames are passed through as-is if they match the desired output
-parameters. This is the default behaviour.
+This filter supports all the above options as @ref{commands}.
- at item param
-Algorithm-Specific parameter.
+ at section scroll
+Scroll input video horizontally and/or vertically by constant speed.
-Affects the curves of the bicubic algorithm.
+The filter accepts the following options:
+ at table @option
+ at item horizontal, h
+Set the horizontal scrolling speed. Default is 0. Allowed range is from -1 to 1.
+Negative values change the scrolling direction.
- at item force_original_aspect_ratio
- at item force_divisible_by
-Work the same as the identical @ref{scale} filter options.
+ at item vertical, v
+Set the vertical scrolling speed. Default is 0. Allowed range is from -1 to 1.
+Negative values change the scrolling direction.
- at item reset_sar
-Works the same as the identical @ref{scale} filter option.
+ at item hpos
+Set the initial horizontal scrolling position. Default is 0. Allowed range is from 0 to 1.
+ at item vpos
+Set the initial vertical scrolling position. Default is 0. Allowed range is from 0 to 1.
@end table
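+
+For example, a sketch of a slow constant horizontal scroll (INPUT and OUTPUT
+are placeholders):
+ at example
+ffmpeg -i INPUT -vf "scroll=horizontal=0.001" OUTPUT
+ at end example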
- at subsection Examples
+ at subsection Commands
- at itemize
- at item
-Scale input to 720p, keeping aspect ratio and ensuring the output is yuv420p.
- at example
-scale_cuda=-2:720:format=yuv420p
- at end example
+This filter supports the following @ref{commands}:
+ at table @option
+ at item horizontal, h
+Set the horizontal scrolling speed.
+ at item vertical, v
+Set the vertical scrolling speed.
+ at end table
- at item
-Upscale to 4K using nearest neighbour algorithm.
- at example
-scale_cuda=4096:2160:interp_algo=nearest
- at end example
+ at anchor{scdet}
+ at section scdet
- at item
-Don't do any conversion or scaling, but copy all input frames into newly allocated ones.
-This can be useful to deal with a filter and encode chain that otherwise exhausts the
-decoders frame pool.
- at example
-scale_cuda=passthrough=0
- at end example
- at end itemize
+Detect video scene change.
- at anchor{scale_npp}
- at section scale_npp
+This filter sets frame metadata with the mafd (mean absolute frame difference)
+between frames and the scene change score, and forwards the frame to the next
+filter, so that later filters can use this metadata to detect scene changes.
-Use the NVIDIA Performance Primitives (libnpp) to perform scaling and/or pixel
-format conversion on CUDA video frames. Setting the output width and height
-works in the same way as for the @var{scale} filter.
+In addition, this filter logs a message and sets frame metadata when it detects
+a scene change according to @option{threshold}.
-The following additional options are accepted:
- at table @option
- at item format
-The pixel format of the output CUDA frames. If set to the string "same" (the
-default), the input format will be kept. Note that automatic format negotiation
-and conversion is not yet supported for hardware frames
+The @code{lavfi.scd.mafd} metadata key is set with the mafd value for every frame.
- at item interp_algo
-The interpolation algorithm used for resizing. One of the following:
- at table @option
- at item nn
-Nearest neighbour.
+The @code{lavfi.scd.score} metadata key is set with the scene change score for
+every frame.
- at item linear
- at item cubic
- at item cubic2p_bspline
-2-parameter cubic (B=1, C=0)
+The @code{lavfi.scd.time} metadata key is set with the time of the current frame
+when a scene change is detected according to @option{threshold}.
- at item cubic2p_catmullrom
-2-parameter cubic (B=0, C=1/2)
+The filter accepts the following options:
- at item cubic2p_b05c03
-2-parameter cubic (B=1/2, C=3/10)
+ at table @option
+ at item threshold, t
+Set the scene change detection threshold as a percentage of maximum change. Good
+values are in the @code{[8.0, 14.0]} range. The range for @option{threshold} is
+ at code{[0., 100.]}.
- at item super
-Supersampling
+Default value is @code{10.}.
- at item lanczos
+ at item sc_pass, s
+Set the flag to pass scene change frames to the next filter. Default value is @code{0}.
+You can enable it if you want to get snapshots of scene change frames only.
@end table
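+
+For example, a sketch that only logs detected scene changes without writing an
+output file:
+ at example
+ffmpeg -i INPUT -vf "scdet=threshold=10" -f null -
+ at end example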
- at item force_original_aspect_ratio
-Enable decreasing or increasing output video width or height if necessary to
-keep the original aspect ratio. Possible values:
+ at anchor{selectivecolor}
+ at section selectivecolor
- at table @samp
- at item disable
-Scale the video as specified and disable this feature.
+Adjust cyan, magenta, yellow and black (CMYK) to certain ranges of colors (such
+as "reds", "yellows", "greens", "cyans", ...). The adjustment range is defined
+by the "purity" of the color (that is, how saturated it already is).
- at item decrease
-The output video dimensions will automatically be decreased if needed.
+This filter is similar to the Adobe Photoshop Selective Color tool.
- at item increase
-The output video dimensions will automatically be increased if needed.
+The filter accepts the following options:
+
+ at table @option
+ at item correction_method
+Select color correction method.
+Available values are:
+ at table @samp
+ at item absolute
+Specified adjustments are applied "as-is" (added/subtracted to original pixel
+component value).
+ at item relative
+Specified adjustments are relative to the original component value.
+ at end table
+Default is @code{absolute}.
+ at item reds
+Adjustments for red pixels (pixels where the red component is the maximum)
+ at item yellows
+Adjustments for yellow pixels (pixels where the blue component is the minimum)
+ at item greens
+Adjustments for green pixels (pixels where the green component is the maximum)
+ at item cyans
+Adjustments for cyan pixels (pixels where the red component is the minimum)
+ at item blues
+Adjustments for blue pixels (pixels where the blue component is the maximum)
+ at item magentas
+Adjustments for magenta pixels (pixels where the green component is the minimum)
+ at item whites
+Adjustments for white pixels (pixels where all components are greater than 128)
+ at item neutrals
+Adjustments for all pixels except pure black and pure white
+ at item blacks
+Adjustments for black pixels (pixels where all components are lesser than 128)
+ at item psfile
+Specify a Photoshop selective color file (@code{.asv}) to import the settings from.
@end table
-One useful instance of this option is that when you know a specific device's
-maximum allowed resolution, you can use this to limit the output video to
-that, while retaining the aspect ratio. For example, device A allows
-1280x720 playback, and your video is 1920x800. Using this option (set it to
-decrease) and specifying 1280x720 to the command line makes the output
-1280x533.
+All the adjustment settings (@option{reds}, @option{yellows}, ...) accept up to
+4 space-separated floating point adjustment values in the [-1,1] range, which
+respectively adjust the amount of cyan, magenta, yellow and black for the
+pixels of that range.
-Please note that this is a different thing than specifying -1 for @option{w}
-or @option{h}, you still need to specify the output resolution for this option
-to work.
+ at subsection Examples
- at item force_divisible_by
-Ensures that both the output dimensions, width and height, are divisible by the
-given integer when used together with @option{force_original_aspect_ratio}. This
-works similar to using @code{-n} in the @option{w} and @option{h} options.
+ at itemize
+ at item
+Increase cyan by 50% and reduce yellow by 33% in green areas, and increase
+magenta by 27% in blue areas:
+ at example
+selectivecolor=greens=.5 0 -.33 0:blues=0 .27
+ at end example
-This option respects the value set for @option{force_original_aspect_ratio},
-increasing or decreasing the resolution accordingly. The video's aspect ratio
-may be slightly modified.
+ at item
+Use a Photoshop selective color preset:
+ at example
+selectivecolor=psfile=MySelectiveColorPresets/Misty.asv
+ at end example
+ at end itemize
-This option can be handy if you need to have a video fit within or exceed
-a defined resolution using @option{force_original_aspect_ratio} but also have
-encoder restrictions on width or height divisibility.
+ at anchor{separatefields}
+ at section separatefields
- at item reset_sar
-Works the same as the identical @ref{scale} filter option.
+The @code{separatefields} filter takes a frame-based video input and splits
+each frame into its component fields, producing a new half-height clip with
+twice the frame rate and twice the frame count.
- at item eval
-Specify when to evaluate @var{width} and @var{height} expression. It accepts the following values:
+This filter uses the field-dominance information in the frame to decide which
+of each pair of fields to place first in the output.
+If it gets it wrong, use the @ref{setfield} filter before the @code{separatefields} filter.
- at table @samp
- at item init
-Only evaluate expressions once during the filter initialization or when a command is processed.
+ at section setdar, setsar
- at item frame
-Evaluate expressions for each incoming frame.
+The @code{setdar} filter sets the Display Aspect Ratio for the filter
+output video.
- at end table
+This is done by changing the specified Sample (aka Pixel) Aspect
+Ratio, according to the following equation:
+ at example
+ at var{DAR} = @var{HORIZONTAL_RESOLUTION} / @var{VERTICAL_RESOLUTION} * @var{SAR}
+ at end example
- at end table
+Keep in mind that the @code{setdar} filter does not modify the pixel
+dimensions of the video frame. Also, the display aspect ratio set by
+this filter may be changed by later filters in the filterchain,
+e.g. in case of scaling or if another "setdar" or a "setsar" filter is
+applied.
-The values of the @option{w} and @option{h} options are expressions
-containing the following constants:
+The @code{setsar} filter sets the Sample (aka Pixel) Aspect Ratio for
+the filter output video.
- at table @var
- at item in_w
- at item in_h
-The input width and height
-
- at item iw
- at item ih
-These are the same as @var{in_w} and @var{in_h}.
-
- at item out_w
- at item out_h
-The output (scaled) width and height
-
- at item ow
- at item oh
-These are the same as @var{out_w} and @var{out_h}
+Note that as a consequence of the application of this filter, the
+output display aspect ratio will change according to the equation
+above.
- at item a
-The same as @var{iw} / @var{ih}
+Keep in mind that the sample aspect ratio set by the @code{setsar}
+filter may be changed by later filters in the filterchain, e.g. if
+another "setsar" or a "setdar" filter is applied.
- at item sar
-input sample aspect ratio
+It accepts the following parameters:
- at item dar
-The input display aspect ratio. Calculated from @code{(iw / ih) * sar}.
+ at table @option
+ at item r, ratio, dar (@code{setdar} only), sar (@code{setsar} only)
+Set the aspect ratio used by the filter.
- at item n
-The (sequential) number of the input frame, starting from 0.
-Only available with @code{eval=frame}.
+The parameter can be a floating point number string, or an expression. If the
+parameter is not specified, the value "0" is assumed, meaning that the same
+input value is used.
- at item t
-The presentation timestamp of the input frame, expressed as a number of
-seconds. Only available with @code{eval=frame}.
+ at item max
+Set the maximum integer value to use for expressing numerator and
+denominator when reducing the expressed aspect ratio to a rational.
+Default value is @code{100}.
- at item pos
-The position (byte offset) of the frame in the input stream, or NaN if
-this information is unavailable and/or meaningless (for example in case of synthetic video).
-Only available with @code{eval=frame}.
-Deprecated, do not use.
@end table
- at section scale2ref_npp
-
-Use the NVIDIA Performance Primitives (libnpp) to scale (resize) the input
-video, based on a reference video.
-
-See the @ref{scale_npp} filter for available options, scale2ref_npp supports the same
-but uses the reference video instead of the main input as basis. scale2ref_npp
-also supports the following additional constants for the @option{w} and
- at option{h} options:
-
- at table @var
- at item main_w
- at item main_h
-The main input video's width and height
-
- at item main_a
-The same as @var{main_w} / @var{main_h}
+The parameter @var{sar} is an expression containing the following constants:
- at item main_sar
-The main input video's sample aspect ratio
+ at table @option
+ at item w, h
+The input width and height.
- at item main_dar, mdar
-The main input video's display aspect ratio. Calculated from
- at code{(main_w / main_h) * main_sar}.
+ at item a
+Same as @var{w} / @var{h}.
- at item main_n
-The (sequential) number of the main input frame, starting from 0.
-Only available with @code{eval=frame}.
+ at item sar
+The input sample aspect ratio.
- at item main_t
-The presentation timestamp of the main input frame, expressed as a number of
-seconds. Only available with @code{eval=frame}.
+ at item dar
+The input display aspect ratio. It is the same as
+(@var{w} / @var{h}) * @var{sar}.
- at item main_pos
-The position (byte offset) of the frame in the main input stream, or NaN if
-this information is unavailable and/or meaningless (for example in case of synthetic video).
-Only available with @code{eval=frame}.
+ at item hsub, vsub
+Horizontal and vertical chroma subsample values. For example, for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
@end table
@subsection Examples
@itemize
+
@item
-Scale a subtitle stream (b) to match the main video (a) in size before overlaying
+To change the display aspect ratio to 16:9, specify one of the following:
@example
-'scale2ref_npp[b][a];[a][b]overlay_cuda'
+setdar=dar=1.77777
+setdar=dar=16/9
@end example
@item
-Scale a logo to 1/10th the height of a video, while preserving its display aspect ratio.
+To change the sample aspect ratio to 10:11, specify:
@example
-[logo-in][video-in]scale2ref_npp=w=oh*mdar:h=ih/10[logo-out][video-out]
+setsar=sar=10/11
@end example
- at end itemize
- at section scale_vt
+ at item
+To set a display aspect ratio of 16:9, and specify a maximum integer value of
+1000 in the aspect ratio reduction, use the command:
+ at example
+setdar=ratio=16/9:max=1000
+ at end example
-Scale and convert the color parameters using VTPixelTransferSession.
+ at end itemize
-The filter accepts the following options:
- at table @option
- at item w
- at item h
-Set the output video dimension expression. Default value is the input dimension.
+ at anchor{setfield}
+ at section setfield
- at item color_matrix
-Set the output colorspace matrix.
+Force field for the output video frame.
- at item color_primaries
-Set the output color primaries.
+The @code{setfield} filter marks the interlace type field for the
+output frames. It does not change the input frame, but only sets the
+corresponding property, which affects how the frame is treated by
+following filters (e.g. @code{fieldorder} or @code{yadif}).
- at item color_transfer
-Set the output transfer characteristics.
+The filter accepts the following options:
- at end table
+ at table @option
- at section scharr
-Apply scharr operator to input video stream.
+ at item mode
+Available values are:
-The filter accepts the following option:
+ at table @samp
+ at item auto
+Keep the same field property.
- at table @option
- at item planes
-Set which planes will be processed, unprocessed planes will be copied.
-By default value 0xf, all planes will be processed.
+ at item bff
+Mark the frame as bottom-field-first.
- at item scale
-Set value which will be multiplied with filtered result.
+ at item tff
+Mark the frame as top-field-first.
- at item delta
-Set value which will be added to filtered result.
+ at item prog
+Mark the frame as progressive.
+ at end table
@end table
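+
+For example, a sketch that marks the input as top-field-first before
+deinterlacing it (INPUT and OUTPUT are placeholders):
+ at example
+ffmpeg -i INPUT -vf "setfield=tff,yadif" OUTPUT
+ at end example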
- at subsection Commands
+ at anchor{setparams}
+ at section setparams
-This filter supports the all above options as @ref{commands}.
+Force frame parameter for the output video frame.
- at section scroll
-Scroll input video horizontally and/or vertically by constant speed.
+The @code{setparams} filter marks interlace and color range for the
+output frames. It does not change the input frame, but only sets the
+corresponding property, which affects how the frame is treated by
+filters/encoders.
-The filter accepts the following options:
@table @option
- at item horizontal, h
-Set the horizontal scrolling speed. Default is 0. Allowed range is from -1 to 1.
-Negative values changes scrolling direction.
-
- at item vertical, v
-Set the vertical scrolling speed. Default is 0. Allowed range is from -1 to 1.
-Negative values changes scrolling direction.
+ at item field_mode
+Available values are:
- at item hpos
-Set the initial horizontal scrolling position. Default is 0. Allowed range is from 0 to 1.
+ at table @samp
+ at item auto
+Keep the same field property (default).
- at item vpos
-Set the initial vertical scrolling position. Default is 0. Allowed range is from 0 to 1.
- at end table
+ at item bff
+Mark the frame as bottom-field-first.
- at subsection Commands
+ at item tff
+Mark the frame as top-field-first.
-This filter supports the following @ref{commands}:
- at table @option
- at item horizontal, h
-Set the horizontal scrolling speed.
- at item vertical, v
-Set the vertical scrolling speed.
+ at item prog
+Mark the frame as progressive.
@end table
- at anchor{scdet}
- at section scdet
-
-Detect video scene change.
-
-This filter sets frame metadata with mafd between frame, the scene score, and
-forward the frame to the next filter, so they can use these metadata to detect
-scene change or others.
-
-In addition, this filter logs a message and sets frame metadata when it detects
-a scene change by @option{threshold}.
+ at item range
+Available values are:
- at code{lavfi.scd.mafd} metadata keys are set with mafd for every frame.
+ at table @samp
+ at item auto
+Keep the same color range property (default).
- at code{lavfi.scd.score} metadata keys are set with scene change score for every frame
-to detect scene change.
+ at item unspecified, unknown
+Mark the frame as unspecified color range.
- at code{lavfi.scd.time} metadata keys are set with current filtered frame time which
-detect scene change with @option{threshold}.
+ at item limited, tv, mpeg
+Mark the frame as limited range.
-The filter accepts the following options:
+ at item full, pc, jpeg
+Mark the frame as full range.
+ at end table
- at table @option
- at item threshold, t
-Set the scene change detection threshold as a percentage of maximum change. Good
-values are in the @code{[8.0, 14.0]} range. The range for @option{threshold} is
- at code{[0., 100.]}.
+ at item color_primaries
+Set the color primaries.
+Available values are:
-Default value is @code{10.}.
+ at table @samp
+ at item auto
+Keep the same color primaries property (default).
- at item sc_pass, s
-Set the flag to pass scene change frames to the next filter. Default value is @code{0}
-You can enable it if you want to get snapshot of scene change frames only.
+ at item bt709
+ at item unknown
+ at item bt470m
+ at item bt470bg
+ at item smpte170m
+ at item smpte240m
+ at item film
+ at item bt2020
+ at item smpte428
+ at item smpte431
+ at item smpte432
+ at item jedec-p22
@end table
- at anchor{selectivecolor}
- at section selectivecolor
+ at item color_trc
+Set the color transfer.
+Available values are:
-Adjust cyan, magenta, yellow and black (CMYK) to certain ranges of colors (such
-as "reds", "yellows", "greens", "cyans", ...). The adjustment range is defined
-by the "purity" of the color (that is, how saturated it already is).
+ at table @samp
+ at item auto
+Keep the same color trc property (default).
-This filter is similar to the Adobe Photoshop Selective Color tool.
+ at item bt709
+ at item unknown
+ at item bt470m
+ at item bt470bg
+ at item smpte170m
+ at item smpte240m
+ at item linear
+ at item log100
+ at item log316
+ at item iec61966-2-4
+ at item bt1361e
+ at item iec61966-2-1
+ at item bt2020-10
+ at item bt2020-12
+ at item smpte2084
+ at item smpte428
+ at item arib-std-b67
+ at end table
-The filter accepts the following options:
+ at item colorspace
+Set the colorspace.
+Available values are:
- at table @option
- at item correction_method
-Select color correction method.
+ at table @samp
+ at item auto
+Keep the same colorspace property (default).
+
+ at item gbr
+ at item bt709
+ at item unknown
+ at item fcc
+ at item bt470bg
+ at item smpte170m
+ at item smpte240m
+ at item ycgco
+ at item bt2020nc
+ at item bt2020c
+ at item smpte2085
+ at item chroma-derived-nc
+ at item chroma-derived-c
+ at item ictcp
+ at end table
+ at item chroma_location
+Set the chroma sample location.
Available values are:
+
@table @samp
- at item absolute
-Specified adjustments are applied "as-is" (added/subtracted to original pixel
-component value).
- at item relative
-Specified adjustments are relative to the original component value.
+ at item auto
+Keep the same chroma location (default).
+
+ at item unspecified, unknown
+ at item left
+ at item center
+ at item topleft
+ at item top
+ at item bottomleft
+ at item bottom
@end table
-Default is @code{absolute}.
- at item reds
-Adjustments for red pixels (pixels where the red component is the maximum)
- at item yellows
-Adjustments for yellow pixels (pixels where the blue component is the minimum)
- at item greens
-Adjustments for green pixels (pixels where the green component is the maximum)
- at item cyans
-Adjustments for cyan pixels (pixels where the red component is the minimum)
- at item blues
-Adjustments for blue pixels (pixels where the blue component is the maximum)
- at item magentas
-Adjustments for magenta pixels (pixels where the green component is the minimum)
- at item whites
-Adjustments for white pixels (pixels where all components are greater than 128)
- at item neutrals
-Adjustments for all pixels except pure black and pure white
- at item blacks
-Adjustments for black pixels (pixels where all components are lesser than 128)
- at item psfile
-Specify a Photoshop selective color file (@code{.asv}) to import the settings from.
@end table
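+
+For example, a sketch that tags the stream as limited-range BT.709 without
+touching the pixel data (INPUT and OUTPUT are placeholders):
+ at example
+ffmpeg -i INPUT -vf \
+  "setparams=range=tv:color_primaries=bt709:color_trc=bt709:colorspace=bt709" OUTPUT
+ at end example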
-All the adjustment settings (@option{reds}, @option{yellows}, ...) accept up to
-4 space separated floating point adjustment values in the [-1,1] range,
-respectively to adjust the amount of cyan, magenta, yellow and black for the
-pixels of its range.
+ at section shear
+Apply shear transform to input video.
- at subsection Examples
+This filter supports the following options:
- at itemize
- at item
-Increase cyan by 50% and reduce yellow by 33% in every green areas, and
-increase magenta by 27% in blue areas:
- at example
-selectivecolor=greens=.5 0 -.33 0:blues=0 .27
- at end example
+ at table @option
+ at item shx
+Shear factor in X-direction. Default value is 0.
+Allowed range is from -2 to 2.
- at item
-Use a Photoshop selective color preset:
- at example
-selectivecolor=psfile=MySelectiveColorPresets/Misty.asv
- at end example
- at end itemize
+ at item shy
+Shear factor in Y-direction. Default value is 0.
+Allowed range is from -2 to 2.
- at anchor{separatefields}
- at section separatefields
+ at item fillcolor, c
+Set the color used to fill the output area not covered by the transformed
+video. For the general syntax of this option, check the
+ at ref{color syntax,,"Color" section in the ffmpeg-utils manual,ffmpeg-utils}.
+If the special value "none" is selected then no
+background is printed (useful for example if the background is never shown).
-The @code{separatefields} takes a frame-based video input and splits
-each frame into its components fields, producing a new half height clip
-with twice the frame rate and twice the frame count.
+Default value is "black".
-This filter use field-dominance information in frame to decide which
-of each pair of fields to place first in the output.
-If it gets it wrong use @ref{setfield} filter before @code{separatefields} filter.
+ at item interp
+Set interpolation type. Can be @code{bilinear} or @code{nearest}. Default is @code{bilinear}.
+ at end table
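+
+For example, a sketch applying a horizontal shear with a black background
+(INPUT and OUTPUT are placeholders):
+ at example
+ffmpeg -i INPUT -vf "shear=shx=0.5:fillcolor=black:interp=bilinear" OUTPUT
+ at end example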
- at section setdar, setsar
+ at subsection Commands
-The @code{setdar} filter sets the Display Aspect Ratio for the filter
-output video.
+This filter supports all the above options as @ref{commands}.
-This is done by changing the specified Sample (aka Pixel) Aspect
-Ratio, according to the following equation:
- at example
- at var{DAR} = @var{HORIZONTAL_RESOLUTION} / @var{VERTICAL_RESOLUTION} * @var{SAR}
- at end example
+ at section showinfo
-Keep in mind that the @code{setdar} filter does not modify the pixel
-dimensions of the video frame. Also, the display aspect ratio set by
-this filter may be changed by later filters in the filterchain,
-e.g. in case of scaling or if another "setdar" or a "setsar" filter is
-applied.
+Show a line containing various information for each input video frame.
+The input video is not modified.
-The @code{setsar} filter sets the Sample (aka Pixel) Aspect Ratio for
-the filter output video.
+This filter supports the following options:
-Note that as a consequence of the application of this filter, the
-output display aspect ratio will change according to the equation
-above.
+ at table @option
+ at item checksum
+Calculate checksums of each plane. Enabled by default.
-Keep in mind that the sample aspect ratio set by the @code{setsar}
-filter may be changed by later filters in the filterchain, e.g. if
-another "setsar" or a "setdar" filter is applied.
+ at item udu_sei_as_ascii
+Try to print user data unregistered SEI as ASCII characters when possible,
+and in hex format otherwise.
+ at end table
-It accepts the following parameters:
+The shown line contains a sequence of key/value pairs of the form
+ at var{key}:@var{value}.
+
+The following values are shown in the output:
@table @option
- at item r, ratio, dar (@code{setdar} only), sar (@code{setsar} only)
-Set the aspect ratio used by the filter.
+ at item n
+The (sequential) number of the input frame, starting from 0.
-The parameter can be a floating point number string, or an expression. If the
-parameter is not specified, the value "0" is assumed, meaning that the same
-input value is used.
+ at item pts
+The Presentation TimeStamp of the input frame, expressed as a number of
+time base units. The time base unit depends on the filter input pad.
- at item max
-Set the maximum integer value to use for expressing numerator and
-denominator when reducing the expressed aspect ratio to a rational.
-Default value is @code{100}.
+ at item pts_time
+The Presentation TimeStamp of the input frame, expressed as a number of
+seconds.
- at end table
+ at item fmt
+The pixel format name.
-The parameter @var{sar} is an expression containing the following constants:
+ at item sar
+The sample aspect ratio of the input frame, expressed in the form
+ at var{num}/@var{den}.
- at table @option
- at item w, h
-The input width and height.
+ at item s
+The size of the input frame. For the syntax of this option, check the
+ at ref{video size syntax,,"Video size" section in the ffmpeg-utils manual,ffmpeg-utils}.
- at item a
-Same as @var{w} / @var{h}.
+ at item i
+The type of interlaced mode ("P" for "progressive", "T" for top field first, "B"
+for bottom field first).
- at item sar
-The input sample aspect ratio.
+ at item iskey
+This is 1 if the frame is a key frame, 0 otherwise.
- at item dar
-The input display aspect ratio. It is the same as
-(@var{w} / @var{h}) * @var{sar}.
+ at item type
+The picture type of the input frame ("I" for an I-frame, "P" for a
+P-frame, "B" for a B-frame, or "?" for an unknown type).
+Also refer to the documentation of the @code{AVPictureType} enum and of
+the @code{av_get_picture_type_char} function defined in
+ at file{libavutil/avutil.h}.
- at item hsub, vsub
-Horizontal and vertical chroma subsample values. For example, for the
-pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
- at end table
+ at item checksum
+The Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame.
- at subsection Examples
+ at item plane_checksum
+The Adler-32 checksum (printed in hexadecimal) of each plane of the input frame,
+expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3}]".
- at itemize
+ at item mean
+The mean value of pixels in each plane of the input frame, expressed in the form
+"[@var{mean0} @var{mean1} @var{mean2} @var{mean3}]".
- at item
-To change the display aspect ratio to 16:9, specify one of the following:
- at example
-setdar=dar=1.77777
-setdar=dar=16/9
- at end example
+ at item stdev
+The standard deviation of pixel values in each plane of the input frame, expressed
+in the form "[@var{stdev0} @var{stdev1} @var{stdev2} @var{stdev3}]".
- at item
-To change the sample aspect ratio to 10:11, specify:
- at example
-setsar=sar=10/11
- at end example
-
- at item
-To set a display aspect ratio of 16:9, and specify a maximum integer value of
-1000 in the aspect ratio reduction, use the command:
- at example
-setdar=ratio=16/9:max=1000
- at end example
-
- at end itemize
-
- at anchor{setfield}
- at section setfield
+ at end table
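+
+A typical usage sketch simply inspects the log and discards the output:
+ at example
+ffmpeg -i INPUT -vf showinfo -f null -
+ at end example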
-Force field for the output video frame.
+ at section showpalette
-The @code{setfield} filter marks the interlace type field for the
-output frames. It does not change the input frame, but only sets the
-corresponding property, which affects how the frame is treated by
-following filters (e.g. @code{fieldorder} or @code{yadif}).
+Display the 256-color palette of each frame. This filter is only relevant for
+ at var{pal8} pixel format frames.
-The filter accepts the following options:
+It accepts the following option:
@table @option
+ at item s
+Set the size of the box used to represent one palette color entry. Default is
+ at code{30} (for a @code{30x30} pixel box).
+ at end table
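+
+A possible sketch that first quantizes the input to @var{pal8} using the
+palettegen and paletteuse filters, then displays each frame's palette (INPUT
+and OUTPUT are placeholders):
+ at example
+ffmpeg -i INPUT -vf "split[a][b];[a]palettegen[p];[b][p]paletteuse,showpalette=s=40" OUTPUT
+ at end example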
- at item mode
-Available values are:
-
- at table @samp
- at item auto
-Keep the same field property.
+ at section shuffleframes
- at item bff
-Mark the frame as bottom-field-first.
+Reorder and/or duplicate and/or drop video frames.
- at item tff
-Mark the frame as top-field-first.
+It accepts the following parameters:
- at item prog
-Mark the frame as progressive.
- at end table
+ at table @option
+ at item mapping
+Set the destination indexes of input frames.
+This is a space- or '|'-separated list of indexes that maps input frames to output
+frames. The number of indexes also sets the maximal value that each index may have.
+The index '-1' has a special meaning: it drops the frame.
@end table
- at anchor{setparams}
- at section setparams
-
-Force frame parameter for the output video frame.
-
-The @code{setparams} filter marks interlace and color range for the
-output frames. It does not change the input frame, but only sets the
-corresponding property, which affects how the frame is treated by
-filters/encoders.
+The first frame has the index 0. The default is to keep the input unchanged.
- at table @option
- at item field_mode
-Available values are:
+ at subsection Examples
- at table @samp
- at item auto
-Keep the same field property (default).
+ at itemize
+ at item
+Swap second and third frame of every three frames of the input:
+ at example
+ffmpeg -i INPUT -vf "shuffleframes=0 2 1" OUTPUT
+ at end example
- at item bff
-Mark the frame as bottom-field-first.
+ at item
+Swap 10th and 1st frame of every ten frames of the input:
+ at example
+ffmpeg -i INPUT -vf "shuffleframes=9 1 2 3 4 5 6 7 8 0" OUTPUT
+ at end example
+ at end itemize
- at item tff
-Mark the frame as top-field-first.
+ at section shufflepixels
- at item prog
-Mark the frame as progressive.
- at end table
+Reorder pixels in video frames.
- at item range
-Available values are:
+This filter accepts the following options:
- at table @samp
- at item auto
-Keep the same color range property (default).
+ at table @option
+ at item direction, d
+Set shuffle direction. Can be forward or inverse direction.
+Default direction is forward.
- at item unspecified, unknown
-Mark the frame as unspecified color range.
+ at item mode, m
+Set shuffle mode. Can be horizontal, vertical or block mode.
- at item limited, tv, mpeg
-Mark the frame as limited range.
+ at item width, w
+ at item height, h
+Set the shuffle block size. In horizontal shuffle mode only the width part of
+the size is used, and in vertical shuffle mode only the height part of the
+size is used.
- at item full, pc, jpeg
-Mark the frame as full range.
+ at item seed, s
+Set the random seed used for shuffling pixels. Setting it is mainly useful to
+be able to reverse the filtering and recover the original input: to reverse a
+forward shuffle you need to use the same parameters and the exact same seed,
+and set the direction to inverse, as shown in the sketch after this table.
@end table
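+
+A sketch of a forward block shuffle with a fixed seed followed by its exact
+inverse (file names are placeholders; lossy re-encoding in between would
+prevent an exact restoration):
+ at example
+ffmpeg -i INPUT -vf "shufflepixels=d=forward:m=block:w=16:h=16:s=42" SHUFFLED
+ffmpeg -i SHUFFLED -vf "shufflepixels=d=inverse:m=block:w=16:h=16:s=42" RESTORED
+ at end example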
- at item color_primaries
-Set the color primaries.
-Available values are:
+ at section shuffleplanes
- at table @samp
- at item auto
-Keep the same color primaries property (default).
+Reorder and/or duplicate video planes.
- at item bt709
- at item unknown
- at item bt470m
- at item bt470bg
- at item smpte170m
- at item smpte240m
- at item film
- at item bt2020
- at item smpte428
- at item smpte431
- at item smpte432
- at item jedec-p22
- at end table
+It accepts the following parameters:
- at item color_trc
-Set the color transfer.
-Available values are:
+ at table @option
- at table @samp
- at item auto
-Keep the same color trc property (default).
+ at item map0
+The index of the input plane to be used as the first output plane.
- at item bt709
- at item unknown
- at item bt470m
- at item bt470bg
- at item smpte170m
- at item smpte240m
- at item linear
- at item log100
- at item log316
- at item iec61966-2-4
- at item bt1361e
- at item iec61966-2-1
- at item bt2020-10
- at item bt2020-12
- at item smpte2084
- at item smpte428
- at item arib-std-b67
- at end table
+ at item map1
+The index of the input plane to be used as the second output plane.
- at item colorspace
-Set the colorspace.
-Available values are:
+ at item map2
+The index of the input plane to be used as the third output plane.
- at table @samp
- at item auto
-Keep the same colorspace property (default).
+ at item map3
+The index of the input plane to be used as the fourth output plane.
- at item gbr
- at item bt709
- at item unknown
- at item fcc
- at item bt470bg
- at item smpte170m
- at item smpte240m
- at item ycgco
- at item bt2020nc
- at item bt2020c
- at item smpte2085
- at item chroma-derived-nc
- at item chroma-derived-c
- at item ictcp
@end table
- at item chroma_location
-Set the chroma sample location.
-Available values are:
+The first plane has the index 0. The default is to keep the input unchanged.
- at table @samp
- at item auto
-Keep the same chroma location (default).
+ at subsection Examples
- at item unspecified, unknown
- at item left
- at item center
- at item topleft
- at item top
- at item bottomleft
- at item bottom
- at end table
- at end table
+ at itemize
+ at item
+Swap the second and third planes of the input:
+ at example
+ffmpeg -i INPUT -vf shuffleplanes=0:2:1:3 OUTPUT
+ at end example
+ at end itemize
- at section sharpen_npp
-Use the NVIDIA Performance Primitives (libnpp) to perform image sharpening with
-border control.
+ at anchor{signalstats}
+ at section signalstats
+Evaluate various visual metrics that assist in determining issues associated
+with the digitization of analog video media.
-The following additional options are accepted:
- at table @option
+By default the filter will log these metadata values:
- at item border_type
-Type of sampling to be used ad frame borders. One of the following:
@table @option
+ at item YMIN
+Display the minimal Y value contained within the input frame. Expressed in
+range of [0-255].
- at item replicate
-Replicate pixel values.
+ at item YLOW
+Display the Y value at the 10% percentile within the input frame. Expressed in
+range of [0-255].
- at end table
- at end table
+ at item YAVG
+Display the average Y value within the input frame. Expressed in range of
+[0-255].
- at section shear
-Apply shear transform to input video.
+ at item YHIGH
+Display the Y value at the 90% percentile within the input frame. Expressed in
+range of [0-255].
-This filter supports the following options:
+ at item YMAX
+Display the maximum Y value contained within the input frame. Expressed in
+range of [0-255].
- at table @option
- at item shx
-Shear factor in X-direction. Default value is 0.
-Allowed range is from -2 to 2.
-
- at item shy
-Shear factor in Y-direction. Default value is 0.
-Allowed range is from -2 to 2.
-
- at item fillcolor, c
-Set the color used to fill the output area not covered by the transformed
-video. For the general syntax of this option, check the
- at ref{color syntax,,"Color" section in the ffmpeg-utils manual,ffmpeg-utils}.
-If the special value "none" is selected then no
-background is printed (useful for example if the background is never shown).
-
-Default value is "black".
-
- at item interp
-Set interpolation type. Can be @code{bilinear} or @code{nearest}. Default is @code{bilinear}.
- at end table
-
- at subsection Commands
-
-This filter supports the all above options as @ref{commands}.
-
- at section showinfo
-
-Show a line containing various information for each input video frame.
-The input video is not modified.
-
-This filter supports the following options:
-
- at table @option
- at item checksum
-Calculate checksums of each plane. By default enabled.
-
- at item udu_sei_as_ascii
-Try to print user data unregistered SEI as ascii character when possible,
-in hex format otherwise.
- at end table
-
-The shown line contains a sequence of key/value pairs of the form
- at var{key}:@var{value}.
-
-The following values are shown in the output:
-
- at table @option
- at item n
-The (sequential) number of the input frame, starting from 0.
-
- at item pts
-The Presentation TimeStamp of the input frame, expressed as a number of
-time base units. The time base unit depends on the filter input pad.
-
- at item pts_time
-The Presentation TimeStamp of the input frame, expressed as a number of
-seconds.
-
- at item fmt
-The pixel format name.
-
- at item sar
-The sample aspect ratio of the input frame, expressed in the form
- at var{num}/@var{den}.
-
- at item s
-The size of the input frame. For the syntax of this option, check the
- at ref{video size syntax,,"Video size" section in the ffmpeg-utils manual,ffmpeg-utils}.
-
- at item i
-The type of interlaced mode ("P" for "progressive", "T" for top field first, "B"
-for bottom field first).
-
- at item iskey
-This is 1 if the frame is a key frame, 0 otherwise.
-
- at item type
-The picture type of the input frame ("I" for an I-frame, "P" for a
-P-frame, "B" for a B-frame, or "?" for an unknown type).
-Also refer to the documentation of the @code{AVPictureType} enum and of
-the @code{av_get_picture_type_char} function defined in
- at file{libavutil/avutil.h}.
-
- at item checksum
-The Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame.
-
- at item plane_checksum
-The Adler-32 checksum (printed in hexadecimal) of each plane of the input frame,
-expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3}]".
-
- at item mean
-The mean value of pixels in each plane of the input frame, expressed in the form
-"[@var{mean0} @var{mean1} @var{mean2} @var{mean3}]".
-
- at item stdev
-The standard deviation of pixel values in each plane of the input frame, expressed
-in the form "[@var{stdev0} @var{stdev1} @var{stdev2} @var{stdev3}]".
-
- at end table
-
- at section showpalette
-
-Displays the 256 colors palette of each frame. This filter is only relevant for
- at var{pal8} pixel format frames.
-
-It accepts the following option:
-
- at table @option
- at item s
-Set the size of the box used to represent one palette color entry. Default is
- at code{30} (for a @code{30x30} pixel box).
- at end table
-
- at section shuffleframes
-
-Reorder and/or duplicate and/or drop video frames.
-
-It accepts the following parameters:
-
- at table @option
- at item mapping
-Set the destination indexes of input frames.
-This is space or '|' separated list of indexes that maps input frames to output
-frames. Number of indexes also sets maximal value that each index may have.
-'-1' index have special meaning and that is to drop frame.
- at end table
-
-The first frame has the index 0. The default is to keep the input unchanged.
-
- at subsection Examples
-
- at itemize
- at item
-Swap second and third frame of every three frames of the input:
- at example
-ffmpeg -i INPUT -vf "shuffleframes=0 2 1" OUTPUT
- at end example
-
- at item
-Swap 10th and 1st frame of every ten frames of the input:
- at example
-ffmpeg -i INPUT -vf "shuffleframes=9 1 2 3 4 5 6 7 8 0" OUTPUT
- at end example
- at end itemize
-
- at section shufflepixels
-
-Reorder pixels in video frames.
-
-This filter accepts the following options:
-
- at table @option
- at item direction, d
-Set shuffle direction. Can be forward or inverse direction.
-Default direction is forward.
-
- at item mode, m
-Set shuffle mode. Can be horizontal, vertical or block mode.
-
- at item width, w
- at item height, h
-Set shuffle block_size. In case of horizontal shuffle mode only width
-part of size is used, and in case of vertical shuffle mode only height
-part of size is used.
-
- at item seed, s
-Set random seed used with shuffling pixels. Mainly useful to set to be able
-to reverse filtering process to get original input.
-For example, to reverse forward shuffle you need to use same parameters
-and exact same seed and to set direction to inverse.
- at end table
-
- at section shuffleplanes
-
-Reorder and/or duplicate video planes.
-
-It accepts the following parameters:
-
- at table @option
-
- at item map0
-The index of the input plane to be used as the first output plane.
-
- at item map1
-The index of the input plane to be used as the second output plane.
-
- at item map2
-The index of the input plane to be used as the third output plane.
-
- at item map3
-The index of the input plane to be used as the fourth output plane.
-
- at end table
-
-The first plane has the index 0. The default is to keep the input unchanged.
-
- at subsection Examples
-
- at itemize
- at item
-Swap the second and third planes of the input:
- at example
-ffmpeg -i INPUT -vf shuffleplanes=0:2:1:3 OUTPUT
- at end example
- at end itemize
-
- at anchor{signalstats}
- at section signalstats
-Evaluate various visual metrics that assist in determining issues associated
-with the digitization of analog video media.
-
-By default the filter will log these metadata values:
-
- at table @option
- at item YMIN
-Display the minimal Y value contained within the input frame. Expressed in
-range of [0-255].
-
- at item YLOW
-Display the Y value at the 10% percentile within the input frame. Expressed in
-range of [0-255].
-
- at item YAVG
-Display the average Y value within the input frame. Expressed in range of
-[0-255].
-
- at item YHIGH
-Display the Y value at the 90% percentile within the input frame. Expressed in
-range of [0-255].
-
- at item YMAX
-Display the maximum Y value contained within the input frame. Expressed in
-range of [0-255].
-
- at item UMIN
-Display the minimal U value contained within the input frame. Expressed in
-range of [0-255].
+ at item UMIN
+Display the minimal U value contained within the input frame. Expressed in
+range of [0-255].
@item ULOW
Display the U value at the 10% percentile within the input frame. Expressed in
@@ -24417,64 +23876,23 @@ The command above can also be specified as:
transpose=1:portrait
@end example
- at section transpose_npp
-
-Transpose rows with columns in the input video and optionally flip it.
-For more in depth examples see the @ref{transpose} video filter, which shares mostly the same options.
+ at section trim
+Trim the input so that the output contains one continuous subpart of the input.
It accepts the following parameters:
-
@table @option
+ at item start
+Specify the time of the start of the kept section, i.e. the frame with the
+timestamp @var{start} will be the first frame in the output.
- at item dir
-Specify the transposition direction.
+ at item end
+Specify the time of the first frame that will be dropped, i.e. the frame
+immediately preceding the one with the timestamp @var{end} will be the last
+frame in the output.
-Can assume the following values:
- at table @samp
- at item cclock_flip
-Rotate by 90 degrees counterclockwise and vertically flip. (default)
-
- at item clock
-Rotate by 90 degrees clockwise.
-
- at item cclock
-Rotate by 90 degrees counterclockwise.
-
- at item clock_flip
-Rotate by 90 degrees clockwise and vertically flip.
- at end table
-
- at item passthrough
-Do not apply the transposition if the input geometry matches the one
-specified by the specified value. It accepts the following values:
- at table @samp
- at item none
-Always apply transposition. (default)
- at item portrait
-Preserve portrait geometry (when @var{height} >= @var{width}).
- at item landscape
-Preserve landscape geometry (when @var{width} >= @var{height}).
- at end table
-
- at end table
-
- at section trim
-Trim the input so that the output contains one continuous subpart of the input.
-
-It accepts the following parameters:
- at table @option
- at item start
-Specify the time of the start of the kept section, i.e. the frame with the
-timestamp @var{start} will be the first frame in the output.
-
- at item end
-Specify the time of the first frame that will be dropped, i.e. the frame
-immediately preceding the one with the timestamp @var{end} will be the last
-frame in the output.
-
- at item start_pts
-This is the same as @var{start}, except this option sets the start timestamp
-in timebase units instead of seconds.
+ at item start_pts
+This is the same as @var{start}, except this option sets the start timestamp
+in timebase units instead of seconds.
@item end_pts
This is the same as @var{end}, except this option sets the end timestamp
@@ -26415,237 +25833,807 @@ ffmpeg -i first.mp4 -i second.mp4 -filter_complex xfade=transition=fade:duration
@end example
@end itemize
- at section xmedian
-Pick median pixels from several input videos.
+ at section xmedian
+Pick median pixels from several input videos.
+
+The filter accepts the following options:
+
+ at table @option
+ at item inputs
+Set number of inputs.
+Default is 3. Allowed range is from 3 to 255.
+If the number of inputs is even, the result will be the mean of the two middle values.
+
+ at item planes
+Set which planes to filter. Default value is @code{15}, by which all planes are processed.
+
+ at item percentile
+Set median percentile. Default value is @code{0.5}.
+The default value of @code{0.5} always picks the median value, while @code{0}
+picks the minimum value and @code{1} the maximum value.
+ at end table
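+
+For example, a sketch taking the per-pixel median of three inputs (file names
+are placeholders):
+ at example
+ffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex xmedian=inputs=3 OUTPUT
+ at end example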
+
+ at subsection Commands
+
+This filter supports all above options as @ref{commands}, excluding option @code{inputs}.
+
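+As an illustrative sketch (input file names are placeholders, not from the
+original documentation), taking the per-pixel median of three encodes of the
+same source could look like:
+ at example
+ffmpeg -i enc1.mp4 -i enc2.mp4 -i enc3.mp4 -filter_complex xmedian=inputs=3 out.mp4
+ at end example
+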
+ at anchor{xpsnr}
+ at section xpsnr
+
+Obtain the average (across all input frames) and minimum (across all color plane averages)
+eXtended Perceptually weighted peak Signal-to-Noise Ratio (XPSNR) between two input videos.
+
+The XPSNR is a low-complexity psychovisually motivated distortion measurement algorithm for
+assessing the difference between two video streams or images. This is especially useful for
+objectively quantifying the distortions caused by video and image codecs, as an alternative
+to a formal subjective test. The logarithmic XPSNR output values are in a similar range as
+those of traditional @ref{psnr} assessments but better reflect human impressions of visual
+coding quality. More details on the XPSNR measure, which essentially represents a blockwise
+weighted variant of the PSNR measure, can be found in the following freely available papers:
+
+ at itemize
+ at item
+C. R. Helmrich, M. Siekmann, S. Becker, S. Bosse, D. Marpe, and T. Wiegand, "XPSNR: A
+Low-Complexity Extension of the Perceptually Weighted Peak Signal-to-Noise Ratio for
+High-Resolution Video Quality Assessment," in Proc. IEEE Int. Conf. Acoustics, Speech,
+Sig. Process. (ICASSP), virt./online, May 2020. @url{www.ecodis.de/xpsnr.htm}
+
+ at item
+C. R. Helmrich, S. Bosse, H. Schwarz, D. Marpe, and T. Wiegand, "A Study of the
+Extended Perceptually Weighted Peak Signal-to-Noise Ratio (XPSNR) for Video Compression
+with Different Resolutions and Bit Depths," ITU Journal: ICT Discoveries, vol. 3, no.
+1, pp. 65 - 72, May 2020. @url{http://handle.itu.int/11.1002/pub/8153d78b-en}
+ at end itemize
+
+When publishing the results of XPSNR assessments obtained using, e.g., this FFmpeg filter, a
+reference to the above papers as a means of documentation is strongly encouraged. The filter
+requires two input videos. The first input is considered a (usually not distorted) reference
+source and is passed unchanged to the output, whereas the second input is a (distorted) test
+signal. Except for the bit depth, these two video inputs must have the same pixel format. In
+addition, for best performance, both compared input videos should be in YCbCr color format.
+
+The obtained overall XPSNR values mentioned above are printed through the logging system. In
+case of input with multiple color planes, we suggest reporting the minimum XPSNR average.
+
+The following parameter, which behaves like the one for the @ref{psnr} filter, is accepted:
+
+ at table @option
+ at item stats_file, f
+If specified, the filter will use the named file to save the XPSNR value of each individual
+frame and color plane. When the file name equals "-", that data is sent to standard output.
+ at end table
+
+This filter also supports the @ref{framesync} options.
+
+ at subsection Examples
+ at itemize
+ at item
+XPSNR analysis of two 1080p HD videos, ref_source.yuv and test_video.yuv, both at 24 frames
+per second, with color format 4:2:0, bit depth 8, and output of a logfile named "xpsnr.log":
+ at example
+ffmpeg -s 1920x1080 -framerate 24 -pix_fmt yuv420p -i ref_source.yuv -s 1920x1080 -framerate
+24 -pix_fmt yuv420p -i test_video.yuv -lavfi xpsnr="stats_file=xpsnr.log" -f null -
+ at end example
+
+ at item
+XPSNR analysis of two 2160p UHD videos, ref_source.yuv with bit depth 8 and test_video.yuv
+with bit depth 10, both at 60 frames per second with color format 4:2:0, no logfile output:
+ at example
+ffmpeg -s 3840x2160 -framerate 60 -pix_fmt yuv420p -i ref_source.yuv -s 3840x2160 -framerate
+60 -pix_fmt yuv420p10le -i test_video.yuv -lavfi xpsnr="stats_file=-" -f null -
+ at end example
+ at end itemize
+
+ at anchor{xstack}
+ at section xstack
+Stack video inputs into custom layout.
+
+All streams must be of same pixel format.
+
+The filter accepts the following options:
+
+ at table @option
+ at item inputs
+Set number of input streams. Default is 2.
+
+ at item layout
+Specify layout of inputs.
+This option requires the desired layout configuration to be explicitly set by the user.
+This sets the position of each video input in the output. Inputs
+are separated by '|'.
+The first number represents the column, and the second number represents the row.
+Numbers start at 0 and are separated by '_'. Optionally one can use wX and hX,
+where X is video input from which to take width or height.
+Multiple values can be used when separated by '+'. In such
+case values are summed together.
+
+Note that if inputs are of different sizes, gaps may appear, as not all of
+the output video frame will be filled. Similarly, videos can overlap each
+other if their position doesn't leave enough space for the full frame of
+adjoining videos.
+
+For 2 inputs, a default layout of @code{0_0|w0_0} (equivalent to
+ at code{grid=2x1}) is set. In all other cases, a layout or a grid must be set by
+the user. Either @code{grid} or @code{layout} can be specified at a time.
+Specifying both will result in an error.
+
+ at item grid
+Specify a fixed size grid of inputs.
+This option is used to create a fixed size grid of the input streams. Set the
+grid size in the form @code{COLUMNSxROWS}. There must be @code{ROWS * COLUMNS}
+input streams and they will be arranged as a grid with @code{ROWS} rows and
+ at code{COLUMNS} columns. When using this option, each input stream within a row
+must have the same height and all the rows must have the same width.
+
+If @code{grid} is set, then @code{inputs} option is ignored and is implicitly
+set to @code{ROWS * COLUMNS}.
+
+For 2 inputs, a default grid of @code{2x1} (equivalent to
+ at code{layout=0_0|w0_0}) is set. In all other cases, a layout or a grid must be
+set by the user. Either @code{grid} or @code{layout} can be specified at a time.
+Specifying both will result in an error.
+
+ at item shortest
+If set to 1, force the output to terminate when the shortest input
+terminates. Default value is 0.
+
+ at item fill
+If set to valid color, all unused pixels will be filled with that color.
+By default fill is set to none, so it is disabled.
+ at end table
+
+ at subsection Examples
+
+ at itemize
+ at item
+Display 4 inputs into 2x2 grid.
+
+Layout:
+ at example
+input1(0, 0) | input3(w0, 0)
+input2(0, h0) | input4(w0, h0)
+ at end example
+
+ at example
+xstack=inputs=4:layout=0_0|0_h0|w0_0|w0_h0
+ at end example
+
+Note that if inputs are of different sizes, gaps or overlaps may occur.
+
+ at item
+Display 4 inputs into 1x4 grid.
+
+Layout:
+ at example
+input1(0, 0)
+input2(0, h0)
+input3(0, h0+h1)
+input4(0, h0+h1+h2)
+ at end example
+
+ at example
+xstack=inputs=4:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2
+ at end example
+
+Note that if inputs are of different widths, unused space will appear.
+
+ at item
+Display 9 inputs into 3x3 grid.
+
+Layout:
+ at example
+input1(0, 0) | input4(w0, 0) | input7(w0+w3, 0)
+input2(0, h0) | input5(w0, h0) | input8(w0+w3, h0)
+input3(0, h0+h1) | input6(w0, h0+h1) | input9(w0+w3, h0+h1)
+ at end example
+
+ at example
+xstack=inputs=9:layout=0_0|0_h0|0_h0+h1|w0_0|w0_h0|w0_h0+h1|w0+w3_0|w0+w3_h0|w0+w3_h0+h1
+ at end example
+
+Note that if inputs are of different sizes, gaps or overlaps may occur.
+
+ at item
+Display 16 inputs into 4x4 grid.
+
+Layout:
+ at example
+input1(0, 0) | input5(w0, 0) | input9 (w0+w4, 0) | input13(w0+w4+w8, 0)
+input2(0, h0) | input6(w0, h0) | input10(w0+w4, h0) | input14(w0+w4+w8, h0)
+input3(0, h0+h1) | input7(w0, h0+h1) | input11(w0+w4, h0+h1) | input15(w0+w4+w8, h0+h1)
+input4(0, h0+h1+h2)| input8(w0, h0+h1+h2)| input12(w0+w4, h0+h1+h2)| input16(w0+w4+w8, h0+h1+h2)
+ at end example
+
+ at example
+xstack=inputs=16:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2|w0_0|w0_h0|w0_h0+h1|w0_h0+h1+h2|w0+w4_0|
+w0+w4_h0|w0+w4_h0+h1|w0+w4_h0+h1+h2|w0+w4+w8_0|w0+w4+w8_h0|w0+w4+w8_h0+h1|w0+w4+w8_h0+h1+h2
+ at end example
+
+Note that if inputs are of different sizes, gaps or overlaps may occur.
+
+ at end itemize
+
+ at anchor{yadif}
+ at section yadif
+
+Deinterlace the input video ("yadif" means "yet another deinterlacing
+filter").
+
+It accepts the following parameters:
+
+
+ at table @option
+
+ at item mode
+The interlacing mode to adopt. It accepts one of the following values:
+
+ at table @option
+ at item 0, send_frame
+Output one frame for each frame.
+ at item 1, send_field
+Output one frame for each field.
+ at item 2, send_frame_nospatial
+Like @code{send_frame}, but it skips the spatial interlacing check.
+ at item 3, send_field_nospatial
+Like @code{send_field}, but it skips the spatial interlacing check.
+ at end table
+
+The default value is @code{send_frame}.
+
+ at item parity
+The picture field parity assumed for the input interlaced video. It accepts one
+of the following values:
+
+ at table @option
+ at item 0, tff
+Assume the top field is first.
+ at item 1, bff
+Assume the bottom field is first.
+ at item -1, auto
+Enable automatic detection of field parity.
+ at end table
+
+The default value is @code{auto}.
+If the interlacing is unknown or the decoder does not export this information,
+top field first will be assumed.
+
+ at item deint
+Specify which frames to deinterlace. Accepts one of the following
+values:
+
+ at table @option
+ at item 0, all
+Deinterlace all frames.
+ at item 1, interlaced
+Only deinterlace frames marked as interlaced.
+ at end table
+
+The default value is @code{all}.
+ at end table
+
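+For instance, a minimal illustrative command (file names are placeholders)
+that deinterlaces only frames flagged as interlaced, emitting one frame per
+field, could be:
+ at example
+ffmpeg -i interlaced.mp4 -vf yadif=mode=send_field:deint=interlaced out.mp4
+ at end example
+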
+ at section yaepblur
+
+Apply blur filter while preserving edges ("yaepblur" means "yet another edge preserving blur filter").
+The algorithm is described in
+"J. S. Lee, Digital image enhancement and noise filtering by use of local statistics, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2, 1980."
+
+It accepts the following parameters:
+
+ at table @option
+ at item radius, r
+Set the window radius. Default value is 3.
+
+ at item planes, p
+Set which planes to filter. Default is only the first plane.
+
+ at item sigma, s
+Set blur strength. Default value is 128.
+ at end table
+
+ at subsection Commands
+This filter supports same @ref{commands} as options.
+
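+A possible invocation (parameter values chosen purely for illustration) that
+blurs the first plane more strongly with a larger window could be:
+ at example
+yaepblur=radius=5:sigma=100
+ at end example
+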
+ at section zoompan
+
+Apply Zoom & Pan effect.
+
+This filter accepts the following options:
+
+ at table @option
+ at item zoom, z
+Set the zoom expression. Range is 1-10. Default is 1.
+
+ at item x
+ at item y
+Set the x and y expression. Default is 0.
+
+ at item d
+Set the duration expression in number of frames.
+This sets how many output frames the effect will last for a
+single input image. Default is 90.
+
+ at item s
+Set the output image size, default is 'hd720'.
+
+ at item fps
+Set the output frame rate, default is '25'.
+ at end table
+
+Each expression can contain the following constants:
+
+ at table @option
+ at item in_w, iw
+Input width.
+
+ at item in_h, ih
+Input height.
+
+ at item out_w, ow
+Output width.
+
+ at item out_h, oh
+Output height.
+
+ at item in
+Input frame count.
+
+ at item on
+Output frame count.
+
+ at item in_time, it
+The input timestamp expressed in seconds. It's NAN if the input timestamp is unknown.
+
+ at item out_time, time, ot
+The output timestamp expressed in seconds.
+
+ at item x
+ at item y
+Last calculated 'x' and 'y' position from 'x' and 'y' expression
+for current input frame.
+
+ at item px
+ at item py
+The 'x' and 'y' of the last output frame of the previous input frame, or 0 when
+there was no such frame yet (first input frame).
+
+ at item zoom
+Last calculated zoom from 'z' expression for current input frame.
+
+ at item pzoom
+Last calculated zoom of last output frame of previous input frame.
+
+ at item duration
+Number of output frames for current input frame. Calculated from 'd' expression
+for each input frame.
+
+ at item pduration
+Number of output frames created for the previous input frame.
+
+ at item a
+Rational number: input width / input height
+
+ at item sar
+sample aspect ratio
+
+ at item dar
+display aspect ratio
+
+ at end table
+
+ at subsection Examples
+
+ at itemize
+ at item
+Zoom in up to 1.5x and pan at same time to some spot near center of picture:
+ at example
+zoompan=z='min(zoom+0.0015,1.5)':d=700:x='if(gte(zoom,1.5),x,x+1/a)':y='if(gte(zoom,1.5),y,y+1)':s=640x360
+ at end example
+
+ at item
+Zoom in up to 1.5x and pan always at center of picture:
+ at example
+zoompan=z='min(zoom+0.0015,1.5)':d=700:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+ at end example
+
+ at item
+Same as above but without pausing:
+ at example
+zoompan=z='min(max(zoom,pzoom)+0.0015,1.5)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+ at end example
+
+ at item
+Zoom in 2x into center of picture only for the first second of the input video:
+ at example
+zoompan=z='if(between(in_time,0,1),2,1)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+ at end example
+
+ at end itemize
+
+ at anchor{zscale}
+ at section zscale
+Scale (resize) the input video, using the z.lib library:
+ at url{https://github.com/sekrit-twc/zimg}. To enable compilation of this
+filter, you need to configure FFmpeg with @code{--enable-libzimg}.
+
+The zscale filter forces the output display aspect ratio to be the same
+as the input, by changing the output sample aspect ratio.
+
+If the input image format is different from the format requested by
+the next filter, the zscale filter will convert the input to the
+requested format.
+
+ at subsection Options
+The filter accepts the following options.
+
+ at table @option
+ at item width, w
+ at item height, h
+Set the output video dimension expression. Default value is the input
+dimension.
+
+If the @var{width} or @var{w} value is 0, the input width is used for
+the output. If the @var{height} or @var{h} value is 0, the input height
+is used for the output.
+
+If one and only one of the values is -n with n >= 1, the zscale filter
+will use a value that maintains the aspect ratio of the input image,
+calculated from the other specified dimension. After that it will,
+however, make sure that the calculated dimension is divisible by n and
+adjust the value if necessary.
+
+If both values are -n with n >= 1, the behavior will be identical to
+both values being set to 0 as previously detailed.
+
+See below for the list of accepted constants for use in the dimension
+expression.
+
+ at item size, s
+Set the video size. For the syntax of this option, check the
+ at ref{video size syntax,,"Video size" section in the ffmpeg-utils manual,ffmpeg-utils}.
+
+ at item dither, d
+Set the dither type.
+
+Possible values are:
+ at table @var
+ at item none
+ at item ordered
+ at item random
+ at item error_diffusion
+ at end table
+
+Default is none.
+
+ at item filter, f
+Set the resize filter type.
+
+Possible values are:
+ at table @var
+ at item point
+ at item bilinear
+ at item bicubic
+ at item spline16
+ at item spline36
+ at item lanczos
+ at end table
+
+Default is bilinear.
+
+ at item range, r
+Set the color range.
+
+Possible values are:
+ at table @var
+ at item input
+ at item limited
+ at item full
+ at end table
+
+Default is same as input.
+
+ at item primaries, p
+Set the color primaries.
+
+Possible values are:
+ at table @var
+ at item input
+ at item 709
+ at item unspecified
+ at item 170m
+ at item 240m
+ at item 2020
+ at end table
+
+Default is same as input.
+
+ at item transfer, t
+Set the transfer characteristics.
+
+Possible values are:
+ at table @var
+ at item input
+ at item 709
+ at item unspecified
+ at item 601
+ at item linear
+ at item 2020_10
+ at item 2020_12
+ at item smpte2084
+ at item iec61966-2-1
+ at item arib-std-b67
+ at end table
+
+Default is same as input.
+
+ at item matrix, m
+Set the colorspace matrix.
+
+Possible values are:
+ at table @var
+ at item input
+ at item 709
+ at item unspecified
+ at item 470bg
+ at item 170m
+ at item 2020_ncl
+ at item 2020_cl
+ at end table
+
+Default is same as input.
+
+ at item rangein, rin
+Set the input color range.
+
+Possible values are:
+ at table @var
+ at item input
+ at item limited
+ at item full
+ at end table
+
+Default is same as input.
+
+ at item primariesin, pin
+Set the input color primaries.
+
+Possible values are:
+ at table @var
+ at item input
+ at item 709
+ at item unspecified
+ at item 170m
+ at item 240m
+ at item 2020
+ at end table
+
+Default is same as input.
-The filter accepts the following options:
+ at item transferin, tin
+Set the input transfer characteristics.
- at table @option
- at item inputs
-Set number of inputs.
-Default is 3. Allowed range is from 3 to 255.
-If number of inputs is even number, than result will be mean value between two median values.
+Possible values are:
+ at table @var
+ at item input
+ at item 709
+ at item unspecified
+ at item 601
+ at item linear
+ at item 2020_10
+ at item 2020_12
+ at end table
- at item planes
-Set which planes to filter. Default value is @code{15}, by which all planes are processed.
+Default is same as input.
- at item percentile
-Set median percentile. Default value is @code{0.5}.
-Default value of @code{0.5} will pick always median values, while @code{0} will pick
-minimum values, and @code{1} maximum values.
+ at item matrixin, min
+Set the input colorspace matrix.
+
+Possible values are:
+ at table @var
+ at item input
+ at item 709
+ at item unspecified
+ at item 470bg
+ at item 170m
+ at item 2020_ncl
+ at item 2020_cl
@end table
- at subsection Commands
+ at item chromal, c
+Set the output chroma location.
-This filter supports all above options as @ref{commands}, excluding option @code{inputs}.
+Possible values are:
+ at table @var
+ at item input
+ at item left
+ at item center
+ at item topleft
+ at item top
+ at item bottomleft
+ at item bottom
+ at end table
- at anchor{xpsnr}
- at section xpsnr
+ at item chromalin, cin
+Set the input chroma location.
-Obtain the average (across all input frames) and minimum (across all color plane averages)
-eXtended Perceptually weighted peak Signal-to-Noise Ratio (XPSNR) between two input videos.
+Possible values are:
+ at table @var
+ at item input
+ at item left
+ at item center
+ at item topleft
+ at item top
+ at item bottomleft
+ at item bottom
+ at end table
-The XPSNR is a low-complexity psychovisually motivated distortion measurement algorithm for
-assessing the difference between two video streams or images. This is especially useful for
-objectively quantifying the distortions caused by video and image codecs, as an alternative
-to a formal subjective test. The logarithmic XPSNR output values are in a similar range as
-those of traditional @ref{psnr} assessments but better reflect human impressions of visual
-coding quality. More details on the XPSNR measure, which essentially represents a blockwise
-weighted variant of the PSNR measure, can be found in the following freely available papers:
+ at item npl
+Set the nominal peak luminance.
- at itemize
- at item
-C. R. Helmrich, M. Siekmann, S. Becker, S. Bosse, D. Marpe, and T. Wiegand, "XPSNR: A
-Low-Complexity Extension of the Perceptually Weighted Peak Signal-to-Noise Ratio for
-High-Resolution Video Quality Assessment," in Proc. IEEE Int. Conf. Acoustics, Speech,
-Sig. Process. (ICASSP), virt./online, May 2020. @url{www.ecodis.de/xpsnr.htm}
+ at item param_a
+Parameter A for scaling filters. Parameter "b" for bicubic, and the number of
+filter taps for lanczos.
- at item
-C. R. Helmrich, S. Bosse, H. Schwarz, D. Marpe, and T. Wiegand, "A Study of the
-Extended Perceptually Weighted Peak Signal-to-Noise Ratio (XPSNR) for Video Compression
-with Different Resolutions and Bit Depths," ITU Journal: ICT Discoveries, vol. 3, no.
-1, pp. 65 - 72, May 2020. @url{http://handle.itu.int/11.1002/pub/8153d78b-en}
- at end itemize
+ at item param_b
+Parameter B for scaling filters. Parameter "c" for bicubic.
+ at end table
-When publishing the results of XPSNR assessments obtained using, e.g., this FFmpeg filter, a
-reference to the above papers as a means of documentation is strongly encouraged. The filter
-requires two input videos. The first input is considered a (usually not distorted) reference
-source and is passed unchanged to the output, whereas the second input is a (distorted) test
-signal. Except for the bit depth, these two video inputs must have the same pixel format. In
-addition, for best performance, both compared input videos should be in YCbCr color format.
+The values of the @option{w} and @option{h} options are expressions
+containing the following constants:
-The obtained overall XPSNR values mentioned above are printed through the logging system. In
-case of input with multiple color planes, we suggest reporting of the minimum XPSNR average.
+ at table @var
+ at item in_w
+ at item in_h
+The input width and height
-The following parameter, which behaves like the one for the @ref{psnr} filter, is accepted:
+ at item iw
+ at item ih
+These are the same as @var{in_w} and @var{in_h}.
- at table @option
- at item stats_file, f
-If specified, the filter will use the named file to save the XPSNR value of each individual
-frame and color plane. When the file name equals "-", that data is sent to standard output.
- at end table
+ at item out_w
+ at item out_h
+The output (scaled) width and height
-This filter also supports the @ref{framesync} options.
+ at item ow
+ at item oh
+These are the same as @var{out_w} and @var{out_h}
- at subsection Examples
- at itemize
- at item
-XPSNR analysis of two 1080p HD videos, ref_source.yuv and test_video.yuv, both at 24 frames
-per second, with color format 4:2:0, bit depth 8, and output of a logfile named "xpsnr.log":
- at example
-ffmpeg -s 1920x1080 -framerate 24 -pix_fmt yuv420p -i ref_source.yuv -s 1920x1080 -framerate
-24 -pix_fmt yuv420p -i test_video.yuv -lavfi xpsnr="stats_file=xpsnr.log" -f null -
- at end example
+ at item a
+The same as @var{iw} / @var{ih}
- at item
-XPSNR analysis of two 2160p UHD videos, ref_source.yuv with bit depth 8 and test_video.yuv
-with bit depth 10, both at 60 frames per second with color format 4:2:0, no logfile output:
- at example
-ffmpeg -s 3840x2160 -framerate 60 -pix_fmt yuv420p -i ref_source.yuv -s 3840x2160 -framerate
-60 -pix_fmt yuv420p10le -i test_video.yuv -lavfi xpsnr="stats_file=-" -f null -
- at end example
- at end itemize
+ at item sar
+input sample aspect ratio
- at anchor{xstack}
- at section xstack
-Stack video inputs into custom layout.
+ at item dar
+The input display aspect ratio. Calculated from @code{(iw / ih) * sar}.
-All streams must be of same pixel format.
+ at item hsub
+ at item vsub
+horizontal and vertical input chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
-The filter accepts the following options:
+ at item ohsub
+ at item ovsub
+horizontal and vertical output chroma subsample values. For example for the
+pixel format "yuv422p" @var{ohsub} is 2 and @var{ovsub} is 1.
+ at end table
+
+ at subsection Commands
+This filter supports the following commands:
@table @option
- at item inputs
-Set number of input streams. Default is 2.
+ at item width, w
+ at item height, h
+Set the output video dimension expression.
+The command accepts the same syntax of the corresponding option.
- at item layout
-Specify layout of inputs.
-This option requires the desired layout configuration to be explicitly set by the user.
-This sets position of each video input in output. Each input
-is separated by '|'.
-The first number represents the column, and the second number represents the row.
-Numbers start at 0 and are separated by '_'. Optionally one can use wX and hX,
-where X is video input from which to take width or height.
-Multiple values can be used when separated by '+'. In such
-case values are summed together.
+If the specified expression is not valid, it is kept at its current
+value.
+ at end table
-Note that if inputs are of different sizes gaps may appear, as not all of
-the output video frame will be filled. Similarly, videos can overlap each
-other if their position doesn't leave enough space for the full frame of
-adjoining videos.
+ at c man end VIDEO FILTERS
-For 2 inputs, a default layout of @code{0_0|w0_0} (equivalent to
- at code{grid=2x1}) is set. In all other cases, a layout or a grid must be set by
-the user. Either @code{grid} or @code{layout} can be specified at a time.
-Specifying both will result in an error.
+ at chapter CUDA Video Filters
+ at c man begin CUDA Video Filters
- at item grid
-Specify a fixed size grid of inputs.
-This option is used to create a fixed size grid of the input streams. Set the
-grid size in the form @code{COLUMNSxROWS}. There must be @code{ROWS * COLUMNS}
-input streams and they will be arranged as a grid with @code{ROWS} rows and
- at code{COLUMNS} columns. When using this option, each input stream within a row
-must have the same height and all the rows must have the same width.
+To enable CUDA and/or NPP filters please refer to configuration guidelines for @ref{CUDA} and for @ref{CUDA NPP} filters.
-If @code{grid} is set, then @code{inputs} option is ignored and is implicitly
-set to @code{ROWS * COLUMNS}.
+Running CUDA filters requires you to initialize a hardware device and to pass that device to all filters in any filter graph.
+ at table @option
-For 2 inputs, a default grid of @code{2x1} (equivalent to
- at code{layout=0_0|w0_0}) is set. In all other cases, a layout or a grid must be
-set by the user. Either @code{grid} or @code{layout} can be specified at a time.
-Specifying both will result in an error.
+ at item -init_hw_device cuda[=@var{name}][:@var{device}[, at var{key=value}...]]
+Initialise a new hardware device of type @var{cuda} called @var{name}, using the
+given device parameters.
- at item shortest
-If set to 1, force the output to terminate when the shortest input
-terminates. Default value is 0.
+ at item -filter_hw_device @var{name}
+Pass the hardware device called @var{name} to all filters in any filter graph.
- at item fill
-If set to valid color, all unused pixels will be filled with that color.
-By default fill is set to none, so it is disabled.
@end table
- at subsection Examples
+For more detailed information see @url{https://www.ffmpeg.org/ffmpeg.html#Advanced-Video-options}
@itemize
@item
-Display 4 inputs into 2x2 grid.
-
-Layout:
+Example of initializing the second CUDA device on the system and running the scale_cuda and bilateral_cuda filters.
@example
-input1(0, 0) | input3(w0, 0)
-input2(0, h0) | input4(w0, h0)
+./ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -init_hw_device cuda:1 -filter_complex \
+"[0:v]scale_cuda=format=yuv444p[scaled_video];[scaled_video]bilateral_cuda=window_size=9:sigmaS=3.0:sigmaR=50.0" \
+-an -sn -c:v h264_nvenc -cq 20 out.mp4
@end example
+ at end itemize
- at example
-xstack=inputs=4:layout=0_0|0_h0|w0_0|w0_h0
- at end example
+Since CUDA filters operate exclusively on GPU memory, frame data must sometimes be uploaded (@ref{hwupload}) to hardware surfaces associated with the appropriate CUDA device before processing, and downloaded (@ref{hwdownload}) back to normal memory afterward, if required. Whether @ref{hwupload} or @ref{hwdownload} is necessary depends on the specific workflow:
-Note that if inputs are of different sizes, gaps or overlaps may occur.
+ at itemize
+ at item If the input frames are already in GPU memory (e.g., when using @code{-hwaccel cuda} or @code{-hwaccel_output_format cuda}), explicit use of @ref{hwupload} is not needed, as the data is already in the appropriate memory space.
+ at item If the input frames are in CPU memory (e.g., software-decoded frames or frames processed by CPU-based filters), it is necessary to use @ref{hwupload} to transfer the data to GPU memory for CUDA processing.
+ at item If the output of the CUDA filters needs to be further processed by software-based filters or saved in a format not supported by GPU-based encoders, @ref{hwdownload} is required to transfer the data back to CPU memory.
+ at end itemize
+Note that @ref{hwupload} uploads data to a surface with the same layout as the software frame, so it may be necessary to add a @ref{format} filter immediately before @ref{hwupload} to ensure the input is in the correct format. Similarly, @ref{hwdownload} may not support all output formats, so an additional @ref{format} filter may need to be inserted immediately after @ref{hwdownload} in the filter graph to ensure compatibility.
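+
+The following sketch (file and device names are placeholders) illustrates that
+round trip for a software-decoded input: upload, scale on the GPU, then
+download again for a software encoder:
+ at example
+ffmpeg -init_hw_device cuda=gpu -filter_hw_device gpu -i input.mp4 \
+-vf "format=nv12,hwupload,scale_cuda=1280:720,hwdownload,format=nv12" \
+-c:v libx264 output.mp4
+ at end example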
- at item
-Display 4 inputs into 1x4 grid.
+ at anchor{CUDA}
+ at section CUDA
+Below is a description of the currently available Nvidia CUDA video filters.
-Layout:
- at example
-input1(0, 0)
-input2(0, h0)
-input3(0, h0+h1)
-input4(0, h0+h1+h2)
- at end example
+Prerequisites:
+ at itemize
+ at item Install Nvidia CUDA Toolkit
+ at end itemize
- at example
-xstack=inputs=4:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2
- at end example
+Note: If FFmpeg detects the Nvidia CUDA Toolkit during configuration, it will enable CUDA filters automatically without requiring any additional flags. If you want to explicitly enable them, use the following options:
-Note that if inputs are of different widths, unused space will appear.
+ at itemize
+ at item Configure FFmpeg with @code{--enable-cuda-nvcc --enable-nonfree}.
+ at item Configure FFmpeg with @code{--enable-cuda-llvm}. Additional requirement: @code{llvm} lib must be installed.
+ at end itemize
- at item
-Display 9 inputs into 3x3 grid.
+ at subsection bilateral_cuda
+CUDA accelerated bilateral filter, an edge preserving filter.
+This filter is mathematically accurate thanks to the use of GPU acceleration.
+For best output quality, use one to one chroma subsampling, i.e. yuv444p format.
-Layout:
- at example
-input1(0, 0) | input4(w0, 0) | input7(w0+w3, 0)
-input2(0, h0) | input5(w0, h0) | input8(w0+w3, h0)
-input3(0, h0+h1) | input6(w0, h0+h1) | input9(w0+w3, h0+h1)
- at end example
+The filter accepts the following options:
+ at table @option
+ at item sigmaS
+Set sigma of gaussian function to calculate spatial weight, also called sigma space.
+Allowed range is 0.1 to 512. Default is 0.1.
- at example
-xstack=inputs=9:layout=0_0|0_h0|0_h0+h1|w0_0|w0_h0|w0_h0+h1|w0+w3_0|w0+w3_h0|w0+w3_h0+h1
- at end example
+ at item sigmaR
+Set sigma of gaussian function to calculate color range weight, also called sigma color.
+Allowed range is 0.1 to 512. Default is 0.1.
-Note that if inputs are of different sizes, gaps or overlaps may occur.
+ at item window_size
+Set window size of the bilateral function to determine the number of neighbours to loop on.
+If the number entered is even, one will be added automatically.
+Allowed range is 1 to 255. Default is 1.
+ at end table
+ at subsubsection Examples
+ at itemize
@item
-Display 16 inputs into 4x4 grid.
-
-Layout:
- at example
-input1(0, 0) | input5(w0, 0) | input9 (w0+w4, 0) | input13(w0+w4+w8, 0)
-input2(0, h0) | input6(w0, h0) | input10(w0+w4, h0) | input14(w0+w4+w8, h0)
-input3(0, h0+h1) | input7(w0, h0+h1) | input11(w0+w4, h0+h1) | input15(w0+w4+w8, h0+h1)
-input4(0, h0+h1+h2)| input8(w0, h0+h1+h2)| input12(w0+w4, h0+h1+h2)| input16(w0+w4+w8, h0+h1+h2)
- at end example
+Apply the bilateral filter on a video.
@example
-xstack=inputs=16:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2|w0_0|w0_h0|w0_h0+h1|w0_h0+h1+h2|w0+w4_0|
-w0+w4_h0|w0+w4_h0+h1|w0+w4_h0+h1+h2|w0+w4+w8_0|w0+w4+w8_h0|w0+w4+w8_h0+h1|w0+w4+w8_h0+h1+h2
+./ffmpeg -v verbose \
+-hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
+-init_hw_device cuda \
+-filter_complex \
+" \
+[0:v]scale_cuda=format=yuv444p[scaled_video];
+[scaled_video]bilateral_cuda=window_size=9:sigmaS=3.0:sigmaR=50.0" \
+-an -sn -c:v h264_nvenc -cq 20 out.mp4
@end example
-Note that if inputs are of different sizes, gaps or overlaps may occur.
-
@end itemize
- at anchor{yadif}
- at section yadif
+ at subsection bwdif_cuda
-Deinterlace the input video ("yadif" means "yet another deinterlacing
-filter").
+Deinterlace the input video using the @ref{bwdif} algorithm, but implemented
+in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec
+and/or nvenc.
It accepts the following parameters:
-
@table @option
-
@item mode
The interlacing mode to adopt. It accepts one of the following values:
@@ -26654,13 +26642,9 @@ The interlacing mode to adopt. It accepts one of the following values:
Output one frame for each frame.
@item 1, send_field
Output one frame for each field.
- at item 2, send_frame_nospatial
-Like @code{send_frame}, but it skips the spatial interlacing check.
- at item 3, send_field_nospatial
-Like @code{send_field}, but it skips the spatial interlacing check.
@end table
-The default value is @code{send_frame}.
+The default value is @code{send_field}.
@item parity
The picture field parity assumed for the input interlaced video. It accepts one
@@ -26693,428 +26677,413 @@ Only deinterlace frames marked as interlaced.
The default value is @code{all}.
@end table
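+
+A hedged end-to-end example (file names are placeholders) keeping the whole
+pipeline on the GPU with nvdec and nvenc could be:
+ at example
+ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i interlaced.mp4 \
+-vf bwdif_cuda=mode=send_field -c:v h264_nvenc out.mp4
+ at end example
+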
- at section yadif_cuda
+ at subsection chromakey_cuda
+CUDA accelerated YUV colorspace color/chroma keying.
-Deinterlace the input video using the @ref{yadif} algorithm, but implemented
-in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec
-and/or nvenc.
+This filter works like the normal chromakey filter but operates on CUDA frames.
+For more details and parameters see @ref{chromakey}.
-It accepts the following parameters:
+ at subsubsection Examples
+ at itemize
+ at item
+Make all the green pixels in the input video transparent and use it as an overlay for another video:
- at table @option
+ at example
+./ffmpeg \
+ -hwaccel cuda -hwaccel_output_format cuda -i input_green.mp4 \
+ -hwaccel cuda -hwaccel_output_format cuda -i base_video.mp4 \
+ -init_hw_device cuda \
+ -filter_complex \
+ " \
+ [0:v]chromakey_cuda=0x25302D:0.1:0.12:1[overlay_video]; \
+ [1:v]scale_cuda=format=yuv420p[base]; \
+ [base][overlay_video]overlay_cuda" \
+ -an -sn -c:v h264_nvenc -cq 20 output.mp4
+ at end example
- at item mode
-The interlacing mode to adopt. It accepts one of the following values:
+ at item
+Process two software sources, explicitly uploading the frames:
- at table @option
- at item 0, send_frame
-Output one frame for each frame.
- at item 1, send_field
-Output one frame for each field.
- at item 2, send_frame_nospatial
-Like @code{send_frame}, but it skips the spatial interlacing check.
- at item 3, send_field_nospatial
-Like @code{send_field}, but it skips the spatial interlacing check.
- at end table
+ at example
+./ffmpeg -init_hw_device cuda=cuda -filter_hw_device cuda \
+ -f lavfi -i color=size=800x600:color=white,format=yuv420p \
+ -f lavfi -i yuvtestsrc=size=200x200,format=yuv420p \
+ -filter_complex \
+ " \
+ [0]hwupload[under]; \
+ [1]hwupload,chromakey_cuda=green:0.1:0.12[over]; \
+ [under][over]overlay_cuda" \
+ -c:v hevc_nvenc -cq 18 -preset slow output.mp4
+ at end example
-The default value is @code{send_frame}.
+ at end itemize
- at item parity
-The picture field parity assumed for the input interlaced video. It accepts one
-of the following values:
+ at subsection colorspace_cuda
- at table @option
- at item 0, tff
-Assume the top field is first.
- at item 1, bff
-Assume the bottom field is first.
- at item -1, auto
-Enable automatic detection of field parity.
- at end table
+CUDA accelerated implementation of the colorspace filter.
-The default value is @code{auto}.
-If the interlacing is unknown or the decoder does not export this information,
-top field first will be assumed.
+It is by no means feature complete compared to the software colorspace filter,
+and at the current time only supports color range conversion between jpeg/full
+and mpeg/limited range.
- at item deint
-Specify which frames to deinterlace. Accepts one of the following
-values:
+The filter accepts the following options:
@table @option
- at item 0, all
-Deinterlace all frames.
- at item 1, interlaced
-Only deinterlace frames marked as interlaced.
+ at item range
+Specify output color range.
+
+The accepted values are:
+ at table @samp
+ at item tv
+TV (restricted) range
+
+ at item mpeg
+MPEG (restricted) range
+
+ at item pc
+PC (full) range
+
+ at item jpeg
+JPEG (full) range
+
@end table
-The default value is @code{all}.
@end table
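+
+For illustration only (file names are placeholders), converting full-range
+CUDA frames to limited range before encoding with NVENC might look like:
+ at example
+ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
+-vf colorspace_cuda=range=tv -c:v h264_nvenc out.mp4
+ at end example
+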
- at section yaepblur
+ at anchor{overlay_cuda}
+ at subsection overlay_cuda
-Apply blur filter while preserving edges ("yaepblur" means "yet another edge preserving blur filter").
-The algorithm is described in
-"J. S. Lee, Digital image enhancement and noise filtering by use of local statistics, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2, 1980."
+Overlay one video on top of another.
+
+This is the CUDA variant of the @ref{overlay} filter.
+It only accepts CUDA frames. The underlying input pixel formats have to match.
+
+It takes two inputs and has one output. The first input is the "main"
+video on which the second input is overlaid.
It accepts the following parameters:
@table @option
- at item radius, r
-Set the window radius. Default value is 3.
+ at item x
+ at item y
+Set expressions for the x and y coordinates of the overlaid video
+on the main video.
- at item planes, p
-Set which planes to filter. Default is only the first plane.
+They can contain the following parameters:
- at item sigma, s
-Set blur strength. Default value is 128.
- at end table
+ at table @option
- at subsection Commands
-This filter supports same @ref{commands} as options.
+ at item main_w, W
+ at item main_h, H
+The main input width and height.
- at section zoompan
+ at item overlay_w, w
+ at item overlay_h, h
+The overlay input width and height.
-Apply Zoom & Pan effect.
+ at item x
+ at item y
+The computed values for @var{x} and @var{y}. They are evaluated for
+each new frame.
-This filter accepts the following options:
+ at item n
+The ordinal index of the main input frame, starting from 0.
+
+ at item pos
+The byte offset position in the file of the main input frame, NAN if unknown.
+Deprecated, do not use.
+
+ at item t
+The timestamp of the main input frame, expressed in seconds, NAN if unknown.
+
+ at end table
+
+Default value is "0" for both expressions.
+ at item eval
+Set when the expressions for @option{x} and @option{y} are evaluated.
+
+It accepts the following values:
@table @option
- at item zoom, z
-Set the zoom expression. Range is 1-10. Default is 1.
+ at item init
+Evaluate expressions once during filter initialization or
+when a command is processed.
- at item x
- at item y
-Set the x and y expression. Default is 0.
+ at item frame
+Evaluate expressions for each incoming frame
+ at end table
- at item d
-Set the duration expression in number of frames.
-This sets for how many number of frames effect will last for
-single input image. Default is 90.
+Default value is @option{frame}.
- at item s
-Set the output image size, default is 'hd720'.
+ at item eof_action
+See @ref{framesync}.
+
+ at item shortest
+See @ref{framesync}.
+
+ at item repeatlast
+See @ref{framesync}.
- at item fps
-Set the output frame rate, default is '25'.
@end table
-Each expression can contain the following constants:
+This filter also supports the @ref{framesync} options.
+
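+As an illustrative filtergraph snippet (stream labels are placeholders),
+placing the overlay in the bottom-right corner with a 10 pixel margin:
+ at example
+[base][logo]overlay_cuda=x=main_w-overlay_w-10:y=main_h-overlay_h-10[out]
+ at end example
+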
+ at anchor{scale_cuda}
+ at subsection scale_cuda
+Scale (resize) and convert (pixel format) the input video, using accelerated CUDA kernels.
+Setting the output width and height works in the same way as for the @ref{scale} filter.
+
+The filter accepts the following options:
@table @option
- at item in_w, iw
-Input width.
+ at item w
+ at item h
+Set the output video dimension expression. Default value is the input dimension.
- at item in_h, ih
-Input height.
+Allows for the same expressions as the @ref{scale} filter.
- at item out_w, ow
-Output width.
+ at item interp_algo
+Sets the algorithm used for scaling:
- at item out_h, oh
-Output height.
+ at table @var
+ at item nearest
+Nearest neighbour
- at item in
-Input frame count.
+Used by default if input parameters match the desired output.
- at item on
-Output frame count.
+ at item bilinear
+Bilinear
- at item in_time, it
-The input timestamp expressed in seconds. It's NAN if the input timestamp is unknown.
+ at item bicubic
+Bicubic
- at item out_time, time, ot
-The output timestamp expressed in seconds.
+This is the default.
- at item x
- at item y
-Last calculated 'x' and 'y' position from 'x' and 'y' expression
-for current input frame.
+ at item lanczos
+Lanczos
- at item px
- at item py
-'x' and 'y' of last output frame of previous input frame or 0 when there was
-not yet such frame (first input frame).
+ at end table
- at item zoom
-Last calculated zoom from 'z' expression for current input frame.
+ at item format
+Controls the output pixel format. By default, or if none is specified, the input
+pixel format is used.
- at item pzoom
-Last calculated zoom of last output frame of previous input frame.
+The filter does not support converting between YUV and RGB pixel formats.
- at item duration
-Number of output frames for current input frame. Calculated from 'd' expression
-for each input frame.
+ at item passthrough
+If set to 0, every frame is processed, even if no conversion is necessary.
+This mode can be useful when using the filter as a buffer for a downstream
+frame consumer that exhausts the limited decoder frame pool.
- at item pduration
-number of output frames created for previous input frame
+If set to 1, frames are passed through as-is if they match the desired output
+parameters. This is the default behaviour.
- at item a
-Rational number: input width / input height
+ at item param
+Algorithm-specific parameter.
- at item sar
-sample aspect ratio
+Affects the curves of the bicubic algorithm.
- at item dar
-display aspect ratio
+ at item force_original_aspect_ratio
+ at item force_divisible_by
+Work the same as the identical @ref{scale} filter options.
+
+ at item reset_sar
+Works the same as the identical @ref{scale} filter option.
@end table
- at subsection Examples
+ at subsubsection Examples
@itemize
@item
-Zoom in up to 1.5x and pan at same time to some spot near center of picture:
- at example
-zoompan=z='min(zoom+0.0015,1.5)':d=700:x='if(gte(zoom,1.5),x,x+1/a)':y='if(gte(zoom,1.5),y,y+1)':s=640x360
- at end example
-
- at item
-Zoom in up to 1.5x and pan always at center of picture:
+Scale input to 720p, keeping aspect ratio and ensuring the output is yuv420p.
@example
-zoompan=z='min(zoom+0.0015,1.5)':d=700:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+scale_cuda=-2:720:format=yuv420p
@end example
@item
-Same as above but without pausing:
+Upscale to 4K using nearest neighbour algorithm.
@example
-zoompan=z='min(max(zoom,pzoom)+0.0015,1.5)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+scale_cuda=4096:2160:interp_algo=nearest
@end example
@item
-Zoom in 2x into center of picture only for the first second of the input video:
+Don't do any conversion or scaling, but copy all input frames into newly allocated ones.
+This can be useful to deal with a filter and encode chain that otherwise exhausts the
+decoder's frame pool.
@example
-zoompan=z='if(between(in_time,0,1),2,1)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+scale_cuda=passthrough=0
@end example
-
@end itemize
- at anchor{zscale}
- at section zscale
-Scale (resize) the input video, using the z.lib library:
- at url{https://github.com/sekrit-twc/zimg}. To enable compilation of this
-filter, you need to configure FFmpeg with @code{--enable-libzimg}.
+ at subsection yadif_cuda
-The zscale filter forces the output display aspect ratio to be the same
-as the input, by changing the output sample aspect ratio.
+Deinterlace the input video using the @ref{yadif} algorithm, but implemented
+in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec
+and/or nvenc.
-If the input image format is different from the format requested by
-the next filter, the zscale filter will convert the input to the
-requested format.
+It accepts the following parameters:
- at subsection Options
-The filter accepts the following options.
@table @option
- at item width, w
- at item height, h
-Set the output video dimension expression. Default value is the input
-dimension.
-
-If the @var{width} or @var{w} value is 0, the input width is used for
-the output. If the @var{height} or @var{h} value is 0, the input height
-is used for the output.
-
-If one and only one of the values is -n with n >= 1, the zscale filter
-will use a value that maintains the aspect ratio of the input image,
-calculated from the other specified dimension. After that it will,
-however, make sure that the calculated dimension is divisible by n and
-adjust the value if necessary.
-If both values are -n with n >= 1, the behavior will be identical to
-both values being set to 0 as previously detailed.
+ at item mode
+The interlacing mode to adopt. It accepts one of the following values:
-See below for the list of accepted constants for use in the dimension
-expression.
+ at table @option
+ at item 0, send_frame
+Output one frame for each frame.
+ at item 1, send_field
+Output one frame for each field.
+ at item 2, send_frame_nospatial
+Like @code{send_frame}, but it skips the spatial interlacing check.
+ at item 3, send_field_nospatial
+Like @code{send_field}, but it skips the spatial interlacing check.
+ at end table
- at item size, s
-Set the video size. For the syntax of this option, check the
- at ref{video size syntax,,"Video size" section in the ffmpeg-utils manual,ffmpeg-utils}.
+The default value is @code{send_frame}.
- at item dither, d
-Set the dither type.
+ at item parity
+The picture field parity assumed for the input interlaced video. It accepts one
+of the following values:
-Possible values are:
- at table @var
- at item none
- at item ordered
- at item random
- at item error_diffusion
+ at table @option
+ at item 0, tff
+Assume the top field is first.
+ at item 1, bff
+Assume the bottom field is first.
+ at item -1, auto
+Enable automatic detection of field parity.
@end table
-Default is none.
+The default value is @code{auto}.
+If the interlacing is unknown or the decoder does not export this information,
+top field first will be assumed.
- at item filter, f
-Set the resize filter type.
+ at item deint
+Specify which frames to deinterlace. Accepts one of the following
+values:
-Possible values are:
- at table @var
- at item point
- at item bilinear
- at item bicubic
- at item spline16
- at item spline36
- at item lanczos
+ at table @option
+ at item 0, all
+Deinterlace all frames.
+ at item 1, interlaced
+Only deinterlace frames marked as interlaced.
@end table
-Default is bilinear.
+The default value is @code{all}.
+ at end table
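+
+A hedged example (file names are placeholders) of a fully GPU-resident
+deinterlacing pipeline could be:
+ at example
+ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i interlaced.mp4 \
+-vf yadif_cuda=mode=send_field:deint=interlaced -c:v h264_nvenc out.mp4
+ at end example
+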
- at item range, r
-Set the color range.
+ at anchor{CUDA NPP}
+ at section CUDA NPP
+Below is a description of the currently available NVIDIA Performance Primitives (libnpp) video filters.
-Possible values are:
- at table @var
- at item input
- at item limited
- at item full
- at end table
+Prerequisites:
+ at itemize
+ at item Install Nvidia CUDA Toolkit
+ at item Install libnpp
+ at end itemize
-Default is same as input.
+To enable CUDA NPP filters:
- at item primaries, p
-Set the color primaries.
+ at itemize
+ at item Configure FFmpeg with @code{--enable-nonfree --enable-libnpp}.
+ at end itemize
-Possible values are:
- at table @var
- at item input
- at item 709
- at item unspecified
- at item 170m
- at item 240m
- at item 2020
- at end table
-Default is same as input.
+ at anchor{scale_npp}
+ at subsection scale_npp
- at item transfer, t
-Set the transfer characteristics.
+Use the NVIDIA Performance Primitives (libnpp) to perform scaling and/or pixel
+format conversion on CUDA video frames. Setting the output width and height
+works in the same way as for the @var{scale} filter.
-Possible values are:
- at table @var
- at item input
- at item 709
- at item unspecified
- at item 601
- at item linear
- at item 2020_10
- at item 2020_12
- at item smpte2084
- at item iec61966-2-1
- at item arib-std-b67
- at end table
+The following additional options are accepted:
+ at table @option
+ at item format
+The pixel format of the output CUDA frames. If set to the string "same" (the
+default), the input format will be kept. Note that automatic format negotiation
+and conversion is not yet supported for hardware frames.
-Default is same as input.
+ at item interp_algo
+The interpolation algorithm used for resizing. One of the following:
+ at table @option
+ at item nn
+Nearest neighbour.
- at item matrix, m
-Set the colorspace matrix.
+ at item linear
+ at item cubic
+ at item cubic2p_bspline
+2-parameter cubic (B=1, C=0)
-Possible value are:
- at table @var
- at item input
- at item 709
- at item unspecified
- at item 470bg
- at item 170m
- at item 2020_ncl
- at item 2020_cl
- at end table
+ at item cubic2p_catmullrom
+2-parameter cubic (B=0, C=1/2)
-Default is same as input.
+ at item cubic2p_b05c03
+2-parameter cubic (B=1/2, C=3/10)
- at item rangein, rin
-Set the input color range.
+ at item super
+Supersampling
-Possible values are:
- at table @var
- at item input
- at item limited
- at item full
+ at item lanczos
@end table
-Default is same as input.
-
- at item primariesin, pin
-Set the input color primaries.
+ at item force_original_aspect_ratio
+Enable decreasing or increasing output video width or height if necessary to
+keep the original aspect ratio. Possible values:
-Possible values are:
- at table @var
- at item input
- at item 709
- at item unspecified
- at item 170m
- at item 240m
- at item 2020
- at end table
+ at table @samp
+ at item disable
+Scale the video as specified and disable this feature.
-Default is same as input.
+ at item decrease
+The output video dimensions will automatically be decreased if needed.
- at item transferin, tin
-Set the input transfer characteristics.
+ at item increase
+The output video dimensions will automatically be increased if needed.
-Possible values are:
- at table @var
- at item input
- at item 709
- at item unspecified
- at item 601
- at item linear
- at item 2020_10
- at item 2020_12
@end table
-Default is same as input.
+One useful instance of this option is that when you know a specific device's
+maximum allowed resolution, you can use this to limit the output video to
+that, while retaining the aspect ratio. For example, device A allows
+1280x720 playback, and your video is 1920x800. Using this option (set it to
+decrease) and specifying 1280x720 to the command line makes the output
+1280x533.
- at item matrixin, min
-Set the input colorspace matrix.
+Please note that this is different from specifying -1 for @option{w}
+or @option{h}; you still need to specify the output resolution for this option
+to work.
-Possible value are:
- at table @var
- at item input
- at item 709
- at item unspecified
- at item 470bg
- at item 170m
- at item 2020_ncl
- at item 2020_cl
- at end table
+ at item force_divisible_by
+Ensures that both the output dimensions, width and height, are divisible by the
+given integer when used together with @option{force_original_aspect_ratio}. This
+works similarly to using @code{-n} in the @option{w} and @option{h} options.
- at item chromal, c
-Set the output chroma location.
+This option respects the value set for @option{force_original_aspect_ratio},
+increasing or decreasing the resolution accordingly. The video's aspect ratio
+may be slightly modified.
-Possible values are:
- at table @var
- at item input
- at item left
- at item center
- at item topleft
- at item top
- at item bottomleft
- at item bottom
- at end table
+This option can be handy if you need to have a video fit within or exceed
+a defined resolution using @option{force_original_aspect_ratio} but also have
+encoder restrictions on width or height divisibility.
- at item chromalin, cin
-Set the input chroma location.
+ at item reset_sar
+Works the same as the identical @ref{scale} filter option.
-Possible values are:
- at table @var
- at item input
- at item left
- at item center
- at item topleft
- at item top
- at item bottomleft
- at item bottom
- at end table
+ at item eval
+Specify when to evaluate the @var{width} and @var{height} expressions. It accepts the following values:
- at item npl
-Set the nominal peak luminance.
+ at table @samp
+ at item init
+Only evaluate expressions once during the filter initialization or when a command is processed.
- at item param_a
-Parameter A for scaling filters. Parameter "b" for bicubic, and the number of
-filter taps for lanczos.
+ at item frame
+Evaluate expressions for each incoming frame.
+
+ at end table
- at item param_b
-Parameter B for scaling filters. Parameter "c" for bicubic.
@end table
The values of the @option{w} and @option{h} options are expressions
@@ -27146,31 +27115,135 @@ input sample aspect ratio
@item dar
The input display aspect ratio. Calculated from @code{(iw / ih) * sar}.
- at item hsub
- at item vsub
-horizontal and vertical input chroma subsample values. For example for the
-pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+ at item n
+The (sequential) number of the input frame, starting from 0.
+Only available with @code{eval=frame}.
- at item ohsub
- at item ovsub
-horizontal and vertical output chroma subsample values. For example for the
-pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+ at item t
+The presentation timestamp of the input frame, expressed as a number of
+seconds. Only available with @code{eval=frame}.
+
+ at item pos
+The position (byte offset) of the frame in the input stream, or NaN if
+this information is unavailable and/or meaningless (for example in case of synthetic video).
+Only available with @code{eval=frame}.
+Deprecated, do not use.
@end table
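+
+As a sketch only (file names are placeholders), downscaling decoded CUDA
+frames with the supersampling algorithm while capping the size at 1280x720
+and keeping the aspect ratio could be:
+ at example
+ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
+-vf scale_npp=w=1280:h=720:interp_algo=super:force_original_aspect_ratio=decrease \
+-c:v h264_nvenc out.mp4
+ at end example
+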
- at subsection Commands
+ at subsection scale2ref_npp
-This filter supports the following commands:
+Use the NVIDIA Performance Primitives (libnpp) to scale (resize) the input
+video, based on a reference video.
+
+See the @ref{scale_npp} filter for available options, scale2ref_npp supports the same
+but uses the reference video instead of the main input as basis. scale2ref_npp
+also supports the following additional constants for the @option{w} and
+ at option{h} options:
+
+ at table @var
+ at item main_w
+ at item main_h
+The main input video's width and height
+
+ at item main_a
+The same as @var{main_w} / @var{main_h}
+
+ at item main_sar
+The main input video's sample aspect ratio
+
+ at item main_dar, mdar
+The main input video's display aspect ratio. Calculated from
+ at code{(main_w / main_h) * main_sar}.
+
+ at item main_n
+The (sequential) number of the main input frame, starting from 0.
+Only available with @code{eval=frame}.
+
+ at item main_t
+The presentation timestamp of the main input frame, expressed as a number of
+seconds. Only available with @code{eval=frame}.
+
+ at item main_pos
+The position (byte offset) of the frame in the main input stream, or NaN if
+this information is unavailable and/or meaningless (for example in case of synthetic video).
+Only available with @code{eval=frame}.
+ at end table
+
+ at subsubsection Examples
+
+ at itemize
+ at item
+Scale a subtitle stream (b) to match the main video (a) in size before overlaying
+ at example
+'scale2ref_npp[b][a];[a][b]overlay_cuda'
+ at end example
+
+ at item
+Scale a logo to 1/10th the height of a video, while preserving its display aspect ratio.
+ at example
+[logo-in][video-in]scale2ref_npp=w=oh*mdar:h=ih/10[logo-out][video-out]
+ at end example
+ at end itemize
+
+ at subsection sharpen_npp
+Use the NVIDIA Performance Primitives (libnpp) to perform image sharpening with
+border control.
+
+The following additional options are accepted:
@table @option
- at item width, w
- at item height, h
-Set the output video dimension expression.
-The command accepts the same syntax of the corresponding option.
-If the specified expression is not valid, it is kept at its current
-value.
+ at item border_type
+Type of sampling to be used at frame borders. One of the following:
+ at table @option
+
+ at item replicate
+Replicate pixel values.
+
+ at end table
@end table
- at c man end VIDEO FILTERS
+ at subsection transpose_npp
+
+Transpose rows with columns in the input video and optionally flip it.
+For more in-depth examples see the @ref{transpose} video filter, which mostly shares the same options.
+
+It accepts the following parameters:
+
+ at table @option
+
+ at item dir
+Specify the transposition direction.
+
+Can assume the following values:
+ at table @samp
+ at item cclock_flip
+Rotate by 90 degrees counterclockwise and vertically flip. (default)
+
+ at item clock
+Rotate by 90 degrees clockwise.
+
+ at item cclock
+Rotate by 90 degrees counterclockwise.
+
+ at item clock_flip
+Rotate by 90 degrees clockwise and vertically flip.
+ at end table
+
+ at item passthrough
+Do not apply the transposition if the input geometry matches the one
+specified by the specified value. It accepts the following values:
+ at table @samp
+ at item none
+Always apply transposition. (default)
+ at item portrait
+Preserve portrait geometry (when @var{height} >= @var{width}).
+ at item landscape
+Preserve landscape geometry (when @var{width} >= @var{height}).
+ at end table
+
+ at end table
+
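+A hedged example (file names are placeholders) that rotates portrait input by
+90 degrees clockwise while passing landscape input through unchanged:
+ at example
+ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
+-vf transpose_npp=dir=clock:passthrough=landscape -c:v h264_nvenc out.mp4
+ at end example
+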
+ at c man end CUDA Video Filters
@chapter OpenCL Video Filters
@c man begin OPENCL VIDEO FILTERS