[FFmpeg-devel] The "bad" Patch
softworkz .
softworkz at hotmail.com
Mon Jun 2 10:31:35 EEST 2025
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Mark
> Thompson
> Sent: Saturday, 31 May 2025 22:26
> To: ffmpeg-devel at ffmpeg.org
> Subject: Re: [FFmpeg-devel] The "bad" Patch
>
Hi Mark,
Here are my answers to the remaining points:
> >> In reality, ffmpeg is often used on multi-user systems and called in
> strange
> >> ways from network services where many inputs are not trusted.
> >
> > None of such systems have xdg-open
>
> xdg-open is in most default install of Linux distributions (indeed, that's why
> you use it), so I don't think it is a reasonable assumption that it would not
> be there.
xdg-open is part of the xdg-utils package, which is typically installed when
there's a desktop environment or certain individual UI applications.
It's normally not included in server, container, or minimal installations.
I won't argue that it's a 100% reliable indication, but it's still useful
as one element of a group of indicators.
> > - The file name is built from the local time with milliseconds. Pretty hard
> to
> > hit
>
> No, trivial to hit given that creating a file and watching whether it gets
> touched (inotify) are very low cost operations.
Just to reiterate (as I cut the quote off too close above): an attacker would
need access to the same system as the target user.
The attacker would need to:
- Create & delete 1000 files per second,
  or perform 1000 renames per second.
- This number is independent of the number of files (the "window") that you
  keep at a time. You can't make that window too small, because the file name
  is created before the graph is built, so maybe 500 or 1000 to be safe;
  the latter would provide a tolerance of 1 second.
- Even when you use just 100 files at a time (covering 100 ms), you still
  need to do 1000 creates/removes or 1000 renames per second.
- The number of inotify instances is usually limited on systems, so
  you can only monitor the folder.
- Monitoring a folder with 1000 or 2000 file changes per second via inotify
  is no longer cheap (afaik), even when you limit the events with an event mask.
- This has consequences and affects the system:
  - It creates 3.6 or 7.2 million file-system journal entries per hour
    (on a journalling fs).
  - Even though the files may have 0 bytes, this causes continuous disk
    activity and might affect other fs operations.
  - I haven't had time to try it out, but especially on slow systems this
    is probably quite noticeable.
Anyway, that's all pointless due to the below:
> > - In v2, a temp-directory specific to the current user is created. Other
> users
> > have no access
>
> Your new method does not work because the attacker could have created the
> temporary directory (world-writeable) before the ffmpeg process does.
Right, it should probably call stat() after creation, but due to the below,
there's not much to gain anyway.
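For illustration, the check conceded here could look roughly like this after
creating the directory (a minimal sketch in Python for brevity; the actual
patch is C, and `make_private_dir` is a hypothetical name, not code from the
patch):

```python
import os
import stat

def make_private_dir(path: str) -> bool:
    """Create `path` with mode 0700 and verify it really is a directory
    owned by us with no group/other access -- guarding against an attacker
    having pre-created it world-writable. Illustrative sketch only."""
    try:
        os.mkdir(path, 0o700)
    except FileExistsError:
        pass  # may exist from a previous run -- the checks below decide
    st = os.lstat(path)          # lstat: don't follow a planted symlink
    if not stat.S_ISDIR(st.st_mode):
        return False             # symlink or regular file: reject
    if st.st_uid != os.getuid():
        return False             # someone else created it
    if st.st_mode & 0o077:
        return False             # group/other have some access
    return True

if __name__ == "__main__":
    print(make_private_dir("/tmp/ffmpeg_demo_private_dir"))
```

The lstat() after mkdir() closes the race: even if the attacker pre-created
the directory, the ownership and mode checks reject it.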
> > - The file is an html file which will be launched by a browser from file:///
> > url, means it is treated with extra safety and isolation. There's hardly
> anything
> > you could achieve these days from a local html page
>
> Is there some general citation for this?
>
> I would naively expect browsers to assign greater trust to local rather than
> remote files and possibly allow some additional capabilities to scripts
> running in them, but I admit I have no familiarity with this area so I may be
> completely wrong.
In fact, it's the other way round, contrary to what one would expect. On
the one hand, there have been loads of exploits in this regard in the past;
but in combination with other "security changes" that Chromium (and all
browsers derived from it) has introduced over the years, I have gained the
strong impression that Google is purposefully doing this (at least in part)
to force more and more things to move to the cloud.
Content from local file URLs is the least trusted origin in contemporary
browsers. It has upset me too many times in the past years that content from
whatever malicious site is trusted more than content from the file system,
just for having an SSL cert, which anybody can get these days.
It's hardly possible anymore to view something from (non-SSL) servers on your
local network when origins are mixed; and again, the hacked site with SSL
is considered "secure", local network machines (HTTP) are "unsafe", and
file URLs even more so.
When you want to open an XML file from the file system that specifies an
XSLT stylesheet in the same folder, browsers don't load it anymore.
But they do load it when it's available from an https URL. That's crazy,
because locally I can be sure that it doesn't change, while on a remote
server it can change at any time and is out of your control. Still, they
call this "secure" because it has SSL.
Their latest nonsense is to restrict access to http hosts with private
network IP addresses (sigh).
> Some further thoughts on your new patch which you will undoubtably have
> already considered:
> * What happens if the system argument string exceeds the allowed command
> argument length?
This is not possible, because the longest possible path is:
/var/tmp/ffmpeg-4294967296/ffmpeg_graph_0000-00-00_00-00-00_000.htm
> * What happens if /bin/sh is not bash?
The command is not bash-specific; any POSIX shell should do. Tested with
bash and dash.
> * What happens if the attacker successfully contrives a transient out-of-
> memory condition during any of the calls to av_bprintf()? (As they can do on
> a shared machine.)
The maximum length of the path is 68, and 110 for the command. The AVBPrints
are stack-allocated with about 1 kB of internal storage, which is more than
sufficient for av_bprintf() to run without allocating additional memory.
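As a sanity check on those numbers (a sketch; the file-name pattern follows
the example path quoted above, and the 10-digit uid is the 32-bit worst
case):

```python
from datetime import datetime

# Worst-case path, following the pattern quoted above:
#   /var/tmp/ffmpeg-<uid>/ffmpeg_graph_<timestamp>_<ms>.htm
uid_max = 4294967295   # largest 32-bit uid: 10 decimal digits
name = datetime(2025, 6, 2, 10, 31, 35).strftime(
    "ffmpeg_graph_%Y-%m-%d_%H-%M-%S") + "_000.htm"
path = f"/var/tmp/ffmpeg-{uid_max}/{name}"
print(len(path))   # 67 characters, 68 with the terminating NUL
```

So even the worst-case path fits in a small fraction of the ~1 kB internal
AVBPrint buffer.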
> (The Windows implementation is not changed and does not look robust, I assume
> you have not revised it.)
Correct - as said, it's all and only about the system() invocation.
> > Many other CLI tools are launching browsers, so that's not really rocket
> > science like you're trying to allude to.
>
> I agree. Rocketry seems to be generally reliable and successful when compared
> to computer security, where people forever find new vulnerabilities in
> supposedly secure and audited programs.
>
> I would hope that other CLI tools doing this have carefully documented the
> circumstances in which they do so to ensure that they don't get used in cases
> where it might cause problems.
Git: Nope
git web--browse --help
git web--browse https://ffmpeg.org
GitHub CLI: Nope
https://cli.github.com/manual/gh_browse
gh browse --repo ffmpeg/ffmpeg
Neither makes a big thing out of it, and the same applies to all other
cases I have seen.
Thanks,
sw