[FFmpeg-devel] Why 'You can only build one library type at once on MinGW'?

Trent Piepho xyzzy
Mon May 14 01:51:28 CEST 2007

On Sun, 13 May 2007, Rich Felker wrote:
> On Sat, May 12, 2007 at 11:10:16PM -0700, Trent Piepho wrote:
> > > False. The compiler just needs to generate code that loads a 64bit
> > > immediate and uses it as an address for every single memory access,
> > > and appropriate relocation records for these. Yes this is rather
> > > inefficient and ugly, but that's the only way to solve the "problem"
> > > if the platform is so broken.
> >
> > If you want to do that, you could just use rip relative addressing to load
> > the 64-bit base address.  It's a smaller instruction and doesn't require a
> > relocation.
> >
> > Instead of:
> >   movq $0x0102030405060708, %rdx  # constant must be changed by textrel
> >   movl (%rdx), %eax
> >
> > Do this:
> >   leaq 0x01020304(%rip), %rdx     # link time constant, no textrel
> >   movl (%rdx), %eax
> Forget about ELF "textrel" terminology. The rip-relative address is
> not known until the relocation is resolved by the linker. It might not
> be within 4gig of %rip, and then the linker will fail.

That's pretty much the same thing as saying objects larger than 4GB aren't
supported, isn't it?

You claimed that not supporting text relocations on x86-64 was an assembler
bug.  It's not an assembler bug; it's a limitation of the architecture.

If you want to support text relocations on x86-64 without limiting the total
address range across all objects to 32 bits, you must use an alternative,
much larger and slower sequence to access global data.  Call this a "long
access" if you want.

Provided one accepts the limitation that a single object must fit in 4GB,
there exists an alternative to the "long access", rip-relative addressing,
that is smaller, faster, and, being position independent, avoids the text
relocation entirely.

"Long accesses" would allow relocations above 32 bits, but rip-relative
addressing allows the same thing and is superior in every way.  So, other
than to allow objects larger than 4GB, what possible reason is there for a
compiler to support "long accesses"?
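For concreteness, here are the encodings of the two address-forming
instructions from the example above (a sketch; xx marks the bytes the
linker or dynamic linker fills in):

  48 ba xx xx xx xx xx xx xx xx   # movq $imm64, %rdx     10 bytes, patched at load time
  48 8d 15 xx xx xx xx            # leaq off(%rip), %rdx   7 bytes, fixed at link time

Three bytes saved on every address formed, and nothing left for the dynamic
linker to patch in the text segment.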

And how could fixing the assembler bug you claimed existed turn an
instruction using a "short access" into something that supports being
relocated above 32 bits?

> The only way (in C) you know the symbol resides in the same "object"
> is if it's static, and then no relocation is involved. Whenever a

For a shared library, even static globals do not have constant addresses.
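For example (a sketch; names hypothetical), given

  static int counter;
  int get(void) { return counter; }

gcc -fPIC on x86-64 emits roughly

  movl counter(%rip), %eax   # offset is a link-time constant; the base is
  ret                        # wherever the loader happened to map the object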

> symbol is external, the compiler has _no_ knowledge of the final
> address it will resolve to. The only correct approaches are either a
> sort of 'deferred compiling' or just assuming the maximal address size
> for all relocations. This is much like other archs where 'short call'
> opcodes exist, but the 'long call' version must be used for extern
> functions.
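On ARM, for example, the two call forms look something like this (a sketch;
ARMv5-style assembly, names hypothetical):

  bl   local_func          @ "short call": pc-relative, reaches only +/-32MB
  ldr  r12, =extern_func   @ "long call": load the full 32-bit address from
  blx  r12                 @ a literal pool and branch through the register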

Or only supporting the range of the "short call", or allowing one to select
between short and long based on one's needs.  It's too bad CPUs aren't
designed for optimal performance under the constraints that your view of a
strict reading of the current (or do you not recognize C99?) C standard
would impose.

Some people want to produce better code for the reality that does exist,
rather than the reality they wish existed, and so they take architectural
realities into account and extend their tools to control them.

C89 doesn't say anything about inline asm; can that be used?  C89 doesn't
say anything about shared libraries, dynamic linking, or memory models, so
no optimization relating to those can be used either?
