Fix machine code on macOS 11
macOS 11 adds two JIT hardening features:
* Memory can only be mmapped as both PROT_WRITE and PROT_EXEC if MAP_JIT
is also specified.
* Memory mapped with MAP_JIT is never writable and executable at the same
time. It is necessary to call a function (pthread_jit_write_protect_np())
before writing, and then again after writing to make the memory executable
again.
Compiler [arm32/arm64]: Implement new semantics of F_MARK_AT
Compiler [arm32/arm64]: Fix maximum bytecode check
Compiler and runtime: Added byte codes F_PUSH_CATCHES and F_CATCH_AT.
These are needed to be able to save and restore the recovery context
for generator functions.
Updates the code generators for quite a few machine code backends.
Merge remote-tracking branch 'origin/master' into new_utf8
Merge remote-tracking branch 'origin/8.1' into gobject-introspection
Interpreter: fixed handling of SAVE_LOCALS bitmask
Since the introduction of save_locals_bitmask, expendible_offset was
never set. Also, since expendible_offset and save_locals_bitmask were
handled by the same case, the code was broken.
During pop entries, handling of the save_locals bitmask could lead
to situations where locals above expendible_offset were 'copied' into
the trampoline frame, even though those locals could already have been
popped from the stack by the RETURN_LOCAL opcode.
Also slightly refactored the code to not allocate more space for locals
than needed and removed some unnecessary casts.
This became visible and could lead to crashes when building for 32-bit
on 64-bit x86 machines.
FALL THROUGH -> FALLTHRU to survive -Wimplicit-fallthrough=4.
ARM64: Fix F_INIT_FRAME on big endian
Merge branch '8.1' into peter/travis
ARM64: Support big endian
Interpreter: merge low_return variants
ARM64: Enable disassembler even without PIKE_DEBUG
ARM64: Optimize arm64_mark by using postincrement
ARM64: added DUP, SWAP and NOT
Compiler [arm64]: do not modify instrs array
ARM: remove opcode statistics
There is a similar feature already available when compiling
unsigned INT64 -> UINT64
ARM64: de-inline free_svalue
To reduce code size, we generate a simple free_svalue version
at the beginning of every program. When freeing an svalue, we
jump to that de-inlined version for every type which is reference
counted. This reduces the size of the generated machine code by around
ARM64: use Pike_fatal for assert
ARM64: de-inlined F_RETURN
ARM64: removed unused code
ARM64: preliminary check_threads_etc
Uses a dedicated register for the fast_check_threads_etc counter, but
the check is generated in too many places for now.
ARM64: complete fast check threads
Slow path now only generated once per program.
ARM: merge pike register handling
Most of the register loading and storing is now shared between the 32- and 64-bit backends.
ARM: first step at merging arm 32 and 64 bit code base
Moved label handling and register allocation into one source file.
ARM32 ARM64: Update PC before calling builtins
ARM64: implement F_PRIVATE_GLOBAL(_AND_POP)
ARM64: optimize handling of globals
* keep Pike_fp->current_storage in a register
* keep track of whether or not the current object
could have been destructed
* actually start using F_PRIVATE_GLOBAL
ARM64: implement F_PRIVATE_GLOBAL
ARM64: added the F_CALL_BUILTIN opcodes
It would be useful to know at assembly time whether a given efun has
a return value or not.
ARM64: added missing argument to inc/dec opcodes
ARM64: added INC/DEC opcodes
ARM64: fixed F_EQ/F_NE
The fast path (for everything but objects and functions) was broken
in these two opcodes.
ARM64: implement F_RETURN_IF_TRUE
ARM64: added F_OR_INT, F_AND_INT and F_XOR_INT
ARM64: implemented F_LOOP
ARM64: fixed F_LOOP
We cannot rely on the loopcnt always being positive.
ARM64: simplify some conditionals
ARM64: implement some F_RETURN_* opcodes
ARM64: correctly keep track of special registers
Since updating Pike_fp->pc would load the fp_reg, the fp_reg could
sometimes not be correctly invalidated after calling an opcode fun.
ARM64: simplify prologue/epilogue generation
ARM64: fixed svalue type in F_THIS_OBJECT
ARM64: introduced aliases for argument registers
ARM64: implement F_THIS_OBJECT and F_SIZEOF_LOCAL
ARM64: more debug prologues
ARM64: fix bug in slowpath of integer ops
The slow path of the integer operations could sometimes change the register
state. This was incorrect because it is not always executed.
ARM64: Always return unconditional branches from ins_f_jump
This makes sure we get the maximum available range for future update_f_jumps.
ARM64: Use cbz/cbnz in more places
ARM64: Use zero register some more
ARM64: Use tbz/tbnz instructions
ARM64: Use pre-decrement when popping from mark stack
ARM64: Add some custom register names
ARM64: better versions of F_STRING and F_CONSTANT
ARM64: fix several assert failures
ARM64: fix F_PROTECT_STACK
Expendibles are now stored as an offset in pike_frame.
ARM64: do some checks only during init
ARM64: fix some warnings
ARM64: implement F_POP_TO_MARK
ARM64: implement F_ASSIGN_LOCAL
ARM64: generate better statistics
ARM64: set return hint on function exits
ARM64: implement OPCODE_INLINE_BRANCH
ARM64: implement QUICK and comparison jumps
ARM64: Fix loading of string constants with large indices
ARM64: Initial commit
ARM64: really_free_svalue does not change fp or sp
ARM64: added comparison opcodes
ARM64: added more CMP opcodes
ARM64: remove C++ style comments
ARM64: added disassembler
ARM64: add F_ADD_INTS
ARM64: simplify arm32_ins_maybe_exit
ARM64: implement fast paths for comparisons
ARM64: use unsigned compare for pointers
ARM64: added low level support for shift instructions