Avoid include cycle.
GC: Inline markers into data structures
Initial work is done. This may have worse performance characteristics
for now. Further work will use the iterator provided by the block
allocator instead of linked lists to visit all objects, and potentially
optimize the memory layout of the marker struct.
unsigned INT64 -> UINT64
inline is part of c99
We do have excessive amounts of inline, incidentally. :)
stdlib.h is included from global.h
block_allocator: improved error messages
block_allocator: removed unwanted PMOD_EXPORT
block_allocator: align allocations to page size
Try to choose the initial page size such that it fits into a multiple of
4K including headers and malloc overhead.
block_allocator: fixed a bug in ba_sort_free_list
This could possibly happen after syntax errors when freeing
the compiler data.
Stdio.Buffer: use round_up*() functions to enlarge buffer.
Fixes an infinite loop due to overflow. Also changed the round_up*() functions
to return 0 on overflow and made 1 the next power of two after 0.
Build: Support compilation with compilers other than gcc again.
Replace all uses of __attribute((unused)) with PIKE_ATTRIBUTE_UNUSED.
Replaced all remaining uses of __attribute() and __attribute__()
build: more deattributification for Windows
rename aligned_alloc to xalloc_aligned
C11 defines aligned_alloc. This can lead to all kinds of confusion, so
let's rename our internal function.
block_allocator: use Valgrind macros in the right order
When sorting the free list during ba_walk(), the Valgrind macros were
used in the wrong order. This led to spurious Valgrind warnings.
Fix a bunch of Clang warnings.
Valgrind: suppress some warnings
block_allocator: do not keep empty pages around
In certain cases the block allocator kept around free pages. This
happened with markers used by the gc.
Merge remote-tracking branch 'origin/8.0' into string_alloc
block_allocator: fix ba_walk() and improve valgrind support
Blocks in the free list are now marked as free while the callback is
running. This helps detect invalid accesses. Also fixed a bug in
the free list sort.
block_allocator: do not round up powers of 2
The family of round_up* functions rounds up to the next power of two.
This made most block allocators' initial areas twice as big as intended.
block_allocator: removed some INLINE and warnings for unused parameters
malloc nowadays returns void*, so no need to cast.
block_allocator: always check ptr validity on free
Fixed warning with ifdefs.
block_allocator: do not execute empty loop
Merge branch '8.0' into gobject-introspection
block_allocator: do not allocate too large bitvector
block_allocator: allocate pages only if needed and some cleanup
block_allocator: more helpful debug on free list corruption
block_allocator: fixed --with-debug
block_allocator: fixed ba_walk and some cleanup
block_allocator: added ba_walk for iterating over allocated blocks
block_allocator: reuse pages used by the gc
Fixed count_all in the block allocator.
The memory_info() function had a tendency to return values like
0 objects, total 120Kb
added fallbacks for valgrind mempool macros
block_allocator: added support for alignment
pike_memory: replaced old block allocator
Merge remote-tracking branch 'origin/7.9' into pdf
block_allocator: check for valgrind macros
block_allocator: use valgrind macros
block_alloc: memusage stats could overflow
Added new block allocator. It dramatically speeds up free when
allocating many blocks and deallocation happens non-linearly.
new block alloc
use memcheck macros when USE_VALGRIND is defined
add GJAlloc as a bundle
pike_memory: new helper cmemset()
let's go separate ways, together.
use free_blk to save one memory dereference on free/alloc
set p->first = NULL when full
use aligned memory pages to find page without hash
let's try another approach
optimize hash lookup
added more sophisticated statistics support
include malloc headers for stats
moved counter into stats struct
stats moved into separate struct for clarity
got rid of list, take advantage of power of two hashtable sizes
errbuf ... never used except by the micro benchmark
more meaningful counters
redo fast paths!
traverse both buckets at once
macro based alignment
blueprint support and corrected rounding up of page_size
have page struct at beginning of page
Revert "blueprint support for faster? initialization"
This reverts commit 6dbb91eab2145159e1f9c2b88138551f1b599ed9.
join the two slow paths in free and remove left in favour of p->used
make ba_free fast path smaller
pre-undo (post-do) intermediate commit.
help gcc with decision making
reduce size of ba_free to help inlining
do late chaining
properly grow page_size
additional COUNTS in htable lookup
use base bitvector.h
working for now
fixed memory initialization
working state (without debug)
fixed inline -> INLINE
blueprint support for faster? initialization
set prev only if first
fixed some compilation problem
fixed page indexing
added grow/shrink support and some fast paths
use chained buckets instead of open allocation
used unsigned int
fixed linked list mix-up
added some debug
keep 3 empty pages around
block_allocator: initial commit
THIS IS TEMPORARY