Runtime: Fix dmalloc_unregister_all().
Second attempt at fixing dmalloc_unregister_all().
Runtime: Improved output from print_allocator() slightly.
Runtime: Fix typo in ba_walk().
Only the first block of each page was actually walked
by ba_walk(). This caused e.g. dmalloc_unregister_all()
to fail to unregister all blocks, which triggered
failures if/when any blocks were reused. Similar issues
were likely for the other code that uses ba_walk().
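For reference, a minimal sketch of the intended walk, assuming a
simplified page layout (the real struct ba_page and its allocated-block
test differ; ba_block_is_allocated() is hypothetical):

    for (p = a->first_page; p; p = p->next) {      /* every page */
        char *blk = (char*)p->data;
        for (i = 0; i < p->blocks; i++) {          /* every block, not just the first */
            if (ba_block_is_allocated(p, i))       /* hypothetical check */
                callback(blk + i * a->block_size, data);
        }
    }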
block_allocator: Remove unused alignment functionality
Merge remote-tracking branch 'origin/master' into new_utf8
Merge commit '722771973bd' into patches/lyslyskom22891031
* commit '722771973bd': (6177 commits)
Verify that callablep responses are aligned with reality.
Merge commit '2470270f500c728d10b8895314d8d8b07016e37b' into grubba/typechecker-automap
* commit '2470270f500c728d10b8895314d8d8b07016e37b': (18681 commits)
Removed the old typechecker.
block_allocator: Correctly signal ba_free_all() to valgrind
Merge remote-tracking branch 'origin/8.1' into gobject-introspection
Merge commit '75c9d1806f1a69ca21c27a2c2fe1b4a6ea38e77e' into patches/pike63
* commit '75c9d1806f1a69ca21c27a2c2fe1b4a6ea38e77e': (19587 commits)
Merge branch '8.1' into peter/travis
Avoid include cycle.
GC: Inline markers into datastructures
Initial work is done. This may have worse performance characteristics
for now. Further work will use the iterator provided by the block
allocator instead of linked lists to visit all objects, and potentially
optimize the memory layout of the marker struct.
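A hedged sketch of what that follow-up could look like: visiting every
marker through ba_walk() instead of chasing a linked list (the callback
signature and gc_check_marker() are simplified assumptions):

    static void visit_marker(void *block, void *ctx) {
        struct marker *m = block;      /* markers now live inside the struct */
        gc_check_marker(m, ctx);       /* hypothetical per-marker work */
    }

    ba_walk(&gc_marker_allocator, visit_marker, NULL);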
unsigned INT64 -> UINT64
inline is part of C99
We do have... excessive amounts of inline, incidentally. :)
stdlib.h is included from global.h
block_allocator: improved error messages
block_allocator: removed unwanted PMOD_EXPORT
block_allocator: align allocations to page size
Try to choose the initial page size such that it fits into a multiple of
4K including headers and malloc overhead.
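Illustrative arithmetic only; MALLOC_OVERHEAD and the header size here
are assumptions. The idea is to adjust the block count until header +
blocks + malloc overhead fills a multiple of 4K exactly:

    size_t raw   = sizeof(struct ba_page) + MALLOC_OVERHEAD
                 + (size_t)blocks * block_size;
    size_t total = (raw + 4095) & ~(size_t)4095;   /* next multiple of 4K */
    blocks = (total - sizeof(struct ba_page) - MALLOC_OVERHEAD) / block_size;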
block_allocator: fixed a bug in ba_sort_free_list
This could possibly happen after syntax errors when freeing
the compiler data.
Stdio.Buffer: use round_up*() functions to enlarge buffer.
Fixes an infinite loop due to overflow. Also changed the round_up*() functions
to return 0 on overflow, and made 1 the next power of two after 0.
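A sketch of the changed semantics for the 32-bit case, using the usual
bit-smearing trick (the real round_up32()/round_up64() may differ in
detail):

    #include <stdint.h>

    static inline uint32_t round_up32(uint32_t v) {
        if (!v) return 1;              /* 1 is the next power of two after 0 */
        v--;
        v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
        v |= v >> 8;  v |= v >> 16;
        return v + 1;                  /* wraps to 0 on overflow, by design */
    }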
Build: Support compilation with compilers other than gcc again.
Replace all uses of __attribute((unused)) with PIKE_ATTRIBUTE_UNUSED.
Replaced all remaining uses of __attribute() and __attribute__().
build: more deattributification for Windows
block_allocator: removed some INLINE and warnings for unused parameters
malloc() nowadays returns void*, so there is no need to cast.
block_allocator: always check ptr validity on free
Fixed warning with ifdefs.
block_allocator: do not execute empty loop
block_allocator: do not allocate too large bitvector
block_allocator: allocate pages only if needed and some cleanup
block_allocator: more helpful debug on free list corruption
block_allocator: reuse pages used by the gc
block_allocator: fixed --with-debug
block_allocator: fixed ba_walk and some cleanup
block_allocator: added ba_walk for iterating over allocated blocks
Fixed count_all in the block allocator.
The memory_info() function had a tendency to return values like
    0 objects, total 120Kb
added fallbacks for valgrind mempool macros
block_allocator: added support for alignment
pike_memory: replaced old block allocator
block_allocator: check for valgrind macros
block_allocator: use valgrind macros
block_alloc: memusage stats could overflow
Added new block allocator. It dramatically speeds up free() when many
blocks are allocated and deallocation happens nonlinearly.
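A usage sketch; the exact function names and parameters below are
assumptions, not the documented API:

    struct block_allocator a;
    ba_init(&a, sizeof(struct marker), 128);  /* block size, initial block count */
    struct marker *m = ba_alloc(&a);
    /* ... */
    ba_free(&a, m);    /* stays cheap even when frees arrive out of order */
    ba_destroy(&a);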
rename aligned_alloc to xalloc_aligned
C11 defines aligned_alloc. This can lead to all kinds of confusion, so
let's rename our internal function.
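For reference, C11 reserves the name with the prototype below, so an
internal helper spelled the same way invites confusion; the renamed
signature shown here is an assumption for illustration:

    void *aligned_alloc(size_t alignment, size_t size);   /* C11, <stdlib.h> */
    void *xalloc_aligned(size_t size, size_t alignment);  /* assumed internal shape */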
block_allocator: use Valgrind macros in the right order
When sorting the free list during ba_walk(), the Valgrind macros were
used in the wrong order. This led to spurious Valgrind warnings.
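A sketch of the corrected ordering around a free-list read, using the
standard memcheck macros (struct ba_block_header is simplified): make
the header addressable before touching its next pointer, and hide it
again only afterwards:

    #include <valgrind/memcheck.h>

    VALGRIND_MAKE_MEM_DEFINED(b, sizeof(struct ba_block_header));
    next = b->next;                 /* safe: the block is addressable here */
    VALGRIND_MAKE_MEM_NOACCESS(b, sizeof(struct ba_block_header));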
Fix a bunch of Clang warnings.
Valgrind: suppress some warnings
block_allocator: do not keep empty pages around
In certain cases the block allocator kept around free pages. This
happened with markers used by the gc.
Merge remote-tracking branch 'origin/8.0' into string_alloc
block_allocator: fix ba_walk() and improve valgrind support
Blocks in the free list are now marked as free while the callback is
running. This helps detect invalid accesses. Also fixed a bug in
the free list sort.
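Roughly, as a hedged sketch rather than the actual code: before the
callback runs over the allocated blocks, everything on the free list is
hidden from memcheck, so a callback straying into freed memory fails
immediately:

    for (b = p->free_list; b; b = next) {
        next = b->next;                              /* read before hiding */
        VALGRIND_MAKE_MEM_NOACCESS(b, a->block_size);
    }
    /* ... invoke the callback on each allocated block ... */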
block_allocator: do not round up powers of 2
The family of round_up* functions rounds up to the next power of two.
This made most block allocators' initial areas twice as big as intended.
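The effect, illustrated with assumed values rather than actual test
output:

    round_up32(4096);   /* before the fix: 8192 (power of two rounded up anyway) */
    round_up32(4096);   /* after the fix:  4096 */
    round_up32(4097);   /* unchanged:      8192 */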
Merge branch '8.0' into gobject-introspection
Merge remote-tracking branch 'origin/7.9' into pdf
new block alloc
use memcheck macros when USE_VALGRIND is defined
add GJAlloc as a bundle
pike_memory: new helper cmemset()
let's go separate ways, together.
use free_blk to save one memory dereference on free/alloc
set p->first = NULL when full
use memaligned memory pages to find page without hash
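The trick, sketched: when pages come from memalign() and so start on a
page_size boundary, the owning page of any block is a single mask away,
and free needs no hash lookup (uintptr_t from <stdint.h>):

    struct ba_page *p =
        (struct ba_page*)((uintptr_t)ptr & ~((uintptr_t)page_size - 1));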
let's try another approach
optimize hash lookup
moved counter into stats struct
added more sophisticated statistics support
include malloc headers for stats
stats moved into separate struct for clarity
got rid of list, take advantage of power of two hashtable sizes
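With power-of-two table sizes the modulo in the lookup reduces to a
mask, e.g.:

    size_t idx = hash & (htable_size - 1);   /* htable_size == 1 << n */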
errbuf ... never used except for micro benchmarks
more meaningful counters
redo fast paths!
traverse both buckets at once
macro based alignment
blueprint support and corrected rounding up of page_size
additional COUNTs in htable lookup
have page struct at beginning of page
Revert "blueprint support for faster? initialization"
This reverts commit 6dbb91eab2145159e1f9c2b88138551f1b599ed9.
make ba_free fast path smaller
pre-undo (post-do) intermediate commit.
help gcc with decision making
reduce size of ba_free to help inlining
do late chaining
properly grow page_size
join the two slow paths in free and remove left in favour of p->used
fixed inline -> INLINE
use base bitvector.h
set prev only if first
working for now
fixed memory initialization
working state (without debug)
blueprint support for faster? initialization
fixed some compilation problems
fixed page indexing
added grow/shrink support and some fast paths
use chained buckets instead of open addressing
use unsigned int
THIS IS TEMPORARY
fixed linked list fuckup
keep 3 empty pages around
block_allocator: initial commit
added some debug