Default-constructed objects of (boost::heap) handle_type are singular,
including the wrapped handle_type::iterator.
Apparently, MSVC's iterator debugging facilities strictly require that
a singular instance is only ever compared to another singular instance.
It is not possible to get checked-comparable iterators from a non-singular
and a singular instance, as the owning containers will always mismatch.
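A minimal illustration of the resulting rule, using plain std::vector iterators in place of the wrapped heap handles (the commented-out assert is the comparison MSVC's checks reject):

    #include <cassert>
    #include <vector>

    int main()
    {
        std::vector<int> vec{1, 2, 3};
        std::vector<int>::iterator singular_a, singular_b; // default-constructed

        assert(singular_a == singular_b); // OK: singular compared to singular

        // Rejected by MSVC iterator debugging ("iterators incompatible"):
        // the owning container of vec.begin() never matches the singular one.
        // assert(vec.begin() != singular_a);
        (void)vec;
        return 0;
    }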
Makes turn restrictions into dedicated structures and differentiates between them via a variant.
Ensures that we do not accidentally mix up ID types within our application.
In addition, this improves restriction handling performance by parsing all edges
only once, at the cost of (at the time of writing) 22 MB of main memory usage.
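A schematic of that layout (type and member names are illustrative assumptions, not the codebase's actual definitions): strongly typed IDs plus one dedicated struct per restriction kind, joined in a variant.

    #include <cstdint>
    #include <variant>

    // Strong ID types: mixing a NodeID into a WayID slot no longer compiles.
    enum class NodeID : std::uint64_t {};
    enum class WayID : std::uint64_t {};

    // One dedicated structure per restriction kind.
    struct NodeRestriction
    {
        WayID from;
        NodeID via; // restriction pivots on a node
        WayID to;
    };

    struct WayRestriction
    {
        WayID from;
        WayID via; // restriction pivots on a way
        WayID to;
    };

    // The two kinds are differentiated via a variant instead of flag fields.
    using TurnRestriction = std::variant<NodeRestriction, WayRestriction>;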
This adds the ability to mark ways with a user-defined
class in the profile. This class information will be included
in the response as a property of the RouteStep object.
This PR uses TBB's internal atomics for atomic CAS on non-atomic data.
Corresponding PR https://github.com/Project-OSRM/osrm-backend/pull/4199
Other options:
* use a sequential update
* use an internal packed vector lock -> makes the packed vector non-movable
* use the boost.interprocess atomics implementation -> outdated and only a 32-bit version
* use glib atomics -> requires a new dependency
* wait for https://isocpp.org/blog/2014/05/n4013 as_atomic (a sketch follows this list)
* use C11 _Atomic and atomic_compare_exchange_weak -> not possible to mix C++11 and C11
* use the builtins gcc __sync_bool_compare_and_swap and MSVC _InterlockedCompareExchange64 -> possible, but requires proper testing:
    #if defined(_MSC_VER)
    #include <intrin.h>
    #endif

    bool CompareAndSwapPointer(void *volatile *ptr,
                               void *new_value,
                               void *old_value)
    {
    #if defined(_MSC_VER)
        // _InterlockedCompareExchangePointer returns the initial value of *ptr;
        // the swap succeeded iff that initial value equals old_value.
        return _InterlockedCompareExchangePointer(ptr, new_value, old_value) == old_value;
    #elif (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) > 40100
        return __sync_bool_compare_and_swap(ptr, old_value, new_value);
    #else
    #error No implementation
    #endif
    }
* use Boost.Atomic -> requires a new dependency
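For reference, the as_atomic proposal mentioned in the list eventually landed as std::atomic_ref in C++20; a minimal sketch of a CAS on non-atomic storage with it, assuming a C++20 compiler (which was not available at the time of this PR):

    #include <atomic>
    #include <cstdint>

    // CAS on plain, non-atomic storage without changing the stored type.
    bool cas_word(std::uint64_t &word, std::uint64_t expected, std::uint64_t desired)
    {
        std::atomic_ref<std::uint64_t> ref{word};
        return ref.compare_exchange_weak(expected, desired,
                                         std::memory_order_release,
                                         std::memory_order_relaxed);
    }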
    WordT local_lower_word = lower_word, new_lower_word;
    do
    {
        // Recompute the update against the most recently observed word value.
        new_lower_word = set_lower_value<WordT, T>(local_lower_word,
                                                   lower_mask[internal_index.element],
                                                   lower_offset[internal_index.element],
                                                   value);
        // On failure, compare_exchange_weak stores the current value of
        // lower_word into local_lower_word, so the next iteration recomputes
        // the masked update against fresh data.
    } while (!boost::atomics::detail::operations<sizeof(WordT), false>::compare_exchange_weak(
        lower_word,
        local_lower_word,
        new_lower_word,
        boost::memory_order_release,
        boost::memory_order_relaxed));
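The same retry idiom, reduced to a self-contained sketch with std::atomic (all names hypothetical; the real code operates on raw words through the detail operations shown above):

    #include <atomic>
    #include <cstdint>

    using WordT = std::uint64_t;

    // Atomically replace a masked bit range inside a shared word.
    void store_masked(std::atomic<WordT> &word, WordT mask, unsigned offset, WordT value)
    {
        WordT local = word.load(std::memory_order_relaxed);
        WordT desired;
        do
        {
            // Recompute against the observed value; on failure,
            // compare_exchange_weak writes the current value back into local.
            desired = (local & ~mask) | ((value << offset) & mask);
        } while (!word.compare_exchange_weak(local, desired,
                                             std::memory_order_release,
                                             std::memory_order_relaxed));
    }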
The new numbering uses the partition information
to sort border nodes first, compacting the storage
of data that needs to be indexed by border node ID.
We also get improved cache performance for free,
since we can recursively sort the nodes by cell ID.
This implements issue #3779.
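A single-level sketch of the renumbering idea (all names assumed; the actual implementation sorts recursively across all partition levels):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct NodeEntry
    {
        std::uint32_t node_id;
        std::uint32_t cell_id;
        bool is_border;
    };

    // Returns permutation[old_id] = new_id: border nodes receive the lowest
    // new IDs, and nodes are grouped by cell so per-cell data stays contiguous.
    std::vector<std::uint32_t> MakePermutation(std::vector<NodeEntry> nodes)
    {
        std::sort(nodes.begin(), nodes.end(), [](const NodeEntry &lhs, const NodeEntry &rhs) {
            if (lhs.is_border != rhs.is_border)
                return lhs.is_border;             // border nodes first
            return lhs.cell_id < rhs.cell_id;     // then sorted by cell ID
        });

        std::vector<std::uint32_t> permutation(nodes.size());
        for (std::uint32_t new_id = 0; new_id < nodes.size(); ++new_id)
            permutation[nodes[new_id].node_id] = new_id;
        return permutation;
    }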