Default-constructed objects of (boost::heap) handle_type are singular,
including the wrapped handle_type::iterator.
Apparently, MSVC's iterator debugging facilities strictly require that
a singular instance is only compared to another singular instance.
It is not possible to get check-comparable iterators from a non-singular
and a singular instance, as the owning containers will always mismatch.
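A minimal illustration of the checked-iterator behaviour with std::vector (not OSRM code, just the same comparison rules): value-initialized iterators may be compared to each other, but comparing one against a container-owned iterator trips MSVC's debug checks.

#include <cassert>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3};
    std::vector<int>::iterator singular{};  // value-initialized: singular
    std::vector<int>::iterator owned = v.begin();

    // Fine: two singular iterators compare equal (C++14 null forward iterators).
    assert(singular == std::vector<int>::iterator{});

    // With iterator debugging enabled MSVC asserts here ("iterators
    // incompatible"): the owning containers mismatch.
    // assert(owned == singular);

    (void)owned;
}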
Makes turn restrictions into dedicated structures and differentiates between them via a variant.
This ensures that we do not accidentally mix up ID types within our application.
In addition, it improves restriction-handling performance by parsing all edges
only once, at the cost of (at the time of writing) 22 MB of additional main memory usage.
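A rough sketch of the idea (illustrative types only, not OSRM's actual definitions; std::variant stands in for whatever variant implementation is used): strong ID types and dedicated restriction structs make it a compile-time error to mix IDs up.

#include <cstdint>
#include <variant>

enum class NodeID : std::uint64_t {}; // strong typedefs: a WayID cannot be
enum class WayID : std::uint64_t {};  // passed where a NodeID is expected

// A via-node restriction turns directly at a node...
struct NodeRestriction
{
    WayID from;
    NodeID via;
    WayID to;
};

// ...while a via-way restriction spans an intermediate way.
struct WayRestriction
{
    WayID from;
    WayID via;
    WayID to;
};

using TurnRestriction = std::variant<NodeRestriction, WayRestriction>;

// std::visit / holds_alternative dispatch on the concrete restriction kind.
bool IsViaWay(const TurnRestriction &restriction)
{
    return std::holds_alternative<WayRestriction>(restriction);
}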
This adds the ability to mark ways with a user-defined
class in the profile. This class information will be included
in the response as a property of the RouteStep object.
This PR uses TBB's internal atomics for an atomic CAS on non-atomic data.
Corresponding PR: https://github.com/Project-OSRM/osrm-backend/pull/4199
Other options:
* use sequential update
* use an internal packed vector lock -> makes the packed vector non-movable
* use the Boost.Interprocess atomics implementation -> outdated and 32 bit only
* use GLib atomics -> requires a new dependency
* wait for as_atomic (https://isocpp.org/blog/2014/05/n4013)
* use C11 _Atomic and atomic_compare_exchange_weak -> not possible to mix C++11 and C11
* use the builtin functions GCC __sync_bool_compare_and_swap and MSVC _InterlockedCompareExchange64 -> possible, but requires proper testing, e.g.:
#if defined(_MSC_VER)
#include <intrin.h>
#endif

// Returns true iff *ptr held old_value and has been swapped to new_value.
bool CompareAndSwapPointer(void *volatile *ptr,
                           void *new_value,
                           void *old_value)
{
#if defined(_MSC_VER)
    // _InterlockedCompareExchangePointer returns the previous value;
    // the swap succeeded iff that still equals old_value.
    return _InterlockedCompareExchangePointer(ptr, new_value, old_value) == old_value;
#elif (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) > 40100
    return __sync_bool_compare_and_swap(ptr, old_value, new_value);
#else
#error No implementation
#endif
}
* use Boost.Atomic -> requires a new dependency
// CAS loop over a raw word via Boost.Atomic's detail layer:
WordT local_lower_word = lower_word, new_lower_word;
do
{
    // Recompute the desired word from the latest observed value.
    new_lower_word = set_lower_value<WordT, T>(local_lower_word,
                                               lower_mask[internal_index.element],
                                               lower_offset[internal_index.element],
                                               value);
    // On failure compare_exchange_weak reloads the current value of
    // lower_word into local_lower_word, so the next iteration starts
    // from fresh data.
} while (!boost::atomics::detail::operations<sizeof(WordT), false>::compare_exchange_weak(
    lower_word,
    local_lower_word,
    new_lower_word,
    boost::memory_order_release,
    boost::memory_order_relaxed));
The new numbering uses the partition information
to sort border nodes first, keeping storages
that are indexed by border node ID compact.
We also get improved cache performance for free,
since we can recursively sort the nodes by cell ID.
This implements issue #3779.
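A hedged sketch of the idea (function and container names invented): build a permutation that places border nodes first and then groups nodes by cell ID. Only one level is shown; the commit applies this recursively per level.

#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

// order[new_position] == old node ID
std::vector<std::uint32_t> MakePermutation(const std::vector<bool> &is_border,
                                           const std::vector<std::uint32_t> &cell_id)
{
    std::vector<std::uint32_t> order(is_border.size());
    std::iota(order.begin(), order.end(), 0u);
    std::sort(order.begin(), order.end(),
              [&](std::uint32_t lhs, std::uint32_t rhs) {
                  // border nodes first, so data indexed by border node ID
                  // lives in a small dense prefix ...
                  if (is_border[lhs] != is_border[rhs])
                      return bool(is_border[lhs]);
                  // ... then grouped by cell, so nodes of one cell are
                  // contiguous in memory (better cache behaviour)
                  return cell_id[lhs] < cell_id[rhs];
              });
    return order;
}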
Remove coordinate validity checks for internal values
(i.e. stuff that's stored in our datafiles). Keep those checks for user-supplied values
(i.e. coordinates coming from files during preprocessing, or coordinates supplied by users
during requests).
This fixes issue #3952. The new approach pre-computes masks for fast
access. Since elements can span a word boundary, we need mask and
offset pairs for both the lower and the upper word.
Due to a defect in the C++14 standard the mask computation is not
accepted as constexpr, but it would work in C++17.
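A minimal sketch of the pre-computation, assuming 64-bit words and an element width below 64 bits (names are illustrative, not OSRM's): each slot in the repeating bit pattern gets a mask/offset pair for the lower word and, when the element straddles a boundary, for the upper word.

#include <cstdint>

using WordT = std::uint64_t;
constexpr int WORD_BITS = 64;
constexpr int ELEMENT_BITS = 33; // any width < 64; 33 forces straddling slots

struct Slot
{
    WordT lower_mask, upper_mask; // upper_mask == 0 when nothing spills over
    int lower_offset;             // start bit inside the lower word
    int upper_shift;              // bits of the element taken from the lower word
};

constexpr Slot MakeSlot(int element)
{
    const int bit = (element * ELEMENT_BITS) % WORD_BITS;
    const int in_lower = bit + ELEMENT_BITS <= WORD_BITS ? ELEMENT_BITS : WORD_BITS - bit;
    const int in_upper = ELEMENT_BITS - in_lower;
    return Slot{((WordT{1} << in_lower) - 1) << bit,
                in_upper > 0 ? (WordT{1} << in_upper) - 1 : 0,
                bit, in_lower};
}

// Reading element i then only shifts and masks with the precomputed values:
//   v  = (words[w]     & s.lower_mask) >> s.lower_offset;
//   v |= (words[w + 1] & s.upper_mask) << s.upper_shift;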
* optionally include condition and via node coords in InputRestrictionContainer
* only write conditionals to disk, custom serialization for restrictions
* conditional turn lookup, reuse timezone validation from
extract-conditionals (see the sketch after this list)
* adapt updater to use coordinates/osm ids, remove internal to external map
* add utc time now parameter to contraction
* only compile timezone code where libshp is found, adapt test running
* slight refactor, more tests
* catch invalid via nodes in restriction parsing, set default cucumber
origin to guinée
* add another run to test mld routed paths
* cosmetic review changes
* Simplify Timezoner for the Windows build
* Split declaration and parsing parts for opening hours
* adjust conditional tests to run without shapefiles
* always include parse conditionals option
* Adjust travis timeout
* Added dummy TZ shapefile with test timezone polygons
* [skip ci] update changelog
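For the conditional turn lookup above, a hedged sketch (all names and the simplified condition are invented; real conditions use the OSM opening-hours grammar): shift the UTC "time now" by the offset found for the restriction's timezone, then test the resulting wall-clock time against the condition.

#include <ctime>

struct Condition
{
    int first_weekday, last_weekday; // 1 = Monday ... 7 = Sunday
    int first_hour, last_hour;       // active during [first_hour, last_hour)
};

// True if the restriction is active at utc_now in a zone that is
// tz_offset_seconds ahead of UTC.
bool IsActive(const Condition &c, std::time_t utc_now, long tz_offset_seconds)
{
    std::time_t local = utc_now + tz_offset_seconds; // wall clock at the via node
    std::tm parts = *std::gmtime(&local);            // decompose without re-shifting
    const int weekday = parts.tm_wday == 0 ? 7 : parts.tm_wday; // ISO weekday
    return weekday >= c.first_weekday && weekday <= c.last_weekday &&
           parts.tm_hour >= c.first_hour && parts.tm_hour < c.last_hour;
}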
This graph enables efficient boundary edge scans at each level.
Currently this needs about |V|*|L| bytes of storage.
We can optimize this when the highest boundary node ID is << |V|.
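One way to picture the |V|*|L| cost (an assumed layout, not necessarily the actual one): a dense flag per node and level. If boundary nodes are renumbered into a small prefix, a per-level cutoff ID would replace the table entirely.

#include <cstdint>
#include <vector>

struct BoundaryFlags
{
    std::vector<std::uint8_t> flags; // |V| * |L| bytes
    std::uint32_t num_levels;

    bool IsBoundary(std::uint32_t node, std::uint32_t level) const
    {
        return flags[node * num_levels + level] != 0;
    }
};

// With boundary nodes sorted to the front (see the renumbering commit above),
// |L| cutoffs suffice: node is a boundary node at `level` iff
// node < first_non_boundary[level].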