Slow down road speeds in cities

Great! Coincidentally I’m doing some new work around this at the moment. Nothing significant is changing in speedregions.core; instead I’m building some tools to let me define speed regions in ODL Studio by selecting postcode areas, so you’ll be able to draw speed regions on a map. It’ll be open-sourced - I’ll keep you posted on this. We should be able to set this up with German postcode areas / regions / zips too, using the data here: https://www.suche-postleitzahl.org/downloads

I’ve just figured out a speedup for building the quadtree-like lookup structure… The code is checked in but not well tested or profiled yet. It’s a big speed-up (something like 10x, maybe even 100x). The secret is that when testing a polygon against a node (i.e. a rectangle) for intersection etc., I now get and re-use the intersection geometry between the two (i.e. the part of the polygon that falls within the rectangle). All subsequent tests on the children of that node then use this intersection geometry instead (which itself gets recursively intersected). So as you recurse down the tree for a single polygon, you’re only ever testing intersections against smaller and smaller fragments of it - just the part that intersected the parent node.

Basically it massively cuts down the geometry tests when building the graph. I should have spotted this earlier…
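
To make that concrete, the build recursion now looks roughly like the sketch below (simplified, with illustrative class names rather than the actual speedregions.core code), using JTS:

```java
import com.vividsolutions.jts.geom.Envelope;
import com.vividsolutions.jts.geom.Geometry;
import com.vividsolutions.jts.geom.GeometryFactory;

/**
 * Illustrative sketch only: recursively assign a polygon to quadtree nodes,
 * re-using the clipped intersection geometry at each level so children only
 * test against the fragment of the polygon that survived the parent test.
 */
class QuadNode {
    final Envelope bounds;
    QuadNode[] children;     // null for leaf nodes
    Object regionId;         // payload carried by a node, e.g. a speed region id

    QuadNode(Envelope bounds) { this.bounds = bounds; }
}

class QuadTreeBuilder {
    private final GeometryFactory factory = new GeometryFactory();
    private final double minCellSizeDeg;   // stop splitting below this cell size

    QuadTreeBuilder(double minCellSizeDeg) { this.minCellSizeDeg = minCellSizeDeg; }

    void insert(QuadNode node, Geometry clippedPolygon, Object regionId) {
        Geometry cell = factory.toGeometry(node.bounds);

        // Clip the (already clipped) polygon to this node's rectangle.
        // At each level we intersect against an ever smaller fragment,
        // not the full original polygon - that is where the saving comes from.
        Geometry local = clippedPolygon.intersection(cell);
        if (local.isEmpty()) {
            return;                           // polygon does not reach this node
        }
        if (local.covers(cell) || tooSmallToSplit(node.bounds)) {
            node.regionId = regionId;         // node fully covered (or at max depth)
            return;
        }
        if (node.children == null) {
            node.children = split(node.bounds);
        }
        for (QuadNode child : node.children) {
            insert(child, local, regionId);   // pass the fragment down, not the original
        }
    }

    private boolean tooSmallToSplit(Envelope e) {
        return e.getWidth() <= minCellSizeDeg || e.getHeight() <= minCellSizeDeg;
    }

    private QuadNode[] split(Envelope e) {
        double mx = 0.5 * (e.getMinX() + e.getMaxX());
        double my = 0.5 * (e.getMinY() + e.getMaxY());
        return new QuadNode[] {
            new QuadNode(new Envelope(e.getMinX(), mx, e.getMinY(), my)),
            new QuadNode(new Envelope(mx, e.getMaxX(), e.getMinY(), my)),
            new QuadNode(new Envelope(e.getMinX(), mx, my, e.getMaxY())),
            new QuadNode(new Envelope(mx, e.getMaxX(), my, e.getMaxY()))
        };
    }
}
```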

I’m still testing, but for a UK dataset with 200m cell resolution, building the lookup is over 10x quicker than building the road network graph (with contraction hierarchies on). So we wouldn’t add much to the graph build time by adding this feature, and we don’t have to bother with precompiling the lookup and reusing a compiled version across different graph builds (which would make the whole process much more complex).

:thumbsup: This sounds nice. We currently have a very simple spatial lookup procedure which is not yet merged, but I use it in another branch (landmarks). Do you think your work is just a different implementation of the index, or is it significantly different and you’d prefer that we use it instead of merging the current lookup first and adding your solution later?

I’m not sure. Can you give me a short overview of how your index works or point me to the source file? The minimum development path (and so most likely to get done in the short term) would just be to:

  1. include speedregions.core as a dependency in reader-osm and
  2. wire it in a similar(ish) way to https://github.com/PGWelch/speedregions/blob/master/experimental.ghlatest/src/main/java/com/opendoorlogistics/speedregions/graphhopper/ExperimentalUseSpeedRegionsWithCar.java

Feedback still welcome / wanted on how you want to wire it in exactly : )
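
To give a feel for the sort of hook I have in mind, here’s a rough sketch (the SpeedRegionLookup / SpeedRule names are placeholders, not real speedregions.core or GraphHopper classes):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: the names below are placeholders for this discussion,
// not the actual speedregions.core or GraphHopper APIs.
interface SpeedRegionLookup {
    /** Returns the speed rule for a coordinate, or null if no region applies. */
    SpeedRule findRegion(double lat, double lon);
}

class SpeedRule {
    double multiplier = 1.0;          // e.g. 0.7 = slow this region down by 30%
    Map<String, Double> maxSpeedByHighwayType = new HashMap<>();
}

class SpeedRegionAwareSpeedCalculator {
    private final SpeedRegionLookup lookup;

    SpeedRegionAwareSpeedCalculator(SpeedRegionLookup lookup) { this.lookup = lookup; }

    /**
     * Called once per OSM way during import, with the encoder's normal speed
     * estimate and a representative coordinate of the way (e.g. its first node).
     */
    double adjustSpeed(String highwayTag, double encoderSpeedKph, double lat, double lon) {
        SpeedRule rule = lookup.findRegion(lat, lon);
        if (rule == null) {
            return encoderSpeedKph;   // outside all speed regions: unchanged
        }
        Double cap = rule.maxSpeedByHighwayType.get(highwayTag);
        double speed = encoderSpeedKph * rule.multiplier;
        return cap != null ? Math.min(speed, cap) : speed;
    }
}
```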

I am likely to make some other improvements to speedregions.core in the near to medium term. It might therefore be better to keep it as a separate project that you link to for the moment, so those improvements can be fed in too. For example, I’ve recently come across a situation where East-West traffic is significantly slower than North-South traffic (a city on a grid plan). It’s therefore possible I might add some dependency on road orientation in the future.
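
Purely to illustrate the orientation idea (nothing like this exists in speedregions.core today, and the names are made up), it could be as simple as classifying each edge by its dominant direction:

```java
/**
 * Sketch of how an orientation-dependent factor could work - not implemented,
 * just illustrating the idea mentioned above.
 */
class OrientationSpeedFactor {
    private final double eastWestFactor;    // e.g. 0.8 if E-W traffic is slower
    private final double northSouthFactor;  // e.g. 1.0

    OrientationSpeedFactor(double eastWestFactor, double northSouthFactor) {
        this.eastWestFactor = eastWestFactor;
        this.northSouthFactor = northSouthFactor;
    }

    /** Factor for an edge running from (lat1,lon1) to (lat2,lon2). */
    double factorFor(double lat1, double lon1, double lat2, double lon2) {
        // Approximate local bearing: scale the longitude delta by cos(latitude).
        double dx = (lon2 - lon1) * Math.cos(Math.toRadians(0.5 * (lat1 + lat2)));
        double dy = lat2 - lat1;
        // Closer to East-West if the longitude component dominates, else North-South.
        return Math.abs(dx) > Math.abs(dy) ? eastWestFactor : northSouthFactor;
    }
}
```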

The main class is this I think (see the full pull request for all changes). But @boldtrn can comment better on this subject as he has also had a closer look at your approach (?)

I’m using it to split e.g. Africa from the EU to create smaller subnetworks, so that my landmarks implementation performs better and, at the same time, the weighting (or distance) fits into a short value :wink:

I would prefer to avoid an external dependency (or at least it should be available via Maven Central), but as the license is also Apache (?) this is a minor issue IMO.

Direction-dependent stuff is interesting; we’ve thought about something similar when considering the morning rush hour (flow into the city centre) vs. the afternoon rush hour (flow out of the city centre).

Looking at that file, it’s a grid-based spatial lookup approach, so it would have issues scaling world-wide at a reasonable resolution due to the amount of memory used. For my UK test, a grid-based approach would need ~12 million cells to get 200m resolution, whereas the node-tree-based approach I’m using has about 120,000 nodes (depending on the complexity of the input geometry; e.g. a less detailed coastline would halve this). So ~1% of the cells.

My node class(es) aren’t memory-optimised at the moment and might need some work for worldwide usage at high resolution. For example, I’m storing geometry objects on them which could probably just be created when needed; the same goes for the lat/long bounds. This should be relatively straightforward but might need some profiling of the trade-off between memory usage and query speed.
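
As a rough illustration of one way to drop the stored bounds (sketch only, not the current code): each node’s envelope can be recomputed on the way down from the root’s envelope while descending, so it never needs to be held per node.

```java
import com.vividsolutions.jts.geom.Envelope;

/**
 * Rough illustration: nodes store only children and payload; the query
 * recomputes each child's envelope from its parent's as it descends.
 */
class CompactNode {
    CompactNode[] children;   // indexed 0..3 by quadrant, null for leaves
    Object regionId;
}

class CompactQuery {
    /** Descend from the root, deriving each child's envelope on the fly. */
    static Object lookup(CompactNode node, Envelope bounds, double lat, double lon) {
        if (node.children == null) {
            return node.regionId;
        }
        double mx = 0.5 * (bounds.getMinX() + bounds.getMaxX());
        double my = 0.5 * (bounds.getMinY() + bounds.getMaxY());
        int quadrant = (lon >= mx ? 1 : 0) + (lat >= my ? 2 : 0);
        CompactNode child = node.children[quadrant];
        if (child == null) {
            return node.regionId;             // nothing stored below this point
        }
        Envelope childBounds = new Envelope(
            quadrant % 2 == 0 ? bounds.getMinX() : mx,
            quadrant % 2 == 0 ? mx : bounds.getMaxX(),
            quadrant < 2 ? bounds.getMinY() : my,
            quadrant < 2 ? my : bounds.getMaxY());
        return lookup(child, childBounds, lat, lon);
    }
}
```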

I can add it to Maven Central. Long-term it could just be added to the Graphhopper project, but this might be better done when I’m not likely to add much more to it.

If I put speedregions.core on Maven and cleaned up / clearly documented the example class with the experimental integration into GraphHopper, would you be able to wire it into GraphHopper? As you can see, the ‘wiring in’ code is quite small, and I’m not keen on guessing how you’d want it wired in, in terms of tags etc…

Exactly, that is true. The idea of this approach was to map countries, or at least bigger regions, not cities, so I use a resolution of about 10-20km. If a point close to a border is requested, I do a polygon-contains-point lookup, so the solution is very exact and the lookup is very fast for most cases. Creating an array with world-wide coverage using this approach is easy and memory-efficient. A higher resolution like 200m would probably not scale very well.
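
In code it is roughly something like this (a simplified sketch with made-up names, not the actual code from the pull request):

```java
import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.Geometry;
import com.vividsolutions.jts.geom.GeometryFactory;

/**
 * Sketch of the coarse-grid approach described above (illustrative names only):
 * each cell is either fully inside one region, or flagged as a border cell,
 * in which case the candidate polygons are tested exactly.
 */
class GridRegionLookup {
    private static final int BORDER = -1;    // cell needs an exact test

    private final GeometryFactory factory = new GeometryFactory();
    private final double minLon, minLat, cellSizeDeg;
    private final int columns;
    private final int[] cellRegion;          // region id per cell, or BORDER
    private final Geometry[] regionPolygons; // indexed by region id

    GridRegionLookup(double minLon, double minLat, double cellSizeDeg, int columns,
                     int[] cellRegion, Geometry[] regionPolygons) {
        this.minLon = minLon;
        this.minLat = minLat;
        this.cellSizeDeg = cellSizeDeg;
        this.columns = columns;
        this.cellRegion = cellRegion;
        this.regionPolygons = regionPolygons;
    }

    /** Returns the region id for a point, or -1 if it lies in no region. */
    int findRegion(double lat, double lon) {
        int col = (int) ((lon - minLon) / cellSizeDeg);
        int row = (int) ((lat - minLat) / cellSizeDeg);
        int rows = cellRegion.length / columns;
        if (col < 0 || row < 0 || col >= columns || row >= rows) {
            return -1;                       // outside the grid entirely
        }
        int id = cellRegion[row * columns + col];
        if (id != BORDER) {
            return id;                       // fast path: cell lies entirely in one region
        }
        // Slow path near borders: exact point-in-polygon test.
        Geometry point = factory.createPoint(new Coordinate(lon, lat));
        for (int i = 0; i < regionPolygons.length; i++) {
            if (regionPolygons[i].contains(point)) {
                return i;
            }
        }
        return -1;
    }
}
```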

Integrating it into the GH core would not be optimal I think, since you’d have a dependency on an LGPL lib (com.vividsolutions.jts).

Yes exactly, that is where the magic happens :slight_smile:. Thanks for pinging me, I hadn’t seen this discussion yet.

I think wiring it into GH should be quite easy from a technical perspective. Some minor changes to the flag encoder. We would have to find a way to load it from the core and generate it in the OSMReader?

Clever!

The LGPL dependency is unfortunate, but I’ve not found a decent Apache (or similarly licensed) Java geometry library which can do complex stuff like getting the intersection geometry of two polygons.

It wouldn’t need to be a dependency of core though, only reader-osm (on the assumption we’re always applying it to OSM files). Whether the LGPL is a problem or not depends on your usage. [Usual disclaimer about not being a lawyer and this not being legal advice.] Using it behind a web service is probably OK; shipping a single fat jar to a client is probably not OK; shipping two fat jars (one GraphHopper, one with the LGPL dependencies) to a client and getting them to run it via the java command line, explicitly linking both jars themselves, is probably OK.

Anyway, I’m doing a few things related to speedregions at the moment. Once it’s cleaned up and better documented in a couple of weeks’ time, I’ll put speedregions.core on Maven Central and we’ll go from there.

Just stumbled over https://github.com/jchambers/jeospatial (BSD license) … but not sure if it’s a similar use case :slight_smile:

Me neither, I also did a bit of searching.

It looks very interesting :slight_smile: