Flexible Vehicle Profiles

Continuing the discussion from Disabling CH not allowed on the server-side?:

This discussion is about this flexible branch.

This feature was merged in master. I have written a small tutorial about it here: Flexible / Configurable Weighting Update


is there any document explaining the format?

Currently not

note that the GET query for the same areas works:

It is not identical, as a flex query automatically sets ch.disable=true to allow for flexibility. The error you mentioned comes from the fact that we have some kind of ‘overload’ protection in place. You can increase this limit via e.g.

routing.max_visited_nodes: 2000000

Or just move the two locations closer to each other for this test to work.


@karussell thanks - but actually the path is pretty short, and the error No path found due to maximum nodes exceeded 1000000 seems to be related to the fact that I failed to set “max_speed” - is that expected?

This works:

{"request": { "points" : [[11.248630, 46.030340], [11.265450, 46.325830] ] }, "model": { "base":"car", "max_speed":160, "no_access": { "road_class": ["motorway"]} } }

This fails:

{"request": { "points" : [[11.248630, 46.030340], [11.265450, 46.325830] ] }, "model": { "base":"car", "no_access": { "road_class": ["motorway"]} } }
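As a client-side workaround, one can validate the model before posting (a Python sketch based on the observation above; treating max_speed as required is an assumption from this thread, and the field names are just the ones shown here):

```python
import json

def build_flex_request(points, base, max_speed, **model_extras):
    """Build a flex POST body; max_speed is treated as required here,
    since omitting it made the second query above fail."""
    if max_speed is None:
        raise ValueError("max_speed is required for a flex model")
    model = {"base": base, "max_speed": max_speed, **model_extras}
    return json.dumps({"request": {"points": points}, "model": model})

body = build_flex_request(
    [[11.248630, 46.030340], [11.265450, 46.325830]],
    base="car", max_speed=160,
    no_access={"road_class": ["motorway"]})
```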

Please can you suggest where I should start to check which attributes I can use in the JSON POST? I’m interested in toll avoidance as well as specifying fastest/shortest. In the meantime I’m trying to find out myself…

Again thanks for the hints you shared so far!

Yes, then this is expected. max_speed is marked as required, but somehow it does not throw an error if it is missing.

I’m interested on toll avoidance as well as specifying fastest/shortest.

Toll avoidance is not possible yet, but we can probably add it easily as a separate EnumEncodedValue (NO, ALL, HGV). Have a look into EncodingManager.addSurface and the related classes to see how to make the GraphHopper import and storage aware of it. Once added there, we need to add it in FlexModel, and then it can be used in FlexWeighting. (Currently this is a bit of work, but not complicated IMO. Still, we’ll improve this, especially so that query speed will be okay.)
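The three proposed values can be sketched as an enum (an illustration only, in Python; the actual implementation would be a Java EnumEncodedValue inside GraphHopper):

```python
from enum import Enum

class Toll(Enum):
    """Sketch of the proposed toll attribute values."""
    NO = "no"    # road without toll
    ALL = "all"  # toll applies to all vehicles
    HGV = "hgv"  # toll applies to heavy goods vehicles only
```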

fastest/shortest is also not yet possible. In general “shortest” does not make sense (I should write a blog post about this), and we have the alternative ShortFastestWeighting, which is probably the weighting you want, especially for car routing.

Ouch - it seems I have quite some homework to do to make that work.

On the other hand, "no_access": { "road_class": ["motorway"]} is ignored in flex as well. I’m wondering if I should focus more on this pull request for my purpose, as you suggested.

I’ll see… thanks again

Yes, the status is very alpha as I said. But I’ll add the toll avoidance as the next step, so you won’t have to do that at least :wink:

On the other hand, "no_access": { "road_class": ["motorway"]} is ignored in flex as well

This is strange, will investigate.


Have implemented the toll avoidance with this commit. Possible values are all, no and hgv.

See here how I used it to change the avoidance per request from the UI:

Edit: the max_speed currently only works as a hint for the A* algorithm and clearly needs some checks :wink: … or a renaming or something
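To see why an unchecked max_speed is risky (my own sketch, not GraphHopper code): A* can use the beeline distance divided by max_speed as an optimistic remaining-time estimate, and this estimate must never exceed the true remaining time, or optimality is lost:

```python
def beeline_estimate_s(beeline_dist_m, max_speed_kmh):
    """Optimistic remaining-time estimate used as an A* heuristic.
    km/h is converted to m/s by dividing by 3.6."""
    return beeline_dist_m / (max_speed_kmh / 3.6)

# With a correct upper bound the estimate stays below the real time:
real_time_s = 5000 / (100 / 3.6)        # 5 km actually driven at 100 km/h
estimate_s = beeline_estimate_s(5000, 120)  # assumes at most 120 km/h
assert estimate_s <= real_time_s  # admissible: never overestimates
```

If the configured max_speed is lower than speeds actually reachable on the graph, the estimate can exceed the real time and A* may return a suboptimal path, which is presumably why some checks are needed.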

YAML is now also accepted in addition to JSON. E.g.:

base: car
max_speed: 120
toll: ["all"]

So one can now define a new vehicle for import or querying from the same (YAML) file.


The problem here is similar to one that I stumbled over: the max_speed and base entries are required, but somehow the JSON annotations in FlexModel do not work yet. So this worked for me without a change:

 "no_access": { "road_class": ["motorway"]}

Edit: dropwizard seems to know only hibernate validation annotations for this. For now I check the validity via code.


Just wonderful! Everything works like a charm, also when mixing :wink:

{ "base":"car",
  "no_access": { "toll": ["all"],  "road_class": ["trunk"]} }

Great! (I hope the smiley is not meant ironically :wink: )

Have also added a “well behaved” shortest option, i.e. if you add “distance_factor”

 "distance_factor": 1,
 "no_access": { "toll": ["all"],  "road_class": ["trunk"]}

then you should see that it prefers shorter routes if it does not lose too much time. The higher the factor, the more time you are willing to trade for a shorter distance.
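A toy calculation illustrating the trade-off (my own sketch with assumed units of seconds per meter, not GraphHopper’s actual weighting code):

```python
def flex_weight(time_s, dist_m, distance_factor):
    """Toy weight: travel time plus a per-meter distance penalty."""
    return time_s + distance_factor * dist_m

fast_long = (360.0, 10000.0)   # 10 km at 100 km/h -> 360 s
slow_short = (480.0, 8000.0)   # 8 km at 60 km/h -> 480 s

# With factor 0 the faster route wins; with a higher factor the
# shorter route becomes cheaper despite losing 2 minutes.
assert flex_weight(*fast_long, 0) < flex_weight(*slow_short, 0)
assert flex_weight(*fast_long, 0.1) > flex_weight(*slow_short, 0.1)
```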

I probably will rename it to distance_costs or something more meaningful.


That’s great and works well!!! This is not ironic, it’s real :wink:

I need distance_factor to be at least 4-5 to obtain the result I need.

I’m doing various tests, with different combinations.

Is there any data about the performance (speed/memory usage) of flex vs speed mode?

There are currently three modes: speed mode (CH), hybrid mode (LM) and flex mode.

The landmark (LM) mode can be really fast, but CH is still usually a few times faster (unless you create many landmarks, which is feasible for smaller areas only). Depending on the length of the route, the differences between LM and CH are usually not noticeable (as both are small, like 30ms vs. 3ms). The good property of LM is that it can be customized per request. The downside is that the slowdown can be so bad that it gets as slow as the flex mode. E.g. if the best path goes along a longer motorway and you now specify that you need to avoid it, then the performance impact could be bad (it all depends on the landmark positions and count).

I did some perf tests today using a recent Italy PBF file (1.2GB), graph.dataaccess: RAM_STORE, JAVA_OPTS:=-Xmx2000m -Xms2000m, and a pretty extreme route (15 points, ~3500km):

FLEX MODE
graph.flag_encoders: car
build time: 21min
gh/ folder size: 868MB
Route calculation:
car shortest or fastest: ~2.6secs
car shortest|fastest with no toll: 4.27|4.47secs

SPEED MODE (I created a new Notollcar flag encoder to avoid tolls)
graph.flag_encoders: car, notollcar;
prepare.ch.weightings: fastest, shortest
build time: 34min
gh/ folder size: 1231MB (due to two vehicles)
Route calculation:
car fastest <150ms
car shortest <300ms
notollcar fastest <200ms
notollcar shortest <350ms

This is on an Intel i5-4670K, 8GB RAM, SSD.

Did you enable landmarks? (If not it falls back to Dijkstra.) You can do this via:

prepare.lm.weightings: fastest

Hmmh, or maybe even better, use the flex weighting directly, but I’m not sure if this already works:

prepare.lm.weightings: flex

SPEED MODE (I created a new Notollcar flag encoder to avoid tolls)

Speed mode is faster with the configured defaults. But if you have multiple properties like toll, motorway or max_weight, then you have no choice: you need to use the flex weighting to create a “base profile” and then modify it. Another option would be to create a vehicle for every possible combination.
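The “vehicle per combination” option grows exponentially with the number of properties, which is why it only works for a handful of flags. A quick sketch:

```python
from itertools import product

properties = ["toll", "motorway", "max_weight"]
# Each property can be restricted or not -> 2**n prepared vehicles.
combinations = list(product([False, True], repeat=len(properties)))
assert len(combinations) == 2 ** len(properties)  # 8 vehicles for 3 flags
```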

btw: the current master also supports the flex mode via ch.disable=true, but the features are limited (heading, different weighting, block_area).

btw2: you can make flex mode faster via more landmarks

thanks - ouch, I believe I did not enable landmarks, I’ll do that!

Right, one option is to create a vehicle for every possible combination, which I attempted to do (car, notollCar, nomotorwayCar, notollmotorwayCar) - of course the disk space requirements increase (to about 2GB) and -Xmx/-Xms needs to be at 3GB to run. Speed remains very fast.

Let me try to tweak flex with landmarks: even without landmarks, I noticed that for routes where points are not too far from each other (<50km) the calculation is fast (<150ms for a 400km route with about 20 points).

with prepare.lm.weightings: flex I get:
java.lang.IllegalArgumentException: weighting flex not supported

With prepare.lm.weightings: fastest I don’t see any major performance improvement for the 3500km route test case. Here is my config yml for flex vehicle profiles:

  graph.flag_encoders: car
  prepare.ch.weightings: no
  prepare.lm.weightings: fastest
  routing.max_visited_nodes: 1000000
  routing.ch.disabling_allowed: true
  routing.non_ch.max_waypoint_distance: 1000000
  graph.dataaccess: RAM_STORE

The JSON POST contains just:
"model": { "base":"car", "max_speed":130, "distance_factor":0 }

I need to better understand how lm works and if it is used…

If I understood correctly, LM affects /route GET requests only, not /flex POST, correct? I now tried with GET car shortest and fastest, and in my test route the timing is <1.5 and <1.9secs - which is a good improvement over the POST’s ~2.6secs.

Is fastest a bit slower than shortest, since additional calculations are needed to compute time (distance/speed) vs. just distance, right?


There were a couple of things that I had to fix. Additionally, you have to provide the weighting at import time. Note that the name of this weighting is not “flex” but the name that is used in the configuration:

name: custom1
base: car

Store this in path/to/custom1.yml and do:

graph.encoding_manager: path/to/custom1.yml
prepare.lm.weightings: custom1

Then, if you do the import you should see something like

GraphHopperOSM: start creating graph from ...
GraphHopperOSM: using custom1|RAM_STORE|...

And if you do a POST (or GET) request you should see a log entry:

… RouteResource: … astarbi|landmarks-routing

If you see something with “astarbi|beeline” then it still does not use the faster landmark routing.

One remaining task for me is to enable landmark fastest-routing in the case of base: car. But the conceptual problem is that with the new approach there is no clear “vehicle”, just the “weighting”, so we should rather do
base: car|fastest
or something. It’s good that I stumbled over this problem in an early phase :slight_smile:

This is also the reason that you currently have to specify vehicle=custom1&weighting=custom1 for the GET request; see the fire_truck test.
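For reference, such a GET query can be assembled like this (a Python sketch; host, port and the example coordinates are placeholders, only the vehicle and weighting parameters come from this thread):

```python
from urllib.parse import urlencode

# Two waypoints plus the custom profile used as vehicle AND weighting.
params = {
    "point": ["46.030340,11.248630", "46.325830,11.265450"],
    "vehicle": "custom1",
    "weighting": "custom1",
}
query = urlencode(params, doseq=True)  # doseq repeats the point key
url = "http://localhost:8989/route?" + query
```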

If I understood correctly, LM affects /route GET requests only, not /flex POST, correct?

I have tried this now and it should work. I have not tuned speed that much at the moment, but the LM algo is triggered for me via the process above. Please let me know if you get it working and see the landmark-routing in the logs with the latest master :slight_smile:

Is fastest a bit slower than shortest, since additional calculations are needed to compute time (distance/speed) vs. just distance, right?

It shouldn’t be by much, but I have not done much speed tuning yet for this proof of concept.

I don’t see any major performance improvement for the 3500km route test case

Routes above 1000km are indeed significantly slower than with CH (again, if you add landmarks you can get similar performance, but RAM quickly becomes an issue for such big areas). We could introduce a heuristic parameter to improve speed but sacrifice quality (the optimality guarantee). But I hope that landmark routing was simply not enabled, so with the procedure described above the routing should now be roughly 10-15 times faster.

Edit: there is another limitation of this simple JSON approach: it is not easy to express AND. E.g. set the speed to 90km/h if the road is primary AND in Germany AND not in the city. So we probably have to introduce a bit of scripting here, but then we lose the ability to tune performance.

Just tried with the latest changes. I see astarbi|landmarks in the logs when calculating routes, so landmarks are used, but performance is worse than flex without landmarks: 4-5secs, consuming 100% CPU.

Here is my config.yml (note I had to comment out graph.flag_encoders: car to avoid an error at runtime):

prepare.ch.weightings: no
graph.encoding_manager: /path/to/custom1.yml
prepare.lm.weightings: custom1
prepare.lm.landmarks: 16
routing.max_visited_nodes: 1000000
routing.ch.disabling_allowed: true
routing.non_ch.max_waypoint_distance: 1000000
graph.dataaccess: RAM_STORE

my custom1.yml is just:

name: custom1
base: car
max_speed: 120

json request is { "base":"custom1" , "max_speed":120 }

am I doing something wrong? Can I tune landmarks?

Note that with POST I see “astarbi|landmarks-routing” in the logs, while with GET I always see “astarbi|beeline-routing”, no matter what. However the latter takes 1.3secs while the former (weighting=custom1) takes 4secs - which is in line with POST.


Yes, there is still something wrong with the derived heuristic in these cases. Will investigate.

while with GET I always see “astarbi|beeline-routing” no matter what

This is a bug that I need to fix

Have tested this and it should be fixed now. It was a wrong speed factor: the weight should be in seconds (or 1k*seconds) instead of ms.

while with GET I always see “astarbi|beeline-routing”

Can you try: vehicle=custom1&weighting=custom1&ch.disable=true