Flexible Vehicle Profiles

Just tried with the latest changes. I see astarbi|landmarks in the logs when calculating routes, so landmarks are used, but performance is worse than flex without landmarks: 4-5 s at 100% CPU.

Here is my config.yml (note I had to comment out graph.flag_encoders: car to avoid a runtime error):

prepare.ch.weightings: no
graph.encoding_manager: /path/to/custom1.xml
prepare.lm.weightings: custom1
prepare.lm.landmarks: 16
routing.max_visited_nodes: 1000000
routing.ch.disabling_allowed: true
routing.non_ch.max_waypoint_distance: 1000000
graph.dataaccess: RAM_STORE

my custom1.yml is just:

name: custom1
base: car
max_speed: 120

The JSON request is { "base":"custom1", "max_speed":120 }.

Am I doing something wrong? Can I tune the landmarks?

Note that with POST I see "astarbi|landmarks-routing" in the logs, while with GET I always see "astarbi|beeline-routing", no matter what I pass.
However, the latter takes 1.3 s while the former (weighting=custom1) takes 4 s, which is in line with the POST timings.
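For future readers: landmark tuning mostly happens at import time. More landmarks usually mean faster queries at the cost of preparation time and RAM, and changing the count requires a re-import. A config sketch based on the options already used in this thread; the routing.lm.active_landmarks option name is an assumption on my side, so please double-check it against your GraphHopper version:

```yaml
# more landmarks = faster queries, but longer preparation and more RAM;
# changing this value requires a re-import of the graph
prepare.lm.landmarks: 32
# number of landmarks actually considered per query (option name assumed)
routing.lm.active_landmarks: 8
```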


Yes, there is still something wrong with the derived heuristic in these cases. Will investigate.

while with GET I always see "astarbi|beeline-routing", no matter what I pass

This is a bug that I need to fix

I have tested this and it should be fixed now. The cause was a wrong speed factor: the weight should be in seconds (or 1000*seconds) instead of milliseconds.

while with GET I always see "astarbi|beeline-routing"

Can you try: vehicle=custom1&weighting=custom1&ch.disable=true
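Spelled out as a full GET request against the local server from the logs above (the coordinates are just placeholders):

```text
http://localhost:8989/route?point=49.932707,11.588051&point=50.3404,11.64705&vehicle=custom1&weighting=custom1&ch.disable=true
```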

Great! Now “fastest” with LM got a real performance boost:

fastest: <0.4 s ("model": { "base":"custom1", "max_speed":120 })
fastest no tolls: 2.8 s ("model": { "base":"custom1", "max_speed":120, "no_access": { "toll": ["all"] } })
shortest: 4 s ("model": { "base":"custom1", "max_speed":120, "distance_factor":90 })
shortest no tolls: 4.7 s ("model": { "base":"custom1", "max_speed":120, "distance_factor":90, "no_access": { "toll": ["all"] } })

Shortest performance is a bit below expectations… but again, the above is for an extreme 3500 km route; for a 500 km route with 20 via points:

fastest: 0.097 s
fastest no tolls: 0.104 s
shortest: 0.109 s
shortest no tolls: 0.204 s

Also, GET with "vehicle=custom1&weighting=custom1&ch.disable=true" is now using LM ("astarbi|landmarks-routing"), as expected.


If you use a certain weighting for the LM preparation then the speed-up is only tuned for these specific settings. All options that force the route to go off the "optimal" path will reduce performance. So if you need good performance for shortest you could do two LM preparations (one for your fastest and one for shortest).
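In config terms that would simply be a comma-separated list of weightings, analogous to the single-profile setup shown earlier in this thread (the "shortest" profile name here is just an example):

```yaml
prepare.lm.weightings: custom1,shortest
```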

Do you mean I can create a new vehicle profile adding a distance factor, for example:

name: shortest
base: car
max_speed: 120
distance_factor: 90

then a POST including { "base":"shortest", "max_speed":120, "distance_factor":90 } will use this specific LM preparation, correct? And the same for toll (I need to check the yml syntax for "no_access": { "toll": ["all"] }).

I'll give it a try shortly! I'm curious to see how disk space and RAM consumption compare to CH.
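For reference, the yml equivalent of the JSON "no_access" snippet would presumably look like this (the profile name is made up; the structure simply mirrors the JSON shown earlier):

```yaml
name: shortestnotoll
base: car
max_speed: 120
distance_factor: 90
no_access:
  toll:
    - all
```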

Yes, you create a new vehicle profile and then you should be able to just POST (yaml or JSON):

base: shortest
max_speed: 120

The ideal thing would be to just specify base: shortest but at the moment I think max_speed is required.

need to check the yml syntax for "no_access": { "toll": ["all"] }

You can define new vehicle profiles via yaml or JSON. So I would only use yaml if you are using yaml per request. The yaml array structure is explained here.

The advantage over CH is not so much the RAM consumption but that you can modify the profile per request.
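For example, the same prepared profile could be queried with a different max_speed per request, something CH would need a separate preparation for (request body in the style used earlier in this thread):

```json
{ "model": { "base": "custom1", "max_speed": 120 } }
```

Sending max_speed: 90 in the next request is then just a matter of changing the body; no re-import or re-preparation is needed.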


I'm running into trouble when using two profiles; it seems Jetty fails to start:

INFO  [2018-10-30 19:56:26,816] com.graphhopper.reader.osm.GraphHopperOSM: graph gfast,gfastnotoll|RAM_STORE|2D|NoExt|,,,,, details:edges:0(0MB), nodes:0(0MB), name:(0MB), geo:0(0MB), bounds:1.7976931348623157E308,-1.7976931348623157E308,1.7976931348623157E308,-1.7976931348623157E308
INFO  [2018-10-30 20:02:50,212] com.graphhopper.routing.lm.PrepareLandmarks: Calculated landmarks for 1 subnetworks, took:185.74075 =>
INFO  [2018-10-30 20:05:51,445] org.eclipse.jetty.server.AbstractConnector: Started application@2a2843ec{HTTP/1.1,[http/1.1]}{localhost:8989}
INFO  [2018-10-30 20:05:51,446] org.eclipse.jetty.server.AbstractConnector: Started admin@2042ccce{HTTP/1.1,[http/1.1]}{localhost:8990}
ERROR [2018-10-30 20:05:51,448] io.dropwizard.cli.ServerCommand: Unable to start server, shutting down
! java.lang.IllegalStateException: already initialized
! at com.graphhopper.storage.RAMDataAccess.loadExisting(RAMDataAccess.java:118)
! at com.graphhopper.routing.lm.LandmarkStorage.loadExisting(LandmarkStorage.java:705)
! at com.graphhopper.routing.lm.PrepareLandmarks.loadExisting(PrepareLandmarks.java:121)
! at com.graphhopper.routing.lm.LMAlgoFactoryDecorator$1.run(LMAlgoFactoryDecorator.java:284)
! at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
! at java.util.concurrent.FutureTask.run(FutureTask.java:266)
! at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
! at java.util.concurrent.FutureTask.run(FutureTask.java:266)
! at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
! at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
INFO  [2018-10-30 20:05:51,452] org.eclipse.jetty.server.AbstractConnector: Stopped application@2a2843ec{HTTP/1.1,[http/1.1]}{localhost:8989}
INFO  [2018-10-30 20:05:51,455] org.eclipse.jetty.server.AbstractConnector: Stopped admin@2042ccce{HTTP/1.1,[http/1.1]}{localhost:8990}
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: already initialized
	at com.graphhopper.routing.lm.LMAlgoFactoryDecorator.loadOrDoWork(LMAlgoFactoryDecorator.java:304)
	at com.graphhopper.GraphHopper.loadOrPrepareLM(GraphHopper.java:1252)
	at com.graphhopper.reader.osm.GraphHopperOSM.loadOrPrepareLM(GraphHopperOSM.java:91)
	at com.graphhopper.GraphHopper.postProcessing(GraphHopper.java:912)
	at com.graphhopper.GraphHopper.process(GraphHopper.java:710)
	at com.graphhopper.GraphHopper.importOrLoad(GraphHopper.java:679)
	at com.graphhopper.http.GraphHopperManaged.start(GraphHopperManaged.java:71)
	at io.dropwizard.lifecycle.JettyManaged.doStart(JettyManaged.java:27)

the -gh/ folder does not seem complete:
Oct 30 21:02 landmarks_gfast
Oct 30 21:05 landmarks_gfastnotoll
Oct 30 20:59 location_index
Oct 30 21:02 subnetwork_landmarks_gfast
Oct 30 21:05 subnetwork_landmarks_gfastnotoll

Note: I believe I had the same issue when I tried leaving "graph.flag_encoders: car" in the config.yml.

Oh, yes. There is an issue with more than one profile. Strange, as this works for the master branch and should work here too …

This should work now.


Confirmed! I was able to compile and run with more than one profile. More tests soon. Thanks!


Just to confirm that everything works as expected, I created two profiles:


name: gfast
base: car
max_speed: 130


name: gfastnotoll
base: car
max_speed: 130
no_access:
  toll:
    - all


  prepare.ch.weightings: no
  graph.encoding_manager: /pathto/flex/gfast.xml,/pathto/flex/gfastnotoll.xml
  prepare.lm.weightings: gfast,gfastnotoll
  prepare.lm.landmarks: 16
  routing.max_visited_nodes: 1000000
  routing.ch.disabling_allowed: true
  routing.non_ch.max_waypoint_distance: 1000000
  graph.dataaccess: RAM_STORE

  server:
    application_connectors:
      - type: http
        port: 8989
        bind_host: localhost
    request_log:
      appenders: []
    admin_connectors:
      - type: http
        port: 8990
        bind_host: localhost

Both work. Timings for my 3500 km test route are:

fastest: 0.4 s ("model": { "base":"gfast", "max_speed":130 })
fastest no tolls: 0.7 s ("model": { "base":"gfastnotoll", "max_speed":130 })
shortest: 4.2 s ("model": { "base":"gfast", "max_speed":120, "distance_factor":90 })
shortest no tolls: 4.2 s ("model": { "base":"custom1", "max_speed":120, "distance_factor":90, "no_access": { "toll": ["all"] } })

Thanks for all the help to achieve this :wink:

As a next step I'd like to play with the landmarks to see the impact on performance.


Thanks for your time to give me feedback here :slight_smile:

Now I need to find time to consolidate all the changes (and certain quicker workarounds) and also make the profile configuration more generic. This will take some time but the first work will go into 0.12 already.


Just a quick note: OSM has rather poor toll info, so in the future there would be a need to include a separate data source. Such a mapping can also fail in various ways: different fees for certain profiles, and different payment models (per distance, between points, at a point, toll zone).

This sounds very promising. Still, I'm not able to get it running.

I configured everything as gspeed mentioned but when running
./graphhopper.sh -a web -i europe_germany.pbf
I always get the following error:
java.lang.IllegalStateException: Cannot load properties to fetch EncodingManager configuration at: ./europe_germany-gh/
while the folder "europe_germany-gh" does not even exist.

BTW: I cloned the graphhopper git repo. [EDIT: and checked out the flex_vehicleprofile branch!] Is this feature not available there? When will version 0.12 be released?


Hi Peter,

Just wondering if the same performance could be achieved on master if I inject the encoders I need (to check max_height and a few other parameters), or whether flex_vehicleprofile is supposed to have better performance?

RE: LM. Am I right in saying that performance deteriorates when the "base profile" significantly deviates from the run-time parameters?


I rebooted the server and I still get the same problem. How did you do it? I just commented out routing.ch.disabling_allowed: true in the config.yml, but the server shows "Disabling CH not allowed on the server-side".

Please avoid adding questions to unrelated topics. Instead create a new topic and explain what you did and what you need in more detail.

This feature was merged in master. I have written a small tutorial about it here: Flexible / Configurable Weighting Update