
Flexible Vehicle Profiles


Great! Now “fastest” with LM got a real performance boost:

fastest: <0.4s ("model": { "base": "custom1", "max_speed": 120 })
fastest no tolls: 2.8s ("model": { "base": "custom1", "max_speed": 120, "no_access": { "toll": ["all"] } })
shortest: 4s ("model": { "base": "custom1", "max_speed": 120, "distance_factor": 90 })
shortest no tolls: 4.7s ("model": { "base": "custom1", "max_speed": 120, "distance_factor": 90, "no_access": { "toll": ["all"] } })

Shortest performance is a bit below expectations, but again the above is for an extreme 3500 km route. For a 500 km route with 20 via points:

fastest: 0.097s
fastest no tolls: 0.104s
shortest: 0.109s
shortest no tolls: 0.204s

Also, a GET with "vehicle=custom1&weighting=custom1&ch.disable=true" is now using LM ("astarbi|landmarks-routing"), as expected.



If you use a certain weighting for the LM preparation, then the speed is only tuned for these specific settings. All options that force the route to go off the “optimal” path will reduce performance. So if you need good performance for shortest, you could do two LM preparations (one for your fastest and one for shortest).
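In config terms that advice would presumably look something like this. This is only a sketch: the second profile name gshort is hypothetical (a profile with a distance_factor, analogous to the "shortest" settings discussed above), and the option names follow the prepare.lm.weightings style used later in this thread:

```yaml
# Sketch: one LM preparation per profile whose responses must stay fast.
# "gshort" is a hypothetical second profile tuned for shortest routes.
prepare.lm.weightings: gfast,gshort
prepare.lm.landmarks: 16
```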


Do you mean I can create a new vehicle profile adding a distance factor, for example:

name: shortest
base: car
max_speed: 120
distance_factor: 90

then a POST including { "base": "shortest", "max_speed": 120, "distance_factor": 90 } will use this specific LM preparation, correct? And the same for toll (need to check the yaml syntax for "no_access": { "toll": ["all"] }).

I’ll give it a try shortly! I’m curious to check how disk space and RAM consumption compare to CH.


Yes, you create a new vehicle profile and then you should be able to just POST (yaml or JSON):

base: shortest
max_speed: 120

The ideal thing would be to just specify base: shortest but at the moment I think max_speed is required.

need to check the yaml syntax for "no_access": { "toll": ["all"] }

You can define new vehicle profiles via yaml or JSON. So I would only use yaml if you are using yaml per request. The yaml array structure is explained here.
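For what it's worth, assuming the yaml profile mirrors the JSON structure quoted above, the no_access block would presumably be written like this (an untested sketch, not confirmed against the branch):

```yaml
name: gfastnotoll
base: car
max_speed: 130
# yaml list form of the JSON "no_access": { "toll": ["all"] }
no_access:
  toll:
    - all
```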

The advantage over CH is not so much the RAM consumption but that you can modify the profile per request.
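That per-request flexibility can be sketched in a few lines. The snippet below is only an illustration: the field names ("base", "max_speed", "distance_factor", "no_access") are copied from the request bodies quoted in this thread, and the helper function itself is hypothetical, not part of any GraphHopper client:

```python
import json

def flex_model(base, max_speed, **extras):
    """Build a per-request vehicle model body like the ones posted above.

    Hypothetical helper: it only assembles the JSON structure, it does not
    talk to a server.
    """
    model = {"base": base, "max_speed": max_speed}
    model.update(extras)  # e.g. distance_factor, no_access
    return {"model": model}

# Same base profile, different behaviour per request -- with CH each of
# these would instead need its own prepared graph.
fastest = flex_model("gfast", 130)
no_tolls = flex_model("gfast", 130, no_access={"toll": ["all"]})
shortest = flex_model("gfast", 120, distance_factor=90)

print(json.dumps(no_tolls))
```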


I’m running into trouble when having two profiles; it seems Jetty fails to start:

INFO  [2018-10-30 19:56:26,816] com.graphhopper.reader.osm.GraphHopperOSM: graph gfast,gfastnotoll|RAM_STORE|2D|NoExt|,,,,, details:edges:0(0MB), nodes:0(0MB), name:(0MB), geo:0(0MB), bounds:1.7976931348623157E308,-1.7976931348623157E308,1.7976931348623157E308,-1.7976931348623157E308
INFO  [2018-10-30 20:02:50,212] com.graphhopper.routing.lm.PrepareLandmarks: Calculated landmarks for 1 subnetworks, took:185.74075 =>
INFO  [2018-10-30 20:05:51,445] org.eclipse.jetty.server.AbstractConnector: Started application@2a2843ec{HTTP/1.1,[http/1.1]}{localhost:8989}
INFO  [2018-10-30 20:05:51,446] org.eclipse.jetty.server.AbstractConnector: Started admin@2042ccce{HTTP/1.1,[http/1.1]}{localhost:8990}
ERROR [2018-10-30 20:05:51,448] io.dropwizard.cli.ServerCommand: Unable to start server, shutting down
! java.lang.IllegalStateException: already initialized
! at
! at com.graphhopper.routing.lm.LandmarkStorage.loadExisting(
! at com.graphhopper.routing.lm.PrepareLandmarks.loadExisting(
! at com.graphhopper.routing.lm.LMAlgoFactoryDecorator$
! at java.util.concurrent.Executors$
! at
! at java.util.concurrent.Executors$
! at
! at java.util.concurrent.ThreadPoolExecutor.runWorker(
! at java.util.concurrent.ThreadPoolExecutor$
INFO  [2018-10-30 20:05:51,452] org.eclipse.jetty.server.AbstractConnector: Stopped application@2a2843ec{HTTP/1.1,[http/1.1]}{localhost:8989}
INFO  [2018-10-30 20:05:51,455] org.eclipse.jetty.server.AbstractConnector: Stopped admin@2042ccce{HTTP/1.1,[http/1.1]}{localhost:8990}
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: already initialized
	at com.graphhopper.routing.lm.LMAlgoFactoryDecorator.loadOrDoWork(
	at com.graphhopper.GraphHopper.loadOrPrepareLM(
	at com.graphhopper.reader.osm.GraphHopperOSM.loadOrPrepareLM(
	at com.graphhopper.GraphHopper.postProcessing(
	at com.graphhopper.GraphHopper.process(
	at com.graphhopper.GraphHopper.importOrLoad(
	at com.graphhopper.http.GraphHopperManaged.start(
	at io.dropwizard.lifecycle.JettyManaged.doStart(

the -gh/ folder does not seem complete:
Oct 30 21:02 landmarks_gfast
Oct 30 21:05 landmarks_gfastnotoll
Oct 30 20:59 location_index
Oct 30 21:02 subnetwork_landmarks_gfast
Oct 30 21:05 subnetwork_landmarks_gfastnotoll

Note: I believe I had the same issue when I left “graph.flag_encoders: car” in the config.yml.


Oh, yes. There is an issue with more than one profile. Strange, as this works on the master branch and should work here too…


This should work now.


Confirmed! I was able to compile and run with more than one profile. More tests soon. Thanks!


Just to confirm all works as expected, I created two profiles:


name: gfast
base: car
max_speed: 130


name: gfastnotoll
base: car
max_speed: 130
no_access:
  toll:
    - all


graphhopper:
  graph.encoding_manager: /pathto/flex/gfast.xml,/pathto/flex/gfastnotoll.xml
  prepare.lm.weightings: gfast,gfastnotoll
  prepare.lm.landmarks: 16
  routing.max_visited_nodes: 1000000
  routing.non_ch.max_waypoint_distance: 1000000
  graph.dataaccess: RAM_STORE

server:
  applicationConnectors:
  - type: http
    port: 8989
    bind_host: localhost
  requestLog:
    appenders: []
  adminConnectors:
  - type: http
    port: 8990
    bind_host: localhost

Both work; timings for my 3500 km test route are:

fastest: 0.4s "model": { "base": "gfast", "max_speed": 130 }
fastest no tolls: 0.7s "model": { "base": "gfastnotoll", "max_speed": 130 }
shortest: 4.2s "model": { "base": "gfast", "max_speed": 120, "distance_factor": 90 }
shortest no tolls: 4.2s "model": { "base": "custom1", "max_speed": 120, "distance_factor": 90, "no_access": { "toll": ["all"] } }

Thanks for all the help in achieving this :wink:

As a next step I’d like to play with landmarks to see the impact on performance.


Thanks for your time to give me feedback here :slight_smile:

Now I need to find time to consolidate all the changes (and certain quicker workarounds) and also make the profile configuration more generic. This will take some time, but the first parts will already go into 0.12.


Just a quick note: OSM has rather poor toll info, so in the future there would be a need to include a separate data source. Such a mapping can then fail in various ways: different fees for certain profiles, and different payment models (per distance / between points / at a point / toll zone).


This sounds very promising. Still, I’m not able to get it running.

I configured everything as gspeed mentioned, but when running
./ -a web -i europe_germany.pbf
I always get the following error:
java.lang.IllegalStateException: Cannot load properties to fetch EncodingManager configuration at: ./europe_germany-gh/
while the folder “europe_germany-gh” does not exist.

BTW: I cloned the graphhopper git repo. [EDIT: and checked out the flex_vehicleprofile branch!] Is this feature not available there? When will version 0.12 be released?