Correct way to enable landmark algorithm


In my application, I use a custom FlagEncoder and Weighting. In particular, my encoding manager has three encoders: DataFlagEncoder, CustomFlagEncoder, and CarFlagEncoder. For actual route calculation, I use CustomWeighting & FastestWeighting. CustomWeighting is a wrapper around BlockAreaWeighting, GenericWeighting & FastestWeighting that defines per-request constraints like vehicle height, blocked areas and so on.
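The wrapper idea can be sketched with simplified stand-in types; everything below (Edge, Request, the Weighting interface) is illustrative and not GraphHopper's real API:

```java
// Illustrative sketch of chaining per-request constraints, in the spirit of a
// CustomWeighting wrapping BlockAreaWeighting/GenericWeighting/FastestWeighting.
// All types here are simplified stand-ins, NOT GraphHopper's real API.
public class CompositeWeightingSketch {
    public interface Weighting {
        // edge cost in seconds; POSITIVE_INFINITY means "edge forbidden"
        double calcWeight(Edge edge, Request req);
    }
    public static class Edge {
        public final double distanceMeters, speedKph, maxHeightMeters;
        public final boolean inBlockedArea;
        public Edge(double distanceMeters, double speedKph, double maxHeightMeters, boolean inBlockedArea) {
            this.distanceMeters = distanceMeters;
            this.speedKph = speedKph;
            this.maxHeightMeters = maxHeightMeters;
            this.inBlockedArea = inBlockedArea;
        }
    }
    public static class Request {
        public final double vehicleHeightMeters;
        public Request(double vehicleHeightMeters) { this.vehicleHeightMeters = vehicleHeightMeters; }
    }
    // base cost: travel time, in the spirit of FastestWeighting
    public static class Fastest implements Weighting {
        public double calcWeight(Edge e, Request r) { return e.distanceMeters / (e.speedKph / 3.6); }
    }
    // rejects edges the vehicle is too tall for, like a height check in GenericWeighting
    public static class HeightLimit implements Weighting {
        private final Weighting delegate;
        public HeightLimit(Weighting delegate) { this.delegate = delegate; }
        public double calcWeight(Edge e, Request r) {
            return r.vehicleHeightMeters > e.maxHeightMeters
                    ? Double.POSITIVE_INFINITY : delegate.calcWeight(e, r);
        }
    }
    // rejects edges inside a per-request blocked area, like BlockAreaWeighting
    public static class BlockArea implements Weighting {
        private final Weighting delegate;
        public BlockArea(Weighting delegate) { this.delegate = delegate; }
        public double calcWeight(Edge e, Request r) {
            return e.inBlockedArea ? Double.POSITIVE_INFINITY : delegate.calcWeight(e, r);
        }
    }
    public static double weigh(Edge e, Request r) {
        Weighting w = new BlockArea(new HeightLimit(new Fastest()));
        return w.calcWeight(e, r);
    }
    public static void main(String[] args) {
        Request truck = new Request(3.0);
        System.out.println(weigh(new Edge(1000, 100, 4.5, false), truck)); // travel time in seconds
        System.out.println(weigh(new Edge(1000, 100, 2.0, false), truck)); // Infinity: bridge too low
    }
}
```

Each wrapper only vetoes or delegates, so constraints can be stacked per request without touching the base cost model.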

Currently, I am having issues enabling LM when importing the graph. I tried the following and they didn’t work.

  1. Add properties:
    setProperty("prepare.lm.weightings", "fastest");
    setProperty("ch.disable", "true");
    Nothing happened. Graph was still created with CH.

  2. Programmatically enable LM & weighting string
    hopper.getLMFactoryDecorator().setWeightingsAsStrings(Arrays.asList("generic", "custom", "fastest"));

GH tried to create a weighting for every combination of weighting string in the provided list & encoder in the encodingManager, as in initLMAlgoFactoryDecorator(). And I got a null weighting as it resorted to the default createWeighting() for some combinations. I know I can change the createWeighting() method in my GraphHopper subclass, but it should be the same approach as #3 below.

  3. Invoke createWeighting() for the combinations of weighting and encoder I need, and add the weightings to hopper.getLMFactoryDecorator().getWeightings() before importing the graph.
    GH ran successfully to generate the initial graph but failed at postProcessing() (stack trace below):
java.lang.IllegalStateException: maximumWeight cannot be null. Default should be just negative. Couldn't find generic in {}

	at com.graphhopper.routing.lm.LMAlgoFactoryDecorator.createPreparations(
	at com.graphhopper.GraphHopper.postProcessing(
	at com.graphhopper.GraphHopper.process(
	at com.graphhopper.GraphHopper.importOrLoad(
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(
	at org.junit.internal.runners.statements.RunBefores.evaluate(
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(
	at com.intellij.rt.execution.junit.JUnitStarter.main(
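For context, a configuration sketch of what import-time enabling can look like: in GraphHopper 0.9, import-time preparation is driven by prepare.* keys, while ch.disable is only a per-request hint (key names are hedged and may differ between versions):

```properties
# Sketch of an import-time configuration (key names as of GraphHopper 0.9,
# possibly different in other versions):
prepare.ch.weightings=no        # skip the CH preparation entirely
prepare.lm.weightings=fastest   # prepare landmarks for the fastest weighting
```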

Can you please advise the correct way to enable LM?

Can you have a look into GraphHopperLandmarksIT and see what you do differently?

Regarding the 2nd point: currently all combinations of encoders and weightings are processed.

Regarding the 3rd point: maybe we are here a bit untested/buggy and too complicated for customization yet. You’ll have to call lmAlgoFactoryDecorator.setWeightingsAsStrings; afterwards the strings are used to call createWeighting in the GraphHopper class.

Thanks Peter.

For point #1, there was a bug in my code that didn’t pass the parameters to GraphHopper properly. Now I am able to enable LM for CarFlagEncoder with FastestWeighting.

I am still struggling with points #2 and #3 when trying to enable LM for multiple encoders and weightings. I will try a number of approaches first and come back if the problem persists.


Let me know how I can help and what we should improve to make customization easier.

Below are some “findings” so far, followed by my questions.

  1. Enabling LM for DataFlagEncoder and GenericWeighting failed as below. I tested this with one FlagEncoder and one Weighting on GraphHopper 0.9. Perhaps this is a bug.
    11:14:42.996 [generic_generic] INFO com.graphhopper.routing.lm.PrepareLandmarks - Start calculating 16 landmarks, default active lms:8, weighting:LM_BFS|generic, totalMB:1421, usedMB:755
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.UnsupportedOperationException: landmark weight factor cannot be bigger than Integer.MAX_VALUE 1.64794921875E10

	at com.graphhopper.routing.lm.LMAlgoFactoryDecorator.loadOrDoWork(
	at com.graphhopper.GraphHopper.loadOrPrepareLM(
  2. For the multiple encoders and weightings scenario, which includes a custom encoder and weighting, it seems illogical to me that GraphHopper creates weightings for all combinations of encoders and weightings. My custom encoder does not “work” with built-in weightings like FastestWeighting, and vice versa. I thought about returning null for those incompatible combinations in the createWeighting() method of my GraphHopper subclass. However, that means null would be added to lmFactoryDecorator, which is not preferred. To overcome this, I had to make the method initLMAlgoFactoryDecorator() in GraphHopper protected/public so that I could override it in my GraphHopper subclass & only create weightings for the combinations I wanted.
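The filtering idea can be modeled in plain Java; strings stand in for the real encoder and weighting objects, and the list of compatible pairs is just this example's:

```java
import java.util.*;

// Sketch of restricting LM preparation to compatible encoder/weighting pairs,
// instead of preparing every encoder x weighting combination.
public class LmCombinationFilter {
    // Pairs considered compatible in this illustration:
    public static final Set<String> COMPATIBLE = new HashSet<>(Arrays.asList(
            "car|fastest", "customEncoder|customWeighting"));

    public static List<String> select(List<String> encoders, List<String> weightings) {
        List<String> result = new ArrayList<>();
        for (String enc : encoders)
            for (String w : weightings) {
                String combo = enc + "|" + w;
                if (COMPATIBLE.contains(combo)) result.add(combo); // keep only whitelisted pairs
            }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(select(Arrays.asList("car", "customEncoder"),
                                  Arrays.asList("fastest", "customWeighting")));
        // -> [car|fastest, customEncoder|customWeighting]
    }
}
```

The same whitelist approach works whether the filter lives in an overridden initLMAlgoFactoryDecorator() or in createWeighting() itself.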

Using this change, I was able to import the graph and prepare LM for two combinations: car/fastest and customEncoder/customWeighting. However, the performance was not as fast as I expected and I think I am missing something. Below are my performance test results when using only CarFlagEncoder and FastestWeighting.

In my scenario, I was trying to find a route between Melbourne and Sydney in Australia. GH is running on an Android tablet device. The performance for different settings was as follows:

  1. Flexible mode and hybrid mode were almost the same; both took about 1.5 minutes.
  2. Speed mode took only 13 seconds.

I was expecting hybrid mode to be much faster. Btw, in all settings GH didn’t use up all available memory.

For LM, when loading the graph, I only disabled CH and enabled LM programmatically as below:

I also tried not setting the algorithm as well as setting the algorithm in the request; the results were still the same.

Is there any other settings I need to do to use LM algorithm? How do I know if the algorithm is actually picked by GH engine?


Hmmh, this is ugly. We have to improve this, at least with documentation on how to do it.

Yes, this is a current limitation.

However, the performance was not as fast as I expected and I think I am missing something

The quality of the landmarks is very important (you can visualize them via the GeoJSON), and the landmarks should be at the ‘edges’ of the area and not clustered.

GH is running on an Android tablet device. The performance was below for different settings

Yes, it is a good idea to test car+fastest first. Then, in addition to the seconds, would you mind writing about the visited nodes? For LM it should be roughly the same as when you try it on GH Maps, i.e. ~10 000; see this in the returned JSON. For CH it would be ~700. Other modes are disabled in our API so I cannot tell you quickly.

GH didn’t use up all available memory

It can’t on Android :frowning: … for that we would have to implement a native DataAccess (to avoid this ugly Android-Java RAM limitation you mentioned), which we would love to do, but currently have no time for.

For LM, when loading graph, I only disabled CH and enabled LM programmatically as below:

It is important to do this for the import, afterwards disabling per request is sufficient.

How do I know if the algorithm is actually picked by GH engine?

You can see this either in the logs, e.g. astarbi|ch-routing:[time it took] (if landmarks is picked it is astarbi|landmarks), or via GHResponse.getDebugInfo().

Is there any other settings I need to do to use LM algorithm?

It looks like something obvious is missing and we should find out and document this :slight_smile:

On Android, the android:largeHeap flag can somewhat help the situation.

@devemux86: Yes, the flag was already set. I also managed to manually increase the maxHeapSize for my particular app. Turned out the memory limit set per Android app is not actually the issue now, as the app still has plenty of allocated memory available.

The real issue is how to make LM work at request time. Below is an excerpt of the debug info in GHResponse:

visited_nodes.average: 657158, visited_nodes.sum: 657158.0, Info: idLookup:0.103175424s; tmode:NODE_BASED; , algoInit:0.017103592s, astarbi|beeline-routing:2.6804726s, extract time:0.020075658;, algoInit:0.017103592s, astarbi|beeline-routing:2.6804726s, extract time:0.020075658, simplify (5915->4670)
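The debug string can be checked mechanically; a small helper like this (plain string matching, no GraphHopper dependency, method name is mine) makes the "which algorithm actually ran" question explicit:

```java
// Detect which routing algorithm ran from the GHResponse debug string,
// based on the algorithm labels seen in this thread's logs.
public class AlgoCheck {
    public static String detect(String debugInfo) {
        if (debugInfo.contains("astarbi|landmarks")) return "landmarks"; // LM actually used
        if (debugInfo.contains("astarbi|beeline")) return "beeline";     // plain A* fallback
        if (debugInfo.contains("|ch-routing")) return "ch";              // contraction hierarchies
        return "unknown";
    }
    public static void main(String[] args) {
        String d = "algoInit:0.017s, astarbi|beeline-routing:2.68s, extract time:0.02";
        System.out.println(detect(d)); // prints "beeline"
    }
}
```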

It is clear now that LM is not enabled properly after loading. I tried to load the graph without programmatically setting anything, as Peter suggested, and I got the following error:

java.lang.IllegalStateException: Configured [fastest|car] is not equal to loaded []

and below is my properties file inside the generated storage:


I think the issue might be around programmatically enabling LM, persisting configuration properties and loading those properties. I didn’t see graph.lm.weightings or anything similar in the file.

For your reference, here again are the settings I used when importing OSM to enable LM:


I also tried to change bytesForEdgeFlags to 4 (not sure if this is relevant) but the result was still the same.

Btw, what do you mean by “disabling per request is sufficient”? Do I need to pass any parameter to GHRequest?

It seems that in your configuration, while loading the graph, you have to put

If your graph is loaded with CH, you can enable and disable it per request by adding ch.disable=true.
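Against the HTTP API the same switch is just a request parameter; an illustrative request (coordinates and parameter set made up for the example):

```
/route?point=-37.81,144.96&point=-33.87,151.21&vehicle=car&weighting=fastest&ch.disable=true
```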


When loading the graph, I used the same settings as when importing (as below) and could bypass that error:


That was only to show that loading without any settings caused an error.

With the proper settings as above, I got astarbi|beeline-routing with a large number of visited nodes. Peter said it should be astarbi|landmarks.

That is correct, it should be astarbi|landmarks-routing.

Did you also request the fastest weighting? You could also forbid disabling landmarks and see what happens. It should be hopper.getLMFactoryDecorator().setDisablingAllowed(false);.



Thanks @boldtrn. I didn’t request the fastest weighting explicitly. That’s what I was missing.
Now I am able to use LM in my requests and it’s way faster than flexible mode. It took a similar time to speed mode, roughly 13s on the device.

visited_nodes.average: 15702, visited_nodes.sum: 15702.0, Info: idLookup:1.0791016s; tmode:NODE_BASED; , algoInit:0.4567871s, astarbi|landmarks-routing:7.0663447s, extract time:0.07595826;, algoInit:0.4567871s, astarbi|landmarks-routing:7.0663447s, extract time:0.07595826, simplify (5915->4670)

Will try with my actual use case of multiple encoders, multiple weightings.


I can now make LM work with my custom encoder and weighting in the multiple encoder/weighting scenario. For this to work, I needed the following:

  1. Make GraphHopper.initLMAlgoFactoryDecorator() protected so that I could override it in my GraphHopper subclass. As I mentioned in my earlier post, this change allowed me to only create and add weightings to LMAlgoFactoryDecorator for the combinations of encoder/weighting that I want.

  2. When programmatically loading the graph, I still need to disable CH and enable LMFactoryDecorator. I don’t need to call hopper.getLMFactoryDecorator().setWeightingsAsStrings() as my overridden initLMAlgoFactoryDecorator() also adds weightings to the list as below. Note that the first line is required, as the second line still does not add that weighting to the list in LMFactoryDecorator.

             getLMFactoryDecorator().getWeightings().add(createWeighting(new HintsMap(weightingStr), encoder, null));
  3. For each request, I had to explicitly set the weighting as below. Without this, the createWeighting() method still creates the correct Weighting, but somehow the algorithm picked was beeline, not landmarks.

Btw, when preparing landmarks for my custom weighting, GH didn’t find any landmarks. For fastest_car, GH found 16 landmarks as expected. I think this is because my custom weighting and encoder operate on a small subset of Australian roads, which are only in two states (Victoria & Western Australia). And if the LM calculation started from a point outside these two states, chances are it won’t be able to find any landmarks. If my understanding is correct, is there a way to specify a custom starting point for landmark calculation?

Thanks for the summary and really glad you got it working!

Also nice that the speed-up is even similar to CH; this is unlike what some other users reported for their custom weightings. Also, on the server side for long routes there is a factor of ~10 difference between both modes.

Maybe we can somehow provide this functionality out of the box. For us this would be really helpful as well. I think this is similar to this issue:

When programmatically load graph, I still need to disable CH and enable LMFactoryDecorator.

Hmmh, this shouldn’t be necessary. It should load both CH and LM, but only if stored. Maybe we should create an issue.

Note that the first line is required as the second line still does not add that weighting to the list in LMFactoryDecorator.

Yeah, this is the ugliness of the preparation initialization. We use strings for Java or property configuration and create objects via Java code. I’m not happy with this state and already disliked it when implementing it.

Just in case I forgot to mention it: I wasn’t able to enable LM for the DataFlagEncoder/GenericWeighting combination. The error message is in one of my earlier posts.

The reason is related to a wrong getMinWeight return value, as we calculate the maximum weight in LandmarkStorage:

double maxWeight = weighting.getMinWeight(distanceInMeter);

Would you mind checking what is going wrong here? Maybe there is a bug in GH core?
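Loosely, the failure mode: landmark weights are stored in a small fixed-size integer, so the estimated maximum weight is turned into a scaling factor, and an inflated getMinWeight() estimate inflates the factor past what fits. The 16-bit divisor and the numbers below are illustrative arithmetic only, not LandmarkStorage’s actual formula:

```java
// Illustrative arithmetic only; the 16-bit storage assumption and the exact
// formula are simplifications, not GraphHopper's real LandmarkStorage code.
public class LmFactorSketch {
    // estimatedMaxWeight ~ weighting.getMinWeight(longestExpectedDistance)
    public static double factor(double estimatedMaxWeight) {
        return estimatedMaxWeight / ((1 << 16) - 1); // scale weights into 16-bit storage
    }
    public static void main(String[] args) {
        double dist = 4_000_000;            // ~continental route length in meters
        double sane = dist / (130 / 3.6);   // fastest-style minimum: seconds at 130 km/h
        double broken = dist * 1e9;         // a getMinWeight() with wildly wrong units
        System.out.println(factor(sane) < Integer.MAX_VALUE);   // true: factor is tiny
        System.out.println(factor(broken) > Integer.MAX_VALUE); // true: triggers the
        // "landmark weight factor cannot be bigger than Integer.MAX_VALUE" style error
    }
}
```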

I will check what’s wrong with DataFlagEncoder/GenericWeighting. At the moment, I am still fighting to improve performance with LM for my custom weighting/encoder.

In my previous post, when I said the routing was much faster with LM, I was actually referring to the fastest_car use case. When using LM with my custom weighting/encoder, my experiments showed that astarbi|landmarks-routing is slower than astarbi|beeline-routing. In particular, astarbi|landmarks-routing visited 20% more nodes than astarbi|beeline-routing. I think this might be because LM couldn’t find any landmarks for my custom weighting/encoder, while it was able to find landmarks for fastest/car. Do you think this is the case?

Just to give you a bit of context, below is a screenshot of a subset of the road network that we support with the custom weighting/encoder (roads in green). We use fastest/car for normal car routing on the full Australian road network.

This should not be the case … ah, but if you are using just a very tiny subset then indeed the landmark algo will not produce any landmarks. You can change this setting (default is 500_000) via prepare.lm.min_network_size=10000 or prepareLM.setMinimumNodes(10000).

But for so few nodes the routing should be fast enough even without LM (that’s the reason we disable it at a certain point :)). Maybe your custom weighting itself is slow and you can tune it a bit?

You’re right. I changed the minimum network size as you advised and GH is now able to find landmarks for my custom weighting/encoder.

What’s interesting now is that with LM, the number of visited nodes is much smaller than in flexible mode. However, the time it took to calculate a route with LM is worse than when LM is disabled (hence the beeline approximation). This was tested on Android devices, btw. I haven’t had time to investigate whether the found landmarks were clustered or not.

Anyway, my original purpose was to improve the performance of fastest car while maintaining the performance of my custom weighting/encoder, so I am happy with the current approach for now.

You’re probably right about enhancing my custom weighting. I can already see how to improve it. Will set it as a TODO task :slight_smile:

Btw, my understanding is that LM or hybrid mode is similar to flexible mode in that I am able to use GenericWeighting to enforce road height restrictions and/or BlockAreaWeighting to block certain areas per request. Is that correct? I am thinking of implementing a simple custom weighting that combines GenericWeighting/BlockAreaWeighting/FastestWeighting to test this, but would like to double-check with you in advance.

Uh, this sounds strange and shouldn’t even happen on Android. Still, can you compare this on a server?

Is it correct?