Landmark routing: slow routing request with custom model

Hello community,

I have a question about the usage of the LM algorithm. I created a custom car profile with CH and LM preparation enabled on the Europe map.
When I add a custom model on a per-request basis, for example to avoid toll roads, the request becomes very slow compared to CH. For a route of about 300 km
it takes about 25 s.
Please find below the debug info for requests with and without LM activated.

with landmark activated
idLookup:0.033290517s; , algoInit:6679 μs, astarbi|landmarks-routing:25698 ms, path extraction: 4322 μs, visited nodes sum: 945102

without landmark activated
idLookup:0.027593544s; , algoInit:5301 μs, astarbi|beeline-routing:30459 ms, path extraction: 4051 μs, visited nodes sum: 1579418

I would like to know if this is normal or if I am doing something wrong during the map import or in the routing request. I am using the
library ("SDK") directly.

Thanks in advance.

The query speed with LM depends on your custom model. When it is very different from the one you used for the preparation, requests can become much slower. Did you try to send a request with an empty custom model? That should be much faster.

And which GraphHopper version do you use? Make sure to use the latest one (7.0) as we fixed some issues with LM performance recently.

I tried with an empty custom model and yes, it is better:
idLookup:0.15241362s; , algoInit:16205 μs, astarbi|landmarks-routing:724 ms, path extraction: 2590 μs, visited nodes sum: 50962

The custom models used are very simple:

  • one for avoiding toll roads: customModel.addToPriority(Statement.If("toll == ALL", Statement.Op.MULTIPLY, "0.1"))
  • one for the shortest route: customModel.setDistanceInfluence(120)
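As a side note, the effect of each model on the edge weights can be sketched in plain Java. This is a toy, simplified stand-in for GraphHopper's custom-model weighting (priority divides the travel-time part of the weight, distance_influence adds a cost per kilometre); the edges and numbers are made up:

```java
import java.util.List;

public class CustomModelWeightDemo {
    record Edge(double timeSec, double distKm, boolean toll) {}

    // base weight: just the travel time (simplified)
    static double baseWeight(Edge e) { return e.timeSec; }

    // "avoid toll" model: priority x0.1 on toll roads, which multiplies
    // the weight of toll edges by 10 (priority divides the weight)
    static double tollWeight(Edge e) { return e.toll ? e.timeSec / 0.1 : e.timeSec; }

    // "shortest" model: distance_influence = 120 adds a cost per km to EVERY edge
    static double shortestWeight(Edge e) { return e.timeSec + 120 * e.distKm; }

    public static void main(String[] args) {
        List<Edge> edges = List.of(          // made-up edges
                new Edge(10, 0.3, false),
                new Edge(20, 0.5, true),
                new Edge(15, 0.4, false));
        long changedByToll = edges.stream()
                .filter(e -> tollWeight(e) != baseWeight(e)).count();
        long changedByShortest = edges.stream()
                .filter(e -> shortestWeight(e) != baseWeight(e)).count();
        System.out.println(changedByToll);     // 1 -> only the toll edge
        System.out.println(changedByShortest); // 3 -> all edges
    }
}
```

The point: the toll model only touches toll edges, while distance_influence = 120 changes the weight of every edge.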

I am currently using GraphHopper 6.2

Ok, that makes sense. The custom models are simple, but changing the distance influence (per request) in particular can yield slow queries, because it changes the weights of all roads, so the landmark approximation becomes rather bad. If you know the custom models you are going to use beforehand, it is obviously best to create a separate profile for each custom model on the server side. But even when this is not the case, it can help to create several profiles and pick the one that is closest to the requested custom model.
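For intuition, here is a minimal plain-Java sketch of the landmark bound on a hypothetical line graph (nothing GraphHopper-specific): the estimate |d(L,t) − d(L,u)| is precomputed with the preparation weights, so it stays a valid lower bound when the request scales the weights up, but a much weaker one, and A* then visits far more nodes:

```java
public class LandmarkBoundDemo {
    public static void main(String[] args) {
        int n = 100;            // hypothetical line graph 0-1-2-...-n, landmark at node 0
        double prepW = 1.0;     // edge weight used during the LM preparation
        double reqW = 3.0;      // edge weight under the per-request custom model

        // landmark distances, precomputed with the preparation weights
        double[] dL = new double[n + 1];
        for (int v = 0; v <= n; v++) dL[v] = v * prepW;

        int u = 10, t = 90;
        double bound = Math.abs(dL[t] - dL[u]);    // LM estimate for the u->t query
        double distPrep = Math.abs(t - u) * prepW; // true cost under preparation weights
        double distReq = Math.abs(t - u) * reqW;   // true cost under request weights

        // exact for the prepared weights ...
        System.out.println(bound / distPrep);      // 1.0
        // ... but only a third of the true cost once the request scales the
        // weights, so A* gets weak guidance and visits many more nodes
        System.out.println(bound / distReq);       // ~0.33
    }
}
```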

Thanks for the explanation. I understand that the distance influence changes the weights of all roads, so the routing request is very slow. But for avoiding toll roads, the weights change only for a small fraction of the roads. Is it normal to also get slower routing requests for this custom model?

For the custom model avoiding toll roads, with LM:
idLookup:0.033290517s; , algoInit:6679 μs, astarbi|landmarks-routing:25698 ms, path extraction: 4322 μs, visited nodes sum: 945102
For the shortest-route custom model:
idLookup:0.23516764s; , algoInit:8219 μs, astarbi|landmarks-routing:113609 ms, path extraction: 39454 μs, visited nodes sum: 1725310

As you explained, shortest routing is more than 4× slower.

Do you recommend upgrading to GraphHopper 7?

I upgraded to GraphHopper 7 and tried the new LM preparation configuration for the Europe map, but I would like to know your point of view before launching the full Europe map compilation.
I increased the default number of landmarks from 16 to 48. Do you mean this will increase routing time when using LM? (only for avoiding toll roads)
Moreover, to reduce the memory footprint, I used setPreparationProfile on the LM preparation handler:

new LMProfile("custom_car"),
new LMProfile("custom_truck").setPreparationProfile("custom_car"),

I read in the manual the warning that custom_truck must yield larger or equal weights for all edges compared to the custom_car profile.

If I understood correctly, all routes in the "custom_truck" profile must be more restrictive than or equal to (in terms of speed or priority) those in the "custom_car" profile?

Thanks in advance for your help.

BR.

Is it normal to also get slower routing requests for this custom model?

Yes, especially for routes that would normally take toll roads. Since the route using the custom model deviates from the one the landmarks were prepared for, it takes longer to calculate.

Do you recommend upgrading to GraphHopper 7?

Yes, it is almost always recommended to upgrade to the newest version. In this specific case there was an important bugfix regarding distance_influence ("distance_influence should not get a default value if no value is specified", PR #2716 by karussell in graphhopper/graphhopper) and a performance improvement that is quite important for landmarks with custom models that strongly deviate from the prepared profile ("Make sure LM approximation is never worse than beeline", PR #2756 by easbar).

I increased the default number of landmarks from 16 to 48. Do you mean this will increase routing time when using LM? (only for avoiding toll roads)

You mean it will decrease routing time? Yes, I think so. If not, you might have to experiment with the number of active landmarks, i.e. the routing.lm.active_landmarks parameter.
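As a plain-Java toy of why more landmarks can help (a Manhattan grid with landmarks in two corners; not the GraphHopper code): each landmark gives the lower bound |d(L,t) − d(L,u)| and A* takes the maximum over the active landmarks, so an extra landmark can only tighten the estimate:

```java
public class ActiveLandmarksDemo {
    // shortest-path distance on an open grid = Manhattan distance
    static int dist(int x1, int y1, int x2, int y2) {
        return Math.abs(x1 - x2) + Math.abs(y1 - y2);
    }

    public static void main(String[] args) {
        int n = 100;                            // grid [0,n] x [0,n]
        int ux = 20, uy = 70, tx = 60, ty = 30; // query u -> t
        int trueDist = dist(ux, uy, tx, ty);    // 80

        // landmark in corner (0,0): lower bound |d(L,t) - d(L,u)|
        int b1 = Math.abs(dist(0, 0, tx, ty) - dist(0, 0, ux, uy));
        // second landmark in corner (n,0)
        int b2 = Math.abs(dist(n, 0, tx, ty) - dist(n, 0, ux, uy));

        System.out.println(b1);                           // 0 -> useless on its own
        System.out.println(Math.max(b1, b2));             // 80
        System.out.println(Math.max(b1, b2) == trueDist); // true -> exact with both
    }
}
```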

Moreover, to reduce the memory footprint, I used setPreparationProfile on the LM preparation handler

Note that this can yield worse performance.

If I understood correctly, all routes in the "custom_truck" profile must be more restrictive than or equal to (in terms of speed or priority) those in the "custom_car" profile?

I’m not entirely sure, but yes, probably.
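To illustrate why that condition matters, a tiny plain-Java check (hypothetical three-node graph with a landmark prepared on the car weights; not the GraphHopper code): as long as every truck edge weight is at least the car weight, the car-based estimate remains a valid lower bound for truck queries; if a truck edge were cheaper, the estimate could overestimate and A* might return a suboptimal route:

```java
public class SharedLMPreparationDemo {
    public static void main(String[] args) {
        // hypothetical line graph 0-1-2, landmark at node 0
        double[] carW = {2.0, 2.0};          // custom_car weights of edges 0-1 and 1-2
        double dL1 = carW[0];                // landmark distance to node 1
        double dL2 = carW[0] + carW[1];      // landmark distance to node 2
        double bound = Math.abs(dL2 - dL1);  // car-based LM estimate for a 1->2 query: 2.0

        double truckOk = 3.0;   // truck weight >= car weight on edge 1-2
        double truckBad = 1.0;  // truck weight <  car weight: violates the condition

        System.out.println(bound <= truckOk);  // true  -> still a valid lower bound
        System.out.println(bound <= truckBad); // false -> overestimate, A* may go wrong
    }
}
```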

Btw, is there any reason you do not use CH instead of LM?

Thanks a lot for all your explanations.
I already use CH preparation for "normal" profiles. In fact, I have 4 profiles for which I use CH preparation (car, small truck, medium truck and large truck), and for each I can use two more options: avoid toll and shortest route. So building a CH preparation for every possible combination would mean a lot of preparation time and a large memory footprint when launching GraphHopper, since everything is loaded into memory.
That's why I want to use LM for the two options (avoid toll and shortest route).

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.