Is it possible to reset a server-side custom model "multiply_by": "0" per-request?

I have custom OSM data where I have e.g. private roads and gates which by default should not be accessible by car. I have been “misusing” the toll tag to mark these things, so that e.g. gates are actually tiny segments of road where I have “toll=ALL” and “multiply_by”: 0. This works fine because in Finland we don’t have any actual toll roads.
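For illustration, a minimal sketch of such a server-side custom model (it is an assumption here that the rule sits in the priority section; the actual model may look different):

```json
{
  "priority": [
    { "if": "toll == ALL", "multiply_by": "0" }
  ]
}
```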

However, I want to give the user the ability to override this in a per-request custom model by adding “if: toll == ALL, multiply_by: 1”. But it seems that a “multiply_by”: 0 cannot be overridden per request anymore. I believe GraphHopper builds on top of the server-side custom model, so the formula becomes “0 * 1”, am I right?
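The attempted per-request override would look roughly like this sketch; since the request statements are combined with the server-side model, the factors are multiplied and the result stays 0 * 1 = 0, so the edge remains blocked:

```json
{
  "priority": [
    { "if": "toll == ALL", "multiply_by": "1" }
  ]
}
```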

Until now I had this the other way round (accessible by default, made non-accessible per-request), which obviously works just fine.

If this works the way I think it works, it would be nice to have a “set_to” operator in addition to “multiply_by”.

Yes, this is how it currently works. We also thought about a set_to operator, but there has never been a use case where it was really required (and we try to avoid adding to the “language” except for critical things).

For example, without a preparation algorithm (LM or CH) you can build your profile entirely per request without any overhead. The only requirement for a server-side profile is a single rule in the speed section that limits the speed to a certain value.
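A bare-bones server-side custom model of that kind could look like the following sketch; the 130 km/h cap is just an assumed value, the only point being that the speed section limits the speed:

```json
{
  "speed": [
    { "if": "true", "limit_to": "130" }
  ]
}
```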

Thank you for your quick response. I currently use three server-side profiles with very specific rules, as this enables fast routing and routes longer than 500 km. I give those to the user in the form of a dropdown list as “built-in settings”, which the user can then modify.

I suppose my strategy could be to add a simple, bare-bones server-side profile, as you suggested, which I then use in a request with ch.disable=true while passing all the settings in the request.
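Such a request against a bare-bones profile could look roughly like this sketch, sent e.g. as a POST to the /route endpoint (the profile name and coordinates are made up):

```json
{
  "points": [[24.94, 60.17], [27.68, 62.89]],
  "profile": "moto_barebones",
  "ch.disable": true,
  "custom_model": {
    "priority": [
      { "if": "toll == ALL", "multiply_by": "0" }
    ]
  }
}
```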

If you plan to use the model with an LM preparation, you need to use a server-side model that is as close as possible to the model you finally use. Otherwise the speed-up of the preparation gets lost, i.e. a single “bare-bones” profile will perform worse (potentially much worse) than what you currently have. So the best idea is likely to create the profile that you want to have and only exclude the private roads/barriers via a custom_model in the request. Also have a look at some performance tunings for large-distance routing requests with a custom_model here, and e.g. exclude road_class==TRACK for motor vehicles (everything that helps to reduce the number of possible roads will improve response speed).
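A per-request custom_model along those lines might look like this sketch, combining the gate/private-road blocking with the suggested road_class == TRACK exclusion (whether excluding TRACK fits a gravel-focused profile is of course up to you):

```json
{
  "priority": [
    { "if": "toll == ALL", "multiply_by": "0" },
    { "if": "road_class == TRACK", "multiply_by": "0" }
  ]
}
```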

I see, I may have misunderstood something about how ch.disable=true works. I thought it instantly loses all the benefits of the pre-calculated profile, but apparently it does not. If so, I need to revisit my selection of profiles and how I use them in case I can optimize my app better.

Now I am wondering about the 500 km limit when using an ad-hoc per-request profile (that is, ch.disable=true), which I thought was a hard limit in the GH routing engine. I remember seeing an error message about that in the GH Maps UI, so I now handle that explicitly in my application as well and tell the user about the limit too. However, now I don’t seem to be hitting that limit anymore. I recently upgraded from encoder version 3 to 4 (I’m using the Docker image from IsraelHikingMap/graphhopper-docker-image-push without pinning the tag, so I always use the latest version), but I suppose that does not have anything to do with the routing engine?

Just as background information: I am creating routes with the maximum amount of gravel for adventure motorbikers, and e.g. creating a 730 km route with ch.disable=true takes around 5 seconds now. But I would expect (or hope) that most users would use the built-in profiles (basically min, optimal and max gravel settings), which use the shortcuts (ch.disable=false), and then of course the routing is quick (1.48 seconds for the same route).

Let me explain this parameter a bit better. Let’s say you have a CH preparation; then "ch.disable": true will completely disable the CH preparation. But if you also have an LM preparation, that will be used instead.
If you only have an LM preparation, then "ch.disable": true does nothing and is not required, but you can also disable the LM preparation via "lm.disable": true and use normal A*, which sometimes, for some heavily modified profiles, can make the response time faster.

So basically you have a “fallback chain” like CH → LM → A* which is triggered using the "xy.disable": true parameters.
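In request terms, the end of that chain can be exercised roughly like this sketch (the profile name is made up; with only "ch.disable": true an existing LM preparation would still be used, adding "lm.disable": true falls through to plain A*):

```json
{
  "points": [[24.94, 60.17], [28.19, 61.06]],
  "profile": "gravel_max",
  "ch.disable": true,
  "lm.disable": true
}
```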

But it seems that a “multiply_by”: 0 cannot be overridden per request anymore. I believe GraphHopper builds on top of the server-side custom model, so the formula becomes “0 * 1”, am I right?

Yes, the statements of the custom model in the request will be appended to the server-side model.

If this works the way I think it works, it would be nice to have a “set_to” operator in addition to “multiply_by”.

Yes, as I said before, this is currently not planned, as other ways are usually found. E.g. in your case you can use it like before (“the other way round”).

Also, the set_to parameter would have the disadvantage that you cannot “cross query” using the same LM preparation, i.e. you would need to create a new profile and preparation anyway.
See the preparation_profile option introduced here.
