Rate Limiting configuration

I want to protect my GraphHopper server using rate-limiting methods (per client).

I am especially interested in the fixed window and sliding window algorithms.
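For reference, here is how I understand the two algorithms as a minimal in-memory sketch, keyed per client (this is my own illustration, not production code and not part of GraphHopper itself):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class RateLimiters {

    // Fixed window: one counter per client, reset when the window rolls over.
    public static class FixedWindow {
        private final int limit;
        private final long windowMillis;
        private final Map<String, long[]> state = new HashMap<>(); // {windowStart, count}

        public FixedWindow(int limit, long windowMillis) {
            this.limit = limit;
            this.windowMillis = windowMillis;
        }

        public synchronized boolean allow(String client, long nowMillis) {
            long[] s = state.computeIfAbsent(client, k -> new long[]{nowMillis, 0});
            if (nowMillis - s[0] >= windowMillis) { // window expired: start a new one
                s[0] = nowMillis;
                s[1] = 0;
            }
            if (s[1] >= limit)
                return false;
            s[1]++;
            return true;
        }
    }

    // Sliding window log: keep each client's recent request timestamps and
    // evict those older than the window before counting.
    public static class SlidingWindow {
        private final int limit;
        private final long windowMillis;
        private final Map<String, Deque<Long>> log = new HashMap<>();

        public SlidingWindow(int limit, long windowMillis) {
            this.limit = limit;
            this.windowMillis = windowMillis;
        }

        public synchronized boolean allow(String client, long nowMillis) {
            Deque<Long> q = log.computeIfAbsent(client, k -> new ArrayDeque<>());
            while (!q.isEmpty() && nowMillis - q.peekFirst() >= windowMillis)
                q.pollFirst();
            if (q.size() >= limit)
                return false;
            q.addLast(nowMillis);
            return true;
        }
    }

    public static void main(String[] args) {
        FixedWindow fw = new FixedWindow(2, 1000);
        System.out.println(fw.allow("client-a", 0));    // true
        System.out.println(fw.allow("client-a", 100));  // true
        System.out.println(fw.allow("client-a", 200));  // false, limit hit
        System.out.println(fw.allow("client-a", 1000)); // true, new window

        SlidingWindow sw = new SlidingWindow(2, 1000);
        System.out.println(sw.allow("client-b", 0));    // true
        System.out.println(sw.allow("client-b", 900));  // true
        System.out.println(sw.allow("client-b", 950));  // false, 2 requests still inside window
        System.out.println(sw.allow("client-b", 1001)); // true, oldest entry expired
    }
}
```

The sliding window avoids the burst-at-the-boundary problem of the fixed window at the cost of storing a timestamp per request.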

My GraphHopper server has 2 CH and 3 LM profiles. They are all custom profiles, if that matters.

CPU cores: 6 (can go up to 8)
GH config:
prepare.ch.threads: 2
prepare.lm.landmarks: 12
prepare.lm.threads: 12
prepare.subnetworks.threads: 8

routing.lm.active_landmarks: 12

A client will make CH, LM, and LM + custom model (sent in the request body) requests.
Is it possible to calculate the maximum load?

Thanks

The memory usage and CPU time will most likely be dominated by the LM + custom model requests. For some custom models and large distances such queries could potentially explore the entire graph. You can set a limit for the number of visited nodes, which at the same time gives you an idea of the maximum memory usage (and roughly the calculation time) per request.
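For example, assuming the usual `config.yml` layout, the visited-nodes limit could be set like this (the value is only an illustration, and the default may differ between versions):

```yaml
graphhopper:
  # upper bound on nodes explored per routing request; requests exceeding
  # it fail instead of consuming unbounded memory/CPU
  routing.max_visited_nodes: 1000000
```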


Thanks, @easbar!
What recommendations can you give for

prepare.lm.landmarks:
prepare.lm.threads:
prepare.subnetworks.threads:
routing.lm.active_landmarks:

so that the import and the LM / LM + custom model requests are processed as quickly as possible, given the CPU cores and assuming there is enough RAM?

prepare.lm.threads:
prepare.subnetworks.threads:

The higher these two values, the faster the corresponding parts of the import will run, as long as you have enough memory.
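As a sketch only (values are a starting point to tune against your own import times, not official guidance), a 6-core machine with 2 CH and 3 LM profiles might use:

```yaml
prepare.ch.threads: 2          # one thread per CH profile
prepare.lm.threads: 3          # more threads than LM profiles brings no benefit
prepare.subnetworks.threads: 6 # bounded mainly by cores and memory
```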

prepare.lm.landmarks:
routing.lm.active_landmarks:

These are a bit trickier to configure. The higher prepare.lm.landmarks, the more memory you will need just to start the server, without necessarily seeing positive effects on query performance. routing.lm.active_landmarks can improve the A* heuristic, but also comes with an overhead to calculate the heuristic, so you’ll need to experiment with this value to find out what works best for you. Note that LM + custom model requests in particular might not benefit from many active landmarks at all, and it could even be counterproductive depending on the model. When you experiment with these requests you should always compare with lm.disable=true, but the results will depend on the custom model.
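For example, the same request can be benchmarked with and without landmarks by toggling the lm.disable parameter (coordinates and profile name here are placeholders):

```
GET /route?point=52.50,13.40&point=52.60,13.50&profile=car                  # A* with landmarks
GET /route?point=52.50,13.40&point=52.60,13.50&profile=car&lm.disable=true  # landmarks disabled
```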


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.