I’m running GraphHopper on a Ryzen 7950X, which has 32 threads, and I’m a bit disappointed with the results. It seems like GraphHopper is not utilizing all the cores to their full potential; instead, CPU usage hovers at around 20%.
How does GraphHopper handle servers with many threads? Does a single instance utilize all of them, or would I be better off running multiple instances of GraphHopper and putting a load balancer between them?
Every routing request uses only a single thread, so all threads will be used only if you have multiple parallel requests. Or do you mean during import?
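To illustrate the point above: a single request occupies one thread, so a client-side load test needs to fire many requests concurrently before the server can saturate its cores. Here is a minimal sketch of such a parallel client; the URL, coordinates, and profile name are hypothetical placeholders, and the actual HTTP call is stubbed out so the sketch is self-contained.

```python
# Sketch: sending routing requests in parallel so the server can use
# multiple cores. BASE_URL and the coordinates are assumed placeholders.
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlencode

BASE_URL = "http://localhost:8989/route"  # assumed local GraphHopper instance

def build_request_url(start, end):
    # GraphHopper's /route endpoint takes repeated point=lat,lon parameters.
    params = urlencode([("point", f"{start[0]},{start[1]}"),
                        ("point", f"{end[0]},{end[1]}"),
                        ("profile", "car")])
    return f"{BASE_URL}?{params}"

def send_request(url):
    # In a real test this would be urllib.request.urlopen(url).read();
    # stubbed out here so the example runs without a server.
    return url

# Short routes roughly 1 km apart, mirroring the test described above.
pairs = [((52.5 + i * 1e-4, 13.40), (52.5 + i * 1e-4, 13.41))
         for i in range(100)]

# 32 workers to match the 32 hardware threads of the 7950X.
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(send_request,
                            (build_request_url(s, e) for s, e in pairs)))
print(len(results))
```

If the client sends requests sequentially instead (one at a time, waiting for each response), the server will never use more than one routing thread at once, which would match the low utilization described.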
I’m referring to taking requests from a user; in my tests, I’m sending multiple parallel requests.
Is the multi-threading behaviour automatic, or is there a configuration parameter that needs to be set?
Or is it just that my requests aren’t complex enough to reach 100% CPU utilization? (Although I’m sending thousands of requests, the two endpoints are within 1 km.)
There is no need to configure it.
(Although I’m sending thousands of requests, the two endpoints are within 1 km.)
Did you parallelize the sending of these requests?
I believe so; I’m sending them faster than they can be solved, so there should essentially be a queue on the server end.
If only a limited number of threads are used and you are sending requests faster than they can be solved, the latency will go up. Do you see this?
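Rising per-request latency is the symptom to look for: if requests queue up behind a limited number of threads, each response takes longer than the route computation itself. A small sketch of how one might measure this client-side; the worker just sleeps as a stand-in for an actual routing request.

```python
# Sketch: timing each request to check whether latency grows when requests
# arrive faster than they can be served. fake_route_request is a placeholder
# for a real HTTP call to the routing endpoint.
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import median

def timed_call(fn, *args):
    # Return the wall-clock duration of one call in seconds.
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

def fake_route_request(_):
    time.sleep(0.001)  # stand-in for an actual routing request

with ThreadPoolExecutor(max_workers=8) as pool:
    latencies = list(pool.map(lambda i: timed_call(fake_route_request, i),
                              range(200)))

latencies.sort()
p50 = median(latencies)
p99 = latencies[int(len(latencies) * 0.99)]
print(f"p50={p50 * 1000:.1f} ms  p99={p99 * 1000:.1f} ms")
```

With a real server, a p99 far above the p50 (and both climbing as the request rate increases) would indicate requests waiting in a queue rather than the routing itself being slow.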
Just to clarify: all the threads are being used, it’s just very low utilization (around 20%).
I’m not quite sure what you mean by “latency goes up”; do you mean the time required to receive a response?
Yes, I meant the response time of a request.
it’s just very low utilization (around 20%)
How do you measure this? Via