Is there a doc/example for using GraphHopper to batch-process a large number of routes from the command line, without web-call overhead? Example use case: a large CSV file with start and end locations, for which time and distance estimates are required.
There isn’t anything like this yet, but it should be easy to use a Python client or even just bash via curl.
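For example, a minimal Python sketch of such a client (the CSV column names and the default local port 8989 are assumptions; it queries the standard `/route` endpoint, whose response reports time in milliseconds and distance in meters):

```python
import csv
import json
import urllib.parse
import urllib.request

def route_url(from_lat, from_lon, to_lat, to_lon,
              base="http://localhost:8989/route"):
    """Build a GraphHopper /route request URL for one origin/destination pair."""
    query = urllib.parse.urlencode([
        ("point", f"{from_lat},{from_lon}"),  # origin
        ("point", f"{to_lat},{to_lon}"),      # destination
        ("profile", "car"),                   # profile name from your config
    ])
    return f"{base}?{query}"

def process_csv(path):
    """Read rows of from_lat,from_lon,to_lat,to_lon and print time/distance."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            url = route_url(row["from_lat"], row["from_lon"],
                            row["to_lat"], row["to_lon"])
            with urllib.request.urlopen(url) as resp:
                best = json.load(resp)["paths"][0]
                # "time" is milliseconds, "distance" is meters
                print(row, best["time"], best["distance"])
```

Each request hits a locally running server, so there is no network latency beyond localhost, but each call still pays HTTP and JSON overhead.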
You mean using Python/bash to query via the web API? That seems like a lot of overhead if you want to do millions of route calculations…
Yep, it would be completely local and offline. Could you point me to the doc on how to create the graph folder?
This example https://github.com/graphhopper/graphhopper/blob/master/docs/core/routing.md still seems to be about creating the server etc.
Those are per-platform configurations, like simplifying the response geometry or using an in-memory vs. a memory-mapped graph. They can be used wherever needed, though, e.g. the mobile config on desktop too if you are low on memory.
You could adapt the code of QueryTorture, which issues requests in parallel, or convert your CSV to the log format it reads.
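If Java is not an option, the parallel-request idea behind QueryTorture can be sketched in Python with a thread pool (this is not GraphHopper code; the `fetch` callable and the CSV column names are assumptions — in practice `fetch` would call a local server’s `/route` endpoint):

```python
import csv
from concurrent.futures import ThreadPoolExecutor

def batch_route(rows, fetch, workers=8):
    """Query time/distance for each origin/destination row in parallel.

    `rows` is an iterable of dicts with from_lat/from_lon/to_lat/to_lon;
    `fetch(from_lat, from_lon, to_lat, to_lon)` returns one route result.
    Results come back in input order.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fetch, r["from_lat"], r["from_lon"],
                               r["to_lat"], r["to_lon"]) for r in rows]
        return [f.result() for f in futures]

def load_rows(path):
    """Read the CSV into a list of dict rows."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

Since the server does the routing work, a handful of worker threads is usually enough to keep all its cores busy despite the Python GIL.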
This is something I’ve been looking for for ages. Has anyone already created the code? If so, would you share it? (I can’t code in Java.) Or what would the overhead be if the server were accessed from Python or R (which I can code in)?
And by the way, can GraphHopper find the shortest route (not only the fastest one) too?
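(GraphHopper does support a "shortest" weighting; in recent versions it is selected per profile in the server config, roughly like the sketch below — the profile name is made up and the exact keys depend on the GraphHopper version:)

```yaml
profiles:
  - name: car_shortest   # hypothetical profile name
    vehicle: car
    weighting: shortest  # minimize distance instead of travel time
```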
Many thanks and best wishes.
Just wanted to check in and see if anyone’s been able to do this?