
Request large number of routes


#1

Let’s say we have 600,000 possible route combinations. How can one efficiently request a local GraphHopper server to get routing data for each single route? I am using the Routing API for this so far and iterating route by route, which is apparently too slow at this size. Since the Matrix API does not seem to be part of the open source repository, is there another or faster way to fetch the route information at this scale?


#2

One possible way I can think of is multithreading. It depends on how powerful your server is, but you can, for example, split the 600k requests into 1000 chunks, so each chunk has a size of 600 requests, and send one chunk at a time.

Wait for all to finish and then gather the results.
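
A rough Python sketch of that chunked approach (the question uses Python). It assumes a local GraphHopper instance on the default port 8989 with its standard `/route` endpoint, plus a hypothetical list `route_pairs` of (from, to) coordinate pairs; the chunk size and thread count are only placeholders:

    import requests
    from concurrent.futures import ThreadPoolExecutor

    BASE_URL = "http://localhost:8989/route"  # local GraphHopper server, default port

    def fetch_route(pair):
        # one pair = ((from_lat, from_lon), (to_lat, to_lon))
        (f_lat, f_lon), (t_lat, t_lon) = pair
        params = [("point", f"{f_lat},{f_lon}"), ("point", f"{t_lat},{t_lon}")]
        return requests.get(BASE_URL, params=params).json()

    def chunks(items, size):
        for i in range(0, len(items), size):
            yield items[i:i + size]

    results = []
    with ThreadPoolExecutor(max_workers=50) as pool:
        for chunk in chunks(route_pairs, 600):  # e.g. 1000 chunks of 600 requests
            # send one chunk concurrently, wait for it, then gather the results
            results.extend(pool.map(fetch_route, chunk))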

Another way would be async requests, but I don’t know if it is a good idea to send 600k requests at once.


#3

Are you using R or Python for this?


#4

@martin.staehr
Thanks for the hint with async and multithreading. I tried the library grequests, but it also takes quite long to fetch 10,000 requests. I will try multithreading as the next option.

@Anyaoha
Python


#5

I’m not that experienced in Python, but in Java I would build one big array with all requests. Then split the array by X (the number of processes) and start them all. Wait for them to finish, gather the results into a map again, and save the map to JSON (or whatever file format you need) or to a database.
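
A rough Python equivalent of that pattern (the original question uses Python), reusing the hypothetical `fetch_route` and `route_pairs` from the sketch above; the worker count and file name are placeholders:

    import json
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def fetch_all(pairs, workers=32):
        results = {}
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # submit every request and remember which index it belongs to
            futures = {pool.submit(fetch_route, pair): i for i, pair in enumerate(pairs)}
            for future in as_completed(futures):
                results[futures[future]] = future.result()
        return results

    # gather everything into one map and save it as JSON
    with open("routes.json", "w") as f:
        json.dump(fetch_all(route_pairs), f)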

Next questions (if used locally):

  1. Are you using a CH-prepared server instance (speed mode) or flexible mode? If you run in flexible mode, do you actually need it?
  2. Which map do you use? (so world or specific country)
  3. Is the map stored in ram or not?


#6

Please let me know if you or anyone else has R code for this.
Thanks


#7

What is a CH-prepared server?
I am using the default config.yml file and the map (Europe) is stored in RAM.
So I am guessing it is speed mode:

  # By default the speed mode with the 'fastest' weighting is used. Internally a graph preparation via
  # contraction hierarchies (CH) is done to speed routing up. This requires more RAM/disc space for holding the
  # graph but less for every request. You can also setup multiple weightings, by providing a comma separated list.
  prepare.ch.weightings: fastest

I think there is no faster option for requesting GraphHopper (except for the mentioned Matrix API). The multithreading approach accelerates it slightly, but it is still too slow, and the asynchronous way with “aiohttp” aborts for more than 10k requests, although it is quite fast for fewer than 10k requests. I might try the chunk strategy next.
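
For reference, such aborts are often caused by opening all connections at once. A minimal sketch that bounds the number of in-flight aiohttp requests with a semaphore; the endpoint and `route_pairs` are the same assumptions as in the earlier sketches, and the concurrency limit is a placeholder:

    import asyncio
    import aiohttp

    async def fetch_route(session, sem, pair):
        (f_lat, f_lon), (t_lat, t_lon) = pair
        params = [("point", f"{f_lat},{f_lon}"), ("point", f"{t_lat},{t_lon}")]
        async with sem:  # never more than `limit` requests in flight
            async with session.get("http://localhost:8989/route", params=params) as resp:
                return await resp.json()

    async def fetch_all(pairs, limit=100):
        sem = asyncio.Semaphore(limit)
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(*(fetch_route(session, sem, p) for p in pairs))

    results = asyncio.run(fetch_all(route_pairs))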

Btw: the problem has been solved. Multithreading was the solution.


#8

Hi, cool that I could help you :slight_smile: What did you do with multithreading? You said it was slow before.