If I want to build a self-hosted routing API covering the whole planet, how much space do I need?
Disk space is probably less of an issue, as roughly 30GB for the PBF and a similar amount for the GH folder are necessary (plus 60GB for elevation data). But a lot of RAM is required, like >25GB, and it can be more with multiple profiles or with elevation etc.
I’m reviving this old thread to ask for some more up-to-date info: could someone give some feedback about how much RAM / disk space / time (to start GH) one would need to set up a worldwide instance for, say, car, hike & bike profiles with elevation?
This depends on whether you are using CH (node- or edge-based) (speed mode) and LM (hybrid mode). You roughly need:
- 20-30GB for the GH graph
- 15-25GB for each LM profile
- 8-12GB for each node-based CH profile
- 20-25GB for each edge-based CH profile
So for car with edge-based CH and hike/bike with node-based CH (these profiles do not support turn costs, so you do not need edge-based CH for them) it would be roughly 25GB for the graph + 42GB for CH, and another 60GB if you want to use LM for all profiles as well. This is the RAM required to run the server. Note that for the import you should either provide more memory (maybe +20%) or not run the preparations in parallel (prepare.ch.threads=1 and prepare.lm.threads=1). You can also use datareader.dataaccess=MMAP to reduce the memory usage. Using node-based CH instead of edge-based CH will strongly reduce the import time.
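Put together, the import-memory settings mentioned above would look roughly like this in a GraphHopper config file (a sketch only; key names can differ between GraphHopper versions, so check the example config shipped with your release):

```properties
# Sketch of the memory-related import settings discussed above.
# Key names may differ between GraphHopper versions.
prepare.ch.threads=1          # run CH preparations sequentially to lower peak memory
prepare.lm.threads=1          # run LM preparations sequentially to lower peak memory
datareader.dataaccess=MMAP    # memory-map the data instead of holding it all in RAM
```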
Thank you very much for these numbers, that’s quite some RAM!
I’m not sure what the 42 is though? Is it 20-25GB (car edge-based CH) + 2×8-12GB (hike + bike node-based CH) = 36-49GB?
I don’t know yet if I want / need LM for all or some profiles (I have to check what it adds). What I want is to run GraphHopper next to my OSM tile server. It currently has “only” 64GB, of which 40 are taken by renderd. I can upgrade it to a 128GB model, which would leave 88GB for GraphHopper, which seems fine if not using LM for all profiles (or not at all).
Yes, I meant node-based CH hike+bike (10+10) + edge-based CH car (22) would be around 42GB for CH total.
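The arithmetic above can be sketched as follows (illustrative only; the per-profile figures are the rough midpoint estimates quoted in this thread):

```python
# Rough RAM estimate (GB) from the figures quoted in this thread.
graph = 25                                 # GH graph (20-30GB range)
ch_edge_based = {"car": 22}                # edge-based CH (20-25GB range)
ch_node_based = {"hike": 10, "bike": 10}   # node-based CH (8-12GB range)

ch_total = sum(ch_edge_based.values()) + sum(ch_node_based.values())
print("CH total:", ch_total)                      # 42
print("Server RAM without LM:", graph + ch_total) # 67
```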
Thanks for the clarification! I just need to check what LM would bring to my use case now.
Currently you need LM for heading, custom models/weighting, and things like block_area, avoiding edges, and round trips. But for short-distance queries you can do all this with flex mode (no preparation needed) as well, though this will result in much slower query times. If you really want to reduce memory you need neither LM nor CH; all they do is speed up the routing queries. Whether you need this or not depends on how fast your queries need to be.
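For reference, the prepared algorithms can be disabled per request via query flags, which is how you would test flex mode against CH/LM. A sketch building such a request URL (assuming a local instance on GraphHopper's default port 8989; coordinates are illustrative, and the `ch.disable`/`lm.disable` parameter names should be checked against your GraphHopper version):

```python
from urllib.parse import urlencode

# Hypothetical local GraphHopper instance; points are illustrative.
base = "http://localhost:8989/route"
params = [
    ("point", "47.3769,8.5417"),
    ("point", "47.0502,8.3093"),
    ("profile", "hike"),
    ("ch.disable", "true"),  # skip speed mode (CH)
    ("lm.disable", "true"),  # skip hybrid mode (LM) -> plain flex mode
]
url = f"{base}?{urlencode(params)}"
print(url)
```

Running the same route with and without these flags is a quick way to compare flex-mode query times against the prepared modes.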
Thanks again! I need to run some tests with and without, but I think I’ll want to allow long-distance queries. The app is a hiking app (= usually short distances) but is well suited to planning road trips as well. Anyway, the 128GB server now costs the same as my “old” 64GB one, so…
I’ll run some tests, thanks again!!
I would like to highlight this quote:
If you really want to reduce memory you need neither LM nor CH
If you really want to use little memory you need the MMAP setting, although the query itself might be slower with MMAP.
Still, CH with MMAP could be an interesting combination, as both the overall memory usage and the per-request memory usage will be low. This is the reason this combination is used on Android devices.
But creating the CH or LM profiles requires a certain amount of memory that cannot be memory-mapped.