I’m trying to load the whole planet in for car routing. I read in the docs (https://github.com/graphhopper/graphhopper/blob/master/docs/core/deploy.md) that loading the global route network would take around 22GB. I’m running on a machine with 120GB, and I run out of memory after around 3 hours of processing. I do the following:
Is there something special I need to do to get it to load in a reasonable amount of RAM? Or, in my case, to load at all? Is there some flag or setting I'm missing? Should I load the data in a different format? How would you normally load the whole planet file for car queries?
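One setting worth checking is how GraphHopper stores the graph during import: by default it keeps everything on the heap, but it can also memory-map the graph files, which bounds heap usage at the cost of speed. A minimal sketch in the `config.properties` style; the exact key names vary between GraphHopper versions, so treat these as illustrative rather than authoritative:

```properties
# Memory-map the graph files instead of holding everything on the heap.
# Slower than the in-memory default, but bounds heap usage.
graph.dataaccess=MMAP

# Restrict the import to a single vehicle profile if you only need car.
graph.flag_encoders=car
```

Independently of this, make sure the JVM is actually allowed to use the machine's memory by setting an explicit large heap when launching the import, e.g. `-Xmx100g -Xms100g`; the JVM default maximum heap is far below 120GB.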
We do need to do the whole planet, yes. Otherwise we'd run into cross-border routing issues when we go to use the data. I've actually found a repository of continent files, and that will help; the borders between most continents are small, after all. However, the main problem remains the same: how can I load these files into memory correctly? The Europe file alone is still almost 19GB.
Do you think the difference lies in that resolution? Do you think I could load the whole planet in a sane amount of memory if I just set that to 300 (which is the default)?
What are the parameters you use to load the planet file?
What effect does changing the resolution have, by the way? I have it at 50 because that's what the original author of the code explicitly set it to, and I'm a bit wary of changing it, since I don't want to change the semantics of the code too much.
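If that resolution is the location index resolution (GraphHopper's `LocationIndexTree` uses a minimum resolution in meters, and its default of 300 would match the default mentioned above), then it only controls how fine the spatial index is that maps coordinates to road segments: a smaller value means faster, more precise coordinate lookups at the cost of a larger index and more memory during preparation, and it should not change the routes themselves. A hedged sketch; the key name `index.resolution` here is hypothetical, and the real name depends on the GraphHopper version:

```properties
# Hypothetical key name, shown for illustration only.
# 50 builds a much finer (and larger) spatial index than the default 300;
# it affects lookup cost and memory, not routing semantics.
index.resolution=300
```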
In the end, by processing individual region files and reducing the precision for one of them (Asia), we managed to get the job to run. For future reference: the Europe file, which is almost 19GB, requires ~80GB of RAM to load at a precision of 50 meters.
This is just unpacking the OSM file. We don’t yet know how it’ll behave when we actually use it, but that’s a problem for tomorrow =)
I also ran into this problem. @karussell How do you run the world with 22GB of RAM and CH prepared? I have Europe with 5 flag encoders and 3 weightings, and starting GraphHopper takes more than 64GB of RAM; it runs with 56GB after the initial load.
The deploy.md describes one vehicle profile, one weighting, and one thread for the CH preparation.
The more encoders and weightings you need, the more RAM the CH preparation requires and the longer it takes. To reduce this time you can run the CH preparations in parallel, but this requires even more RAM. E.g. we use servers with >200GB RAM to make the import fast.
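To illustrate how the profile count multiplies the work: every (encoder, weighting) pair gets its own CH preparation, so 5 encoders with 3 weightings means 15 preparations instead of 1. A hedged config sketch; the key names follow the older `config.properties` style and may differ in your GraphHopper version:

```properties
# One preparation per (encoder x weighting) combination:
# trimming either list directly reduces import RAM and time.
graph.flag_encoders=car
prepare.ch.weightings=fastest

# Running several preparations in parallel trades extra RAM for import time.
prepare.ch.threads=1
```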
Running the service is a different story and usually requires less RAM, but it also depends on the number of weightings and vehicle profiles. For running the service there is a rule of thumb: the disk size (of graph-cache-gh) plus 2GB. (The higher the request volume, the more RAM you should add.)
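That rule of thumb is easy to turn into a number for `-Xmx`. A small sketch; the directory name comes from the comment above, and the 2GB headroom is the stated rule of thumb, not something measured:

```python
import os

def recommended_heap_bytes(graph_dir: str, headroom_bytes: int = 2 * 1024**3) -> int:
    """Rule of thumb for serving: on-disk size of the graph folder plus ~2GB."""
    total = 0
    for root, _dirs, files in os.walk(graph_dir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total + headroom_bytes

# e.g. recommended_heap_bytes("graph-cache-gh") for a 20GB graph folder
# suggests roughly a 22GB heap, i.e. something like -Xmx22g.
```

Remember this covers serving only; as noted above, the CH preparation itself can need far more than this.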