I recently made changes to the handleWayTags function of the MotorcycleFlagEncoder.java file. I first built the graph locally with the netherlands-latest.pbf map. This has been successful.
After this was successful, I also wanted to build the graph on a bigger server, with the planet.pbf.
Unfortunately, the process of building the graph with the planet.pbf fails.
I can’t find any error message in the logs, so it’s unclear to me why the graph cannot be built.
I’ve never had problems with this before; I’ve built the graph from the planet.pbf several times. Unfortunately I’m now running into a problem whose cause I can’t determine.
Hi,
I guess your server ran out of memory. You should look into the system log (where you will likely find that the process was killed). Processing the planet with 64GiB of RAM was never possible for me.
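If the OOM killer was the cause, it leaves a trace in the kernel log. A sketch of how to check (exact message wording varies by kernel version, and reading the kernel log may require root):

```shell
# Look for OOM-killer activity in the kernel log; falls back from
# journalctl to dmesg, and prints a note if nothing is found.
journalctl -k 2>/dev/null | grep -iE "out of memory|oom-killer|killed process" || \
dmesg 2>/dev/null | grep -iE "out of memory|oom-killer|killed process" || \
echo "no OOM-killer messages found (or no permission to read the kernel log)"
```

A match mentioning your Java process would confirm that the OS, not the JVM, terminated the import, which is why no `OutOfMemoryError` appears in the application logs.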
btw: the import is definitely possible with 64GB RAM but how much RAM is used depends on the config:
how many profiles, whether LM or CH is enabled (and whether it is edge-based), whether MMAP is used, …
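For reference, these knobs all live in the config. A sketch of the RAM-relevant parts of a GraphHopper `config.yml` (option names follow GraphHopper’s `config-example.yml` in recent versions; adjust to the version you run):

```yaml
graphhopper:
  datareader.file: planet-latest.osm.pbf
  # RAM_STORE keeps the graph on the heap; MMAP trades RAM for import speed
  graph.dataaccess: RAM_STORE
  profiles:
    - name: motorcycle
      vehicle: motorcycle
      weighting: fastest
      # edge-based routing / turn costs increase memory use noticeably
      turn_costs: true
  # leaving CH and LM preparation disabled keeps the import RAM low
  profiles_ch: []
  profiles_lm: []
```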
I don’t get any “OutOfMemory” exception. I used to get that error message when I allocated too little RAM, but with 64GiB there is no error message; the process just stops by itself.
So you both think the problem is related to my RAM? How much RAM do you advise me to use, given my config?
Sorry if I’m asking simple questions, I’m new to using Graphhopper but here to learn!
There are quite a few factors affecting RAM usage (notably whether you are using CH, and the number of encoded values). For instance, to get my import jobs done I need around 500GB.
The JVM version is important as well, since newer JVMs implement better memory handling/GC strategies. I would advise using at least JDK 11 built from OpenJDK sources with the HotSpot engine enabled.
And don’t forget to supply an -Xmx that fits the available RAM (for instance, if you have 64g you should use -Xmx63g on the command line).
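As a minimal sketch of sizing -Xmx from the machine (this heuristic is my assumption, not a GraphHopper recommendation; the 90% margin leaves headroom for the OS and GC native overhead):

```shell
# Read total RAM from /proc/meminfo (Linux) and derive a heap size
# of roughly 90% of it; on a 64 GiB box this lands near -Xmx57g.
total_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
xmx_mib=$(( total_kib * 90 / 100 / 1024 ))
# print the resulting launch command (jar and config names assumed)
echo "java -Xmx${xmx_mib}m -jar graphhopper-web.jar server config.yml"
```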
I am pretty sure there are some kind of diagnostics, but you haven’t found the right spot where they are logged.
It seems you are not using CH, so IMO this import should even be doable with less than 32GB RAM. Did you specify -Xmx as suggested by @OlafFlebbeBosch?
If you enable CH (and especially with turn_costs=true) you might need more than 64GB RAM for 3 profiles. In general you can reduce RAM usage via:
graph.dataaccess: MMAP
but this might make the import slower. (And if the graph is later used for serving requests it makes those slower too, but then you could switch back to RAM_STORE.)
This depends on the JVM and GC you are using. Some GCs tend to use several GB on their own, so you need to leave room for this; otherwise the Linux OOM killer might decide to kill your app.
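Before starting a long import it can help to confirm what heap the JVM actually got. A small sketch (class name is my own, not part of GraphHopper):

```java
// Print the maximum heap the JVM will use, to verify the -Xmx setting
// took effect before kicking off a multi-hour import.
public class HeapCheck {
    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();
        double gib = maxHeap / (1024.0 * 1024 * 1024);
        // note: GC structures and metaspace live outside this number,
        // so total process RSS will be larger than the heap alone
        System.out.printf("JVM max heap: %.2f GiB%n", gib);
    }
}
```

If this prints a much smaller value than expected, the -Xmx flag is not reaching the JVM (e.g. it was passed after the jar name).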