
Way name too long issue with some OSM_IDs



Usage : Map Matching
Branch No : 0.11
Jar File used : compiled through mvn package -DskipTests

I’m new to GraphHopper and Java, and have been testing it for some time using regional extracts, with success. Recently I tried to import the planet OSM file and ended up with a “Way name too long” exception. I checked the source code and found that if the name’s byte length is greater than 256, the name field is truncated by 64 characters and the condition is checked a second time. But some of the Russian names were still longer, and they triggered the exception.

Couldn’t we increase the number of times this check-and-truncate step runs, or simply put it in a while loop until the size drops below 255 bytes?
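Something like this minimal sketch is what I have in mind (the class and method names are hypothetical, not GraphHopper’s actual code). It cuts characters off the end until the UTF-8 encoding fits, which matters because Cyrillic characters take two bytes each:

```java
import java.nio.charset.StandardCharsets;

public class NameTruncator {
    static final int MAX_BYTES = 255;

    // Repeatedly drop the last character until the UTF-8 encoding fits
    // into MAX_BYTES. Cutting whole chars (rather than raw bytes) avoids
    // producing half-encoded, invalid UTF-8 sequences for BMP characters.
    static String truncateToBytes(String name) {
        String s = name;
        while (s.getBytes(StandardCharsets.UTF_8).length > MAX_BYTES) {
            s = s.substring(0, s.length() - 1);
        }
        return s;
    }

    public static void main(String[] args) {
        // 200 Cyrillic chars are 400 UTF-8 bytes, so a fixed number of
        // truncation passes can still leave the name too long.
        String longRussian = "Ж".repeat(200);
        String cut = truncateToBytes(longRussian);
        System.out.println(cut.getBytes(StandardCharsets.UTF_8).length); // prints 254
    }
}
```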

Also, I tried to edit that file in my own fork of the code, but I don’t know how to compile the whole thing, since the build pulls files directly from the repository, if I’m correct. Sorry if these are noob questions.


As the name is used for navigation purposes, a very long name often does not make sense. There shouldn’t be an exception, though, just a warning. The proper fix could be to use an “alternative name” in those cases, because where such long names exist it is a good idea to provide an alternative name for navigational purposes.

Working with the code is explained here.


Hi Karussell,

Thanks for the prompt response. Those are just warnings, as you said. It seems the actual error was due to OSM ID 326787712: “Cannot parse duration tag value: 10分钟” (Chinese for “10 minutes”). This leads to a “GC overhead limit exceeded” error. I tried disabling the GC overhead limit check, but in that case the whole RAM (240 GB) gets consumed and the process is killed by the OS.
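For reference, a defensive parser for such tags might look like the sketch below (hypothetical helper, not GraphHopper’s actual parser). It handles the common OSM duration formats (bare minutes, hh:mm, hh:mm:ss) and falls back with a warning on free-text values like “10分钟” instead of throwing:

```java
public class DurationTag {
    // OSM duration values are commonly "mm", "hh:mm" or "hh:mm:ss".
    // Anything else (e.g. free text like "10分钟") only logs a warning
    // and returns the fallback, so the import is not aborted.
    static long parseSecondsOrDefault(String value, long fallbackSeconds) {
        try {
            String[] parts = value.trim().split(":");
            long v = 0;
            for (String p : parts) {
                v = v * 60 + Long.parseLong(p.trim());
            }
            // "10" and "1:10" are minutes-based; only hh:mm:ss has seconds.
            if (parts.length < 3) {
                v *= 60;
            }
            return v;
        } catch (NumberFormatException ex) {
            System.err.println("Cannot parse duration tag value: " + value);
            return fallbackSeconds;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseSecondsOrDefault("10", 0));      // prints 600
        System.out.println(parseSecondsOrDefault("1:10:30", 0)); // prints 4230
        System.out.println(parseSecondsOrDefault("10分钟", 42));  // prints 42
    }
}
```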

Thanks for pointing me to the docs.


This likely has a different reason 🙂. Try increasing the -Xmx setting; see here for more information.

the whole RAM (240GB) gets consumed and the process gets killed by the OS.

It can’t. The JVM uses only as much RAM as you tell it via -Xmx.
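You can verify this from inside the JVM itself: Runtime.maxMemory() reports (approximately) the -Xmx ceiling, so a quick check confirms the heap is capped. Note that -Xmx only limits the Java heap; native allocations such as memory-mapped files or direct buffers are not counted against it:

```java
public class HeapCap {
    public static void main(String[] args) {
        // Runtime.maxMemory() returns the maximum amount of heap the JVM
        // will attempt to use, i.e. roughly the configured -Xmx value.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Heap cap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
```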


I already set a minimum of 17 GB and a maximum of 150 GB using the -Xms and -Xmx options. Usually the RAM usage hovers around 22 GB (checked using free -h), but once the import reaches the parsing point above, one of two things happens: if the GC overhead limit check is enabled via JAVA_OPTS, a “GC overhead limit exceeded” error occurs; if it is disabled, a “Java heap space” out-of-memory error occurs.

By the way, I’m already running an import of a small .pbf file on the same machine, and tried running the planet OSM import from a different clone of the repository, using a different port. I’ll stop the already running instance, check whether it conflicts with the second run, and report back.


It seems the issue is with the import file I was using, although the md5sum I checked after download matched. I’m getting an “Unable to read PBF file” error. I’ll try clearing the graph cache, run one more time, and respond back here.