Import time takes over 12 hours, but used to take 1 hour

I have imported GH dozens of times now and it only takes 1-2 hours tops importing North America with the car profile.

Now I am trying to import it again and it has been running for 12 hours, still going awfully slowly.

I have tried 3 different AWS instances and somehow they all go slow. I removed the GH folder and cloned it again from GitHub, and it still does the same thing.

I set java to: export JAVA_OPTS="-Xmx60g -Xms60g"
I’m using: ./graphhopper.sh -a web -i northAmerica.osm.pbf
It hasn’t reached the memory limit ever.

The only thing I have changed in config.yml is:
graph.flag_encoders: car|turn_costs=true
routing.ch.disabling_allowed: true

Any ideas what I could check to see why it’s running slow?

Can you show your whole config.yml file and the logs of the import as well? Maybe you enabled turn costs, in which case the CH preparation is much slower. https://github.com/graphhopper/graphhopper/issues/1565

easbar, thanks for the reply!

I think I have turn costs enabled on all of them. One GH install is a different version. I was able to get one to finish in 1.2 hours and the other finished in 17 hours. I believe both have turn costs enabled, but I'm guessing the one that finished quickly probably wasn't configured right? I don't know how to test GH to find out whether turn costs were actually enabled or not.

The log just went to the terminal and I didn't redirect it to a file, so unless it's copied somewhere else I don't have one.
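For next time, the terminal output can be saved to a file while still being shown live by piping it through `tee`. A minimal sketch — the log path is arbitrary, and a placeholder `echo` stands in here for the actual `./graphhopper.sh -a web -i northAmerica.osm.pbf` command from above:

```shell
mkdir -p logs
# Real usage would be:
#   ./graphhopper.sh -a web -i northAmerica.osm.pbf 2>&1 | tee logs/import.log
# 2>&1 merges stderr into stdout so warnings land in the file too;
# tee prints the stream to the terminal AND writes a copy to the file.
echo "simulated import output" 2>&1 | tee logs/import.log
```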

Here are the configs with most of the comments removed:
Took 1.2 hours:

graphhopper:
  # OpenStreetMap input file PBF or XML, can be changed via command line -Ddw.graphhopper.datareader.file=some.pbf
  datareader.file: ""
  graph.location: graph-cache
  graph.flag_encoders: car|turn_costs=true
  profiles:
    - name: car
      vehicle: car
      weighting: fastest
  profiles_ch:
    - profile: car
  #   - profile: car_with_turn_costs
  profiles_lm: []
  prepare.min_network_size: 200
  routing.ch.disabling_allowed: true
  routing.non_ch.max_waypoint_distance: 1000000
  graph.dataaccess: RAM_STORE

Took 17 hours:

    graphhopper:
      graph.flag_encoders: car|turn_costs=true
      graph.encoded_values: road_class,road_class_link,road_environment,max_speed,road_access
      graph.bytes_for_flags: 4
      prepare.ch.weightings: fastest
      prepare.ch.edge_based: edge_or_node
      prepare.min_network_size: 200
      prepare.min_one_way_network_size: 200
      routing.ch.disabling_allowed: true
      routing.non_ch.max_waypoint_distance: 10000000
      graph.dataaccess: RAM_STORE
    server:
      applicationConnectors:
      - type: http
        port: 8989
        # for security reasons bind to localhost
      requestLog:
          appenders: []
      adminConnectors:
      - type: http
        port: 8990
    # See https://www.dropwizard.io/1.3.8/docs/manual/configuration.html#logging
    logging:
      appenders:
      - type: file
        timeZone: UTC
        currentLogFilename: logs/graphhopper.log
        logFormat: "%d{YYYY-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
        archive: true
        archivedLogFilenamePattern: ./logs/graphhopper-%d.log.gz
        archivedFileCount: 30
        neverBlock: true
      - type: console
        timeZone: UTC
        logFormat: "%d{YYYY-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"

OK, this makes sense. Your second configuration is for a GH version < 1.0. In that old config, using graph.flag_encoders: car|turn_costs=true and prepare.ch.edge_based: edge_or_node meant that turn costs were enabled. This explains the much longer import time. In the first config you would have to add turn_costs: true under profiles, otherwise turn costs are disabled and the preparation is (much) faster. However, 1.2h vs 17h is pretty extreme, so maybe there is another reason on top of this. Which GH version did you use?
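For reference, a sketch of how the first (1.0-style) config's profile section would look with turn costs actually enabled — only the turn_costs line is added relative to your config:

```yaml
graphhopper:
  graph.flag_encoders: car|turn_costs=true
  profiles:
    - name: car
      vehicle: car
      weighting: fastest
      turn_costs: true   # without this line the profile ignores turn costs
  profiles_ch:
    - profile: car
```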

Shouldn't your logging configuration produce a log file at logs/graphhopper.log?

    logging:
      appenders:
      - type: file
        timeZone: UTC
        currentLogFilename: logs/graphhopper.log