Further than 500 km apart

Hello guys,

I’ve been trying to resolve a certain route in a GraphHopper instance running locally, but when I enable the custom_model, I receive the error “Using the custom model feature is unfortunately not possible when the request points are further than 500 km apart.”

I noticed that in GraphHopper Maps (“GraphHopper Maps | Route Planner”), the error is slightly different: “Using the custom model feature is unfortunately not possible when the request points are further than 700 km apart.”

I also tried to solve this route with an HTTP request in Postman, and the response times with ch.disabled: true/false are very, very different.

Using false, I get responses under one second; with true, it takes more than twenty seconds.

Another interesting thing: using https://explorer.graphhopper.com/ with the same payload (listed below), whether ch.disable is true or false, I never get a response time above two seconds.

{
    "points": [
        [
            -89.6785026202, 20.9014352423
        ],
        [
            -106.408072, 31.605035
        ]
    ],
    "profile": "truck",
    "locale": "pt_BR",
    "pointsEncoded": true,
    "instructions": false,
    "details": [
        "average_speed",
        "leg_distance",
        "leg_time"
    ],
    "algorithm": null,
    "customModel": {
    },
    "ch.disable": false
}

1 - Where can I configure this 500/700 km limitation in my GH instance?
2 - Why does my JSON request have such different response times with “ch.disable”: true? And how can I improve the CH processing time in my local instance?

Here’s my GraphHopper configuration file:

graphhopper:
  datareader.file: ""
  custom_models.directory: /graphhopper/profiles
  profiles:
    - name: car
      custom_model_files: [ car.json ]
    - name: truck
      weighting: custom
      custom_model_files: [ truck.json ]
  profiles_ch:
    - profile: car
    - profile: truck
  profiles_lm: []
  prepare.min_network_size: 1000
  prepare.subnetworks.threads: 1
  routing.non_ch.max_waypoint_distance: 100000000 # 100,000,000
  routing.max_visited_nodes: 15000000 # 15,000,000
  import.osm.ignored_highways: footway,cycleway,path,pedestrian,steps # typically useful for motorized-only routing
  index.max_region_search: 30
  graph.location: graph-cache
  graph.dataaccess.default_type: RAM_STORE
  graph.encoded_values: hgv,max_weight,max_height,max_width,toll,car_access,car_average_speed,road_access
server:
  application_connectors:
    - type: http
      port: 8989
      bind_host: localhost
      max_request_header_size: 50k
  request_log:
    appenders: []
  admin_connectors:
    - type: http
      port: 8990
      bind_host: localhost
logging:
  appenders:
    - type: file
      time_zone: UTC
      current_log_filename: logs/graphhopper.log
      log_format: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
      archive: true
      archived_log_filename_pattern: ./logs/graphhopper-%d.log.gz
      archived_file_count: 30
      never_block: true
    - type: console
      time_zone: UTC
      log_format: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
  loggers:
    "com.graphhopper.osm_warnings":
      level: DEBUG
      additive: false
      appenders:
        - type: file
          current_log_filename: logs/osm_warnings.log
          archive: false
          log_format: "[%level] %msg%n"

The limits are different because different versions of GraphHopper Maps are used. The live instance uses the most recent version, while the version included in open source GraphHopper has not been updated for some time. The limit is hard-coded here: graphhopper-maps/src/stores/QueryStore.ts at 3d1ff27068663ea638f7f05e5521944e1fa17151 · graphhopper/graphhopper-maps · GitHub

That’s expected, isn’t it? CH speeds up the routing drastically.

You mean how can you improve the processing time for flexible/custom model requests (not using CH)? You can enable LM (using the profiles_lm in your config).
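As a minimal sketch, enabling LM for the truck profile from the config above could look like this (LM preparations are built during import, so the graph has to be re-imported after this change):

profiles_lm:
  - profile: truck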

There is another problem with the payload you pasted above: It uses camel-case instead of underscores, so for example the customModel field won’t have any effect at all! It has to be custom_model instead.
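For reference, a sketch of the same payload with the underscore keys the API expects (note that a custom model only takes effect together with "ch.disable": true):

{
    "points": [
        [-89.6785026202, 20.9014352423],
        [-106.408072, 31.605035]
    ],
    "profile": "truck",
    "locale": "pt_BR",
    "points_encoded": true,
    "instructions": false,
    "details": ["average_speed", "leg_distance", "leg_time"],
    "custom_model": {},
    "ch.disable": true
}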


Hello @easbar ,

The limit is hard-coded

Ok, got it! I thought this was something I could configure in config.yaml. That answers the question of why the request works in Postman but not in /maps.

You mean how can you improve the processing time for flexible/custom model requests (not using CH)? You can enable LM (using the profiles_lm in your config).

Right, but I have a question: I can enable CH and LM at the same time for the same profile, correct? If I pass ch.disabled=true for a certain route, will it use LM by default? Can LM be enabled/disabled as well?

And I have another question. Is there any configuration I can set in config.yaml that tells GraphHopper how many threads it may use? I run GraphHopper in a k8s cluster and have many kinds of machines available, including flavours with a huge amount of RAM and flavours with a huge amount of CPU. Which is the better choice, a machine with more CPU or more RAM?

Yes, you can use CH and LM at the same time for the same profile. If you set ch.disable=true (not ch.disabled, btw) LM will be used, and you can also disable LM by setting lm.disable=true.
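So a flexible request that skips CH and falls back to LM could look roughly like this (shortened; same points as above):

{
    "points": [
        [-89.6785026202, 20.9014352423],
        [-106.408072, 31.605035]
    ],
    "profile": "truck",
    "ch.disable": true,
    "lm.disable": false
}

Setting "lm.disable": true as well falls back to plain Dijkstra/A*, the slowest but most flexible mode.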

Is there any configuration I can set in config.yaml that tells GraphHopper how many threads it may use?

Yes, there are prepare.ch.threads and prepare.lm.threads. But these only control how many threads are used for the CH/LM preparation during import, or more precisely how many profiles are prepared in parallel (because the preparation of a single profile only uses a single thread).
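In config terms, both keys belong under the graphhopper: section (the values here are just an illustration):

prepare.ch.threads: 2 # prepare up to two CH profiles in parallel during import
prepare.lm.threads: 2 # same for LM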

When running the GraphHopper server it processes requests in parallel, but again each request runs single-threaded. So increasing the number of processors is only necessary if you need to process many requests in parallel. The parallelism is controlled by the web framework (if at all), not by any GraphHopper-specific code. Adding more RAM only makes sense up to the point where the graph data fits into memory (roughly the size of the -gh folder) plus some memory for processing requests (not a lot). Using faster RAM can be beneficial though.
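As a rough illustration (the jar name and sizes are placeholders, not a recommendation): if your graph-cache folder is about 6 GB, a heap a bit larger than that should suffice, e.g.

java -Xms8g -Xmx8g -jar graphhopper*.jar server config.yml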

Yes, you can use CH and LM at the same time for the same profile. If you set ch.disable=true (not ch.disabled, btw) LM will be used, and you can also disable LM by setting lm.disable=true.

That’s great. This can help me optimize some big requests I’ve been routing, and I can disable CH or LM if I run into any problems.

Yes, there are prepare.ch.threads and prepare.lm.threads. But these only control how many threads are used for the CH/LM preparation during import,

Perfect! So I could use, for example, a machine with 60 vCPUs to prepare my profiles, right? Is there a recommended number of vCPUs for this preparation, or am I free to use any quantity I wish?

The parallelism is controlled by the web framework (if at all), not by any GraphHopper-specific code

What do you mean by web framework? Do you mean the JVM GraphHopper is running on? I’m asking because I’m trying to find a way to increase the number of requests GraphHopper can process in parallel.

No, only one thread is used for each profile, so using more CPUs than you have profiles won’t speed up the preparation further. Also note that preparing multiple profiles in parallel increases the RAM requirements.

The GraphHopper server is built using Dropwizard, which in turn employs a Jetty server. Increasing the number of requests GraphHopper can process in parallel should happen automatically; I think the default is that all available processors will be used to process requests against the GH server.
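If you ever do need to cap or raise that, it would be a Dropwizard/Jetty setting rather than a GraphHopper one. As a sketch (these are standard Dropwizard server options, not GraphHopper keys; the values are purely illustrative):

server:
  # size of Jetty's request-handling thread pool
  minThreads: 8
  maxThreads: 128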