Synchronous Route Optimization Endpoint

Over the years we have improved the performance of the Route Optimization API, and recent months have shown that over 90% of our customers' problems can now be solved within a few seconds.

Therefore, today we are publishing a new synchronous endpoint for the Route Optimization API that returns the solution JSON directly, without the need to fetch a job_id and poll for the result. See our documentation for a detailed comparison of the new synchronous endpoint with the existing asynchronous / batch endpoint.

This new synchronous endpoint is not only simpler but also avoids the slight retrieval delay of the asynchronous / batch endpoint, slightly reducing total latency. This will further improve response times for our customers, especially for small problems.

The only limitation is a fixed timeout of 10 seconds, i.e. all big and "complicated" problems still need to be solved via the asynchronous / batch endpoint.

Our recommendation is to start with the synchronous endpoint, especially when your location count is smaller than 200, and to switch to the asynchronous / batch endpoint if the maximum response time grows above 7 seconds.
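The recommendation above can be sketched as a small client-side rule. This is a minimal illustration only; the endpoint URLs and the helper name are assumptions, so check the API documentation for the exact paths:

```python
# Hedged sketch: choose between the synchronous and asynchronous
# Route Optimization endpoints following the recommendation above.
# The URL paths below are assumptions for illustration, not a
# confirmed client API.

SYNC_URL = "https://graphhopper.com/api/1/vrp"            # assumed path
ASYNC_URL = "https://graphhopper.com/api/1/vrp/optimize"  # assumed path

def choose_endpoint(location_count, max_observed_seconds=0.0):
    """Start with the sync endpoint for small problems (< 200
    locations) and switch to the async / batch endpoint once
    responses for similar problems have exceeded 7 seconds."""
    if location_count < 200 and max_observed_seconds <= 7.0:
        return SYNC_URL
    return ASYNC_URL
```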


That’s incredible news!
Great work, GH team @stefan & @karussell :smiley:


One feature-request here:
We make use of the synchronous endpoint, but in some cases our problems take more than 10 seconds and the request times out, which is fair enough. As you recommend, we start an async request after 7 seconds.

Here’s the thing:
Sometimes the answer comes after 8 or 9 seconds. Our first implementation didn't notice that at all and just waited for the async answer. (Our software handled the synchronous case as well.) Now we use the sync result if it still arrives after the first 7 seconds and discard the async result, which usually arrives a few seconds later.

Our two issues with that solution:

  1. If we start a sync request that takes longer than 7 seconds, start an async one as well, and the sync result then arrives after 8 or 9 seconds, we waste credits, because it seems to be counted twice (two requests, twice the price, fair enough). Our current GH payment plan allows for this, we have enough credits, but it could become more critical if we move more calculations to the sync endpoint.

  2. If the sync request doesn't answer within 10 seconds, the waiting time for a result is always 7 seconds longer than needed (because we only start the async request after 7 seconds). For a web application, 7 extra seconds of waiting is a big deal.

We ended up switching back to using only the async endpoint to avoid this behaviour.

Why don’t you switch from the sync endpoint to the async one automatically? Whenever a problem takes more than 10 seconds to calculate, you could answer with a retrieval token like in the async endpoint, and we could poll the server until the answer is ready. This could also be controlled by a parameter, in case not every customer wants this behaviour. It would solve both of our issues: the credit count wouldn't double and we wouldn't lose waiting seconds.

Great work though!


Thanks for the feature request. We will think about it.

Instead of stopping the request from your side, you should wait for the sync request until it times out on our side. That way you avoid the extra credits, and we remain able to fine-tune and change the exact timeout settings.

Edit: Ah, now I understand the misunderstanding. The recommendation about stopping after 7 seconds was meant as follows: if you notice that some problem with X stops takes longer than 7 seconds, you still wait for the response (!), but for any follow-up problem with e.g. X+1 stops or some changed property you use the async endpoint.
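The clarified recommendation can be sketched like this: never abort an in-flight sync request client-side, but remember which problem sizes turned out slow and route follow-up problems of that size or larger to the async endpoint. The class name and threshold handling are assumptions for illustration:

```python
# Hedged sketch of the clarified recommendation: always wait for a
# sync response once the request is in flight, but remember when a
# problem of a given size exceeded 7 seconds and send follow-up
# problems of that size or larger to the async endpoint.

class EndpointRouter:
    SLOW_SECONDS = 7.0

    def __init__(self):
        self._slow_from = None  # smallest stop count observed to be slow

    def record(self, stops, seconds):
        """Remember that a problem with `stops` stops took `seconds`."""
        if seconds > self.SLOW_SECONDS:
            if self._slow_from is None or stops < self._slow_from:
                self._slow_from = stops

    def use_sync(self, stops):
        """True if a follow-up problem of this size should go sync."""
        return self._slow_from is None or stops < self._slow_from
```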