Hi everyone,

I was working on the CVRP instances, and I am having difficulty reproducing the best-known results from the scientific literature.

The problem is that when I run Jsprit, I get the exact same routes as the best-known solution, but the cost is higher.

For example, in the literature the instance **E-n51-k5** has a best-known cost of **521**. However, when I calculate the traveled distance of that solution myself, I find **524.94**.

When I use Jsprit, I get a slightly better result: **524.63**. I know that **521** is the lower bound, but I don’t understand how they obtained it. The problem is that I can’t compare Jsprit’s results with the best-known results. How can I get this lower bound in Jsprit?

Thank you for your help.

Btw the image link is broken.

As to the main question:

How do you get to the best-known results for the CVRP?

Even with fantastic algorithms: luck. You just cannot be assured of reaching them, due to the nature of the problem, and the same algorithm run multiple times will not necessarily yield the same routes. I believe that jsprit *can* repeat routes if you use the same seed for random number generation, but this is somewhat artificial and again gives no assurance that the solution that pops out is the “best-known”.

As for the cost calculations, I can only guess. **521** is a suspiciously round number, so perhaps they rounded the matrix to ints for their calculation. If the route is actually identical to the “best-known” but costs are slightly discrepant, I don’t see the issue? What are your intentions with the result? If it were for practical vehicle routing for a company, I wouldn’t be particularly concerned with this.
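To make the rounding idea concrete: the TSPLIB/CVRPLIB benchmark format (which the E-series instances use) defines `EUC_2D` edge weights as the Euclidean distance rounded to the *nearest integer* before summing, which would explain an integer best-known cost of 521 against a raw Euclidean total of 524.94. A minimal sketch of the two conventions (the coordinates below are made up for illustration, not the real E-n51-k5 data):

```python
import math

def euc_2d(a, b, rounded=True):
    """Distance between two (x, y) points. With rounded=True this follows
    the TSPLIB EUC_2D rule nint(d), i.e. round half up - hence
    int(d + 0.5) rather than Python's half-to-even round()."""
    d = math.hypot(a[0] - b[0], a[1] - b[1])
    return int(d + 0.5) if rounded else d

def route_cost(points, rounded=True):
    """Cost of a closed route: depot -> customers -> back to depot."""
    n = len(points)
    return sum(euc_2d(points[i], points[(i + 1) % n], rounded)
               for i in range(n))

# Hypothetical toy route, NOT the real instance coordinates.
pts = [(0, 0), (3, 4), (7, 1), (2, 6)]
print(route_cost(pts, rounded=True))   # integer benchmark-style cost
print(route_cost(pts, rounded=False))  # exact Euclidean cost
```

If you feed jsprit a distance matrix rounded this way, the cost it reports for the best-known routes should then be directly comparable to the published integer values; summing raw Euclidean lengths will always land a few units away.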

Thank you @roganjosh for your reply.

Actually, I am a student. For my project I need to compare the performance of LNS against Swarm algorithms for the CVRP and other variants. The two meta-heuristics are good, but I need to know how good they are compared to the best-known results for the CVRP.

Ok, that makes some sense.

Does the `int` suggestion give you the results you’re looking for? Other than that, I’m not really sure where the discrepancy comes from.

As an aside, I’ve been looking at this paper today. Their “simplified” ruin-and-recreate approach yields many of the “best known” solutions. Seven years in university taught me well what mistakes academics can make about what’s appropriate for real-life applications. Unless I’m mistaken, they suggest their approach takes 5 minutes to solve a “small” problem. From my standpoint that “solves” nothing; it would be impossible to apply in reality. There are good observations in the paper, but in my opinion their particular implementation could not possibly work in practice - I could not make use of it. A far more meaningful measure would be how close it gets to the “best known” solution in *seconds* at most. This is of course personal opinion, but perhaps you can substantiate it with sources; the paper itself observes that the heuristics are growing ever more complex, yet it doesn’t go anywhere near far enough in addressing that.

I would like to re-word that a bit, as I perhaps gave an overstated view.

Their approach may well yield better results than the year-2000 (Schrimpf) ruin-and-recreate, and faster. However, a benchmark on obtaining the “best known” solution is a mathematical curiosity unless you can descend to it faster (5 minutes is well out of the ballpark). There are observations in that paper that *are* relevant; they just aren’t demonstrated in terms of speed.

It’s a different landscape than before. Customers have apps. Apps need to give responses. If the response takes too long, they don’t twiddle their thumbs; they use them to write 1-star reviews.

Faster descent to a good solution is *much* better than slowly reaching an absolute best. Again, my opinion, but surely there is an argument to be made here.

@roganjosh thank you so much for your help.

Your suggestion works just fine - you saved my day. Thank you!