Proposal: Simple soft time window implementation?

I’m trying to unpick the issue of soft time windows to see if there might be a shortcut implementation using existing features and approaches. I’m hoping to address the remaining issues with Braktar’s huge implementation effort, but I might have misunderstood, so I would also appreciate a sanity check :slight_smile:


  1. In a lot of cases, “unassigned” jobs don’t have a whole lot of meaning. I have my jobs, I have my fleet and if I get something unassigned for something as petty as a 1 min violation of a time window then I have problems. The jobs need to be done.

  2. The response to (1) is often to rerun the whole simulation, potentially with some attempt to pre-load jsprit with the previous solution. But there’s also overhead in the logic of deciding whether I need to re-run the solver at all.

  3. Why was a job unassigned? More overhead in trying to understand the issue. Of my 5 unassigned jobs, what restrictions need to be relaxed in order to get them in the route?

In other words, I believe overhead should be considered in context of the problem. I would rather handle time window violations in a predictable way with limits, within the initial problem specification, even if it adds some overhead to calculation (which I can turn off/on). If even this condition leads to unassigned jobs then I consider it an error – I have a real-world issue.

Extending Library

  1. Extend the TimeWindow class to create a ShadowTimeWindow.

  2. Define a problem-level penalty function for late jobs. With this idea it may need to be confined to a linear function of time; more on that later.

  3. When defining job time windows, we could specify a “shadow” period that immediately follows on from the hard time windows. For example, a 14:00-16:00 job might also have a shadow of 60 minutes. Under the hood, the shadow window is just an identifiable tag for a normal secondary time window from 16:00-17:00.
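To make the shadow-window idea concrete, here is a minimal Python sketch (jsprit itself is Java; `TimeWindow`, `ShadowTimeWindow` and `with_shadow` are hypothetical names following the proposal, not jsprit API). The point is that the shadow window is just a second, contiguous hard window carrying an identifiable tag:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TimeWindow:
    start: float  # minutes since midnight
    end: float

class ShadowTimeWindow(TimeWindow):
    """Identical to a normal window; the subtype is the 'shadow' tag."""
    pass

def with_shadow(tw, shadow_minutes):
    """Return the hard window plus its contiguous shadow window."""
    return [tw, ShadowTimeWindow(tw.end, tw.end + shadow_minutes)]

# A 14:00-16:00 job with a 60-minute shadow -> shadow window 16:00-17:00
windows = with_shadow(TimeWindow(14 * 60, 16 * 60), 60)
```

Under the hood, any code that iterates over a job's time windows treats both entries identically; only the penalty logic needs to check the tag.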

Insertion Logic

  1. The underlying insertion heuristic should not be any different than the existing implementation for multiple time windows. However, all insertions should always prefer to insert a job that is available in its non-shadow time window. Simple filter on the pool of jobs to insert.

  2. Maintain a route-level counter of how many jobs in the route are currently being serviced in their shadow window (ShadowCount), and also a tally of the number of time units spent in shadow windows (ShadowTime). I’m under the impression that this can be cleanly implemented in a modular objective function (SolutionCostCalculator).

  3. The concern from Stefan, I think, with Braktar’s implementation was that insertion costs should be considered by examining only jobs i, k and j, where k is the job to be inserted between i and j.

  4. SHORT CIRCUIT. If ShadowCount is 0 and k is not being inserted in a shadow time window, bypass the rest. Else:

  5. Calculate the pushback caused on j by inserting k before it. An A-to-B matrix lookup.

  6. Multiply the pushback by ShadowCount (i.e. apply it to all jobs in the route that are currently late). Add this to ShadowTime.

  7. Subtract the shadow window opening time from the route time of the insertion. For example, if job k has a shadow window of 16:00-17:00 and it is inserted at 16:25, that gives 25 minutes. Call this NewShadowAddition.

  8. Pass ((0.5*ShadowTime) + NewShadowAddition) to the problem-level penalty function to get the total cost penalty.

  9. Increment ShadowCount by 1, and set ShadowTime = ShadowTime + NewShadowAddition.
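The insertion-logic steps above can be sketched as a single function (plain Python rather than jsprit’s Java; ShadowCount, ShadowTime and the penalty function are the hypothetical objects from this proposal, and the increment of ShadowCount is guarded here on k actually landing in its shadow window, which the text leaves implicit):

```python
def shadow_penalty_on_insert(shadow_count, shadow_time, pushback,
                             insertion_time, shadow_open, penalty_fn):
    """Sketch of the insertion logic above (not jsprit code).
    Returns (penalty, new_shadow_count, new_shadow_time)."""
    inserted_in_shadow = insertion_time >= shadow_open
    # Short circuit: nothing in the route is late and k is in its
    # preferred window, so there is no shadow penalty at all.
    if shadow_count == 0 and not inserted_in_shadow:
        return 0.0, shadow_count, shadow_time
    # The pushback applies (approximately) to every job in the route
    # that is already being serviced in its shadow window.
    shadow_time += pushback * shadow_count
    # How far into its own shadow window is k being inserted?
    new_shadow_addition = insertion_time - shadow_open if inserted_in_shadow else 0.0
    # Problem-level penalty on the (admittedly fudged) 0.5-weighted total.
    penalty = penalty_fn(0.5 * shadow_time + new_shadow_addition)
    # Update the route-level state.
    if inserted_in_shadow:
        shadow_count += 1
    shadow_time += new_shadow_addition
    return penalty, shadow_count, shadow_time
```

With a linear penalty function and a 16:00 shadow opening, inserting k at 16:25 into an otherwise on-time route yields a penalty of 25 and leaves ShadowCount at 1, ShadowTime at 25.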

Limitations and Going Forwards

The factor of 0.5 in (8) is a bit fudged. You cannot currently know whether inserting k will have any impact on the existing jobs being serviced in their shadow window, because they may precede job k. On average, half of the jobs would be affected by each insertion. I think a non-linear cost penalty function might exacerbate this issue. There are two options I see; I’m hoping you might have another way:

  1. Accept the limitation, but tie the penalty to SolutionCompletenessRatio to limit its impact.

  2. Store ShadowTime as dual-feature data, where ShadowTime is stored against an index of its position in the route so that you can filter. I’m not sure if there’s a clean way of doing this, or whether it degenerates to parsing the whole route on each iteration.

Any thoughts? The implementation should be a lot shallower than the current effort, and it also allows the user to define acceptable deviations from the time window rather than leaving the cost function open-ended. Due to the short circuiting, it does absolutely nothing to affect the current solution process, so it can be turned off simply by not defining a shadow period.



Some overnight thoughts.

Tying the cost penalty to SolutionCompletenessRatio in the current way probably doesn’t have the desired effect. Instead, we could replace the 0.5 with index(k) divided by the route length.
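As a sketch, that position-dependent replacement for the fixed 0.5 factor could look like this (hypothetical helper, not jsprit code; both arguments are route-local values):

```python
def shadow_weight(insert_index: int, route_length: int) -> float:
    """Replace the fixed 0.5 with index(k) / route length, per the
    suggestion above: later insertions affect fewer existing jobs."""
    if route_length == 0:
        return 0.0
    return insert_index / route_length
```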

The current suggestion is simply to try and prevent late jobs being unassigned; the logic only comes into play when all remaining jobs no longer have their primary window available. I think the key is that the soft time window is a discretized object and there’s a route-level state manager that can keep track of how many soft windows are in play.

The same principles perhaps do not hold true for early arrival, even if they could be implemented just as simply :frowning: . In this case, you wouldn’t be able to filter the job list by primary time windows, you’d have to allow free selection from the three timeslots (EarlyArrival, Normal, ShadowSlot) throughout the solution process. At this stage I guess we could be overwhelmed by unnecessary early insertions.


Hi @roganjosh, Thanks a lot for your ideas. I like them very much. Here are some comments:

insertion logic.4: Even if shadowCount is 0 and k is inserted in its preferred time window, it can have impacts on subsequent jobs since they might be moved from their preferred to their shadow time window (just by inserting k).

  insertion logic.6: The pushback does not necessarily impact subsequent jobs served in their shadow tw; its effect can be significantly dampened by waiting times. At the same time, as written above, it can impact jobs that were previously served in their preferred tw.

To summarize, we don’t know the impact of inserting k between i and j on subsequent jobs in the same route. We can easily calculate the (local) impact, which is c(i,k) + c(k,j) - c(i,j). Naming the impact on the rest of the route ‘epsilon’, the insertion costs of k can be calculated as c_insert(k) = c(i,k) + c(k,j) - c(i,j) + epsilon.

Epsilon we do not know. We can either determine epsilon by calculating it, or we can estimate it. Calculating is expensive. Estimating requires an estimation function. I assume that epsilon depends on the job (being the next job after k) and on the delay caused by inserting k, i.e. epsilon(j) = f_j(delay). The delay might just be newEndTime(j) - oldEndTime(j). Therefore f_j needs to have an answer for each possible delay at j. To estimate this, we can determine values exactly for a certain number of delays, e.g. f(5)=2, f(20)=10, f(100)=60, f(500)=200, and interpolate between these values. We need to determine and keep track of this for each job. However, we only need to update the estimation function for a job i when the route of i has changed.

Along with your shadow time window, this might be implemented with manageable effort. What do you think?
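The sampling-and-interpolation idea could look like the following sketch (plain Python rather than jsprit’s Java; the sample values are the ones from the example above, and the clamping behaviour outside the sampled range is an assumption):

```python
import bisect

def make_epsilon_estimator(samples):
    """Piecewise-linear estimate of epsilon(delay) from exactly
    computed sample points. `samples` is a list of (delay, epsilon)
    pairs, sorted by delay; values outside the range are clamped."""
    delays = [d for d, _ in samples]
    eps = [e for _, e in samples]
    def estimate(delay):
        if delay <= delays[0]:
            return eps[0]
        if delay >= delays[-1]:
            return eps[-1]
        i = bisect.bisect_left(delays, delay)
        d0, d1 = delays[i - 1], delays[i]
        e0, e1 = eps[i - 1], eps[i]
        # Linear interpolation between the two surrounding samples.
        return e0 + (e1 - e0) * (delay - d0) / (d1 - d0)
    return estimate

# The example samples above: f(5)=2, f(20)=10, f(100)=60, f(500)=200
f_j = make_epsilon_estimator([(5, 2), (20, 10), (100, 60), (500, 200)])
```

Each job would hold one such estimator, rebuilt only when its route changes.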

Hi @stefan

The issue you raise here is the bit that I was unsure about

insertion logic.4: Even if shadowCount is 0 and k is inserted in its preferred time window, it can have impacts on subsequent jobs since they might be moved from their preferred to their shadow time window (just by inserting k).

If we take a step back to there being no shadow time windows: the insertion of any job could just as easily cause a subsequent job to exceed its hard time window in the current implementation. This would give an unassigned job, and the cost penalty for this kicking-out of a route is known at some point in the iteration. Thus, if an insertion is being considered in its shadow window and that single insertion causes 3 subsequent jobs to also go into their shadow window, could this not be detected in the same way as them becoming unassigned, except that they now all get their shadow penalty but stay in the route?

I’m not sure how jsprit currently handles the penalty for an insertion causing subsequent hard time windows to be broken. Is this considered prior to the insertion, or is there just a blanket penalty once the effect of the insertion is observed at the end of the iteration?

If the shadow period is set at the problem level, rather than order by order, then, provided that the travel time of the above insertion < global shadow time, you can apply the shadow penalty with confidence to all orders that would normally become unassigned via that insertion (or that would break the existing timeslot they are using to hold their current position in the route, since normal multiple time slots are not necessarily contiguous*). The calculation should be able to piggyback on whatever mechanism is already in place to handle this situation with multiple hard time windows.

If the travel time < global shadow time and shadowCount != 0, apply the cost penalty appropriately for travelTime*shadowCount.

If, on the other hand, the insertion causes subsequent orders that are in their shadow window to exceed even that, well the usual unassignment penalty is applied, since it’s no different than breaking a normal hard time window. Even if you happened to apply the shadow penalty for an order that instead is being kicked out of its shadow window, who cares, because the unassigned penalty is going to be disproportionate anyway.

In theory, we should be able to avoid all approximations on the insertion cost for this and get a direct cost instead, just by inspection of the insertion and existing mechanism for multiple time slots + unassigned.

*It just happens that they are contiguous for the purposes of the shadow time window
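The three cases above can be sketched as a tiny classifier for a single downstream job (a hypothetical helper; `shadow_len` is the assumed problem-level global shadow period, and none of this is existing jsprit code):

```python
def classify_pushback(pushback, arrival, tw_end, shadow_len):
    """Effect of an insertion's pushback on a downstream job:
    'ok'         - still inside its preferred window,
    'shadow'     - shadow penalty applies, but the job stays in route,
    'unassigned' - even the shadow window is broken."""
    new_arrival = arrival + pushback
    if new_arrival <= tw_end:
        return "ok"
    if new_arrival <= tw_end + shadow_len:
        return "shadow"
    return "unassigned"
```

Because the shadow period is global, pushback < shadow_len guarantees that no job can jump straight from "ok" past "shadow" to "unassigned" without being counted.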

You are correct if you define the preferred tw to be hard. As I understood it, preferred tws should be soft in your approach; to put it another way, there is only a hard start (the earliest start of the preferred tw) and a hard end, which is now the latest start of the shadow window. The latest start of the preferred window becomes soft and can thus be broken, which in turn imposes a penalty. Did I understand this correctly?

Ok. Now I understand. When you don’t allow switching to a shadow tw for any job other than k, then I agree with you.


My idea is like a pseudo-soft time window. It’s actually just two regular hard time windows that are contiguous. One of them is just identifiable as a shadow time window so we can keep a count on insertion and apply a global penalty to this count.

It is perhaps not fully fleshed out in line with the operation of the actual algorithm, but hopefully it gives a way in to solve this issue in principle from small (contextually!) modification to existing mechanisms. Maybe it will give you some ideas that you could take forward :slight_smile:

It may not be possible to allow actual multiple tws if you choose to use the shadow window; instead, the multiple-tw mechanism would be reserved for this implementation.


Actually, your assertion might not be correct, provided that you allow only 1 preferred window and 1 shadow window. An insertion between two existing jobs should always result in a push back rather than a pull forward for all remaining jobs? In which case, if you wanted to allow a switch for subsequent jobs on the insertion of k, you could assume that any tw switch results in an increment to the global shadow count?

I agree that this cannot result in a pull forward (at least not with constant transport times). If there are no waiting times then it always results in a push back.

It does not necessarily result in an increment, but it might affect the shadow count. If, for example, all jobs after j are served at the latest start of their preferred tw and there are no waiting times, then the insertion of k before j moves all subsequent jobs into their shadow window.

Hmm, which has made me think of a possible severe limitation. A ruin can result in a pull forward. If you cause a ruin that ends up never being filled (the ruined jobs happen to go later into the route) and the jobs on either side of the ruin get connected, then you could end up pulling non-ruined jobs out of their shadow window, but might not necessarily notice this and adjust the shadow count appropriately before doing any insertions. That requires inspection of the whole post-ruin route, I guess, and completely recalculating the shadow count on each iteration? :frowning:

I’m revisiting the topic of soft time window and hope this is not too late.

Regarding the calculation/estimation of epsilon here, I have some idea:

First, we will need a state updater similar to UpdateVehicleDependentPracticalTimeWindows. In this state updater, for each activity j and each vehicle v, we do not store only one latestArrivalTime. Instead, we store an array of latestArrivalTimes: one each for activity j, all subsequent activities, and the vehicle v latest end.

For example, assume activity j has one subsequent activity m. Then, for activity j and vehicle v, we have an array of three latestArrivalTimes [t1, t2, t3], which are the latest arrival times at j such that we will not be late for each of the activities (i.e., j, m and the route end), respectively.

Now, assume we have a penalty function for being late for the soft time window, g(delta_t). Then we can construct a penalty function for activity j, vehicle v and arrival time t:

G_j,v(t) = g(t-t1)+g(t-t2)+g(t-t3)

Note that G will be a piecewise function, because g is one.

Then this function G can be used in the constraint to calculate epsilon:

epsilon = G_j,v(new arrival time t') - G_j,v(old arrival time t)

where the value for the old arrival time can be passed via state updater (when newAct is not DeliverShipment).
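A minimal sketch of this construction, assuming a linear g and plain Python in place of jsprit’s Java (the latest-arrival array would come from the state updater described above):

```python
def make_g(rate=1.0):
    """Hypothetical per-activity lateness penalty g(delta_t):
    linear in the delay, zero when on time."""
    return lambda delta_t: rate * max(0.0, delta_t)

def G(arrival_time, latest_arrivals, g):
    """Penalty at activity j for arriving at `arrival_time`, summed over
    the latest-arrival thresholds [t1, t2, t3, ...] for j, each
    subsequent activity, and the route end."""
    return sum(g(arrival_time - t) for t in latest_arrivals)

def epsilon(new_arrival, old_arrival, latest_arrivals, g):
    """Marginal penalty of moving j's arrival from old to new."""
    return G(new_arrival, latest_arrivals, g) - G(old_arrival, latest_arrivals, g)
```

Since g is piecewise, G is piecewise as well, exactly as noted above.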

What do you think?

Best regards,

It’s been a long time since I raised this, and I have since implemented soft time windows in Python (using numpy). I have no idea how useful my approach will be in terms of Java (where I’m hopeless), but numpy supports vectorization, so it could compute my costs very quickly inside a loop.

The first thing I did was to split my costs up. I don’t know the best words to describe them, but maybe “blanket” costs and “specific” costs. Being late was a blanket cost, since I just consider it equally bad for any job. I could apply a linear cost function for lateness, or some other function. It didn’t matter; late was late, and all jobs suffered whatever penalty I had for it.

Then things became much easier. I had to have an array of arrival times as part of the main solution; that’s literally what I was optimising on. A vehicle leaves at a specific time and you cumsum the travel matrix costs, with the service time at each job added to the time matrix.

All I had to do was keep another array of job time window ends. There’s no attribute lookup; I created an array at the start of the solution and just made sure that every ruin/recreate operation also applies, by index alone, to this array. The added cost is any discrepancy between your arrival array and the time window array, and in numpy this can be calculated quickly across the entire solution.

You could use that on a per-insertion basis, or after all jobs have been inserted. The point being that the calculation needn’t be tied to any other objects in your solution; it’s just a 2D array that you shuffle around.

EDIT: And you can apply the exact same principle for early arrivals, it’s just another 2D array to maintain by index.
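A toy version of that calculation, with made-up numbers (four jobs, one vehicle; the single-trip case, so the arrays are 1D here):

```python
import numpy as np

depart = 480.0                                     # vehicle leaves at 08:00
leg_travel = np.array([15.0, 20.0, 10.0, 30.0])    # minutes between stops
service = np.array([5.0, 5.0, 5.0, 5.0])           # service time per job
tw_end = np.array([490.0, 520.0, 530.0, 600.0])    # soft window end per job

# Arrival at each job: cumulative travel plus the service time spent at
# all preceding jobs (service at a job happens after arriving there).
prior_service = np.concatenate(([0.0], np.cumsum(service)[:-1]))
arrivals = depart + np.cumsum(leg_travel) + prior_service

# Lateness penalty: any positive discrepancy between arrival and window
# end, computed for the whole route in one vectorized operation.
lateness = np.clip(arrivals - tw_end, 0.0, None)
total_penalty = lateness.sum()
```

Any ruin/recreate operation only has to permute `tw_end` (and whatever else is index-aligned) the same way it permutes the route.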

But you’re also trying to support a huge range of different activities (breaks, multi-trips etc.). All I can say is that I abstracted away the objects I was working with in a specific problem. I’ve triggered your mind once; maybe I can trigger it again in what you’re trying to support.

Is it open source? Can you please share a link to it?

It’s not open-source, sorry, and I couldn’t make it so. It’s also not integrated with jsprit in any way; heavily constrained problems were sent to jsprit to solve. Simpler problems were sent to my own, entirely-in-python solver.

I mentioned it just for the principle in how it works, it’s up to Stefan to see if it holds any water within jsprit :slight_smile:

The part where my Java knowledge falls down is in how you might approximate the vectorized operations of numpy. numpy arrays should have fixed dimensions from the outset, so you just instantiate one with, say, 100 elements per row, which is more jobs than any of my vehicles could possibly do. You then pad any empty space with “null” values (by which I mean any value that adds no cost; one example might be a fake time window end that can never result in a cost penalty, e.g. positive infinity).
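For instance, the padding trick might look like this (made-up dimensions and values; one row per vehicle, padded slots use +inf window ends so they can never contribute cost):

```python
import numpy as np

MAX_STOPS = 100            # more stops than any vehicle could ever visit
n_vehicles = 3

# Fixed-dimension arrays; unused slots are "null" (cost-free) values.
tw_end = np.full((n_vehicles, MAX_STOPS), np.inf)
arrivals = np.zeros((n_vehicles, MAX_STOPS))

# Fill in two real jobs for vehicle 0, by index alone:
tw_end[0, :2] = [500.0, 520.0]
arrivals[0, :2] = [510.0, 515.0]

# Padded slots give arrival - inf = -inf, which clips to zero lateness;
# only the genuinely late real jobs contribute to the total.
lateness = np.clip(arrivals - tw_end, 0.0, None)
total = lateness.sum()
```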