Appropriate data structure for finding nearby edges in weighting


I have a custom weighting that seeks to avoid repetition in the routes it creates (runners tend to prefer running loops rather than “out-and-backs”). At present this is very crude: after each segment of a route is calculated, its edge IDs are added to a shared hashset, and when the next segment is routed the custom weighting checks whether the edge being examined is present in this hashset. If the edge ID has already been used in a previous segment, a severe cost is returned.
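For concreteness, the current approach amounts to something like the sketch below. This is illustrative only, not my actual code and not GraphHopper API: the class and names (`RepetitionPenalty`, `usedEdgeIds`, `VERY_HIGH_COST`) are made up, and in practice the lookup sits inside the weighting’s edge-weight calculation.

```java
import java.util.HashSet;
import java.util.Set;

public class RepetitionPenalty {
    static final double VERY_HIGH_COST = 1_000_000.0;

    // Shared between segments of the same route.
    private final Set<Integer> usedEdgeIds = new HashSet<>();

    // Called after a segment has been routed: remember its edges.
    public void recordSegmentEdges(Iterable<Integer> edgeIds) {
        for (int id : edgeIds)
            usedEdgeIds.add(id);
    }

    // Called from the custom weighting for each candidate edge:
    // return the base weight, heavily penalised if the edge was already used.
    public double penaltyFor(int edgeId, double baseWeight) {
        return usedEdgeIds.contains(edgeId) ? baseWeight + VERY_HIGH_COST : baseWeight;
    }
}
```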

This works reasonably well, but it falls down in cases such as when two ways run almost parallel to each other.

I’d like to experiment with an alternative that creates a 100m buffer (for example) around an edge before inserting it into the hashset, and then the weighting checks whether the incoming edge intersects any buffered edge in the set.
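To illustrate the geometric test I have in mind (a hypothetical sketch, not tied to any particular library): if each edge is treated as a planar line segment, then “the incoming edge intersects the 100m buffer around a stored edge” is equivalent to “the minimum distance between the two segments is under 100m”, so no explicit buffer polygon is needed per comparison. The remaining problem is avoiding doing this against every stored edge.

```java
// Assumes projected (planar) coordinates in metres; all names here are illustrative.
public class BufferedEdgeCheck {
    static final double BUFFER_METERS = 100.0;

    // Distance from point (px,py) to segment (x1,y1)-(x2,y2).
    static double pointToSegment(double px, double py,
                                 double x1, double y1, double x2, double y2) {
        double dx = x2 - x1, dy = y2 - y1;
        double len2 = dx * dx + dy * dy;
        double t = len2 == 0 ? 0
                : Math.max(0, Math.min(1, ((px - x1) * dx + (py - y1) * dy) / len2));
        return Math.hypot(px - (x1 + t * dx), py - (y1 + t * dy));
    }

    // Minimum distance between two segments {x1,y1,x2,y2}: zero if they cross,
    // otherwise the smallest of the four endpoint-to-segment distances.
    static double segmentDistance(double[] a, double[] b) {
        if (segmentsCross(a, b)) return 0;
        return Math.min(
            Math.min(pointToSegment(a[0], a[1], b[0], b[1], b[2], b[3]),
                     pointToSegment(a[2], a[3], b[0], b[1], b[2], b[3])),
            Math.min(pointToSegment(b[0], b[1], a[0], a[1], a[2], a[3]),
                     pointToSegment(b[2], b[3], a[0], a[1], a[2], a[3])));
    }

    static boolean segmentsCross(double[] a, double[] b) {
        double d1 = cross(b, a[0], a[1]), d2 = cross(b, a[2], a[3]);
        double d3 = cross(a, b[0], b[1]), d4 = cross(a, b[2], b[3]);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }

    static double cross(double[] s, double px, double py) {
        return (s[2] - s[0]) * (py - s[1]) - (s[3] - s[1]) * (px - s[0]);
    }

    // The check the weighting would perform per stored edge.
    static boolean withinBuffer(double[] incoming, double[] stored) {
        return segmentDistance(incoming, stored) < BUFFER_METERS;
    }
}
```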

I could simply create a buffered shape and then iterate over every stored edge, but that check is O(n) per candidate edge and I suspect it will cause severe performance issues.

Obviously I can’t use a vanilla Java HashSet for this. Can anyone suggest a suitable data structure (perhaps from GH or an external library) that will allow me to perform better than O(n) lookups to see if an edge intersects a shape?