
What infrastructure planners and designers need to know about evolutionary computing

TL;DR

So, you want to run some algorithms and start doing some serious optimisation? But you’re worried about ‘garbage in, garbage out’.

Well, this is the blog post for you then. Comments, questions and dissenting opinions welcome, as always.

What is optimisation?

In simple terms, optimisation problems are all about maximising or minimising a real-valued function subject to one or more constraints. In infrastructure planning and design, a classic optimisation problem is to find the design with the lowest construction cost, the shortest journey time or, increasingly, the lowest lifetime carbon emissions.
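Written in a generic mathematical form (a template rather than any specific scheme), that reads as:

```latex
\min_{x}\; f(x) \quad \text{subject to} \quad g_i(x) \le 0,\; i = 1, \dots, m
```

where f(x) could be the construction cost or lifetime carbon of a design x, and each constraint g_i could encode, say, a maximum allowable gradient or an exclusion zone.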

Of course, these problems are rarely solved with a purely mathematical approach (more on that later) but when they are, there are two broad types of optimisation approaches:

  • Gradient-based optimisation — create a function that represents the problem and use its gradient to search for the minimum or maximum points of the function.
  • Gradient-free optimisation — treat the function as a black box which receives inputs and returns outputs. Ignore the gradient and instead use information about which inputs give better outputs to iterate towards the best solutions.

We use gradient-free optimisation, so this post focuses on how to apply techniques from that domain.
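To make the black-box idea concrete, here is a minimal, hypothetical sketch in Python. The cost function here is a toy stand-in for any model of a scheme; its internals are never differentiated, and the loop simply proposes candidate inputs and keeps whichever scores best.

```python
import random

def cost(design):
    """Black-box objective: stands in for a model that scores a candidate design.
    Here the 'design' is just a list of numbers and the cost is a toy function."""
    return sum((x - 3.0) ** 2 for x in design)

def random_search(n_dims, n_iters=1000, bounds=(-10.0, 10.0)):
    """Gradient-free optimisation by random search: no derivatives,
    only 'which inputs give better outputs'."""
    best_design, best_cost = None, float("inf")
    for _ in range(n_iters):
        candidate = [random.uniform(*bounds) for _ in range(n_dims)]
        c = cost(candidate)       # only the output is observed
        if c < best_cost:         # keep the best-scoring input seen so far
            best_design, best_cost = candidate, c
    return best_design, best_cost

if __name__ == "__main__":
    design, score = random_search(n_dims=3)
    print(design, score)
```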

Why is it difficult to apply optimisation to infrastructure design?

Infrastructure schemes are never designed by a single person and typically consist of multiple, complex sub-systems that interact with each other over time and space. Each sub-system requires specialised technical expertise to model properly and the data available to capture the geospatial context and estimate the costs of construction may be limited or inaccurate, particularly in the early stages of the scheme.

Building a function that accurately represents and links together all of these different sub-systems is no mean feat. But even if you manage that, feeding it low-quality data for optimisation feels, well, a bit futile; garbage in, garbage out, as they say.

The result is that human intuition is the preferred mode of optimisation. Infrastructure planners and designers use ‘heuristics’ (rules of thumb) and their own experience to come up with good solutions.

At the early stages of most infrastructure schemes, this means looking at a map, drawing the shortest line between the scheme’s end points that avoids any obvious constraints, then filling in the details afterwards and tweaking the design as new information emerges. This inevitably limits the scope for real optimisation, and if human intuition misses the best solution from the start, there is no way back.

How do we get round garbage in, garbage out?

The answer is slightly subtle. If you have a representative function with which to optimise the design of the scheme, but not all the data you require, then you need to explore as much of the problem space as possible. The objective is not to find the one true optimal design but to quickly find multiple interesting solutions that merit further investigation.

Infrastructure planners and designers can then assess the trade-offs between different solutions using their own intuition, decide which designs to take forward and work out where they need to commission site surveys to gather more detailed, accurate data.

This approach ensures that no single solution is singled out as the best one on the basis of what are, at best, patchy data and model inputs. It also addresses the ‘garbage in, garbage out’ problem, because the outputs become much more robust to unknown errors in the data, the function and the models that underpin it.

Of course, this is to an extent what infrastructure planners and designers already do. But the problem is that they can’t really explore the entire problem space, because it is simply too time-consuming to do so manually in any detail. If the best solution is not intuitive then, in all probability, it will be overlooked. Moreover, there is no digital trail left behind to prove that no better solutions exist; we just have to hope that that is the case.
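To make the ‘several good candidates, not one winner’ idea concrete, here is a small hypothetical sketch: many randomly generated candidate designs are scored against a black-box cost model, and a shortlist of good, mutually different options is kept for human review. Random sampling is used purely for illustration; the more capable search techniques discussed next scale far better.

```python
import math
import random

def cost(design):
    """Black-box stand-in for a scheme model (e.g. a construction cost estimate)."""
    return sum((x - 3.0) ** 2 for x in design)

def distance(a, b):
    """How different two candidate designs are."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def diverse_shortlist(n_dims, n_samples=2000, shortlist_size=5, min_spread=2.0):
    """Return several good, mutually different candidates rather than one 'optimum'."""
    candidates = [[random.uniform(-10.0, 10.0) for _ in range(n_dims)]
                  for _ in range(n_samples)]
    candidates.sort(key=cost)  # best-scoring first
    shortlist = []
    for c in candidates:
        # keep a candidate only if it differs enough from those already kept
        if all(distance(c, kept) >= min_spread for kept in shortlist):
            shortlist.append(c)
        if len(shortlist) == shortlist_size:
            break
    return shortlist

if __name__ == "__main__":
    for design in diverse_shortlist(n_dims=2):
        print([round(x, 2) for x in design], round(cost(design), 2))
```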

Using evolutionary computing techniques

This is where evolutionary computing techniques come in.

Evolutionary algorithms mimic the biological process of evolution in nature and can very quickly search through a huge number of possible solutions in a large problem space. They can be used to optimise a population of solutions; in other words, they can look at more than one solution in a single iteration. This is critical because it allows for the simultaneous exploration and optimisation of different solutions in a short period of time.

The diagram below illustrates a basic example of a well-known type of evolutionary algorithm called a genetic algorithm. As you can see, it uses a gradient-free optimisation approach (recall our earlier definition: a black box that uses information about which inputs give better outputs to iterate towards the best solutions).

Schematic representation of a basic evolutionary algorithm
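As a minimal sketch only (a generic textbook-style genetic algorithm, not the exact algorithm in the diagram or in any particular product), the loop might look like this in Python: a population of candidate designs is evaluated, the fitter candidates are selected as parents, and crossover and mutation produce the next generation.

```python
import random

def fitness(design):
    """Black-box objective to minimise; stands in for a scheme cost model."""
    return sum((x - 3.0) ** 2 for x in design)

def crossover(parent_a, parent_b):
    """One-point crossover: splice two parent designs together."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(design, rate=0.1, step=1.0):
    """Randomly perturb some genes (design variables)."""
    return [x + random.gauss(0.0, step) if random.random() < rate else x
            for x in design]

def genetic_algorithm(n_dims=4, pop_size=50, generations=100):
    population = [[random.uniform(-10.0, 10.0) for _ in range(n_dims)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better half of the population as parents
        population.sort(key=fitness)
        parents = population[: pop_size // 2]
        # Variation: breed children until the population is refilled
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))
        population = parents + children
    return min(population, key=fitness)

if __name__ == "__main__":
    best = genetic_algorithm()
    print([round(x, 2) for x in best], round(fitness(best), 3))
```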

Though they are not as well known as neural networks or other types of artificial intelligence, genetic algorithms have been used in industry and academia to solve many different types of problems, including in infrastructure planning and design. In each case, the base algorithm shown above is simply modified for the problem context.

Here’s a quick example of multiple interesting solutions generated by the genetic algorithms within our product, Optioneer, for the design of a buried power cable.

Example problem shown in our product interface

What’s the catch?

So, if genetic algorithms are a powerful and proven optimisation tool, why are they not used more within the infrastructure planning and design industry?

First, they are slower than gradient-based optimisers that have more information about the function they are set up to optimise. Second — and you may have picked up the clue at the end of the previous section — it is difficult to generalise genetic algorithms to multiple problems, or even different instances of the same problem.

This is because every genetic algorithm contains different parameters that drive its behaviour in different ways. These range from global system parameters, such as the size of the population of solutions used, to the type of genetic operators applied.

To generalise their use to different instances of the same problem, or to multiple problems, you need to automate the following areas:

  • The selection of “tools” (heuristics) available to the algorithm to modify solutions.
  • The selection of the parameters within these tools.
  • The selection of the type of algorithm used.

Introducing hyper-heuristics

This kind of automation can be done using a technique called ‘hyper-heuristics’. In essence, rather than trying to encode a different set of tools (heuristics) and parameters for every different instance or type of problem, the choice of which heuristics to use is automated so that the algorithm can learn and adapt itself to solve new problems.

You can see a basic example of a hyper-heuristic below. In this case the hyper-heuristic chooses one of several low-level heuristics through something called a ‘choice function’. For infrastructure planning and design, these heuristics could change the algorithm’s behaviour based on different terrain, the type of infrastructure asset or other factors.

In simple terms, the hyper-heuristic choice function selects a low-level heuristic and uses it to change a solution. The changed solution is evaluated and the result of that evaluation will update the hyper-heuristic. If the selected low-level heuristic improves the solution, the hyper-heuristic choice function will select it more often in the future.

Illustrative example of a selection hyper-heuristic
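As a rough sketch only (a simplified score-based choice function under our own assumptions, not any specific published formulation), a selection hyper-heuristic of the kind illustrated above could look like this in Python: each low-level heuristic carries a running score, the choice function picks a heuristic in proportion to those scores, applies it to the current solution and rewards it if the solution improved.

```python
import random

def cost(design):
    """Black-box objective to minimise, as in the earlier sketches."""
    return sum((x - 3.0) ** 2 for x in design)

# --- Low-level heuristics: simple ways of modifying a solution ---------------
def small_nudge(design):
    new = list(design)
    new[random.randrange(len(new))] += random.gauss(0.0, 0.1)
    return new

def big_jump(design):
    new = list(design)
    new[random.randrange(len(new))] = random.uniform(-10.0, 10.0)
    return new

def swap_values(design):
    new = list(design)
    i, j = random.sample(range(len(new)), 2)
    new[i], new[j] = new[j], new[i]
    return new

def hyper_heuristic(n_dims=4, iterations=2000):
    heuristics = [small_nudge, big_jump, swap_values]
    scores = [1.0] * len(heuristics)          # running score per heuristic
    solution = [random.uniform(-10.0, 10.0) for _ in range(n_dims)]
    current_cost = cost(solution)
    for _ in range(iterations):
        # Choice function: pick heuristics in proportion to their scores
        h_index = random.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[h_index](solution)
        candidate_cost = cost(candidate)
        if candidate_cost < current_cost:
            solution, current_cost = candidate, candidate_cost
            scores[h_index] += 1.0            # reward heuristics that improve the solution
        else:
            scores[h_index] = max(0.1, scores[h_index] * 0.99)  # gently penalise failures
    return solution, current_cost

if __name__ == "__main__":
    best, best_cost = hyper_heuristic()
    print([round(x, 2) for x in best], round(best_cost, 3))
```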

Hyper-heuristics come in all shapes and sizes; the illustration above is just one example. Some select among existing low-level heuristics, while others create entirely new heuristics by combining different components. The key point is that they allow the algorithm to continuously learn and adapt itself to solve a range of different types of problems.

Using hyper-heuristics is therefore critical to ensure that evolutionary computing techniques can be usefully applied and scaled across many different use cases in infrastructure planning and design.

We are therefore very excited to start a major research collaboration with Edinburgh Napier University in this area. The collaboration is funded by the Data Lab and we will be working with the Nature-Inspired Intelligent Systems Group, led by Professor Emma Hart, who is also the Editor-in-Chief of Evolutionary Computation, The MIT Press journal.