You may have seen the TV advert for Experian in which Marcus Brigstocke meets his ‘data self’: a collection of live information that governs how financial institutions treat him as a customer. This is a variant of the ‘digital twin’ idea that is getting so much attention. Unfortunately, the relationship doesn’t always work out for Marcus as expected, or for the best. The same thing can happen with other applications of digital twins.
What is a digital twin?
A digital twin is a digital model of a service, product, process or system — a virtual representation of something.
In themselves, digital twins are not a new idea, but until recently they have mostly been used where the risks are high, it is impractical to interact with the real, physical system, and high costs can be tolerated.
In space missions, you can’t get at the physical system to understand what has gone wrong or to test solutions. So exact copies are maintained back at base to experiment with. In the Apollo programme, these were actual duplicate vehicles with their control systems. They were famously used in the Apollo 13 mission to figure out a strategy to get the astronauts home.
As time has gone on, twins have become more digital, and the New Horizons mission used detailed simulations to set up for the fly-bys at Pluto and Ultima Thule.
For a long time, chemical companies have run large manufacturing processes by talking to a model that controls the actual physical assets. There are not large numbers of operators spread around the plant turning pumps and valves on and off. These plants are too complex for that to work safely and efficiently.
Why are we talking about digital twins now?
Although digital twins have been around for a while, they are getting cheaper, better and more widely applicable. Cheap sensors, the internet of things, improved communication systems, big data, AI, and more powerful, ubiquitous cloud computing are helping digital twins find new applications:
- Real-time information together with an accurate digital twin can optimise operating conditions and spot emerging problems. This will allow efficient operation and preventative maintenance for everything from jet-engines and MRI scanners to sewage treatment plants.
- New products and manufacturing processes can be exhaustively explored and tested in simulation before use in the physical world, supporting innovation and efficient use of resources.
- We can now build digital twins of really complex systems like complete supply chains, with the state of every component and every machine part of the same integrated model.
- Many people are very enthusiastic about applying digital twins to systems that are part infrastructure and part people, like transportation, buildings and cities. These twins could react in real time to problems and predict the likely outcome of different options. The digital twin could also provide a safe ‘sandbox’ where different policy options are tested and explored without conducting live trials in a real city or with a new rail timetable.
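The condition-monitoring idea in the first bullet can be sketched in a few lines: compare live sensor readings against what the twin predicts, and flag a maintenance alert when the residual grows too large. This is a minimal illustration only; the pump model, the numbers and the threshold are all invented for the example, not taken from any real system.

```python
# Toy digital-twin residual check. The twin holds a (deliberately
# simple) model of expected behaviour; live readings that stray too
# far from the prediction are flagged for preventative maintenance.
# All names and constants here are illustrative assumptions.

def twin_predict(pump_speed_rpm):
    """Twin's model of expected outlet pressure (bar) at a given pump speed."""
    return 0.002 * pump_speed_rpm + 1.0

def check_reading(pump_speed_rpm, measured_pressure_bar, threshold=0.5):
    """Compare a live reading with the twin's prediction.

    Returns (residual, alert): alert is True when the measurement
    deviates from the model by more than the threshold.
    """
    residual = measured_pressure_bar - twin_predict(pump_speed_rpm)
    return residual, abs(residual) > threshold

# Healthy reading: close to the twin's prediction, no alert.
print(check_reading(1500, 4.1))

# Degraded pump: pressure well below prediction, so an alert is raised
# before the fault becomes a breakdown.
print(check_reading(1500, 2.9))
```

The same pattern scales from a single pump to a jet engine or a sewage works: the twin supplies the expected behaviour, and deviations become early warnings rather than failures.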
As is often the case, practical improvements in an established idea are being dressed up by enthusiasts as something radical and disruptive. A quick search reveals thousands of articles and blogs in the last year about how digital twins will completely transform industry X, and many consultancy companies are busy touting it as the latest thing.
The Gartner Group have picked out digital twins as one of their most exciting emerging technologies, and their 2018 Hype Cycle chart puts them at the peak of inflated expectations. History teaches us that it is a long and uncertain road from such wild enthusiasm to steady commercial exploitation.
What are the risks of digital twins?
The big challenge for any virtual model of a real-world situation is fidelity. How accurately does it predict what will happen and how accurately does it track the real system? Any modelling involves abstraction and simplification and with that the loss of fidelity. For tightly controlled and clearly defined systems like an aero engine, or a chemical process, you can usually make the model faithful enough for the decisions you need to make.
As soon as you have a system with non-linear behaviour leading to emergent properties, for example a city of 10 million people, it is much more difficult to be sure about your model and much harder to predict and handle unexpected outcomes. An error that, in a smaller digital twin, would mean emergency maintenance taking a locomotive out of service and disrupting commuters’ journeys becomes, at city scale, a city-wide traffic jam or a loss of water supply.
Even if a model is continuously updated with a stream of real-time information, running on the fastest available computers, and using the latest machine learning and AI, it can never be perfect.
In 1933 Alfred Korzybski popularised the phrase “the map is not the territory”. The model is not the thing, and it is very dangerous to assume that it is.
In 1893 Lewis Carroll published Sylvie and Bruno Concluded. In this story there is a conversation with an enthusiast for ever more accurate mapping for better military decision making:
“What do you consider the largest map that would be really useful?”
“About six inches to the mile.”
“Only six inches!” exclaimed Mein Herr. “We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!”
“Have you used it much?” I enquired.
“It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.”
Digital twins must trade off fidelity and practicality, yet deliver safe decision making.
“The map is not the territory”
A disconnect between the model and reality could have been behind the recent Boeing 737 crashes. A change in engine type altered the aerodynamics of the aircraft, leading to potential instability in flight. To fix this, Boeing changed the flight-control software to spot and correct the problem. Unfortunately, it seems that in the Indonesian crash either a faulty input or a situation outside the model programming caused the automatic systems to make a false correction. The last line of defence, the pilots, could not take back control or were confused and fighting the plane’s systems. Early indications are that something similar happened in the Ethiopian disaster.
Digital twins are great tools, but unless they are digital identical twins with high fidelity, there is always a risk of something going wrong, especially in a complex and multi-actor system.