Chaos theory is a branch of mathematics that studies apparently random, unpredictable systems and looks for the underlying patterns or laws governing their behaviour. These underlying influences can include repetition, feedback loops, interdependent variables and fractals. A fractal is a pattern that repeats at different scales: zoom in or out and the same structure reappears. Snowflakes and the branches of a tree are well-known examples of fractals in nature. Examples of chaotic behaviour are all around us, especially in nature: how fluids flow, the behaviour of the weather and climate, even road traffic or the stock market can be studied as chaotic systems. If the behaviour of a system is uncertain and sensitive to its initial conditions (such as your location for the climate, or what time you leave the house in the morning for traffic), chances are chaos theory can be applied to study and predict its next state. Understanding the behaviour of these systems is crucial to understanding our world, and great leaps forward in computing hardware have made it possible to develop more advanced and robust algorithms to answer the previously unanswerable.

However, most of the systems studied depend on numerous influencing variables. Attempting to solve for these and predict the underlying patterns results in incredibly complicated equations that, while understood conceptually, are nightmarish to compute. Fortunately, systems with many variables and underlying complexity have been the subject of machine learning research for years, and that conceptual groundwork can be carried over into the study of chaos theory. Depending on the exact problem, multiple approaches can be taken, from teaching a machine to continue a data set to having it spot hidden features a human wouldn't see.

Fluid dynamics offers a good example of the variety of these approaches. It is of huge importance to the modern world: aside from plumbing and boats, it is often forgotten that the air we live in obeys exactly the same rules, along with the blood in our veins. An understanding of fluids is therefore of immeasurable value to engineers, from aerospace to biomedical. Yet fluid dynamics is incredibly complicated because of its chaotic nature, and even supercomputers struggle with systems of moderate size. This is where machine learning becomes useful. There is no single solution to every problem in fluid dynamics, but through a mixture of techniques many specific problems can be solved. These are:

Reduction – Making the movement of a system of fluids easier to calculate

Modeling – Attempting to predict the future behaviour of the fluid

Control – Trying to make a fluid flow in a specified way

Closure – Increasing the accuracy of approximations made in calculations

Taking reduction as an example: by training an artificial intelligence on what usually happens in flowing systems (showing it series of pictures), it is possible to have the AI spot patterns between flows. This method of studying images is an example of supervised machine learning, in this case using a Recurrent Neural Network (RNN). RNNs are useful for spotting the time dependency between images, "watching" data in order to learn from it.
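The core of an RNN can be sketched in a few lines: a hidden state is carried forward from step to step, which is what lets the network capture time dependencies between frames. This is a minimal, illustrative forward pass; the dimensions and random weights are placeholders, not taken from any real flow dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: each "frame" is a 4-value snapshot, the hidden state has 8 units.
n_in, n_hidden = 4, 8
W_x = rng.normal(0, 0.1, (n_hidden, n_in))      # input-to-hidden weights
W_h = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden-to-hidden (the "memory")
b = np.zeros(n_hidden)

def rnn_forward(frames):
    """Run a sequence of frames through the RNN, returning all hidden states."""
    h = np.zeros(n_hidden)
    states = []
    for x in frames:
        # The previous hidden state h feeds back in -- this is the time dependency.
        h = np.tanh(W_x @ x + W_h @ h + b)
        states.append(h)
    return np.array(states)

sequence = rng.normal(size=(10, n_in))  # ten consecutive snapshots of a "flow"
states = rnn_forward(sequence)
print(states.shape)  # one hidden state per time step
```

In a trained model the weights would be fitted to real flow images; here they only demonstrate how each output depends on the whole history of inputs, not just the current one.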

Another method for reduction is to use a convolutional neural network (CNN). These work somewhat like neurons in the human brain, taking data and building up dependencies, similar to how a human understands that both a cat and a dog are animals. Used this way, the network is an example of unsupervised machine learning: instead of matching data to labelled pictures as before, it attempts to learn these connections by itself. With this approach, related elements of a system can be spotted, and rather than treating everything independently the machine can build new rules that reduce processing time.

These techniques are all well and good for reduction, but the remaining problems need more powerful solutions, in this case a mixture of supervised and unsupervised methods. These semi-supervised methods lead to "best of both worlds" solutions that are capable of much greater feats. For Modeling, Control and Closure they are the best bet, but they still leave much to be desired: the movement of a fluid can be predicted for a short while, but small inaccuracies very quickly add up and lead to completely incorrect simulations. This is an incomplete field, but also a relatively unexplored one, and exciting discoveries may be close at hand.
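As a loose illustration of unsupervised reduction, here is a tiny linear autoencoder trained by gradient descent. It compresses each high-dimensional snapshot into a handful of numbers and reconstructs it, learning the redundancy by itself from unlabelled data. This is a much-simplified stand-in for the convolutional networks described above; the data is synthetic and every dimension is made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake "snapshots": 200 samples of a 20-dimensional field that really only
# varies along 2 hidden directions -- the kind of redundancy reduction exploits.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 20))
data = latent @ mixing

d, k = 20, 2                      # full dimension, reduced dimension
E = rng.normal(0, 0.1, (d, k))    # encoder weights
D = rng.normal(0, 0.1, (k, d))    # decoder weights
lr = 0.01

mse0 = np.mean(data ** 2)         # error of the trivial all-zero reconstruction

for _ in range(500):
    z = data @ E                  # encode: compress each snapshot to k numbers
    recon = z @ D                 # decode: reconstruct the full snapshot
    err = recon - data
    # Gradient descent on the mean squared reconstruction error.
    D -= lr * z.T @ err / len(data)
    E -= lr * data.T @ (err @ D.T) / len(data)

mse = np.mean((data @ E @ D - data) ** 2)
print(mse, "vs baseline", mse0)   # reconstruction error falls well below baseline
```

A convolutional autoencoder applies the same compress-and-reconstruct idea with convolutional layers, which is what makes it practical for actual flow images.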

While machine learning used alongside chaos theory is useful in the theoretical discussion of fluid flow, there are also immediate uses for this technology. The Three Gorges Reservoir is surrounded by unstable terrain, and landslides there are both commonplace and deadly. Predicting when and where landslides will occur is critical to avoiding this loss of life. Previously, machine learning alone was used to predict them, but it proved ineffective due to the chaotic nature of landslides. While this may seem like a huge problem, chaotic systems have been studied extensively by mathematicians and have properties that, when married with artificial intelligence, can lead to some incredible results.

In chaotic systems a value known as the Lyapunov exponent is often explored. This value describes how chaotic a system is: for a tiny change in initial conditions, how large a difference does it make? When past landslide data was examined, distinct, non-zero Lyapunov exponents could be calculated from it: proof that traditional linear models and machine learning alone weren't enough, and that these systems needed to incorporate chaos theory.
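To make the Lyapunov exponent concrete, here is a sketch that estimates it for the logistic map x → r·x·(1−x), a standard toy chaotic system (not landslide data). The exponent is the average of log|f′(x)| along a trajectory; a positive value means nearby starting points diverge exponentially, the signature of chaos:

```python
import math

def lyapunov_logistic(r, x0=0.2, n_steps=10_000, n_skip=100):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)."""
    x = x0
    # Discard a transient so we average over the attractor, not the approach to it.
    for _ in range(n_skip):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_steps):
        # |f'(x)| = |r*(1 - 2x)| measures how fast nearby points separate here.
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n_steps

print(lyapunov_logistic(4.0))  # positive: chaotic (known value is ln 2 ~ 0.693)
print(lyapunov_logistic(2.5))  # negative: settles to a fixed point, not chaotic
```

The same idea, estimating divergence rates from observed time series rather than a known formula, is what reveals the non-zero exponents in the landslide displacement data.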

Past landslides around the Three Gorges Reservoir have large GPS datasets that lend themselves to use as training data for supervised machine learning. In particular, with the advent of this new chaotic model of landslides, an algorithm called the Extreme Learning Machine (ELM) has come into frequent use. Much like the convolutional neural networks above, an ELM works as a system of neurons, but its hidden-layer weights are assigned randomly and never trained, so it needs no traditional remedies such as weight decay or early stopping; it is marginally simpler, but also considerably faster.
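The ELM idea fits in a few lines: project the inputs through a fixed random hidden layer, then solve for the output weights in a single least-squares step instead of iterative backpropagation. Here is a minimal sketch on a made-up one-dimensional regression task; the synthetic sine data stands in for real GPS displacement records:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: recover y = sin(x) from noisy samples.
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x) + rng.normal(0, 0.05, size=(200, 1))

n_hidden = 50
W = rng.normal(size=(1, n_hidden))   # random input weights -- never trained
b = rng.normal(size=n_hidden)        # random biases -- never trained

def hidden(x):
    return np.tanh(x @ W + b)        # fixed random feature map

# The only "training": one linear least-squares solve for the output weights.
beta, *_ = np.linalg.lstsq(hidden(x), y, rcond=None)

x_test = np.linspace(-3, 3, 100).reshape(-1, 1)
pred = hidden(x_test) @ beta
err = np.max(np.abs(pred - np.sin(x_test)))
print(err)  # small: the random-feature fit tracks the true curve closely
```

Because the expensive iterative training loop is replaced by one matrix solve, the speed advantage the text describes follows directly from the structure of the algorithm.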

Alongside the ELM, Long Short-Term Memory (LSTM) networks have been proposed. These are more advanced RNNs with memory cells, so alongside spotting time dependencies they can remember specific features, leading to more accurate predictions. This is still future research, but it shows promise conceptually. In the case of the ELM, great success has already been achieved in modeling landslides when their chaotic features are taken into account. Even models that ignored rainfall and reservoir water level changes were shown to be accurate, and models that included them proved even more so. There is more to learn and more techniques to explore, but with every advancement the lives of thousands become safer.

These methods applied to fluid dynamics can be used for other systems too. Millions of people interact with the stock markets every day, with prices fluctuating chaotically and people's finances hanging in the balance. Predicting the movement of a stock's worth depends on countless variables, but for those who can do it successfully it is incredibly lucrative. By studying the stock market as a chaotic system, one can attempt to predict its behaviour using pattern mining: training machines to dig for patterns in the data. The closing stock prices for consecutive days were given as the initial training set and a sequential pattern mining algorithm was used to study the system. The outputs from the algorithm were graphed and fractal patterns formed. The algorithm notices these patterns within the fluctuation of a stock's price, analyses them and then models them to predict when they are likely to occur again in the future.
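A toy version of sequential pattern mining can be sketched by discretising each day's closing-price move into a symbol (up or down) and counting which short sequences recur. Real miners use richer algorithms (PrefixSpan-style search, for instance) and real price histories; both the prices and the thresholds below are made up for illustration:

```python
from collections import Counter

# Made-up closing prices for consecutive days (illustrative only).
closes = [100, 102, 101, 103, 105, 104, 106, 105, 107, 106, 108, 110]

# Discretise each day's move into a symbol: U = up, D = down.
moves = ["U" if b > a else "D" for a, b in zip(closes, closes[1:])]

def frequent_patterns(symbols, length, min_count=2):
    """Count every consecutive pattern of the given length, keeping frequent ones."""
    counts = Counter(tuple(symbols[i:i + length])
                     for i in range(len(symbols) - length + 1))
    return {p: c for p, c in counts.items() if c >= min_count}

print(frequent_patterns(moves, 2))  # e.g. ('U', 'D') recurs often in this series
```

Patterns that recur far more often than chance would allow are the candidates the full algorithm then models to predict when they are likely to appear again.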

To the left is the closing stock price pattern for Tesla over a six-month interval. The orange sections are the identified fractal patterns unique to the company, and they are highly likely to occur again (even within a company as turbulent as this one). By applying chaos theory to past market trends, one can effectively analyse and predict the future behaviour of a company's stock using machine learning.

One of the most effective machine learning methods for forecasting financial markets is the Long Short-Term Memory (LSTM) network. Mentioned previously, it is a recurrent neural network that has feedforward connections like most methods, but feedback connections too. It has vast processing power and can process not only single data points but entire sequences of data. It can retain previous information and draw on what it saves when computing long-term forecasts, allowing it to predict further into the future accurately than other models.
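The memory cell that distinguishes an LSTM from a plain RNN can be written out directly: three gates decide what to forget, what to store, and what to expose at each step. A minimal forward pass in NumPy follows, with random weights and toy dimensions; it illustrates the mechanism rather than a trained forecaster:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_h = 1, 8   # one input feature (e.g. a daily price), 8 hidden units

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, each acting on [input, previous hidden state].
Wf, Wi, Wo, Wc = (rng.normal(0, 0.3, (n_h, n_in + n_h)) for _ in range(4))

def lstm_forward(sequence):
    h = np.zeros(n_h)   # hidden state: what the cell exposes at each step
    c = np.zeros(n_h)   # cell state: the long-term memory
    for x in sequence:
        z = np.concatenate(([x], h))
        f = sigmoid(Wf @ z)               # forget gate: how much old memory to keep
        i = sigmoid(Wi @ z)               # input gate: how much new info to store
        o = sigmoid(Wo @ z)               # output gate: how much memory to expose
        c = f * c + i * np.tanh(Wc @ z)   # update the long-term memory
        h = o * np.tanh(c)                # expose a filtered view of it
    return h

h = lstm_forward([0.5, 0.6, 0.55, 0.7, 0.65])  # final state summarises the sequence
```

The explicit cell state `c` is what lets the network carry information across long gaps in the sequence, which plain RNNs struggle with; it is also why the model is computationally heavier, as the text notes below.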

To the right are the results of using the LSTM network to predict the behaviour of three of the most widely traded cryptocurrencies: Bitcoin, Digital Cash and Ripple. The blue lines show the LSTM-predicted prices against the true prices in red. As can be seen, the LSTM has proved an incredibly accurate method for predicting fractal compositions over long time horizons. It performs so well partly because it approximates global non-linear hidden patterns and takes them into account. The main trade-off for this high accuracy is that the LSTM model is computationally much heavier than its counterparts, but for such profitable insights many will not be concerned by this.

One of the most well-known unsolved systems in physics and mathematics is the many-body problem. Consider the interactions between the planets in our solar system. While Earth is gravitationally bound to each of the other planets, they are also bound gravitationally to each other. Finding each individual gravitational force creates an incredibly complex set of interdependent equations which even supercomputers struggle to solve. The many-body problem does not just cover planets but any set of interacting particles and the physical questions pertaining to their system. The two-body problem has been well studied and can easily be solved by hand or with simple computing software; often our homework would involve solving a two-body problem, such as a pair of pendulums. However, as soon as the system grows to three particles, the complexity increases dramatically. Machine learning algorithms have been trained to accurately predict the positions of three bodies, but this requires inputting up to 10,000 solutions as the training set. Similar steps can be applied to solve four-body and some five-body problems, but many assumptions and simplifications have to be employed, and scientists are yet to solve systems of any higher order. With the addition of each new particle, the predictions become increasingly inaccurate due to the chaotic nature of the problem, and the numerical solutions in the training set become more and more computationally expensive. Machine learning is by no means a solution, but merely a tool to help tackle such complex systems.
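The numerical solutions used in such training sets come from directly integrating Newton's law of gravitation. Here is a bare-bones sketch for three bodies moving in a plane, in units with G = 1 and with made-up masses, positions and velocities; real training data would come from far more accurate integrators than these simple Euler steps:

```python
# Three-body gravitational simulation in the plane, units with G = 1.
masses = [1.0, 1.0, 1.0]
pos = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]   # arbitrary starting positions
vel = [[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]]   # arbitrary starting velocities

def accelerations(pos):
    """Acceleration on each body from the gravity of the other two."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += masses[j] * dx / r3    # inverse-square law: m_j * dr / r^3
            acc[i][1] += masses[j] * dy / r3
    return acc

dt = 0.001
for _ in range(1000):                 # integrate one time unit in small steps
    acc = accelerations(pos)
    for i in range(3):
        for k in range(2):
            vel[i][k] += acc[i][k] * dt
            pos[i][k] += vel[i][k] * dt

print(pos)
```

Each step couples every body to every other, which is why the cost and the accumulated error grow so quickly as bodies are added, and why thousands of such runs are needed just to build one training set.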

A real-life application of this is in predicting the orbits of satellites. In 2009 a collision occurred between a U.S. Iridium communications satellite and the Russian Cosmos 2251 communications satellite because their paths could not be predicted accurately enough. Not only was it an enormous waste of resources, costing millions of dollars in losses, it also dispersed over 2,000 large fragments of space debris which pose a threat to other satellites and to space travel. Unsurprisingly, a great deal of research has since gone into predicting orbits to avoid such collisions, and machine learning has been found to be among the most effective methods. However, the model's predictions become less and less accurate as time goes on, as is inevitable when analysing chaotic systems, which again demonstrates the need for a balance between studying the underlying theory and the big data. Ultimately, machine learning can only fit curves and find correlations within the data it is given; it cannot physically understand the system it is dealing with. You can have an algorithm predict the temperature expected on a certain day, but it won't know to bring a jacket. It is up to the person using it to understand the theory and interpret the results produced. An example of this is a study which used machine learning to predict the thickening times of complex slurries from infrared analysis of cement powders. It succeeded in its predictions, but it gave no help in understanding the underlying mechanism of the process, nor did it predict any improved, new kinds of materials beyond the data provided. For all systems, including the ones discussed previously, machine learning can give you a prediction, but it is up to us to understand why that answer makes sense. This is a fundamental flaw in the technology and a limitation for the user.
To enhance the use of machine learning in the future, a balance is needed between focusing on the big data and on the theory of the system that produced it.