A Multi-Branched Radial Basis Network Approach to Predicting Complex Chaotic Behaviours
Abstract
In this study, we propose a multi-branched network approach to predict the dynamics of a physics attractor characterized by intricate and chaotic behavior. We introduce a unique neural network architecture composed of Radial Basis Function (RBF) layers combined with an attention mechanism, designed to effectively capture the nonlinear inter-dependencies inherent in the attractor’s temporal evolution. Our results demonstrate successful prediction of the attractor’s trajectory across 100 predictions made using a real-world dataset of 36,700 time-series observations encompassing approximately 28 minutes of activity. To further illustrate the performance of the proposed technique, we provide comprehensive visualizations depicting the attractor’s original and predicted behaviors alongside quantitative measures comparing observed versus estimated outcomes. Overall, this work showcases the potential of advanced machine learning algorithms in elucidating hidden structures in complex physical systems while offering practical applications in domains requiring accurate short-term forecasting capabilities.
1 Introduction
In traditional mathematics, a radial basis function is a function whose value depends only on the distance between the input and a specified point, such as the origin or a center point; any function satisfying this property is called a radial function [1].
A radial function is a function $\varphi : [0, \infty) \to \mathbb{R}$. When paired with a metric on a vector space, $\|\cdot\| : V \to [0, \infty)$, a function $\varphi_{\mathbf{c}}(\mathbf{x}) = \varphi(\|\mathbf{x} - \mathbf{c}\|)$ is said to be a radial kernel centered at $\mathbf{c} \in V$. A radial function and the associated radial kernels are said to be radial basis functions if, for any set of nodes $\{\mathbf{x}_k\}_{k=1}^{n}$:

- The kernels $\varphi(\|\mathbf{x} - \mathbf{x}_k\|)$ are linearly independent (for example, $\varphi(r) = r^2$ in $V = \mathbb{R}$ is not a radial basis function).
- The kernels form a basis of a Haar space, meaning that the interpolation matrix $A$ with entries $A_{jk} = \varphi(\|\mathbf{x}_j - \mathbf{x}_k\|)$ is non-singular.
Commonly used radial basis functions include:
- Gaussian RBF: $\varphi(r) = e^{-(\varepsilon r)^2}$, where $r = \|\mathbf{x} - \mathbf{c}\|$ is the distance between the input point and the center, and $\varepsilon$ is a parameter controlling the width of the Gaussian.
- Multiquadric RBF: $\varphi(r) = \sqrt{1 + (\varepsilon r)^2}$, where $r$ is the distance between the input point and the center, and $\varepsilon$ is a parameter controlling the shape of the function.
- Inverse Multiquadric RBF: $\varphi(r) = \dfrac{1}{\sqrt{1 + (\varepsilon r)^2}}$, where $r$ is the distance between the input point and the center, and $\varepsilon$ is a parameter controlling the shape of the function.
- Thin Plate Spline RBF: $\varphi(r) = r^2 \ln r$, where $r$ is the distance between the input point and the center.
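For concreteness, the following is a minimal NumPy sketch of the four kernels above; $r$ denotes the distance to the center and `eps` the width/shape parameter, and the example values are placeholders rather than settings used in this paper.

```python
import numpy as np

def gaussian(r, eps=1.0):
    """Gaussian RBF: phi(r) = exp(-(eps * r)^2)."""
    return np.exp(-(eps * r) ** 2)

def multiquadric(r, eps=1.0):
    """Multiquadric RBF: phi(r) = sqrt(1 + (eps * r)^2)."""
    return np.sqrt(1.0 + (eps * r) ** 2)

def inverse_multiquadric(r, eps=1.0):
    """Inverse multiquadric RBF: phi(r) = 1 / sqrt(1 + (eps * r)^2)."""
    return 1.0 / np.sqrt(1.0 + (eps * r) ** 2)

def thin_plate_spline(r):
    """Thin plate spline RBF: phi(r) = r^2 * ln(r), with phi(0) = 0."""
    r = np.asarray(r, dtype=float)
    return np.where(r > 0, r ** 2 * np.log(np.where(r > 0, r, 1.0)), 0.0)

# r is the Euclidean distance ||x - c|| between an input point and a center.
r = np.linalg.norm(np.array([1.0, 2.0]) - np.array([0.0, 0.0]))
print(gaussian(r, eps=0.5), multiquadric(r), thin_plate_spline(r))
```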
Imagine a ball rolling around a landscape with hills and valleys. An attractor acts like the bottom of a valley. Regardless of where you place the ball on the landscape (starting conditions), if it rolls downhill long enough, it will eventually settle at the valley’s bottom (the attractor). This signifies that the system (the ball) tends towards a specific set of values (the valley’s position) over time. Thus, formally defining an attractor involves identifying a group of numeric values that a system naturally gravitates towards, irrespective of its initial parameters.
Mathematical definition of an attractor:
Let $t$ represent time and let $f(t, \cdot)$ be a function specifying the dynamics of the system. If $a$ is a point in an $n$-dimensional phase space, representing the initial state of the system, then $f(0, a) = a$, and, for a positive value of $t$, $f(t, a)$ is the result of the evolution of this state after $t$ units of time. For example, if the system describes the evolution of a free particle in one dimension, then the phase space is the plane $\mathbb{R}^2$ with coordinates $(x, v)$, where $x$ is the position of the particle, $v$ is its velocity, $a = (x, v)$, and the evolution is given by $f(t, (x, v)) = (x + tv, v)$.
An attractor is a subset $A$ of the phase space characterized by the following three conditions:

1. $A$ is forward invariant under $f$: if $a$ is an element of $A$, then so is $f(t, a)$ for all $t > 0$.
2. There exists a neighborhood of $A$, called the basin of attraction for $A$ and denoted $B(A)$, which consists of all points $b$ that "enter" $A$ in the limit $t \to \infty$. More formally, $B(A)$ is the set of all points $b$ in the phase space with the following property: For any open neighborhood $N$ of $A$, there is a positive constant $T$ such that $f(t, b) \in N$ for all real $t > T$.
3. There is no proper (non-empty) subset of $A$ having the first two properties.
Since the basin of attraction contains an open set containing $A$, every point that is sufficiently close to $A$ is attracted to $A$. The definition of an attractor uses a metric on the phase space, but the resulting notion usually depends only on the topology of the phase space [4]. In the case of $\mathbb{R}^n$, the Euclidean norm [5] is typically used, which is defined as $\|x\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$. Using these concepts, we propose implementing a multi-branched radial basis neural network to help predict the chaotic and random behaviours of an attractor.
2 Related Work
Radial Basis networks have been extensively studied and proven effective in various classification tasks [6][7]. They offer a versatile framework for pattern recognition and data analysis, leveraging the flexibility of radial basis functions to model complex relationships within datasets. By capturing the intricate dynamics and nonlinear interactions inherent in real-world phenomena, Radial Basis networks contribute to advancing our understanding of complex systems and facilitating informed decision-making in fields ranging from communication systems [8][9] to computational biology [10][11].
While RBF layers offer valuable capabilities in certain modeling tasks, they alone may not be sufficient for capturing the rich dynamics and predicting chaotic and random behaviors in attractors. To address the complexities inherent in chaotic systems, more sophisticated and adaptable modeling approaches are required, which may involve combining RBF layers with other architectural components and techniques tailored to the specific characteristics of chaotic dynamics.
Attention mechanisms [12] have emerged as powerful tools in the realm of neural networks, offering sophisticated mechanisms for selectively focusing on relevant parts of input data while suppressing irrelevant information. Originally inspired by human cognitive processes, attention mechanisms have found widespread applications in various domains, including natural language processing, computer vision, and sequential data modeling.
3 Dataset
We use a pre-existing Kaggle dataset [13]. This dataset comprises time series data originating from an unidentified physics attractor, synthesized through undisclosed governing rules. Manifesting intricate and chaotic dynamics, the attractor presents a challenge for analysis.
The dataset encompasses 36,700 data points, each delineating the positions of two points in a two-dimensional space at distinct time intervals. Collected over approximately 28 minutes, the dataset offers insights into the attractor’s behavior over time. Notably, the system undergoes periodic resets, typically occurring upon reentry into a recurring loop. Table 1 shows the different variables in the dataset.
| Variable | Type | Definition |
|---|---|---|
| time | Float | The time in seconds since the start of the simulation |
| distance | Float | Distance between both objects |
| angle1 | Angle | Angle of the first object |
| pos1x | Float | X position of the first object |
| pos1y | Float | Y position of the first object |
| angle2 | Angle | Angle of the second object |
| pos2x | Float | X position of the second object |
| pos2y | Float | Y position of the second object |
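To illustrate how the dataset can be prepared, the following is a minimal pandas sketch; the CSV file name is an assumed placeholder, and the column names follow Table 1.

```python
import pandas as pd

# Load the attractor time series; "physics_attractor.csv" is an assumed file name.
df = pd.read_csv("physics_attractor.csv")

# Columns as listed in Table 1.
columns = ["time", "distance", "angle1", "pos1x", "pos1y",
           "angle2", "pos2x", "pos2y"]
df = df[columns]

print(len(df))          # expected: 36,700 rows
print(df.describe())    # quick sanity check of value ranges
```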
4 Methodology
The proposed model defines a network architecture consisting of several components:

Branches: Three separate branches are utilized, each focusing on learning the relationship between a specific pair of input columns. Each branch comprises:

- An RBF layer that transforms the input pair through radial basis functions.
- A Dropout layer, introduced to mitigate overfitting.
- An AttentionLayer, which focuses on significant portions of the transformed data within the branch.
- Linear layers with activation functions for additional feature extraction and transformation.

Merging Layer: Following the processing of each pair of columns within their respective branches, the outputs are concatenated. A linear layer with a ReLU activation function integrates the combined information.

Output Layer: A final linear layer with an output size of 3 projects the merged features onto the desired three-dimensional prediction.
Denote $\mathbf{x} = (x_1, x_2, x_3)$ as the input vector having three features. Let $\hat{\mathbf{y}} \in \mathbb{R}^3$ denote the output of the model. Each branch accepts a pair of input features represented as $(x_i, x_j)$, satisfying $i \neq j$.
The forward function governs the data flow through the network:

- Input Splitting: Separation of the input data into three distinct columns, representing the features $x_1$, $x_2$, and $x_3$.
- Branch Processing: Feeding each pair of columns into the assigned branch (branch1, branch2, or branch3); each pair is subsequently processed through the branch’s constituent layers, yielding one output per pair.
- Output Concatenation: The individual branch outputs, namely $\mathbf{o}_1$, $\mathbf{o}_2$, and $\mathbf{o}_3$, undergo concatenation along the feature dimension.
- Merging: Transmission of the concatenated outputs through the merging layer produces a unified representation.
- Prediction: Applying the merged features to the final output layer yields the three-dimensional prediction $\hat{\mathbf{y}}$.
This design enables the model to discern specific relationships amongst diverse input feature pairs while combining the learned features through the attention mechanism and merging stages to deliver the final prediction.
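Since the paper does not specify layer widths, the dropout probability, the internals of the AttentionLayer, or which feature pairs feed which branch, the following PyTorch sketch should be read as an illustration of the described design under assumed choices (Gaussian RBF units, a simple feature-wise soft attention, and arbitrary hidden sizes and pair assignments), not as the exact implementation.

```python
import torch
import torch.nn as nn


class RBFLayer(nn.Module):
    """Gaussian RBF layer: phi_j(x) = exp(-||x - c_j||^2 / (2 * sigma_j^2))."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(out_features, in_features))
        self.log_sigmas = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # x: (batch, in_features) -> distances to each center: (batch, out_features)
        dist = torch.cdist(x, self.centers)
        return torch.exp(-dist ** 2 / (2 * torch.exp(self.log_sigmas) ** 2))


class AttentionLayer(nn.Module):
    """Feature-wise soft attention (an assumed formulation of the paper's AttentionLayer)."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, dim)

    def forward(self, x):
        weights = torch.softmax(self.score(x), dim=-1)
        return x * weights


class Branch(nn.Module):
    """One branch: RBF layer -> dropout -> attention -> linear head."""

    def __init__(self, hidden=32, out=16, p_drop=0.2):  # sizes and p_drop are assumptions
        super().__init__()
        self.rbf = RBFLayer(2, hidden)
        self.drop = nn.Dropout(p_drop)
        self.attn = AttentionLayer(hidden)
        self.head = nn.Sequential(nn.Linear(hidden, out), nn.ReLU())

    def forward(self, pair):
        return self.head(self.attn(self.drop(self.rbf(pair))))


class MultiBranchRBFNet(nn.Module):
    """Three branches over feature pairs, concatenated, merged, projected to 3 outputs."""

    def __init__(self, branch_out=16):
        super().__init__()
        self.branch1 = Branch(out=branch_out)
        self.branch2 = Branch(out=branch_out)
        self.branch3 = Branch(out=branch_out)
        self.merge = nn.Sequential(nn.Linear(3 * branch_out, 64), nn.ReLU())
        self.out = nn.Linear(64, 3)

    def forward(self, x):
        # x: (batch, 3); split into the three features and form the pairs
        x1, x2, x3 = x[:, 0:1], x[:, 1:2], x[:, 2:3]
        o1 = self.branch1(torch.cat([x1, x2], dim=1))  # pair (x1, x2)
        o2 = self.branch2(torch.cat([x2, x3], dim=1))  # pair (x2, x3)
        o3 = self.branch3(torch.cat([x1, x3], dim=1))  # pair (x1, x3)
        merged = self.merge(torch.cat([o1, o2, o3], dim=1))
        return self.out(merged)


model = MultiBranchRBFNet()
print(model(torch.randn(4, 3)).shape)  # torch.Size([4, 3])
```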
5 Training
We train the model on a single NVIDIA A30 GPU. Training for 2000 epochs with a batch size of 512 takes 2 hours. We use the Mean Squared Error (MSE) loss function as our criterion:
$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2$$

where $\hat{y}_i$ represents the predicted values, $y_i$ represents the actual target values, and $n$ is the total number of samples. The MSE computes the average of the squared differences between predicted and actual values, providing a measure of the model’s performance in minimizing prediction errors. We utilize the Adam [14] optimizer for our model:
$$m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$$
$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \qquad \theta_{t+1} = \theta_t - \frac{\eta \, \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$$

where $m_t$ and $v_t$ are the first and second moment estimates, $g_t$ is the gradient, $\beta_1$ and $\beta_2$ are the exponential decay rates for the moment estimates, $\hat{m}_t$ and $\hat{v}_t$ are bias-corrected estimates, $\theta_t$ is the parameter at iteration $t$, $\eta$ is the learning rate, and $\epsilon$ is a small constant to prevent division by zero.
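A minimal training-loop sketch with the MSE criterion and Adam optimizer described above is given below; the learning rate and the placeholder tensors are assumptions, and `MultiBranchRBFNet` refers to the sketch in Section 4.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Placeholder tensors with the shapes used in this setup: 3 input features, 3 targets.
X = torch.randn(36700, 3)
Y = torch.randn(36700, 3)
train_loader = DataLoader(TensorDataset(X, Y), batch_size=512, shuffle=True)

model = MultiBranchRBFNet()                           # sketch from Section 4
criterion = nn.MSELoss()                              # MSE criterion (Section 5)
optimizer = optim.Adam(model.parameters(), lr=1e-3)   # learning rate is an assumption

for epoch in range(2000):                             # 2000 epochs, batch size 512
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
```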
6 Results
- Loss over iterations of the Single Sequential Network (Figure 3): The training loss for Object 1 (blue) starts high, decreases sharply, and then fluctuates around a lower level with some spikes. The training loss for Object 2 (orange) follows a similar pattern but maintains a higher overall loss throughout the training process. There are large spikes in the loss for both objects early in training, indicating potential instability or difficulty in the initial learning phase. The loss stabilizes and flattens out towards the end of the training iterations shown.
- Loss over iterations of the Multi-Branched Network (Figure 4): The overall pattern is similar to Figure 3, with Object 2’s loss (orange) consistently higher than Object 1’s loss (blue). However, the initial large spikes in loss are more prominent and last longer than in Figure 3. The loss curves flatten out and stabilize at a later point in the training process than in Figure 3. There are fewer small fluctuations and spikes in the loss curves once they stabilize, suggesting smoother convergence.
In summary, while the overall trend of Object 2 having higher training loss is consistent across both figures, the single sequential network exhibits more pronounced initial instability and takes longer to stabilize compared to the multi-branched architecture.
We next compare the outputs of the single sequential network and the multi-branched architecture. Figure 5 shows the object movement for the single sequential network and Figure 6 shows the object movement for the multi-branched architecture.
The predicted paths (black lines) for the single sequential network (Figure 5) are relatively centralized and seem to capture some linear segments of the trajectories. The overall pattern shows dense and tangled paths, which is typical of chaotic systems. The black lines appear to follow the chaotic nature to some extent but might be too centralized and not dispersed enough to fully capture the randomness. The predicted paths (black lines) for the multi-branched architecture (Figure 6) are also centralized but show slight shifts compared to the output of the single sequential network. This output also has dense and tangled paths, consistent with chaotic behavior. The black lines appear to capture more variability and slight shifts, which might better reflect the unpredictability of chaotic systems.
Figure 3: Training loss over iterations for the single sequential network.
Figure 4: Training loss over iterations for the multi-branched network.
Figure 5: Predicted object movement for the single sequential network.
Figure 6: Predicted object movement for the multi-branched architecture.
7 Conclusion
In conclusion, this paper has explored the application of Radial Basis Function Neural Networks (RBFNNs) in predicting chaotic and random behaviors. Through a comprehensive review of related work, we have highlighted the strengths and limitations of RBFNNs in capturing the complex dynamics of chaotic systems. Leveraging insights from chaos theory and neural network architecture, we have proposed novel approaches for enhancing the predictive capabilities of RBFNNs with attention mechanisms.
Our results demonstrate the effectiveness of our proposed methods in predicting chaotic and random behaviors. A comparison of object movement predictions illustrated in our visual results indicates that our enhanced RBFNN model effectively captures the inherent variability and unpredictability of chaotic systems. Specifically, in Figure 6 the predicted paths exhibited greater variability and subtle shifts, closely aligning with the expected characteristics of chaotic behavior. This confirms that our model can realistically reflect the randomness and sensitivity to initial conditions typical of chaotic systems.
Overall, this paper contributes to advancing our understanding of chaotic systems and lays the groundwork for future research in utilizing RBFNNs for predictive modeling in complex dynamical systems.
8 Limitations
Chaotic systems often require ongoing monitoring and adjustments to plans. Since small changes can have significant impacts, staying updated on the current state of the system is crucial. We acknowledge that chaos cannot be truly predicted or fully understood.
9 Reproducibility
Results can be reproduced from the code available in our GitHub repository.
10 Acknowledgement
We acknowledge the work of Alessio Russo, who originally implemented RBFNNs in PyTorch. His work is available on his GitHub [16].
References
- [1] Contributors to Wikimedia projects. Radial basis function - Wikipedia, 2024.
- [2] Gregory E. Fasshauer. Meshfree Approximation Methods with MATLAB. World Scientific Publishing Co. Pte. Ltd., Singapore, 2007.
- [3] Holger Wendland. Scattered Data Approximation. Cambridge University Press, Cambridge, 2005.
- [4] John Milnor. On the concept of attractor. Communications in Mathematical Physics, 99(2):177–195, Jun 1985.
- [5] M. Emre Celebi, Fatih Celiker, and Hassan A. Kingravi. On euclidean norm approximations, 2010.
- [6] Yue Wu, Hui Wang, Biaobiao Zhang, and K.-L. Du. Using Radial Basis Function Networks for Function Approximation and Classification. International Scholarly Research Notices, 2012, March 2012.
- [7] James A Leonard and Mark A Kramer. Radial basis function networks for classifying process faults. IEEE Control Systems Magazine, 11(3):31–38, 1991.
- [8] Deng Jianping, Narasimhan Sundararajan, and P Saratchandran. Communication channel equalization using complex-valued minimal radial basis function neural networks. IEEE Transactions on neural networks, 13(3):687–696, 2002.
- [9] Hao Yu, Tiantian Xie, Stanisław Paszczynski, and Bogdan M Wilamowski. Advantages of radial basis function networks for dynamic system design. IEEE Transactions on Industrial Electronics, 58(12):5438–5450, 2011.
- [10] A Vande Wouwer, Christine Renotte, and Ph Bogaerts. Biological reaction modeling using radial basis function networks. Computers & chemical engineering, 28(11):2157–2164, 2004.
- [11] Yu-Yen Ou et al. Identifying the molecular functions of electron transport proteins using radial basis function networks and biochemical properties. Journal of Molecular Graphics and Modelling, 73:166–178, 2017.
- [12] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2023.
- [13] NIKITRICKY. Physics attractor time series dataset, 2023.
- [14] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017.
- [15] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library, 2019.
- [16] Alessio Russo. Pytorch rbf layer, 2021.