BRICS Economic Forecasting using Neural Networks
This weekend I finished an interesting research assignment in which I used five computational techniques to train artificial neural networks to forecast the 2011 GDP growth rates for Brazil, Russia, India, China, and South Africa (the BRICS nations). The forecasts made by the neural networks were based on social development and economic indicators obtained from the World Bank. In this article I introduce the topic of economic forecasting, discuss neural networks and particle swarm optimization algorithms, and end with some conclusions. The results for each nation are shown at the end of the post.
Economic Forecasting
Economic forecasting is the process of making predictions about the future performance of an economy. Predictions can be made by economists or by computational models, and are important from a societal, governmental, and financial-services perspective. Regulators use economic forecasts when making decisions such as setting interest rates, defining monetary policy, and managing currencies; poor economic governance, for example, was one of the root causes of the hyperinflation experienced by Zimbabwe after 1998. Businesses and investors also use forecasts when defining their financial and geographical strategies. Building stronger, more robust financial models is therefore a key global concern.

The following indicators were selected as inputs into the neural networks:

- labour participation rates,
- unemployment rates,
- vulnerable employment rates,
- secondary school enrolment rates,
- population brackets,
- cash surplus or deficit,
- exports as a % of total economic goods and services,
- imports as a % of total economic goods and services,
- gross savings rates,
- inflation rates,
- foreign direct investment (net inflows), and
- gross domestic product.

All of the data used in this research study was normalized.
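The post does not specify which normalization scheme was used, but a common choice when indicators sit on very different scales (GDP in dollars versus rates in percent) is min-max normalization. A minimal sketch, using hypothetical inflation figures:

```python
# Min-max normalization: scales each indicator into [0, 1] so that no
# single input dominates neural network training simply because of its units.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant series carries no signal
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical inflation rates (%) over five successive years
inflation = [4.5, 5.9, 5.0, 6.6, 6.5]
print(min_max_normalize(inflation))  # smallest value -> 0.0, largest -> 1.0
```

Each indicator series would be normalized independently before being fed to the network.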
Artificial Neural Networks (ANN)
Neural networks consist of layers of interconnected nodes. Individual nodes are called perceptrons and resemble a multiple linear regression. The difference between a multiple linear regression and a perceptron is that a perceptron feeds the signal generated by the multiple linear regression into an activation function, which may or may not be nonlinear. In a multi-layered perceptron (MLP), perceptrons are arranged into layers and the layers are connected with one another. The MLP contains three types of layers, namely the input layer, hidden layer(s), and the output layer. The input layer receives input patterns and the output layer could contain a list of classifications or output signals to which those input patterns may map. During training, the weights on the connections into the hidden layers are adjusted until the error of the neural network is minimized. One interpretation of this is that the hidden layers extract salient features in the input data which have predictive power with respect to the outputs.
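As a sketch, the perceptron and a minimal MLP forward pass described above might look like this in Python (the weights and inputs are illustrative placeholders, not trained values):

```python
import math

# A perceptron: a weighted sum of inputs (like a multiple linear regression)
# fed through an activation function -- here the sigmoid, a common nonlinearity.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def perceptron(inputs, weights, bias):
    signal = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(signal)

# A minimal MLP forward pass: input layer -> one hidden layer -> single output.
def mlp_forward(inputs, hidden_layer, output_node):
    hidden = [perceptron(inputs, w, b) for (w, b) in hidden_layer]
    return perceptron(hidden, *output_node)

# Illustrative parameters: two hidden perceptrons, each a (weights, bias) pair.
hidden_layer = [([0.5, -0.2], 0.1), ([0.3, 0.8], -0.4)]
output_node = ([1.0, -1.0], 0.0)
print(mlp_forward([0.6, 0.9], hidden_layer, output_node))
```

Training consists of searching for the weights and biases that minimize the network's error on known data; the PSO variants discussed below are one way to perform that search.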
For a much more detailed explanation of neural networks and what can go wrong when using them (especially in finance) please see my article, 10 Misconceptions about Neural Networks in finance and trading.
Particle Swarm Optimization (PSO)
Swarm Intelligence is a set of computational models based on the collective behaviour of decentralized, self-organizing systems. One such model, the Particle Swarm Optimization (PSO) algorithm, is derived from the behaviour of flocks of birds. In a PSO, a swarm of particles encoded as vectors in the solution space is initialized randomly. Each particle represents a candidate solution to the computational problem being solved. The PSO works by iteratively improving each particle relative to its own history and to the particle(s) in the rest of the swarm that represent better solutions to the computational problem. The PSO algorithm has been shown to work well in rugged solution spaces because it does not rely on gradients. Depending on the problem, the PSO algorithm can be more efficient than evolutionary algorithms such as genetic algorithms.
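The update rule described above, in which each particle is pulled toward its personal best and the swarm's global best, can be sketched as a standard (uncharged) PSO minimizing a toy function. The inertia and attraction coefficients below are typical textbook values, not those used in the study:

```python
import random

# Minimal PSO minimizing f. Each particle keeps a position, a velocity, and
# its personal best; the swarm keeps a global best that all particles track.
def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]                       # inertia
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest_pos[d] - pos[i][d])) # social pull
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

# Toy problem: minimize the sphere function, whose optimum is the origin.
best_pos, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print(best_val)  # typically very close to 0
```

Note that no derivative of `f` is ever computed, which is why PSO copes with rugged solution spaces where gradient-based methods struggle.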
One problem faced by the PSO algorithm is premature convergence to suboptimal solutions. Techniques aimed at reducing this deficiency include the use of particle restarts, electrostatically charged particles, and particles that follow quantum rather than Newtonian motion. PSO algorithms have been applied successfully to a wide array of optimization problems, including portfolio optimization. I used a charged PSO algorithm to calibrate a swarm of neural networks encoded as vectors. This technique is well suited to dynamic environments because it eliminates the need for continuous model recalibration. The approach is illustrated above for clarity.
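The study's exact network encoding is not given; as an illustration, a small feed-forward network with hypothetical layer sizes can be flattened into a single vector so that each PSO particle represents one candidate network, with the sum squared error as the fitness the swarm minimizes:

```python
import math

N_IN, N_HID = 2, 3  # hypothetical layer sizes: 2 inputs, 3 hidden, 1 output

# Decode a flat parameter vector into a network and run one forward pass.
# Layout: for each hidden node, N_IN weights then a bias; then the output
# node's N_HID weights and its bias.
def decode_and_predict(vector, inputs):
    idx, hidden = 0, []
    for _ in range(N_HID):
        w, b = vector[idx:idx + N_IN], vector[idx + N_IN]
        idx += N_IN + 1
        s = sum(wi * xi for wi, xi in zip(w, inputs)) + b
        hidden.append(1.0 / (1.0 + math.exp(-s)))  # sigmoid activation
    w, b = vector[idx:idx + N_HID], vector[idx + N_HID]
    return sum(wi * hi for wi, hi in zip(w, hidden)) + b  # linear output

# The fitness a PSO particle would try to minimize: sum squared error.
def sse(vector, dataset):
    return sum((decode_and_predict(vector, x) - y) ** 2 for x, y in dataset)

dataset = [([0.2, 0.7], 0.5), ([0.9, 0.1], 0.3)]  # hypothetical normalized data
dim = N_HID * (N_IN + 1) + N_HID + 1  # 13 parameters in total
print(sse([0.1] * dim, dataset))
```

Passing `sse` as the objective to a PSO (charged or standard) then calibrates the whole swarm of candidate networks at once.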
For a more detailed explanation on the Particle Swarm Optimization algorithm I recommend reading my article, Portfolio Optimization using Particle Swarm Optimization.
Algorithms used in this study
- A standard non-recurrent feed-forward neural network (FFNN),
- An Elman simple recurrent neural network,
- A Jordan simple recurrent neural network,
- A swarm of FFNNs trained using a standard PSO algorithm, and
- A swarm of FFNNs trained using a charged PSO algorithm.
Conclusions
It was found that using the charged PSO algorithm to train swarms of feed-forward neural networks was a viable approach that often performed better than traditional techniques. It was also found that the Elman recurrent neural network performed best on average when it came to minimizing the sum squared error of the neural networks.
Regarding the problem of economic forecasting, it was concluded that the performance of each algorithm varied with each nation. This indicates that the performance of these algorithms is problem dependent. As such, it was suggested that a quorum of neural networks acting in conjunction with one another to "vote" on the probability of one another's economic forecasts might perform better than a standalone neural network.
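As an illustration of the quorum idea: several trained networks each produce a forecast, and the ensemble combines them. The combination rule below, weighting each network by its inverse historical error, is a hypothetical choice for the sketch, not one taken from the study:

```python
# Combine forecasts from several networks, weighting each by the inverse of
# its historical error so that more reliable networks carry more "votes".
def quorum_forecast(forecasts, historical_errors):
    weights = [1.0 / (e + 1e-9) for e in historical_errors]  # avoid div by zero
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

# Hypothetical GDP growth forecasts (%) from three networks, with the
# networks' hypothetical past errors on held-out data
print(quorum_forecast([3.2, 3.8, 3.5], [0.4, 0.9, 0.5]))
```

The combined forecast always lies between the most pessimistic and most optimistic individual forecasts, which is one reason ensembles tend to be more robust than any single model.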
