Predictive Uncertainty in Neural Network-Based Financial Market Forecasting

  • Iwao Maeda, The University of Tokyo
  • Hiroyasu Matsushima, The University of Tokyo
  • Hiroki Sakaji, The University of Tokyo
  • Kiyoshi Izumi, The University of Tokyo
  • David deGraw, Daiwa Securities Co. Ltd.
  • Atsuo Kato, Daiwa Institute of Research Ltd.
  • Michiharu Kitano, Daiwa Institute of Research Ltd.
Keywords: Financial data mining, Financial market forecasting, Uncertainty consideration, Neural networks

Abstract

In financial market forecasting, various methods based on statistical analysis and neural networks have been proposed. Accurate forecasting of future market states can support investment decision-making; however, existing forecasting methods suffer from considerable deficiencies owing to the complexity, susceptibility to external influence, and nonstationarity of financial markets. Forecasting for complex systems such as financial markets should therefore account for predictive uncertainty, and decision-making should be adjusted accordingly. In the present study, we introduce the concept of uncertainty into neural network-based financial market forecasting. A sparse variational dropout Bayesian neural network (SVDBNN) is used for stochastic prediction, and a corresponding decision-making process is proposed on this basis. The proposed method is validated through investment simulations on historical order book data from the Tokyo Stock Exchange and is confirmed to enable more efficient and safer investment than the considered alternative approaches.
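The core idea described above, obtaining a predictive distribution from stochastic forward passes and gating trading decisions on the resulting uncertainty, can be illustrated with a minimal sketch. This is not the authors' implementation: the single-layer model, the Monte Carlo dropout approximation, and the `k`-standard-deviation decision rule are simplifying assumptions chosen only to make the mechanism concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, w, drop_p=0.5, n_samples=200):
    """Approximate the predictive distribution of a single linear layer
    by averaging stochastic forward passes with weight dropout.
    Returns the predictive mean and standard deviation."""
    preds = []
    for _ in range(n_samples):
        # Randomly zero out weights, rescaling survivors so the
        # expected output matches the deterministic forward pass.
        mask = (rng.random(w.shape) > drop_p).astype(float)
        preds.append(float(x @ (w * mask) / (1.0 - drop_p)))
    preds = np.asarray(preds)
    return preds.mean(), preds.std()

def uncertainty_aware_action(mean, std, k=1.0):
    """Act only when the predicted return exceeds k standard deviations
    of the predictive uncertainty; otherwise abstain from trading."""
    if mean > k * std:
        return "buy"
    if mean < -k * std:
        return "sell"
    return "hold"

# Illustrative usage with toy inputs and weights.
mean, std = mc_dropout_predict(np.ones(2), np.array([1.0, 1.0]))
action = uncertainty_aware_action(mean, std)
```

The decision rule reflects the abstract's argument: a confident forecast (large mean relative to uncertainty) triggers a trade, while an uncertain one leads the agent to stand aside, trading some efficiency for safety.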


Published
2021-02-27
Section
Technical Papers