Bayesian Neural Networks of Probabilistic Back Propagation for Scalable Learning on Hyper-Parameters

  • Chapter

Part of the book series: Intelligent Systems Reference Library (ISRL, volume 172)

Abstract

Large multilayer neural networks trained with back-propagation have recently achieved state-of-the-art results on a wide range of problems. This chapter describes and examines the Bayesian Neural Network (BNN) and demonstrates several applications of BNNs to classification and regression problems. A BNN couples a probabilistic model with a neural network; the aim of such a design is to combine the strengths of neural networks and stochastic modelling. Neural networks exhibit universal continuous-function approximation capabilities. However, learning neural networks with back-propagation still has several drawbacks, e.g., tuning a large number of hyper-parameters to the data, a lack of calibrated probabilistic predictions, and a tendency to overfit the training data. The Bayesian approach to learning neural networks does not suffer from these problems. However, existing Bayesian techniques lack scalability to large datasets and network sizes. In this work we present a novel scalable method for learning Bayesian neural networks, called probabilistic back-propagation (PBP). Like classical back-propagation, PBP works by computing a forward propagation of probabilities through the network and then performing a backward computation of gradients. A series of experiments on ten real-world datasets shows that PBP is significantly faster than other techniques, while offering competitive predictive abilities. Our analysis also shows that PBP provides accurate estimates of the posterior variance of the network weights.
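To make the mechanics concrete, below is a minimal illustrative sketch of a PBP-style update for a single Bayesian linear layer, assuming the standard assumed-density-filtering (ADF) moment-matching equations: each weight keeps an independent Gaussian approximation N(m, v), a forward pass propagates the mean and variance of the output, and m and v are then updated from the gradients of the log marginal likelihood log Z. The toy dimensions, noise level, and data are hypothetical, and the single-layer case is a deliberate simplification of the multilayer scheme the chapter describes.

```python
import numpy as np

# Illustrative single-layer PBP sketch. Each weight w_i has an independent
# Gaussian posterior approximation N(m_i, v_i). For one observation (x, y):
#   1. forward pass: propagate the mean and variance of the output f = w . x,
#   2. compute log Z = log N(y | mu, s2) and its gradients,
#   3. apply the ADF moment-matching update to m and v.

rng = np.random.default_rng(0)
d = 5              # input dimension (hypothetical toy size)
noise_var = 0.1    # assumed observation-noise variance

m = np.zeros(d)    # posterior means of the weights
v = np.ones(d)     # posterior variances of the weights

def pbp_update(m, v, x, y, noise_var):
    # Forward propagation of probabilities: moments of the scalar output
    mu = m @ x                              # E[f]
    s2 = (v * x**2).sum() + noise_var       # Var[f] plus observation noise

    # Backward step: gradients of log Z = log N(y | mu, s2),
    # using d mu / d m = x and d s2 / d v = x**2
    dlogZ_dmu = (y - mu) / s2
    dlogZ_ds2 = 0.5 * ((y - mu) ** 2 / s2**2 - 1.0 / s2)
    dlogZ_dm = dlogZ_dmu * x
    dlogZ_dv = dlogZ_ds2 * x**2

    # ADF moment-matching update of the Gaussian posterior
    m_new = m + v * dlogZ_dm
    v_new = v - v**2 * (dlogZ_dm**2 - 2.0 * dlogZ_dv)
    return m_new, np.maximum(v_new, 1e-12)  # keep variances positive

# Toy usage: fit noisy linear data one observation at a time
w_true = rng.normal(size=d)
for _ in range(2000):
    x = rng.normal(size=d)
    y = w_true @ x + rng.normal(scale=noise_var**0.5)
    m, v = pbp_update(m, v, x, y, noise_var)

print("posterior means:", np.round(m, 2))
print("true weights:   ", np.round(w_true, 2))
print("posterior vars: ", np.round(v, 4))  # should shrink as data accumulate
```

Note how the backward step never touches point gradients of a loss: the update moves the posterior mean along d log Z / dm and contracts the posterior variance, which is the source of the calibrated uncertainty estimates discussed above.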

Author information

Corresponding author

Correspondence to K. Thirupal Reddy.

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Thirupal Reddy, K., Swarnalatha, T. (2020). Bayesian Neural Networks of Probabilistic Back Propagation for Scalable Learning on Hyper-Parameters. In: Balas, V., Kumar, R., Srivastava, R. (eds) Recent Trends and Advances in Artificial Intelligence and Internet of Things. Intelligent Systems Reference Library, vol 172. Springer, Cham. https://doi.org/10.1007/978-3-030-32644-9_6
