Parallelization of Neural Network Building and Training: An Original Decomposition Method (pp. 193-223)
$100.00
Authors: Marc Sauget, Sylvain Contassot-Vivier, Michel Salomon (IRMA/ENISYS, University of Franche-Comté, France, and others)
Abstract:
Since the first developments of neural networks by Pitts and McCulloch, the major problems encountered have lain in their building and training. Indeed, some results prove that a feed-forward multi-layer perceptron neural network can be used as a universal interpolator. Unfortunately, there is neither any indication of how to build an optimized topology nor a method for choosing the learning algorithm best suited to train the network. Many learning algorithms give good results, like the classical back-propagation algorithm, for which various optimizations have been proposed. Some of these optimizations change the network structure, like the Square MLP or HPU designs, whereas others improve the learning process, like the QuickProp or Resilient back-Propagation (RPROP) algorithms. Nonetheless, these works are based on neural networks with a static structure that has to be inferred manually according to the user's experience.
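
The RPROP algorithm mentioned above adapts an individual step size for every weight from the sign of successive partial derivatives: the step grows while the gradient keeps its sign and shrinks when the sign flips. Purely as a non-authoritative illustration (this sketches the well-known iRPROP- variant, not any code from the chapter, and the gradient computation is left abstract):

import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One iRPROP- update. w, grad, prev_grad and step are arrays of equal
    shape; returns updated weights, the gradient to remember, and new steps."""
    sign_change = grad * prev_grad
    # Same sign: accelerate that weight's step. Sign flip: brake it.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # On a sign flip, zero the gradient so this weight skips one update
    # and no adaptation happens on the following step.
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, grad, step

# Tiny demo: minimize f(w) = sum(w**2), whose gradient is 2*w.
w = np.array([3.0, -2.0])
prev = np.zeros_like(w)
step = np.full_like(w, 0.1)
for _ in range(100):
    grad = 2 * w
    w, prev, step = rprop_step(w, grad, prev, step)
print(w)  # close to the minimum at the origin

Because only the sign of the gradient is used, the method is insensitive to the gradient's magnitude, which is what made it a popular improvement over plain back-propagation.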
In this chapter, we present a way to automatically adapt the neural network topology to the application context. More precisely, we present an efficient method that achieves parallel building and learning based on an original domain decomposition. This chapter describes the corresponding algorithms for both aspects and gives comparative results showing the relevance of our approach. In addition, the last part addresses the exploitation of the obtained neural network: we present a multi-threaded version of our Neurad application, used to compute irradiation doses in any environment.
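
The chapter's own decomposition and training algorithms are given in the text itself; purely as an illustration of the general idea, here is a minimal, self-contained sketch (all names hypothetical, not the authors' method) that splits a one-dimensional input domain into sub-domains, trains one small multi-layer perceptron per sub-domain concurrently, and routes each query point to the network owning its piece of the domain:

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def train_mlp(x, y, hidden=8, epochs=2000, lr=0.1, seed=0):
    """Train a tiny 1-hidden-layer tanh MLP on (x, y) with plain
    batch gradient descent on the mean squared error."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    x = x.reshape(-1, 1); y = y.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(x @ w1 + b1)            # forward pass
        out = h @ w2 + b2
        err = out - y                       # dMSE/dout (up to a constant)
        gw2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ w2.T) * (1 - h ** 2)    # back-prop through tanh
        gw1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        w1 -= lr * gw1; b1 -= lr * gb1
        w2 -= lr * gw2; b2 -= lr * gb2
    return w1, b1, w2, b2

def predict(params, x):
    w1, b1, w2, b2 = params
    return (np.tanh(x.reshape(-1, 1) @ w1 + b1) @ w2 + b2).ravel()

# Target function to interpolate on [0, 1].
f = lambda x: np.sin(2 * np.pi * x)
x_all = np.linspace(0.0, 1.0, 400)
y_all = f(x_all)

# Decompose the domain into K sub-domains and train one net per piece.
K = 4
edges = np.linspace(0.0, 1.0, K + 1)
masks = [(x_all >= edges[i]) & (x_all <= edges[i + 1]) for i in range(K)]
with ThreadPoolExecutor(max_workers=K) as pool:
    nets = list(pool.map(
        lambda i: train_mlp(x_all[masks[i]], y_all[masks[i]], seed=i),
        range(K)))

def decomposed_predict(x):
    """Route each query point to the sub-network owning its sub-domain."""
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, K - 1)
    out = np.empty_like(x, dtype=float)
    for i in range(K):
        sel = idx == i
        if sel.any():
            out[sel] = predict(nets[i], x[sel])
    return out

x_test = np.linspace(0.0, 1.0, 101)
print("max abs error:", np.abs(decomposed_predict(x_test) - f(x_test)).max())

A thread pool keeps the sketch dependency-free; a real parallel implementation would use processes or MPI to sidestep Python's global interpreter lock, and the per-sub-domain networks could be built incrementally rather than with the fixed topology used here.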


