Nova Publishers
Horizons in Computer Science Research. Volume 7
Parallelization of Neural Network Building and Training: An Original Decomposition Method (pp. 193-223) $100.00
Authors: Marc Sauget, Sylvain Contassot-Vivier and Michel Salomon (IRMA/ENISYS, University of Franche-Comté, France, and others)
Abstract:
Since the first developments of neural networks by Pitts and McCulloch, the major problems encountered have lain in their building and learning. Indeed, there are results proving that a feed-forward multi-layer perceptron neural network can be used as a universal interpolator. Unfortunately, there is neither any indication of how to build an optimized topology nor a method to choose the learning algorithm best suited to train the network. Many learning algorithms give good results, like the classical back-propagation algorithm, for which various optimizations have been proposed. Some of these optimizations change the network structure, like the Square MLP or the HPU designs, whereas others improve the learning process, like the QuickProp or the Resilient back-Propagation (RPROP) algorithms. Nonetheless, these works are based on neural networks with a static structure that has to be inferred manually according to the user's experience. In this chapter, we present a way to adapt the neural network topology automatically to the application context. Specifically, we present an efficient method that makes it possible to parallelize both the building and the learning, based on an original domain decomposition. This chapter describes the corresponding algorithms for both aspects and gives comparative results showing the relevance of our approach. In addition, the exploitation of the obtained neural network is addressed in the last part: we present a multi-threaded version of our Neurad application, used to compute irradiation doses in any environment.
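
Among the learning-process improvements the abstract cites, RPROP is compact enough to summarize here. The following is a minimal NumPy sketch of the standard RPROP- update rule as published by Riedmiller and Braun, using their commonly cited constants (eta+ = 1.2, eta- = 0.5); it is offered for orientation only and is not code from the chapter. The function name rprop_step and its signature are our own.

    import numpy as np

    def rprop_step(w, grad, prev_grad, step,
                   eta_plus=1.2, eta_minus=0.5,
                   step_min=1e-6, step_max=50.0):
        # One RPROP- iteration: every weight has its own step size and moves
        # by the sign of its gradient only, never by the gradient magnitude.
        same = grad * prev_grad
        # Gradient kept its sign: grow the step; sign flipped: shrink it.
        step = np.where(same > 0.0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(same < 0.0, np.maximum(step * eta_minus, step_min), step)
        # After a sign flip, suppress this weight's update for one iteration.
        grad = np.where(same < 0.0, 0.0, grad)
        return w - np.sign(grad) * step, grad, step

The caller keeps the previous gradient and the per-weight step array between iterations; the steps are typically initialized to a small constant such as 0.1.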
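The chapter's core contribution, parallel building and learning through domain decomposition, can be pictured with a toy sketch. The code below is our own illustration of the general principle, not the authors' method: the input domain is split into equal subdomains, an independent small network is trained on each one concurrently, and each query is routed to the sub-network owning its subdomain. All names, sizes and hyperparameters are illustrative assumptions, and a thread pool stands in for the real parallel machinery (in Python a process pool or MPI would be needed for a true parallel speed-up).

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def train_subnet(x, y, seed, hidden=16, epochs=3000, lr=0.1):
        # Tiny one-hidden-layer tanh MLP fitted by batch gradient descent.
        rng = np.random.default_rng(seed)
        w1 = rng.normal(scale=0.5, size=(1, hidden)); b1 = np.zeros(hidden)
        w2 = rng.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
        X, Y = x[:, None], y[:, None]
        for _ in range(epochs):
            h = np.tanh(X @ w1 + b1)
            err = h @ w2 + b2 - Y                  # forward-pass residual
            dh = (err @ w2.T) * (1.0 - h * h)      # backprop through tanh
            w2 -= lr * h.T @ err / len(X); b2 -= lr * err.mean(0)
            w1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(0)
        return w1, b1, w2, b2

    def predict(net, x):
        w1, b1, w2, b2 = net
        return (np.tanh(x[:, None] @ w1 + b1) @ w2 + b2).ravel()

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0.0, 1.0, 1200))
    y = np.sin(6.0 * np.pi * x)                    # toy target to interpolate

    edges = np.linspace(0.0, 1.0, 5)               # four equal subdomains
    parts = [(x[(x >= a) & (x < b)], y[(x >= a) & (x < b)], i)
             for i, (a, b) in enumerate(zip(edges[:-1], edges[1:]))]

    with ThreadPoolExecutor() as pool:             # concurrent building/training
        nets = list(pool.map(lambda p: train_subnet(*p), parts))

    def evaluate(q):
        # Route every query point to the sub-network owning its subdomain.
        idx = np.clip(np.searchsorted(edges, q, side="right") - 1,
                      0, len(nets) - 1)
        out = np.empty_like(q)
        for i, net in enumerate(nets):
            mask = idx == i
            if mask.any():
                out[mask] = predict(net, q[mask])
        return out

    print(np.abs(evaluate(x) - y).mean())          # mean error on the samples

Each sub-network only has to learn a fragment of the target, which is what makes the independent, concurrent training useful; the chapter's actual algorithms additionally adapt each sub-network's topology, which this sketch does not attempt.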


This Item Is Currently Unavailable.

Nova Science Publishers
© Copyright 2004 - 2017
