Introduction:

Since the earliest times, the one thing that has set human beings apart from the rest of the animal kingdom is the brain. The most intelligent device on earth, the human brain is the driving force behind our ever-progressing species, pushing deeper into technology and development with each passing day.

Driven by this inquisitive nature, man tried to build machines that could process information intelligently and take decisions according to the instructions fed to them. The result was the machine that revolutionized the whole world: the computer (more technically speaking, the Von Neumann computer). Yet even though it could perform millions of calculations every second, display incredible graphics and three-dimensional animations, and play audio and video, it made the same mistake every time.

Practice could not make it perfect. So the quest for a more intelligent device continued. This research led to the birth of more powerful processors with high-tech equipment attached to them, supercomputers capable of handling more than one task at a time, and finally networks with resource-sharing facilities. But the problem of designing machines capable of intelligent self-learning still loomed large in front of mankind. Then the idea of imitating the human brain struck the designers, who began research on one of the technologies that would change the way computers work: Artificial Neural Networks.


Definition:

Neural Networks are a specialized branch of Artificial Intelligence.
In general, neural networks are simply mathematical techniques designed to accomplish a variety of tasks. A neural network uses a set of processing elements (or nodes) loosely analogous to neurons in the brain (hence the name, neural networks). These nodes are interconnected in a network that can then identify patterns in data as it is exposed to that data. In a sense, the network learns from experience just as people do. Neural networks can be configured in various arrangements to perform a range of tasks including pattern recognition, data mining, classification, and process modeling.
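To make the idea of a network that "learns from experience" concrete, here is a minimal sketch of a single learning node trained on a simple pattern (the logical AND function). The function names, the learning rate, and the use of the classic perceptron update rule are illustrative assumptions, not part of any specific library:

```python
# A minimal sketch of a single processing node whose weights are
# adjusted as it is exposed to data, so that it "learns" a pattern.
# Here the pattern is the logical AND function (assumed for illustration).

def step(x):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def train_node(samples, epochs=20, rate=0.1):
    """Perceptron learning rule: nudge each weight toward the target."""
    w = [0.0, 0.0]   # one weight per input connection
    b = 0.0          # bias term
    for _ in range(epochs):
        for inputs, target in samples:
            out = step(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
            err = target - out
            # Adjust each weight in proportion to its input and the error.
            w = [wi + rate * err * xi for wi, xi in zip(w, inputs)]
            b += rate * err
    return w, b

# The AND pattern: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_node(data)
predictions = [step(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
               for inputs, _ in data]
print(predictions)  # the node has learned the AND pattern: [0, 0, 0, 1]
```

Nothing in the node was hard-coded for AND; the same code exposed to a different linearly separable pattern would adapt its weights to that pattern instead, which is the sense in which the network "learns."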

Basics of Artificial Neural Models:

The human brain is made up of computing elements, called neurons, coupled with sensory receptors and effectors. The average human brain, roughly three pounds in weight and 90 cubic inches in volume, is estimated to contain about 100 billion cells of various types. A neuron is a special cell that conducts an electrical signal, and there are about 10 billion neurons in the human brain. The remaining 90 billion cells are called glial (or glue) cells, and these serve as support cells for the neurons. Each neuron is about one-hundredth the size of the period at the end of this sentence. Neurons interact through contacts called synapses. Each synapse spans a gap about a millionth of an inch wide. On average, each neuron receives signals via thousands of synapses.

The motivation for artificial neural network (ANN) research is the belief that a human's capabilities, particularly in real-time visual perception, speech understanding, and sensory information processing, and in adaptivity as well as intelligent decision making in general, come from the organizational and computational principles exhibited in the highly complex neural network of the human brain. The expectation of faster and better solutions presents us with the challenge of building machines using the same computational and organizational principles, simplified and abstracted from the neurobiology of the brain.

Artificial Neural Network:
Artificial neural networks (ANNs), also called parallel distributed processing systems (PDPs) and connectionist systems, are intended to model the organizational principles of the central nervous system, in the hope that the biologically inspired computing capabilities of the ANN will allow cognitive and sensory tasks to be performed more easily and more satisfactorily than with conventional serial processors. Because of the limitations of serial computers, much effort has been devoted to the development of parallel processing architectures in which the function of a single processor is at a level comparable to that of a neuron. If the interconnections between such simple fine-grained processors are made adaptive, a neural network results.
ANN structures, broadly classified as recurrent (involving feedback) or non-recurrent (without feedback), have numerous processing elements (also dubbed neurons, neurodes, units, or cells) and connections: forward and backward interlayer connections between neurons in different layers, lateral connections between neurons in the same layer, and self-connections between the input and output of the same neuron. Neural networks are distinguished from one another not only by their differing structures or topologies, but also by the way they learn, the manner in which computations are performed (rule-based, fuzzy, even non-algorithmic), and the component characteristics of the neurons or the input/output description of the synaptic dynamics. These networks are required to perform significant processing tasks through collective local interactions that produce global properties.
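The recurrent/non-recurrent distinction above can be sketched in a few lines. The network sizes, weights, and the tanh activation below are arbitrary assumptions chosen for illustration, not taken from any particular model:

```python
# An illustrative contrast between the two broad ANN topologies:
# non-recurrent (feedforward) vs. recurrent (with feedback).
import math

def feedforward(x, w_hidden, w_out):
    """Non-recurrent: signals flow strictly input -> hidden -> output."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
              for row in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

def recurrent_step(x, state, w_in, w_back):
    """Recurrent: the unit's previous output feeds back as an input."""
    return math.tanh(w_in * x + w_back * state)

# Feedforward: the same input always yields the same output.
y1 = feedforward([1.0, -1.0], [[0.5, 0.2], [0.1, -0.3]], [1.0, 1.0])
y2 = feedforward([1.0, -1.0], [[0.5, 0.2], [0.1, -0.3]], [1.0, 1.0])
assert y1 == y2  # no internal state, hence no memory

# Recurrent: a constant input yields an evolving output, because the
# feedback connection gives the network internal state (memory).
state = 0.0
outputs = []
for _ in range(3):
    state = recurrent_step(1.0, state, w_in=0.8, w_back=0.5)
    outputs.append(round(state, 3))
print(outputs)  # the output changes step to step despite constant input
```

The feedback connection is what lets recurrent networks respond to sequences and temporal patterns, while feedforward structures compute a fixed mapping from input to output.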
Since the components and connections, and their packaging under stringent spatial constraints, make the system large-scale, the role of graph theory, algorithms, and neuroscience is pervasive.

How do Neural Networks differ from Conventional Computers?

Neural networks perform computation in a very different way from conventional computers, where a single central processing unit sequentially dictates every piece of the action. Neural networks are built from a large number of very simple processing elements that individually deal with pieces of a big problem. A processing element (PE) simply multiplies its inputs by a set of weights and transforms the result into an output value (often via a table lookup). The principles of neural computation come from the massively parallel nature of the processing tasks, and from the adaptive nature of the parameters (weights) that interconnect the PEs.
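A single PE of the kind just described can be sketched as follows. The sigmoid activation, the table resolution, and the sample inputs and weights are assumptions made for illustration; the table lookup stands in for the nonlinear transform mentioned above:

```python
# A sketch of one processing element (PE): multiply inputs by weights,
# sum, then map the sum to an output value through a precomputed
# lookup table approximating a sigmoid (an assumed activation choice).
import math

# Precompute the sigmoid on a grid; simple PE implementations often
# replace the costly exponential with a table lookup like this.
LO, HI, STEPS = -6.0, 6.0, 1200
TABLE = [1.0 / (1.0 + math.exp(-(LO + i * (HI - LO) / STEPS)))
         for i in range(STEPS + 1)]

def activation(s):
    """Look up the sigmoid of s in the precomputed table."""
    s = max(LO, min(HI, s))                  # clamp to the table range
    i = round((s - LO) * STEPS / (HI - LO))  # nearest table index
    return TABLE[i]

def processing_element(inputs, weights):
    """One PE: weighted sum of its inputs, then a table-lookup output."""
    return activation(sum(w * x for w, x in zip(weights, inputs)))

out = processing_element([0.5, -1.0, 2.0], [0.4, 0.3, 0.1])
print(round(out, 3))  # weighted sum is 0.1, so output ≈ sigmoid(0.1)
```

Each PE on its own does almost nothing; the computational power of the network comes from many such elements operating in parallel, with the weights between them adapted during learning.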

