The speed of the physical processing elements in current neural network simulators is limited not only by the time multiplexing of multiple neurons onto each element, but also by the sequential computation of neuron interconnections.
In this thesis, a neurocomputing architecture called the $\underline{Neuron Machine}$ is proposed, which can compute a large number of neuron interconnections in parallel inside the physical processing element.
Its memory produces the information for a set of interconnections, called an interconnection fold, in a single memory access. To compute neuron states, an array of multipliers and a tree of adders compute the net-input-sum in parallel from the interconnection-fold data read out of memory. One of a set of processors, called Soma processors, then takes the net-input-sum and evaluates a threshold function, producing the new neuron state. The weights of the interconnections in an interconnection fold are computed in parallel by SIMD PEs attached to the memory outputs. Pipelined operation throughout the architecture accelerates computation so that a full fold of interconnections is processed every memory cycle, in the computation of both states and weights.
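As a rough software illustration of the fold computation described above, the following C sketch models one pipeline pass: a multiplier array forms all products of a fold at once, an adder tree reduces them to the net-input-sum, and a soma stage applies a threshold function. The fold width of 8, the identifier names, and the hard-threshold activation are assumptions chosen for illustration, not details fixed by the architecture.

\begin{verbatim}
#include <stdio.h>

#define FOLD 8  /* interconnections fetched per memory access (assumed) */

/* Soma-processor stage: a hard threshold, one of many possible
 * threshold functions (assumed for this sketch). */
static double soma_threshold(double net) {
    return net >= 0.0 ? 1.0 : 0.0;
}

/* One pipeline pass over a fold: the multiplier array forms FOLD
 * products (in parallel in hardware), and the adder tree reduces
 * them in log2(FOLD) levels of pairwise sums. */
static double fold_step(const double w[FOLD], const double x[FOLD]) {
    double prod[FOLD];
    for (int i = 0; i < FOLD; ++i)     /* multiplier array */
        prod[i] = w[i] * x[i];
    for (int stride = FOLD / 2; stride >= 1; stride /= 2)
        for (int i = 0; i < stride; ++i)  /* adder-tree level */
            prod[i] += prod[i + stride];
    return prod[0];                    /* net-input-sum of this fold */
}

int main(void) {
    double w[FOLD] = {0.5, -0.2, 0.1, 0.4, -0.3, 0.2, 0.6, -0.1};
    double x[FOLD] = {1, 0, 1, 1, 0, 1, 1, 0};
    double net = fold_step(w, x);      /* one memory cycle's worth of work */
    printf("net = %f, state = %f\n", net, soma_threshold(net));
    return 0;
}
\end{verbatim}

In the hardware described above, the multiplier array and each adder-tree level form pipeline stages, so a new fold can enter the pipeline every memory cycle; the sequential loops here only emulate that behavior in software.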
It is shown that the efficiency of the architecture is comparable to the optimal efficiency of digital neurocomputing architectures. Since the architecture is free of the interprocessor communication problem from which current multiprocessor neural network simulators suffer severely, it is well suited to very-high-speed neurocomputers.