It is known that linear neural networks incorporating the bit-significance concept outperform those without it in memory capacity and error-correction capability. This thesis demonstrates that a pair-significance concept, an extension of bit significance to second order, can be introduced into quadratic neural networks. The proposed quadratic neural networks with pair significance can be trained by either a Hebbian learning rule or an error-driven learning rule.
The quadratic associative memory with pair significance, trained by the Hebbian learning rule, is shown to have greater memory capacity and a better recall rate than the conventional memory without pair significance. The pair-significance concept also reveals that some of the interconnections in a quadratic associative memory are redundant or even disturbing: removing them allows more vectors to be stored and improves the recall rate. The number of interconnections in quadratic associative memories can therefore be reduced while retaining the memory capacity and recall rate.
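To make the storage scheme concrete, the following is a minimal sketch of a plain quadratic (second-order) Hebbian associative memory, without the pair-significance weighting that is the thesis's contribution. The tensor construction T[i,j,k] = Σ_m x_i x_j x_k and the sign-threshold recall are the standard higher-order Hopfield formulation and are assumptions here, not reproduced from the thesis:

```python
import numpy as np

def train_quadratic_hebbian(patterns):
    """Hebbian training of a quadratic associative memory.

    patterns: array of shape (M, N), bipolar entries (+1/-1).
    Returns the third-order weight tensor
        T[i, j, k] = sum_m x_i^m * x_j^m * x_k^m.
    """
    return np.einsum('mi,mj,mk->ijk', patterns, patterns, patterns)

def recall(T, x):
    """One synchronous recall step: y_i = sign(sum_{j,k} T[i,j,k] x_j x_k)."""
    return np.sign(np.einsum('ijk,j,k->i', T, x, x))

# Store two mutually orthogonal bipolar patterns and recall them.
P = np.array([[1, 1, 1, 1],
              [1, 1, -1, -1]], dtype=float)
T = train_quadratic_hebbian(P)
print(recall(T, P[0]))  # recovers the first stored pattern
```

For a stored pattern x^m, the recall sum reduces to Σ_m x_i^m (x^m · x)^2, so orthogonal patterns are recovered exactly; the thesis's pair-significance weighting would modify which (j, k) interconnection terms contribute to this sum.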
When error-driven learning is applied to the quadratic neural network with pair significance, it becomes a quadratic training-by-adaptive-gain (TAG) model, a second-order extension of the linear TAG. This network can capture important nonlinear mappings, an attribute that significantly improves the pattern-classification power of the linear TAG model.
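The advantage of the quadratic terms can be illustrated with a simple error-driven update. The sketch below uses a generic perceptron-style rule on a quadratic feature map as a stand-in for the TAG rule (the adaptive-gain update itself is not reproduced here); it shows that pairwise products x_j x_k make the XOR mapping, which defeats any linear model, learnable:

```python
import numpy as np

def quad_features(x):
    """Augment a bipolar input with a bias and all pairwise products x_j*x_k."""
    n = len(x)
    pairs = [x[j] * x[k] for j in range(n) for k in range(j + 1, n)]
    return np.concatenate([[1.0], x, pairs])

def train_error_driven(X, targets, epochs=50, lr=0.1):
    """Perceptron-style error-driven training on the quadratic feature map.

    This is an illustrative stand-in for the TAG learning rule.
    """
    w = np.zeros(len(quad_features(X[0])))
    for _ in range(epochs):
        for x, t in zip(X, targets):
            y = 1.0 if w @ quad_features(x) > 0 else -1.0
            w += lr * (t - y) * quad_features(x)  # update only on errors
    return w

def predict(w, x):
    return 1.0 if w @ quad_features(x) > 0 else -1.0

# XOR in bipolar coding: the target equals -x1*x2, so it is linear
# in the quadratic feature x1*x2 even though it is nonlinear in x.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
t = np.array([-1, 1, 1, -1], dtype=float)
w = train_error_driven(X, t)
print([predict(w, x) for x in X])
```

A purely linear network cannot represent this mapping, which is the sense in which the quadratic model "captures important nonlinear mappings."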
The quadratic TAG model was implemented optically, using a liquid crystal light valve and a ground glass that realizes the random interconnections.