TR00-086 | 26th September 2000 00:00

#### On the Complexity of Computing and Learning with Multiplicative Neural Networks

**Abstract:**
In a great variety of neuron models, neural inputs are combined using the summing operation. We introduce the concept of multiplicative neural networks, which contain units that multiply their inputs instead of summing them and thus allow inputs to interact nonlinearly. The class of multiplicative networks comprises such widely known and well-studied network types as higher-order networks and product unit networks.
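
To make the distinction concrete, here is a minimal sketch (not taken from the report) contrasting a standard summing sigmoidal unit with a product unit, which raises each input to a weighted power and multiplies the results; the function names, parameter values, and the positivity assumption on the inputs are illustrative choices of ours.

```python
import numpy as np

def summing_unit(x, w, b=0.0):
    """Conventional sigmoidal unit: a squashed weighted *sum* of the inputs."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def product_unit(x, w):
    """Product unit: the *product* of the inputs raised to real-valued
    weighted powers, prod_i x_i ** w_i.  Inputs are taken to be positive
    here so that arbitrary real exponents are well defined."""
    return np.prod(np.power(x, w))

x = np.array([0.5, 2.0, 1.5])
w = np.array([1.0, -0.5, 2.0])
print(summing_unit(x, w))   # inputs interact only additively
print(product_unit(x, w))   # inputs interact multiplicatively (nonlinearly)
```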

We investigate the complexity of computing and learning for multiplicative neural networks. In particular, we derive upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudo dimension for various types of networks with multiplicative units. As the most general case, we consider feedforward networks consisting of product and sigmoidal units, showing that their pseudo dimension is bounded from above by a polynomial with the same order of magnitude as the currently best known bound for purely sigmoidal networks. Moreover, we show that this bound holds even when the unit type, product or sigmoidal, may be learned. Crucial for these results are calculations of solution set components bounds for new network classes. As to lower bounds, we construct product unit networks of fixed depth with superlinear VC dimension.
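
As a purely illustrative sketch of the network class referred to above, the following toy feedforward net has a hidden layer that mixes product and sigmoidal units and a sigmoidal output unit; the architecture, layer sizes, and random parameters are assumptions made for the example and do not reflect any construction in the report.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mixed_layer(x, prod_w, sum_w, sum_b):
    """Hidden layer mixing both unit types: each row of prod_w parameterises
    one product unit, each row of sum_w one sigmoidal unit."""
    prod_out = np.array([np.prod(np.power(x, w)) for w in prod_w])
    sum_out = sigmoid(sum_w @ x + sum_b)
    return np.concatenate([prod_out, sum_out])

def network(x, p):
    """Feedforward net with one mixed hidden layer and a sigmoidal output unit."""
    h = mixed_layer(x, p["prod_w"], p["sum_w"], p["sum_b"])
    return sigmoid(p["out_w"] @ h + p["out_b"])

rng = np.random.default_rng(0)
params = {
    "prod_w": rng.normal(size=(2, 3)),   # 2 product units over 3 inputs
    "sum_w": rng.normal(size=(2, 3)),    # 2 sigmoidal units over 3 inputs
    "sum_b": rng.normal(size=2),
    "out_w": rng.normal(size=4),         # output unit reads all 4 hidden units
    "out_b": 0.0,
}
x = np.array([0.5, 2.0, 1.5])            # positive inputs, as before
print(network(x, params))
```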

For higher-order sigmoidal networks we establish polynomial bounds that, in contrast to previous results, do not involve any restriction of the network order. We further consider various classes of higher-order units, also known as sigma-pi units, characterized by connectivity constraints. In terms of these we derive some asymptotically tight bounds.
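
For reference, higher-order (sigma-pi) units are commonly written as a sigmoid applied to a weighted sum of monomials over subsets of the inputs; the notation below, including the index sets S_j and the definition of the order as the largest monomial size, follows common usage and is not quoted from the report.

```latex
% A higher-order (sigma-pi) unit on inputs x_1, ..., x_n:
% each weight w_j multiplies a monomial over the index set S_j,
% and the order of the unit is the size of its largest monomial.
\[
  y \;=\; \sigma\!\Bigl( w_0 + \sum_{j=1}^{k} w_j \prod_{i \in S_j} x_i \Bigr),
  \qquad \text{order} = \max_{1 \le j \le k} |S_j| .
\]
```

In this notation, the connectivity constraints mentioned above can be read as restrictions on the admissible index sets S_j.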

Multiplication plays an important role both in neural modeling of biological behavior and in applications of artificial neural networks. We also briefly survey research in biology and in applications where multiplication is considered an essential computational element. The results we present here provide new tools for assessing the impact of multiplication on the computational power and the learning capabilities of neural networks.