## ANN models: Sine Net

The Sine Net is a family of neural networks developed from 1999 onwards by Massimo Buscema at Semeion. It is characterised by a specific modality of information processing in each node that influences both the evaluation of the output and the learning phase. This modality can be applied to the topologies of existing neural networks and introduces substantial modifications to the learning equations. In practice, the Sine Net constitutes a new and general learning law, able to produce remarkable convergence and a remarkable capacity for extrapolation even starting from databases of considerable complexity. The experiments carried out have indeed reported very interesting results.

**Background** In a classic neural network each node is an element that receives an input weighted by the input nodes, processes it and filters the result through a non-linear function.

The basic idea of the SineNet is to give each node receptors placed between the inputs and the summation.

The receptors transform the inputs non-linearly and eventually compress all the inputs into a value that is then filtered through a non-linear function. While in a classic network the input is processed only from a quantitative point of view, the receptors make possible a processing of the input that is both quantitative and qualitative, through sinusoidal functions. For each i-th coordinate of the input space, the sinusoidal functions introduce a dependence of each i-th transformed input value on its position in that space.

The output value of each node depends on the relationship between each input value and the specific wavelength (weight) by which it is multiplied. As a result, the output value of each node is a non-linear function of a sum of sinusoidal functions. The wavelengths (the weights) of each input are modified during the training phase.
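As a minimal sketch of the contrast just described (not Semeion's actual implementation: the function names, the use of a plain sine as the receptor, and the absence of a bias term are all assumptions for illustration), the two node types can be written as:

```python
import math

def sigmoid(z):
    # Logistic activation, playing the role of the outer non-linear filter F.
    return 1.0 / (1.0 + math.exp(-z))

def classic_node(inputs, weights):
    # Classic node: F applied to the weighted sum of the inputs.
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)))

def sine_node(inputs, weights):
    # SineNet-style node (sketch): each input first passes through a
    # sinusoidal "receptor" whose wavelength is set by the weight, and
    # F is applied to the sum of these sinusoids.
    return sigmoid(sum(math.sin(w * x) for w, x in zip(weights, inputs)))
```

In the sine node, changing a weight changes the wavelength of the corresponding receptor, so the node's response to the same input is periodic rather than monotonic in the weight.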

**The algorithm** We summarise through the equations the differences between the data processing in a classic network and in a SineNet. We first introduce the terminology used.

[s]: the generic layer of the network (s = 1 indicates the input layer; growing values indicate the hidden and output layers);

x_{j}^{[s]}: the output value of the j-th node in the layer [s];

x_{i}^{[s-1]}: the i-th input of the generic node of the layer [s], coming from the i-th node of the layer [s-1];

x_{0}^{[s-1]}: a "false" input of the generic node in the layer [s], introduced surreptitiously to represent, in a mathematically convenient way, an ad hoc input value, generally set to 1;

w_{ij}^{[s]}: the value of the weight of the connection between the i-th node of the layer [s-1] and the j-th node of the layer [s];

n: the number of inputs of the node.

The following figure shows the inputs of the generic node j.

In the classic network each node applies a non-linear transformation to a linear transformation of its inputs:
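Using the terminology introduced above, this relation can be written as:

$$x_{j}^{[s]} = F\left(L_{j}^{[s]}\right)$$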

In this equation the non-linear transformation F is a sigmoid-type function, while the linear transformation L is the weighted sum of the inputs:
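With the "false" input x_{0}^{[s-1]} carrying the bias term, the weighted sum reads:

$$L_{j}^{[s]} = \sum_{i=0}^{n} w_{ij}^{[s]} \, x_{i}^{[s-1]}$$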

Combining the two equations we have:
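Substituting the weighted sum into the sigmoid:

$$x_{j}^{[s]} = F\left(\sum_{i=0}^{n} w_{ij}^{[s]} \, x_{i}^{[s-1]}\right)$$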

In the SineNet, each node applies a non-linear transformation to an input that has already undergone a non-linear transformation.
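In parallel with the classic case, this can be written as:

$$x_{j}^{[s]} = F\left(G_{j}^{[s]}\right)$$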

In this relationship the non-linear transformation F is, as in the previous case, a sigmoid-type function, while the non-linear transformation G is the sum of the weighted inputs processed through a non-monotonic sinusoidal function, which introduces the qualitative processing of the data.
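A form of G consistent with this description (each input multiplied by its weight, read as a wavelength, and passed through a sine receptor before summation; the exact receptor function used by Semeion may differ) is:

$$G_{j}^{[s]} = \sum_{i=0}^{n} \sin\left(w_{ij}^{[s]} \, x_{i}^{[s-1]}\right), \qquad x_{j}^{[s]} = F\left(\sum_{i=0}^{n} \sin\left(w_{ij}^{[s]} \, x_{i}^{[s-1]}\right)\right)$$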

**Applications and comparisons**

We have compared the SineNet with the Back Propagation network (BP), both on very simple problems and on real data.

In the first case we analysed the quality of convergence of the two networks on the XOR problem: the results show how the use of the sine function leads the SineNet to a faster convergence than the BP.

Equally interesting results were obtained comparing the two architectures on a database related to breast cancer supplied by the University of Wisconsin. Globally the SineNet obtains better results: it is more effective than the BP in avoiding overfitting (lower capacity in testing and generalization compared with the capacity in training), and its performance is to some extent independent of the specific configuration (for example, the number of hidden nodes).

For a more exhaustive treatment, please see Technical Paper n. 21.