Signal Processing by a Single Neuron

(1)

Development of Complex Curricula for Molecular Bionics and Infobionics Programs within a consortial framework

Consortium leader

PETER PAZMANY CATHOLIC UNIVERSITY

Consortium members

SEMMELWEIS UNIVERSITY, DIALOG CAMPUS PUBLISHER

The Project has been realised with the support of the European Union and has been co-financed by the European Social Fund.



(2)

Signal Processing by a Single Neuron

(Signal Processing with an Artificial Neuron)

Treplán Gergely


Digital- and Neural Based Signal Processing &

Kiloprocessor Arrays

(3)

Outline

• Historical notes

• Artificial neuron (McCulloch-Pitts neuron)

• Elementary set separation by a single neuron

• Implementation of a single logical function by a single neuron

• Pattern recognition by a single neuron

• The learning algorithm

• Questions

• Example problems

(4)

Historical notes

• Threshold Logic Unit (TLU) proposed by Warren McCulloch and Walter Pitts in 1943;

• Hebb’s first rule for self-organized learning in 1949;

• Perceptron developed by Frank Rosenblatt in 1957;

• ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) by Hoff and Widrow in 1960;

• LMS (least mean squares) learning rule by Widrow and Hoff in 1960;

• Limitations of the perceptron (Minsky and Papert-1969);

• Back-propagation algorithm (1986);

• Radial-basis function network (Broomhead and Lowe-1988).

(5)

The artificial neuron (1)

• The artificial neuron is an information processing unit that is the elementary building block of an artificial neural network.

• Its structure is abstracted from the biological model.

(6)

The artificial neuron (2)

• A crude simplification of a nerve cell is depicted in the Figure. Stimuli arrive from other neurons. From the synapses the dendrites carry the signal to the nerve cell body, where it is summed up, and if it reaches a certain level an output is generated. A synapse is called excitatory if stimulating it increases the probability of generating an output, and inhibitory otherwise.


(7)

The artificial neuron (3)

• The following artificial model is only a crude copy of the nerve cell; nevertheless, some important features can be extracted from it!

Figure: parts of the biological nerve cell — dendrite, soma, nucleus, myelin sheath, Schwann cell, nodes of Ranvier, axon terminal.

(8)

The artificial neuron (4)

• The artificial neuron is connected to the outside world through the input signals xi, where the synaptic strength is represented by the weight wi.

• Basically, what arrives at the AN is a weighted sum of the input signals. Each wi quantifies two possible effects: if wi > 0 the input is amplified, otherwise it is attenuated, so wi is the descriptor of the synapse.

• There is also a threshold b to which the neuron compares the weighted sum of the inputs.

(9)

The artificial neuron (5)

• The output value is determined by a φ(.) nonlinearity, which is also called the threshold function. To obtain a more compact form, an extended weight vector can be defined with a new component w0 representing the threshold b (together with a constant input x0 = 1), so that the comparison with the threshold becomes a single inequality. This representation can be seen in the Figure.

(10)

The artificial neuron (6)

• Using this interpretation we have the same model as above if the input x0 belonging to w0 is a constant 1, and now it can easily be seen that the final output is a nonlinearity of the inner product of the weights and inputs. Mathematically, the output y is:

$$y = \varphi\left(\sum_{i=0}^{N} w_i x_i\right) = \varphi\left(\mathbf{w}^T \mathbf{x}\right).$$

• Or, using the threshold notation:

$$y = \varphi\left(\sum_{i=1}^{N} w_i x_i - b\right).$$

(11)

The artificial neuron (7)

Activation function (1)

• Let the activation or threshold function φ(.) be a monotone, differentiable, increasing function, for example a sigmoid (arctan-like) curve. Let it be called the soft nonlinearity function, which is shown in the next Figure.

• In this case the output is

$$y = \varphi(u, \lambda) = \frac{1}{1 + \exp(-2\lambda u)},$$

• where

$$u = \sum_{i=1}^{N} w_i x_i - b.$$

(12)

The artificial neuron (8)

Activation function (2)

• Sigmoid nonlinearity function

(13)

The artificial neuron (8)

Activation function (3)

• If the activation function is the sgn(.) function, it is the so-called hard nonlinearity function, shown in the Figure. This yields the formula that fully describes the operation of the artificial neuron.

• In this case the output is

$$y = \operatorname{sgn}(u) = \begin{cases} +1, & \text{if } u \ge 0, \\ -1, & \text{else,} \end{cases}$$

• where

$$u = \sum_{i=1}^{N} w_i x_i - b.$$

(14)

The artificial neuron (8)

Activation function (4)

• Signum (hard nonlinearity) function

(15)

The artificial neuron (9)

• Relation between activation functions:
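The figure itself is not reproduced here, but the relation it presumably illustrates is that as the gain λ grows, the soft nonlinearity approaches the hard one. A minimal Python check of this relation (the bipolar rescaling 2φ(u) − 1, the sample points and the λ values are illustrative assumptions):

```python
import numpy as np

def soft(u, lam):
    # Soft nonlinearity from slide 11: logistic function with gain lambda
    return 1.0 / (1.0 + np.exp(-2.0 * lam * u))

def hard(u):
    # Hard nonlinearity from slide 13: sgn(.), with sgn(0) = +1
    return np.where(u >= 0, 1.0, -1.0)

# Sample points (u = 0 excluded, where the limiting value is ambiguous)
u = np.array([-2.0, -1.0, -0.5, -0.1, 0.1, 0.5, 1.0, 2.0])
for lam in (1.0, 10.0, 100.0):
    # Rescale the (0,1)-valued soft output to (-1,1) so it is comparable with sgn(u)
    bipolar = 2.0 * soft(u, lam) - 1.0
    print(f"lambda={lam:6.1f}  max|2*soft-1 - sgn| = {np.max(np.abs(bipolar - hard(u))):.4f}")
```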

(16)

The artificial neuron (10)

• The formula above fully describes the operation of a McCulloch-Pitts (artificial) neuron:

When an input vector arrives at the AN, it
1. computes the weighted sum of the inputs,
2. compares the result with a threshold,
3. passes the result through a nonlinearity function.

• Again: this is a gross simplification that captures some important features of the biological nerve cell.

• It will soon be revealed that this is nevertheless a powerful model, with which hard information-processing problems can be solved.
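A minimal Python sketch of these three steps for a single neuron (the particular input, weights and threshold are illustrative assumptions, as is the sign convention w0 = −b used in the extended form):

```python
import numpy as np

def sgn(u):
    # sgn(.) as used in these slides: +1 if u >= 0, -1 otherwise
    return 1.0 if u >= 0 else -1.0

def neuron_output(x, w, b, phi=sgn):
    u = float(np.dot(w, x)) - b   # 1.-2. weighted sum compared with the threshold b
    return phi(u)                 # 3. result passed through the nonlinearity phi(.)

def neuron_output_ext(x, w_ext, phi=sgn):
    # Equivalent extended form: constant input x0 = 1, threshold absorbed into w0
    x_ext = np.concatenate(([1.0], x))
    return phi(float(np.dot(w_ext, x_ext)))

x = np.array([0.5, -1.0, 2.0])   # illustrative input
w = np.array([1.0, 0.3, -0.2])   # illustrative weights
b = 0.1                          # illustrative threshold
print(neuron_output(x, w, b),
      neuron_output_ext(x, np.concatenate(([-b], w))))   # both print -1.0
```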

(17)

Elementary set separation by a single neuron (1)

• Let φ(.) be the hard nonlinear function; then the output is a discrete −1 or +1 under this assumption:

$$\varphi(u) = \operatorname{sgn}(u) = \begin{cases} +1, & \text{if } u \ge 0, \\ -1, & \text{else.} \end{cases}$$

• Rewriting the formula with u substituted by w^T x, the output is +1 if the weighted sum of the inputs is greater than or equal to zero, and −1 if the argument is smaller than zero:

$$y = \varphi(u) = \operatorname{sgn}\left(\mathbf{w}^T \mathbf{x}\right) = \begin{cases} +1, & \text{if } \mathbf{w}^T \mathbf{x} \ge 0, \\ -1, & \text{else.} \end{cases}$$

(18)

Elementary set separation by a single neuron (2)

• This is a very important formula, because in its geometrical interpretation it describes a separation by a linear hyperplane. From elementary geometry we know that this is the equation of a hyperplane:

$$\mathbf{w}^T \mathbf{x} = 0.$$

• Since this is the equation of an N-dimensional hyperplane, the weights of the artificial neuron represent a linear decision boundary in a two-class pattern-classification problem.

(19)

Elementary set separation by a single neuron (3)

• Illustration of the hyperplane (in this example, a straight line) as the decision boundary for a two-dimensional, two-class pattern-classification problem.

(20)

Elementary set separation by a single neuron (4)

• If we represent the hyperplane in a 2-D input space, this equation determines a hyperplane which is a simple line. What lies above that line is classified as +1, and what lies below it as −1.

• The simplest artificial neuron, with a 2-D input.

Figure: the 2-D AND truth table, the input space with a potential decision boundary (normal vector w), and the corresponding neuron.

x1  x2 | y = x1 AND x2
 1   1 |  1
 1  −1 | −1
−1   1 | −1
−1  −1 | −1

The neuron weights the inputs x1, x2 and a constant 1 by w1, w2, w0, sums them (Σ) and applies sgn(.):

$$y = \operatorname{sgn}\left(w_1 x_1 + w_2 x_2 + w_0\right).$$

(21)

Elementary set separation by a single neuron (5)

• To give a specific example, let the weight vector be w = (3, 2, 1), so the hyperplane is described by the following equation:

$$3 + 2x_1 + x_2 = 0.$$

• Explicitly it means:

$$x_2 = -3 - 2x_1.$$

• The following Figure shows the decision line of this equation.
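A quick numerical check of this example (the sample points are illustrative assumptions):

```python
w0, w1, w2 = 3.0, 2.0, 1.0                 # w = (3, 2, 1) from the example

def classify(x1, x2):
    # sgn(w0 + w1*x1 + w2*x2): +1 above the line x2 = -3 - 2*x1, -1 below it
    return 1 if w0 + w1 * x1 + w2 * x2 >= 0 else -1

print(classify(0.0, 0.0))    # 3 + 0 + 0 >= 0  -> +1 (above the line)
print(classify(0.0, -5.0))   # 3 + 0 - 5 <  0  -> -1 (below the line)
print(classify(-1.5, 0.0))   # 3 - 3 + 0 == 0  -> +1 (on the boundary)
```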

(22)

Elementary set separation by a single neuron (6)

• Decision boundary of the example.

(23)

Elementary set separation by a single neuron (7)

• The decision domains can easily be determined according to the sign of w2. Given a weight vector, this is how the set separation can be visualized.

• Furthermore, if we change the components of the weight vector, we get different numbers in the equation, which means a different separation line.

As a result, the vector w represents the programmability of the artificial neuron, as the figure shown above illustrates.

(24)

Elementary set separation by a single neuron (8)

• Why is it so important to use set separation by a hyperplane?

• One immediate application is the implementation of logic functions.

• Furthermore, there are plenty of mathematical and computational tasks which can be reduced to a set separation problem with a linear hyperplane.

• Let us observe the implementation of logical functions by a single neuron!

(25)

Implementation of a single logical function by a single neuron (1)

• First let us focus on the 2-D AND function. Consider the truth table of the logical AND function. The Figure also shows the input space with its simple geometric interpretation, where x1 and x2 are the inputs.

• 2-D AND from truth table to visualization

(26)

Implementation of a single logical function by a single neuron (2)

• All the input points can be seen on the plot, and the 2-D AND function is basically a set separation, because only one point has to be classified as +1 and all the others as −1. The correct decision can be implemented if we choose the right weights. The truth table determines for each input vector whether the output is +1 or −1. On the left side the geometric interpretation can be seen, and it is easy to notice that this is indeed a problem which can be solved with a linear separator. If we know the decision boundary, we can obtain the weights from the equation of the line.

(27)

Implementation of a single logical function by a single neuron (3)

• What we have in the figure is the separation surface we needed, which mathematically is the following equation:

$$-1.5 + x_1 + x_2 = 0.$$

• As a result:

$$x_2 = 1.5 - x_1.$$

• This separation surface looks as shown in the figure, so it can easily be seen that the weight vector is w = (−1.5, 1, 1).

(28)

Implementation of a single logical function by a single neuron (4)

• The next figure shows how to implement the 2-D AND function by an artificial neuron.

• Solution of the logical 2-D AND by a single neuron.

(29)

Implementation of a single logical function by a single neuron (5)

• Furthermore, instead of 2-D we can consider the R-dimensional AND function. The corresponding weight vector implementing the R-dimensional AND is the following.

• The weights corresponding to the inputs are all 1 and the threshold should be R − 0.5. As a result the actual weights of the neuron are:

$$\mathbf{w}^T = \left(-(R - 0.5), 1, \ldots, 1\right).$$
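A short Python sketch of this R-dimensional AND neuron; the exhaustive check over {0, 1}-coded inputs is an illustrative assumption (the same weights also happen to work for ±1-coded inputs), and for R = 2 the weights reduce to the w = (−1.5, 1, 1) of the previous slides:

```python
import itertools
import numpy as np

def and_weights(R):
    # Extended weight vector of the R-dimensional AND: (-(R - 0.5), 1, ..., 1)
    return np.array([-(R - 0.5)] + [1.0] * R)

def fire(w, x):
    # sgn(w^T [1, x]): +1 if the weighted sum reaches the threshold, -1 otherwise
    return 1 if w[0] + float(np.dot(w[1:], x)) >= 0 else -1

R = 5
w = and_weights(R)
for bits in itertools.product([0, 1], repeat=R):
    expected = 1 if all(bits) else -1
    assert fire(w, np.array(bits, dtype=float)) == expected
print("R =", R, "AND weights:", w)
```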

(30)

Implementation of a single logical function by a single neuron (6)

• In the same way the OR function can also be implemented by a single artificial neuron, since it is again a linear separation problem, as shown by the next Figure; the weights must be w = (−0.5, 1, 1).

• 2-D OR problem solved by a linear separator.

(31)

Implementation of a single logical function by a single neuron (7)

• However, we cannot implement every logical function by a linear hyperplane. Unfortunately there are some which cannot be implemented by a single neuron; for example the exclusive OR (XOR) is such a function, which entails the separation given in the next Figure.

• The 2-D XOR problem cannot be solved by one linear separator.

(32)

Implementation of a single logical function by a single neuron (8)

• As can be seen, it cannot be separated by one linear hyperplane, so more neurons should be used: in this example one neuron implements one line, another implements the other line, and a third neuron realizes the AND function to combine the two separations.

• Is there any neural based solution for nonlinear separation problems?

• Let us build neural networks!

(33)

Implementation of a single logical function by a single neuron (9)

• As a result, even the XOR function can be implemented by a neural network, for example with 3 neurons in a feed-forward manner, as sketched below.

• 2-D XOR problem can be solved by a network of neurons.
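A minimal sketch of such a three-neuron feed-forward XOR network. The slides do not specify the network's weights, so the choices below (an OR-like and a NAND-like hidden neuron followed by the AND output neuron, with {0, 1}-coded inputs) are illustrative assumptions:

```python
import itertools

def sgn(u):
    return 1 if u >= 0 else -1

def xor_net(x1, x2):
    # Hidden layer: one neuron per separating line (cf. slide 32)
    h1 = sgn(x1 + x2 - 0.5)       # OR-like neuron,   weights (-0.5, 1, 1)
    h2 = sgn(-x1 - x2 + 1.5)      # NAND-like neuron, weights (1.5, -1, -1)
    # Output layer: AND neuron combining the two half-planes, weights (-1.5, 1, 1)
    return sgn(h1 + h2 - 1.5)

for x1, x2 in itertools.product([0, 1], repeat=2):
    print(x1, x2, "->", xor_net(x1, x2))   # +1 exactly when x1 != x2
```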

(34)

Implementation of a single logical function by a single neuron (10)

• The very important conclusion is that the elementary artificial neuron is a linear set separator in the N-dimensional input space, where programmability is ensured by changing the free parameters of the system depending on how well it classifies. An AN can therefore implement a class of logical functions, more precisely the linearly separable ones. To implement an arbitrary logical function we need to deploy several neurons, which results in an artificial neural network.

(35)

Implementation of a single logical function by a single neuron (11)

• Feed forward artificial neural network.

(36)

Pattern recognition by a single neuron (1)

• Now it is going to be shown how to solve an elementary pattern recognition task with a single neuron; this is why the artificial neuron is also called a perceptron: it can intelligently recognize patterns.

• Let us assume that at the input there is a speech pattern, which is a continuous signal, and that someone says either "yes" or "no".

• Many of the ATM systems in the US already follow this pattern: no client order is executed without verbal validation (your own "yes-or-no" verbal verification is usually needed to proceed with any of your financial actions).

(37)

Pattern recognition by a single neuron (2)

• Let us assume that the input is a speech pattern in the form of a continuous signal which represents a "yes" or a "no".

• This continuous signal goes into an A/D converter, where it is transformed into a digital signal.

• After that, the next step is to extract the features with an FFT; the result is represented by the vector x.

• Then all of the components are fed into an artificial neuron, which computes the weighted sum of the input, compares it with a threshold, and if the sum is greater than or equal to the threshold the output is +1, otherwise it is −1.

(38)

Pattern recognition by a single neuron (3)

• The block diagram of the speech pattern recognition by an artificial neuron.

(39)

Pattern recognition by a single neuron (4)

• This is how we would like to solve the speech pattern recognition task: by a separation with a linear hyperplane. The model is now fixed; the next question is what the weights of this neuron are, i.e., what the program of the perceptron should be to get correct recognition.

• In a more general view (next Figure), any pattern can arrive that takes one of two possible values s(1) or s(2), which represent the Fourier transforms of "yes" and "no".

(40)

Pattern recognition by a single neuron (5)

• Generalization of a pattern recognition task by an artificial neuron.

(41)

Pattern recognition by a single neuron (6)

• Then this pattern enters the system, and a preprocessing calculation is applied, depending on the specific task.

• After the preprocessing, an artificial neuron is implemented that finally provides +1 or −1.

• Having elaborated this model, the mathematical analysis of the pattern recognition task is derived, which proves that an elementary artificial neuron can decide optimally under certain assumptions; in other words, a linear separator is, so to speak, good enough to carry out a pattern recognition task.

• Furthermore, the method of obtaining the corresponding weights is going to be given.

(42)

Pattern recognition by a single neuron (7)

Pattern recognition under Gaussian noise (1)

• To address the first theorem, let us examine whether the task can be solved under Gaussian noise.

• The largest problem is that when someone pronounces "yes", it can be pronounced in many different ways, so there are several individual versions of it.

• Let us assume that we have a standard "yes" as s(1) and a standard "no" as s(2).

• As mentioned earlier, the standard pattern is represented either by "yes" or by "no".

(43)

Pattern recognition by a single neuron (8)

Pattern recognition under Gaussian noise (2)

• However we want to be sure that a special input “yes” is going to be classified as “yes”.

Let x be the observation which belongs to the spoken speech pattern.

• This observation differs from the standard one by a multidimensional Gaussian noise with zero mean and covariance matrix K.

• That is the most general assumption when the aim is to design a system.

(44)

Pattern recognition by a single neuron (9)

Pattern recognition under Gaussian noise (3)

Formally it means the following equality for the observation x:

$$\mathbf{x} = \boldsymbol{\xi} + \boldsymbol{\nu},$$

• where the original signal must be one of the standard patterns:

$$\boldsymbol{\xi} \in \left\{\mathbf{s}^{(1)}, \mathbf{s}^{(2)}\right\},$$

• and the noise is:

$$\boldsymbol{\nu} \sim N(\mathbf{0}, \mathbf{K}).$$

(45)

Pattern recognition by a single neuron (10)

Pattern recognition under Gaussian noise (4)

• That is the most general assumption when the aim is to design a system. The block diagram of the task can be seen in the next Figure.

• Pattern recognition under Gaussian noise solved by an AN.


(46)

Pattern recognition by a single neuron (11)

Pattern recognition under Gaussian noise (5)

• Given these statistics of the observed random vector, the conditional densities of the pronounced speech vector, given that the standard pattern was "yes" or "no", are the following:

$$P\left(\mathbf{x} \mid \boldsymbol{\xi} = \mathbf{s}^{(1)}\right) = \frac{1}{\sqrt{(2\pi)^N \det \mathbf{K}}} \exp\left(-\frac{1}{2}\left(\mathbf{x} - \mathbf{s}^{(1)}\right)^T \mathbf{K}^{-1} \left(\mathbf{x} - \mathbf{s}^{(1)}\right)\right),$$

$$P\left(\mathbf{x} \mid \boldsymbol{\xi} = \mathbf{s}^{(2)}\right) = \frac{1}{\sqrt{(2\pi)^N \det \mathbf{K}}} \exp\left(-\frac{1}{2}\left(\mathbf{x} - \mathbf{s}^{(2)}\right)^T \mathbf{K}^{-1} \left(\mathbf{x} - \mathbf{s}^{(2)}\right)\right).$$

(47)

Pattern recognition by a single neuron (12)

Pattern recognition under Gaussian noise (6)

• These are the usual multidimensional Gaussian density functions, with different expected values depending on the condition.

• Geometrically speaking, this again means that a kind of separation problem is given. If we observe x, which before the distortion was s(1) or s(2), the best we can do is to find the most probable original point.

(48)

Pattern recognition by a single neuron (13)

Pattern recognition under Gaussian noise (7)

• Pattern classification from a geometrical point of view.

(49)

Pattern recognition by a single neuron (14)

Pattern recognition under Gaussian noise (8)

• This is the classical Bayesian decision.

• This is the method by which we can guarantee minimal error probability.

• The Bayes decision is optimal because it always chooses according to the likelihood functions above, i.e., it chooses the more probable original point.

(50)

Pattern recognition by a single neuron (15)

Pattern recognition under Gaussian noise (9)

• Formally, the following inequality has to be evaluated to decide which standard is the more probable: decide s(1) if

$$P\left(\mathbf{x} \mid \boldsymbol{\xi} = \mathbf{s}^{(1)}\right) > P\left(\mathbf{x} \mid \boldsymbol{\xi} = \mathbf{s}^{(2)}\right),$$

that is, since the common normalization constants cancel,

$$\exp\left(-\frac{1}{2}\left(\mathbf{x} - \mathbf{s}^{(1)}\right)^T \mathbf{K}^{-1}\left(\mathbf{x} - \mathbf{s}^{(1)}\right)\right) > \exp\left(-\frac{1}{2}\left(\mathbf{x} - \mathbf{s}^{(2)}\right)^T \mathbf{K}^{-1}\left(\mathbf{x} - \mathbf{s}^{(2)}\right)\right).$$

(51)

Pattern recognition by a single neuron (16)

Pattern recognition under Gaussian noise (10)

• This can be rewritten as:

$$-\frac{1}{2}\left(\mathbf{x} - \mathbf{s}^{(1)}\right)^T \mathbf{K}^{-1}\left(\mathbf{x} - \mathbf{s}^{(1)}\right) > -\frac{1}{2}\left(\mathbf{x} - \mathbf{s}^{(2)}\right)^T \mathbf{K}^{-1}\left(\mathbf{x} - \mathbf{s}^{(2)}\right).$$

• After decomposition:

$$-\frac{1}{2}\mathbf{x}^T \mathbf{K}^{-1}\mathbf{x} + \mathbf{s}^{(1)T} \mathbf{K}^{-1}\mathbf{x} - \frac{1}{2}\mathbf{s}^{(1)T} \mathbf{K}^{-1}\mathbf{s}^{(1)} > -\frac{1}{2}\mathbf{x}^T \mathbf{K}^{-1}\mathbf{x} + \mathbf{s}^{(2)T} \mathbf{K}^{-1}\mathbf{x} - \frac{1}{2}\mathbf{s}^{(2)T} \mathbf{K}^{-1}\mathbf{s}^{(2)}.$$

(52)

Pattern recognition by a single neuron (17)

Pattern recognition under Gaussian noise (11)

• Rearranging the inequality:

$$\left(\mathbf{s}^{(1)} - \mathbf{s}^{(2)}\right)^T \mathbf{K}^{-1}\mathbf{x} > \frac{1}{2}\mathbf{s}^{(1)T} \mathbf{K}^{-1}\mathbf{s}^{(1)} - \frac{1}{2}\mathbf{s}^{(2)T} \mathbf{K}^{-1}\mathbf{s}^{(2)}.$$

• And now it can be seen that if we choose:

$$\mathbf{w}^T = \left(\mathbf{s}^{(1)} - \mathbf{s}^{(2)}\right)^T \mathbf{K}^{-1},$$

• and

$$b = \frac{1}{2}\mathbf{s}^{(1)T} \mathbf{K}^{-1}\mathbf{s}^{(1)} - \frac{1}{2}\mathbf{s}^{(2)T} \mathbf{K}^{-1}\mathbf{s}^{(2)},$$

• then the decision rule becomes:

$$\mathbf{w}^T \mathbf{x} > b.$$

(53)

Pattern recognition by a single neuron (18)

Pattern recognition under Gaussian noise (12)

Therefore we decide s(1) if:

$$\mathbf{w}^T \mathbf{x} > b.$$

• As a result it is a linear set separation problem, so it can be implemented by an artificial neuron in the way the next Figure shows, and this carries out the solution of the task under Gaussian noise.
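A small numpy sketch of this result: w and b are computed offline from the standard patterns and the covariance matrix, and a single neuron then makes the decision. The specific s(1), s(2) and K values are illustrative assumptions:

```python
import numpy as np

def design_neuron(s1, s2, K):
    # w^T = (s1 - s2)^T K^{-1},  b = 1/2 s1^T K^{-1} s1 - 1/2 s2^T K^{-1} s2
    K_inv = np.linalg.inv(K)
    w = K_inv @ (s1 - s2)              # K is symmetric, so this equals the row form
    b = 0.5 * s1 @ K_inv @ s1 - 0.5 * s2 @ K_inv @ s2
    return w, b

def decide(x, w, b):
    # +1 means "decide s1" ("yes"), -1 means "decide s2" ("no")
    return 1 if w @ x > b else -1

s1 = np.array([1.0, 2.0, 0.5])         # standard "yes" pattern (assumption)
s2 = np.array([-1.0, 0.0, 1.5])        # standard "no" pattern (assumption)
K = np.diag([0.2, 0.3, 0.1])           # noise covariance (assumption)

w, b = design_neuron(s1, s2, K)
rng = np.random.default_rng(0)
x = s1 + rng.multivariate_normal(np.zeros(3), K)   # noisy "yes" observation
print(decide(x, w, b))                              # +1 with high probability
```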

(54)

Pattern recognition by a single neuron (19)

Pattern recognition under Gaussian noise (13)

• Implementation of AN solving the pattern recognition task.

(55)

Pattern recognition by a single neuron (20)

Pattern recognition under Gaussian noise (14)

• Basically the AN can decide whenever an observed pattern arrives, once we have loaded the optimal weights, which can be calculated offline if the standard patterns and the covariance matrix are given.

• The conclusion is that an elementary artificial neuron can solve any pattern recognition task in which two patterns have to be distinguished.

(56)

Pattern recognition by a single neuron (21)

Pattern recognition under Gaussian noise (14)

• This model is well defined if the parameters are fully given: since s(1), s(2) and K are known, the free parameters w and b can be calculated, and the actual neuron can be implemented.

• On the other hand, in a real-life application these quantities are not known, and this is why the next issue is how w and b can be obtained in the absence of these parameters.

• The next topic provides a learning algorithm, with which the neuron updates itself optimally.

(57)

The learning algorithm (1)

• We do not know what the standard patterns actually are, and the covariance matrix is also unknown; what we have is only a set of examples, which is called the learning set:

$$X^{+} := \left\{\mathbf{x} : d = +1\right\}, \qquad X^{-} := \left\{\mathbf{x} : d = -1\right\}.$$

• Such examples can always be given, because they can be labeled by a human expert. An artificial neuron can function properly only if the two classes X+ and X− are linearly separable.

(58)

The learning algorithm (2)

• This, in turn, means that the patterns to be classified must be sufficiently separated from each other to ensure that the decision surface can consist of a hyperplane.

• The question is how to develop an algorithm which, based on these examples, can find the right decision even though the original parameters are unknown.

• So instead of the actual parameters, only a learning set is available to us.

(59)

The learning algorithm (3)

• Suppose then that the input variables of the perceptron originate from two linearly separable classes X+ and X−. Given the sets of vectors X+ and X− to train the classifier (the perceptron), the training process involves adjusting the weight vector towards wopt in such a way that the two classes X+ and X− are linearly separated.

• Separability means that there exists an optimal vector wopt for which the whole sets X+ and X− fulfill the following relationships:

$$X^{+} = \left\{\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T \mathbf{x} \ge 0\right\}, \qquad X^{-} = \left\{\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T \mathbf{x} < 0\right\}.$$

(60)

The learning algorithm (4)

• Furthermore, this linear separation can be carried out with the artificial neuron shown in the next Figure; the only problem is that the program wopt of the neuron is fully unknown.

• General artificial neuron.

(61)

The learning algorithm (5)

• But we have some examples, represented as the training set:

$$\tau(K) := \left\{\left(\mathbf{x}(k), d(k)\right),\ k = 1, \ldots, K\right\}.$$

• We are looking for the weights with which the neuron performs perfectly on the learning set. Formally, the objective is to find w such that:

$$\operatorname{sgn}\left(\mathbf{w}^T \mathbf{x}\right) = d = \begin{cases} +1, & \mathbf{x} \in X^{+}, \\ -1, & \mathbf{x} \in X^{-}. \end{cases}$$

(62)

The learning algorithm (6)

• Therefore we have to develop a recursive algorithm, called learning, which proceeds step by step based on the previous weight vector, the desired output and the actual output of the system.

• From these specific examples it recursively adapts the weight vector so that it converges to wopt. This can be described formally as follows:

$$\mathbf{w}(k+1) = \Psi\left(\mathbf{w}(k), d(k), y(k)\right) \rightarrow \mathbf{w}_{\mathrm{opt}}.$$

(63)

The learning algorithm (7)

• In a more ambitious way it can be called intelligent, because artificial intelligence is a kind of philosophy in which we learn from examples even though the parameters are fully hidden.

• The so-called Rosenblatt learning algorithm, which is a manifestation of learning, can be expressed with a special update rule. Given the sets of vectors X+ and X− and an initial weight vector w(0), this algorithm converges to an optimal weight vector wopt.

(64)

The learning algorithm (8)

• The algorithm for adapting the weight vector of the elementary perceptron may now be formulated as follows:

1. Initialization. Set w(0) = 0. Then perform the following computations for time steps k = 1, 2, …

2. Activation. At time step k, activate the perceptron by applying the continuous-valued input vector x(k) and the desired response d(k).

3. Computation of the actual response. Compute the actual response of the perceptron:

$$y(k) = \operatorname{sgn}\left(\mathbf{w}^T(k)\mathbf{x}(k)\right).$$

(65)

The learning algorithm (9)

4. Adaptation of the weight vector. Update the weight vector of the perceptron according to the rule:

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \left[d(k) - y(k)\right]\mathbf{x}(k),$$

where

$$d(k) = \begin{cases} +1, & \text{if } \mathbf{x}(k) \text{ belongs to class } X^{+}, \\ -1, & \text{if } \mathbf{x}(k) \text{ belongs to class } X^{-}, \end{cases}$$

and the error signal is

$$\varepsilon(k) = d(k) - y(k).$$

(66)

The learning algorithm (10)

5. Continuation. Increment the time step k by one and go back to step 2.

• Basically we feed back the error signal to adapt the weights more efficiently; the next Figure shows the block diagram of the algorithm.

• One can come up with the following questions:

• Does the algorithm converge to a fixed point?

• If there is a fixed point, what is the speed of convergence?

(67)

The learning algorithm (11)

Block diagram of the training (learning with a teacher): the perceptron computes the output

$$y(k) = \operatorname{sgn}\left(\sum_{i=1}^{N} w_i(k) x_i(k) + w_0(k)\right) = \operatorname{sgn}\left(\mathbf{w}^T(k)\mathbf{x}(k)\right),$$

the training algorithm compares y(k) with the desired output d(k) and updates the weights as

$$\mathbf{w}(k+1) = \Psi\left(\mathbf{w}(k), \mathbf{x}(k), d(k), y(k)\right),$$

so that w(k) → wopt.
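A minimal Python sketch of this training loop (steps 1-5 of the previous slides). The toy linearly separable learning set, the sweep order and the stopping criterion are illustrative assumptions; the update itself is the Rosenblatt rule w(k+1) = w(k) + [d(k) − y(k)] x(k):

```python
import numpy as np

def sgn(u):
    return 1 if u >= 0 else -1

def train_perceptron(X, d, max_sweeps=100):
    # X: inputs extended with a leading constant 1 (so w[0] plays the role of w0)
    # d: desired outputs in {+1, -1}
    w = np.zeros(X.shape[1])                 # step 1: w(0) = 0
    for _ in range(max_sweeps):
        errors = 0
        for x_k, d_k in zip(X, d):           # step 2: apply x(k) and d(k)
            y_k = sgn(float(w @ x_k))        # step 3: actual response
            if y_k != d_k:
                w = w + (d_k - y_k) * x_k    # step 4: adaptation of the weights
                errors += 1
        if errors == 0:                      # stop once every example is classified
            break                            # (guaranteed by the convergence proof)
    return w

# Toy learning set: the 2-D AND problem with +/-1 coded inputs, extended with 1.
X = np.array([[1,  1,  1], [1,  1, -1], [1, -1,  1], [1, -1, -1]], dtype=float)
d = np.array([1, -1, -1, -1])
w = train_perceptron(X, d)
print(w, [sgn(float(w @ x)) for x in X])     # the learned weights reproduce d
```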

(68)

The learning algorithm (12)

Proof of the algorithm convergence (1)

• Let us define the following sets:

$$X^{+} = \left\{\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T \mathbf{x} \ge 0\right\}, \qquad X^{-} = \left\{\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T \mathbf{x} < 0\right\},$$

• and:

$$\tilde{X} = \left\{-\mathbf{x} : \mathbf{x} \in X^{-}\right\} = \left\{-\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T(-\mathbf{x}) > 0\right\}, \qquad X = X^{+} \cup \tilde{X} = \left\{\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T \mathbf{x} \ge 0\right\}.$$

(69)

The learning algorithm (13)

Proof of the algorithm convergence (2)

Suppose that w^T(k)x(k) < 0 for k = 1, 2, …, while the input vector x(k) belongs to the class X+; that is, the perceptron incorrectly classifies the vectors x(0), x(1), x(2), ….

• In this case the perceptron learning rule gives:

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \left[d(k) - y(k)\right]\mathbf{x}(k) = \mathbf{w}(k) + \varepsilon(k)\mathbf{x}(k),$$

• where the error signal may be 0 (no error) or 2 (erroneous decision with d(k) = +1 and y(k) = −1):

$$\varepsilon(k) = d(k) - y(k) \in \{0, 2\}.$$

(70)

The learning algorithm (14)

Proof of the algorithm convergence (3)

We define two sets at stage k (k = 1, 2, …) with the updated w(k):

$$X_k^{\mathrm{NOK}} = \left\{\mathbf{x} : \mathbf{w}^T(k)\mathbf{x} < 0\right\},$$

• and

$$X_k^{\mathrm{OK}} = \left\{\mathbf{x} : \mathbf{w}^T(k)\mathbf{x} \ge 0\right\},$$

where Xk^NOK is the "not OK" set (incorrect classifications) and Xk^OK is the "OK" set (correct classifications).

(71)

The learning algorithm (15)

Proof of the algorithm convergence (4)

Assume that at every step the presented input vector is an element of the Xk^NOK set, so the weight vector w(k) is updated by the learning rule:

$$\mathbf{x}(0) \in X_0^{\mathrm{NOK}},\quad \mathbf{x}(1) \in X_1^{\mathrm{NOK}},\quad \ldots,\quad \mathbf{x}(k-1) \in X_{k-1}^{\mathrm{NOK}};$$

$$\varepsilon(0) = 2,\quad \varepsilon(1) = 2,\quad \ldots,\quad \varepsilon(k-1) = 2;$$

$$\mathbf{w}(1) = \mathbf{w}(0) + 2\mathbf{x}(0),\quad \mathbf{w}(2) = \mathbf{w}(1) + 2\mathbf{x}(1),\quad \ldots,\quad \mathbf{w}(k) = \mathbf{w}(k-1) + 2\mathbf{x}(k-1).$$

(72)

The learning algorithm (16)

Proof of the algorithm convergence (5)

• If the initial condition is w(0) = 0, then we get:

$$\mathbf{w}(1) = 2\mathbf{x}(0),\quad \mathbf{w}(2) = 2\left(\mathbf{x}(0) + \mathbf{x}(1)\right),\quad \ldots,\quad \mathbf{w}(k) = 2\sum_{i=0}^{k-1}\mathbf{x}(i).$$

• Hence, multiplying both sides by wopt^T, we get:

$$\mathbf{w}_{\mathrm{opt}}^T \mathbf{w}(k) = 2\sum_{i=0}^{k-1}\mathbf{w}_{\mathrm{opt}}^T \mathbf{x}(i) \ge 2 k d_{\min},$$

• where dmin is defined as a positive number:

$$d_{\min} = \min_{\mathbf{x} \in X} \mathbf{w}_{\mathrm{opt}}^T \mathbf{x} > 0.$$

(73)

The learning algorithm (17)

Proof of the algorithm convergence (6)

• Next we make use of the Cauchy-Schwarz inequality. Given two vectors a and b, it states that:

$$\|\mathbf{a}\|^2\,\|\mathbf{b}\|^2 \ge \langle\mathbf{a}, \mathbf{b}\rangle^2,$$

• where ||.|| denotes the Euclidean norm of the enclosed vector, and the inner product is a scalar quantity.

• Applying it to wopt and w(k):

$$\|\mathbf{w}_{\mathrm{opt}}\|^2\,\|\mathbf{w}(k)\|^2 \ge \left(\mathbf{w}_{\mathrm{opt}}^T \mathbf{w}(k)\right)^2 \ge 4 k^2 d_{\min}^2,$$

• which gives the lower bound:

$$\|\mathbf{w}(k)\|^2 \ge \frac{4 k^2 d_{\min}^2}{\|\mathbf{w}_{\mathrm{opt}}\|^2}.$$

(74)

The learning algorithm (18)

Proof of the algorithm convergence (7)

• By taking the squared Euclidean norm of both sides of the update w(k) = w(k−1) + 2x(k−1), we obtain:

$$\|\mathbf{w}(k)\|^2 = \|\mathbf{w}(k-1) + 2\mathbf{x}(k-1)\|^2 = \|\mathbf{w}(k-1)\|^2 + 4\,\mathbf{w}^T(k-1)\mathbf{x}(k-1) + 4\,\|\mathbf{x}(k-1)\|^2.$$

• But under the assumption that the perceptron incorrectly classifies the input vector x(k−1), i.e. x(k−1) ∈ X_{k−1}^NOK, we have:

$$\mathbf{w}^T(k-1)\mathbf{x}(k-1) < 0.$$

(75)

The learning algorithm (19)

Proof of the algorithm convergence (8)

• We therefore deduce that:

$$\|\mathbf{w}(k)\|^2 \le \|\mathbf{w}(k-1)\|^2 + 4\,\|\mathbf{x}(k-1)\|^2,$$

• or equivalently:

$$\|\mathbf{w}(k)\|^2 \le \|\mathbf{w}(k-1)\|^2 + 4 d_{\max}^2,$$

• where dmax is a positive number defined by:

$$d_{\max} = \max_{\mathbf{x} \in X} \|\mathbf{x}\|.$$

(76)

The learning algorithm (20)

Proof of the algorithm convergence (9)

Adding the above inequalities for k = 1, 2, …, and invoking the assumed initial condition w(0) = 0, we get the following inequality:

$$\|\mathbf{w}(k)\|^2 \le 4\sum_{n=1}^{k}\|\mathbf{x}(n-1)\|^2.$$

Then we get an upper bound:

$$\|\mathbf{w}(k)\|^2 \le 4 k d_{\max}^2.$$

(77)

The learning algorithm (21)

Proof of the algorithm convergence (10)

• Analyzing the upper and lower bounds at the same time:

$$\frac{4 k^2 d_{\min}^2}{\|\mathbf{w}_{\mathrm{opt}}\|^2} \le \|\mathbf{w}(k)\|^2 \le 4 k d_{\max}^2,$$

• we can see that the lower bound increases faster (O(k²)) than the upper bound (O(k)):

$$\frac{4 k^2 d_{\min}^2}{\|\mathbf{w}_{\mathrm{opt}}\|^2} \le 4 k d_{\max}^2.$$

• Therefore the number of erroneous corrections cannot exceed kmax = dmax² ||wopt||² / dmin², so the algorithm converges after a finite number of weight updates.

(78)

Problems and Questions (1)

• Describe the perceptron convergence theorem (algorithm)!

(proof of convergence and order of convergence)!

• Design an analog circuit to realize a single artificial neuron!

• Give the weights and the biases of the neurons implementing the 5 dimensional NAND and NOR logical functions ! Give the weights in the case of 12 dimensional input!

(79)

Problems and Questions (2)

• Why is it impossible to solve XOR problem with a single neuron? Give the network implementation of XOR problem using 3 artificial neuron!

• Can a pattern recognition task (recognizing only two distinct patterns under Gaussian noise) be solved by a single neuron? Justify your answer!

(80)

Example problems (1)

• Solve the following classification task using a single neuron!

• Solve the problem with designing the weight vector analytically!

• Solve the problem by adapting the weights with the Rosenblatt learning algorithm, where the initial weights and learning rate are w1(0) = 0.7, w2(0) = 2, b(0) = 0.9 and µ = 0.5.

(81)

Example problems (2)

• In a bearing factory the quality of the produced pillow blocks has to be analyzed. The bearings have to be classified according to two parameters, each given as a mean value with a limited range:

Radius: r = 10 mm ± 1 mm; Sleekness: d = 0.4 mm ± 0.2 mm.

• The task has to be solved by a neural network. If the given bearing has parameters within the limited ranges, the output of the neural network has to be +1, otherwise it has to be −1.

• Give the weights by an analytical solution, in such a way that the network separates well! Plot your solution!

(82)

Example problems (3)

• Give the number of logic functions which can be implemented on a single AN with 2 inputs!

• Adapt the weights optimally using the perceptron learning rule to realize the NOR logic function on the perceptron!

The initial weights are w(0) = (0,−0.5, 1), and the learning rate µ = 1!

• Plot the sample points and separator lines at time step 0, and after the training!

(83)

Example problems (4)

• Which logic function is implemented by the given network?

• Can another one-layer perceptron network realize this function?
