K Agent: Online Learning, Knowledge Self-optimization, Knowledge Self-deprecation, Knowledge Sharing

Academic year: 2023


[Figure: the K Agent, with its Online Learning, Knowledge Self-optimization, Knowledge Self-deprecation and Knowledge Sharing components operating on the agent's Knowledge.]

The learning problem is modelled as a Markov decision process (S, A, P, R), where S is the set of states, A the set of actions, P(s, a, s') the probability of reaching state s' when action a is taken in state s, and R the reward function; π(s) denotes the action selected by the policy π. The value of a state under the policy is

V(s) = R(s) + Σ_{s'∈S} P(s, π(s), s') · V(s')
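For a fixed policy, the value function above is a linear system in V and can be solved directly. A minimal NumPy sketch: the two-state chain, its rewards and transition probabilities are made up for illustration, and a discount factor is added because the undiscounted system with a proper stochastic matrix is singular.

```python
import numpy as np

# Hypothetical 2-state example: P_pi[s, s'] is the transition
# probability under the fixed policy pi, R[s] the state reward.
P_pi = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
R = np.array([0.0, 1.0])

# V = R + P_pi @ V rearranges to (I - P_pi) V = R; this sketch
# solves the discounted variant (I - gamma * P_pi) V = R so that
# the system is non-singular.
gamma = 0.95
V = np.linalg.solve(np.eye(2) - gamma * P_pi, R)
```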

Let Φ(s) denote the feature vector of state s. Over the n observed transitions (s_i, r_i, s'_i), the matrix A and the vector b are accumulated as

A = Σ_{i=1}^{n} Φ(s_i) · (Φ(s_i) − Φ(s'_i))^T

b = Σ_{i=1}^{n} r_i · Φ(s_i)

and the weight vector of the value function approximation is obtained as w = A^{−1} b.
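The accumulation above can be sketched directly in NumPy; the one-hot feature map and the two toy transitions are assumptions for illustration ("T" marks a terminal state with a zero feature vector):

```python
import numpy as np

def lstd_weights(transitions, phi, k):
    """Accumulate A and b over observed transitions (s, r, s')
    and return w = A^-1 b."""
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, r, s_next in transitions:
        f, f_next = phi(s), phi(s_next)
        A += np.outer(f, f - f_next)   # A += phi(s) (phi(s) - phi(s'))^T
        b += r * f                     # b += r * phi(s)
    return np.linalg.solve(A, b)

def phi(s):
    """One-hot features; the terminal state 'T' has no features."""
    return np.zeros(2) if s == "T" else np.eye(2)[s]

# state 0 -> state 1 (reward 0), state 1 -> terminal (reward 1)
w = lstd_weights([(0, 0.0, 1), (1, 1.0, "T")], phi, 2)
```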

Here x denotes how many games ago the entry was recorded and h the success (reward) rate observed at that point.

Algorithm I-1. Maintaining the consolidated age of an agent

function update_consolidated_age(event, consolidated_age) {
    if event type is game_played_event then
        return consolidated_age + 1;
    if event type is knowledge_deprecation_event then
        return consolidated_age * event.deprecation_factor;
    if event type is knowledge_import_event then
        return event.donor_age * event.import_weight +
               consolidated_age * (1 - event.import_weight);
}
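Algorithm I-1 transcribes directly to Python; the event classes below are assumptions made for the sketch:

```python
from dataclasses import dataclass

@dataclass
class GamePlayed:
    pass

@dataclass
class KnowledgeDeprecation:
    deprecation_factor: float

@dataclass
class KnowledgeImport:
    donor_age: float
    import_weight: float

def update_consolidated_age(event, consolidated_age):
    """Update the consolidated age for one event, as in Algorithm I-1."""
    if isinstance(event, GamePlayed):
        return consolidated_age + 1
    if isinstance(event, KnowledgeDeprecation):
        return consolidated_age * event.deprecation_factor
    if isinstance(event, KnowledgeImport):
        return (event.donor_age * event.import_weight
                + consolidated_age * (1 - event.import_weight))
    return consolidated_age

age = 10.0
age = update_consolidated_age(GamePlayed(), age)               # ages by one game
age = update_consolidated_age(KnowledgeDeprecation(0.5), age)  # decays the age
age = update_consolidated_age(KnowledgeImport(20.0, 0.25), age)
```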

When knowledge is imported with weight w, the acceptor's matrix and vector are blended with the donor's:

A_new = w · A_import + (1 − w) · A_original

b_new = w · b_import + (1 − w) · b_original
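The same weighted blend in NumPy; the matrix shapes and the weight are illustrative:

```python
import numpy as np

def import_knowledge(A_orig, b_orig, A_imp, b_imp, w):
    """Blend the donor's A and b into the acceptor's with weight w."""
    A_new = w * A_imp + (1 - w) * A_orig
    b_new = w * b_imp + (1 - w) * b_orig
    return A_new, b_new

A_new, b_new = import_knowledge(np.eye(2), np.zeros(2),
                                np.ones((2, 2)), np.ones(2), w=0.5)
```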

The correlation of two feature rows A_i and A_j is the Pearson coefficient

corr(A_i, A_j) = cov(A_i, A_j) / sqrt(cov(A_i, A_i) · cov(A_j, A_j))
              = (E[A_i·A_j] − E[A_i]·E[A_j]) / sqrt((E[A_i²] − E[A_i]²) · (E[A_j²] − E[A_j]²))
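The expectation form of the coefficient, sketched in NumPy:

```python
import numpy as np

def corr(x, y):
    """Pearson correlation of two feature rows, in expectation form."""
    ex, ey = x.mean(), y.mean()
    cov = (x * y).mean() - ex * ey
    var_x = (x * x).mean() - ex * ex
    var_y = (y * y).mean() - ey * ey
    return cov / np.sqrt(var_x * var_y)

r = corr(np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0]))
```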

Algorithm I-2. Feature extraction

1 Find correlation groups in A, with a greedy algorithm:
  1.1 For each row i in A:
    1.1.1 If i is already assigned to a correlation group, then continue with the next i.
    1.1.2 Create a new correlation group Gi and put i into Gi.
    1.1.3 For all rows j > i:
       If j is already assigned to a group, then continue with the next j.
       If (for each row r in Gi: corr(A[r], A[j]) > CORRELATION_LIMIT), then put j into Gi.
    1.1.4 If the cardinality of Gi is 1, then throw Gi away. Otherwise save Gi to the list of correlation groups.
2 For each correlation group, update the feature calculation logic.
  2.1 For each rowId in the group: remove the corresponding feature from the feature calculator logic.
  2.2 Add a new feature that is the combination of the original (removed) features. I used the average function as the combination operator.
3 For each correlation group, update vector b.
  3.1 For each rowId in the group: remove the corresponding value from b.
  3.2 Add a new value that is the combination of the original (removed) values. I used the average function as the combination operator.
4 For each correlation group, update matrix A.
  4.1 For each rowId in the group: remove the corresponding row from A.
  4.2 Add a new row that is the combination of the original (removed) rows. I used the average function as the combination operator.
  4.3 For each rowId in the group: remove the corresponding column from A.
  4.4 Add a new column that is the combination of the removed columns, using the same combination metrics as for the rows.

The feature calculator update of step 2 can be written as FC_new = (FC_old ∖ G_i) ∪ {combine(G_i)}.
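Step 1 of Algorithm I-2 as a Python sketch; the threshold value and the toy matrix are assumptions, and np.corrcoef stands in for the corr function defined above:

```python
import numpy as np

CORRELATION_LIMIT = 0.95  # threshold is an assumption for the sketch

def correlation_groups(A):
    """Greedily group mutually correlated rows of A; singleton
    groups are discarded, as in step 1.1.4."""
    assigned = set()
    groups = []
    for i in range(len(A)):
        if i in assigned:
            continue
        group = [i]
        for j in range(i + 1, len(A)):
            if j in assigned:
                continue
            # j joins only if it correlates with every row already in the group
            if all(np.corrcoef(A[r], A[j])[0, 1] > CORRELATION_LIMIT
                   for r in group):
                group.append(j)
                assigned.add(j)
        if len(group) > 1:
            assigned.update(group)
            groups.append(group)
    return groups

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.1],
              [3.0, 1.0, 2.0]])
groups = correlation_groups(A)
```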


Algorithm I-3. Feature removal

1 For each row r in A: determine if r is unnecessary.
  1.1 Compute the sum of values in the rth column (sum_vertical) and in the rth row (sum_horizontal).
  1.2 If sum_vertical < USEFULNESS_LIMIT and sum_horizontal < USEFULNESS_LIMIT, then mark r as unnecessary.
2 For each unnecessary row r: remove the corresponding row and column.
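Algorithm I-3 in NumPy; the absolute-value sums and the threshold are sketch assumptions, and b is trimmed alongside A to keep the dimensions aligned:

```python
import numpy as np

USEFULNESS_LIMIT = 1e-6  # threshold is an assumption for the sketch

def remove_useless_features(A, b):
    """Drop feature r when both its row sum and its column sum
    in A stay below USEFULNESS_LIMIT."""
    keep = [r for r in range(len(A))
            if not (abs(A[r, :]).sum() < USEFULNESS_LIMIT
                    and abs(A[:, r]).sum() < USEFULNESS_LIMIT)]
    # keep the same rows/columns of A and entries of b
    return A[np.ix_(keep, keep)], b[keep], keep

A = np.array([[1.0, 0.0], [0.0, 0.0]])
b = np.array([0.5, 0.0])
A2, b2, keep = remove_useless_features(A, b)
```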

Algorithm I-4. Complete feature de-optimization

1 Decouple the topmost complex features. For each complex feature cf = (f1, .., fn) on the feature list, decouple the feature:
  1.1 Decouple it in the feature list.
    1.1.1 Remove cf from the list.
    1.1.2 For each fi in cf, add fi to the end of the list.
  1.2 Decouple it in vector b.
    1.2.1 Remove the corresponding row from vector b.
    1.2.2 For each fi in cf, add the removed value to the end of the vector.
  1.3 Decouple it in matrix A.
    1.3.1 Remove the corresponding row from A.
    1.3.2 For each fi in cf, copy the removed row to the end of the matrix.
    1.3.3 Remove the corresponding column from A.
    1.3.4 For each fi in cf, copy the removed column to the end of the matrix.
2 If there are still complex features on the feature list (e.g. a cf was decoupled into further complex features), re-run step 1.
3 Add features that were thought useless.
  3.1 For each original feature of that does not appear on the feature list:
    3.1.1 Add of to the feature list.
    3.1.2 Add a new element to vector b with 0 value.
    3.1.3 Add a new row and column to matrix A in a way that all new cells are 0, except for the bottom-right one, which is set to 1.


Algorithm I-5. Finding a common feature set

1 While true:
  1.1 Construct a list of differences in the form (fi, side).
  1.2 While the list is not empty:
    1.2.1 Pick the element with the highest level of indirection from the list. If there is more than one such element, pick one of them.
    1.2.2 If it is a complex feature, decouple it on the side it belongs to (using steps 1.1-1.3 in Algorithm I-4).
    1.2.3 If it is a simple feature, add it to the other side than the one it belongs to (using step 3.1 in Algorithm I-4).


The agent society is described with the following notation:

A = {a_1, .., a_n} — the set of agents.
O = {o_1, .., o_m} — the set of opponents.
t ∈ {1, 2, ..} — the round counter.
a_{i,t} = (M_{i,t}, H_{i,t}, K_{i,t}) — agent a_i at round t, with its model M, history H and knowledge K.
H_{i,t} = {h_{i,t−hs+1}, .., h_{i,t}} — the history, holding the last hs entries.
h_{i,x} = (r_{i,x}, o_{i,x}) — a history entry: the result and the opponent of game x.
r ∈ {−1, 0, +1} — the result of a game (loss, tie, win).
GA : (A, t) → O — the game assignment; GA(a_{i,t}) = o_{i,t} selects agent a_i's opponent for round t.
G : (A, O) → (A, r) — playing a game; G(a_{i,t}, o_{i,t}) = (a_{i,t+1}, r_{i,t}), where res(G(·)) denotes the result component and effect(G(·)) the updated agent.

The effect of a game on the agent's components can be decomposed as

G(a_{i,t}, o_{i,t}) = ((G_K(K_{i,t}, o_{i,t}), {h_{i,t−hs+2}, .., h_{i,t}, h_{i,t+1}}, M_{i,t+1}), r)

with the component-wise signatures

G : (K_t, O) → (K_{t+1}, r)
G : (K_t, H_t, M_t, O) → (K_{t+1}, H_{t+1}, M_{t+1}, r)
G : (A_t, O) → (A_{t+1}, r)

and NLG : (A, O) → (A, r) denoting the same game played without learning.

The probability of success against a given opponent, and the expected success in the next round, are

p_succ(a_{i,t} | o_k) = p(res(G(a_{i,t}, o_k)) = 1)

p_succ(K_{i,t} | o_k) = p(res(G(K_{i,t}, o_k)) = 1)

p_next_succ(a_{i,t}) = Σ_m p(o_m) · p_succ(a_{i,t} | o_m)

The long-term success after j further games, and the utility of learning, are

p_lt_succ(a_{i,t}, j) = Σ_m p(o_m) · p_succ(a_{i,t+j} | o_m)

utility_learn(a_{i,t}) = Σ_m p(o_m) · (val(G(a_{i,t}, o_m)) − val(a_{i,t}))

where val(a_{i,t}) is the valuation of the agent's state, and learning pays off when p_succ(a_{i,t+1}) ≥ p_succ(a_{i,t}).

The success probability is estimated from the recent history,

p_succ(a_{i,t} | h) = f(h_{t−1}, .., h_{t−hs})

with an opponent-specific estimate fp_y computed from the history entries played against opponent o_y, and the final estimate blending the opponent-specific and the overall values:

fp'(a_{i,t}) = w · fp_y(a_{i,t}) + (1 − w) · fp(a_{i,t})

[Figure: training results [%] and evaluation results against the 0% FR opponent, by training opponent (Random, 15% FR, 5% FR, 1% FR, 0% FR); bars show Games Lost, Ties and Games Won.]

[Figure: success of the society [%] over 100 rounds (Win and Win+Tie, with 1% deprecation and 10-games-per-round variants), and the collective success rate against 1%, 5%, 10%, 20%, 50% and 100% FR opponents.]

[Figure: success count and success rate over 100 rounds for the 6.1.1000 and 6.1.40 configurations (Win and Win+Tie).]

[Figure: result distribution (Games Lost, Ties, Games Won) for the 0, +Trained Rnd, +Trained C4, +Trained Trained and +Rnd Acceptor+Donor setups.]

[Figure: win and win+tie counts over 100 rounds for the No share, Random, Opplast and Age sharing strategies, together with the society size.]

[Figure: win and win+tie counts and success rates over 100 rounds for the No share, Random, Opplast and Age sharing strategies.]

[Figure: result distribution (Games lost, Ties, Games won) with the default and the optimized feature settings.]

[Figure: result distribution (Games Lost, Ties, Wins) in rounds 1-25, 26-50, 51-75, 76-100 and 101-125 after the rule change.]

Algorithm II-1: ODC

1. The clustering process is initiated on demand, i.e. when a node is in need of expanding its cluster.
2. The node where the demand for clustering arises, called the initiator, selects one of its neighbours to serve as match maker.
3. The match maker looks for a matching node, one that meets the initiator's clustering criterion, among its own neighbours. It returns the match if found.
4. When the match is not already a neighbour of the initiator, a process called rewiring begins. The initiator and the match establish a new link, while, in order to keep the total number of links under control, the match maker removes its own link to the match.

Algorithm II-2: Spyglass (differences from ODC marked with italic)

1. The clustering process is initiated on demand, i.e. when a node is in need of expanding its cluster.
2. The node where the demand for clustering arises, called the initiator, selects one of its neighbours to serve as match maker.
3. The match maker looks for a matching node, one that meets the initiator's clustering criterion, among its own neighbours. When no match is found there, the match maker continues by checking its two-hop neighbours (the neighbours of its neighbours) for a match. It returns the match if found.
4. When the match is not already a neighbour of the initiator, the rewiring begins. The initiator and the match establish a link, while, in order to keep the total number of links under control, the connection node to the match removes its own link towards it. (The connection node is the match maker if the match is originally its own direct neighbour. Otherwise the connection node is the node between the match maker and the match.)

Algorithm II-3: Shuffling ODC (differences from ODC marked with italic)

1. The clustering process is initiated on demand, i.e. when a node is in need of expanding its cluster.
2. The node where the demand for clustering arises, called the initiator, selects one of its non-cluster neighbours to serve as match maker. When there is no non-cluster neighbour available, a cluster neighbour is selected as match maker.
3. The match maker looks for a matching node, one that meets the initiator's clustering criterion, among its own neighbours. If found, it returns the match. If not found, it returns a random node from its neighbour list.
4. When the returned node is not already a neighbour of the initiator, a rewiring process begins. The initiator and the match establish a new link, while, in order to keep the total number of links under control, the match maker removes its own link to the match.
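The basic ODC round (Algorithm II-1) can be sketched as an operation on an adjacency structure; the graph, the criterion and the tie-breaking choices below are assumptions for illustration:

```python
import random

def odc_round(adj, initiator, criterion, rng=random):
    """One on-demand clustering step: the initiator asks a random
    neighbour (the match maker) for a node among the match maker's
    neighbours that meets the criterion; on success a rewiring
    keeps the total number of links constant."""
    if not adj[initiator]:
        return None
    mm = rng.choice(sorted(adj[initiator]))        # match maker
    candidates = [n for n in sorted(adj[mm])
                  if n != initiator and criterion(n)]
    if not candidates:
        return None
    match = candidates[0]
    if match not in adj[initiator]:                # rewiring
        adj[initiator].add(match)
        adj[match].add(initiator)
        adj[mm].remove(match)                      # mm drops its link
        adj[match].remove(mm)
    return match

adj = {0: {1}, 1: {0, 2}, 2: {1}}
m = odc_round(adj, 0, lambda n: n == 2, rng=random.Random(0))
```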


[Figure: one round of the clustering process. Legend: WG — workload generator; ! — queue length above limit; i — initiator; mm — match maker; x — match.]

[Figures: clustered nodes, processed jobs per round, shared jobs per round, overloaded nodes (primary and secondary), unprocessed jobs, and messages per round over 500 rounds, comparing SPYGLASS, ODC and Shuffling ODC.]

[Figures: clustered nodes, processed jobs per round, shared jobs per round, overloaded nodes (primary and secondary), unprocessed jobs, and messages per round over 500 rounds, comparing SPYGLASS, ODC and Shuffling ODC.]

[Figures: shared jobs, processed jobs and messages per round over 500 rounds, comparing SPYGLASS, ODC and Shuffling ODC.]

[Figures: average distance [m] over 500 rounds for Shuffling ODC, ODC and Spyglass; the increment in the average sharing distance and in the average cluster size for Shuffling ODC and Spyglass; and unprocessed jobs over 500 rounds.]

(64)

0 100 200 300 400 500 600 700 800 900

1 101 201 301 401 Rounds501

Clustered nodes

SPYGLASS Clustered ODC Clustered Shuffling ODC Clustered

(65)

0 100 200 300 400 500 600

1 101 201 301 401 Rounds501

Processed jobs per round

SPYGLASS Processed ODC Processed Shuffling ODC Processed

0 500 1000 1500 2000 2500 3000 3500 4000

1 101 201 301 401 Rounds501

Messages per round

SPYGLASS Message ODC Message Shuffling ODC Message

(66)

0 50000 100000 150000 200000 250000

1 101 201 301 401 Rounds501

Unprocessed jobs

SPYGLASS Unprocessed ODC Unprocessed Shuffling ODC Unprocessed

0 100 200 300 400 500 600 700 800 900

1 101 201 301 401 Rounds501

Clustered nodes

SPYGLASS Clustered ODC Clustered Shuffling ODC Clustered


[Figure: the opinion mining system architecture — Data Acquisition (Web Text Extractor, Automated NE Recognition, Completeness Checker) feeding NE-tagged text from articles and blog entries into the Sentiment Analyzer (backed by SentiWordNet and the Term Database), with Sentiment Self-revision over a self-evaluation corpus, the Opinion Database, and a Search Interface serving the users.]

[Figure: the per-paragraph sentiment mining pipeline — named entity recognition, term-based analysis of the entity's vicinity and of the whole paragraph (with stemming and negation handling, using SentiWordNet), and context-sensitive weighting, yielding a sentiment per named entity.]

Algorithm III-1. Sentiment mining for a NE occurrence.

function evaluate(paragraph, ne) {
    // calculate algorithm parameters:
    // pre, post = window length for the vicinity,
    // w1 = the weight for the vicinity,
    // w2 = the weight for the total paragraph,
    // expectedSentiment = expected total sentiments
    calculateParameters();

    // extract the two frames
    vic = a (pre, post) long window around ne;
    p = all non-NE words in the paragraph;

    // calculate frame sentiments
    calculateFrameSentiments(vic);
    calculateFrameSentiments(p);

    // combine frame sentiments
    ne.pos = w1 * vic.pos + w2 * p.pos;
    ne.neg = w1 * vic.neg + w2 * p.neg;
    ne.neu = max(0, expectedSentiment - ne.pos - ne.neg);
}

Algorithm III-2. Sentiment calculation for a frame.

function calculateFrameSentiments(frame) {
    pos = 0; neg = 0;
    for each word in frame
        if word is a term
            if word is non-negated {
                pos += term.pos; neg += term.neg;
            } else {
                pos += term.neg; neg += term.pos;
            }
    frame.pos = pos;
    frame.neg = neg;
}
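Algorithms III-1 and III-2 can be sketched together in Python; the two-word lexicon, the toy negation rule, the frame contents and the weights are assumptions for illustration:

```python
# word -> (pos, neg) polarity; a stand-in for the SentiWordNet terms
TERMS = {"good": (0.8, 0.0), "bad": (0.0, 0.7)}

def frame_sentiments(words):
    """Sum term polarities over a frame, swapping pos/neg for
    negated terms (Algorithm III-2)."""
    pos = neg = 0.0
    for i, word in enumerate(words):
        if word not in TERMS:
            continue
        p, n = TERMS[word]
        negated = i > 0 and words[i - 1] == "not"  # toy negation rule
        pos += n if negated else p
        neg += p if negated else n
    return pos, neg

def combine(vic, par, w1=1.5, w2=1.0):
    """Weight the vicinity and paragraph frames (Algorithm III-1)."""
    return (w1 * vic[0] + w2 * par[0], w1 * vic[1] + w2 * par[1])

vic = frame_sentiments(["not", "bad", "product"])  # negated "bad" counts as positive
par = frame_sentiments(["good", "bad"])
ne_pos, ne_neg = combine(vic, par)
```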


Algorithm III-3. A context sensitive parameter calculation heuristic.

function calculateParameters(p) {
    // initial values
    w1 = 1.5; w2 = 1;
    pre = 4; post = 2;
    expectedSentiment = p.words / 20;
    // reference: paragraph: 50 words, 1 NE

    // adjust parameters when a longer text or more than one NE is present
    if p contains no other NEs {
        if (p.words > 50) {
            // for long paragraphs, expand the window
            // and decrease the paragraph's weight
            pre = p.words / 11; post = p.words / 20;
            w2 = 0.8;
        }
    } else {
        // ratio means the number of words per NE
        ratio = p.words / p.competitors;
        if (ratio > 50) {
            // if there are considerably more words than
            // NEs, do what the first branch did
            pre = ratio / 11; post = ratio / 20;
            w2 = 0.8;
        } else {
            // if competitors consume too many words,
            // adjust the vicinity frame and its weight
            if (p.competitors > 3) {
                pre++; post--;
                w1 = 2.0;
            }
            if (ratio < 20) {
                pre--;
                w1 = 2.0;
            }
        }
    }
}

(77)

} { d

i

D

} { f

i

F

} { w

i

W

} ,..}

{{

}

{

j k j

i

f w

d  

(78)

m : F → [−3, 3] — the fragment-level mining method;
m_n : (w_1, ..) → [−3, 3] — its word-level counterpart;
T = {t_i} — the set of left-out terms;
W* = {w ∈ W | stem(w) ∈ T} — the occurrences of the left-out terms;
m* : (w_1, ..) → [−3, 3] — mining restricted to W ∖ W*;
dir : [−3, 3] → {−2, −1, 0, +1, +2} — the direction (polarity class) of a sentiment value;

with the working assumption lim_{|W*| → 0} p(dir(m) ≠ dir(m*)) = 0, i.e. leaving out only a few term occurrences rarely changes the mined direction.

 0  1 lim

) , (

) , ( )

(

0 ) ( ) (

| 0

0 ) ( ) (

| ) 1 , (

|

|

|

|

 

 



 

errorRate p

f w error

f w error w

errorRate

f dir w dir

f dir w f dir

w error

D

f w D f

f w D f

Algorithm III-4. The self-revision algorithm

// self-revision on documents D using mining method m
// and left-out terms T
function selfRevision(D, m, T) {
    // calculate sentiments for all fragments
    for each fragment f in D {
        dirSeen = dir(m*(T, f));
        for each word w in f {
            if w matches a t ∈ T {
                // register the occurrence of the term
                // in the occurrence registry
                registerOccurrence(w, t, dirSeen);
            }
        }
    }

    // summarize the registered occurrences
    for each (w, t) pair in the registry {
        t.dirSeenTypical = calculateTypicalDirectionSeen(t);
    }

    // mark suspicious terms
    for each (w, t) pair in the registry {
        if contradict(t.dirExpected, t.dirSeenTypical) {
            markSuspicious(t);
        }
    }
}
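The self-revision loop above can be sketched compactly in Python; the toy miner stands in for m* (mining with the left-out terms excluded), and the lexicon and fragments are assumptions for illustration:

```python
from collections import Counter, defaultdict

def self_revision(fragments, mine, terms, expected_dir):
    """Record the sentiment direction seen around each left-out term,
    take the most common direction per term, and mark terms whose
    typical seen direction contradicts the expected one."""
    registry = defaultdict(Counter)
    for words in fragments:
        dir_seen = mine(words)            # fragment-level direction
        for w in words:
            if w in terms:
                registry[w][dir_seen] += 1
    suspicious = set()
    for term, counts in registry.items():
        typical = counts.most_common(1)[0][0]
        if typical * expected_dir[term] < 0:   # opposite signs contradict
            suspicious.add(term)
    return suspicious

# "sick" is expected negative, but the fragments it occurs in are
# consistently mined as positive, so it gets marked as suspicious.
mine = lambda ws: 1 if "great" in ws else -1
sus = self_revision([["sick", "great"], ["sick", "great"]],
                    mine, {"sick"}, {"sick": -1})
```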

The confusion of expected vs. seen directions (rows: expected, columns: seen):

          Seen: +2   +1    0   -1   -2
Expt. +1        10    5    2    0    0
Expt.  0         5    3   50    3    5
Expt. -1         2    0    2    5   10

[Figure: the ratio of correctly handled negative and positive sentiments (Negative OK %, Positive OK %) over time.]

Source            Documents
fn.hu                 1 937
hvg.hu                4 032
index.hu              2 555
mfor.hu                 801
mobilarena.hu           187
nlcafe.hu               648
pecsinapilap.hu         894
prohardver.hu           165
Total                11 219

[Figure: the distribution of sentiment scores (−2 .. +2) assigned by the human annotators and by the algorithm.]

[Figure: negative, positive and neutral sentiment counts over time.]

[Figures: precision and recall (overall, negative-only and positive-only) at 2.5%, 5%, 10% and 20% omission rates, for the two evaluation settings.]
