Agent-oriented problem solving is a programming paradigm based on autonomous entities – the agents. Besides their autonomy, agents are supposed to react to changes in their environment and to be capable of planning their actions in order to achieve their goals. The field of multi-agent systems considers agent-based systems with more than one agent. In these systems, an additional agent capability gains importance – the ability to communicate and cooperate with other agents in the system. Multi-agent systems are particularly well suited for the scenario described in the previous section because they allow a very natural mapping from the entities occurring in the scenario to agents. At first glance, it seems very natural to model the transportation modules introduced in the previous section as agents that pursue their local goals. Doing this, however, results in a conceptual break when it comes to modelling the coupling activities that are necessary for location route sharing. Coupling two or more modules together to form a union requires an election process in order to decide which module should represent the resulting union towards the other unions. Additionally, this approach implies a high degree of intra-union communication whenever the union leader wants to integrate a new module into the existing union. To avoid these problems, we have decided to model the unions as the agents in our system. Applying this scheme results in an important simplification of the system design and the resulting implementation. Merging several unions into a single union no longer requires an election or a coordination process among the participating modules, as they are straightforwardly integrated into another existing union. The roles of the participating unions (either master or slave) are determined by the negotiation protocol.
Whenever a new task is announced to the system, a new union consisting of only a single module is created; we will sometimes refer to unions with only one module as degenerate unions. The advantage of applying this scheme is that we do not have to differentiate between modules and unions: every active entity in the system is a union – and that's it!
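The union-as-agent scheme can be sketched as follows. This is a minimal illustration, not the authors' implementation; all names (`Module`, `Union`, `merge`) are hypothetical, and the negotiation protocol that assigns master and slave roles is abstracted into a single method call.

```python
# Illustrative sketch: every active entity is a union. A new task spawns
# a degenerate union holding a single module; merging folds the slave
# union's modules into the master union without any per-module election.

from dataclasses import dataclass, field

@dataclass
class Module:
    ident: str

@dataclass
class Union:
    modules: list = field(default_factory=list)

    @classmethod
    def for_new_task(cls, module: Module) -> "Union":
        # A freshly announced task yields a degenerate union of one module.
        return cls(modules=[module])

    def merge(self, slave: "Union") -> "Union":
        # Roles are assumed to have been fixed by the negotiation protocol:
        # self acts as the master and simply absorbs the slave's modules.
        self.modules.extend(slave.modules)
        slave.modules.clear()
        return self

u1 = Union.for_new_task(Module("m1"))
u2 = Union.for_new_task(Module("m2"))
master = u1.merge(u2)
assert len(master.modules) == 2 and not u2.modules
```

Because merging only moves module references between two lists, no intra-union communication round is modelled here; that is precisely the simplification the scheme is meant to buy.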
To address observed or even potential heterogeneity, several techniques have been developed. One group of these techniques comprises agent-based simulations in the social sciences. Each agent can – and has to – adjust its behaviour to its individual environment and experience. Therefore, the agents' endowments and social environments, but also their objectives, change over time. The development of individual values and social norms is embedded in a process of boundedly rational behaviour of agents with limited knowledge, limited memory, and reduced rational expectations. But these values and norms are – together with emotions – the most important substitutes for the “deficiency” in pure rationality. This logical circle can hardly be modelled with conventional economic models. Approaches of bounded rationality – from the seminal work of Herbert Simon (1966) to current developments – seem more appropriate. Another seminal contribution – Axelrod's (1992) work on cooperative behaviour – and his extensions of cooperative game theory also seem promising for reducing the deficiencies of the homo oeconomicus approach with respect to observed social behaviour. Last but not least, recent developments in behavioural economics support these approaches; see e.g. Kahneman/Tversky (ed. 2000) for a comprehensive depiction of this economic school of thought.
2. METHOD ON SIMULATION FOR EVACUATION BEHAVIOR
2.1 Human Behavior and Intelligent Agents
An agent is anything that can be viewed as perceiving its environment through sensors and acting on that environment through effectors. A human has five senses for sensors, and hands, legs, mouth and other body parts for effectors [Horvitz et al., 1988]. Thus, the acts of an agent substitute for human behavior, including both sensors and effectors. Basically, a rational agent is one that does the right thing using its intelligence. Rational activity depends on the performance measure, the percept sequence, the knowledge of the environment and the performance of action. In other words, an ideal rational agent is defined as follows: for each possible percept sequence, the agent should do whatever action is expected to maximize its performance measure, based on the evidence provided by the percept sequence and whatever built-in knowledge the agent has [Wilson, 1991].
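The percept-to-action mapping described above can be sketched in a few lines. This is a generic illustration of the agent abstraction, not code from any cited work; the table-driven policy and all names are hypothetical.

```python
# Minimal sketch of an agent: it records its percept sequence and, for
# each new percept, selects the action its built-in knowledge (here a
# simple lookup table) maps to that percept.

class ReflexAgent:
    def __init__(self, policy):
        self.policy = policy    # built-in knowledge: percept -> action
        self.percepts = []      # the percept sequence observed so far

    def step(self, percept):
        self.percepts.append(percept)
        # choose the action expected to do best for this percept,
        # with a do-nothing default for unknown percepts
        return self.policy.get(percept, "noop")

agent = ReflexAgent({"fire_alarm": "evacuate", "all_clear": "stay"})
assert agent.step("fire_alarm") == "evacuate"
assert agent.step("unknown_event") == "noop"
```

A truly rational agent would condition on the whole percept sequence and on an explicit performance measure; the table here stands in for that machinery only to make the definition concrete.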
The TEMAS is a combination of the TEM and additional concepts and functionality consolidated in the MASConcepts4TEM (see Figure 12). While it is possible to deploy your applications on the basis of the TEM in a local or distributed runtime environment called the TEM execution environment, you might be interested in testing your applications in a runtime environment, called the temLight simulation environment, that is deployed locally and totally abstracts from infrastructural considerations like nodes. In the following, we present these two runtime environments in detail (see Section 4.1), explain how to initialize Agents depending on the runtime environment you want to use (see Section 4.2), introduce the different execution models available in the runtime environments (see Section 4.3), give an overview of agent scheduling in the TEMAS depending on the used runtime environment (see Section 4.4), explain how to bootstrap your applications (see Section 4.5), show how to configure the TEM (see Section 4.6), and give some information on how to set up an Eclipse project when using the TEMAS (see Section 4.7).
Observation 3 Independent actions must affect distinct keys.
For many scenarios there is a natural way to decompose the environment into key-value pairs. For example, partition a spatial environment into overlapping areas to simulate social force. Since areas overlap, the same action (effects) may be sent to multiple keys. Summation is used as an associative and commutative reduce operation. However, as not all simulations decompose spatially (see the Simulating Software Evolution scenario) we pro-
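The key-value decomposition with a summation reduce can be sketched as follows. This is an illustrative reconstruction under the stated assumptions, not the paper's code; the area keys and effect values are hypothetical.

```python
# Sketch: each action emits an effect addressed to one or more keys
# (overlapping spatial areas), and effects per key are combined with an
# associative, commutative reduction -- here, summation -- so they can
# be applied in any order or in parallel.

from collections import defaultdict

def apply_effects(effects):
    """effects: iterable of (keys, value) pairs; the same effect may be
    sent to several overlapping area keys."""
    state = defaultdict(float)
    for keys, value in effects:
        for key in keys:
            state[key] += value   # order-independent reduce
    return dict(state)

# An agent standing in the overlap of areas A1 and A2 contributes its
# social-force effect to both keys; another agent only touches A2.
result = apply_effects([(("A1", "A2"), 1.0), (("A2",), 0.5)])
assert result == {"A1": 1.0, "A2": 1.5}
```

Because summation is associative and commutative, the per-key reductions commute with any partitioning of the effect stream, which is what makes this decomposition attractive for parallel execution.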
VRP, great emphasis was put on heuristic accuracy and speed, whereas simplicity and flexibility were out of focus. As a consequence, the best state-of-the-art algorithms give very good results for many theoretical test instances of the static VRP, but they are hard to adapt to dynamic real-world problems. Therefore, it is necessary to focus future research on realistic VRPs. But this requires the development of a system that will be able to simulate various real-world vehicle routing problems, thus allowing for both testing optimization algorithms and planning transport services. Such a realistic simulation has to incorporate realistically modelled dynamism of customer demand, traffic flow phenomena and fleet management operations. Especially the optimization of transport services in urban areas is extremely challenging due to the high dynamics of traffic flow, resulting in continuously changing travel times and often, depending on the type of services, in high volatility of demand (e.g. taxi). Moreover, when considering city-logistics policies, many additional and often mutually conflicting objectives appear, for example the reduction of the negative influence on the environment and on the local society, or the positive influence on city development.
Distributed systems play a major role in our lives. They are widely used in areas such as the Internet, healthcare, education, science, eCommerce, financial trading and others. The prime motivation for constructing and using distributed systems is the desire to share resources. The term ‘resource’ covers the range of things that can be usefully shared in a networked computer system. The definition spans from hardware components such as powerful processors and storage devices to software-defined entities such as files, databases and data objects of all kinds. It includes the stream of video frames produced by a digital video camera, and the audio connection that a mobile phone call represents. However, there are challenges when designing and building distributed systems. A major concern is concurrency. The presence of multiple processes and users in a distributed system is a source of concurrent requests to its resources. Each resource must be designed to be safe in a concurrent environment.
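The concurrency concern above can be made concrete with a minimal sketch, assuming a shared in-process resource; the `Counter` class is purely illustrative, not from any cited system.

```python
# Sketch: a shared resource accessed by concurrent requests. The
# read-modify-write in increment() would be unsafe without the lock,
# because interleaved updates could be lost.

import threading

class Counter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:      # serialize concurrent updates
            self._value += 1

    @property
    def value(self):
        return self._value

counter = Counter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.value == 4000
```

In a genuinely distributed system the same safety requirement holds, but the mutual exclusion must be provided by the resource server or a distributed locking protocol rather than an in-process lock.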
Besides difficulties in specifying the requirements for a scenario like the one above, the construction of the system model can itself become a complicated and error-prone task. One obvious source for complexity that is inherent to any multi-agent system is the fact that there are many interactions between agents that may happen at the same time. If a microscopic view is applied as described above, this typically implies that the situations of individual agents have to be considered both for determining which interactions happen at a particular point in time and for calculating their outcomes. Furthermore, agents can also interact with their environment, which could include the physical world around them but also more abstract virtual entities that are shared between agents like a database or a network. This interaction with the environment can mean that agents react to events that originate from the environment. At the same time, actions performed by agents can change the state of the environment, which in turn might trigger events or influence other agents. It is obvious that a modeling approach that is suitable in such a setting has to provide means to capture a large number of effects and constraints whose scope ranges over the whole system. Doing this in an imperative programming language can lead to code that is very hard to comprehend and reason about. This can be experienced in typical simulation frameworks for general purpose languages like C/C++, Java, or Python that are designed for optimized performance. A viable alternative for describing the effects of interactions and events on the state of agents and the environment are rule-based languages where the outcome of a rule typically determines the value of one or more state variables for the next time step. In fact, the modeling languages that are employed by model checking tools are typically rule-based. 
However, for agents with more complex control logic, a purely rule-based representation of their behavior can be just as hard to follow. Altogether, the right solution appears to be a proper combination of several paradigms.
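The rule-based style discussed above can be illustrated with a toy model. This is a sketch, not any model checker's input language; the state variables and rules are hypothetical.

```python
# Toy illustration of a rule-based update: each rule maps the current
# state to the value of one state variable for the next time step, so
# system-wide effects are stated declaratively rather than woven through
# imperative control flow.

RULES = {
    # an environment event raises the alarm, which then latches
    "alarm": lambda s: s["alarm"] or s["smoke_detected"],
    # an agent reacts to the alarm one step later
    "moving": lambda s: s["alarm"],
    # the environment variable itself persists unchanged
    "smoke_detected": lambda s: s["smoke_detected"],
}

def step(state):
    # every rule reads the *old* state, so the evaluation order of rules
    # does not matter -- a key property of this style
    return {var: rule(state) for var, rule in RULES.items()}

s0 = {"alarm": False, "moving": False, "smoke_detected": True}
s1 = step(s0)
s2 = step(s1)
assert s1["alarm"] and not s1["moving"]
assert s2["moving"]
```

The synchronous read-old/write-new discipline is what makes such rules easy to reason about and to model-check; the difficulty noted above arises when an agent's control logic needs long sequential phases that fit poorly into this one-step-at-a-time form.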
the agent experiences during its action, and negative ones related to the perceived negative impacts which it has to tolerate.
Table 3: Example of residents of the MABSiT model
As concerns the spatial aspects, the artificial environment has the characteristics of a medium-small coastal destination. It is composed of eleven areas: one central zone, four mid-central zones, four suburban zones, and finally two important areas in which high-flow attractors are located (beaches in one zone and a shopping centre in the other). In the whole artificial tourism destination there are 143 visitor and resident attractors; they include, for example, hotels or other accommodation structures, restaurants, cafeterias, museums or other expositions, monuments, shops, schools, offices, civil houses, etc. Each attractor has a predetermined physical maximum capacity in terms of number of persons and of parking places. The accommodation structures are of different types of quality/price (e.g. stars for the hotels); thus, each tourist chooses the structure in relation to their willingness to pay.
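The accommodation-choice mechanism described above can be sketched as follows. This is an illustrative reconstruction, not code from the MABSiT model; all names, star ratings, prices and capacities are hypothetical.

```python
# Sketch: accommodation structures carry a quality/price class (stars)
# and a fixed capacity; a tourist picks the highest-quality structure
# that is both affordable given their willingness to pay and not full.

from dataclasses import dataclass

@dataclass
class Accommodation:
    name: str
    stars: int
    price: float        # price per night
    capacity: int       # physical maximum number of guests
    occupants: int = 0

    def has_room(self) -> bool:
        return self.occupants < self.capacity

def choose(structures, willingness_to_pay):
    affordable = [s for s in structures
                  if s.price <= willingness_to_pay and s.has_room()]
    # prefer the highest quality class the tourist can afford
    return max(affordable, key=lambda s: s.stars, default=None)

hotels = [Accommodation("Sea View", stars=4, price=120.0, capacity=2),
          Accommodation("Central Inn", stars=2, price=60.0, capacity=5)]
pick = choose(hotels, willingness_to_pay=80.0)
assert pick.name == "Central Inn"
```

The same capacity check generalizes to the other attractors (restaurants, museums, parking places): an agent's choice set is filtered by whatever capacities are not yet exhausted at that time step.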
The paper is organized as follows: in Section II the basics of decentralized multi-agent systems are presented. Section III summarizes the main characteristics of the immune system, along with a description of different computational models associated with it. Next, the commonalities between immune system behaviours and those found on board an unmanned vehicle are presented in Section IV; in Section V the developed framework model and implementation are explained. Section VI contains the simulation environment parameters and characteristics chosen to test the proposed framework. Next, in Section VII the simulation results are analyzed and, finally, the conclusions reached are presented in Section VIII.
Much emphasis has been put on the communication interface between an artificial agent and a human user (see [Lux95] for details). In particular, this approach considers humans to be participants in the multi-agent system. Currently, a great deal of manpower has been invested in a project sponsored by the German Federal Department of Education and Research (Bundesministerium für Bildung und Forschung, BMBF): MOTIV (Mobilität und Transport im intermodalen Verkehr – Mobility and Transportation in Inter-modal Traffic) [Sie95]. This project is a united effort of the major German car companies (e.g. BMW, Daimler-Benz and Volkswagen) and supplier companies (Siemens, Bosch, etc.) to develop a travel planning support system for the next millennium. Agents will be used to represent traffic participants and service suppliers. Again, the question of scalability is most important for the overall success of the project, as its envisaged goal is to derive a system to be used (in the best case) by every traffic participant. Working with this project gives me the chance to gain industrial experience which can be used for finding scalability criteria of practical relevance. For this part I will be advised by Dr. D. Steiner and by Dr. B. Specker. These two bases – DFKI and university research on the one hand and industrial application on the other hand – provide a very fruitful environment for the envisioned goal. Furthermore, using two different multi-agent approaches and different test suites may lead to the broadness and generality necessary for this dissertation. Finally, being advised by several experts will lead – and already has led – to diverse sources of inspiration.
pre-processing pipeline execution of PARSE. Each agent that is located above another agent depends on the output of the agent below. Therefore, you can see three important dependency hierarchies that do not only consist of the pre-processing pipeline and one agent. First, the concurrent action agent and the loop detection agent depend on the action analyzer agent. This hierarchy is not usable for the approach of this thesis, as none of the agents generates or uses any hypothesis. The second hierarchy starts with the context agent, followed by the CoRef agent and the method synthesizer agent. These agents are suitable for the hypothesis approach of this thesis and could be considered later. The last hierarchy of agents to consider consists of the Wiki WSD, the Topic Detection, and the Ontology Selection agent. These three agents can also be used in the hypothesis approach of this thesis, as all of them can generate hypotheses. Furthermore, the Topic Detection agent already generates some kind of hypotheses. In summary, you can see that no cyclic dependencies occur. In general, however, one could build the hypothesis graph even in the presence of cyclic dependencies between agents. In this case, a fixed number of runs through the cycle should be defined; an agent would then appear in several layers. This way, especially the development of hypotheses could be observed, and new variants of selectors and rating functions would become possible (cf. Chapters 6, 7).
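The layering of agents by their dependencies can be sketched generically. This is an illustrative reconstruction under the assumption of an acyclic dependency graph; the agent names mirror one hierarchy mentioned above, but the function itself is not from PARSE.

```python
# Sketch: group agents into layers such that every agent's dependencies
# lie in earlier layers. With cycles present, one would instead unroll
# the cycle a fixed number of times, so an agent appears in several
# layers; here we only detect the cycle and stop.

def layers(deps):
    """deps: agent -> set of agents it depends on (assumed acyclic)."""
    result, placed = [], set()
    while placed != set(deps):
        layer = {a for a, d in deps.items()
                 if a not in placed and d <= placed}
        if not layer:
            raise ValueError("cyclic dependencies: unroll a fixed "
                             "number of times instead")
        result.append(sorted(layer))
        placed |= layer
    return result

deps = {
    "context": set(),
    "coref": {"context"},
    "method_synthesizer": {"coref"},
}
assert layers(deps) == [["context"], ["coref"], ["method_synthesizer"]]
```

Unrolling a cycle k times amounts to duplicating each agent in the cycle k times in this graph, each copy depending on the previous copy, after which the same layering applies.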
The concept of complex interactions is very mature in the area of multi-agent systems, whereas traditional algorithmic approaches prove to be inadequate for one reason or another. Since processes are still specified as scripts (for instance in BPEL or OWL-S), they are limited, in essence, to constructs of normal procedural programming such as sequence, iterate, fork, and join, which are not that different from the job control languages (JCLs) of the mainframes of the 1950s (Singh, Chopra, Desai & Mallya 2004). Imperative instruction sequencing is more appropriate for implementing components (the small); interaction protocols are more suitable for putting independent distributed components together (the large) (DeRemer & Kron 1975). Procedural structures do not represent the nature of complex interactions well. FIPA (FIPA 2002) is an example of a proposed solution for modeling complex interactions. It proposes a stack with, on top, a library of 'standard' interaction protocols which can be recombined and personalized to create models for business processes.
Table 12. The statistical examination
The prediction of corporate financial distress is an important and challenging issue that has served as an impetus for many academic research studies over the past decades. While intangible assets are widely acknowledged to be essential elements when forecasting a corporate financial crisis, they are usually excluded from prior related early warning models. The objective of this study is to utilize the attributes of intangible assets as predictive variables and to propose a multi-agent hybrid mechanism, MAHM, to increase precision in the prediction of corporate financial distress. The introduced MAHM is grounded in the hybrid model that integrates multiple dissimilar base instruments into an aggregated outcome and has proved its superior forecasting performance. The fundamental idea behind the hybrid model is to complement the errors made by any singular model.
An example of a strategic module is route generation. Travelers/vehicles need to compute the sequence of links (road segments) that they are taking through the network. A typical way to obtain such paths is to use a Dijkstra shortest path algorithm. This algorithm uses link travel times plus the starting and ending point of a trip, and generates as output the fastest path. It is relatively straightforward to make the costs (link travel times) time-dependent, meaning that the algorithm can include the effect that congestion is time-dependent: Trips starting at one time of the day will encounter different delay patterns than trips starting at another time of the day. Link travel times are aggregated from the events fed back from the mobility simulation, for example into fifteen-minute time bins, and the router finds the fastest route based on these time bins. Apart from relatively small and essential technical details, the implementation of such an algorithm is straightforward (Jacob, Marathe, and Nagel, 1999). It is possible to include public transportation into the routing (Barrett, Jacob, and Marathe, 2000); in our current work, we look at car traffic only. Note that such a routing algorithm, with small variations, can both be used to compute a behavioral response
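The time-dependent routing described above can be sketched as follows. This is an illustrative implementation under stated assumptions, not the cited authors' code: links carry one travel time per fifteen-minute bin, and the cost of a link is taken from the bin in which the link is entered.

```python
# Sketch of time-dependent Dijkstra: labels are expanded in order of
# arrival time, and each link is costed with the travel time of the
# fifteen-minute bin active at the moment the link is entered.

import heapq

BIN = 15 * 60  # fifteen-minute time bins, in seconds

def travel_time(link_bins, t):
    # pick the travel time of the bin in which the link is entered
    return link_bins[min(int(t // BIN), len(link_bins) - 1)]

def fastest_path(graph, origin, dest, depart):
    """graph: node -> list of (neighbor, per-bin travel times)."""
    queue = [(depart, origin, [origin])]
    best = {}
    while queue:
        t, node, path = heapq.heappop(queue)
        if node == dest:
            return t, path
        if best.get(node, float("inf")) <= t:
            continue
        best[node] = t
        for nxt, bins in graph.get(node, []):
            heapq.heappush(queue, (t + travel_time(bins, t), nxt, path + [nxt]))
    return None

# The direct link a->b is congested in the second bin, so a trip
# departing later is routed over the detour a->c->b.
graph = {
    "a": [("b", [100, 2000]), ("c", [300, 300])],
    "c": [("b", [300, 300])],
}
assert fastest_path(graph, "a", "b", depart=0)[1] == ["a", "b"]
assert fastest_path(graph, "a", "b", depart=BIN)[1] == ["a", "c", "b"]
```

Correctness of label-setting here relies on travel times being non-negative; a production router would also require a FIFO condition (leaving later never means arriving earlier) across bin boundaries.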
The occurrence of unexpected events not only causes inconvenience to passengers, but also creates trouble for traffic operations. Once stations and sections suffer a temporary interruption due to emergencies, passengers' route choice behavior will change greatly. According to the different situations, the affected passengers are often divided into three categories: delay passengers, detour passengers and loss passengers. Delay passengers and detour passengers will continue to travel in the network, while loss passengers will give up the ride because the delay time or detour distance is too long (Ling & Xu, 2011).
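The three-way classification can be sketched with tolerance thresholds. This is an illustration only; the threshold values and the rule that delay tolerance is checked before detour tolerance are hypothetical assumptions, not from Ling & Xu (2011).

```python
# Illustrative classification of affected passengers: a passenger who
# tolerates the delay waits (delay passenger); one who tolerates the
# extra distance reroutes (detour passenger); otherwise the trip is
# abandoned (loss passenger). Thresholds are hypothetical.

MAX_DELAY_MIN = 30    # assumed delay a passenger will wait out
MAX_DETOUR_KM = 10.0  # assumed extra distance a passenger will accept

def classify(delay_min, detour_km):
    if delay_min <= MAX_DELAY_MIN:
        return "delay"    # waits and continues on the original route
    if detour_km <= MAX_DETOUR_KM:
        return "detour"   # continues to travel over an alternative route
    return "loss"         # gives up the ride entirely

assert classify(15, 0.0) == "delay"
assert classify(45, 6.0) == "detour"
assert classify(45, 20.0) == "loss"
```

In a simulation, such a rule would be evaluated per passenger with individual tolerances, yielding the aggregate split between the three categories after a disruption.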
Such rule-based multi-agent simulations run well on current workstations and they can be distributed on parallel computers of the type “networks of coupled workstations.” Since these simulations do not run efficiently on traditional supercomputers (e.g. Cray), the jump in computational capability over the last decade has had a greater impact on the performance of multi-agent simulations than for, say, computational fluid dynamics, which also worked well on traditional supercomputers. In practical terms, this means that we are now able to run microscopic simulations of large metropolitan regions with more than 10 million travelers. These simulations are even fast enough to be run many times in sequence, which is necessary to emulate the day-to-day dynamics of human learning, for example in reaction to congestion.
In the original model of Endsuleit and Mie the n members of an Alliance share their computational state using any protocol for secure multi-party computation that is compliant with Canetti's model. By definition, such a protocol is intended for n fixed (possibly malicious) players that implement a common functionality, like playing a game. The common computations are required to be robust against up to t malicious players. To this end, the function to be implemented is translated into a t-robust variant by adding redundancy. For the execution of this new (distributed) function it is necessary to split all data inputs (i.e. function arguments) into so-called t-shares. These are nothing else than redundant data slivers, with the additional property that reconstruction of the original data requires knowledge of at least t + 1 of the shares. In this paper we do not use
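The t-share property (any t + 1 shares reconstruct the secret, t or fewer reveal nothing) can be illustrated with a minimal threshold-sharing sketch in the style of Shamir's scheme. This is not the Endsuleit/Mie construction or a Canetti-compliant protocol; the prime modulus and parameters are illustrative.

```python
# Sketch of threshold sharing: a secret is embedded as the constant term
# of a random degree-t polynomial over a prime field, and the shares are
# evaluations at distinct points. Any t+1 shares determine the
# polynomial (Lagrange interpolation at 0); t or fewer determine nothing.

import random

P = 2**61 - 1  # a Mersenne prime modulus (illustrative choice)

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over the prime field
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

t, n = 2, 5
shares = share(1234, t, n)
assert reconstruct(shares[: t + 1]) == 1234   # any t+1 shares suffice
```

Robust multi-party protocols build on exactly this redundancy: since any t + 1 honest shares recover the data, up to t malicious players can neither learn nor destroy it.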
We think that developing good strategies for these two key problems helps to perform well in the Contest.
This document is organized as follows. While the first two chapters contain the specification and the rules of the contest, the latter ones focus on how to use the software provided to the participants. Chapter 2 contains a full description of the Multi-Agent Programming Contest 2011 scenario called Agents on Mars. There we describe the environment and the semantics of the environment's evolution. Chapter 3 explains how you can connect your agents on the lowest level, that is, how to faithfully implement the client-server communication protocol. This means that you are expected to establish and maintain an authenticated TCP/IP connection to the MASSim server and communicate with it by exchanging XML messages. In Chapter 4 we explain how to start the server that simulates the scenario, providing perceptions and actions for the agents. Details about setting up a tournament on a local computer are also covered in this chapter. Additionally, in Chapter 5 we explain different means for developing agents with the MASSim platform, each one on a distinct level of abstraction. The first proposal is to use an EIS-compatible environment-interface