Users of MATSim typically come from the transport and civil engineering domains and aim to solve problems for which the agent-based modelling approach is better suited than traditional four-step modelling. The typical use case starts with the collection and preparation of data. Then several simulations are run on a cluster for several days and compared afterwards. Often, this must be repeated several times, as data may contain errors that do not become visible until simulation results are retrieved. If no automatic calibration is available, repetition of simulation runs might be needed to calibrate the model. Multi-agent simulation pays off when heterogeneous user preferences, time-dependent user reactions, and/or microscopic modelling of public transport are studied, e.g., see [4, 5, 6]. At the time these studies were undertaken, there was no default software support for the required functionality; MATSim had to be extended by custom model components. Today, some of this functionality is available in MATSim or as a contributing project. However, modelling problems at this high level of detail makes it hard to define a default methodology and implementation that suits every user. So a user quickly finds himself in the second group, the researchers.
Although it is not a technical restriction, SALMA is focused on modeling and simulating multi-agent systems. Multi-agent simulation has been adopted in many different fields, which has resulted in a broad spectrum of more or less specialized approaches. Widely used examples of domain-independent frameworks in that area are MASON (Java) [LCRP+05] and RePast (Java, C++) [Col03]. Software packages like these offer highly flexible APIs at a relatively low level of abstraction. On the other hand, there are modeling and simulation approaches that are specialized for particular applications, e.g. the MATSim framework for multi-agent transport simulations [HNA16]. SALMA tries to provide as much flexibility as possible with regard to fitting simulations to the characteristics of the modeled domain. Most of all, this includes SALMA's ability to vary the level of detail and abstraction of the system model within a broad spectrum, which was discussed thoroughly in this chapter. However, the logic-based generic representation in SALMA is inherently much more computationally expensive than optimized specialized approaches. In particular, the application of SALMA might not be practical for models with very large numbers of agents, which are typical, for instance, in more realistic traffic simulation experiments. In such cases, it could still be beneficial to use SALMA as a supporting approach for analyzing certain parts or mechanisms of the model from a microscopic perspective. This will become even more apparent when SALMA's abilities for statistical model checking are discussed in the next chapter.
environment and different decisions relating to different time horizons can be mapped. Liedtke (2009), for example, develops such a model for Germany. He explicitly models the decisions of two main agent types: the shipper and the carrier. Shippers can decide about shipment size and carrier choice. Carriers construct truck tours with a vehicle routing heuristic. Both iteratively interact with each other in a market environment and learn from past iterations. Roorda et al. (2010) set up a conceptual framework for agent-based modelling of logistic services. They identify a number of agents, their respective behaviour, and important facilities in the freight system. The agents coordinate by means of contracts, which result from market interactions. Shipper-carrier relations are set up by logistic contracts. Given those logistic contracts, the carrier makes a number of logistic decisions to fulfil them. First, the carrier decides on the transportation mode. The possibilities include using only trucks as well as intermodal combinations of truck, rail and marine transport. Secondly, for each of those transport modes (in the following we call this the transport chain), further consolidation decisions are made. That is, for each leg in the transport chain, vehicle type choice, vehicle scheduling and route choice are made. Ramstedt (2008) designs a multi-agent-based simulation of transport chains. They identify the transport chain coordinator (TCC), the transport buyer (TB) and the transport planner (TP) as key decision makers on the transport side. The TCC is the interface between product demand, production and transportation choice, matching product suppliers with transport service providers. The TB manages the transport chain and its corresponding legs. The TP is the carrier that actually owns a vehicle fleet and conducts the physical movement. Thus, transport chain choice and carrier choice are explicitly modelled.
Since fast and realistic traffic flow simulation is the key issue, before choosing a concrete simulation system, four popular traffic simulators, namely MATSim, TRANSIMS, VISSIM and SUMO, were tested [9, 10]. All the systems except SUMO gave correct results after calibration. However, comparing the systems' functionality, MATSim (Multi-Agent Transport Simulation) offers the most comprehensive set of features with respect to the research goals. First of all, it is a multi-agent, activity-based microsimulation system for daily transport demand analysis. Secondly, due to fast and efficient traffic simulation, it is able to conduct analyses for large scenarios, even covering a whole country. Last but not least, MATSim's modularity and openness (it is open-source software) allow for extending and adjusting its functionality to one's needs.
The microservice architecture pattern is a paradigm for building applications as a composition of small independent services, called microservices. Each microservice runs in its own process and communicates with other services via network calls. To establish the exchange of information between the microservices, each one must expose an Application Programming Interface (API). The microservice pattern builds on the concepts of Service-Oriented Architecture (SOA), which puts an emphasis on the design and development of highly maintainable and scalable software. Microservices manage growing complexity by decomposing large systems into a set of services. This approach focuses on loose coupling and high cohesion, and it is beneficial in terms of modularity, maintainability and scalability. Companies such as Netflix and Amazon have joined the trend of decoupling large monolithic systems into sets of independent services. Figure 1 shows a taxi-hailing company's infrastructure represented as a monolith application and a refactored version, which uses microservices.
order is announced. Thus, it is introduced into the multi-agent system as the internal mechanism of an agent. The MNL model is based on random utility theory. This model is appropriate for evaluating measures that involve various behavioral cases of announcing evacuation orders or recommendations. Here, the MNL model incorporates several kinds of information: information on the mudslide disaster, the phenomena of mudslides, communication with evacuees in the family, contact with neighbors, awareness of risk, etc. These attributes and factors compose the explanatory variables of the choice model. The choice of executing the evacuation is coded as 1 and the choice of staying at home as 0. Then the common utility value of all respondents is computed using the explanatory variables. The software used for the estimation of the model parameters was LIMDEP 7.0 [Limdep Econometric Software, 1998].
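With two alternatives coded 1 (evacuate) and 0 (stay at home), the MNL model above reduces to a binary logit. The following sketch computes the evacuation probability from a linear utility; the explanatory variables and coefficients are hypothetical illustrations, not the estimated LIMDEP parameters from the study.

```python
# Binary logit sketch for the evacuate (1) vs. stay at home (0) choice.
# Coefficients below are illustrative only, not the study's estimates.
import math

def evacuation_probability(x, beta):
    """Probability of choosing to evacuate under a binary logit model.

    V = sum_k beta_k * x_k is the systematic utility of evacuating
    relative to staying at home; P(evacuate) = exp(V) / (1 + exp(V)).
    """
    v = sum(b * xi for b, xi in zip(beta, x))
    return math.exp(v) / (1.0 + math.exp(v))

# Hypothetical attributes: [received warning, contacted neighbors, risk awareness]
x = [1.0, 0.0, 1.0]
beta = [1.2, 0.4, 0.8]  # illustrative coefficients only
p = evacuation_probability(x, beta)
```

Estimation packages such as LIMDEP fit the coefficients by maximum likelihood; the formula above is only the resulting choice probability.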
1.6. HISTORIC REMARKS
tion for a whole branch of LCS called "Michigan-style" LCS. In comparison, the 1980 dissertation of Smith at the University of Pittsburgh, introducing LS-1, inspired what would be called "Pittsburgh-style" LCS. The basic distinction between both systems can be found in their population of rules. Where the "Pitt approach" is characterized by multiple variable-length rule sets, each representing a solution to the problem, the "Michigan-style" LCS is characterized by a single rule set. In the 1980s, Holland further investigated and improved the concept of learning classifiers [21, 22, 23, 24, 25, 26, 27, 28]. He was the first to apply the later widely adopted bucket brigade algorithm (BBA) for credit assignment. Meanwhile, specific GAs were scrutinized in detail as well. Booker proposed the use of a niche-based GA on a system based on CS-1. In a niche GA, the GA acts only on small sets of rules, e.g. the match set, instead of the whole rule population. In 1986, Holland introduced his hallmark LCS, Standard CS, which would become the benchmark to compare against for many future LCS. Between the late 80s and the mid 90s, research activity slowed down in the field of classifier systems. This was mainly due to the systems' inherent complexity, which made them hard to understand, and their still narrow range of applications.
As was mentioned in chapter 3, many agent-oriented methodologies have been proposed. Even though existing methodologies are built on a strong agent-oriented foundation, they need support for essential software engineering issues such as accessibility and expressiveness. This has an adverse effect on industry acceptance and adoption of agent technology. Therefore, the benefits expected from the agent paradigm cannot yet be fully achieved. Moreover, comparing and selecting agent-oriented methodologies is difficult, as they usually address different properties of software agents and methodological aspects. As a result, a comparison framework for the evaluation of those methodologies is needed in order to show their advantages and disadvantages. Such a framework is an important factor in their improvement and development. Our framework builds on existing work that compares object-oriented (OO) methodologies [Berard 1995; Bobkowska 2005; Prasse 1998; Rumbaugh 1996; Hong 1993] and AOSE methodologies [Henderson-Sellers and Giorgini 2005; Sabas, Delisle and Badri 2002; Cernuzzi and Rossi 2002; O'Malley 2001; Sturm and Shehory 2003]. By including a range of issues that have been identified as important by a range of authors, we avoid a biased comparison. The assessment of each methodology was done by postgraduate students who developed their own designs for the same application using different methodologies and collected comments from each other while they developed their application designs. The aim was to avoid any particular bias by having a range of viewpoints. However, we have customized these criteria to the domain of MAS development. One element of originality in our framework is the use and adaptation of concepts from object-oriented software engineering for the development of MAS methodologies.
2) The decision on the returning trip can be affected by the age of a traveler, the returning distance and the location of the trip objective. Here, the returning distance was classified into three divisions based on the survey, namely 0-5.4 km, 5.5-9.4 km, and 9.5 km or greater. The districts to which travelers return lie within these three distance ranges. The eight districts in the eastward and northward parts of Sapporo City were selected as the case study districts mentioned previously. 3) Using cross tabulation, the chi-square test showed a statistically significant relation between returning distance and location. Based on this result, travelers were classified by returning distance and location, and attributes such as the rate of return, the preference of road, the knowledge of the return road on foot and the conditions of circumstances were obtained from the survey. 4) The utilized road is selected according to five conditions: minimum distance, high density, low density, a large number of shelters, and no road selected. The characteristics of each agent are composed of all the above attributes. Agents with different characteristics act according to their own thinking when an earthquake occurs.
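The chi-square test of independence on the cross tabulation mentioned in step 3 can be sketched as follows. The counts in the example table are hypothetical; the survey data are not reproduced here.

```python
# Pearson chi-square test of independence on a cross tabulation of
# returning distance classes vs. destination location. The counts below
# are illustrative only, not the survey's actual data.

def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = distance classes (0-5.4, 5.5-9.4, >=9.5 km),
# columns = two groups of destination districts.
table = [[40, 20], [30, 30], [10, 35]]
chi2 = chi_square_statistic(table)
# Significance is judged by comparing chi2 against the critical value for
# (r - 1)(c - 1) = 2 degrees of freedom (5.99 at the 0.05 level).
```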
4. Daily activity and travel patterns of individuals and (workers in) firms and institutions. These activity patterns are important for various reasons. First, they have to be carried out within the current spatial system, and in that sense generate demand for facilities (stores, work places, recreation, schools) and transportation infrastructure. If one of these is insufficient (or if the demand of an individual/household/firm changes), this may lead to adaptations such as relocation or suppression of activities. In the latter case, a demand exists for facilities or infrastructure, which may trigger changes in the physical spatial system (e.g. additional development of residential areas). Second, it is through the generation and execution of daily activity patterns that mismatches between demand and supply become evident, changing the perceived quality of the urban system and possibly leading to adaptations. For instance, if demand for road space is too high, congestion will occur, leading to a deterioration of accessibility and possibly causing households or firms to relocate.
Conceptual Model, Risk, Collaborative Design
Risk-based design is attracting significant attention in the design of large-scale products, such as airplanes and ships. In conventional risk-based design projects, many risk assessment approaches have been developed in terms of design processes and activities. Through these approaches, it is easy for designers to determine risk sources and anticipate their consequences after quantifying their probabilities. Global collaboration has become mainstream for distributing product design activities using up-to-date design tools and technology. However, although collaborative design is rewarding, it involves more uncertainties due to complicating factors. These factors are not only related to multidisciplinary tasks and enormous resources, but also concern coordination, negotiation and decision authorities within multi-agent interactions. The complexity and associated risks in planning and managing such large-scale projects are increased by the need to integrate the functions of both technical and social teams that may be distributed across geographical regions with diverse languages and cultures.
Although the projected version of WGS 84 is widely used, it is also criticized by GIS experts for its lack of accuracy. The projection not only sacrifices the poles, but it also distorts the original projection. Web Mercator uses mathematical formulas and parameters that make it incompatible with WGS 84 (National Geospatial Intelligence Agency 2014). The errors increase with distance from the Equator and can reach an offset of up to 40,000 meters. Figure 2.6 shows an overlay map of the United Kingdom, where it becomes most visible how severe the differences are. The south of the UK shows an offset of 33,000 meters, while the north shows 36,700 meters.
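The offsets quoted above can be reproduced by comparing Web Mercator's spherical northing formula with the ellipsoidal Mercator northing on the WGS 84 ellipsoid. This is an illustrative calculation (standard Mercator formulas), not the NGA's official procedure.

```python
# Northing difference between Web Mercator (spherical formulas) and an
# ellipsoidal Mercator projection on WGS 84, which approximates the
# offsets cited above. Illustrative calculation only.
import math

A = 6378137.0                  # WGS 84 semi-major axis (m)
E = math.sqrt(0.00669437999)   # WGS 84 first eccentricity

def mercator_y_spherical(lat_deg):
    """Web Mercator northing: treats the Earth as a sphere of radius A."""
    phi = math.radians(lat_deg)
    return A * math.log(math.tan(math.pi / 4 + phi / 2))

def mercator_y_ellipsoidal(lat_deg):
    """Ellipsoidal Mercator northing on the WGS 84 ellipsoid."""
    phi = math.radians(lat_deg)
    con = ((1 - E * math.sin(phi)) / (1 + E * math.sin(phi))) ** (E / 2)
    return A * math.log(math.tan(math.pi / 4 + phi / 2) * con)

# In the south of the UK (about 51 deg N) the offset is roughly 33 km,
# and it grows towards the north, consistent with Figure 2.6.
offset_south = mercator_y_spherical(51.0) - mercator_y_ellipsoidal(51.0)
```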
In order to guarantee flexibility and relevance to the current situation, the personal planning agent should always be available to the user. Furthermore, the guidance system should be as powerful as possible. Therefore, three types of user agents will be introduced: a notebook-like tool, called mini-PTM or Personal Intelligent Communicator (PIC); a system integrated into the user's car dashboard, both designed for mobile on-line usage; and a piece of software which runs on the user's PC for off-line planning. The system running on a PC is intended to have all necessary capabilities for performing the tasks required of a PTM. The capacities of PICs and dashboard tools, on the other hand, are much too small to perform complex planning and negotiation processes. Therefore, these tools communicate over the network with associated broker agents who perform the task on their behalf and report the result. Figure 6 displays this approach graphically.
Based on fieldwork data, including photos from past projects and interviews with marketing staff
Households adopted the technology based either on marketing from the business or on hearing from linked neighbours, subject to an economic threshold defining their ability to afford a Skyloo and an openness threshold defining their openness to new technology. If these thresholds were passed, households became adopters, willing to install a Skyloo. Households willing to adopt were added to a list of households for which the business could build. If the business had sufficient capital to build a Skyloo for a household, it would build the facility and issue a loan to the customer, with repayments beginning the next month. In some cases, people were on waiting lists for Skyloos to be built and had not had them built due to the project's financial situation, as was the case in Project 3, where one loan collector had a waiting list of 11 households willing to buy a Skyloo. In the model, if households had to wait more than 30 days for a Skyloo to be built, they 'un-adopted' the technology.
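The household rules above can be sketched as a single agent-step function. The class name, attribute names and threshold values are hypothetical illustrations, not the original model's parameters.

```python
# Sketch of the household adoption rule: a household adopts if exposed
# (marketing or an adopting neighbour) and both thresholds are passed;
# it un-adopts after waiting more than 30 days for the build. All names
# and values are illustrative, not the original model's parameters.
from dataclasses import dataclass

@dataclass
class Household:
    income: float
    openness: float
    adopted: bool = False
    days_waiting: int = 0

ECONOMIC_THRESHOLD = 50.0   # hypothetical affordability cutoff
OPENNESS_THRESHOLD = 0.5    # hypothetical openness cutoff
MAX_WAIT_DAYS = 30

def step(h, exposed, built):
    """One daily step for a household agent."""
    if not h.adopted and exposed:
        # Adopt only if both the economic and openness thresholds pass.
        if h.income >= ECONOMIC_THRESHOLD and h.openness >= OPENNESS_THRESHOLD:
            h.adopted = True
            h.days_waiting = 0
    elif h.adopted and not built:
        # Waiting for the business to build; give up after 30 days.
        h.days_waiting += 1
        if h.days_waiting > MAX_WAIT_DAYS:
            h.adopted = False
            h.days_waiting = 0
```

The business side (capital check, loan issuance, waiting list) would drive the `built` flag in a fuller model.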
One of the classic protocols for cooperative settings is the contract net protocol [Smi80]. It assigns a task or resource to a single agent competing with a number of other possible contractors (the sub-holons, in our case). The manager (the holon's head) announces the resource or task to be allocated to the contractors, which then submit a bid and state its cost. The manager grants the item to the bidder that stated the best offer, and all other bids are rejected. For a non-cooperative setting, auction-based protocols are better suited for the distribution of tasks and resources. Well-known protocols are the following: In the sealed-bid first-price auction, all bidders submit a sealed bid, and the bidder who offered to pay the highest price makes the deal and pays the price he actually bid. In the sealed-bid second-price auction (also called Vickrey auction), the bidder that submits the highest bid wins the competition but is only charged the price the next bidder was willing to pay. The English auction is often applied in auction houses: starting with the minimal price the auctioneer would accept, the bidders successively outbid each other until a single bidder is left. In the Dutch auction, the auctioneer starts with a very high price, which he lowers stepwise until one of the bidders accepts to buy the item at the current price.
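The winner-determination rules of the two sealed-bid auctions can be sketched in a few lines. The bidder names are hypothetical placeholders.

```python
# Winner determination for the sealed-bid auctions described above:
# first-price pays the winner's own bid, second-price (Vickrey) pays
# the runner-up's bid. Bidder names are hypothetical.

def sealed_bid_winner(bids, second_price=False):
    """Return (winner, price) for a sealed-bid auction.

    bids: dict mapping bidder -> bid amount.
    second_price=True applies the Vickrey rule: the highest bidder wins
    but pays only the second-highest bid.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    price = ranked[1][1] if second_price and len(ranked) > 1 else top_bid
    return winner, price

bids = {"a1": 70, "a2": 90, "a3": 80}
# First-price: a2 wins and pays 90. Vickrey: a2 still wins but pays 80.
```

The Vickrey rule is notable because truthful bidding is a dominant strategy, which is why it is attractive in non-cooperative multi-agent settings.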
The LRU uses three cameras for the perception of its environment, mounted on a pan-tilt unit on a mast, as can be seen in Figure 2. The two outer cameras form a stereo pair, recording synchronized grayscale images, while the center camera records color images. Dense depth data is recovered through stereo reconstruction via Semi-Global Matching (SGM) on an FPGA at a frame rate of 14 Hz. It is used for self-localization as well as obstacle avoidance, environment modelling and path planning. Fast, local self-localization is achieved through a fusion of visual stereo odometry estimates with inertial measurement data. For global self-localization as well as the generation of a 3D environment model, the local estimate and the depth images are fed into a SLAM (Simultaneous Localization and Mapping) algorithm to generate a map of the robot's surroundings for path planning.
Several studies focus on heliostat field layout patterns and configurations. In the work of Noone et al., for example, a spiral field pattern inspired by disc phyllotaxis is introduced and applied in heliostat field design and optimization. By redesigning the layout of the PS10 field using the algorithm, an improvement in optical efficiency and a reduction in the land area utilized are achieved. E. Carrizosa et al. presented a pattern-free heliostat field layout, obtained by the simultaneous optimization of both the heliostat field (heliostat locations and number) and the tower (tower height and receiver size). Cadiz et al. presented shadowing and blocking optimization procedures for a variable-geometry heliostat field. The variable-geometry concept explored by the authors allows the possibility of minimizing cosine losses by rotating the entire field. In a similar vein, Mohammed Aldulaimi and MS Soylemez suggested a new heliostat field layout arrangement that identifies heliostats with low optical efficiency and increases their heights in a bid to curb blocking losses and hence increase the total annual field efficiency. Emilo Carrizosa also suggested alterations to the field by considering different heliostat sizes. Mani Yousefpour Lazardjan presented a tool developed at the Solar-Institut Julich (SIJ) primarily for the optimization of a novel micro-heliostat concept. In a novel and unconventional heliostat field layout design, Danielli et al. developed the concatenated micro-tower (CMT). In this configuration, dynamic receivers mounted on arrays of small towers enable heliostats in mini subfields to direct sunlight with minimal cosine losses, thus improving the field's overall optical efficiency. N. Cruz et al. also developed a genetic algorithm that generates a continuous pattern-free field layout. The
Each person in the synthetic population obtains a second plan that uses the alternative mode. With this population, the simulation is again run for 600 iterations. As in the previous simulations, 10 % of the virtual persons may shift their departure times while another 10 % seek a different route between origin and destination in the air transport network. Additionally, a further 10 % of virtual persons may change mode, i.e. they can switch between the air traffic mode and the alternative mode. After 500 iterations, all choice modules are switched off; thus, for the last 100 iterations, the logit model is used by passengers to select one of their plans. From the output of the 600th iteration, the same numbers as for the previous simulation runs are calculated and shown in Tab. 2. If the speed of the alternative mode is 100 or 150 km/h, the mean square error is quite similar to the previous results while the mean relative error is even smaller. The number of stuck passengers, however, is remarkably reduced from approx. 1500 to 185 or even 69. Alternative mode speeds higher than 150 km/h further reduce the number of stuck passengers while the relative error remains quite similar. In contrast, the mean square error increases with the speed set for the alternative mode.
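The logit-based plan selection used in the final 100 iterations can be sketched as follows: each agent keeps several plans with scores and draws plan i with probability proportional to exp(beta * S_i). The beta value and plan scores below are illustrative, not the study's calibrated parameters.

```python
# Multinomial-logit plan selection sketch: an agent picks plan i with
# probability exp(beta * S_i) / sum_j exp(beta * S_j). beta and the
# scores are illustrative values only.
import math
import random

def select_plan(scores, beta=1.0, rng=random):
    """Draw a plan index from a logit distribution over plan scores."""
    weights = [math.exp(beta * s) for s in scores]
    total = sum(weights)
    r = rng.random() * total
    cum = 0.0
    for i, w in enumerate(weights):
        cum += w
        if r <= cum:
            return i
    return len(scores) - 1  # numerical safety net

# An agent with an air-mode plan (score 10) and an alternative-mode plan
# (score 8): the air-mode plan is chosen with probability ~0.88.
choice = select_plan([10.0, 8.0], beta=1.0)
```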
To analyze the temporal behavior of the signal algorithm, queue lengths of a simulation with fixed demand have been analyzed over time. Figure 4 shows vehicle queues for each link from second 1800 to 2000 of the simulation for a total occupancy rate of 0.7 (a flow of 900 veh/h on major roads), at which the stabilizing regime is already intervening. One can clearly identify a cyclic profile in the service of signals. As the critical threshold value is defined to trigger a service once during T (assuming an average flow rate, see Figure 1), the cyclic profile repeats regularly in constant arrival situations. Looking, e.g., at the blue line, it can be seen that the link is served around second 1840, with five vehicles in the queue. About 120 seconds later, at second 1960, the system is in the same state. This confirms the intended behavior of the algorithm.
Keywords: urban rail transit network, agent-oriented modeling, dynamic passenger flow distribution,
Urban rail transit plays a key role in people's mobility. With the expansion of the network, it becomes more important to provide convenient, secure and economical transport through collaborative organization. As the foundation and core of transportation operations, the spatial and temporal characteristics of passenger flow distribution are significant for making operation plans, for collaborative organization, for adjustment in emergencies, and for the improvement of network bottlenecks. However, it is very difficult or even impossible to adopt a global mathematical model to derive new transportation solutions for such a system, due to its scale and complexity. Thus, we use computer simulation as a solution for the analysis and evaluation of these complex systems.