So far, I have considered all theoretical questions of this text within a representational framework. Beyond being the most common in today’s psychological approaches, this viewpoint seems to formulate the questions of my experiments well. However, discarding the assumption that any experience is based on an internal representation could eliminate the issues detailed above (and possibly raise new ones). While the related debates around the existence or nonexistence of a “Cartesian theater” reach far back in the history of philosophy, in psychology the non-representational viewpoint is often traced back to William James (1912). Oddly enough, the representational framework also finds a starting point in his writings (James, 1890), at least for psychologists. One may get a clearer and more contemporary picture of what this direction of thought entails from the writings of Gibson (2015), who argues that perception is the starting point that we need to understand first, keeping in mind that organisms are mobile and that perception operates in the service of action. In this functional account of perception, the two domains are not even really separate from each other, and the perception of invariant structures (objects) requires motion through time. For this reason, taking the retinal image at a given time point as the basis of visual perception is incorrect and misses the most important, dynamic and relational, pieces of information about the environment. Furthermore, if this information lies in the interaction with the environment, there is no need for complex internal computations to model the world from an impoverished input. From this perspective, all my earlier contemplations in this text about how our participants’ subjective percepts are biased are misguided, as the individual’s percept is not subjective but simply relational to his or her self, and these relations are not inferred but perceived directly.
If, for example, the stimulus resembles a person, the relationship (or “affordance”) will be very different than in the case of meaningless dots.
The research perspective on role dynamics inside sports teams is based on a case study of a professional handball team from the Second League of the Romanian Women’s Championship. The methodological design of the study rests on a mixed-methods approach that combines a sociometric analysis of group members’ relationships, at both the factual and the perceptual level, with observation during training and competition contexts and in-depth interviews with the team’s coach. The present research has a longitudinal dimension, as the study was conducted over a period of two competitive seasons (2008–2009 and 2009–2010), in two waves. As a brief remark regarding the contextual aspects of the team’s evolution, and for a better understanding of the role dynamics inside the team, it is important to note that these were also the first two years of this team’s existence. Thus, using Tuckman’s model of small-group development (Tuckman & Jensen, 1977) and adding to it the specificity of sports teams – that is, the cyclical renewal of team membership from one competitive season to the next – the case study covers the storming and norming stages of the team in its first year of existence and competitive activity, as well as the first stages of its second competitive year. As for the sociometric test, in order to analyze the relationship between the perceptual and the factual dimensions of role dynamics, the design of the research instrument was based on the crossing of two axes (Figure 1): the perceptual–factual axis and the attraction–rejection axis.
Three scientific researchers from Minas Gerais with extensive experience in the sector.
A grower from Minas Gerais with 60 years of experience in the coffee sector.
Identification of the most important coffee technologies by region
Based on the data on technologies developed for the coffee industry and on the breakpoints in the evolution of the sector’s technological trajectory identified in the first phase, we prepared a questionnaire to identify, by region, the technologies or technological ensembles most important for the development of national coffee production. The questionnaire presented the 15 technologies most used in agricultural coffee production, and each respondent selected the three technologies they considered most important for coffee production in each period of the technological trajectory of the agricultural sector. The questionnaire was administered to a random sample of representatives of the coffee industry, between October 2012 and March 2013, in the producing regions of the following Brazilian states:
Resilience was conceived as a disposition moderating the negative effects of stress; it promotes adaptation to stressful situations (Wagnild & Young, 1993; Windle, 2011). Family and individual child resilience are interconnected, and together they support an overall capacity to maintain functionality in the face of adverse life experiences. A number of scales aim to measure resilience. Windle and colleagues reviewed nineteen different scales and concluded that “the conceptual and theoretical adequacy of a number of the scales was questionable” (Windle, Bennett, & Noyes, 2011). One of the most widely used measures is the Resilience Scale by Wagnild and Young (1993). This conception of resilience encompasses five characteristics: equanimity, a balanced perspective on one’s life and experiences; perseverance, the act of persisting despite adversity or discouragement; self-reliance, trust in oneself and one’s capabilities; meaningfulness, the view that life has purpose and the valuation of one’s contributions; and existential aloneness, the realization that each person’s life path is unique. In contrast to many other health-beneficial constructs, resilience is often thought to stem from successful coping with negative experience (Windle, 2011). Resilience is therefore often connected to the concept of post-traumatic growth (Jayawickreme & Blackie, 2014). Nonetheless, some individuals possess the ability to cope with stress without having endured significant negative experiences. The idea of “invulnerable children” reflects that some people appear to be able to cope with any stressful event. Fonagy and colleagues argued that resilience is established by early childhood experiences and interactions (Fonagy, Steele, Steele, Higgitt, & Target, 1994). Thus, regardless of traumatizing experiences in later life, the foundation for resilience may already be established in any adult.
3. Long-Run Demographic Change and International Competitiveness
The basic model relies on parametric changes in the desire for children and education in order to motivate the connection between fertility, education, and competitiveness. This is analytically convenient but may not appear fully convincing intellectually. It seems more desirable to derive “the family connection” as an outcome of endogenous demographic and behavioral change based on stable preferences. In this section we extend the basic model in this direction. This allows, in the spirit of unified growth theory (Galor, 2005), for an explanation of observable cross-country differences as the outcome of a differentiated take-off to modern growth. Contemporaneous countries displaying high fertility and low investment in education are conceptualized as being at an early stage of the demographic transition, whereas countries displaying low fertility and high education have already reached a later stage of the transition. Ceteris paribus, we will not only observe that forerunners of the demographic transition produce higher income per capita than latecomers – as in, e.g., Galor and Weil (2000) – but also that forerunners are populated by, on average, more productive firms – as observed by, e.g., Gollin (2008) – and that the firms of forerunners are more likely to export.
We explored this question using the string paradigm: infants were presented with an out-of-reach object connected to a string that was within reach. Infants are known to be able to pull a string to retrieve an object attached to it from the age of 10 months. However, when 16-month-olds are presented with four strings, only one of which is connected to the toy, they often fail to pull the connected string and instead pull any string at random. To assess infants’ attentional behaviour toward the connection, we used a Tobii eye tracker with a scene camera to see which string the infants looked at when they saw someone preparing to do the task. We tested infants aged 16, 20 and 24 months.
creative – is often sharply distinguished from animal behaviour, which is characterised as instinct-driven and merely tool-using, and from machine operation, which is described as a repetitive and pre-programmed activity. If we continue to define action by the demanding features of intentionality, rationality or reflexivity that are attributed to humans only, then – no wonder – all other uses of the term “action” in everyday life and in actual technological developments would be only metaphors or even category mistakes. In that case we would miss and misunderstand the massive changes in intelligent machine design and interactive media use that open up a Pandora’s box filled with thousands of agents. These software or hardware agents, equipped with belief, desire and intention algorithms, are able to take part in manifold actions and even to change their action programs by case-based learning. Certainly, they are different from human actors, but they are also different from classical machines and media. Both features – their particular capacities for being active and interactive, and their growing population in everyday gadgets and in the worldwide web of the internet – justify the undertaking pursued in the following: to develop a more symmetrical and sophisticated concept of agency.
was described as a natural process of loose coupling, overlapping activities, experimental adaptation, and a step-by-step stabilization of a common frame for the interactions (cf. Hutchins 1998). The concept of “distributed agency” presented in this paper follows the lines started by those concepts of “distributed computing” and “distributed cognition”. The first step in constructing this concept of distributed agency has been to demonstrate that human action is distributed among many loci and instances that plan, control, and execute the activities. Distributed action means that someone is searching for significant marks, someone else is measuring the angles, a third person is plotting by drawing a line, and others are counting, communicating and correcting the data. All these interactions between them constitute an observable unit of action called navigation. This kind of distribution can also be transferred to computer operations. The action of sending a message to a certain person can be broken down into many activities at different places – encoding, packaging, addressing, transporting, and reading TCP/IP protocols – at the PC, at the server, at the local area network, or at one of the nodes of the worldwide web.
The first two experiments that have been conducted share related two-layer architectures. In both cases, the top layers inherit their structure from classical reinforcement learning algorithms (Sutton and Barto, 1998). However, the lower sensory layers differ in their composition. In the docking experiment of Ch. 3 this layer consists of canonical artificial neurons: to obtain their activations, the sum of the products of all incoming connections and the corresponding weights is computed. In contrast, the lower layer of the reaching experiment presented in Ch. 4 is composed of Sigma-Pi nodes (Softky and Koch, 1995; Weber and Wermter, 2007). The activation of post-synaptic neurons in this model depends on the weighted sum of multiplicative terms, each representing the co-activation of input units. For training, both models use a similar mechanism: the RL prediction error δ of the top layer is not only used to modulate the learning of the action weights, but at the same time to adapt the weights of the sensory neurons of the lower layer, all in a single-step procedure (for details, please see Ch. 3.3 and 4.3, respectively). In traditional cognitive science and GOFAI, perceptual learning is usually defined as the problem of extracting useful features from passively received (visual) stimuli. Subsequently, these features are manipulated to generate an output (e.g. a classification result or a movement direction). Even in robotics, pre-given or handcrafted features (cf. the anthropomorphic bias, Ch. 2.4.9) are often assumed that are not customized to the motor repertoire of the machine (Pezzulo et al., 2011). In contrast, the agent in our experiments shapes its receptive fields by enacting its world. Depending on its embodiment, its situatedness and the interplay of both, it is able to identify the relevant perceptual stimuli and encode their relation to its own movements within the neural architecture, i.e. it is able to identify the sensorimotor laws.
Furthermore, wear and tear of the robot can be compensated for by constantly adjusting the RFs to the current situation (lifelong learning). Yet another advantage of our algorithms is their modality independence: for the action-driven learning of the RFs it does not matter whether visual, auditory, proprioceptive or any other (sensory) information is given.
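As a loose illustration of the single-step procedure described above, the following sketch shows a two-layer network in which one prediction error δ modulates the weights of both the action layer and the lower sensory layer at once. All names, layer sizes, and the tanh units are illustrative assumptions, not taken from the experiments in Ch. 3 and 4:

```python
import numpy as np

# Hypothetical minimal sketch: a single TD-style prediction error (delta)
# adapts the top (action) weights and the lower (sensory) weights in one step.
rng = np.random.default_rng(0)

n_input, n_hidden, n_actions = 8, 4, 3
W_sens = rng.normal(scale=0.1, size=(n_hidden, n_input))   # lower, sensory layer
W_act = rng.normal(scale=0.1, size=(n_actions, n_hidden))  # top, actor layer
alpha = 0.05                                               # learning rate

def forward(x):
    h = np.tanh(W_sens @ x)   # canonical weighted-sum units (Ch. 3 variant)
    q = W_act @ h             # action values of the top layer
    return h, q

x = rng.normal(size=n_input)  # one arbitrary sensory input
h, q = forward(x)
a = int(np.argmax(q))         # greedy action, for brevity
reward, q_next = 1.0, 0.0     # terminal step, purely for illustration
delta = reward + q_next - q[a]  # RL prediction error

# single-step update: delta modulates learning in BOTH layers at once
W_act[a] += alpha * delta * h
W_sens += alpha * delta * np.outer(W_act[a], x) * (1.0 - h**2)[:, None]
```

For small learning rates this behaves like one gradient step on the value estimate, so the prediction error for the same input and action shrinks after the update; a Sigma-Pi lower layer would differ only in how `h` is computed from products of co-active inputs.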
So far, there exists relatively little research on the role of education in the new trade literature. Yeaple (2005) proposes a model in which homogeneous firms have access to a “high-tech technology” and a “low-tech technology”. The workforce is heterogeneous in skill level, and high-skilled workers have a comparative advantage in high-tech production. In equilibrium, exporting firms are shown to be larger, to employ more high-skilled labor, and to pay higher wages. A similar result has been derived by Manasse and Turrini (2001) for an economy in which firms are led by worker-entrepreneurs of different ability. The role of population size (or its growth rate) is not investigated in these studies. Our study, by contrast, focuses on a homogeneous workforce and the impact of its human capital endowment on firm productivity through managerial education and human capital externalities at the firm level. In conjunction with a fertility–education trade-off on the household side, it establishes a negative association between population growth and firm productivity. Finally, Prettner and Strulik (2014) analyze the differential impact of the scale of an economy (in terms of its population size) and the education of its workforce on per capita GDP relative to other countries. For their analysis, they use a trade model based on Eaton and Kortum (2001) in which they introduce endogenous education decisions of households. Consistent with empirical regularities, they show that population size has a positive effect on relative per capita GDP if and only if the countries under consideration are closed to international trade, while education always has a positive effect on relative per capita GDP, irrespective of trade openness.
IV. RESULTS AND INTERPRETATION

The curves in Fig. 12 for the wide beam show peaks that indicate the location of the longest IPP lifetimes. These peaks are narrower than their counterparts in Fig. 13 for the long beam. This is likely a result of the deliberate horizontal heating that was applied to the beam. This heating spreads the distribution of betatron oscillation amplitudes and spin tunes, thus making it harder to obtain a good match for the various settings of the sextupole fields, so the IPP lifetime drops quickly away from the optimum setting. For the data in Fig. 13, no horizontal heating was applied. Instead, the beam was first cooled as a coasting beam; afterward, bunching was turned on. The resulting beam in general had a larger longitudinal extent in COSY than the wide beam. Again, there are clear peaks in the scans of IPP lifetime, but they are broader. Without the horizontal beam spreading, the lifetime is less sensitive to the sextupole setting. This suggests that the greater longitudinal extent of the beam has not introduced another source of depolarization to replace that caused by the horizontal heating. In fact, for scans 2 (black) and 3 (blue) the best IPP lifetimes are longer than for the wide beam. This suggests that, even at its best, the sextupole cancellation of the horizontal heating was not completely effective. At the same time, the higher-order contribution that was expected to arrive for the long beam from (Δp/p)² terms has not appeared at a
professionally qualified employees (Commission for Employment Equity, 2012).
In light of this context, the following research question is explored in this study: What is the relationship between demographic characteristics and the perception of inclusion? This question contributes to practice by exploring whether group characteristics such as age, race or gender affect that group’s perception of inclusion. It should be noted here that ‘race’ is a contested sociological construct (Montagu, 1974). Montagu (1974: 62) emphatically claimed that ‘race’ is a meaningless construct based on unexamined facts and unjustifiable generalisations, and one which does not realistically define the continuous variations in biology between human beings. The researchers’ own preference is for the terms ‘ethnicities’ or ‘racio-ethnic groups’ (Cox & Nkomo, 1990), which highlight historico-social experiences and shared social identities – however, given that the formal classifications in the South African workplace are ‘race-based’ and national data is captured based on ‘race’, we chose to (reluctantly) use this term in our surveys and write-up. Understanding whether perceptions of diversity and inclusion can be attributed to a specific group characteristic allows managers and researchers to understand which groups perceive inclusion less positively. This understanding enables practitioners and the research community to develop and implement measures to shift perceptions of exclusion in favour of inclusive environments that contribute to potential high performance. This study differs from previous studies in three ways. Firstly, this study is conducted in South Africa, where the demographic constitution of groups is largely dissimilar to the US and Israeli contexts examined in those studies. Secondly, this research uses a validated inclusion scale called the InclusionIndex™ survey (April & Blass, 2010), designed and run by a company called Performance Through Inclusion.
Thirdly, the ‘usual’ demographic characteristics examined in previous studies (race, gender, age, sexual orientation, disability, education, job categories, tenure) are somewhat different in this study, which includes race, gender, age, tenure, sexual orientation, disability, position/grade, department, site location and religion. The following hypotheses are tested:
Starting from the initially separate recording of sound and images and their later synchronization with the aid of sound-film, progressing to their direct transformation by means of analog electronic media, and finally to their real-time generation in digital code, acoustic and visual phenomena gradually converged and merged into an “audiovisual substance”. This was accompanied by a shift in the relationship between seeing and hearing: for most of human history their connection was made exclusively in a subjective, sensory way; it became a technical-physical link only about 150 years ago. Since the production of images and sounds is based on algorithms, it can be set in a real-time feedback loop with seeing and hearing. This digital “audiovisual substance” is seemingly a direct correspondence to the senses of the performer using it. It can be designed individually and in a variety of ways. This means that the creative process entailed in generating audiovisual artifacts shifts from a physical instrument or apparatus to the manipulation or programming of software. But this dematerialization means that it becomes increasingly difficult for the recipient (e.g., the audience of such a live performance) to evaluate the respective artistic contribution and to distinguish it from the mere use of pre-programmed software applications. The current joke is that the guy behind the laptop in the club might as well be reading his e-mail while the public is raving to his sound or his visuals.
Our work marks a new step towards a global view of the connection between singular stochastic control problems and questions of optimal stopping by extending the existing results to multi-agent optimisation problems. A link between these two classes of optimisation problems is important not only from a purely theoretical point of view but also from a practical one. Indeed, as was pointed out in  (cf. p. 857), one may hope to “jump” from one formulation to the other in order to “pose and solve more favourable problems”. As an example, one may notice that questions of the existence and uniqueness of optimisers are more tractable in control problems than in stopping ones; on the other hand, a characterisation of optimal control strategies is in general a harder task than one of optimal stopping rules. Recent contributions to the literature (e.g.,  and ) have already highlighted how the combined approach of singular stochastic control and optimal stopping is extremely useful for dealing with investment/consumption problems for a single representative agent. It is therefore reasonable to expect that our work will increase the mathematical tractability of investment/consumption problems for multiple interacting agents.
The Nazi leadership had created the “Grand Cross of the German Eagle” in May 1937 to honor its allies abroad. Mussolini was the first recipient of the Cross; it had been pledged in June 1937 and was awarded on the occasion of Il Duce’s visit to Berlin in September of that year. The award came in six ranks – from “Grand Cross” and “Cross with Star” down to a simple medal – and was later differentiated into military and civilian versions (“with swords” and “without swords”). The German Foreign Office awarded the lesser ranks of the decoration liberally: from its inception through the end of 1939, there were 4,177 civilian and 5,718 military recipients. The Grand Cross was awarded more restrictively, though it was hardly exclusive: it was conferred 256 times between 1937 and 1940. The large majority of recipients were Italians; the remainder were Japanese, Spanish, Hungarian, and Bulgarian. In 1939, the award was amended to include a “Golden Grand Cross”, whose recipients were limited to sixteen. They included the Italian Foreign Minister Ciano, General Franco, the Japanese ambassador to Berlin Oshima, and the German wartime allies Horthy, Antonescu, King Boris, Ryti, and Tiso.39
Of course we must say that sociology developed a lot of theories and ideas helpful for communication science. Communication science should in general be seen as a cross-sectional discipline sharing many theories and approaches with psychology, political science, media studies, and other disciplines. Many fundamental theories are derived from sociology: symbolic interactionism, the work of Habermas (1987), Luhmann’s system theory (Luhmann, 2006), the cultural industry theory of Horkheimer and Adorno (1971), and Goffman’s (1971) interaction studies, for example. Also, the research methodology and most research methods of communication science as a social science stem from sociology (cf. Krotz, 2005). Thus, there is an important connection between sociology and communication science, although they are very different from one another, as we can see when comparing their basic views of human beings. We described the image of humans in sociology above as the natural being producing the material basis of its life by transforming nature. In communication science, the starting point instead is the human being as the only being that has a complex language and uses a complex system of symbols, and the only being that needs such a complex symbol system and depends on its use. It is the use of symbols and the production of meaning that makes humans a species of their own, and thus symbol production and use are basic for any theory of communication – the human being is the one and only being that is a symbolic being. At the same time, this means that logically semiotics is basic for any communication science that wants to understand human existence in a broader way than just media use.
Abstract

It is quite common that most company decisions are made based on feelings, intuitions or personal experiences. The reasons for such patterns have organizational, technical and process-oriented backgrounds. For instance, there is no structured way to deal with analytical results on both sides – organizational and technical – simultaneously. Usually, in analytics, the ones doing the analysis (e.g. data scientists) and the ones using its results (e.g. decision makers) are different persons. As a result, such a structure leads to ambiguity and misunderstanding between the involved parties. In order to bridge the existing gap between data scientists and decision makers, we introduced the
According to my experience, the above-mentioned examples are more the exception than the rule. So the question remains as to why the knowledge pool of political science is not used more frequently in political decision making. In the run-up to the seminar “Cui bono scientia Politica? A Debate on the Relevance of (Austrian) Political Science”, which took place in October 2017 in Innsbruck, I discussed with some of my former colleagues whether they would resort to political-science research in their political work. The answers they gave have only anecdotal value, but they are in line with my own experience. A number of colleagues stated that the work of political scientists would certainly be very enriching, but that they simply lacked the time for an in-depth examination of their publications. In this context it is also necessary to know that the recent past has been characterized by a strong acceleration and densification of political work. As far as the European Parliament is concerned, this has also meant an increase in meetings and appointments. In particular, the trend towards first-reading agreements often requires several trilogue 4 meetings to be scheduled in addition to the normal committee work, and requires meticulous preparation of the content as well as additional expert meetings. In addition, the more frequent use of new media in politics has shortened reaction times and requires decision-makers to issue statements before they have had enough time to obtain background information on a topic.
A third opportunity for application is to model the phenomenon of functional fixedness. Functional fixedness describes a cognitive bias that limits a person’s ability to use an object in a way other than its traditional one. In the famous candle problem, Duncker and Lees gave participants a box of tacks, matches, and a candle and asked them to attach the candle to the wall. Only very few participants came up with the solution of tacking the box to the wall and using it as a candle holder. Most participants seemed to be fixated on the box’s initial function as a tack container, which prevented them from re-conceptualizing its potential use. Given a situation with an initially empty box, participants were much more likely to solve the problem. In terms of PBPs and the PATHS model, functional fixedness could be described as committing, sometimes overly, to a particular interpretation of a situation. There is an obvious trade-off here, since a strong functional bias is a powerful way to reduce the search space in typical problem situations but may also block one from finding a solution in atypical situations. The PATHS model at times shows the same phenomenon of getting stuck in a particular type of interpretation. This happens when a hypothesis looks promising initially but turns out to be wrong later, by which time the positive feedback loop between perception and hypothesizing has led the model into a local minimum in the space of interpretations that is difficult to escape. In such situations, resetting the model and having it start over on the problem – restoring an open mind, in a way – can be faster than continuing the search. Although this involves discarding information that the model had already gathered, it can help avoid becoming stuck on a particular “garden path” that can result when unfortunate coincidences discovered early serve only to distract from the correct categories.
The PATHS model might thus shed new light on the mechanisms of functional fixedness and on how they can be limiting or beneficial to problem solving.
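The reset-and-restart strategy described above can be mimicked with a generic random-restart greedy search. This is a deliberately simplified stand-in, not the PATHS implementation: the toy cost landscape and all function names are assumptions for illustration. A run that gets stuck in a local minimum corresponds to a “garden path”, and the restart discards the accumulated state and begins from a fresh random interpretation:

```python
import math
import random

def cost(x):
    # toy landscape with many local minima; lower is better
    return x * x / 100.0 + 5.0 * math.sin(x)

def greedy(x, steps=1000):
    # commit to one path: always move to the best integer neighbour
    for _ in range(steps):
        best = min((x - 1, x, x + 1), key=cost)
        if best == x:
            return x  # stuck in a local minimum (a "garden path")
        x = best
    return x

def search_with_restarts(restarts=20, seed=0):
    # "open mind" reset: abandon stuck runs, restart from random states,
    # and keep only the best local minimum found across all runs
    rng = random.Random(seed)
    candidates = [greedy(rng.randint(-50, 50)) for _ in range(restarts)]
    return min(candidates, key=cost)
```

A single greedy run started far from the best region typically halts at a nearby local minimum, whereas restarting from several random states usually finds a much better solution, at the price of discarding everything gathered in the abandoned runs, which mirrors the trade-off discussed above.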