According to the Resource-Based View (RBV) of strategic management, analyzing the human resources of a specific firm in terms of their potential to serve as a source of sustainable competitive advantage requires an examination of, among other things, the resource value. The question of how to parameterize this value, i.e., how to calculate human capital, leads directly to an integration of RBV reasoning with market-based models of the competitive environment on the factor and product market side. However, there seems to be a tacit consensus among strategy scholars that the only adequate market mechanism to be used for resource valuation is the product market with the economic rents effectively created there. Yet this regularly ends in a tautology criticism of the RBV. It is thus the particular purpose of this paper to examine which market mechanism really is the adequate one to use for the calculation of human capital. For this, a deductive methodology is used. In advancing the idea of a product market orientation, one encounters some major dilemmas, ultimately leading to the conclusions that a product-market-based resource valuation is neither useful nor possible and that the resource value must be measurable independently of any product market success, thus invalidating the tautology criticisms at the same time. This is in direct and flagrant contradiction to the prevailing academic view. As a consequence, e.g., “Value Added Approaches” and “Return Based Approaches” of Human Capital Management are discredited as not conforming to theoretical requirements and not useful for practical business management, whereas factor-market-based methods alone prove helpful. Key words: Human Capital Calculation, Competitive Advantage,
RBV mainly concentrates on resources, capabilities and competences (generally subsumed under the overall term “resource”) that, when valuable, rare, hard or costly to imitate and exploited by the organization, can generate sustainable competitive advantage. RBV sees companies as different sets of physical and intangible resources (assets and capabilities), which determine how efficiently and effectively a company performs its activities (Barney 1997; Ghemawat and Pisano 2001; Zubac et al. 2010). This theory has also evolved in different directions, generating more specific approaches and views, such as dynamic capabilities (Teece 2007; Sirmon et al. 2007), knowledge management (Davenport et al. 1998; Alavi and Leidner 2001), and the relational view (Dyer and Hatch 2006; Chou and Chow 2009). In this increasingly open process, new insights have emerged, such as the fact that external stakeholders can become strategic resources for the firm.
One of the fundamental questions facing contemporary firms is how to gain and maintain CAs on international markets. Looking back at the works of Penrose (1959), Lippman and Rumelt (1982), Nelson and Winter (1982), Wernerfelt (1984), Barney (1986), Conner (1991), Peteraf (1993) and others, or at the contemporary resource-advantage theory developed by Hunt (1995a, 1997a,b,c) and Hunt and Morgan (1995, 1996), the resource-based view provides an acceptable framework for developing and defining CAs. The concept of defining resources as tangible and intangible entities that enable a firm to produce products or services which can be successfully placed on the markets gives firms a reliable mechanism for understanding which of the many resources are important in terms of gaining CA. Moreover, in my opinion, resource-advantage theory represents the most acceptable way to define CAs on international markets today. The main reason for this is the importance of intangibles/invisibles, i.e., non-price factors, which differentiate the position and implementation of CAs on markets. Non-price factors, based on competencies and skills, are the most important sources of CA of the firm. From this point of view we can define some of the many sources of CA which directly and indirectly influence the position and performance of the firm on international markets. In the paper I divide these sources into six groups: human resources, knowledge, environment and location, time (flexibility), innovation, and quality. All factors are interdependent, directly and indirectly influencing each other as well as the implementation of CA on the markets. Among these factors, human resources are the most important, central factor (human resource theory; Pfeffer, 1994, etc.), influencing the effectiveness of all other factors. Each of the factors is defined and measured by several variables based on the resource-advantage theory of competition.
Overcoming these barriers requires a rethinking of what GE’s former CEO Jack Welch has called an organization’s ‘social architecture’, the bringing together of individual behavior, structure, and culture, which determines a company’s long-term performance. Dolan & Garcia (2002) called this adoption of new values a ‘cultural reengineering’. And if adaptation and renovation is a complex phenomenon to understand within the general organizational context, understanding the same for HR practitioners, especially with regard to innovating in technology for enhancing strategy, has not been addressed with sufficient rigor. The study reported herein focuses on the intersection of change management and decisions about innovations, the use of online technology as the innovation driver, and the role of Human Resource Management in implementing it with a view to becoming more strategic. Moreover, the purpose of this paper is to explore the impact of new technologies on HR efficiency and effectiveness and to better understand the dynamics of adoption of new technologies. The Technology Adoption Life Cycle model (hereafter TALC) is used to position HR departments in utilizing web-based HRMS to enhance their respective efficiency/effectiveness. Research on web-based HRMS adoption and implementation is scant, anecdotal, and stems primarily from the experiences of some firms and/or consultants. It seems that too often decisions to adopt web-based HRMS are driven by network-based effects built on partnerships (e.g., Lepak & Snell 1998) and by cost considerations, without sufficient attention to strategic issues. Numerous reasons can be identified to explain why HR managers are keeping their eyes ‘wide shut’ toward these fundamental strategic HR issues. For one, many organizations streamline HR activities into information technology and simultaneously downsize their HR personnel.
The bottom line is that innovative HR technology provides more processing power to end-users and has a substantial impact on the bottom-line results of the firm due to efficiencies in workflows and downsizing (Beheshti & Bures 2000), but not necessarily on strategic issues.
Both the resource dependence theory and the resource-based view see firm resource conditions as prime drivers of alliance behavior: the probability of a firm entering into an alliance will be a function of the need to acquire external resources. However, in our view, the former is more adequate for explaining alliances involving resource-poor firms, while the latter fits better for firms that are relatively well resource-endowed. Our contribution is to show that the results predicted by these two approaches should be moderated by perceived environmental uncertainty. Specifically, the predictions of the resource dependence theory are more likely to hold in contexts of high perceived environmental uncertainty, while the resource-based view fits better when this kind of uncertainty is not very high.
perspective […] [Service-centred firms] must establish resource networks and outsource necessary knowledge and skills to the network […] and they must learn to manage their network relationships” (Vargo & Lusch, 2004, pp. 12-13).
The literature contends that the accomplishment of a competitive advantage through a service-centred view lies in the translation of the SDL principles into consistent behaviours within the firm (e.g. Vargo & Lusch, 2008). The implementation of these behaviours, in turn, has been argued to be encouraged, developed and institutionalized through the development of one or more orientations (e.g. Lusch et al., 2007; Ballantyne & Varey, 2006). This demonstrates a shift in the literature towards an understanding of the actions that a company chooses to pursue when applying the principles of a service-centred view, and illustrates a movement towards the conceptual development of what we term a service-dominant orientation (see, for instance, Gummesson, 2008). This is particularly salient for two reasons. Firstly, this step represents a significant starting point towards the development of a strategic measure of a service-centred view. This is comparable to past works, including those on the marketing concept and the resource-based view, which have provided an important lever for the theoretical development and empirical testing of their respective orientations, namely market orientation (Jaworski & Kohli, 1988; Narver & Slater, 1990; Slater & Narver, 2006) and resource orientation (Paladino, 2007; 2008; 2009). Secondly, as noted by Venkatraman (1989), the conceptualization of strategic orientations is key to enabling the development of reliable, valid measures and moving forward with empirical research.
The competence movement, developed among others by Hamel/Prahalad (1994), Sanchez et al. (1996), and Teece et al. (1997), undoubtedly offers a promising theory of sustained competitive advantage and a quite dominant framework in strategic management (cf. Bresser et al. 2000; Barney 2002). Moreover, the competence discussion, addressed by the competence-based view, has become a theoretical perspective independent of the resource-based view, although the latter can be regarded as the origin of the former. While offering management theory a framework of high relevance for explaining the roots of corporate success, its contributions to organization theory have yet to be analyzed comprehensively. In particular, answers are required as to how far the competence-based view offers a comprehensive theory of the firm. In this respect there are already a few publications (Conner 1991; Kogut/Zander 1992 & 1996; Conner/Prahalad 1996; Madhok 1996; Barney 1996; Foss 1996a & 1996b; Grant 1996; Langlois/Foss 1999; Osterloh et al. 1999; Dosi/Marengo 2000; Foss 2001; Madhok 2002). They deal with particular aspects of the theory of the firm which refer either to the resource-based or to the competence- respectively knowledge-based view. The competence-based view can offer a real alternative to other theories in use, such as the more static transaction cost approach, which focuses on contractual issues in order to explain the nature of the firm. However, a comprehensive competence-based assessment is still missing which addresses the following questions of a theory of the firm (Coase 1937; Holmström/Tirole 1989: 65; Foss 1993; Langlois/Robertson 1995: 7; Foss 1996a: 470):
than the resource-based view of competitive advantage, in order to examine the effect of HR practices on firm performance.
A series of limitations bounds the findings, conclusions, and implications of this study. The most obvious limitations stem from the sample used and the measures employed. We examined a small set of HR practices that seem to have an effect on firm growth in the Greek food industry. Given that managerial skills are to a large extent industry-specific, the generalizability of the research findings beyond the food industry remains an open question. Furthermore, given the dynamic nature of firm growth, this study measured only one instance of this dynamic phenomenon. The effects of HR practices can take years to materialize into organizational performance; for example, selective hiring and training may produce results only after several years. Moreover, high-performance work practices often yield better results in bundles than when implemented in isolation. This study focused on established firms with more than 5 years of operation. However, the stage of a firm’s lifecycle, whether growth, maturity, decline or revival (Ciavarella, 2003), can be an important factor in applying specific HR practices. Another limitation of the findings is the use of self-report questionnaires to collect data on all measures. This limits our ability to draw conclusions about the causal nature of the relationships between HR practices and firm growth. In a future study, it would be of great value to see how different the HR and MD responses are.
The possession perspective is linked to the resource-based view, which attributes sustainable competitive advantage to the ownership of firm-specific resources. From this lens, the emphasis is on the internal drivers and input factors that underlie firm competitiveness, instead of the external focus that is characteristic of the position perspective. It suggests that the firm deliberately emphasizes a particular set of factors/resources considered strategic, i.e., valuable, rare, inimitable and non-substitutable, as it builds the basis for competitive advantage (Barney, 1991). From the possession-based competition perspective, input factors can yield above-normal returns for as long as the firm is successful in maintaining their uniqueness. Therefore, barriers to imitation (an outcome of resource properties), and not barriers to entry (an outcome of structural attributes), define the nature of the competition. A derivative of the resource-based view underscores firm capabilities and shifts the focus from the resources managed by a firm to the firm’s ability to manage those resources (Teece et al., 1997). Though somewhat distinct, it also suggests that factor market conditions and organizational abilities are key determinants of performance differences among rival firms.
In the area of human capital, Coff and Kryscynski (2011) highlight that a key aspect of creating value and competitive advantages through human capital is the integration and interaction of the individual level (micro) and the organizational systems of human resources management (macro). These authors point out that the combination of idiosyncratic individuals and organizational systems for attracting, retaining and motivating talented employees may be among the most powerful isolating mechanisms that can reduce imitation by competitors. Ployhart and Moliterno (2011) propose a multilevel model to analyze the emergence of human capital as a firm resource connecting the micro, intermediate and macro levels. There are three main parts to this model. First, drawing on the field of psychology, the origins and sources of human capital are cognitive (general cognitive ability, skills, experience) and non-cognitive (personality, interests) characteristics at the individual level. Second, these individual characteristics are combined and amplified through interaction processes at the group and team level. Third, human capital as a collective firm resource emerges through these processes.
migrate currently running instances and schedule resources during the instance migration process has become a widely debated issue in workflow flexibility research.
Aiming to solve the grid workflow scheduling problem, Sucha Smanchat et al. proposed a scheduling algorithm for multi-parameter-sweep workflow instances based on resource competition (Smanchat, Indrawan & Ling, 2011). Rizos Sakellariou et al. considered the resource allocation problem for a single activity instance of the workflow and set the earliest completion time of that activity instance as the goal of their resource scheduling method (Sakellariou, Zhao, Tsiakkouri & Dikaiakos, 2007). R. Buyya et al. analyzed the relationship between the overall deadline of a workflow instance and the load of each activity instance in order to estimate the deadline for each activity’s running time, thereby decomposing the workflow instance resource scheduling problem into multiple scheduling problems (Yu, Buyya & Tham, 2005). G. B. Tramontina et al. adopted a variety of allocation rules (First In First Out, Earliest Due Time, Service In Random Order, and Shortest Processing Time) in order to schedule resources among multiple workflow instances (Tramontina & Wainer, 2005).
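The four allocation rules adopted by Tramontina & Wainer can be expressed as simple priority orderings over a queue of waiting activity instances. A minimal sketch in Python; the task tuple layout and the numbers are illustrative assumptions, not taken from the cited papers:

```python
import random

# Each waiting activity instance: (arrival_order, due_time, processing_time)
tasks = [(0, 9.0, 4.0), (1, 5.0, 2.0), (2, 7.0, 3.0)]

def fifo(queue):
    # First In First Out: serve in order of arrival
    return sorted(queue, key=lambda t: t[0])

def edt(queue):
    # Earliest Due Time: the instance with the closest deadline first
    return sorted(queue, key=lambda t: t[1])

def spt(queue):
    # Shortest Processing Time: the quickest instance first
    return sorted(queue, key=lambda t: t[2])

def siro(queue, seed=42):
    # Service In Random Order: a uniformly random permutation
    rng = random.Random(seed)
    shuffled = list(queue)
    rng.shuffle(shuffled)
    return shuffled
```

Each rule returns the order in which a free resource would pick the next instance; in a real engine the queue would be re-evaluated whenever a resource becomes available.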
This paper presents the prototype of a lexicographic resource for spoken German in interaction, which was conceived within the framework of the LeGeDe project (LeGeDe = Lexik des gesprochenen Deutsch). First of all, it summarizes the theoretical and methodological approaches that were used for the initial planning of the resource. The headword candidates were selected by analyzing corpus-based data. To this end, the data of two corpora (written and spoken German) were compared with quantitative methods. The information that was gathered on the selected headword candidates can be assigned to two different sections: meanings and functions in interaction.
While research into the effects of the adoption of a specific strategic vocabulary on the strategic agenda of a firm remains quite limited (Ocasio et al. 2018), prior research shows that the choice of a specific vocabulary affects which strategic issues are attended to and how attention can be shifted with the change or the adoption of a new vocabulary (Nigam and Ocasio 2010; Ocasio and Joseph 2008). The adoption of a specific vocabulary in the headquarters of an MNC can be highly influential in shifting the distribution of the whole company’s attention. Therefore, the introduction of a specific vocabulary is also likely to be highly contested. One must choose the vocabulary and language that are adopted throughout the corporation and how much variance is allowed across the different divisions, functions, and regions. Even in corporations with a common corporate language (e.g., Harzing and Pudelko 2013; Peltokorpi 2015), different degrees of fluency and proficiency in that language can influence how managers from different parts of the organization can shape the attention of corporate headquarters. The existence of regional and divisional headquarters can help alleviate this challenge by acting as a two-way “translation service” between the global headquarters unit and the subsidiaries. Thus, the different headquarters units can be seen as translators of the “corporate strategy language” into the divisional, functional, or regional contexts and of the “regional or local strategy language” to the corporate level, enabling both the global headquarters unit and the subsidiaries to better attend to each other’s strategic issues. Even without institutional, cultural, or language distances, differences between the business logics or organizational cultures of the different parts of the organization (e.g., different functions) mean that they may benefit from the translation “services” provided by the functional headquarters.
Sometimes global concepts, such as “Digitalization”, “One-company strategy”, or a specific strategic vision that the global headquarters is strongly promoting, can be highly influential in penetrating the whole organization and enabling the attention of the whole MNC to be directed towards a common goal. Yet, even then, translation to the regional or divisional level is necessary for the different subsidiaries to understand their roles in implementing the strategy.
In view of these reflections, we can well understand the severity of the second consequence of this attitude: endorsing an offer focused on the idealistic and escapist values held by smaller groups. In fact, the result is that the protection of cultural heritage becomes a choice made by groups that are too small and have too little influence on the national economy to prevail over choices that conflict with cultural heritage or are simply indifferent to it. The only rational solution that is consistent with the spirit and complexity of democracy lies in the formation of a deep community preference for the survival of cultural heritage. Therefore, in view of the current situation, the necessary improvement will only emerge from the adoption of a concept of cultural heritage as an economic resource. However, this approach must be functionally related to the socio-economic notions of ‘utility’ and ‘needs’, which unfold according to multiple possibilities of use, depending on the physical or immaterial quality of humans’ daily existence. Only an enhancement so conceived can communicate the significance of the heritage to sufficiently large and diverse groups of people. At the same time, a decisive impetus to the formation of a community preference founded on the widespread appreciation of the functions of these assets as ‘productive resources’ (in both cultural and economic terms) and as a qualitative component of the environment can certainly be derived from the growing success of the principles of the knowledge economy, in which the cultural object plays a significant part.
In fact, the new current context, in which the recognition of the market value of historical heritage is accompanied by the social emergence of higher immaterial needs (needs that William Stanley Jevons had already identified during the nineteenth century in culture, art and beauty, and that postmodern disenchantment enlarges and reinterprets in terms of the ‘pleasure’ dimension), involves a demand for landscape and historical culture, and not merely for escapism, which must be satisfied on a mass level. Outside of this context, continuing to believe that the problems can be solved merely by increasing funding for the restoration and operation of monuments and museums would be a naive illusion, similar to that of one who believes that a sufficient remedy against the damage caused by a certain type of industrial development lies only in stimulating the progress of depollution techniques.
SDMA allows the reuse of time-frequency units in space [PNG03, ST81]. If the transmitting AP is equipped with an antenna array, its elements are controlled in amplitude and phase such that a beam is formed. The information about amplitude and phase is represented by a beamforming vector. Multiple beamforming vectors applied to the same time-frequency unit allow an AP to transmit in different directions and enable spatial multiplexing. If the channel state information (CSI), defined as the instantaneous attenuation and phase shift caused by the wireless channel, is known to the transmitting AP, various beamforming techniques leading to high data rates exist [PNG03, Qiu04, SSH04, JUN05]. However, the CSI is hard to obtain for the APs [VTL02]. Alternatives which enable SDMA although the beamforming vectors are created without instantaneous information about the channel have been proposed in the literature [VTL02, VALP06, GRB06a, RGT08]. These alternatives are presented in more depth in Section 1.3. One of these alternatives is considered in this thesis. The APs choose beamforming vectors out of a set of pre-defined vectors which are created before the operation of the relay network and without instantaneous information about the channel. The vectors chosen for a time-frequency unit used by an AP form a set, which is called a grid of beams throughout this thesis. A time-frequency unit and a chosen beam form a resource block. Resource blocks are allocated to a link between the transmitting AP and the receiving station. An adaptive allocation of bits is possible for the resource blocks. This means that different modulation and coding schemes are applied for different resource blocks. The detection of the data carried by a resource block is disturbed by noise, co-channel interference and interference which occurs between resource blocks transmitted by the same AP and using the same time-frequency unit.
This latter interference is called inter-beam interference.
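A fixed set of pre-defined beamforming vectors can be illustrated with a DFT grid, a common choice for CSI-free beamforming; using a DFT grid here is an assumption for illustration, not necessarily the set defined in this thesis:

```python
import cmath
import math

def grid_of_beams(num_antennas, num_beams):
    # Beam k is a list of per-antenna complex weights (a DFT grid),
    # fixed before operation and independent of any instantaneous CSI.
    return [[cmath.exp(2j * math.pi * n * k / num_beams) / math.sqrt(num_antennas)
             for n in range(num_antennas)]
            for k in range(num_beams)]

def received_power(channel, beam):
    # |h^H w|^2: power seen by a station when beam w is transmitted
    # over channel h (noise and inter-beam interference ignored here).
    return abs(sum(h.conjugate() * w for h, w in zip(channel, beam))) ** 2

# An AP could pick, per time-frequency unit, the beam of the grid that
# maximises the power reported back by the station.
beams = grid_of_beams(4, 4)
h = [1.0, 1.0j, -1.0, -1.0j]  # illustrative channel vector
best = max(range(len(beams)), key=lambda k: received_power(h, beams[k]))
```

For the channel above, exactly one DFT beam is phase-aligned with the channel and collects all of the power, while the other beams of the grid receive none; this orthogonality is what makes a DFT grid attractive for fixed-beam SDMA.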
The method applied is briefly explained below.
Since we wanted to use DeReKo as a representation of current written language, we have excluded data that contain conceptually spoken language, such as Wikipedia discussions, as well as the subcorpus “Sprachliche Umbrüche” from the years 1945 to 1968. One of the steps was to calculate the difference in lemma distribution between the two corpora by using different effect measures (odds ratio, %diff, relative risk, binary logarithm of relative risk, and frequency classes) and measures of statistical significance (log likelihood ratio and chi square). The lemma comparison table has been integrated into a tool we developed to quickly and easily filter and sort the data. With the help of this tool, the headword candidates can be dynamically evaluated and explored, and the parameters can be adapted to the needs of the lexicographers. After examining the results of the different measurements of the frequency comparison, we opted for the difference of the “frequency classes” (“Häufigkeitsklasse” = HK; cf. Keibel, 2008, 2009), a measurement which is relatively intuitive to understand and frequently used in German lexicography (cf. e.g. Klosa, 2013). The most common word in a corpus is in frequency class 0, the word(s) in class 1 is (are) about half as common as the most common word(s) in class 0, the words in class 2 are about half as common as those in class 1, etc. We calculated the difference of the frequency classes of a lemma in the two corpora as the “difference of the frequency classes” (fc_diff = fc(dereko) - fc(folk)). After sorting the lemma list by descending fc_diff, we extracted about 320 one-word lemmas whose fc_diff was at least 2. A manual check of these candidates enabled us to see whether they were suitable headword candidates in the one-word lemma range for our resource. Table 1 shows the top 25 candidates, for which we can define different headword groups.
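The frequency-class measure can be computed directly from raw corpus frequencies. A minimal sketch, assuming the common definition of the Häufigkeitsklasse as a rounded base-2 logarithm of the ratio to the most frequent word; the concrete counts below are invented for illustration:

```python
import math

def frequency_class(freq, max_freq):
    # HK: the most frequent word is in class 0; words in class k are
    # roughly (1/2)^k times as frequent as the most frequent word.
    return round(math.log2(max_freq / freq))

def fc_diff(fc_dereko, fc_folk):
    # fc_diff = fc(dereko) - fc(folk); a value >= 2 flags a lemma as
    # markedly more typical of the spoken corpus (FOLK).
    return fc_dereko - fc_folk

# invented counts: a lemma rare in writing but frequent in speech
fc_written = frequency_class(40, 1_000_000)   # DeReKo side
fc_spoken = frequency_class(2_000, 100_000)   # FOLK side
```

A lemma like this, sitting many classes deeper in the written corpus than in the spoken one, would survive the fc_diff >= 2 cutoff and enter the list of headword candidates.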
2.1 Evacuation Planning
well as transshipment nodes (i.e. nodes with neither supply nor demand). Furthermore, with each directed arc (i, j) ∈ A between two nodes i ∈ N and j ∈ N, a transit time t_ij denoting the time required to move from node i to node j as well as a capacity c_ij denoting the amount of flow that can traverse the arc at any time t are associated. It should be noted that these parameters are not necessarily constant over time. Instead, transit times or capacity constraints might change over time, or some nodes might not be available at all times (e.g. due to smoke or fire). A feasible flow for this problem satisfies the capacity constraints (i.e. the flow from node i ∈ N to node j ∈ N may not exceed the capacity c_ij of the arc (i, j) ∈ A at any time t) as well as the transit times (i.e. a flow from node i ∈ N to node j ∈ N that leaves node i at time t arrives at node j at time t + t_ij). One of the earliest dynamic network flow problems was introduced by Ford and Fulkerson (1958). Here, the network consists of exactly one source and one sink node, as well as an arbitrary number of transshipment nodes. Based on such a network, Ford and Fulkerson (1958) compute the maximum flow from the source to the sink node within a specified time horizon T. This problem, referred to as the maximum dynamic flow problem, has been extended by Gale (1959) by additionally requiring the cumulative amount of flow reaching the sink node in each time period to be maximal. Gale (1959) refers to this extended problem as the universal maximum flow problem or the earliest arrival flow problem. In the context of evacuation planning, these problems can be used to estimate the maximum number of people that can be evacuated from a danger zone (source node) to a safety zone (sink node) within a given time horizon T if the actual number of affected people is not known.
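Such dynamic flow problems are classically reduced to static ones via a time-expanded network: each node i is duplicated once per time step t, and an arc (i, j) with transit time t_ij becomes a static arc from copy (i, t) to copy (j, t + t_ij). A minimal sketch, assuming integral, time-invariant transit times and capacities:

```python
def time_expanded_network(nodes, arcs, T):
    """Build the static (time-expanded) arc set for horizon T.

    nodes: iterable of node ids
    arcs:  dict mapping (i, j) -> (transit_time, capacity)
    Returns a dict mapping ((i, t), (j, t + t_ij)) -> capacity.
    """
    static = {}
    for (i, j), (t_ij, cap) in arcs.items():
        for t in range(T - t_ij + 1):
            # flow leaving i at time t arrives at j at time t + t_ij
            static[((i, t), (j, t + t_ij))] = cap
    for i in nodes:
        for t in range(T):
            # holdover arcs let flow wait at a node for one time step
            static[((i, t), (i, t + 1))] = float("inf")
    return static

# two-node example: source s, sink d, one arc with transit time 2, capacity 3
net = time_expanded_network(["s", "d"], {("s", "d"): (2, 3)}, T=3)
```

A standard static max-flow computation (e.g. Edmonds-Karp) on this expanded network then yields the maximum dynamic flow within horizon T.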
From the empirical perspective, the presence, quality and impact of firms’ organisational knowledge has been explored through the development of surveys relating to the implementation of various practices. Examples cover the development of learning organisations that create opportunities for employees to use and develop their competencies, flat hierarchies, the empowerment of workers, self-governing teams, and the use of temporary structures and lateral communications as enabled by the adoption of ICTs (see, e.g., van Alstyne (1997); Birkinshaw and Hagstrom (2000); and Hales (2002)). In their comprehensive survey of the practices intended under organisational capital, Black and Lynch (2005) argue, contrary to Prescott and Visscher (1980), that organisational capabilities do not accumulate as a by-product of production, but are the result of explicit investment decisions and the implementation of practices such as workforce training, employee voice and work design. The seminal paper of Bloom and Van Reenen (2007) introduced the World Management Survey (WMS), which has become a benchmark for many subsequent analyses of the importance of management quality. The paper details the methodology of the survey, carried out in manufacturing firms in the United States, the United Kingdom, France and Germany. In particular, management quality is defined as adopting practices in four areas: operations management, monitoring individual performance, setting and enforcing targets, and using career incentives to attract and retain talent. The authors find that their constructed measure of management quality is positively associated with productivity, profitability, Tobin’s Q, sales growth and survival, and that differences in competitive pressure and ownership structure explain the majority of the gap in management quality between the United States and the European countries. Furthermore, through a Randomised Control Trial, Bloom et al. (2013) show that the positive relationship between management quality and firm performance can be interpreted as a causal one.
The orthodox or canonical view of the concept of industrial district stems from a unique historical and social process. This restrictive version of the concept has been criticized by some authors (Paniccia 1998), who argue that only a few experiences of the Italian model could fulfill these requirements. Case studies by some authors have questioned its validity and potential (Bianchi 1994; Harrison 1994), while other studies have postulated different origins and developments of the districts (Amin and Robins 1990; Spender 1998; Staber 1998). For instance, in a recent study, Lazerson and Lorenzoni (1999) revised the basic principles of the district and were able to justify the presence of large firms in the Italian model. Along this vein, Zeitlin (1992) has proposed a more open model, incorporating both spatial and institutional conditions to allow for the integration of different realities and historical as well as social processes.
Figure 1: Major interaction domains in FOLK
The list of these different conversations (cf. Table 2) shows the broad diversity of interaction domains covered by FOLK. FOLK’s special feature is that it documents spoken German in spontaneous interaction. This distinguishes it from most other oral corpora in the DGD (see, for example, the corpus "Deutsche Standardsprache: König-Korpus", which includes read-aloud texts, in particular excerpts from the German Grundgesetz; cf. Schmidt, 2014b: 1451). After the creation of an individual account, access to the DGD is free of charge for research and teaching purposes. This makes the database with which the LeGeDe project works transparent to the scientific public. Nevertheless, one aspect with regard to FOLK should not be neglected: even though it is among the largest available corpora of its kind, with a total of 1.95 million transcribed tokens, it is still a relatively small corpus. Corpus-based methods, which up to now have been applied in lexicography to large volumes of written German, therefore need to be looked at in a new way.