Turning to the numerical realization of the bilevel optimization problem, a possible approach consists in discarding one of the two boundary conditions on the free boundary and appending it to the cost functional on the upper level by a penalty or augmented Lagrangian approach. With this strategy, solving for the state u becomes a classical linear boundary value problem with well-posed boundary data in the lower level problem. Unfortunately, as noted in [24], this approach leads to serious convergence problems. A further disadvantage noted in [24] is that, depending on the formulation, a locally optimal triplet (u, ω, Ω) might not represent a physical solution of the free boundary problem. For this reason, we adopt a segregation approach to solve the optimization problem, i.e., we first find a solution of the free boundary problem (F_ω) and then proceed to the upper level represented by the minimization of the cost functional. In this iterative procedure, (F_ω) has to be solved several times for varying ω. Therefore, one needs an efficient and robust solver for this type of problem. Possible solution strategies include trial methods, linearization methods (continuous or discrete) [7], and shape optimization methods [11]. Here we use a regularized fixed point method, which is a trial method. The main advantage of this approach is that it solves (F_ω) via a simple updating formula based only on the solution of a state system. Moreover, this method converges locally super-linearly [9].
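In heavily simplified form, the trial-method idea can be sketched as a damped fixed point iteration: the trial update is blended with the previous iterate through a damping (regularization) parameter. The map g, the parameter θ, and the scalar unknown below are illustrative stand-ins, not the actual state system of the free boundary problem:

```python
import math

def regularized_fixed_point(g, x0, theta=0.7, tol=1e-10, max_iter=200):
    """Damped fixed point iteration x_{k+1} = (1 - theta) * x_k + theta * g(x_k).

    theta = 1 recovers the plain trial update; theta < 1 adds damping,
    the 'regularization' in its simplest form.
    """
    x = x0
    for k in range(max_iter):
        x_new = (1.0 - theta) * x + theta * g(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Toy update map: the fixed point of g(x) = cos(x) (the Dottie number).
root, iters = regularized_fixed_point(math.cos, 1.0)
```

In the actual method, g would be the free boundary update computed from the solution of the state system, and the iteration would act on a discretized boundary curve rather than a scalar.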
For the two-dimensional case, it is possible to show exponential decay of the solution to this problem. The techniques used to solve the free boundary value problem are the same as in the three-dimensional case. The only difference is the rate of decay of the solution in the reflected domain. In the three-dimensional case, where we considered a half space, the reflected domain was an exterior domain, for which a precise analysis of the asymptotic behaviour of the solution is already available due to Galdi and to Finn. In the case where we add a bottom to the domain, we have to consider a layer between two fixed flat boundaries as the reflected reference domain, with the body moving in the middle between those parallel planes.
of the calculus of variations to solve the min-max and the max-min problem. The next step is to show that the functional has a saddle value, that is, that the max-min and the min-max problems are solved at the same value. To that end, we will use a tool from , a “coincidence theorem”, which is a corollary of a classical result of Knaster, Kuratowski and Mazurkiewicz; we state it here without proof.
Notice that due to the three-dimensional structure of our problem (1.1), a direct study of the associated Hamilton-Jacobi-Bellman equation with the aim of finding explicit smooth solutions (as in the two-dimensional problem of , among others) seems hard to apply. In fact, differently from, e.g., , in our case the linear part of the Hamilton-Jacobi-Bellman equation for the value function of problem (1.1) is a PDE (rather than an ODE) and does not have a general solution. On the other hand, arguing as in , we might tackle problem (1.1) by relying on a stochastic first-order conditions approach; that would allow us to characterize the unique optional solution l* of the Bank-El Karoui representation problem (cf. ) as l*_t = z*(X^x_t, Y^y_t), with z* the free-boundary surface that splits the state space into action and inaction regions. However, the integral equation for the free boundary which derives from the main result of  (i.e., [19, Th. 3.11]) cannot be found in our multi-dimensional setting. Therefore it seems very hard to obtain any information on the geometry of the free-boundary surface z*(x, y) by using only the characterization of the process l*_t.
On this account, F. Sansò pioneered almost 30 years ago a completely new concept for solving the GBVP. His method, which has become known as the gravity space approach, solves the GBVP by mapping it into a dual space. Interestingly enough, this so-called gravity space is built upon quite specific coordinates. Since for the treatment of the GBVP the gravity vector is assumed to be known throughout the Earth’s surface, its three Cartesian components are chosen to form the new independent coordinates of gravity space. This can be accomplished by using a Legendre transformation, the simplest form of a contact transformation. The outstanding achievement of F. Sansò’s approach is the fact that the resulting BVP in gravity space directly constitutes a fixed problem. That is, the boundary surface in gravity space is actually known, which is a direct consequence of the above-stated prerequisite that the gravity vector must be known throughout the surface of the Earth to solve the GBVP. The knowledge of the boundary surface is clearly in contrast to the classical free problem and a tremendous advantage of the gravity space methodology. However, the other side of the coin is the fact that F. Sansò’s approach suffers from a singularity at the origin. Moreover, F. Sansò’s methodology is physically more abstract. In contrast to the classical geodetic boundary value problem, where the potential is required in the Earth’s exterior domain, the resulting BVP in gravity space represents an interior problem in terms of the so-called adjoint potential. Naturally, such a situation is somewhat unusual. In addition, instead of coordinates that exhibit metric units, gravity space coordinates are given in the dimension of accelerations.
Finally, within the linearized form of the classical GBVP the disturbing potential must satisfy Laplace’s equation, whereas for the corresponding problem in gravity space the adjoint disturbing potential must solve the more complex Poisson equation.
Apart from the above analytical calculations, we also investigated the case of randomly oriented cracks of equal length numerically for plane strain loading, using finite difference relaxation methods; see Appendix A.5. This was done by distributing the crack centers randomly over the sample and then assigning a random angle to each crack. The number of cracks was kept constant at 100, each with a length of ten grid points; the cracks were allowed to overlap, and the size of the system was varied to obtain the respective values of the crack density parameter α. For each value of α, up to 10 runs with different initial distributions were performed. The results are shown in Figs. 6.7 and 6.8. For low crack densities, the numerical results agree with the prediction Eq. (6.22) in , but for higher values they seem to be systematically lower, probably due to the onset of percolation. The scatter in the numerical data shows the impact that differences in microstructure can have on the effective elastic modulus of the total system. Note that most numerically obtained values are considerably lower than the analytical prediction. This demonstrates that the iterated homogenization method overestimates the effective elastic moduli, since it cannot take percolation effects into account.
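The sampling procedure described above can be sketched as follows. This is an illustrative reconstruction, not the code actually used; in particular, it assumes the common two-dimensional definition of the crack density parameter, α = N a²/A, with a the crack half-length and A the sample area, which may differ from the definition in Eq. (6.22):

```python
import numpy as np

def random_crack_config(n_cracks=100, half_length=5.0, system_size=200.0, seed=None):
    """Cracks with random centres and random orientations; overlap is allowed.

    Varying system_size at fixed n_cracks and half_length sweeps the
    crack density parameter alpha = N a^2 / A (assumed definition).
    """
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0.0, system_size, size=(n_cracks, 2))
    angles = rng.uniform(0.0, np.pi, size=n_cracks)
    alpha = n_cracks * half_length**2 / system_size**2
    return centers, angles, alpha

centers, angles, alpha = random_crack_config(seed=0)
```

Each such configuration would then serve as the initial crack distribution for one relaxation run; averaging over several seeds at the same α gives the scatter visible in the numerical data.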
Properties of the solution, a Hausdorff dimension estimate of the free boundary, etc. have been derived in  and in . Moreover, in , the current authors gave a complete characterization of global two-phase solutions satisfying a quadratic growth condition at a two-phase free boundary point and at infinity. It turned out that each global solution coincides after rotation with the one-dimensional solution u(x) = λ +
6.3.1 Business models of digital platforms
In my studies, I emphasize the role of digital platforms as resource integrators. I investigate how the integration of resources of one customer group (e.g., personalized ads in search engines from advertisers) into offerings for other customer groups (e.g., search engine results page with personalized ads for end consumers) affects value creation for all actors on the digital platform. Researchers should build on this finding to examine how to integrate resources as a digital platform to maximize the value derived from customer groups. Study 4 of my dissertation proposes two central integration methods that researchers could investigate. On the one hand, one customer group’s resources could coexist with the digital platform’s value proposition toward other customer groups; this option is common in ad-financed business models, in which advertisers use the platform as a medium to advertise their products and services (S. Anderson & Gabszewicz, 2006), but the ads are not part of a platform’s content offerings (i.e., value proposition) to consumers. On the other hand, digital platforms could integrate one customer group’s resources into their offerings to other customer groups, as is the case in marketplace business models. On Booking.com, hotels’ resources (e.g., pictures and product descriptions) become part of the value proposition for consumers (e.g., information and booking). Research on advertising’s negative effect on consumers’ value perceptions (Wilbur, 2008; Bleier & Eisenbeiss, 2015) suggests that the second alternative (i.e., direct integration of one customer group’s resources into the value proposition to the other customer group) may be more promising for digital platforms than the coexistence of resources and value propositions. Researchers should test these assumptions, as well as identify possible boundary conditions, to help answer the question of suitable business models for digital platforms.
normal densities and mixing weights ŵ_k estimated as the means of the probabilities τ̂_ik. The estimates
of the parameters are used to re-attribute a set of improved probabilities of group membership, and the sequence of alternating E and M steps continues until a satisfactory degree of convergence to the ML estimates occurs. It is well known that the likelihood function of normal mixtures is unbounded, so a global maximizer does not exist (McLachlan and Peel 2000). Therefore, the maximum likelihood estimator of Ψ should be the root of the likelihood equation corresponding to the largest of the local maxima located. The solution usually adopted is to apply a range of starting solutions for the iterations. The model was fitted repeatedly using a variety of initial values. Deterministic starting values, based on separate models for the outcome obtained by K-means (Kaufman and Rousseeuw 1990), were employed and the model fitted. The model was then fitted 10 additional times based on random jittering of the starting solution, and then another 10 times based on random jittering of the estimates at convergence of the previous runs. The results in the empirical application were fairly stable with respect to the starting solution, in the sense that the same maximum of the likelihood, or a value very close to it, was invariably obtained.
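The multi-start strategy described above can be sketched in miniature: run EM from several jittered starting values and keep the run attaining the largest local maximum of the likelihood. The one-dimensional two-component mixture and all function names below are illustrative, not the model fitted in the study:

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100, seed=None):
    """Basic EM for a 1-D Gaussian mixture; returns (means, sds, weights, loglik)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Jittered start: data quantiles perturbed by noise (one 'random start').
    mu = np.quantile(x, np.linspace(0.2, 0.8, k)) + rng.normal(0, 0.1 * x.std(), k)
    sigma = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior probabilities tau_ik of group membership.
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        tau = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations.
        nk = tau.sum(axis=0)
        w = nk / n
        mu = (tau * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((tau * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        sigma = np.maximum(sigma, 1e-3)  # guard against degenerate spikes
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
           / (sigma * np.sqrt(2 * np.pi))
    ll = np.log(dens.sum(axis=1)).sum()
    return mu, sigma, w, ll

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])
# Several jittered restarts; keep the run with the largest local maximum.
best = max((em_gmm_1d(x, seed=s) for s in range(10)), key=lambda r: r[-1])
```

The floor on sigma is one crude way of avoiding the unbounded-likelihood spikes mentioned above; in practice a constrained or penalized estimator would be preferable.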
micro-determination of how this total is divided between different industries. Given the logical primacy of the macro-relations, we characterize Moseley as adopting a macroeconomic view.
8.1 The macro-determination of surplus-value
Moseley stresses an interpretation of the circuit of capital which sees a given amount of money advanced as (constant and variable) capital which reproduces itself together with an increment called surplus-value. Because it is this latter that has to be explained, the money advanced is taken as given. His approach implies that neither equation (2) nor its implication, equation (13), represent Marx’s labour theory of value. Instead, the latter is solely concerned with the determination of aggregate surplus-value in money terms on the basis of aggregate money capital advanced. While ‘aggregate’ might be taken to imply that something is aggregated, Moseley denies this on methodological grounds. Obviously the money advanced is spent on definite quantities of inputs, but what is purchased is a microeconomic issue that cannot be considered until aggregate surplus-value is first determined. So while inputs are purchased at unit prices which are presumed to be prices of production, these latter are (methodologically) posited yet undetermined and so cannot be explicitly considered at this macroeconomic stage. Similarly, since these prices determine the quantities that are purchased, those quantities cannot be explicitly considered. All that can be considered is the total quantity of money laid out, and its division into what is spent on means of production, and what is spent on labour-power.
2. LITERATURE REVIEW
2.1 Global Value Chain Participation
Several related strands of literature have provided insights on GVCs and the role of firms, particularly SMEs. The fragmentation of production approach—as found in seminal works by Jones and Kierzkowski (1990) and Arndt and Kierzkowski (2001)— refines these insights. It shows how increasing returns and the advantages of specialization of factors within firms encourage the location of different stages of production across geographical space connected by service links. Products traded between firms in different countries are components rather than final goods. Two alternative approaches have been used to quantify the magnitude of fragmentation trade. One uses national trade data obtained from the United Nations trade data reporting system to identify trade in parts and components (e.g., Ng and Yeats  and Athukorala ). It suggests that East Asia’s trade is increasingly made up of parts and components trade, suggesting that global production networks are growing in importance. Another approach—relying on input–output tables to trace value added in production networks—suggests that value added seems a more accurate means of capturing production network activity than trade data (e.g., Koopman, Powers, Wang, and Wei  and WTO and IDE-JETRO ). Neither approach, however, sheds light on factors affecting firms joining supply chains. Case studies show that large multinational corporations (MNCs), which use the region as an international production base, drive the process of production fragmentation (Kuroiwa and Heng 2008; Kuroiwa 2009).
6. Numerical results. In this section, we present several examples to illustrate the optimality of algorithm 3.1 and algorithm 3.2. For algorithm 3.2, we test the PCG method for LMAA with local Jacobi smoothing. Furthermore, in order to compare the two methods, we present examples for HBMG and HBP on locally refined meshes. We remark that LMG and HBMG are implemented with O(N) operations per iteration, where N is the number of degrees of freedom (DOFs, i.e., interior nodes or free nodes). As has been pointed out in  and , the overall complexity of HBMG (in the symmetric case, e.g., with local Jacobi smoothing) and of the hierarchical basis method used as a preconditioner for CG is O(N log(h_min)|log ε|) operations, required to reduce the initial error by a given factor ε. On the other hand, for LMAA with local Jacobi smoothing as a preconditioner for CG, O(N |log ε|) operations are required. The following implementation is based on the FFW toolbox from .
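As a generic illustration of conjugate gradients with a Jacobi (diagonal) preconditioner, the following sketch runs PCG on a one-dimensional model problem. This is not the LMAA/LMG hierarchy itself, only the plain global-Jacobi analogue, and all names are illustrative:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned CG for SPD A; M_inv is the diagonal of the inverse
    Jacobi preconditioner, applied entrywise."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r              # apply preconditioner
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# 1-D Laplacian model problem (tridiagonal, symmetric positive definite).
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b, M_inv=1.0 / np.diag(A))
```

For this constant-coefficient problem the Jacobi preconditioner is merely a scaling; the point of LMAA/hierarchical preconditioners is precisely to improve on this on locally refined meshes.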
Every Riemann surface R of genus one is conformally equivalent to a flat torus, i.e., to a quotient space C/Λ, where Λ = Zω1 + Zω2 is some two-dimensional lattice in C. The biholomorphic map from R to C/Λ, or from the universal cover of R to C, is called a uniformizing map. For a polyhedral surface of genus one, constructing a discrete uniformizing map amounts to solving Problem 3.1 with prescribed total angle Θ = 2π at all vertices. This provides us with a method to calculate approximate uniformizing maps for Riemann surfaces of genus one given in various forms. We consider examples of tori immersed in R^3 in Sect. 7.1 and elliptic curves in Sect. 7.2. (We will also consider tori in the form of Schottky uniformization in Sect. 8.2, as a toy example after treating the higher genus case.)
Boundary-layer measurements along the wing-glove surface are crucial to determine the state of the transitional boundary layer and measure the shape and amplitudes of contained disturbances. In order to acquire time-resolved velocity data along the exchangeable plexiglas measurement insert, a light-weight three-axis traversing system was developed which can be installed on either side of the wing glove. A sketch of the traversing system is provided in Figure 3.11 (a), indicating the single components with different colors. Two turrets (grey) are connected to variable mounting threads along the glove chord to both sides of the measurement insert. Both turret heads provide coaxial pivot points on which a linear traversing assembly for the spanwise direction (blue) is supported. On the moving sledge of this linear traverse, another linear traverse (green) is mounted enabling streamwise movement of a hot-wire probe support. The set of linear traverses may be rotated around the pivot point by a stepper motor positioned next to the inboard turret (red), leading to a wall-normal displacement of the hot-wire support with respect to the glove surface. The wall-normal positioning accuracy of the hot-wire probe is 0.1 mm for highly resolved boundary-layer profiles. The lateral dimensions can be approached with approximately 1 mm accuracy. NanoTech stepper motors with feedback encoders ensure repeatability of the probe positioning.
spaces and embedding theorems, we refer in particular to  (cf. also [32, pp. 385ff.]). As the boundary Γ is only Lipschitz, the norm of Sobolev–Slobodetskij spaces on Γ shall be defined by summing over disjoint straight boundary sections, which differs from the usual definition. In particular, the continuous embedding H^r(Ω) ↪ H^{r−1/2}(Γ) then
Teacher union representatives defend the contractual working conditions of all teachers at the school level; however, negotiations regarding teacher pay, pensions, and the conditions themselves (e.g. hours worked, curriculum, pupil-teacher ratios) are held at the national level. This means that it is impossible for unions to bargain only for their members, as non-union teachers employed in public sector schools will also receive any gains in benefits. Despite being able to gain from union negotiations, non-members are not required to pay any union dues. 9 These factors make the UK teacher labour market a prime example of the trade union free-rider problem: why do teachers choose to pay the costs of union membership if pay and working conditions are determined centrally?
Hyperbolic systems describe the propagation of waves with finite velocities, which in special relativity are naturally bounded by the speed of light. This fact is reflected in the beautiful mathematical structure of the equations under consideration. Maxwell’s equations and the relativistic Euler equations are typical representatives of such systems. Though the relativistic Euler equations considered here may look complicated, a detailed study shows a simpler mathematical behaviour than that of the corresponding classical Euler equations. For example, even the solution of the standard shock tube or Riemann problem for the classical Euler equations of gas dynamics may lead to a vacuum region within the shock tube, which considerably complicates a rigorous mathematical analysis of the general initial value problem. However, we will see that, at least for the so-called ultra-relativistic Euler equations, this behaviour does not occur.
Our key finding is that the rotation system according to which the apprentices move from one company to another over the course of their training creates various free-rider constellations in distributing the corporative benefits, both during the training itself and after its completion. An important issue during the training is the heterogeneity of the apprentices due to their different levels of advancement and different social attributes. The lead agency is challenged with a ‘fair’ distribution of this socially structured corporative benefit among the network companies. At the same time, the participating companies differ in size and profile, and thus also in terms of their significance for the network-based training scheme. Some are better able to assert their interests than others, which further nourishes imbalances. The assessment of whether the distribution of the corporative benefit in the form of “apprentices” is fair or not ultimately depends on the convention upon which a company relies in evaluating the quality of the young people sent by the lead agency.
“Apprentices who complete their entire training from beginning [to end] at [company X] know [company X] and will stay with [company X] afterwards. 4 But if someone has been at an airport or with a railway operator in the mountains or some place of that kind, it’s as clear as daylight that they’ll be mightily impressed and we’ll have lost them.” (Person responsible for in-company training, Public Transport Network) Free-rider problems in post-training recruitment also arise from the above-described uneven distribution of apprentices who are at different stages of training (4.2.1). Companies that train mainly final-year apprentices are in a better position to retain them by offering post-training employment before or once they have graduated. Some companies in the Region Network, for instance, demand that the lead agency send them third-year apprentices in the technical occupations – with the intention of keeping these youths for the fourth (and final) year of training and, hopefully, also after its completion. Accessing the jointly trained junior staff pool thus becomes more difficult for those companies that receive a larger number of apprenticeship novices.
There is, however, probably another reason, not captured by our hypotheses, why Turkish migrant women less often choose the path of “boundary crossing”. This conjecture is supported by the results of a qualitative study in which, on the basis of eleven group discussions with 55 migrant women from various countries, we reconstructed whether and to what extent the respondents had experienced discrimination because of their names, and which strategies they developed for dealing with the symbolic boundary between the majority society and the minority (Gerhards and Buchmayr 2018). The results show that migrant women from Turkey and the Arab world report experiences of discrimination considerably more often than other migrant groups, even though many of them are structurally well integrated into German society. On the basis of these experiences, they develop a particular kind of boundary politics. Because they feel rejected and unrecognized by the majority society, despite believing themselves to be well integrated into German society, they draw more strongly on their migrant identity. Taken together with the findings of this text, this yields the picture of a mutually reinforcing process: owing to the greater cultural distance to Germany, migrant women from Turkey have considerably lower chances of drawing on names that are common, or available in similar form, in both societies. As a result, they tend to give names that are common mainly in their country of origin. The German majority society, in turn, interprets this behaviour as a sign of a lower willingness to adapt to the symbolic order of the majority society and reacts disproportionately often with categorizations that are experienced as discriminatory.
This, in turn, leads to reactance and a further turn toward names from the country of origin.