The meanwhile well-known development program of transpiration-cooled, cryogenically operated CMC rocket thrust chambers at DLR is in general structurally characterized by the combination of CMC materials at the inner chamber liner and an outer load-carrying shell made of carbon fiber reinforced plastics (CFRP). An illustration of the current design status is given in figure 1. The picture also presents an important demonstration test under relevant operating conditions at DLR's renewed P6.1 test facility. The idea of a ceramic rocket thrust chamber came up in the middle of the nineties. The application potential of micro-porous CMCs, aimed at transpiration-cooled high-temperature structures, quickly became visible. Consequently, first system analyses and the investigation of structural chamber approaches started. In parallel, numerical flow analyses have been and will continue to be performed in the near future to handle the behaviour of the coupled coolant/hot-gas flow. Figure 2 shows the roadmap of the previous work. After positive experience over the first years, a de-coupled structural design crystallized as being constructive. It
Several methods exist to study the regenerative cooling of liquid rocket engines. A simple approach is to use semi-empirical one-dimensional correlations to estimate the local heat transfer coefficient. However, one-dimensional relations are not able to capture all relevant effects that occur in asymmetrically heated channels, such as thermal stratification or the influence of turbulence and wall roughness. Especially when using methane as the coolant, the prediction is challenging and simple correlations are not sufficient [7, 8]. An accurate NN-based surrogate model for the maximum wall temperature along the cooling channel was developed by Waxenegger-Wilfing et al. The training dataset uses results extracted from samples of CFD simulations. The NN employs a fully connected, feed-forward architecture with 4 hidden layers and 408 neurons per layer. It is trained using data from approximately 20 000 CFD simulations. By combining the NN with further reduced-order models that calculate the stream-wise development of the coolant pressure and enthalpy, predictions with a precision similar to full CFD calculations are possible. The prediction of an entire channel segment takes only 0.6 s, which is at least 1000 times faster than comparable three-dimensional CFD simulations.
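A minimal sketch of such a feed-forward surrogate, using scikit-learn. The paper's network has 4 hidden layers of 408 neurons each and was trained on roughly 20 000 CFD samples; the smaller layer widths and the synthetic input/target data below are purely illustrative stand-ins, not the actual training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# hypothetical inputs: e.g. channel geometry, coolant mass flow, pressure, heat flux
X = rng.uniform(size=(2000, 4))
# stand-in target playing the role of the maximum wall temperature (in K)
y = 400 + 600 * X[:, 0] * np.exp(-2 * X[:, 1]) + 50 * X[:, 2] - 30 * X[:, 3]

x_scaler = StandardScaler().fit(X)
y_mean, y_std = y.mean(), y.std()

# the paper's surrogate uses 4 hidden layers of 408 neurons; smaller layers
# keep this illustrative sketch fast to train
model = MLPRegressor(hidden_layer_sizes=(64, 64, 64, 64), activation="relu",
                     max_iter=500, random_state=0)
model.fit(x_scaler.transform(X), (y - y_mean) / y_std)
pred = model.predict(x_scaler.transform(X[:5])) * y_std + y_mean
```

In the paper the surrogate is chained with reduced-order models for the stream-wise coolant pressure and enthalpy; here only the wall-temperature regression step is sketched.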
Innovation was earlier seen as a radical invention accomplished by a heroic inventor. However, current theories of innovation emphasise its nature as the result of co-operation in normal social and economic activities (Kline and Rosenberg, 1986; Lundvall, 1988). The innovation process normally includes many kinds of interaction, and innovations do not have to be radical; on the contrary, they are incremental social and organisational changes as well as technological advancements. Consequently, innovations are not just the results of scientific work in a laboratory-like environment. They take place in networks, where actors of different backgrounds are involved in the process, placing new demands on innovativeness. The science-push effect as the driving force of innovations is an exception rather than a rule in these processes (Schienstock and Hämäläinen, 2001). Rather, innovations seem to presume factors like the ability to interact, learn collectively and build trusting relations between the innovating partners (Harmaakorpi, 2004). Innovativeness depends on the functioning of the innovation network rather than on an individual actor's progress in a particular scientific field.
Pressure verification tests are performed after assembly of the interrupters in the factory using sophisticated and expensive equipment, usually based on the magnetron principle [Fron 93b]. In this method, voltages up to several kilovolts and magnetic fields of several hundred millitesla are applied to the interrupter simultaneously. The ion current between contacts, or between contacts and middle shield, is a measure of the vacuum quality. This method is not useful for users because the vacuum interrupter has to be disassembled from the switchgear, with the risk of improper remounting [Dams 95]. Nevertheless, failures of vacuum interrupters during service cannot be completely excluded and it may be possible that after some years of service the interrupter internal pressure increases. Reasons for residual gas pressure rise might be out-gassing and gas desorption from different materials inside the interrupter (metallic parts and ceramic), gas permeation through the chamber walls and metallic flanges, or small gas leakage caused by excessive mechanical stress or insufficiently welded or brazed junctions. For industrial applications, leakage may occur through aging of the metal bellows due to the large number of operations. But also in distribution applications with a small number of operations, the residual gas pressure may increase by long-term diffusion or deactivation of the getter material [Dams 95]. Therefore, as the functional reliability of vacuum interrupters depends on their internal pressure, it would be practical to be able to recheck the pressure within the interrupters after several years of service. For this reason, simpler methods suitable for application on site are required. Such a method should, on the one hand, allow re-checking whether the above-mentioned threshold pressure is still maintained and, on the other hand, desirably provide information about the remaining safety margin of the vacuum.
The method should be as simple as possible, based on the measurement of only one or two electrical signals, without the need to remove the interrupter from the switchgear and with minimum investment in equipment and time [Fron 93b].
The test position P6.2 has been developed and erected in the field of gas-dynamic studies under cold gas conditions. The objectives for P6.2 are basic research in altitude simulation for rocket engines and in flow separation phenomena of advanced nozzles. A special task for the P6.2 facility is the simulation of transient environmental pressure conditions similar to the flight conditions of a launcher during lift-off.
about the acceptability of an injury criterion that is not based on the biomechanical relationship between loading to the body and injury causation (“black box”). The term “black box approach” denotes the definition of an injury criterion that is based on indirect, statistically based evidence. WG 20 has asked the EEVC Steering Committee for guidance on this approach. Provided that the EEVC Steering Committee approves the validation of injury criteria based on statistical analysis of field accident findings, WG 20 appears to have a reasonable chance of establishing a test procedure proposal in accordance with its terms of reference. WG 20 has received new indications that may delay the selection of an injury criterion (or injury assessment value). Earlier indications of good correlation to field accident data of Nkm and NIC appear to be contradicted by recent findings within the EU-Whiplash2 project, which indicate better correlation to injury risk with LNL. Therefore, further work to investigate the statistical methods behind these studies is needed, and this adds some uncertainty to the time frame of WG 20.
A special background with a high-spatial-frequency pattern is recorded by a camera. This background can be, e.g., a printed pattern or a laser speckle background. Due to refractive index gradients induced by the flow, light rays are deflected by a small angle. This causes a displacement of pixels in the recorded image compared to a reference image without flow. These displacements can then be computed by different image processing algorithms to ultimately create an image of the flow field. DLR Lampoldshausen uses the in-house developed, Matlab-based software BOSVIS (see Fig. 16), which applies the optical flow algorithm by Horn and Schunck to calculate the flow field. Unlike cross-correlation algorithms, the original resolution of the images is preserved when using optical flow algorithms. Furthermore, computationally very efficient implementations of this algorithm are already available for Matlab. In order to further improve the results of the Horn-Schunck algorithm, additional post-processing features have been implemented in the BOSVIS software.
• A re-evaluation of results of corrosion studies for disposal casks. In this study, the corrosive mass losses of different metallic and ceramic materials under saline conditions were summarized. This report is based on numerous publications, but also on the availability of a database of more than 7000 experimental data sets including details of the various long-term experiments. Most of the immersion tests were performed at temperatures above 90 °C and lasted up to more than 600 days.
The system evaluation of some transpiration-cooled rocket engine configurations gives an impression of the ranking of DLR's former test results in the KSK-KT (2008) and KSK-ST5 (2010) [1;2], both in combination with the API injector. In these test campaigns, using mainly oxidation-sensitive C/C as inner liner material, the coolant mass flow could not be reduced significantly because of emerging material degradation effects. Figure 3 shows the relation of combustion chamber pressure and coolant mass flow ratios. The amount of required coolant depends on the hot gas conditions, the inner surface area and the allowable wall temperature. The required coolant ratio decreases with larger chamber diameters and higher pressures. The current chamber design was successfully operated with coolant ratios around 7% at chamber pressures around 60 bar in the previous test campaigns. Scaling of this design already leads to required coolant ratios below 1% at thrust levels above 1000 kN. There is, however, room for further reduction of the required coolant ratio. New CMC materials show extreme thermochemical resistance without any damage formation. There is also the possibility to decrease the characteristic chamber length from the current value of approximately l* = 1.84 m to values below l* = 1 m, assuming the availability of a suitable injector.
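The trend that the required coolant ratio falls with chamber size and pressure can be made plausible with a back-of-envelope scaling argument. This is not DLR's system model; all relations and numbers below are illustrative assumptions: wall heat flux taken to rise roughly as p^0.8 (Bartz-like), hot-gas mass flow as p·D², and the cooled surface as D·L, so the ratio scales like p^-0.2·L/D.

```python
def coolant_ratio_scaling(p, D, L, k=1.0):
    """Illustrative scaling only: p in bar, D and L in m; k lumps material
    limits and coolant properties and is left at an arbitrary value."""
    q_wall = p ** 0.8        # ~ Bartz-like heat-flux scaling with pressure
    m_dot_hot = p * D ** 2   # ~ choked-flow scaling of the hot-gas mass flow
    area = D * L             # cooled inner surface
    return k * q_wall * area / m_dot_hot

# the ratio drops for a larger, higher-pressure chamber (values illustrative)
small_chamber = coolant_ratio_scaling(p=60.0, D=0.05, L=0.3)
large_chamber = coolant_ratio_scaling(p=100.0, D=0.5, L=1.0)
```

Under these assumptions the ratio varies as p^-0.2·L/D, consistent with the qualitative statement above that larger diameters and higher pressures reduce the required coolant fraction.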
Summarizing the activities in the field of innovative CMC high-performance rocket engine development, currently a multitude of single technology elements fit together in a very constructive manner. First of all, there is the transition from TRL 5 of the LOX/LH2 operation at up to 30 kN sub-scale level to TRL 5 at 60 kN full-scale level using LOX and LCH4. Secondly, further investigation and development of the improved HYPE technology is very promising, targeting future competitive rocket engines world-wide. The possibility of implementing unique TPs, given by WEPA, into the entire development program is of significant advantage. Specifically, the development and implementation capability of the new CMC journal bearing technology, promising potential long-life applicability for TPs and supported essentially by TU-KL, suits the request for reliable future RLVs. Finally, the availability of the accompanying construction of a suitable test capacity, supported by BEA, underlines the general overall capability of successfully bringing forward innovative and competitive future space transportation technology, developed by DLR in Germany.
LP1 is the lower chamber pressure (Pcc) condition and is considered stable because it has only weak coupling between injector resonances and the transverse acoustic mode in the chamber, and correspondingly low unsteady pressure amplitude. LP2 is a high-pressure LP with stronger resonant coupling. Both LPs have a similar ratio of oxidizer to fuel mass flow rate (ROF). As shown in the spectrogram in Fig. 3, while the second longitudinal mode of the LOX post (LOX 2L) was de-coupled from the first transverse mode of the combustion chamber (CC 1T) for LP1, both were coupled at around 10 kHz for LP2. This coupling raises the chamber pressure fluctuations (p′) to around ±2.5% of Pcc, evident in Fig. 3, which is close to the conventional classification threshold for combustion instability.
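The mode-matching described above can be illustrated with a simple duct-acoustics estimate. All numbers below are assumptions chosen for illustration, not values from the test data, and the LOX post is crudely modeled as acoustically closed at both ends; real boundary conditions and sound speeds depend on the injector.

```python
def longitudinal_mode(n, c, length):
    """n-th longitudinal eigenfrequency (Hz) of a duct closed at both ends."""
    return n * c / (2.0 * length)

def coupled(f_a, f_b, tol=0.05):
    """Flag resonant coupling when two frequencies lie within tol of each other."""
    return abs(f_a - f_b) / f_b < tol

c_lox = 800.0    # assumed effective speed of sound in the LOX post, m/s
L_post = 0.08    # assumed acoustic length of the post, m
f_2L = longitudinal_mode(2, c_lox, L_post)  # LOX-post 2L estimate
f_1T = 10_000.0  # chamber 1T frequency read off the spectrogram (~10 kHz)
```

With these illustrative values the 2L estimate lands at 10 kHz, i.e. on top of the chamber 1T mode, which is the LP2-type coupling scenario.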
front of the pitot probe the free stream Mach number is determined and from this the static pressure.
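For a supersonic free stream, this evaluation is conventionally done with the Rayleigh pitot formula; the sketch below (assuming a calorically perfect gas with gamma = 1.4, and with purely illustrative function names) inverts the measured pitot-to-static pressure ratio for the Mach number by bisection.

```python
def rayleigh_pitot_ratio(M, gamma=1.4):
    """Ratio of post-shock stagnation pressure to free-stream static pressure
    for a pitot probe behind a normal shock (Rayleigh pitot formula), M > 1."""
    a = ((gamma + 1) ** 2 * M ** 2
         / (4 * gamma * M ** 2 - 2 * (gamma - 1))) ** (gamma / (gamma - 1))
    b = (1 - gamma + 2 * gamma * M ** 2) / (gamma + 1)
    return a * b

def mach_from_pitot(p02_over_p1, gamma=1.4, lo=1.001, hi=20.0):
    """Invert the (monotonic) Rayleigh pitot relation by simple bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if rayleigh_pitot_ratio(mid, gamma) < p02_over_p1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Once the Mach number is known, the free-stream static pressure follows from the measured pitot pressure via the same relation.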
The free stream conditions for the experiments with the Scramjet and the isolator model are listed in Tab. 3-1. They are averaged over the test time, which is in the range of about 2-5 ms. The nozzle of the tunnel is a slender conical nozzle with a half apex angle of 5.8°. This creates small axial gradients in the test section. They have been calibrated and are listed in Tab. 3-2 for conditions I and II. For condition I it has to be noted that the calibration was done with a slightly different driver gas mixture (see chapter 3.1.3). Still, no significant offset for the gradients is expected, and they have been proven in use for the flow parameters listed in Tab. 3-2. They are also taken into account for the determination of the pressure coefficient (eq. (3.1)) and the Stanton numbers (eq. (3.2)) at the pressure probe and thermocouple positions by the evaluation of the local reference parameters. To place the isolator parts of the model in the axis of the tunnel's optical access, the model is placed in a position with the leading edge about 265 mm upstream of the nozzle exit plane. The free stream conditions in Tab. 3-2 are listed for this position.
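The data reduction referred to as eq. (3.1) and (3.2) can be sketched with the standard definitions of the two coefficients; the exact definitions in the source may differ in detail (e.g. in the choice of reference enthalpy), so the forms below are the common textbook ones with illustrative argument names.

```python
def pressure_coefficient(p_wall, p_inf, rho_inf, u_inf):
    """Cp = (p - p_inf) / (0.5 * rho_inf * u_inf^2), local reference values."""
    return (p_wall - p_inf) / (0.5 * rho_inf * u_inf ** 2)

def stanton_number(q_wall, rho_inf, u_inf, h0_inf, h_wall):
    """St = q_wall / (rho_inf * u_inf * (h0 - h_wall)), enthalpy-based form."""
    return q_wall / (rho_inf * u_inf * (h0_inf - h_wall))
```

Using the calibrated local reference parameters at each probe position, as described above, amounts to evaluating p_inf, rho_inf, u_inf and h0_inf at that axial station rather than at the nozzle exit.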
In order to save costs for obtaining experimental data to validate numerical analysis methods, a Thermo-Mechanical Fatigue (TMF) test bench was set up at the Lampoldshausen site of the German Aerospace Center (DLR) to reduce the need for expensive full-scale rocket engine tests. The test bench uses so-called thermomechanical fatigue panels representing a small section of the geometry (typically 5 - 7 cooling channels) of the hot gas wall of real liquid rocket engines. To simulate the heat load, a diode laser with a wavelength of 940 nm can provide thermal loading with heat fluxes up to q̇ = 25 MW/m² applied to an area of 10 mm x 34 mm. A mixture of supercritical cryogenic and gaseous nitrogen at a temperature of T = 160 K and a pressure of p = 50 bar serves as coolant. For TMF panel tests relevant to liquid rocket boosters, the laser is cyclically powered on for typically 200 s until rupture is visible. The heat distribution on the laser-loaded surface of the TMF panel is measured with an infrared camera, and the deformation of the surface is measured by a stereo camera system and the successive application of digital image correlation software. The fatigue life is assessed by counting the number of laser cycles.
Our main contribution is to use time-diary data to estimate effects of center-based care usage on parenting activities in Germany, a country with a universal child care system (see section 2). We do this by estimating the effects separately on (i) parents' overall time spent together with the child, (ii) the absolute amount of time spent on parenting activities, and (iii) the relative time spent on parenting activities (i.e. as a share of the time spent together with the child). 4 We estimate the activity share for parenting activities in general and also estimate effects for specific types of parenting activities such as reading and primary care. In doing so, we follow the child development literature, which distinguishes between activities that involve different levels of interaction (Kalil et al., 2012; Fort et al., 2020). We contribute to a very sparse literature addressing our question. 5 To the best of our knowledge, the only existing economic study is Kröll and Borck (2013), which uses data from the German Socio-Economic Panel (SOEP) and finds that center-based care increases maternal interactions with children. However, that analysis is based on how often mothers report having undertaken specific activities with their children in the past fortnight, rather than precise time-diary data. The few studies from other social sciences that examine the relationship between center-based care and parent-child interactions tend to find small decreases that come mostly through primary care rather than development-enhancing activities (e.g. Booth et al., 2002; Folbre and Bittman, 2004; Craig and Powell, 2013; Habibov and Coyle, 2014). However, these studies do not attempt to address selection on unobservables. None of these studies examine parenting activities as a share of time spent with the child, and few place emphasis on the specific types of activities carried out.
percentage points. What are the forces leading to such a substantial distributional shift? At first sight, since the 1980s a wide range of developments have gone in the direction of expanding the power of capital – the first engine of inequality. First, the liberalisation of capital movements has led to a surge of capital flows – for foreign direct investment and for the acquisition of financial assets – driven by a search for higher profits. Second, the growth of financial activities – the most profitable, mobile and volatile form of capital – has dominated investment patterns. In the US, the ratio of aggregate profits of the financial sector to profits of non-financial activities has increased from 20% in the 1970s to 50% after 2000 (Glyn, 2006, ch.3). The expansion of finance has led to the creation of increasingly complex markets for credit, stocks, bonds, real estate, currencies, futures, commodities, derivatives, etc., driven by a search for short-term speculative gains and leading to major bubbles – and to the financial collapse of 2008. Third, international production systems have emerged as a result of the use of new technologies – in information and communication and other fields – and of the freedom of movement of capital; this has greatly reduced the power – and the employment – of labour in advanced countries, with a corresponding fall in wages. We need to understand in which specific ways these developments affect inequality.
Former studies have shown frozen and oscillating wavy deformations. Kocourek et al. performed experiments on a liquid Galinstan drop which they submitted to a high-frequency magnetic field of about 20 kHz. They were able to squeeze the drop up to a critical current before the drop started to oscillate in a mix of modes. Interestingly, the observed oscillations correspond to the capillary eigenfrequencies, which are usually far below the magnetic field frequency. An exception was the “starfish experiment” of Sneyd et al., where the frequency of the Lorentz forces acting on a liquid mercury drop matched its eigenfrequencies. The results were very regular single-mode oscillations. In an experiment of Perrier et al., a mercury drop was placed in a homogeneous high-frequency magnetic field of about 14 kHz. Here, the drop developed a wavy pattern which remained static. Further current increase led to pinching and kidney-like shapes of the drop. Another high-frequency experiment was done by Mohring et al., who used an annular gap filled with Galinstan. Lorentz forces on the horizontal free surface on the upper side caused short-waved ripples, which moved irregularly, in a first state. Upon field intensification, a superimposed long-waved static deformation was observed as a second state and finally one or several simultaneous pinches at the bottoms of the long-wave valleys. The vertical gap of Mohring's annulus allowed only 2D motions and still brought a lot of unexpected results.
terms. This formalism needs to be extended by incorporating a classical chromofield. The evolution of partons can be obtained by solving the Schwinger-Dyson equation defined on a closed-time path. This is a formidable task because it involves a non-local, non-linear integro-differential equation and because of its quantum nature. There are two scales in the system: the quantum (microscopic) scale and the statistical-kinetic (macroscopic) one. When the statistical-kinetic scale is much larger than the quantum one, the Schwinger-Dyson equation may be recast into the much simpler form of the kinetic Boltzmann equation by a gradient expansion. The Boltzmann equation describes the evolution of the particle distribution function in momentum-coordinate phase space, and can be solved numerically for practical purposes. For a non-equilibrium system of quarks and gluons in a chromofield, the distribution function also depends on the classical color charge of the parton, since color is exchanged between the chromofield and partons and among partons themselves. In this case, the Boltzmann equation also describes the evolution of the parton distribution function in color space [2,3]. While the Boltzmann equation describes the kinetic and color evolution of the hard parton system, soft partons are normally treated as a coherent classical field whose evolution may be described by an equation similar to the Yang-Mills equation. Therefore, one should study the transport problem for hard partons in the presence of a classical background field. For example, in high-energy heavy-ion collisions, minijets (which are hard partons) are initially produced and then propagate in a classical chromofield created by the soft partons [4–6]. In this situation it is necessary to derive the equations of the gluons and the classical chromofield to study the formation and equilibration of the quark-gluon plasma.
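In one common classical-transport convention (signs and normalisations vary between references, so this should be read as an illustrative form rather than as the paper's own equation), the color-space Vlasov-Boltzmann equation for the hard-parton distribution f(x, p, Q) in a background chromofield reads

```latex
p^{\mu}\left[\frac{\partial}{\partial x^{\mu}}
  - g\,Q_{a}F^{a}_{\mu\nu}\,\frac{\partial}{\partial p_{\nu}}
  - g\,f_{abc}\,A^{b}_{\mu}Q^{c}\,\frac{\partial}{\partial Q_{a}}\right] f(x,p,Q)
  = C[f]
```

where the F-term is the non-Abelian Lorentz force transporting partons in momentum space, the A·Q term precesses the classical color charge (this is the color-space evolution mentioned above), and C[f] collects the collision terms.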
Hence we see that the issues we want to address in this paper have practical implications in ultra-relativistic heavy-ion collisions.
The ICA is a statistical technique used to reveal the independent components hidden in the observed data. This allows the identification of the leading independent structures appearing during the burning process and representing some physical processes involved in the observed phenomenon. As said in Chap. 4, before applying the ICA algorithm, the video data are first projected onto their POD modes, in order to operate on a lower-dimensional dataset that still retains the most important information and has a higher signal-to-noise ratio. According to the previous study on the energy fraction carried by each eigenvalue (see Fig. 7.10 and 7.11), it is possible to state that no big differences in the main process dynamics are observed when 25, 50 or 100 modes are considered (see Sec. 7.2.1). On the other hand, when the ICA is performed, important differences are found when the data are projected onto 10, 25, 50 or 100 POD modes and the same number of independent components is computed. In particular, the more the data are filtered (which means the lower the number of retained POD modes), the better the main features of the combustion process are described. This is due to the fact that samples of observations of random variables converge to a Gaussian distribution when the number of observations is sufficiently large. This means that the larger the dataset, the more Gaussian the variables. Since the ICA technique starts from the assumption that the analyzed variables need to be non-Gaussian, the algorithm works better when applied to a lower-dimensional and less noisy dataset. In order to prove this, the video data were projected onto 10 and 100 POD modes, before computing 10 independent components. The results show that, in the first case (10 POD modes, 10 ICs), the average flame structure is recovered, while in the second case (100 POD modes, 10 ICs) only spurious fluctuations are found, see Fig. 7.14.
This means that either the original signal is too noisy (thus, the dataset has to be reduced to a lower-dimensional system) or 10 ICs are too few to describe such a large dataset.
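The POD-then-ICA pipeline described above can be sketched with scikit-learn, using PCA as the POD step and FastICA for the independent components. The video snapshot matrix is replaced here by a synthetic mixture of two non-Gaussian sources, so the data, dimensions and mode counts are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)
# two non-Gaussian stand-in "physical" sources mixed into 40 "pixels"
s1 = np.sign(np.sin(3 * t))       # square wave
s2 = ((t * 1.3) % 1.0) - 0.5      # sawtooth
S = np.c_[s1, s2]
A = rng.normal(size=(2, 40))      # random mixing matrix
X = S @ A + 0.05 * rng.normal(size=(2000, 40))  # noisy snapshot matrix

# step 1: project onto a small number of POD/PCA modes (dimensionality filter)
pod = PCA(n_components=5).fit(X)
X_pod = pod.transform(X)

# step 2: extract the same number of independent components from the filtered data
ica = FastICA(n_components=5, random_state=0)
ics = ica.fit_transform(X_pod)
```

As in the text, keeping only a few POD modes before the ICA step both denoises the data and keeps the retained variables strongly non-Gaussian, which is what FastICA's contrast function relies on.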
characterization of the competitive firms, identifying their behavior in order to be able to propose actions for the promotion of competitiveness according to their needs.
Several authors affirm that competitiveness is built in a highly localised process, based on firms located around one or various industries that converge or are intertwined with one another (Porter, 1998a, 1998b; Grant, 1996; Mintzberg and Lampel, 1999). In a complementary way, it is affirmed that the strategy of the business should be based on its resources and internal capacities, these factors having preponderance in the market (Grant, 1996b).
phenomena we are interested in, while still leaving out sources of noise and errors (such as the aft and rear end of the slab and the bottom of the fuel). The images are then exported to MATLAB® and converted from true-colour RGB to binary data images, based on a luminance threshold. This allows us to work only with sparse matrices, thus saving computing time and space. The background noise, which usually consists of small light spots (most likely burning paraffin droplets), is removed to better detect the flame edge. The thresholds for the binary conversion and for the background noise removal were accurately and manually chosen for each combustion operating condition, since the flame luminosity and the level of background noise and window pollution differ depending on the fuel composition and oxidizer mass flow. Still, before starting the automatic construction of the Snapshot Matrix, these parameters are always checked for a few sample frames. Finally, to compute the excited frequencies and wavelengths, the wave edge is automatically detected with the “Canny” method and the data are saved in MATLAB® as 2D arrays. Each frame is then rearranged as a column vector and the Snapshot Matrix, which contains all the frames to analyse, is created. It has to be noted that the recorded video data is a line-of-sight measurement. Thus, the data in the analysis represent an integrated measurement over the whole fuel slab width. This has to be taken into account when they are analysed.
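The pre-processing chain described above can be sketched as follows. The original tooling is MATLAB-based; this is a NumPy/SciPy stand-in that covers the luminance threshold, the removal of small bright speckles (the droplet-like background noise), and the assembly of the snapshot matrix with one flattened frame per column. The edge-detection step is omitted, and the threshold and minimum blob size are illustrative values that would be tuned per operating condition.

```python
import numpy as np
from scipy import ndimage

def preprocess_frame(frame, threshold=0.5, min_blob=20):
    """Binarize one grayscale frame and drop connected bright regions
    smaller than min_blob pixels (stand-in for droplet-noise removal)."""
    binary = frame > threshold                       # luminance threshold
    labels, n = ndimage.label(binary)                # connected components
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_blob))
    return keep

def snapshot_matrix(frames, **kw):
    """Stack pre-processed frames as column vectors, as in the POD/ICA input."""
    return np.column_stack([preprocess_frame(f, **kw).ravel() for f in frames])
```

Because the binarized frames are mostly zeros, the resulting columns can also be stored as sparse vectors, which is the computing-time-and-space argument made above.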