Several methods exist to study the regenerative cooling of liquid rocket engines. A simple approach is to use semi-empirical one-dimensional correlations to estimate the local heat transfer coefficient. However, one-dimensional relations cannot capture all relevant effects that occur in asymmetrically heated channels, such as thermal stratification or the influence of turbulence and wall roughness. Especially when methane is used as the coolant, the prediction is challenging and simple correlations are not sufficient [7, 8]. An accurate NN-based surrogate model for the maximum wall temperature along the cooling channel is developed by Waxenegger-Wilfing et al. The NN employs a fully connected, feed-forward architecture with 4 hidden layers and 408 neurons per layer. It is trained on results extracted from approximately 20 000 CFD simulations. By combining the NN with further reduced-order models that calculate the stream-wise development of the coolant pressure and enthalpy, predictions with a precision similar to full CFD calculations are possible. The prediction of an entire channel segment takes only 0.6 s, which is at least 1000 times faster than comparable three-dimensional CFD simulations.
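The architecture described above (fully connected, feed-forward, 4 hidden layers of 408 neurons) can be sketched in a few lines. The following is a minimal, untrained illustration of that topology only; the input dimension, ReLU activation, and initialization are assumptions for the example and are not taken from the cited work.

```python
import numpy as np

class SurrogateMLP:
    """Feed-forward net with the layer sizes named in the text."""
    def __init__(self, n_inputs, n_outputs=1, hidden=408, n_hidden=4, seed=0):
        rng = np.random.default_rng(seed)
        dims = [n_inputs] + [hidden] * n_hidden + [n_outputs]
        # He-style initialization, suitable for the ReLU hidden layers
        self.weights = [rng.normal(0.0, np.sqrt(2.0 / d_in), (d_in, d_out))
                        for d_in, d_out in zip(dims[:-1], dims[1:])]
        self.biases = [np.zeros(d_out) for d_out in dims[1:]]

    def predict(self, x):
        """Forward pass: ReLU on hidden layers, linear output layer."""
        a = np.atleast_2d(x)
        for w, b in zip(self.weights[:-1], self.biases[:-1]):
            a = np.maximum(a @ w + b, 0.0)
        return a @ self.weights[-1] + self.biases[-1]

model = SurrogateMLP(n_inputs=6)    # e.g. local flow/geometry features (assumed)
t_wall = model.predict(np.ones(6))  # surrogate output: maximum wall temperature
```

In the actual surrogate, the weights would of course come from training on the CFD data set rather than from random initialization.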
Within its Black Engine program, DLR offers a large portfolio of fiber-reinforced structural systems for functional components in rocket thrust chambers. These new material classes show high application potential for several space propulsion systems, e.g. orbital propulsion or high-performance rocket engines. Load-carrying structures made of CFRP (Carbon Fiber Reinforced Plastics) are of high interest for high-performance engines. They are promising future design candidates because of their general weight reduction potential combined with high material strength. CFRP load shell structures used in representative tests have been proven tight against cryogenic hydrogen with regard to typical test bench requirements. CMC (Ceramic Matrix Composite) materials are the main focus. Firstly, they are highly applicable as hot structures for inner liners in the subsonic combustion chamber. Secondly, they can be used for self-sustaining structures of supersonic nozzle extensions. Besides these typical applications, they can also be used in the injector component. Highly sophisticated channel patterns can be applied in CMC injector elements with relatively simple manufacturing processes. Apart from this constructive property, CMC designs are very attractive with regard to fabrication tolerance requirements. Naturally, CMC injector elements are suited for hot injection as well as for cold injection. Extensive CMC material development in recent years has led to specific material selections in terms of thermo-chemical resistance under the demanding hot-gas conditions of high-performance LOX/LH2 operation. Besides numerous numerical studies on cooling methods, extensive test campaigns have been conducted at DLR's P8 and P6.1 test facilities.
The principal goals could be reached: highly efficient hot-gas operation of transpiration-cooled inner CMC liners in subscale LOX/LH2 demonstrators (up to 90 bar chamber pressure), compared under scaling aspects to standard high-performance thrust chambers (Vulcain engine); damage-free operation under relevant high-performance hot-gas conditions (50 mm chamber, 7 % overall coolant mass flow ratio); and a structurally reliable lightweight design showing a high degree of thermal and mechanical load decoupling. Regarding cooling principles, all standard methods appear attractive in conjunction with CMC technology. Apart from the excellently working transpiration cooling, first evaluations of regeneratively cooled CMC wall structures are ongoing; radiatively and film-cooled systems targeting the 500 N apogee motor class, combined with new structural design approaches, also show promising perspectives for future applications.
The Institute's ongoing research focuses on fundamental research into the combustion processes in liquid rocket engines and air-breathing engines for future space transport systems. The Institute is also concerned with the use of ceramic fiber materials in rocket combustion chambers and the development and application of laser measurement techniques for high-temperature gas flows. One of DLR's key roles in Lampoldshausen is to plan, build, manage and operate test facilities for space propulsion systems on behalf of the European Space Agency (ESA) and in collaboration with the European space industry. DLR has built up a level of expertise in the development and operation of altitude simulation systems for upper-stage propulsion systems that is unique in Europe.
This work would not have been possible without the contributions of my coworkers and colleagues. Dr. Timon Schroeter, who talked me into chemoinformatics, gave me inspiration and motivation for my research. I would like to thank Dr. Matthias Rupp for his thorough review of my work and his reliable assistance within the past year. A great thanks goes to David Baehrens, Fabian Rathke, Dr. Ulf Brefeld and Peter Vascovic for their help on designing and implementing new algorithms. I would like to thank my collaborators at Idalab, Bayer Schering Pharma and Boehringer Ingelheim, especially Dr. Sebastian Mika, Dr. Antonius ter Laak, Dr. Nikolaus Heinrich, Dr. Andreas Sutter and Dr. Jan Kriegl, for many fruitful discussions and the open exchange of experience. The participants of the "Navigating Chemical Compound Space for Materials and Bio Design" program at IPAM, UCLA inspired my work, and I would like to thank Zachary Pozun, John Snyder, Prof. Dr. Kieron Burke, Dr. Daniel Sheppard and Prof. Dr. Graeme Henkelman for their calculations and willingness to share their knowledge.
The risk assessment for meteoroid and debris penetration used for sizing the shielding is based on ECSS methods. The mass and velocity distributions for all sources of impact (debris, meteoroids and meteoroid streams according to the Jenniskens–McBride model) were determined with the help of the ESA program MASTER-2009. Particles with a mass between 0.01 g and 20 kg have been considered (see Figure 10 and Figure 11). The most sensitive parts of the high-thrust stage are the propellant tanks. A perforation of one of the tanks would lead to a major failure, as the propellant would, in the best scenario, escape through the hole and the pressure would decrease. In the worst case, both tanks would be perforated and the propellants would come into contact, leading to an uncontrolled ignition. To avoid such a problem, a Whipple shield is required to protect the cylindrical part of the tanks. The lower and upper domes are already protected by the upper and lower cones. Engine feed-lines are protected by the lower skirt.
Heat dissipation of electronic devices would also be an important issue, due to the already large amount of excess heat in the MEGAHIT spacecraft. Possibly, new materials should be studied to improve heat dissipation and consequently increase the capability of working at high temperatures. During a workshop of the MEGAHIT consortium, it was pointed out that the China Aerospace Science and Technology Corporation (CASC) is investigating the use of diamond powders alloyed with copper to improve heat dissipation, which is perhaps an opportunity for international cooperation. High-temperature, high-heat-dissipation electronics will also find numerous applications in the ICT industry, with interesting co-development opportunities for server farm and cloud computing applications. Another obvious synergy would be with the defence and aeronautics industries for high-voltage, high-temperature, rad-hard electronics.
describing each of them. These are termed features or attributes that describe the data points. Additionally, there could also be information available on the category that each of those data points belongs to. For example, consider a list of products available in a supermarket. The set of features describing them could include ingredients, packaging, make, fragility, etc. Based on these features, the inventory supervisor is to make decisions on various aspects. Some examples are: (a) decide whether a certain product requires refrigeration or not; (b) classify the products based on their shelf-life; (c) grade the products from low to high or on a continuous scale; or (d) categorize the set of products that are seasonal. An automated system that, when fed with this product information (input), can help identify the category (or categories) each product belongs to (outputs) is said to 'learn from data'. When the categories assigned to the products are already given, the system uses this information to learn characteristic features distinguishing products of one category from those of the other. It is then tasked with predicting the categories for products that are as yet unseen by the system. This scenario is called supervised learning, in which the machine is cognizant of the outputs and can use them to learn common patterns. In supervised learning, the discrete categories that the products need to be sorted into are typically referred to as 'classes', and their names as 'class labels'. This scenario is called 'classification'. Instead of discrete classes as labels, the output labels could be continuous, for example, grading a product's popularity on a continuous scale of 0–5 (low to high). This scenario is called 'regression'. The stage in which the system learns from the available data, before making predictions on unseen data points, is called the training stage.
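The supervised setting described above can be made concrete with a toy example. The sketch below uses a 1-nearest-neighbour rule as the "system that learns from data"; the product features (fragility, shelf-life) and the refrigeration labels are invented for illustration only.

```python
def nearest_neighbour_predict(train_X, train_y, x):
    """Assign x the class label of the closest training point (squared Euclidean)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(train_X)), key=lambda i: dist(train_X[i], x))
    return train_y[best]

# Training stage: features = (fragility 0-1, shelf-life in days),
# class labels = whether the product requires refrigeration.
train_X = [(0.9, 7), (0.8, 5), (0.1, 365), (0.2, 400)]
train_y = ["refrigerate", "refrigerate", "shelf", "shelf"]

# Prediction on products unseen by the system:
print(nearest_neighbour_predict(train_X, train_y, (0.85, 6)))   # "refrigerate"
print(nearest_neighbour_predict(train_X, train_y, (0.15, 390))) # "shelf"
```

The continuous-label (regression) case would look the same except that `train_y` would hold numbers, e.g. popularity scores on the 0–5 scale, and the prediction would return a number rather than a class label.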
In order to evaluate the predictions made by such systems, the available data is often split into two chunks, one used for training and the other kept aside to be artificially treated as unseen or test data. In contrast, in the unsupervised learning scenario, no such information about the categories of products is available. In other words, in unsupervised learning, the information on the output label of each data point is missing. The system then simply 'clusters' all data points into various categories based on their similarities and/or differences. The (dis)similarity is computed using the feature information of the data points. In this case, any number of clusters is plausible; the ideal number of clusters depends on the data and the task. Unsupervised learning is also known as 'clustering'.
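The clustering idea above can be illustrated with a small k-means loop, which groups unlabelled points purely by feature similarity. The data, the choice of k = 2 and the implementation details are assumptions for this sketch, not part of the text.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: alternate between assigning points to the nearest
    centre and moving each centre to the mean of its assigned points."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])))
            clusters[i].append(p)
        centres = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Two well-separated groups of unlabelled 2-D feature vectors:
points = [(0.1, 0.2), (0.15, 0.1), (0.9, 0.8), (0.95, 0.9)]
clusters = kmeans(points, k=2)
```

Note that k (the number of clusters) must be chosen by the user here, echoing the point above that the ideal number of clusters depends on the data and the task.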
The development process of rocket propulsion systems is no longer driven only by the demand for better performance properties such as higher thrust, enhanced specific impulse and/or increased velocity gain. Instead, requirements that have until now been rated as secondary are coming more and more into focus [1,2]. These include, amongst others, free and versatile thrust variation capability, simple handling and storage characteristics, low toxicity and health-hazard risks both for propellant and exhaust flow species, improved safety in handling and use, environmental friendliness, reusability, and strategies for upgrading and decommissioning under the above-mentioned aspects. Furthermore, mission scenarios are becoming increasingly complex, and existing propulsion systems with conventional propellants cannot fulfil all of the envisioned demands of contemplated missions. Considerable efforts are currently undertaken worldwide to develop greener propellants, fuels and propulsion systems (see e.g. Refs. [3-6]). For hydrazine replacement, energetic ionic liquids based on ADN and HAN are most promising. The first satellites with small thrusters using advanced green propellants based on ADN are in orbit [7-9]. Nevertheless, there is still a long way to go before the majority of propulsion systems using highly toxic and aggressive propellants can be replaced by greener ones on all thrust levels and for all mission demands.
the combustion of reactive chemicals, consisting of fuel and oxidizer components, within a combustion chamber to supply the necessary energy. According to Newton's third law, the gas expansion pushes the engine in the opposite direction. The net thrust of a rocket engine depends on the mass flow of the exhaust gas and its effective exhaust velocity. Changing the direction of the thrust can be achieved by gimbaling the whole engine, which is the standard procedure for large liquid rocket engines used on modern launch vehicles. For a given propellant combination, nozzle geometry, and ambient conditions, the magnitude of the thrust is a function of the combustion chamber pressure and mixture ratio, i.e. oxidizer-to-fuel ratio. Thus, those factors are usually the focus of the main control loops and are controlled via adjustable flow
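The thrust dependence described above is commonly written as F = ṁ·v_e + (p_e − p_a)·A_e, i.e. a momentum term plus a pressure term at the nozzle exit; the effective exhaust velocity is then v_eff = F/ṁ. A quick numerical illustration (all input values are made up for the example, not data for any particular engine):

```python
def net_thrust(m_dot, v_e, p_e, p_a, A_e):
    """Net thrust [N] = momentum term + pressure term at the nozzle exit."""
    return m_dot * v_e + (p_e - p_a) * A_e

F = net_thrust(m_dot=250.0,    # exhaust mass flow [kg/s]   (assumed)
               v_e=3500.0,     # exit exhaust velocity [m/s] (assumed)
               p_e=60e3,       # nozzle exit pressure [Pa]   (assumed)
               p_a=101.325e3,  # ambient pressure, sea level [Pa]
               A_e=1.2)        # nozzle exit area [m^2]      (assumed)

v_eff = F / 250.0              # effective exhaust velocity [m/s]
print(F, v_eff)                # 825410.0 N, ~3301.6 m/s
```

The example also shows why thrust rises with altitude for a fixed chamber condition: as p_a drops, the pressure term (p_e − p_a)·A_e grows.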
events. As outlined in the chapter, existing annotation schemes fail to anchor the majority of events temporally. This is due to restricting the scope of the annotation to the same or neighboring sentences. We perform an annotation study showing that the temporal information for an event can be several sentences apart from the event mention. In this chapter, we propose a new annotation scheme that anchors all events in time. It can be performed efficiently and with good agreement by human annotators. In contrast to previous schemes, annotators take the complete document into account and are allowed to merge temporal information across a document. We then present an automatic system for this new annotation scheme. While the annotation is simple and more efficient for humans compared to other annotation schemes, it poses several challenges for automatic systems. The annotators took the complete document into account when anchoring an event in time; hence, automatic systems must consider the complete document as well. We present a system based on a decision tree that applies, in its nodes, local classifiers based on convolutional neural networks. We demonstrate that this system can take information from the complete document into account. Further, we demonstrate that the system generalizes to the task of automatic timeline generation.
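The decision-tree-with-local-classifiers architecture can be sketched structurally. The sketch below is not the authors' implementation: the CNN node classifiers are replaced by placeholder functions, and the routing feature, labels and class names are invented for illustration.

```python
class Node:
    """A tree node that either holds a leaf label or routes an event
    to one of its children via a local classifier."""
    def __init__(self, classifier=None, children=None, label=None):
        self.classifier = classifier  # maps an event instance to a child index
        self.children = children or []
        self.label = label            # set on leaf nodes (a temporal anchor)

    def anchor(self, event):
        if self.label is not None:
            return self.label
        return self.children[self.classifier(event)].anchor(event)

# Placeholder "local classifier": in the real system this decision would
# be made by a CNN over document context, not by a hand-made feature.
same_sentence = Node(label="anchor: same sentence")
document_wide = Node(label="anchor: elsewhere in document")
root = Node(classifier=lambda ev: 0 if ev["distance"] == 0 else 1,
            children=[same_sentence, document_wide])

print(root.anchor({"distance": 3}))  # "anchor: elsewhere in document"
```

The point of the structure is that each local decision can consume document-wide context, while the tree composes those decisions into a single temporal anchor per event.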
The proposed application procedure of the performance map for a column design is shown in Fig. 7. For the calculation of flooding points and HETS values and for the generation of a performance map, only the physico-chemical properties (density, viscosity, interfacial tension, distribution coefficient, diffusion coefficient) and one pilot-plant experiment for the investigation of the fluid dynamics are required. The measured holdup is used for the determination of the coalescence parameter. The population balance model can be used to generate and plot the performance map or to directly calculate the column dimensions for a given separation task. For the direct calculation, additional information is required: feed composition, volume flow ratio and the number of required equilibrium stages. This information can be calculated in a flow-sheet model. For a given volume flow ratio, the phase load is increased until the flooding point is reached. Industrial extraction columns are usually operated at approximately 80 % of the flooding load. For the chosen volume flow ratio and operation at 80 % flooding load, the specific volume flows (phase loads) and the corresponding HETS values can be calculated with the population balance model. On the basis of HETS and the required number of equilibrium stages, the column height is calculated. The column diameter is calculated from the required feed volume flow.
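The final sizing steps named above (height from HETS and stage count, diameter from feed volume flow and the phase load at 80 % of flooding) can be sketched as simple arithmetic. All numerical values below are illustrative assumptions; in the actual procedure both the HETS value and the flooding load come from the population balance model, not from fixed numbers.

```python
import math

def column_dimensions(hets_m, n_stages, feed_m3_h, total_load_m3_m2_h):
    """Return (height [m], diameter [m]) from HETS, equilibrium-stage count,
    feed volume flow and the total specific phase load at the operating point."""
    height = hets_m * n_stages                  # column height from HETS x stages
    area = feed_m3_h / total_load_m3_m2_h       # required cross-section [m^2]
    diameter = math.sqrt(4.0 * area / math.pi)  # circular cross-section
    return height, diameter

flooding_load = 40.0  # m^3/(m^2 h), would come from the model (assumed value)
height, diameter = column_dimensions(hets_m=0.5, n_stages=8,
                                     feed_m3_h=6.0,
                                     total_load_m3_m2_h=0.8 * flooding_load)
```

With these assumed inputs the column would be 4 m tall and roughly 0.49 m in diameter; the structure of the calculation, not the numbers, is the point.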
The study of the heat flux on the hot side is based on an existing test case. The BhPhRM (high pressure and high mixture ratio combustion chamber) test specimen was operated at the MASCOTTE test bench at the ONERA laboratory [3-6]. It is composed of an injection head, two calorimetric sectors for temperature measurement of the wall, and an optical window that can be placed upstream or downstream of the calorimetric segment for flame diagnostics. The tests were performed in various configurations during a 2018 campaign carried out with the full calorimetric configuration, as illustrated in Fig. 4. No optical access was used during this campaign. This configuration supplies full information on wall temperature and thermal flux (inverse method) along the combustion chamber.
The HPNs are used for executing computationally intensive tasks. To achieve high computing performance and keep system costs down, commercial off-the-shelf (COTS) hardware is used. At the moment, a Xilinx-Zynq Z7020 SoC [20] is used, accompanied by 1 GiB of random-access memory (RAM) and 4 GiB of flash-based non-volatile memory for the operating system and the application software. The dual-core ARM Cortex-A9 @ 866 MHz CPU [21] of the Xilinx-Zynq SoC offers significantly more performance than the LEON3 used for the RCNs. In addition, an FPGA is part of the Xilinx-Zynq SoC, which is intended to be used for implementing highly computationally intensive tasks. Due to the nature of a distributed system, it is possible to use other kinds of high-performance nodes, such as GPU accelerators, or in the future to replace them with modern, more performant hardware, without the need to replace the whole system.
Autonomous robotic systems are approaching the maturity level needed to support astronauts in space and on planetary surfaces. Existing input devices and command modalities cannot cope with the robots' increased capabilities and limit their usability for the astronaut. This paper presents alternative user interface (UI) concepts for use with ubiquitous devices such as smartwatches and tablet computers. In particular, it is proposed to command an autonomous space robot assistant at a low to medium level of autonomy using a smartwatch. When commanding the robot at a high level of autonomy, a tablet computer can be utilized with object-centered task-level commands. The respective interaction concepts, designed to make best use of the input devices, are presented in detail. A comparison of the proposed interfaces and a presentation of the use of the tablet computer user interface in the METERON SUPVIS Justin ISS experiment conclude the paper.
When applying methods, not only the performance of the team plays a role, but also the efficiency of a method. In particular, it is interesting to see whether and how methods have been adapted to new scientific findings. Brainstorming, for example, acutely needs adaptation. In Germany it is still taught that brainstorming has to be carried out without criticism and in a group (Lindemann, 2009). Studies have shown, however, that allowing criticism leads to better results (Nemeth, Personnaz, Personnaz, & Goncalo, 2004). In addition, it could be shown that better results can be achieved if the members of a team work on the problem individually rather than as a group (Mullen, Johnson, & Salas, 1991). These two findings are not results of recent years, but of recent decades. Adaptation is therefore long overdue in order to make this method more effective and efficient. It can be assumed that many other methods used in product development also need adaptation. The review of the currency of the methods is part of the research approach presented in the following section. A further problem is the question of how to provide information about the adaptation of the methods. Currently, a team at the University of Wuppertal is researching how the transfer of methodological knowledge can be improved. Practical experience and digitalization should play an important role here.
mapping is increasingly applied [4–8]. Hazard and risk mapping generally aim to identify and spatially delineate hazard-prone areas, while analyzing the potential risk in a targeted study [9]. To carry out a risk analysis, it is necessary to analyze the occurrence, characteristics, and impacts of past hazard events, to relate them to the present situation and to generate predictions for the future. Several studies have been carried out to this end with various knowledge-based methods [10, 11], with some also including several machine learning (ML) methods [12–16]. Almost all of the ML methods that have been used to analyze the potential risk of landslides depend heavily on an inventory data set of the spatial extent of known landslides, or at least one characterizing GPS point location per known landslide in the target study area. ML methods require such data sets for both training and validation steps [17]. Although some knowledge-based methods are independent of the existence of landslide inventory data sets for the generation of hazard and risk maps, the resulting maps require accuracy assessment and sensitivity analysis steps and therefore need accurate inventory data sets [18]. Thus, it is crucial to have access to an accurate landslide inventory data set at all stages when monitoring, modeling and mapping landslide risk. Moreover, landslides often trigger emergency situations if they occur in the vicinity of human habitations or infrastructure, e.g., power lines, roads, bridges and settlement areas, which means there is often time pressure to detect and delineate landslides affecting certain areas to carry out tasks such as timely support planning and crisis response [19, 20]. Although some advanced field surveying techniques exist, e.g., laser rangefinder binoculars along with a GPS receiver [21], gaining access to the affected areas and conducting field surveys is too difficult or dangerous in most cases [22]. Thus
The three different MDO process chains will be used and further developed in DLR's oLAF project (Optimally Load-Adaptive Aircraft, 2020-2023), which is dedicated to the detailed investigation and quantification of the potential of aggressive load reduction in aircraft design. The focus is on the highly integrated design and optimization of a long-range aircraft, driven by load reduction aspects from the very beginning of the design phase, predominantly by applying high-fidelity coupled procedures for aerodynamics, structure, aeroelasticity, loads, flight control and systems, and on the evaluation of the resulting optimally load-adaptive solution with regard to flight-physical performance, technical feasibility, operational capability, maintenance aspects and economic efficiency. In addition, existing simulation methods and processes for multidisciplinary analysis, design and optimization of load-adaptive aircraft will be sharpened and further developed. The goal is to come up with an efficient MDO process with interfaces to overall aircraft design and engine design that is sustainable and modular, that can handle many design parameters and constraints, and that can consider high-fidelity aerodynamic loads in the loads process by making use of reduced-order models for steady and unsteady loads on the basis of highly accurate CFD calculations.
Christopher T. Sheaf 4
Rolls Royce plc., Derby, United Kingdom, DE24 8BJ
This work describes the assessment of the effect of engine installation parameters such as engine position, size and power setting on the performance of a typical 300-seater aircraft at cruise conditions. Two engines with very high bypass ratio and with different fan diameters and specific thrusts are initially simulated in isolation to determine the thrust and drag forces for an isolated configuration. The two engines are then assessed in an engine-airframe configuration to determine the sensitivity of the overall installation penalty to the vertical and axial engine location. The breakdown of the interference force is investigated to determine the aerodynamic origins of beneficial or penalising forces. To complete the cruise study, a range of engine power settings was considered to determine the installation penalty at different phases of cruise. This work concludes with a preliminary assessment of cruise fuel burn for the two engines. For the baseline engine, across the range of installed positions the resultant thrust requirement varied by 1.7% of standard net thrust. The larger engine was less sensitive, with a variation of 1.3%. For an assessment over a 10000 km cruise flight, the overall effect of the lower-specific-thrust engine showed that the cycle benefit of -5.8% in specific fuel consumption was supplemented by a relatively beneficial aerodynamic installation effect but offset by the additional weight, giving a -4.8% fuel burn reduction.