types of used panels, e.g., planar panels or panels of limited polygonal degree. In general, the method in Chapter 9 enables higher (node) fairness, as it (a) specifically utilizes a fairness energy functional and (b) is optimized in a global fashion. The front-growing method in Chapter 10, by contrast, was restricted to energy functionals relying only on single mesh entities, because it is based on a monotonically increasing energy. Integrating a fairness energy functional is not directly possible in this method, since evaluating the fairness of, e.g., a node requires a complete 1-ring, which is only available after a whole filling has been completed. Although a simple heuristic could be employed to penalize fairness after completing a filling, similarly to how pointy fillings are handled, a better-founded solution should include global optimization with fairness considered throughout the process. As discussed, in most cases it would be too restrictive to simply limit the method in Chapter 9 to planar panels. However, additionally allowing a set of non-planar, bilinear panels could provide sufficient degrees of freedom for the optimization and increase both smoothness and approximation power, while still offering well-defined panel-panel intersection tests. Another interesting area of research is to enable direct form-finding based on the Zometool system instead of the current two-step procedure, where Zome meshes are fitted onto existing freeform designs. We envision a setting similar to Laplacian-based mesh editing, where freeform modeling is performed by transforming a handle, possibly re-tessellating the mesh, and deforming it to best integrate the new geometry of the handle region.
We carried out the experiment in a lab on the premises of the educational center. The notebook and the 3D printer were placed side by side. A bowl of water was placed on another table to the right.
4.3.3 Design and Procedure. The users were tested one at a time. Each participant obtained a short briefing from the experimenter on the software interface and the connected 3D printer. Next, they were introduced to the testing procedure, and each of the selected 3D models was explained. During the briefing, subjects were given the opportunity to ask open questions and were guided to the microphone, to the 3D printer with its build platform, and to the bowl of water. After the briefing, the voice recognition was started and the microphone was handed over to the participant. The subjects had to select one of the models in the database by its designation and to confirm their choice. A subsequent voice command initiated the slicing of the 3D model and then the printing process. Depending on which 3D model was chosen, the printing process required up to 30 minutes. This was reported to the subject at the beginning of the printing process. During the printing process, there was no need for the user to touch the 3D printer. After the print was complete, the subject was asked to wait until the build plate had cooled down and, subsequently, to remove the 3D printed object. Finally, the 3D object had to be placed in the bowl of water to dissolve the support structures.
To demonstrate the approach described in Section II, an optimization for the viscous RAE2822 test case is performed. An objective function for drag is used while keeping the lift constant, which requires evaluating the gradients of both drag and lift. The design parameters are the freeform control points depicted in Figure 7 (left) (20 design parameters), which are free to move up and down only; the two control points at the trailing edge are excluded. An in-house optimization suite based on a conjugate gradient method was used to perform the design process. The flow simulations were converged to a density residual of 10⁻⁹, and the same order of residual magnitude was reached for both adjoint simulations. The results from the design process are shown in Figure 7 (right). The drag was decreased by about 20% while keeping the lift constant. The final shape and pressure distribution C_p can be seen in Figure 13. The strong shock on the upper side of the aerofoil has been entirely removed. The optimization process required 9 gradient evaluations, and the total time was 4 hours and 45 minutes on a single-processor machine. The savings from using the complete adjoint chain are negligible because of the low number of design parameters and fast finite differences for this two-dimensional case.
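The structure of this design problem, minimizing drag subject to a constant-lift equality constraint over 20 shape parameters, can be sketched in a few lines. The functions `drag` and `lift` and their gradients below are hypothetical stand-ins for the CFD and adjoint solvers, and the in-house conjugate-gradient suite is replaced by SciPy's SLSQP solver for brevity; only the problem shape mirrors the setup above.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical surrogates for the flow/adjoint evaluations:
def drag(x):
    return np.sum((x - 0.3) ** 2)   # surrogate drag objective

def drag_grad(x):
    return 2.0 * (x - 0.3)          # "adjoint" gradient of drag

def lift(x):
    return np.sum(x)                # surrogate lift functional

def lift_grad(x):
    return np.ones_like(x)          # "adjoint" gradient of lift

n = 20                              # 20 design parameters (control points)
x0 = np.zeros(n)                    # baseline shape
target_lift = lift(x0)              # keep lift at its baseline value

res = minimize(
    drag, x0, jac=drag_grad, method="SLSQP",
    constraints={"type": "eq",
                 "fun": lambda x: lift(x) - target_lift,
                 "jac": lift_grad},
)
```

Each iteration of such a loop corresponds to one flow solve plus two adjoint solves (drag and lift gradients), which is why the gradient-evaluation count, 9 in the case above, dominates the optimization cost.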
One natural concern is the computational feasibility of the MPEC approach; that is, the limitations on the size of problems that modern constrained optimization solvers can handle. If the constraint Jacobian and the Hessian of the Lagrangian are sparse, and first-order and second-order analytical derivatives together with the corresponding sparsity patterns are supplied, then we believe that 100,000 variables and 100,000 constraints are reasonable size limits for optimization problems to be solved with state-of-the-art constrained optimization solvers on a workstation. In fact, we have successfully solved a structural estimation problem with 100,052 variables and 100,042 constraints in an hour on a Macintosh workstation (Mac Pro, 12 GB RAM). We expect that, with technological progress in computing hardware and software, the size of computationally tractable problems will grow accordingly, perhaps doubling every few years as has been the case for decades.
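The ingredients listed above, analytical first- and second-order derivatives together with their sparsity patterns, can be illustrated on a toy problem. The sketch below uses SciPy's trust-constr solver as a stand-in for the large-scale solvers discussed; the objective and constraints are invented for illustration, but the way sparse derivatives are supplied is the point.

```python
import numpy as np
from scipy.sparse import diags, eye
from scipy.optimize import minimize, NonlinearConstraint

n = 200  # toy size; the same pattern scales towards the 100,000-variable regime

# Objective: sum of (x_i - 1)^2, with analytical gradient and sparse Hessian.
def f(x):
    return np.sum((x - 1.0) ** 2)

def grad(x):
    return 2.0 * (x - 1.0)

def hess(x):
    return 2.0 * eye(n, format="csr")  # sparse (diagonal) Hessian

# Constraints x_i^2 <= 2, with sparse Jacobian and sparse constraint Hessian.
con = NonlinearConstraint(
    lambda x: x ** 2, -np.inf, 2.0,
    jac=lambda x: diags(2.0 * x, format="csr"),      # diagonal sparsity pattern
    hess=lambda x, v: diags(2.0 * v, format="csr"),  # Hessian of v . c(x)
)

res = minimize(f, np.zeros(n), jac=grad, hess=hess,
               method="trust-constr", constraints=[con])
```

Supplying the sparse Jacobian and Hessian explicitly is what keeps the per-iteration linear algebra cheap; with dense finite-difference derivatives, problems of this size quickly become intractable.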
One of the main challenges in implementing 3D nanoimprint technology for our optical nanospectrometer is the residual layer, which should ideally be minimal in thickness, reproducible in thickness and laterally homogeneous. The residual layer (RL), which occurs in nearly every nanoimprint, has a serious influence on the vertical dimension of the individual filter cavities (cavity thickness). Consequently, a non-reproducible cavity thickness or lateral fluctuations of the resulting RL cause an uncontrollable spectral shift in the desired filter lines. Therefore, the above-mentioned effects influencing the RL must be overcome in the implementation of the 3D nanoimprint process. In the case of our optical nanospectrometer, it is challenging to prevent the RL from depending on the geometry of the imprint patterns, since the FP filter cavities are identical in their lateral dimensions but vary in their vertical dimensions (heights). As a result, the volume of each individual filter cavity varies and the required amount of imprint material differs. This diversity in the vertical dimension of the filter cavities causes the resulting RLs to vary in thickness, slightly or considerably, across all the filters. The occurrence of inhomogeneous RLs across the vertically varying filter cavities is a critical issue for nanoimprint processes based on soft templates, such as UV-based substrate conformal imprint lithography (SCIL), which is the main focus of this thesis work.
2. Hübner, Alexander, Düsterhöft, Tobias and Ostermeier, Manuel (2020): An optimization approach for product allocation with integrated shelf space dimensioning in retail stores.
Status: Submitted to the European Journal of Operations Research on 28 February 2020
With principal component analysis, large-scale concepts can be discovered in the first principal components. The best models predicted by all BPRNs trained with random data are evaluated. The wiki image dataset provides labels for the age and sex of the persons in the images, and these labels are used to create the two visualizations. In Figure 6.8, the first 5 principal components are plotted with the data labeled by the sex of the person in the image. The upper diagonal shows a scatter plot of a set of points from the dataset, and the lower diagonal plots the mean values of all observations. The first two dimensions carry a very large portion of the sex concept. This finding, however, was to be expected: the BFM itself also carries the male-female concept prominently in the first component of its PCA, which is plausible because the human population can be most evenly split into male and female. The first two dimensions almost seamlessly separate both sexes, when used as a classifier with
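The separation described above can be reproduced on synthetic data: when one labeled concept dominates the variance, it surfaces in the first principal component and already works as a simple classifier. The data below is an invented stand-in for the model parameter vectors, not the wiki dataset.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical stand-in for parameter vectors: two labeled groups
# separated along one direction, mimicking a dominant "sex" concept.
labels = rng.integers(0, 2, size=200)       # binary class labels
data = rng.normal(size=(200, 10))
data[:, 0] += 4.0 * labels                  # concept dominates one axis

pca = PCA(n_components=5)
scores = pca.fit_transform(data)            # first 5 principal components

# Classify by thresholding PC1 at the midpoint between the class means;
# the max() handles the arbitrary sign of the principal component.
threshold = (scores[labels == 0, 0].mean() + scores[labels == 1, 0].mean()) / 2
pred = (scores[:, 0] > threshold).astype(int)
accuracy = max((pred == labels).mean(), ((1 - pred) == labels).mean())
```

With a dominant concept like this, the PC1 threshold classifier already separates the groups almost perfectly, which mirrors the near-seamless separation reported for the first two dimensions.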
I would like to thank Max for his support. Although he is very active, he always has an open mind and a solution ready. Also, I would like to express my sincere gratitude to Jürgen. When I was searching for a bachelor thesis, he accepted my proposal, which marked not only the beginning of my journey at TK but also initiated my interest in HCI. Since then, he has been my mentor and guide. I am always fascinated by the precision with which he manages to ask the right research questions. Thanks to the two of you! The experience would not have been the same without my colleagues at TK, and especially the HCI area, who helped with countless brainstorming sessions and last-minute activities. Many thanks to Niloo, Mo, and Roman for their support when I was new at TK, to Jochen who made Flexibles possible, and to Flo, Jan, Sepp and Markus for their tireless help during the last years. Thank you all for being such a terrific and inspiring team.
2.2. Fabrication of 3D-printed microfluidic device and 3D printed samples
The primary stages involved in making a 3D printed product are the creation of a CAD file, conversion and preparation of the file using a slicer program, uploading the file to the printer and, finally, the physical creation of the object. All 3D printed models were fabricated using an Ultimaker 2+ 3D printer with a 0.4 mm nozzle. The accuracy of the printer in the X, Y and Z dimensions is 12.5, 12.5 and 5 µm, respectively. The 0.4 mm nozzle limits how closely features can be positioned to each other, but still allows for the design and construction of features (such as channels) with dimensions down to approximately 20 µm. However, while it is possible to create channels at this resolution, their formation is currently not reliable, and working at this scale would require improvements to the consistency of the plastic extrusion.
waveguide section length  l_w  30.0
wall thickness            t    3.0
feed diameter             d    7.0
horn antenna. Section 3 discusses the results of applying different metallic coatings and processes and assesses their quality. A pretuned SMA waveguide launcher is then interference-fitted to the prematched waveguide section. This approach reduces prototype assembly and labour costs associated with post-impedance tuning. In Section 4, the advantages of the 3D design approach are demonstrated through results for electromagnetic analysis of the horn antenna, and its accuracy is verified through correlations with measurement data. Finally, conclusions are given in Section 5, and potential guidelines for repeatability and quality assessment are discussed.

2. Antenna Design
A problem with increasing the number of iterations is the computational cost of PSO. As the sample size increases, so does the time required to compute the likelihood. For example, when the sample size is N = 40, T = 20, the likelihood functions begin to take over a tenth of a second per computation; with 100 particles, the computation quickly becomes time-consuming. Even if the likelihood function can be computed in a tenth of a second, a data set that does not converge requires 30,000 seconds, or over 8 hours, which is too long for estimating a large number of data sets. Parallelization is one way to address this efficiency issue. In 1995, Schnabel (Schnabel, 1995) outlined three key ways in which optimization procedures can be effectively parallelized. The first option is to parallelize the cost function itself. In the case presented, this would not be greatly beneficial, as the routines for distributed matrix computing do not offer a significant advantage. The second strategy is to parallelize the linear algebra routines, which may offer some benefit, as many matrix operations and inverse computations are involved. The third approach is to parallelize the procedure itself; that is, the particles can be distributed across processors, allowing multiple likelihood functions to be computed simultaneously. Since most of the computation time is consumed by cost functions, large efficiency gains can result.
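The third strategy, distributing the particles across processors so several likelihood evaluations run concurrently, can be sketched with the standard library. The likelihood below is a hypothetical placeholder for the actual model's likelihood, not the one estimated in this work.

```python
from multiprocessing import Pool

def log_likelihood(particle):
    """Hypothetical stand-in for the expensive likelihood evaluation."""
    return -sum((p - 0.5) ** 2 for p in particle)

def evaluate_swarm(particles, processes=4):
    """Evaluate all particles' likelihoods in parallel, one task per particle."""
    with Pool(processes=processes) as pool:
        return pool.map(log_likelihood, particles)

if __name__ == "__main__":
    # 100 particles, each a small parameter vector.
    swarm = [[i / 100.0, (i % 10) / 10.0] for i in range(100)]
    values = evaluate_swarm(swarm)
    best = max(range(len(values)), key=values.__getitem__)
```

Because each particle's likelihood is independent, this pattern scales close to linearly with the number of cores as long as a single evaluation (here, a tenth of a second) clearly outweighs the inter-process communication overhead.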
The 3D CFD simulations are carried out with the solver TRACE [33, 34], which is developed at DLR for turbomachinery applications. For this work, TRACE solves the steady-state Reynolds-averaged Navier-Stokes (RANS) equations using an implicit cell-centered finite volume scheme on a structured grid. The inviscid fluxes are evaluated with a Roe scheme; for spatial discretization, the Fromm scheme is applied. For the turbulence closure, the two-equation Wilcox k-ω turbulence model with extensions for the stagnation point anomaly and rotational effects is selected. The compressor is modeled with a single-blade-passage approach with mixing planes and is discretized with a structured multi-block grid with 7.2 million cells. The span-wise resolution is 71 points, with 9 points in the clearances. On all blade surfaces, the wall boundary treatment is set to wall functions; the dimensionless wall distance y+ on the majority of the blade surfaces is between 20 and 50. All rotor blade rows have fillets. The IGV and the first two stators have semi-clearances; the two rear stators are cantilevered. As initialization, the results of the throughflow computations of the corresponding operating point are used. Convergence is achieved when the relative change in mass flow, efficiency and pressure ratio is less than 0.05% for the component, each stage and each blade row over 500 time steps.
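The convergence criterion just described, relative change in each monitored quantity below 0.05% over a 500-step window, can be written as a small check on the solver's monitoring history. The history dictionaries below are hypothetical monitoring data, not TRACE output.

```python
def converged(history, window=500, tol=5e-4):
    """Check whether every monitored quantity's relative change over the
    last `window` iterations stays below `tol` (0.05%).

    history: dict mapping quantity name (e.g. "mass_flow") to a list of
             values recorded per iteration.
    """
    for values in history.values():
        if len(values) < window + 1:
            return False                # not enough samples yet
        recent = values[-(window + 1):]
        ref = recent[-1]
        if ref == 0:
            return False                # relative change undefined
        # Maximum relative deviation from the latest value over the window.
        if max(abs(v - ref) / abs(ref) for v in recent) > tol:
            return False
    return True
```

In practice such a check would be applied per component, per stage and per blade row, declaring convergence only when all of them pass simultaneously, as the text specifies.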
Results and discussion

3D microfluidic features
By positioning the finest structures of the microfluidic network in the middle of the chip, the leachate reaches these parts last, which preserves the fine details. Fig. 8 shows microscopy images from (b) the top (negative z-direction) and (c) the side (negative y-direction). The 3D hydrodynamic focusing is visible as the blue analyte stream narrows down at the junction in the y-direction (as seen in (b)) and also narrows down in the z-direction (as seen in (c)). In Fig. 8(c), one folded narrow section with three levels is also visible. The middle level forms the narrowest part of the microfluidic device, with a channel height of 40 μm. This shows the strength of SLE in manufacturing microfluidic systems with narrow parts, channels on multiple levels and a combination of different channel diameters and differently shaped cross sections. The manufacturing process remains simple and does not require any assembly steps.
The high reproducibility of the results means that model initialization is no longer a major source of error for the proposed model fitting. Thus, the influence of model initialization can practically be neglected in an application of the proposed method. For 3D-SSMs in general, this is a significant trait, since their former sensitivity to model initialization necessarily led to increased optimization effort, including highly complex measures for an accurate initialization and for the robustness of the subsequent model fitting. Now, only the performance of the employed model fitting has to be considered during an application of the proposed method. Due to its considerable capture range, the proposed method does not require any placement at the organ of interest, which makes additional methods for prior organ detection obsolete. In practice, this considerably facilitates the application of 3D-SSMs. As illustrated in the capture range experiments, the non-local appearance modeling of the proposed method allowed an effective use of widespread image information. This was shown in direct contrast to the local appearance modeling of the compared 3D-SSM, where a drastic decrease of usable information was observed with growing distance from the sought organs of interest.
Besides the method for driving the switch, another key functionality in µFACS is hydrodynamic focusing. By merging the analyte channel with the surrounding buffer-conducting channel, the analyte flow is sheathed by a buffer solution in the middle of the channel and can be narrowed down to a smaller cross section with marginal mixing with the sheath flow, owing to the characteristics of laminar flow. This method allows the position and cross section of the analyte flow to be controlled on a scale of a few micrometres. Moreover, it reduces the risk of multiple particles passing the detection volume at the same time and makes it easier to switch the analyte flow from the waste channel to the collection channel. The analytes are most often confined in 2D, i.e. the sheath flow focuses the analyte flow in one direction, typically the horizontal one. 11,12,14–22 This is because the applied lithographic methods allow fabrication of planar structures with one global depth, but their application becomes more challenging and complex for structures with changing depths or crossing channels. Changing depths are most often generated by multistep lithography and crossing channels by
even higher efficiencies. The combination of very high slurry concentration and ultrasonication during packing resulted in ultra-efficient columns with close to 500,000 plates over a length of 1 m, corresponding to a reduced plate height near unity (Chapter 3). It can be concluded that the advantage of high slurry concentration (good radial homogeneity) was maintained, while the sonication largely prevented the formation of unfavourable voids. These columns provide exceptional separation potential for very complex samples and are well suited for biological probes, which provide only small sample volumes. While the handling of these columns is challenging due to high back-pressures and the sensitivity towards extra-column band broadening effects, recent studies have demonstrated successful applications of such columns for the analysis of complex peptide mixtures and digested rat hormones. To conduct similar studies for columns of analytical format, a new imaging and reconstruction procedure is required due to non-transparent column walls (usually stainless steel, in contrast to fused silica for the capillary columns). FIB-SEM was selected as an adequate method and a commercial narrow-bore column was chosen as the sample (Chapter 4). The bed structure was stabilized with poly(divinylbenzene) and extruded from the steel housing. Imaging and reconstruction were conducted for the first time for a packed analytical column, at two selected positions, to illuminate the region affected by column wall effects and the bulk structure of the column bed. In addition, simulations of fluid flow demonstrated the effect of the structure on the flow profile in the column. The observed peculiarity of the bed structure in the wall region was found to be in excellent agreement with macroscopic and chromatographic (indirect) characterisations of the wall effects. At the same time, the
Production processes are usually investigated using models and methods from queueing theory. Control of warehouses and their optimization rely on models and methods from inventory theory. Both theories are fields of Operations Research (OR), but they comprise quite different methodologies and techniques. In classical OR, queueing and inventory theory are considered disjoint research areas. On the other hand, the emergence of complex supply chains (≡ production-inventory networks) calls for integrated production-inventory models as well as adapted techniques and evaluation tools. Such integrated approaches to modeling production-inventory systems have been developed over the last decade, and it turned out that the problem of determining, e.g., steady-state distributions of these systems results either in large simulation experiments, in heuristic decomposition-aggregation methods, or in solving the global balance equations numerically.
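Solving the global balance equations numerically amounts to finding the stationary distribution π with πQ = 0 and Σπ = 1, where Q is the generator matrix of the underlying Markov chain. A minimal sketch for a toy birth-death chain with invented rates (standing in for a production-inventory system with production rate 2.0 and demand rate 3.0):

```python
import numpy as np

# Toy generator matrix Q of a small birth-death chain (hypothetical rates):
# states 0..3, production (birth) rate 2.0, demand (death) rate 3.0.
birth, death, n = 2.0, 3.0, 4
Q = np.zeros((n, n))
for i in range(n):
    if i + 1 < n:
        Q[i, i + 1] = birth
    if i - 1 >= 0:
        Q[i, i - 1] = death
    Q[i, i] = -Q[i].sum()           # rows of a generator sum to zero

# Global balance pi @ Q = 0 with normalization sum(pi) = 1:
# replace one (redundant) balance equation by the normalization condition.
A = np.vstack([Q.T[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
```

For chains of this kind the direct solve is trivial, but the state space of a realistic production-inventory network grows combinatorially, which is exactly why the text contrasts the numerical solution with simulation and decomposition-aggregation heuristics.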
A big advantage of procedural models in comparison to other approaches is the persistence of the additional knowledge. A procedural model can be created and parameterized once and reused for any query on any database. It is even possible to recombine and edit existing procedural models. Therefore, procedural models offer great potential for improving retrieval and classification in a long-lasting, fundamental way. The vast majority of the available techniques are targeted towards a non-persistent, single-query improvement. Techniques based on user interaction or parameter configuration during or after the result computation generate non-persistent additional information, since this information is only valid for the specific ongoing query. Other techniques, which use additional information provided in advance, are non-persistent when the information is tailored towards a specific database; the additional information is not valid for other queries or other databases. Techniques that generate persistent additional information are typically targeted towards the formulation of a more informative query constructed in advance. This includes techniques of user-based query formulation, like the 2D sketch from Funkhouser et al. [FMK*03] (since the 2D sketches can be reused and are not database-dependent), and several part-based template techniques [BGM*07], [AKZM14], [OLGM11]. However, 2D sketches and part-based templates are only rough approximations. None of these techniques offers the intrinsic expressiveness and flexibility of a procedural model.
The next step was to analyse the spatial distribution of A549 cells in the printed constructs by fluorescence microscopy. Nuclei were stained with Hoechst and the 3D distribution was visualized by Z-stack analysis. After one day of culture, cells were well distributed in all bioinks under investigation, i.e. the Matrigel content did not influence the initial distribution of the cells (Fig. 2B, upper row). However, on day seven after the printing process, distinct differences became obvious (Fig. 2B, lower row). In the absence of Matrigel, cells sank to the bottom of the construct. This can easily be ascribed to the lack of cell-adhesive peptide motifs (e.g. RGD) that support attachment of the cells to the matrix, as the molten gelatin was most likely washed out of the construct upon cultivation at 37 °C. The construct containing 5% Matrigel showed that not all cells sank, though many did. In contrast, higher Matrigel concentrations of 20 and 50% were substantially superior in supporting long-term spatial distribution of the cells compared to lower amounts of Matrigel. Thus, the higher Matrigel content supported cellular attachment to the matrix. In addition, it improved the stability of the 3D printed constructs. Still, there was an obvious loss of the overall cell number with all bioink formulations during seven days of culture, which was less prominent with the bioinks containing higher Matrigel concentrations.