The use of rocks and gabions in hydraulic works, especially in river engineering, has increased during recent decades. The gabion stepped spillway is a type of hydraulic structure designed for river bed protection. Most of the flow's kinetic energy is dissipated as the flow cascades from one step to another. Therefore, in most cases no stilling basin is provided at the downstream end of the spillway, and as a result a scour hole may develop. The main objective of this study is to conduct a series of experimental tests to investigate the mechanism of scour hole development. A total of 19 tests were conducted. The results reveal that the scour hole dimensions for the simple stepped spillway are larger than for the pooled stepped spillway. Three equations are presented for the prediction of the maximum scour hole depth.
In the first study of the dissertation, a complete framework for the prediction of driver LC behavior was developed, with several methodological innovations over prior research. First, contextual traffic and driving style were considered in preparing the training datasets, termed driving style datasets; datasets prepared without this additional consideration were termed non-categorized datasets. These datasets were used to train the ML models to classify LC and lane-keep (LK) data samples. Second, a new gaze-based labeling (GBL) method was proposed for labeling LC and LK data samples, in contrast to the time-window labeling (TWL) method commonly used in related work. The results show that ML models trained on the driving style datasets achieve higher classification scores than those trained on the non-categorized datasets. In addition, the classification performance of the models is more promising with the GBL method than with the TWL method. To counter the limitations of the first study, i.e. data collection in a driving simulator and feature selection based on insufficient empirical knowledge, a second study was conducted on a large set of naturalistic driving data.
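The contrast between driving-style-categorized and pooled (non-categorized) training can be sketched on toy data. This is an illustrative sketch only, not the dissertation's pipeline: the features, the two-way style split and the random-forest choice are all assumptions made here.

```python
# Toy sketch (NOT the dissertation's pipeline): train one LC/LK classifier
# per driving-style category versus one model on the pooled, non-categorized
# data. Features and the style split are hypothetical illustration choices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 3))                 # hypothetical kinematic features
style = rng.integers(0, 2, size=n)          # 0 = cautious, 1 = aggressive
# Synthetic ground truth: driving style shifts the decision boundary,
# so a per-style model faces a cleaner classification problem.
y = (X[:, 1] + 0.5 * style + 0.2 * rng.normal(size=n) > 0).astype(int)  # 1=LC, 0=LK

def fit_score(Xs, ys):
    Xtr, Xte, ytr, yte = train_test_split(Xs, ys, random_state=0)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
    return accuracy_score(yte, clf.predict(Xte))

pooled_acc = fit_score(X, y)                # non-categorized dataset
style_accs = [fit_score(X[style == s], y[style == s]) for s in (0, 1)]
print(f"pooled: {pooled_acc:.2f}, per-style: {[round(a, 2) for a in style_accs]}")
```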
Point prediction of future upper record values is considered. For an underlying absolutely continuous distribution with strictly increasing cumulative distribution function, the general form of the predictor obtained by maximizing the observed predictive likelihood function is established. The results are illustrated for the exponential, extreme-value and power-function distributions, and the performance of the obtained predictors is compared to that of maximum likelihood predictors on the basis of the mean squared error and Pitman's measure of closeness criteria. For the exponential and extreme-value distributions, it is shown that under slight restrictions, the maximum observed likelihood predictor outperforms the maximum likelihood predictor in terms of both performance criteria.
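The two comparison criteria can be illustrated by Monte Carlo on simulated exponential upper records. The two predictors below are simple stand-ins chosen for the illustration, not the paper's derived predictors.

```python
# Monte Carlo illustration of the two comparison criteria named above:
# mean squared error and Pitman's measure of closeness. The two predictors
# are simple stand-ins, NOT the paper's maximum (observed) likelihood ones.
import numpy as np

rng = np.random.default_rng(1)
r, reps = 5, 20000
# For a unit exponential, successive upper-record increments are iid Exp(1)
# (memorylessness), so the records are cumulative sums of exponentials.
incs = rng.exponential(size=(reps, r + 1))
records = incs.cumsum(axis=1)
R_r, R_next = records[:, r - 1], records[:, r]   # R_r and R_{r+1}

pred_trivial = R_r               # repeat the last record
pred_plugin = R_r + R_r / r      # add the average observed record increment

mse = lambda p: np.mean((p - R_next) ** 2)
# Pitman closeness: how often one predictor lands closer to R_{r+1}.
pitman = np.mean(np.abs(pred_plugin - R_next) < np.abs(pred_trivial - R_next))
print(f"MSE trivial={mse(pred_trivial):.3f}, plug-in={mse(pred_plugin):.3f}, "
      f"P(plug-in closer)={pitman:.3f}")
```

Here the plug-in predictor wins on both criteria, mirroring the kind of comparison the text describes.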
ased prediction of future Pareto record values is discussed in Paul and Thomas (2016), and maximum likelihood prediction of future Pareto record values is studied in Raqab (2007). Since the model of record values is contained in the generalized order statistics model [see Kamps (1995, 2016)], all results pertaining to the prediction of future generalized order statistics can be specialized to solve the prediction problem for record values [see, e.g., Burkschat (2009)]. Bayesian prediction methods for future record values were first discussed by Dunsmore (1983) and have subsequently been applied to various distribution families [cf. Madi and Raqab (2004); Ahmadi and Doostparast (2006); Nadar and Kızılaslan (2015)]. It should be noted that, under exponential as well as under Pareto distributions, maximum likelihood prediction of the subsequent record value R_{r+1} becomes trivial, since the respective predictor is given by R_r, i.e., R̂_{r+1} = R_r.
Ruban [27] showed that the NLSE is suitable for the prediction of extreme waves, applying the Gaussian variational ansatz to the NLSE in order to obtain a semi-quantitative prediction of non-linear spatio-temporal focussing. Farazmand and Sapsis [28] presented a reduced-order prediction of rogue waves using the MNLSE. Their procedure is based on the identification of elementary wave groups (EWGs), assuming that critical EWGs do not interact with other EWGs within the short-term prediction. The measured irregular sea state is decomposed into EWGs, and the evolution and amplitude growth of critical EWGs are determined from precomputed EWGs based on the MNLSE. Thus, the numerical effort for the prediction of extreme events is reduced to the proper decomposition of the sea state and identification of the underlying, precomputed EWG. Cousins et al. [29] showed a similar approach: a data-driven scheme for the prediction of extreme events. This approach can also be divided into two separate components. The first component provides the database for the prediction of extreme events; it is based on the MNLSE and can be applied prior to any real-world application. The basis is localized wave groups, which are numerically evolved for varying amplitudes and periods, resulting in specific group amplification factors. The second component is the real-time extreme wave predictor using field measurements. Within the measurements, coherent wave groups are identified, and the future elevation is estimated from the database. Again, the numerical effort is reduced to the decomposition of the irregular wave sequence, identification of coherent wave groups, and the resulting amplification.
In this project, insights into novel gene regulation mechanisms revealed potential new biomarkers for the prediction of childhood wheeze. As a potential new biomarker should be assessable as easily as possible in the clinic, genes with differential expression between the phenotypes under unstimulated conditions seem most promising. This has the advantage that no cell culturing is necessary, in addition to the non-invasive method of sample collection provided by cord blood. In order to distinguish between the wheeze phenotypes and to assess the personal asthma risk later in life, it seems highly interesting to further investigate the role of the inflammasome/IL-1R1 axis. In this project, the gene expression of NLRP3, Casp1 and IL-1R1 differed significantly between the wheeze phenotypes and, importantly, also differed from the gene expression of healthy controls. Consequently, considering those genes as potential biomarkers for the prediction of childhood wheeze might allow assessment of the personal risk and hence a more personalized treatment strategy. However, further research is needed to assess the potential of these genes as predictive biomarkers of childhood wheeze.
The prediction of ship motions has been an important field of ship hydrodynamic research for over 100 years. Traditionally, ship motions are split into two classes: manoeuvring and seakeeping. Manoeuvring describes ship motions initiated, for example, by the rudder, propeller or other control surfaces, whereas seakeeping is the motion behaviour of the ship in waves. During most of the 20th century, these two fields were investigated separately. The first paper on the prediction of ship manoeuvres in waves known to the author was published by Hirano et al. (1980). In recent years – supported by the discussion led by the International Maritime Organisation (IMO) on minimum power requirements in waves – the increased number of publications on this subject shows the growing interest in this field. Due to the growing computational capabilities of modern computer systems, it seems preferable to develop a prediction method based solely on numerical simulations, without depending on expensive and time-consuming experimental model tests. In the following sections, an overview of the current state of prediction methods for motions of ships in waves and mean wave forces is presented.
The finite element (FE) tool CODAC (Composite Damage Tolerance Analysis Code) is presented, which can evaluate the low-velocity impact behaviour and residual strength of composite structures. CODAC is a fast tool for use in the design process and allows a quick evaluation of the damage resistance and damage tolerance of wing and fuselage structures of today's and future aircraft. A methodology for the simulation of low-velocity impact on stringer-stiffened panels as well as methods for the prediction of the compression after impact (CAI) strength of composite laminates are described. Currently, CODAC is being extended for application to sandwich structures. Special FE formulations were developed, permitting an accurate and efficient calculation of deformations and stresses in sandwich structures. These are necessary for the prediction of damage in the relatively soft core material due to combined transverse compressive and shear loading during an impact event. A comparison of CODAC's predictions with experimental data is presented, the first example being an impact on a two-stringer monolithic panel, the second an impact on a honeycomb sandwich plate.

1 Introduction
ProteaMAlg describes the proteasome's degradation dynamics using a system of ordinary differential equations (Mishto et al. 2008). The model considers processes such as the uptake and release of fragments into/from the proteasome as well as the proteolytic cleavage of peptides inside the proteasome. In addition, the amino acid at each position of the substrate is incorporated in the model using substrate-specific cleavage strengths, which can be determined either experimentally or with PAProc, NetChop or a similar prediction algorithm. The authors find that prediction of peptides is not possible with their or other existing statistical models; they can only describe the production of observed fragments from a specific substrate by fitting the model parameters to the observed data, and new substrates require entirely new parameter values. It was shown, however, that both the substrate length and the amino acid composition affect the substrate cleavage strength and the overall substrate degradation rate. It was also shown that the generation of double cleavage products is favored in the presence of PA28.
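A drastically simplified sketch in the spirit of this description (uptake, internal cleavage and release written as a small ODE system) might look as follows. These are not the actual ProteaMAlg equations, and all rate constants are hypothetical illustration values.

```python
# Schematic three-step sketch (uptake -> cleavage -> release) of a
# degradation ODE system; NOT the actual ProteaMAlg model. All rate
# constants are hypothetical illustration values.
import numpy as np
from scipy.integrate import solve_ivp

k_in, k_cleave, k_out = 0.5, 1.0, 0.8   # uptake, cleavage, release rates

def rhs(t, y):
    S_out, S_in, F_in, F_out = y        # substrate outside/inside, fragments inside/outside
    return [
        -k_in * S_out,                   # substrate uptake into the proteasome
        k_in * S_out - k_cleave * S_in,  # uptake minus proteolytic cleavage
        k_cleave * S_in - k_out * F_in,  # cleavage produces fragments, release removes them
        k_out * F_in,                    # released fragments accumulate outside
    ]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0, 0.0])
S_out_end, _, _, F_out_end = sol.y[:, -1]
print(f"after t=20: substrate left ~{S_out_end:.3f}, fragments released ~{F_out_end:.3f}")
```

In such a linear scheme total mass is conserved, and essentially all substrate ends up as released fragments for long times.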
Artificial neural networks fall within the field of artificial intelligence and can in this context be defined as systems that simulate intelligence by attempting to reproduce the structure of the human brain. Neural networks are organised in layers, and within each layer there are one or more processing elements called 'neurons'. The first layer is the input layer, and the number of neurons in this layer equals the number of input parameters. The last layer is the output layer, and the number of neurons in this layer equals the number of output parameters to be predicted. The layers between the input and output layers are the hidden layers and consist of a number of neurons to be defined in the configuration of the NN. Each neuron in each layer receives information from the preceding layer through its connections, carries out some standard operations and produces an output. Each connection has a weight factor assigned to it as a result of the calibration of the neural network. The input of a neuron consists of a weighted sum of the outputs of the preceding layer; the output of a neuron is generated using a linear activation function. This procedure is followed for each neuron; the output neuron generates the final prediction of the neural network.
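A minimal sketch of the forward pass just described: each neuron takes a weighted sum of the preceding layer's outputs, and the output neuron uses the linear activation stated in the text. The tanh hidden activation and the layer sizes are assumptions for the illustration (the text leaves the hidden-layer operations unspecified).

```python
# Minimal forward-pass sketch of the layered network described above.
# Hidden activation (tanh) and layer sizes are illustrative assumptions;
# the output neuron uses the linear activation mentioned in the text.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 4, 1                # input, hidden, output layer sizes
W1 = rng.normal(size=(n_hidden, n_in))         # weights stand in for calibrated values
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))
b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(W1 @ x + b1)   # hidden neurons: weighted sum + activation
    return W2 @ h + b2         # output neuron: weighted sum, linear activation

x = np.array([0.2, -1.0, 0.5])  # one input sample with 3 input parameters
y = forward(x)
print("network prediction:", y)
```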
Variceal bleeding is one of the main causes of death in patients with liver cirrhosis. In-hospital mortality rates dropped in recent decades from around 42% to around 15% due to improved management of acute haemorrhage (101); however, the risk of mortality remains high. PHT and the increased blood flow into the varices lead to a dilation of the varices and a thinning of the variceal wall. Acute variceal haemorrhage develops when the wall tension of the vessel exceeds its elastic limits. The wall tension (T) can be calculated with the Laplace equation: T = TP × r / w. The three influencing variables are the transmural pressure (TP), which is the pressure gradient between variceal pressure and luminal (esophageal) pressure, the radius (r) and the wall thickness (w), respectively (102).
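As a worked illustration of the Laplace relation (with hypothetical numbers, not patient data): doubling the radius, or halving the wall thickness, doubles the wall tension, which is exactly the mechanism by which dilation and wall thinning raise the rupture risk.

```python
# Worked example of the Laplace relation T = TP * r / w. All numbers are
# hypothetical illustration values, not clinical measurements.
def wall_tension(tp_mmHg, radius_mm, wall_mm):
    """T = TP * r / w; units follow the inputs (mmHg * mm / mm = mmHg)."""
    return tp_mmHg * radius_mm / wall_mm

T1 = wall_tension(15.0, 2.0, 0.5)   # baseline varix
T2 = wall_tension(15.0, 4.0, 0.5)   # dilated: radius doubled
T3 = wall_tension(15.0, 2.0, 0.25)  # wall thinned by half
print(T1, T2, T3)  # 60.0 120.0 120.0
```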
Upper extremity amputation can be related to accidental trauma, disease, defect or the progress of war. Although persons have suffered arm amputation for centuries, it is only during the past 500 years that substitutes have been fabricated in an attempt to replace the amputated part of the arm. Much of the early writing that exists in reference to amputation is related to war injuries. One of the first examples of an early artificial hand is a mailed fist that was made for Goetz von Berlichingen in the Middle Ages. This prosthesis was equipped with jointed fingers that could passively grip his sword like a vice. Even with the advances made in the electrical control of prosthetic parts, no excellent substitute has yet been built for a missing hand, wrist, elbow or shoulder (Meier and Atkins, 2004). In the context of non-invasive interfaces for controlling mechanical hands, a concrete possibility arises from forearm surface electromyography (EMG), a technique by which muscle activation potentials are gathered by electrodes placed on the patient's forearm. These potentials can be used to track which muscles the patient is activating, and with what force (Castellini and van der Smagt, 2009). The use of myoelectric signals for control seems to have been suggested first by Reiter, in Germany, around 1948 (Wirta et al., 1977).
hybridizations striving for equal sampling and minimal distance between pairs of genotypes (Kerr and Churchill 2001) was developed for each factorial to minimize average variance. Sixty-three, 21, 57 and 21 hybridizations were performed for exps. 1, 2, 3 and 4, including 21, 15, 22 and 15 inbred lines, respectively. Both dyes (Cy3 or Cy5) were alternately used for each genotype to reduce systematic bias. RNA labelling and hybridizations were performed according to the protocols of the maize oligonucleotide array project (http://www.maizearray.org). The micro-arrays were scanned (Applied Precision ArrayWorx Scanner; Applied Precision Inc., Issaquah, Washington, USA), and the data were evaluated using the software GENEPIX PRO 4.0 (Molecular Devices, Sunnyvale, CA, USA). The 2k micro-array was used for exps. 2–4. For Exp. 1, the raw files from the 46k micro-array were reduced to the oligos from the 2k micro-array. The data for exps. 1–4 have been deposited in NCBI's Gene Expression Omnibus (Edgar et al. 2002) and are accessible through GEO Series accession numbers GSE17754, GSE85286, GSE85287 and GSE85288, respectively.
When an accurate instrument is unavailable or prohibitively expensive but certain data related to the cause of misclassification are available, one can develop an identifiable model in an attempt to correct for the misclassification bias in the estimators and predictors. For the single proportion problem using misclassified data with no training data, Gaba and Winkler (1992) and Viana et al. (1993) developed Bayesian approaches with highly informative priors. Bayesian inferences with informative priors were also developed for two-sample problems for two proportions. For example, see Evans et al. (1996) for the risk difference (the difference of two proportions) and Gustafson et al. (2001) for odds ratios. Lahiri and Larsen (2005) used a mixture model to correct for the bias of the ordinary least squares estimators of regression coefficients due to imperfect linkages.
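As a simple non-Bayesian point of reference for the single-proportion problem, the classical Rogan–Gladen moment correction can be sketched as follows; the cited works use Bayesian models with informative priors, whereas this sketch assumes sensitivity and specificity are known exactly.

```python
# Classical Rogan-Gladen moment correction for a single misclassified
# proportion. This is only the non-Bayesian analogue of the cited methods,
# with sensitivity and specificity assumed known.
def corrected_proportion(p_obs, sensitivity, specificity):
    """Invert p_obs = sens*p + (1-spec)*(1-p) for the true proportion p."""
    p = (p_obs - (1.0 - specificity)) / (sensitivity + specificity - 1.0)
    return min(max(p, 0.0), 1.0)   # clamp to the valid range

# An observed prevalence of 0.20 under an imperfect classifier is deflated:
p_true = corrected_proportion(0.20, sensitivity=0.9, specificity=0.95)
print(round(p_true, 4))  # 0.1765
```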
Journal plain bearings are widely used machine elements. Machine and plant constructors appreciate, among other advantages over roller bearings, their efficiency in high-speed applications, their favourable damping characteristics and their reliability. As long as journal bearings operate in a hydrodynamic state, they are insusceptible to wear phenomena. A widely used parameter for journal bearing design is the specific load p_spec, which is the applied load divided by the projected bearing area. This parameter was rejected by other authors in the past, and with good reason [1,2]. In a dynamically loaded bearing, it is not the specific load that governs fatigue but rather the multiaxial and often non-proportional stress quantities induced by the oil film pressure in the lining material. Other influences, such as mounting stresses and thermally induced stresses in bimetallic designs, enhance the multiaxiality of the stress state. In order to overcome the uncertainty of the specific load parameter, the design strategy is often to enlarge the projected bearing area to a maximum in order to lower the specific load. This best-practice approach contradicts the ideas of lightweight design and resource efficiency.
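The specific load definition can be made concrete with a small numerical example (all values are hypothetical). It also shows the design reflex criticized above: widening the bearing halves p_spec, even though the multiaxial stress state in the lining may be barely affected.

```python
# Illustration of the specific load p_spec = F / (d * b): applied load
# divided by the projected bearing area (diameter x width). All values
# are hypothetical illustration numbers.
def specific_load(load_N, diameter_mm, width_mm):
    return load_N / (diameter_mm * width_mm)   # N/mm^2 = MPa

p1 = specific_load(50_000.0, 80.0, 40.0)   # baseline bearing
p2 = specific_load(50_000.0, 80.0, 80.0)   # width doubled to "lower" p_spec
print(f"{p1:.3f} MPa -> {p2:.3f} MPa after doubling the width")
```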
variables that can be easily measured in the field or laboratory, i.e., flow depth, mean flow velocity, energy slope, median sediment size and density, and water temperature. The writers used more than 3500 published total-load data points from field and flume studies, and the results showed that 84% of the data were predicted within a factor of 2 of the measured values. This is an encouraging achievement considering the large database and the range of variables covered by the formula. Within the flows investigated, the derived relationships are fairly consistent with the available data over a wide range of conditions.
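The "within a factor of 2" score used above is simply the fraction of prediction-to-measurement ratios lying between 0.5 and 2; a sketch on hypothetical numbers (not the writers' 3500-point database):

```python
# Sketch of the "fraction of predictions within a factor of 2" score.
# The measured/predicted values are hypothetical, not the writers' data.
import numpy as np

measured = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
predicted = np.array([1.5, 5.0, 3.0, 7.0, 40.0])   # hypothetical model output

ratio = predicted / measured
within_factor_2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))
print(f"{within_factor_2:.0%} of predictions within a factor of 2")  # 60%
```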
There are three main questions that are left for further investigation. One important question is whether the model developed here can be used for other data, i.e. whether the state of user satisfaction in the case of satisfaction with the user's own performance is comparable to satisfaction with the system's performance. The next question is which and how many features are best suited to the task of user satisfaction prediction. The approaches representing the current state of the art in this regard use different kinds of features, ranging from acoustic ones, as in our case, to behavioural ones such as turn-taking. For real-world scenarios, the obtainability of the features and the robustness of the feature extraction should be the focus – in this context, acoustic features have proven easy to obtain in high-quality audio data, but their robustness against noise in "in the wild" scenarios should be further investigated. And finally, there is the temporal aspect of user satisfaction: at which point is it possible to say that the user is satisfied or dissatisfied with a dialogue? If user dissatisfaction is recognised early in the dialogue, the system can adapt itself to the user's state and act appropriately. These questions should be addressed in further research.
Karoly Robert University College, Hungary
The aim of the study is to establish an insolvency forecasting model using different statistical methods and to compare their efficiency. In addition, the relationship between indebtedness and financial distress, and its direction, is also part of the examination. With the different approaches we reached nearly the same efficiency; the main focus was on the independent testing sample, where we did not apply any modification to the dataset, assuming realistic circumstances for predicting the probability of default. The research focuses on small companies, since their number in the economy is high, but insolvency forecasts for this segment are very rare.
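One of the statistical methods typically compared in such studies can be sketched as follows: a logistic regression for the probability of default, evaluated on a held-out sample with no rebalancing. The data, the indebtedness ratio and the coefficients below are synthetic inventions for illustration, not the study's sample or results.

```python
# Hedged sketch (synthetic data, NOT the study's sample): a logistic
# regression for the probability of default from an indebtedness-style
# ratio, scored on an unmodified held-out test set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
debt_ratio = rng.uniform(0.0, 1.5, size=n)       # liabilities/assets (synthetic)
logit = 4.0 * (debt_ratio - 0.9)                 # higher indebtedness -> higher risk
default = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = debt_ratio.reshape(-1, 1)
Xtr, Xte, ytr, yte = train_test_split(X, default, random_state=0)
model = LogisticRegression().fit(Xtr, ytr)
auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

The positive fitted coefficient recovers the assumed direction of the indebtedness–distress relationship.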
Since the focus of this paper is to study the gains from moving from country-specific models to a global model, we concentrate on relatively tractable linear benchmark models. At the same time, we acknowledge that other classes of non-linear models, such as Markov-switching models, can be superior tools for forecasting business-cycle turning points (e.g., Kim and Nelson, 1998). Thus, it would be an interesting extension of our research to see whether taking the international dimension into account also pays off when comparing country-specific Markov-switching models with a Markov-switching GVAR model (as, e.g., in Binder and Gross, 2013).