
1.2 Problem statement

1.2.3 Calculation and modelling efficiency

The efficiency of calculations and modelling can be compared via statistical simulation experiments. The main steps of the investigation of efficiency are:

1) Model development
2) Verification/validation
3) Statistical experimentation and result analysis

Several statistical indicators, including the mean and the mode, as well as the location, scale and shape parameters, convey important information about the distribution of random values (Law and Kelton, 2000). A confidence interval is a range of values that, with a stated probability, contains the true population parameter being estimated. Mean-value analysis models are characterised by their average output.

Mean-value models have a limitation when idle states and waste flow disruptions must be considered; this is why simulation is required in some cases for accurate predictions. Since the failure properties of components are best described by statistical (probability) distributions, the life distributions commonly implemented in RAMS software packages, including but not limited to the normal, beta, binomial, exponential, gamma, lognormal, uniform and Weibull distributions, can be used in waste management modelling as well. In addition, standard distributions can be examined visually for different combinations of input parameters. Weibull++ by ReliaSoft is one of the most comprehensive software packages in this field (ReliaSoft Corporation, 2009b).
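As a hedged illustration of such distribution-based modelling, the following Python sketch fits a Weibull life distribution to hypothetical failure times and reports a simple confidence interval for the mean life. The data, the parameter values and the use of SciPy are assumptions for illustration only, not taken from Weibull++ or from the case studies.

```python
# Minimal sketch (assumed, not from the source): fitting a Weibull life
# distribution to hypothetical time-to-failure data, in the spirit of what
# packages such as Weibull++ automate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical failure times in hours, drawn from a known Weibull
failure_times = stats.weibull_min.rvs(c=1.8, scale=1500.0, size=200,
                                      random_state=rng)

# Maximum-likelihood fit with the location parameter fixed at zero,
# the usual choice for life data
shape, loc, scale = stats.weibull_min.fit(failure_times, floc=0)
print(f"fitted shape (beta) = {shape:.3f}, scale (eta) = {scale:.1f}")

# 95% confidence interval for the mean life (normal approximation)
mean = failure_times.mean()
half = 1.96 * failure_times.std(ddof=1) / np.sqrt(len(failure_times))
print(f"mean life {mean:.1f} h, 95% CI [{mean - half:.1f}, {mean + half:.1f}] h")
```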

1.2.4 Failure characteristics

Failure curves illustrate the number of subsystems, equipment units or components in a population that fail at any given time during the lifetime of that population, and thus give the probability of failure. Once the basic attributes of the failure curves are established, the mean time between failures (MTBF) and other potentially useful metrics can be easily calculated.

Since determining failure rates is a key issue for availability and reliability analysis, the corresponding function should be known. If the failure distribution D has a density f and cumulative distribution function F, the failure rate function (also known as failure rate, hazard rate, risk rate, or force of mortality), λ(t), is defined for those values of t for which F(t) < 1 (Barlow and Proschan, 1996):

λ(t) = f(t) / (1 − F(t))

In the following, the (random) variable X is used to denote these measures.
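As a small, assumed illustration (not code from the source), the hazard rate defined above can be evaluated numerically; SciPy's survival function sf gives 1 − F(t) directly.

```python
# Illustrative sketch of the failure rate lambda(t) = f(t) / (1 - F(t)),
# evaluated for an assumed Weibull life distribution.
import numpy as np
from scipy import stats

def failure_rate(dist, t):
    """Hazard rate, defined where F(t) < 1; sf(t) = 1 - F(t)."""
    return dist.pdf(t) / dist.sf(t)

weibull = stats.weibull_min(c=2.0, scale=1000.0)  # beta = 2: wearout type
t = np.array([100.0, 500.0, 1000.0])
print(failure_rate(weibull, t))
# For beta > 1 the hazard rate increases with t, matching wearout behaviour.
```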

The closely related probability density function f(x) and cumulative distribution function F(x) are the two most important statistical functions in reliability engineering. Most reliability measures can be derived or obtained with the help of these functions.

The probability density function of X is the function f(x) such that, for any numbers a and b with a ≤ b,

P(a ≤ X ≤ b) = ∫_a^b f(x) dx (1.17)

where X is a continuous random variable.

The cumulative distribution function of X, i.e. the function F(x), can be defined by

F(x) = P(X ≤ x) = ∫_0^x f(s) ds (1.18)

This function can be used to measure the probability that an item will fail before the associated time value, t, and is also called unreliability.

The mathematical relationship between the probability density function and the cumulative distribution function implies that the probability of an occurrence (a unit failure) by time t is given by

F(t) = ∫_0^t f(s) ds (1.21)


This function is the unreliability function, as it defines the probability of failure by a certain time. Subtracting this probability from 1 (or 100%) gives the reliability function, i.e. the probability that a component successfully completes a mission of a given time duration (Fig. 1.4).

Figure 1.4 Reliability represented as the area under the probability density function.

As reliability and unreliability are the only two events being considered and they are mutually exclusive, the sum of these probabilities is equal to unity:

F(t) + R(t) = 1 (1.22)

R(t) = 1 − F(t) (1.23)

R(t) = 1 − ∫_0^t f(s) ds (1.24)

R(t) = ∫_t^∞ f(s) ds (1.25)

Conversely,

f(t) = −dR(t)/dt (1.26)

Statistical distributions are fully described by their probability density functions. The failure rate function, the reliability function and the MTBF can be determined directly from the definition of the probability density function f(t). There are several such distributions, including the exponential, normal (Gaussian), lognormal, Weibull, Poisson and binomial distributions (Leemis, 1995; Evans et al., 2000).

In reliability engineering, various distributions are used for modelling, including uptime and downtime distributions, failure distributions, and repair distributions.
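The relationships above can be verified numerically. The following sketch, an assumed example with an exponential life distribution, checks Eqs. (1.21), (1.22) and (1.26).

```python
# Numerical check of Eqs. (1.21), (1.22) and (1.26) for an assumed
# exponential life distribution.
import numpy as np
from scipy import stats
from scipy.integrate import quad

dist = stats.expon(scale=500.0)   # mean life 500 h (assumption)
t = 300.0

F, _ = quad(dist.pdf, 0.0, t)              # Eq. (1.21): integral of f
R = dist.sf(t)                             # reliability function
print(np.isclose(F + R, 1.0))              # Eq. (1.22): True

h = 1e-5                                   # Eq. (1.26): f(t) = -dR/dt
dR_dt = (dist.sf(t + h) - dist.sf(t - h)) / (2 * h)
print(np.isclose(-dR_dt, dist.pdf(t)))     # True up to numerical error
```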


Wearout failures of components can be described by the normal distribution, since its hazard function is increasing. The normal probability density function can be defined by

f(t) = (2πσ²)^(−1/2) exp(−(t − μ)² / (2σ²)) (1.27)

where μ is the mean and σ is the standard deviation.

The lognormal distribution, which is used for many types of life data, can be expressed as

f(t) = (2πσ²)^(−1/2) (1/t) exp(−(ln t − μ)² / (2σ²)),  t > 0 (1.28)

where μ and σ are the mean and standard deviation of ln t.

The Weibull distribution is frequently used for product life because it flexibly describes increasing and decreasing failure rates. It can be defined by

f(t) = (β/η)(t/η)^(β−1) exp(−(t/η)^β),  t ≥ 0 (1.29)

where β is the shape parameter and η is the scale parameter.

There are slightly different definitions for the above distributions. The ones applied here were described by Ireson et al. (1996).
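To make the definitions concrete, the sketch below (illustrative only; all parameter values are assumed) evaluates Eqs. (1.27)-(1.29) directly and checks them against SciPy's implementations of the same distributions.

```python
# Evaluating the normal, lognormal and Weibull densities of
# Eqs. (1.27)-(1.29) from their definitions; parameters are assumed.
import numpy as np
from scipy import stats

t = np.linspace(1.0, 3000.0, 5)

# Normal, Eq. (1.27): mean mu, standard deviation sigma
mu, sigma = 1000.0, 200.0
f_norm = (2*np.pi*sigma**2)**-0.5 * np.exp(-(t - mu)**2 / (2*sigma**2))
assert np.allclose(f_norm, stats.norm(mu, sigma).pdf(t))

# Lognormal, Eq. (1.28): mu_l, sigma_l are the mean and std of ln(t)
mu_l, sigma_l = 6.5, 0.5
f_logn = ((2*np.pi*sigma_l**2)**-0.5 / t
          * np.exp(-(np.log(t) - mu_l)**2 / (2*sigma_l**2)))
assert np.allclose(f_logn, stats.lognorm(s=sigma_l, scale=np.exp(mu_l)).pdf(t))

# Weibull, Eq. (1.29): shape beta, scale eta
beta, eta = 1.5, 1200.0
f_wbl = (beta/eta) * (t/eta)**(beta - 1) * np.exp(-(t/eta)**beta)
assert np.allclose(f_wbl, stats.weibull_min(c=beta, scale=eta).pdf(t))
print("all three densities match SciPy")
```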

Hardware failures are generally characterised by a bathtub curve (Fig. 1.5). The curve graphs the failure rate λ (the reciprocal of the mean time between failures) versus time t. The chance of a hardware failure is high during the initial life of system components (infant mortality period). Failure rates during the useful life of products are fairly low and nearly constant. Towards the end of the lifetime, failure rates increase again, as the degradation of component characteristics causes hardware modules to fail (wearout period).

Figure 1.5 Typical bathtub curve of hardware failures (Ireson et al., 1996).

The mean time to failure can be mathematically expressed as

MTTF = E(T) = ∫_0^∞ t f(t) dt (1.30)
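Eq. (1.30) can be checked numerically. For a Weibull life distribution the integral has the closed form η·Γ(1 + 1/β); the sketch below uses assumed parameters.

```python
# MTTF by numerical integration of Eq. (1.30), compared with the
# closed form eta * Gamma(1 + 1/beta) for an assumed Weibull life.
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.special import gamma

beta, eta = 2.0, 1000.0
dist = stats.weibull_min(c=beta, scale=eta)

mttf_numeric, _ = quad(lambda t: t * dist.pdf(t), 0.0, np.inf)
mttf_closed = eta * gamma(1.0 + 1.0/beta)
print(mttf_numeric, mttf_closed)   # both approximately 886.2 hours
```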

1.2.5 Cost considerations and cost efficiency

A highly desired aim of real-life applications is to find the right balance between reliability, availability and costs. Total life cycle costs consist of acquisition costs and operational costs. Without a reliability program, operational costs are much higher than acquisition costs; with a reliability program, the two costs are of a similar magnitude.

1.2.6 Environmental impact

Modern waste management and thermal treatment plants strive to minimise their impact on the environment. Analysing availability and reliability issues is an important extension of this effort for several reasons. On the one hand, modelling and optimising these factors lead to better efficiency and actively contribute to minimising the environmental impact that could be caused by a malfunction of the plant or its parts. On the other hand, informative availability and reliability predictions can be performed to let reliability engineers improve the design of the plant or its operation. RAMS software packages were identified by the Author as efficient supporters of these calculations and predictions (see Chapter 5.3.3).

1.2.7 Simultaneous optimisations

Operational reliability and availability depend on several factors that must be optimised. Optimisation means achieving the maximum or minimum value of the operating parameters. High levels of reliability and availability of production machinery are required for good productivity and profitability. Optimal benefits are realised when reliability is designed into a piece of equipment. However, it is important to improve reliability, availability and maintainability throughout the life of the equipment to meet RAM goals and objectives. A reliability model enables one to predict what is affordable and to identify undesirable alternatives.

Effectiveness is influenced by the availability, reliability, maintainability and capability of a system. How can such high reliability and availability be achieved? Many factors affect reliability and availability, and many issues must be addressed; the most important ones are engineering design, materials, manufacturing, operation, and maintenance. In order to address these issues, there is a need for information, modelling, analysis, testing, data, additional analysis, often additional testing, reengineering when necessary, and so forth.

1.3 Research questions

There are numerous commercial software packages available on the market, and their number keeps increasing. A thorough assessment was performed in order to decide on their applicability in the field of waste management. In addition, there is always the option to develop one's own tools; however, this option should be evaluated in terms of time, effort and effectiveness.

From the very beginning, the main question was how to identify general-purpose reliability software appropriate for application in the field of waste management. For this purpose, a full list of the features required to perform effective RAMS analysis was compiled.

Software support is capable of answering important questions, including:

• How to identify the weak points of systems

• How to improve availability and reliability

• How to introduce maintenance action plans

• How to develop a reliability culture for different plants

Limitations were also considered when defining the scope of research. If, for example, case studies were conducted at a modern plant and at an older plant, respectively, the outputs would differ. A nearly optimal operation is not the best subject for experimentation; on the other hand, old plants use obsolete technologies that are not appropriate for deriving general rules. This is why choosing the right place for case studies is so important.


1.4 Purpose of the research

The main purpose of the research is to contribute to both the theoretical and the practical RAMS issues of waste management via software support. Mathematical, statistical, optimisation and modelling tools were combined in order to improve reliability in this field. The experimental results and industrial case studies proved the effectiveness of the methodology. This success constitutes a new contribution to the field, considering not only cost effectiveness but also the reduction of the environmental impact and the continuity of waste-to-energy processes.


CHAPTER 2

STATE-OF-THE-ART

Due to the specific features of waste management systems, a comprehensive review was needed before making new contributions in the field. Investigation of industrial approaches was necessary in order to consider waste management strategies, RAM methodologies and software applications simultaneously.

2.1 RAMS in industrial processes and waste management

The scope of research covered the reliability-related issues of industrial processes and especially of waste management systems. Although there is a wide range of research papers on reliability in general, most of them relate to industrial processes other than waste management systems. On the other hand, there are new contributions to waste management, but they consider neither reliability issues nor software support specifically.

2.1.1 Availability

Based on the general definition described earlier, the availability of waste management chains represents the capability to manage waste continuously in the usual way (de Castro and Cavalca, 2006). The optimisation of this field can be investigated through various approaches, as researchers have recently done.

2.1.1.1 Availability optimisation

An availability optimisation technique based on a genetic algorithm was presented recently by de Castro and Cavalca (2006). The literature overview in the paper shows that genetic algorithms are capable of solving many reliability-related problems, including reliability allocation, redundancy allocation, multi-state system reliability optimisation, reliability design, safety system optimisation, optimisation of multi-level protection in series-parallel systems, preventive maintenance optimisation for mechanical components, total system cost minimisation, and availability optimisation.

Basically, there are two ways to increase the availability of engineering systems (a simple numerical comparison of the two is sketched below):

1) Increasing the availability of each component, i.e. improving reliability and maintainability, increasing the time to failure and/or decreasing the repair time. The price of this is increased cost.

2) Using redundant components or subsystems. The drawback is the increase in design and maintenance costs, as well as in volume and weight. This is the reason for redundancy optimisation.
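A toy numerical comparison of the two routes, using the steady-state availability A = MTBF / (MTBF + MTTR), is given below; all figures are assumed for illustration and are not from de Castro and Cavalca (2006).

```python
# Toy comparison: improving one component versus adding a redundant one.
# Independence of failures and repairs is assumed for the parallel pair.
def availability(mtbf_h: float, mttr_h: float) -> float:
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

a_base = availability(500.0, 20.0)         # baseline component
a_better = availability(1000.0, 10.0)      # route 1: better component
a_redundant = 1.0 - (1.0 - a_base)**2      # route 2: 1-out-of-2 redundancy
print(f"baseline {a_base:.4f}, improved {a_better:.4f}, "
      f"redundant pair {a_redundant:.4f}")
```

In this toy case the redundant pair attains the highest availability, at the cost of a second component, which is exactly the trade-off that redundancy optimisation formalises.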

Statistical distributions were used to represent the reliability curves. For mechanical systems, the Weibull distribution is suitable (e.g. for fatigue failures). The normal, lognormal, Rayleigh and gamma distributions can be applied for failure analysis and maintenance analysis.


The exponential distribution is characterised by a constant failure rate in the time domain.

The strong point of the presented approach is that it takes into account redundant components and maintenance resources while applying a robust search method. The genetic algorithm is a powerful tool for availability optimisation, as several authors have used it efficiently. A potential development could be a multi-objective optimisation procedure based on a genetic algorithm. Another possibility is to use the P-graph framework in order to apply efficient optimisation algorithms.

2.1.1.2 Spare optimisation

Spares are crucial throughout the system life cycle. Spare models based on renewal processes were proposed, among others, by Kumar and Knezevic (1998). These models can be used to predict spare requirements and thus prevent spare overstocks, which are a common problem because they are costly and often become useless or obsolete as technology develops.

Since the failure times of components may follow the exponential, gamma, normal or Weibull distribution, four different models were developed. They can be used to predict the number of spares required to reach a specified availability at a given time.

The paper gives a good overview of related research results on maximising availability or minimising costs.

There are several methods for spare requirement prediction; however, they have many drawbacks. For example, if age-related failures are not considered, the underlying assumption is unrealistic, as ageing is a common cause of failure.

Spare optimisation should consider the time-dependent overall system availability, which is missing from most existing spare optimisations. Instead of an item approach, a system approach should be used. It is far from ideal to concentrate on k-out-of-N systems, because the reliability block diagram of many systems can be represented as a series-parallel structure.

The two main tools for spare optimisation are dynamic programming and non-linear integer programming. The disadvantage of these approaches is that the computations become very complex for large problems with more than one constraint. Other methods are only approximations. For large problems, most spare optimisation methods require special software.

The proposed method is a kind of spare modelling using a renewal process, i.e. upon failure a component is replaced by a new, identical spare. This approach avoids unrealistic assumptions (e.g. a constant failure rate). The presented model is more accurate than the above-mentioned ones and can handle multiple constraints. However, there are some required assumptions, including statistically independent, random failure times following an arbitrary distribution, spare components that do not fail during storage, and negligible replacement time.
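The renewal idea can be illustrated with a small Monte Carlo sketch, which is a simplified stand-in rather than Kumar and Knezevic's analytical model: each failure consumes one spare from stock, the replacement time is taken as negligible, and the smallest stock level covering a mission with a target probability is searched for. The life distribution and mission length are assumed values.

```python
# Monte Carlo sketch of spare demand under a renewal process with
# negligible replacement time; distribution and mission are assumptions.
import numpy as np
from scipy import stats

def spares_needed(dist, mission_h, target, n_sim=5000,
                  rng=np.random.default_rng(0)):
    """Smallest stock covering the mission with probability >= target."""
    demands = np.empty(n_sim, dtype=int)
    for i in range(n_sim):
        elapsed, failures = 0.0, 0
        while True:
            elapsed += dist.rvs(random_state=rng)   # life of current unit
            if elapsed > mission_h:
                break
            failures += 1                           # one spare consumed
        demands[i] = failures
    for k in range(int(demands.max()) + 1):
        if np.mean(demands <= k) >= target:
            return k
    return int(demands.max())

life = stats.weibull_min(c=1.5, scale=2000.0)   # assumed life distribution
print(spares_needed(life, mission_h=8760.0, target=0.95))
```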

The strong points of the paper are its thorough literature review, rigorous mathematical model, promising comparisons with existing methods, and very good collection of definitions and algorithms. In addition, the presented optimisation is more realistic than other approaches, as it avoids the assumption of a constant failure rate. The paper can be used for further development of spare optimisation to solve problems with even fewer assumptions (e.g. handling the case where spare components fail during storage).

The formula used as the basis for spare optimisation (the optimisation problem) might be solved using other techniques for comparison, although the applied technique seems to be efficient. In fact, the branch-and-bound method can be applied for the optimisation and simulation of a wide range of applications (Strouvalis et al., 2002). What happens in the case of spare components with time-to-failure distributions other than the exponential, gamma, normal or Weibull? Some parts of the method may also be useful for parallel system spare optimisation.

2.1.2 Reliability

Reliability is the probability that the waste management system will perform satisfactorily for at least a given period of time when used under stated conditions (derived from the general definition of Kuo and Zuo, 2003). Due to the complexity of reliability issues, it was necessary to investigate industrial applications and to review the potential for efficiency improvement. Major approaches were outlined via a comprehensive literature review.

2.1.2.1 Reliability optimisation

Goel et al. (2002) presented a method to improve inherent plant availability at the design stage by using different reliability analysis tools and methods (e.g. reliability block diagrams, Petri net simulation and fault tree analysis).

Plant availability can be improved at the operational stage by addressing reliability and maintainability at the conceptual design stage.

Plant availability has three subtypes (a numerical illustration follows):

• Operational: reflects the system availability considering both unplanned and planned maintenance time, as well as time lost through operational logistics and administration (maximum plant time)

• Achievable: considers unplanned and planned maintenance time (plant available time)

• Inherent: the availability expected when reflecting unscheduled (corrective) maintenance only (standard operating time)
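A hedged numerical illustration of the three subtypes follows; the annual time buckets are assumed values, not figures from Goel et al. (2002).

```python
# Illustrative annual time accounting, in hours (all values assumed).
total = 8760.0        # maximum plant time in a year
corrective = 200.0    # unscheduled (corrective) maintenance
preventive = 300.0    # planned (preventive) maintenance
logistics = 250.0     # operational logistics and administration delays

inherent = (total - corrective) / total
achievable = (total - corrective - preventive) / total
operational = (total - corrective - preventive - logistics) / total
print(f"inherent {inherent:.3f}, achievable {achievable:.3f}, "
      f"operational {operational:.3f}")
# Inherent >= achievable >= operational, as each subtype subtracts more.
```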

Many maintenance optimisation frameworks apply a traditional a posteriori approach, providing only limited opportunity for availability improvement at the conceptual design stage. In the paper, an a priori approach was used instead, in which reliability and maintenance decisions were made simultaneously with process synthesis and design optimisation. The former is an optimisation problem formulated to determine the structure of the process flowsheet, the size of process units, etc.; the latter means optimisation for a given process structure to determine the size or capacity of process units, flowrates, etc.

The process synthesis problem with a given superstructure can be formulated as a mixed integer non-linear programming (MINLP) problem of the form

max P(x, y)
s.t. h(x, y) = 0
g(x, y) ≤ 0
x ∈ X, y ∈ {0, 1}ᵐ

where

• x is a vector of continuous variables specified in the compact set X

• y is a vector of discrete, mostly binary 0-1 variables

• P(x, y) is a scalar economic objective function (annual profit)

• h(x, y) is a vector of equality constraints

• g(x, y) is a vector of inequality constraints
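As a toy illustration of this problem class, far simpler than the process-synthesis MINLP of Goel et al. (2002) and with an invented objective and constraint, the binary vector y can be enumerated and the continuous subproblem in x solved for each choice:

```python
# Toy MINLP of the form max P(x, y) s.t. g(x, y) <= 0: enumerate the
# binary decisions y, solve the continuous subproblem, keep the best.
import itertools
import numpy as np
from scipy.optimize import minimize

best_profit, best_x, best_y = -np.inf, None, None
for y in itertools.product([0, 1], repeat=2):          # discrete decisions
    # Invented profit: linear revenue minus quadratic cost, bonus per unit
    obj = lambda x, y=y: -(3*x[0] + 2*x[1] + 4*sum(y)
                           - 0.5*x[0]**2 - 0.3*x[1]**2)
    # One inequality constraint g(x, y) <= 0, written as fun(x) >= 0
    cons = [{"type": "ineq", "fun": lambda x, y=y: 10 - x[0] - x[1] - 2*sum(y)}]
    res = minimize(obj, x0=[1.0, 1.0], bounds=[(0, 8), (0, 8)],
                   constraints=cons)
    if res.success and -res.fun > best_profit:
        best_profit, best_x, best_y = -res.fun, res.x, y

print(f"profit {best_profit:.2f}, x = {np.round(best_x, 2)}, y = {best_y}")
```

Real superstructure problems are solved with dedicated MINLP algorithms (e.g. outer approximation or branch and bound) rather than enumeration, which scales exponentially in the number of binary variables.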

The process superstructure, process models, cost data, reliability data, and cost function are given. The problem is to determine an optimal system configuration with optimal reliability. The optimisation framework assumes a fixed system structure and an initial reliability of the process components.

Kececioglu (2002) differentiated the optimum reliability levels for the producer and for the customer. The optimisation of total cost can be stated through the combination of reliability and the maintenance schedule. Overall cost reductions can be achieved by implementing integrated reliability engineering programs that coordinate the reliability issues of all disciplines dealing with the product throughout its whole life cycle.

2.1.2.2 Risk assessment

A health risk assessment method for municipal waste management was introduced recently by Kumar et al. (2009). They developed a toolbox for MATLAB called FLHS. The technique is capable of considering various parametric uncertainties, including natural variability, epistemic uncertainty, model uncertainty, scenario uncertainty, and the uncertainty of transport processes. The outputs of these models are fuzzy distributions, representing an expected (estimated) risk, as well as the uncertainty of that estimation and of the model assumptions. Due to the application of probabilistic models, FLHS includes sensitivity and uncertainty analysis. Furthermore, it is capable of separating uncertainty and variability.

2.1.3 Maintenance and maintainability

Maintenance in waste management can be defined as a specialisation of the general definition: it covers the activities undertaken to keep the waste management chain operational (or to restore it to operational condition when a failure occurs). These activities depend on the features of the analysed system; however, some general approaches can be extended to cope with the needs of specific scenarios.

2.1.3.1 Maintenance optimisation

The objective of maintenance optimisation is to estimate the optimum maintenance intervals by minimising the maintenance cost in order to improve equipment effectiveness and achieve suitable productivity (Metwalli et al., 1998).

Volkanovski et al. (2008) proposed a new method for optimising the maintenance scheduling of generating units in power systems. It is a genetic algorithm, i.e. a search algorithm based on the concepts of natural selection and genetic inheritance. The genetic algorithm was chosen with the goal of obtaining the solution with the minimal value of the annual loss expectation for the power system in the analysed period.

There are several approaches to maintenance scheduling problems, including mixed integer programming models, branch-and-bound techniques, decomposition, and the levelling of incremental risks method. The main problem with these techniques is that they can solve small problems only, and they have other constraints and limitations. The multi-objective type of maintenance planning focuses on two alternative criteria: costs and reliability. Hybrid methods combining several optimisation techniques may overcome the limitations of all the above methods. Both the constraints and the computational time can be improved by applying artificial intelligence approaches, along with knowledge-based and expert system models, as well as fuzzy logic.
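To make the genetic-algorithm idea tangible, a deliberately small sketch follows; the capacities, demand and fitness function are invented stand-ins, far simpler than the loss-of-load model of Volkanovski et al. (2008). Each chromosome assigns an outage week to every generating unit, and selection, crossover and mutation evolve schedules that keep the weekly reserve margin even.

```python
# Toy GA for maintenance scheduling: schedule[i] is the outage week of
# unit i; fitness penalises weeks with low or negative reserve margin.
import numpy as np

rng = np.random.default_rng(1)
CAP = np.array([400.0, 300.0, 250.0, 150.0])   # unit capacities, MW (assumed)
WEEKS, POP, GENS = 52, 60, 200
DEMAND = 700.0                                 # flat weekly demand, MW

def fitness(schedule):
    online = np.full(WEEKS, CAP.sum())
    for unit, week in enumerate(schedule):
        online[week] -= CAP[unit]              # unit offline that week
    reserve = online - DEMAND
    return -np.sum(np.maximum(0.0, -reserve)) - reserve.std()

pop = rng.integers(0, WEEKS, size=(POP, len(CAP)))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]        # selection
    cuts = rng.integers(1, len(CAP), size=POP // 2)
    kids = np.array([np.concatenate([a[:c], b[c:]])      # one-point crossover
                     for a, b, c in zip(parents,
                                        np.roll(parents, 1, axis=0), cuts)])
    mutate = rng.random(kids.shape) < 0.1                # mutation
    kids[mutate] = rng.integers(0, WEEKS, size=int(mutate.sum()))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("outage week per unit:", best)
```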


This new approach is based on exact, non-simplified probabilistic models of the generating units, uses multiple constraints and, if necessary, allows further constraints to be added.

The advantage of the proposed method is that it can be applied to real size power systems with short calculation times. It is also possible to provide multiple solutions according to
