
Adaptive gridding for simulating accidental release

In document Atmospheric chemistry (pages 142-151)

11. Air pollution modelling

11.1. Adaptive gridding

11.1.3. Adaptive gridding for simulating accidental release

Following the Chernobyl accident, most countries in Europe developed national nuclear dispersion or accidental release models linked to weather prediction models of varying types and resolutions. The performance of a number of these models was evaluated against the ETEX European tracer experiments, with no overall modelling strategy emerging from this evaluation as statistically better than another. A comparative discussion of the available model types is given in the next section. The second ETEX experiment proved to be a challenge for all the codes, since they failed to predict the location of the tracer plume accurately. Clearly, therefore, issues still remain as to what constitutes an appropriate strategy for predicting the impact of an accidental release. Sensitivity tests from the ETEX comparisons showed that both input data (e.g. three-dimensional wind-fields and boundary layer descriptions) and model structure, such as the choice of numerical solution method and model resolution, have a significant impact on output predictions. Models are not only used for predictive purposes such as exposure assessment, but are also often used to test our understanding of the underlying science that forms their foundation. Improvements in our understanding are often tested by comparing the models with measured data. If this is the case, then one must be sure that changes made to input data to improve predictions are not compensating for errors that actually arise from the numerical solution of the problem. It follows that models used within the decision-making and environmental management context must use not only the best available scientific knowledge and data, but also the best available numerical techniques. By this we mean techniques that minimize the errors induced by the choice of numerical solution method rather than by the structure of the physical model.
Because of the historical development of environmental predictive models and the investment of time required to implement new computational solution strategies, the most appropriate numerical method is unfortunately not always used. The aim of this work is therefore not to elaborate yet another numerical dispersion simulation code, but to present the practical application of improved numerical methods that may provide useful developments to some existing modelling strategies. This work presents the use of adaptive Eulerian grid simulations for accidental release problems, and it will be shown by comparison with some current numerical computational methods that improvements in accuracy can be made without significant extra computational expense. We present the use of the model for the prediction of hypothetical accidents at the Paks Nuclear Power Plant (NPP) in Central Hungary and include a discussion comparing the present method with more traditional modelling approaches.

The Chernobyl release provided a large impetus for the development of accidental release models, and several inter-comparisons between different model types have been made. The predominant model types are Lagrangian and Eulerian. The former trace air masses, particles with assigned mass, or Gaussian-shaped puffs of pollutants along trajectories determined by the wind-field structures. Lagrangian models have the advantage that they can afford to use high spatial resolution, although they rely on the interpolation of meteorological data. Their potential disadvantages are that in some cases they neglect important physical processes, and they often experience problems when strongly diverging flows lead to uncertainties in long-range trajectories. One example of this is discussed by Baklanov et al. (2002), who modelled the potential transport of radionuclides across Europe and Scandinavia from a hypothetical accident in Northern Russia using an isentropic trajectory model. In their 'winter' case studies the isentropic trajectory model reproduced much of the deposition predicted by a high-resolution meteorological/dispersion model. In some cases, however, additional deposition was predicted by the dispersion model in locations not predicted by the isentropic trajectory model, due to the splitting of atmospheric trajectories.

There are several types of Lagrangian trajectory models. One example of a Gaussian puff model is the DERMA model, which uses a multi-puff diffusion parameterization. A Gaussian profile is assumed for the puff in the horizontal direction, with complete mixing in the vertical direction within the boundary layer and a Gaussian profile above it. The UK MET Office NAME model and the Norwegian SNAP model use a Lagrangian particle formulation, which resolves the trajectories of a large number of particle releases with assigned masses. A disadvantage of this approach is its computational expense, since a large number of particles must be released when compared to the Gaussian puff approach.
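The multi-puff parameterization described above (horizontally Gaussian, completely mixed over the boundary layer depth) can be sketched as follows; the function name and units are illustrative and not taken from DERMA:

```python
import math

def puff_concentration(q_bq, sigma_m, h_mix_m, dx_m, dy_m):
    """Ground-level concentration (Bq/m^3) from a single puff that is
    Gaussian in the horizontal and completely mixed over the boundary
    layer depth h_mix_m.  q_bq: puff activity (Bq); sigma_m: horizontal
    spread (m); dx_m, dy_m: receptor offset from the puff centre (m)."""
    r2 = dx_m**2 + dy_m**2
    # 2D Gaussian in the horizontal, normalized to unit integral
    horizontal = math.exp(-r2 / (2.0 * sigma_m**2)) / (2.0 * math.pi * sigma_m**2)
    # complete vertical mixing spreads the activity uniformly over h_mix_m
    return q_bq * horizontal / h_mix_m
```

Above the boundary layer the vertical profile would revert to Gaussian form; that branch is omitted here for brevity.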

Eulerian models use grid-based methods and have the advantage that they may take into account fully 3D descriptions of the meteorological fields rather than single trajectories. However, when used traditionally with fixed meshes, Eulerian models have difficulty in resolving steep gradients. This causes particular problems for resolving dispersion from a single point source, which creates very large gradients near the release. If a coarse Eulerian mesh is used, then the release is immediately averaged into a large area, which smears out the steep gradients and creates a large amount of numerical diffusion. The result is to underpredict the maximum concentrations within the near-field plume and to overestimate the plume width. Close to the source the problem can be addressed by nesting a finer resolution grid to better resolve steep gradients. This need to resolve both near- and far-field dispersion accurately has been noted previously by, for example, Brandt et al. (1996), who used a combined approach of a Lagrangian mesoscale model coupled with a long-range Eulerian transport model in the development of the DREAM model. The approach requires some kind of interpolation procedure between the two grids. A similar approach was also employed through the point source initialization scheme in the Swedish Eulerian model MATCH.
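The numerical diffusion described above can be illustrated with a deliberately simple first-order upwind scheme (not the scheme used by any of the models discussed): advecting the same pulse on a coarse and a fine fixed grid shows how the coarse mesh smears the peak and widens the plume.

```python
import numpy as np

def advect_upwind(c0, u, dx, dt, steps):
    """First-order upwind advection of a concentration profile on a fixed
    1D grid; the scheme's truncation error acts as numerical diffusion."""
    c = c0.copy()
    courant = u * dt / dx          # must be <= 1 for stability
    for _ in range(steps):
        c[1:] = c[1:] - courant * (c[1:] - c[:-1])
    return c

# the same 10 m wide pulse: one cell on the coarse grid, ten on the fine
coarse = np.zeros(50);  coarse[5] = 1.0
fine   = np.zeros(500); fine[50:60] = 1.0
out_c = advect_upwind(coarse, u=1.0, dx=10.0, dt=5.0, steps=40)   # t = 200 s
out_f = advect_upwind(fine,   u=1.0, dx=1.0,  dt=0.5, steps=400)  # t = 200 s
```

Both runs use the same Courant number (0.5) and transport time, yet the coarse grid retains a far lower peak concentration over a wider spread, which is exactly the underprediction of maxima and overestimation of plume width noted above.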

The MATCH model uses a Lagrangian particle model for the horizontal transport during the first 10 h after the point source release, with vertical transport being described within an Eulerian framework during this time. Quite a large number of particles need to be released per hour to reduce errors in predicting the vertical transport, thus adding to the computational cost. These multi-scale modelling approaches showed significant improvements in predictions close to the release due to the higher resolution of the Lagrangian models. However, as with the nested Eulerian modelling approach, they still suffer from the drawback that the plume is averaged into the larger Eulerian grid as soon as it leaves the near-source region. Brandt et al. (1996) argue that once the plume has left the near-source region the gradients will become sufficiently smooth as to lead to small errors due to numerical diffusion. This ignores, however, the fact that steep gradients may persist at plume edges for large distances from the source due to mesoscale processes. Previous studies have shown that high contamination levels or deposition can exist over small areas several hundred kilometres from the source. Brandt et al. (1996) argue that long-range Eulerian models alone are not suitable for resolving single source releases and demonstrate this through the standard Molenkamp test of a rotating passive puff release in a circular wind-field. More recently, however, it has been shown that adaptive Eulerian methods are very capable of resolving the advection of such single source releases, since they can automatically refine the mesh in regions where steep gradients exist. Moreover, they can be more efficient than, say, nested models, since they refine only where high spatial numerical errors are found and not in regions where grid refinement is not necessary for solution accuracy. This means that for the same computational run-time they may allow higher resolution of the grid in important areas.
Eulerian models also provide an automatic framework for the description of mixing processes and non-linear chemical reaction terms. This study will therefore show that adaptive grid methods provide a consistent approach to coupling near-field and long-range simulations for single source accidental releases.

It is clear that one of the most crucial inputs to any dispersion model for point source releases is the underlying meteorological data, such as wind-field and boundary layer descriptions. The importance of the horizontal resolution of meteorological data has been demonstrated by many of the simulations of the first ETEX experiment (Ryall and Maryon, 1998). The ETEX experiment was an international tracer campaign during which a passive tracer was tracked across Europe over several days from its release in France by monitoring at a large number of meteorological stations. Many numerical simulations were carried out which demonstrated the need for mesoscale weather modelling. Nasstrom and Pace (1998), for example, showed that events resolved by a meteorological model at 45 km but not at 225 km resolution had a significant impact on even the long-range dispersion of the ETEX plume. Sorensen (1998) showed that the double structure of the first ETEX plume was picked up by their model when using mesoscale weather predictions but not when using coarser resolution ECMWF data. The difference was attributed to a mesoscale horizontal anti-cyclonic eddy that was not resolved within the ECMWF data. The importance of resolving the vertical structure of wind speeds and directions was demonstrated by the second ETEX experiment, where evidence of decoupling of an upper cloud of pollution from the boundary layer plume was observed. In this case significant concentrations were measured behind the path of the plume predicted by most of the models tested. This behaviour has been attributed to the vertical lofting of the plume by the passage of a front, followed by the transport of the pollution cloud in upper levels of the atmosphere at a lower wind speed than in the boundary layer, or even perhaps in a different wind direction.
The MET Office NAME model in this case showed a significant amount of mass above the boundary layer for the second ETEX experiment, in contrast to the first one, where particles were well distributed throughout the boundary layer. Previous simulations point to several important issues relating to the development of an accidental release model. Firstly, a mesoscale meteorological model must be used in order to capture small spatial scale effects such as frontal passages and small-scale horizontal eddies.


Secondly, the dispersion model used must be capable of representing, in some way, the possible vertical variations in wind speed and direction, rather than using a single boundary layer description with varying mixing layer height but a single wind-field. Thirdly, the dispersion model must be capable of resolving potentially steep concentration gradients that may be caused either by the single source release or by features that arise from mesoscale meteorological events. This study will therefore discuss the practical application of an adaptive Eulerian dispersion model coupled to data from a high-resolution mesoscale meteorological model.

This model has been developed within a flexible framework and can therefore simulate both single source accidental releases and photochemical air pollution, where the pollution sources are both area and point sources and the chemical transformations are highly non-linear (Lagzi et al., 2004). A slightly modified version of the model is used here to demonstrate its capacity for simulating nuclear dispersion. In the model, the horizontal dispersion of radionuclides is described within an unstructured triangular Eulerian grid framework. The vertical mixing of radionuclides is approximated by a parameterized description of mixing between four layers representing the surface, mixing and reservoir layers and the free troposphere (upper) layer. The horizontal grid is adaptive, i.e. it changes continuously in space and time to minimize the numerical errors. The transformation of radionuclides is described within each grid cell by the equations of nuclear decay. Transient refinement and de-refinement are then further invoked as necessary throughout the model run, according to spatial errors and chosen refinement criteria.

The model domain is represented by an unstructured mesh of triangular elements surrounding each grid point, thus forming a small volume over which the solution is averaged. The model therefore falls into the category of Eulerian models described above, although it does not use the standard Cartesian mesh approach. The use of adaptivity, however, allows the model to overcome the usual problems of the Eulerian approach, since a fine mesh can be used where needed, leading to averaging over small volumes. The term 'unstructured' refers to the fact that each node in the mesh may be surrounded by any number of triangles, whereas in a structured mesh, such as a Cartesian mesh, the number of surrounding grid points would be fixed. The use of an unstructured mesh easily enables the adequate resolution of complex solution structures that may be found following point source releases and which may not fall naturally into Cartesian geometrical descriptions. For example, following release, the plume from a point source may be stretched and folded in space due to advection and turbulent mixing. The unstructured triangular mesh then provides an efficient way of adapting around this complex geometry. The initial unstructured triangular meshes used in the model are created from a geometry description using the Geompack mesh generator.
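A minimal sketch of the unstructured mesh idea (the data layout is illustrative, not the Geompack format): nodes are coordinate pairs, triangles are index triples, and, unlike on a Cartesian mesh, the number of triangles meeting at a node is not fixed.

```python
# Five nodes and three triangles; triangle entries index into `nodes`.
nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 0.0)]
triangles = [(0, 1, 2), (1, 3, 2), (1, 4, 3)]

def node_degree(tris, node):
    """Number of triangles sharing a node: fixed on a structured
    Cartesian mesh, but free to vary on an unstructured one."""
    return sum(1 for t in tris if node in t)
```

Here node 1 is shared by all three triangles while node 0 belongs to only one, which is exactly the freedom that lets the mesh wrap around a stretched and folded plume.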

The complex nature of atmospheric dispersion problems makes the pre-specification of grid densities very difficult, particularly for forecasting codes where the wind-field is not known in advance. Many complex processes can take place far from the source due to advection, mixing, chemical reactions and deposition, affecting the geometry of the plume and spatial concentration profiles. To this end the model presented here utilizes adaptive gridding techniques, which quantitatively evaluate the accuracy of the numerical solution in space and time and then automatically refine or de-refine the mesh where necessary. As described above, the accuracy of Eulerian models tends to become degraded in regions where the concentration of the pollutant changes steeply in space. The use of transient adaptivity allows us to overcome this problem. It is achieved by using a tree-like data structure with a method of refinement based on the regular subdivision of triangles (local h-refinement). Here an original triangle is split into four similar triangles, as shown in Figure 11.7, by connecting the midpoints of the edges.

Figure 11.7: Example of local h-refinement in the model.
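The subdivision of Figure 11.7 can be written down directly; the helper below (an illustrative sketch, not the model's implementation) connects the edge midpoints to produce four similar children:

```python
def refine_triangle(a, b, c):
    """Split a triangle into four similar children by connecting the edge
    midpoints, as in local h-refinement.  a, b, c are (x, y) vertices."""
    mid = lambda p, q: ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    mab, mbc, mca = mid(a, b), mid(b, c), mid(c, a)
    # three corner children plus the central (inverted) child
    return [(a, mab, mca), (mab, b, mbc), (mca, mbc, c), (mab, mbc, mca)]
```

Applied recursively, and recorded in a tree so that children can later be merged back into their parent, this gives the transient refinement and de-refinement described above.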


In order to decide where the mesh should be refined, suitable refinement criteria must be chosen. The technique used here is based on the calculation of spatial error estimates, allowing a certain amount of automation to be applied to the refinement process. This is achieved by first calculating some measure of numerical error in each species over each triangle. A reliable method for determining the numerical error is to examine the difference between the solutions gained using a high-accuracy and a low-accuracy numerical method. Over regions of high spatial gradients in concentration, the difference between high- and low-order solutions will be greater than in regions of relatively smooth solution, and refinement generally takes place in these regions. The use of absolute as well as relative tolerances allows the user to define a species concentration below which high relative accuracy is not required. For the current example, the choice of this tolerance may be driven, for instance, by a minimum concentration of concern from a health point of view. An integer refinement level indicator is calculated from the scaled error above to give the number of times the triangle should be refined or de-refined. The choice of tolerances will therefore reflect, to a certain extent, a balance between desired accuracy and available computational resources, since tighter tolerances usually lead to a higher number of grid cells. It is also possible within the code for the user to control the maximum number of levels of adaptivity, thus limiting the minimum grid size in regions of very steep gradients, i.e. close to the point source. Since the error estimate only becomes available at the end of a time-step, it arrives too late to inform refinement decisions for that step. Methods are therefore used to predict the growth of the spatial error using quadratic interpolants. The spatial error can therefore be used to predict within which regions the grid should be refined or coarsened for the next time-step, in order to give good spatial accuracy with the minimum computational resource.
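One way such a scaled error might be mapped to an integer refinement level indicator is sketched below; the exact scaling used in the model is not given in the text, so the logarithmic mapping and the tolerance form are assumptions of this sketch.

```python
import math

def refinement_level(err, conc, atol, rtol, max_level=5):
    """Map the per-triangle difference between a high- and a low-order
    solution (err) to an integer refinement level.  Below the combined
    tolerance no refinement is requested; atol sets the concentration
    scale below which high relative accuracy is not required."""
    tol = atol + rtol * abs(conc)
    if err <= tol:
        return 0                 # accurate enough: candidate for de-refinement
    # each level of subdivision halves the mesh spacing, so scale the
    # level logarithmically with the error excess (assumed heuristic)
    level = int(math.ceil(math.log2(err / tol)))
    return min(level, max_level)  # user-imposed cap on adaptivity depth
```

The `max_level` cap plays the role of the user-controlled limit on the minimum grid size near the point source.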

The application of adaptive rectangular meshes would also be possible, but would be less effective in terms of the number of nodes required to achieve high levels of adaptivity. Although the data structures resulting from an unstructured mesh are somewhat more complicated than those for a regular Cartesian mesh, problems with hanging nodes at boundaries between refinement regions are avoided. The use of a flexible discretization stencil also allows for an arbitrary degree of refinement, which is more difficult to achieve on structured meshes.

The model domain covers Central Europe, including Hungary, with a domain size of 1550 × 1500 km. The model describes the domain using a Cartesian coordinate system through the stereographic polar projection of the curved surface onto a flat plane. Global coordinates are transformed by projecting the surface of the Earth, from the opposite pole, onto a flat plane located at the North Pole that is perpendicular to the Earth's axis. Due to the orientation of the projection plane, this transformation places the Cartesian origin at the North Pole. The model includes four vertical atmospheric layers: a surface layer, a mixing layer, a reservoir layer and the free troposphere (upper) layer.
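Under the stated convention (projection from the opposite pole onto a plane tangent at the North Pole), the transformation can be sketched as follows; the spherical Earth radius and the orientation of the x-axis are assumed values for this sketch, not taken from the model:

```python
import math

R_EARTH_KM = 6371.0  # assumed spherical Earth radius

def polar_stereographic(lat_deg, lon_deg):
    """Project (lat, lon) from the South Pole onto the plane tangent at
    the North Pole.  The Cartesian origin falls at the North Pole; the
    x-axis orientation (here along 0 deg longitude) is a convention.
    Returns (x, y) in km."""
    colat = math.radians(90.0 - lat_deg)
    # standard stereographic distance from the tangent point
    r = 2.0 * R_EARTH_KM * math.tan(colat / 2.0)
    lon = math.radians(lon_deg)
    return (r * math.sin(lon), -r * math.cos(lon))
```

The North Pole maps to the origin, and points at lower latitudes land progressively further from it, as required by the projection geometry described above.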

The surface layer extends from ground level to 50 m altitude. Above the surface layer is the mixing layer, whose height is determined for the 00.00 and 12.00 UTC values by radiosonde measurements in Budapest. The night-time values are assumed to be identical to the midnight value, while heights between 06.30 and 15.00 were linearly interpolated between the night value and the highest daytime value. The reservoir layer, if it exists, extends from the top of the mixing layer to an altitude of 1000 m. Different wind-fields are represented for each layer, and vertical mixing and deposition are also parameterized. The local wind speed and direction were treated as functions of space and time. These data were obtained from the mesoscale meteorological model ALADIN, which provides data with a time resolution of 6 h and a spatial resolution of 0.10 × 0.15°. The ALADIN model is a hydrostatic, spectral limited-area model using 24 vertical layers, where initial and boundary conditions are determined from larger scale ECMWF data. The model domain for ALADIN covers the Central European region from latitude 43.1° to 52.0° and longitude 10.35° to 25.1°. The data from ALADIN were interpolated using mass-conservative methods to obtain data relevant to a given point in space and time on the adaptive model grid. The surface temperature, cloud coverage, relative humidity and wind field for each layer were determined from the ALADIN database.
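The diurnal mixing-layer height schedule described above can be sketched as follows; the text does not specify the behaviour after 15.00, so holding the daytime maximum from then on, and reaching it exactly at 15.00, are assumptions of this sketch.

```python
def mixing_height(hour_utc, h_night_m, h_daymax_m):
    """Mixing-layer height (m) at a given UTC hour.  Night hours keep the
    midnight radiosonde value; between 06.30 and 15.00 the height ramps
    linearly from the night value to the highest daytime value."""
    if hour_utc < 6.5:
        return h_night_m                      # night: midnight value
    if hour_utc <= 15.0:
        frac = (hour_utc - 6.5) / (15.0 - 6.5)
        return h_night_m + frac * (h_daymax_m - h_night_m)
    return h_daymax_m                         # after 15.00: assumed held
```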

