
2.6 Convergence

A question that arises from time to time, and one that should not be dismissed, is what the convergence properties of the above techniques for creating painterly rendered images are: do they converge to anything at all, and is their convergence guaranteed?

In this chapter we presented stochastic painting methods with certain extensions. At the base of these methods lies a random process, because when strokes are placed on the canvas, their positions are randomly selected. As already mentioned above, the process can be controlled either by a quality goal specified in dB, at which the process should stop, or by a relative error measure, meaning that if a certain change in relative error has not occurred between two consecutive steps, the process should also stop. The combination of these two conditions ensures that the painting process will stop. The time required to reach a stopping condition depends mostly on the size of the model image and on the stroke scales used.
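As a minimal sketch of how such a combined stopping condition could be checked (assuming 8-bit images and PSNR as the dB quality measure; the function name, thresholds, and exact error definition are illustrative assumptions, not the implementation described here):

```python
import numpy as np

def stopping_check(model, canvas, prev_rel_error,
                   quality_goal_db=30.0, min_rel_change=1e-3):
    """Combined stopping test: stop when a PSNR quality goal (in dB) is
    reached, or when the relative error changes too little between two
    consecutive steps. Threshold values are illustrative assumptions."""
    diff = model.astype(np.float64) - canvas.astype(np.float64)
    mse = np.mean(diff ** 2)
    psnr = np.inf if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
    rel_error = np.sqrt(mse) / 255.0
    rel_change = abs(prev_rel_error - rel_error)
    stop = (psnr >= quality_goal_db) or (rel_change < min_rel_change)
    return stop, rel_error
```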


Stroke type | Stroke emulation | Approx. coded size

[25] geometrical strokes with sampled color | scaled geometrical templates | 8 bytes/stroke

[71] antialiased lines clipped at edges | weighted scaled templates aligned to form lines | 8 bytes/stroke/line

[11] no strokes | circular or fluffy templates | 6 bytes/stroke

[31] long curved strokes, gradient based style, orientation | curved strokes replaced by series of smaller rectangular templates | 10 bytes for each control point/stroke

[96] rectangular antialiased strokes | simple scaled rectangular templates | 7 bytes/stroke

[33] short lines | rectangular or line templates | 9 bytes/stroke/line

[7] no strokes, filtering | use filtering to determine grayscale stroke template | 5 bytes/filtered template

[34] rectangular, or user defined binary bitmaps | mostly scaled rectangular stroke series | 7 bytes/stroke

[15] following edge-chains, curved strokes | series of smaller geometrical strokes | 7 bytes/stroke/edge-chain

[24] central axis of segmented areas combined into tokens; Filbert brush simulation along tokens | circular weighted stroke templates | 6 bytes/template

[54] user-defined grayscale (256 levels) templates | user defined scaled weighted templates | 7 bytes/stroke

[53] any shape and size of brush | as above, templates grown into lines with varying thickness | 6 bytes/stroke/line

Table 2.1: Possible similar emulations of the mentioned stroke representations by a common painting model (multiscale, template-based, stochastic), allowing similar compression schemes. The last column shows the approximate number of bytes needed to store a stroke if simulated by the model.
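To make the byte counts in the last column concrete, the following sketch shows one hypothetical way a 7-byte stroke record (position, scale index, quantized orientation, palette color index) could be packed. The field layout is an assumption for illustration only and does not correspond to any of the cited encodings:

```python
import struct

# Hypothetical 7-byte stroke record: 2-byte x, 2-byte y, 1-byte scale index,
# 1-byte quantized orientation (degrees, 0-179), 1-byte palette color index.
STROKE_FMT = ">HHBBB"  # big-endian, 7 bytes in total

def pack_stroke(x, y, scale_idx, orient_deg, color_idx):
    """Pack one stroke into a fixed-size binary record."""
    return struct.pack(STROKE_FMT, x, y, scale_idx, int(orient_deg) % 180, color_idx)

def unpack_stroke(buf):
    """Recover the stroke parameters from a packed record."""
    return struct.unpack(STROKE_FMT, buf)

record = pack_stroke(x=412, y=87, scale_idx=3, orient_deg=135, color_idx=27)
assert len(record) == struct.calcsize(STROKE_FMT) == 7
```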


Chapter 3

Painting with scale-space

In this chapter we will present a new automatic painterly rendering approach which, while partially built upon the previously presented approaches, is a method controlled by automatically extracted scale-space image features. These features are weighted edge and ridge maps extracted by a scale-space approach.

Multiple effect generation variations will be presented: some stochastic, some a combination of stochastic methods with structural features, and some fully sequential. The question could be asked: why should stochastic elements sometimes still be retained? The answer is both simple and complex. During our work we found that retaining a level of randomness in the painting process keeps the result a bit less artificial. That is why we also present methods where, e.g., the stroke position remains random while size and orientation are taken from local image structure information.

In others the stochastic nature can be fully eliminated. Still, these methods are variations on a common grounding, showing that the approach we took in creating them was to allow more freedom in the rendering process; thus fairly similar approaches can still produce different types of images. Optionally retaining randomness in the stroke placement process can give a somewhat more natural feeling, but in the case of other painting parameters, such as stroke size, orientation, color or weighting, randomness can cause disturbing, unnatural effects.

These latter parameters are best taken from image structure, color and texture data, which is what we try to include in the presented methods.

The novelty of this approach, which differentiates it from previous works in the SBR literature and from our own earlier methods, is that it builds upon image structure information and that the control of the whole image generation process is based on these extracted features.


Previous techniques have often incorporated some image analysis results in the rendering process (e.g., following edge directions or determining stroke patterns from the underlying image texture), but our approach was new in the sense that

• it remained an automatic effect generator,

• it can be fully controlled by image structure information (weighted edge and ridge maps),

• it can still be combined with all the previous approaches (stochastic or not, using templates or geometric shapes, using multiple layers or not).

3.1 Rendering with multiscale image features

We have already seen that dropping the trial-and-error nature of stroke orientation and selecting orientations that follow edge directions can increase the efficiency of the rendering. What we propose is to further increase this efficiency by determining both stroke orientation and scale from image structures. For this purpose we use weighted edge and ridge maps of the model images, extracted by a variation of Lindeberg's method [69].

When thinking about the "natural" way of painting, one can hardly say that a painter creates a painting by randomly scattering the same stroke templates over a canvas, although this is a quite usable and well-behaved approximation for generating artificial painterly renderings, as our methods, and others', have already shown. Usually, for example, a painting starts by roughly sketching the contours and then filling the remaining areas. This is the idea we started from when considering the development of a painting process that would stand closer to real-life painting in its concepts, and which we will present in this chapter.

The basic idea behind this approach is that we should allow the painting process to be driven by important image features that carry structural information; here, edges and ridges. The idea is to extract a suitable edge and ridge map from the model image and use them for automatic stroke positioning and for scale and orientation determination. The goal was to obtain a technique which would not be stochastic and would not need multiple stroke layers, but would still remain fully automatic.

This method tries to find main edges by searching for edge positions which produce an accentuated curve in scale-space. Besides this, it provides more than other edge detectors: it weights the resulting edges and ridges, where the weights reflect how relevant the respective curve is.


Given an image $f$, its Gaussian scale-space representation will be denoted by $L:\mathbb{R}^2\times\mathbb{R}_+\to\mathbb{R}$, defined by

$$L((x, y); t) = g((x, y); t) \ast f(x, y)$$

which will be used to generate the scales used in the detection process, with $t=\sigma^2$, where $t$ is the so-called scale parameter and $\sigma$ is the standard deviation. More specifically,

$$g((i, j); t) = \frac{1}{2\pi t}\, e^{-\frac{i^2 + j^2}{2t}}.$$

The derivatives of the $L(\cdot; t)$ representations are obtained by applying Gaussian scale-space derivative operators, e.g. for a two-dimensional image function $f$ and the $x$-directional derivative:

$$L_x((x, y); t) = \big(\partial_x\, g((x, y); t)\big) \ast f(x, y) \qquad (3.4)$$

At each scale level, edges are defined from points at which the gradient magnitude assumes a local maximum in the gradient direction. We use the edge definition that a given point is an edge point if its second order derivative ($L_{xx}$) is zero and its third order derivative ($L_{xxx}$) is negative (for each respective direction, $x$ for horizontal, $y$ for vertical):

$$\begin{cases} L_{xx} = 0 \\ L_{xxx} < 0 \end{cases} \qquad (3.5)$$
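A minimal sketch of how the Gaussian scale-space derivatives of Eq. 3.4 and the edge condition of Eq. 3.5 could be computed, assuming scipy is available; approximating the zero crossing of $L_{xx}$ with a small tolerance is an assumption of this sketch, not the exact implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_derivatives(f, t):
    """Gaussian scale-space derivatives at scale t = sigma^2 (cf. Eq. 3.4).
    Differentiating the smoothed image equals convolving with Gaussian
    derivative kernels, which gaussian_filter's `order` argument provides."""
    sigma = np.sqrt(t)
    d = lambda oy, ox: gaussian_filter(f.astype(np.float64), sigma, order=(oy, ox))
    return {"Lx": d(0, 1), "Ly": d(1, 0),
            "Lxx": d(0, 2), "Lyy": d(2, 0), "Lxy": d(1, 1),
            "Lxxx": d(0, 3), "Lyyy": d(3, 0)}

def edge_points_x(D, tol=1e-3):
    """Horizontal-direction edge test of Eq. 3.5: L_xx ~ 0 and L_xxx < 0;
    the zero crossing is approximated by |L_xx| < tol."""
    return (np.abs(D["Lxx"]) < tol) & (D["Lxxx"] < 0)
```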

We use the following geometrical ridge definition:

$$\begin{cases} L_x = 0 \\ L_{xx} \le 0 \\ |L_{xx}| \ge |L_{yy}| \end{cases} \quad \text{or} \quad \begin{cases} L_y = 0 \\ L_{yy} \le 0 \\ |L_{yy}| \ge |L_{xx}| \end{cases} \qquad (3.6)$$

which is the formulation for two directions (horizontal and vertical). We extended this formulation with two more directions, the two diagonals. Using 10 scales generated by Gaussian convolution with different scale parameters, we calculate the scale-space derivatives. Then we search for all curves connected on the same (maximal) scale and calculate the edge strength, formulated as:


$$E = t^{\gamma}\,\big(L_x^2 + L_y^2\big) \qquad (3.7)$$

and the ridge strength as:

$$R = t\,\big((L_{xx} - L_{yy})^2 + 4L_{xy}^2\big) \qquad (3.8)$$

The $\gamma$ value used above is a normalization factor introduced by Lindeberg in [1], being $1/2$ for edges and $3/4$ for ridges. Then significance measures, the edge/ridge weights, are calculated as a line integral of the weighted derivatives along the actual curve:

$$SC_e = \int E(e)\,de \qquad \text{and} \qquad SC_r = \int R(r)\,dr \qquad (3.9)$$

$e$ and $r$ being the individual edge and ridge curve points, respectively. We store all connected curves of the image and sort them in descending order of their summed weights. Finally, we display the first $N$ (usually around 100) curves on an edge/ridge map. For an example of extracted edges and ridges, see the upper part of Figure 3.2.
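Assuming the derivative maps from the earlier sketch, the normalized strengths of Eqs. 3.7 and 3.8 and the curve significance of Eq. 3.9 could be computed along the lines of the following sketch, where the line integral is approximated by a per-pixel sum along each connected curve (an assumption of this sketch):

```python
import numpy as np

def edge_strength(D, t, gamma=0.5):
    """Gamma-normalized edge strength of Eq. 3.7 (gamma = 1/2 for edges)."""
    return t ** gamma * (D["Lx"] ** 2 + D["Ly"] ** 2)

def ridge_strength(D, t):
    """Ridge strength of Eq. 3.8 (the 3/4 ridge normalization would enter
    as a gamma-dependent power of t)."""
    return t * ((D["Lxx"] - D["Lyy"]) ** 2 + 4.0 * D["Lxy"] ** 2)

def curve_significance(strength, curve):
    """Discrete form of Eq. 3.9: sum of strength values along the
    connected curve, approximating the line integral."""
    return float(sum(strength[y, x] for (y, x) in curve))

def top_n_curves(curves, strength, n=100):
    """Sort connected curves by their summed weight and keep the first N."""
    scored = sorted(curves, key=lambda c: curve_significance(strength, c), reverse=True)
    return scored[:n]
```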

The edge map is used to determine the orientations of placed strokes, so that strokes follow nearby edge directions.
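As a small sketch of this step, assuming the edge map has been flattened into an array of edge point coordinates with precomputed tangent angles (both of which are assumptions of this illustration):

```python
import numpy as np

def stroke_orientation(pos, edge_points, edge_tangents):
    """Orient a stroke along the tangent of the nearest edge point.
    edge_points: (M, 2) array of (y, x) coordinates of edge map points;
    edge_tangents: (M,) array of the corresponding tangent angles in radians."""
    d2 = np.sum((edge_points - np.asarray(pos, dtype=np.float64)) ** 2, axis=1)
    return float(edge_tangents[int(np.argmin(d2))])
```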

The ridge map is used for obtaining the stroke size (scale) automatically (simple circular shaped strokes are used). We take the maximal and minimal ridge weight on the ridge map and project the available stroke sizes over this weight interval in such a way that the lowest weight corresponds to the smallest stroke scale and the highest weight to the largest stroke scale. During the painting process, when a stroke needs to be placed, its scale is determined by searching for the nearest 8 ridges and calculating the stroke scale by weighting the scales associated with the found ridge weights by their distances from the current stroke position. The actual stroke scale will be calculated as

$S_o =$

where the $S_{a_i}$ are the weighted distances of the nearest 8 ridge points, the $w$ values are the maximal, minimal and actual weights, respectively, obtained from the extracted ridge map, and the $d_i$ values are the real distances of the ridge points in pixels.
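Since the exact weighting formula for $S_o$ is not reproduced above, the following is only an illustrative inverse-distance weighting in the same spirit: the scales mapped from the ridge weights of the nearest 8 ridge points are averaged with weights decreasing with pixel distance. Both the linear mapping and the weighting scheme are assumptions of this sketch:

```python
import numpy as np

def stroke_scale(pos, ridge_points, ridge_weights, scales, k=8):
    """Illustrative stroke-scale selection from the k nearest ridge points.
    ridge_points: (M, 2) array of (y, x) ridge coordinates;
    ridge_weights: (M,) ridge weights from the extracted ridge map;
    scales: 1-D array of available stroke scales, smallest to largest."""
    scales = np.asarray(scales, dtype=np.float64)
    w_min, w_max = ridge_weights.min(), ridge_weights.max()
    d = np.sqrt(np.sum((ridge_points - np.asarray(pos, dtype=np.float64)) ** 2, axis=1))
    idx = np.argsort(d)[:k]
    # Project each ridge weight linearly onto the available stroke scales.
    rel = (ridge_weights[idx] - w_min) / max(w_max - w_min, 1e-9)
    mapped = scales[np.clip((rel * (len(scales) - 1)).round().astype(int),
                            0, len(scales) - 1)]
    inv_d = 1.0 / np.maximum(d[idx], 1.0)  # pixel distances, avoid division by zero
    return float(np.sum(mapped * inv_d) / np.sum(inv_d))
```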