
Modeling Geometry for Linear Feature Generalization


Corinne Plazanet Institut Géographique National - Service de la recherche, 2 Avenue Pasteur, 94160 St Mandé, France

Abstract
This paper highlights the importance of modeling the geometry of linear features in order to generalize them. It proposes a constructive approach by segmenting lines and qualifying sections detected according to different criteria at several levels ("hierarchical" process). The aim of such an approach is clearly to guide the choices of adequate sequences of generalization operations and algorithms, and also to provide basic tools for assessing the quality of the results of generalization alternatives in terms of shape maintenance.

1 Introduction
Cartographic generalization schematically consists in selecting the features to be maintained at the targeted scale, simplifying non-relevant characteristics, enhancing significant shapes, displacing features without defacing global and local shapes, and finally harmonizing the final aspect. This paper addresses issues related to linear feature generalization (especially simplification and enhancement of road features) considered independently from their cartographic context. A primary requirement for cartographic generalization, as well as for data generalization, is to preserve as far as possible geometrical properties and spatial and semantic relations when coming to a coarser resolution. For cartographic generalization, this change in resolution is due to scale reduction, graphical limitations and symbolization, while it is motivated by data reduction in the case of data generalization. So far, according to many studies such as (Beard, 91), (McMaster, 89) and (Buttenfield, 91), the existing automated tools for linear feature generalization are not efficient enough because of:
- numerous constraints to take into account (see section 1.1);
- variations in the quality of their output according to the characteristics of the line;
- the subjective aspect of generalization (e.g. how to decide whether to omit or maintain, simplify or enhance a local shape within a linear feature).

These studies have strengthened our conviction that generalization first requires that the object under scrutiny be segmented, so that each of its local characters can be processed by adequate algorithms. This point of view converges with some approaches in other fields of generalization: e.g. R. Weibel (Weibel, 89), on relief generalization, proposed a segmentation and characterization stage based on geomorphological principles before applying specific algorithms. From our point of view, we first need to segment the linear feature according to geometrical criteria. We therefore propose to construct a hierarchical model, based on recursive segmentation and geometrical qualification stages. Such a descriptive process should provide us with basic and complementary information valuable for guiding appropriate global and local generalization decisions, facilitating the choice of the most suitable generalization algorithms and tolerance values on segmented line sections.

From our point of view, three kinds of issues need to be tackled in order to achieve good generalization results for linear features:
- the elicitation of geometric knowledge on linear features;
- the formalization of rules for deciding on generalization operations and algorithms;
- the assessment of the quality of generalization operations in terms of shape maintenance.
This paper focuses on the first point, while the conclusion addresses the second and third points.

1.1 Cartographic Constraints

Cartographers, when generalizing manually, have a global and continuous perception of each line, clearly applying cartographic knowledge that has never been formalized as rules (Plazanet et al, 94). Road network generalization, especially in mountainous areas, raises spatial problems solvable by simplification or enhancement. To eliminate a hairpin bend from a winding road, (French) cartographers generally proceed by maintaining the first and last bends, choosing the one bend to eliminate (often the smallest) and emphasizing the others (Weger, 93). What kind of knowledge guides the cartographer's decisions? Can we produce objective rules in order to obtain consistent generalization? (McMaster, 89). So far, some cartographic rules which guide intrinsic linear feature generalization have been identified from looking at the cartographer's own procedures, such as:
- preserve intrinsic topology,
- take into account symbol width,
- take into account the semantic nature of the features,
- focus on the message of each particular feature,
- respect geometric shapes.

The cartographic environment of linear shapes and the notion of scale are essential, informing subjective decisions by the cartographer. One of the challenges in automated generalization consists in devising approaches which account for shape generalization with respect to scale and shape environment.

1.2 Operations and Algorithms

Many algorithms for linear feature generalization, especially line simplification, have been developed, such as the famous Douglas algorithm. Yet many studies such as (Beard, 91), (Jenks, 89), (Buttenfield, 91) and (McMaster, 89) have shown how difficult it is to apply them to a variety of linear features. According to (Herbert et al, 92), "the use of line simplification algorithms on a data set within an existing GIS can be a very uncertain process. Effectively, a user is forced into a position of experimenting with different tolerance levels to try and get the results he/she wants, repeating a 'generalize ... display' cycle until he/she achieves his/her objective". Furthermore, the quality of the result seems to vary according to the line characteristics (Buttenfield, 91). Also, a complex sequence of generalization operations may be needed in order to generalize a road (McMaster, 89).

Besides, linear generalization encompasses more than simplification operations. Shape emphasizing or the elimination of bends within a series of consecutive bends of the same shape (the schematization operation) may also be required (see the example at the end of the present paper). The problem clearly lies in choosing the sequences of algorithms and tolerance values to apply to lines according to their geometrical characteristics. Furthermore, these geometrical characteristics in many cases vary along the line. We think it is necessary to segment lines into sections sharing similar characteristics in order to choose the most suitable operation or sequence of operations of generalization (i.e. simplification, caricature, schematization) and corresponding sequences of algorithms for each line section.

2 A Descriptive Model of Linear Feature Geometry
Our approach is in fact a hierarchical segmentation and qualification process for linear features, analogous to Ballard's strip trees (Ballard, 81) and to the method proposed in (Buttenfield, 91) where series of measures are computed for each line. B. Buttenfield (Buttenfield, 87) already proposed a very similar process for the classification of cartographic lines, also dealing with segmentation (based on the assumption that "a line is composed of a trend line and details that bifurcate from it") and measurements. Our approach is different in that we do not consider local changes of direction, but variations according to shape criteria (at a 'more global' level).

2.1 The Sinuosity Notion

According to McMaster (McMaster, 93): "Individuals seem to judge the shape of the line on two criteria: the directionality of the line and the basic sinuosity of the line". We think that sinuosity is a qualifier of greater importance to describe lines, and particularly roads. A line that contains many changes of direction is generally qualified as sinuous in a basic sense. Sinuosity may also be qualified according to the semantic type of objects. For instance a river trace is quite often irregular and rough, while a road may follow "zigzags", hairpins and closed bends (Plazanet, 95). Road features are human constructions, designed from regular mathematical curves. As mentioned above, in France and in some other countries, modern roads are often made up of straight lines, circle arcs and clothoids. The directionality may be seen as a mean line that B. Buttenfield (Buttenfield, 87) calls the trend line: "Conceptually, the trend line is the smoothest possible approximation of the cartography, a generalization to the nth degree".

Figure 1: Trend line (smoothing) of an IGN BDCarto® line

At the global level, the sinuosity may be qualified by the sinuosity of the trend line and the sinuosity of the original line, considered here very roughly (see figure 1 above). At the local level, we may qualify the sinuosity of lines by looking locally, within homogeneous sections, at bends forming spirals, hairpins and so on. Such portions of curves can be described according to:
- shape (kind of curvature: sine curves, spirals, rectangles),
- amplitude,
- base,
- symmetry,
- direction,
- size (linked with the notion of resolution).

2.2 Descriptive Model and Process

Our assumption is that, in fact, different levels of perception imply different shapes of a line, from the most global level down to the most local. At the global level, we can look at the entire line and appreciate the shape of the trend line, while individual bend shapes appear at a more local level of perception. In this paper, we use the following terminology for global to local analysis levels (see figure 2 below):
- the global level, corresponding to the entire line,
- the intermediate levels, corresponding to a line section,
- the local level, corresponding to a bend.

Figure 2: An example of natural line segmentation (IGN BDCarto® road)

Our perception of a line is generally based upon the different ways we may make explicit different criteria of sinuosity at different levels of perception (shape of the trend line, local shapes of bends, differences in shapes i.e. heterogeneity, density of bends, complexity). In our approach, homogeneity is the first criterion adopted to describe lines. A line is called non-homogeneous when it contains sections that seem different according to sinuosity. At each level of perception, the line (or line section) is considered from the point of view of homogeneity, and then possibly segmented according to this intuitive criterion. Basically, the process therefore consists in examining line sections recursively (starting from the entire line) and, at each step, either segmenting or further analyzing the line section. The test on homogeneity dictates further computation:
- if the line is non-homogeneous, it is segmented at a lower level;
- otherwise, a detailed classification is determined through closer analysis.

As a result, the descriptive model which is built is a description tree corresponding to a hierarchical analysis. The nodes of the tree correspond to line sections and carry descriptive attributes, and the transition(s) between a node and its child(ren) may be segmentation or analysis step(s). The root of the tree corresponds to the original line and gets as attribute a rough sinuosity qualification, while the leaves may correspond to highly homogeneous sections, or individual details¹, and carry a shape class attribute.

The lower a node is in the tree, the more detailed the analysis corresponding to it. At the root level, homogeneity is considered only very roughly in order to determine whether the line must be segmented into significantly different sections. Conversely, a leaf corresponding to an individual detail such as a bend can be qualified according to the criteria described above (in section 2.1). Once the description tree is built, a bottom-up analysis is required in order to check the coherence of the processed information between the different nodes (e.g. to avoid further segmentation when each segmented section belongs to the same class at a given level). An example of how the description tree may be used for generalization is presented at the end of the present paper.
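The description tree can be sketched as a small recursive data structure. The field names below (sinuosity, shape_class, homogeneous) are illustrative, not taken from the paper's implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SectionNode:
    """One node of the description tree (field names are illustrative)."""
    name: str
    sinuosity: Optional[str] = None     # rough qualification, e.g. at the root
    shape_class: Optional[str] = None   # detailed shape class, e.g. at a leaf
    homogeneous: bool = True
    children: List["SectionNode"] = field(default_factory=list)

def depth(node: SectionNode) -> int:
    # deeper nodes carry a more detailed analysis
    return 1 + max((depth(c) for c in node.children), default=0)

def leaves(node: SectionNode):
    # leaves are highly homogeneous sections or individual details
    if not node.children:
        yield node
    else:
        for child in node.children:
            yield from leaves(child)
```

A bottom-up coherence check (e.g. merging children that all share the same class) would then walk this structure from the leaves to the root.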

3 Computational Methods for the Description
The segmentation and qualification stages involve different processes according to the varying levels of analysis. Techniques for geometrical measurements and for the determination of critical points have been investigated, which aim at providing robust representations. A large number of the tools tested, for quantitative measurements as well as qualitative descriptions, are based on the location of characteristic points. Some experimental results are presented in section 4.

¹ For example, a single bend defined as a constant-sign curvature section (for a given level of analysis) delimited by two inflection points.

3.1 The Use of Characteristic Points

The line segmentation and geometry description stages are based on characteristic point detection. As a rule in the literature, be it psychology, computer vision or, more recently, cartography, shape detection is based on the detection of characteristic points (Attneave, 54), (Hoffman et al, 82), (Thapa, 89), (Affholder, 93), (Plazanet et al, 96). In Euclidean geometry, characteristic points are curvature extrema: vertices and inflection points. Actually, as H. Freeman said, this definition is too restrictive: "We shall expand the concept of critical points to include also discontinuities in curvature, end points, intersections (junctions) and points of tangency" (Freeman, 78). According to our needs (no discontinuities in curvature for the considered features), we define the following points as characteristic points:
- start and end points,
- minima of curvature, i.e. inflection points,
- maxima of curvature, i.e. vertices,
- critical points (which are a sub-set of inflection points).

In the following, among the characteristic points, the paper essentially focuses on the detection of the main inflection points. In particular, the critical points are used to divide a line into sections. The other points are used for the precise qualification of bends (see sections 3.3 and 5).

The Inflection Point Detection

Because of the acquisition process, lines frequently contain spurious micro-inflections which are not characteristic. Moreover, according to the level of analysis, only the main inflection points are of interest. A process has been implemented and tested to detect significant inflection points from a smoothed line (Plazanet et al, 96), following many authors in computer vision, and notably (Babaud et al, 86), who show that the Gaussian filter gσ:

gσ(k) = (1 / (σ√(2π))) · e^(−k² / (2σ²))

is "the only kernel for which local maxima always increase and local minima always decrease as the bandwidth of the filter is increased". The convolution of the curve with the filter is computed in a discrete space, where the line has first been re-sampled. Every point (x, y) of the re-sampled line has a homologous point (x_lisse, y_lisse) on the smoothed line such that:
x_lisse(i) = Σ (k = −4σ .. +4σ) x(i − k) · gσ(k),    y_lisse(i) = Σ (k = −4σ .. +4σ) y(i − k) · gσ(k)
Studying the variation of the vector (cross) product along the smoothed line allows for the detection of characteristic inflection points: around these points, there is a significant change in the value of the cross product. The value of σ (the number of neighbouring points taken into account to compute an average position) characterizes the smoothing scale. The higher the σ value, the stronger the smoothing. Thus, the choice of the σ value depends on the level of analysis.
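As a minimal sketch of this smoothing-and-detection step (assuming a uniformly resampled polyline and clamped borders, a detail the text does not specify), the Gaussian kernel, the discrete convolution and the cross-product sign test could look like:

```python
import math

def gaussian_kernel(sigma):
    # discrete Gaussian g_sigma(k), truncated at |k| <= 4*sigma as in the text,
    # then renormalized so that each smoothed point is a weighted average
    half = int(4 * sigma)
    g = [math.exp(-k * k / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))
         for k in range(-half, half + 1)]
    total = sum(g)
    return [v / total for v in g]

def smooth(points, sigma):
    """Convolve a uniformly resampled polyline with the Gaussian filter."""
    g = gaussian_kernel(sigma)
    half = len(g) // 2
    n = len(points)
    out = []
    for i in range(n):
        # clamp indices at the borders (an assumption; the text is silent on this)
        x = sum(points[min(max(i - k, 0), n - 1)][0] * g[k + half]
                for k in range(-half, half + 1))
        y = sum(points[min(max(i - k, 0), n - 1)][1] * g[k + half]
                for k in range(-half, half + 1))
        out.append((x, y))
    return out

def inflection_indices(points):
    """Indices where the cross product of successive segment vectors changes sign."""
    idx, prev = [], 0.0
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if prev != 0.0 and cross != 0.0 and (cross > 0) != (prev > 0):
            idx.append(i)
        if cross != 0.0:
            prev = cross
    return idx
```

On a sampled sine arc the only detected inflection falls near x = π; raising σ before running the detector removes micro-inflections first, which is the tuning described in the text.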

Progression in the analysis is ensured by the tuning of inflection point detection: the lower the level in the tree, the lower the value of σ. This way, a more and more detailed series of inflection points is used as the basis for segmentation and sinuosity classification. We are currently using an empirical rule² for a first level of analysis. More generally, such a tolerance value should be considered as being proportional to the scale, and probably also to the line sinuosity.

3.2 Segmentation: a Process for a First Level

D. D. Hoffman and W. A. Richards (Hoffman et al, 82) suggest that curves should be cut at the minima of curvature, which are particular characteristic points. In our process, critical points, i.e. points at which a line is segmented, are considered as being a sub-set of inflection points. The shape criteria listed above (section 2.1) are obviously difficult to model, as they remain rather fuzzy and may be considered from different points of view. As a first attempt at a rough-level segmentation, we consider the following homogeneity definition, based on the variation of the distances between consecutive inflection points: let IP(λ) be the set of inflection points detected for a level of analysis λ; let M(IP(λ)) be the mean of the distances d(IPi, IPi+1) between consecutive inflection points; let Δ(IP(λ)) be the sequence of deviations Di = d(IPi, IPi+1) − M(IP(λ)). In fact, only the signs Si of these Di are retained here:
Figure 3: A rough segmentation stage. Each inflection point IPi is labelled + where d(IPi−1, IPi) − M > 0 and − where d(IPi−1, IPi) − M < 0; IPi is retained as a critical point where this sign changes.

Then the inflection points IPi such that Si−2 = Si−1 ≠ Si define potential locations for the segmentation of lines. These points are retained as critical points (see figure 3 above). A line whose inter-distances between inflection points vary significantly is qualified as non-homogeneous. Such a definition is applicable more particularly when lines are quite non-homogeneous. Thus we are considering adding a test for the homogeneity of lines (by tuning an indicator of homogeneity) in order to process the segmentation stage only when necessary. Besides, it is clear that homogeneity encompasses more criteria than just the inter-distances between inflection points. Nevertheless, the above-described definition is applicable for the first segmentation stage, particularly thanks to the Gaussian filtering that is used, which retains the inflection points within highly sinuous sections (that are highly resistant to smoothing) and quickly eliminates details in other, less sinuous sections (see figure 4 below).

² σ = k*length(line)*ln(k*length(line)), where ln is the natural logarithm and k is a constant (1/100 for our data set presented below)
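The sign-sequence rule above can be sketched as follows; for illustration, the function is assumed to receive the list of inter-distances d(IPi, IPi+1) directly:

```python
def critical_points(distances):
    """Indices i of inflection points IP_i where the sign S of the deviation
    d(IP_i, IP_i+1) - M changes after two equal signs (S_i-2 = S_i-1 != S_i)."""
    m = sum(distances) / len(distances)
    signs = [1 if d - m > 0 else -1 for d in distances]
    crit = []
    for i in range(2, len(signs)):
        if signs[i - 2] == signs[i - 1] != signs[i]:
            crit.append(i)  # IP_i is retained as a critical point
    return crit
```

With inter-distances [10, 11, 12, 2, 1, 2] the rule cuts the line at the fourth inflection point, where long bends give way to short ones; on constant inter-distances it cuts nowhere.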

Figure 4: An example of significant difference in the detection of inflection points according to the sinuosity type

To ensure further segmentation at a lower level, we are considering adding complementary criteria, used in the same way as the inter-distances between inflection points.

3.3 Qualification

F. Attneave (Attneave, 54) has shown that "information is further concentrated at points where a contour changes direction most rapidly", i.e. vertices. The qualification stages in our process are in great part based on quantitative measurements and qualitative representations starting from characteristic point locations.

Classification of First Level Segmented Sections

Once original lines are segmented into homogeneous sections (i.e. between two critical points) at a more or less rough level of analysis λ, a set of measurements can be calculated based on the location of characteristic points on these sections, which can then be classified into shape classes. Actually, such a classification may take into account not only a sinuosity criterion, but other criteria as well. But the measurements are chosen so as to represent first facets of sinuosity. For every bend, defined as a fraction of curve between two consecutive inflection points (see figure 5 below), the following can be calculated:
Figure 5: Measurements on a Bend (inflection points I1 and I2, vertex S, height h, base, curve length l, tangent angles φ1 and φ2)

- the height h,
- base: the Euclidean distance between the inflection points,
- l: the curve length between the inflection points I1 and I2,
- the angle between φ1 and φ2.

These first measurements allow us to define line classes at a rough level. We are considering the addition of more complex measurements in order to provide further details in the description:
- the area between the curve and the segment (I1, I2),
- the area of the triangle (S, I1, I2),
- the projection of the vertex S on the segment (I1, I2).
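A sketch of the basic measurements of figure 5, with the vertex S approximated (as in the experiment of section 4.1) as the point farthest from the chord (I1, I2):

```python
import math

def bend_measures(points):
    """Basic measurements on a bend given as a point list from I1 to I2.
    The vertex S is approximated as the point farthest from the chord (I1, I2)."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    base = math.hypot(x2 - x1, y2 - y1)  # Euclidean distance between I1 and I2
    length = sum(math.hypot(points[i + 1][0] - points[i][0],
                            points[i + 1][1] - points[i][1])
                 for i in range(len(points) - 1))  # curve length l from I1 to I2

    def dist_to_chord(p):
        # perpendicular distance from p to the line through I1 and I2
        return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / base

    height = max(dist_to_chord(p) for p in points)  # h, reached at the vertex S
    return {"base": base, "length": length, "height": height}
```

On a sampled half-circle of radius 1 this returns base ≈ 2, length ≈ π and height ≈ 1; the ratios length/base and height/base are the two median measurements used in section 4.2.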

For a section, we may calculate the mean (or median) value, variance, and minimum and maximum values of each of these measurements. Some more measurements, based on the polyline connecting the inflection points, seem to be of some interest, involving:
- the total number of bends,
- the total absolute angles along the line connecting the inflection points,
- the ratio of the curve length of the line connecting the inflection points to the curve length of the original line.

Many measurements are possible from our hierarchical representation, or from those previously proposed ((McMaster, 86), (Buttenfield, 91), (Plazanet, 95)) that can be reused or refined for our representation, such as, for instance, the area of the bounding rectangle of each segmented section. The difficulty lies in choosing sub-sets appropriate for the different analysis tasks and analysis levels, so that every sub-set contains normalized and non-correlated (or loosely correlated) measurements. Measurements on simple bends such as in figure 5 may correspond to the simple criteria described in section 2.1. But we acknowledge the fact that the sets of measurements considered for evaluating the criteria do not provide a comprehensive view of them. These first measurements bring out a definition of sinuosity criteria as described in section 2.1. Nevertheless, they do not allow us to precisely qualify bend shapes. For instance, the ratio between the area delimited by the curve and the segment (I1, I2) and the area of the triangle (S, I1, I2) gives an indication of the complexity of the bend shape, but does not qualify it. A qualitative description method is proposed below.

Future Research: A Qualitative Description of Bend Shape

Very sinuous sections, especially mountain roads, may correspond to series of complex bend shapes that need further qualitative description, in addition to the basic, rougher classification. The bends in the example in figure 6 below are complex, i.e. they contain more inflection points than the delimiting ones detected at a high level of analysis. The inflection points delimiting a bend are the ones which are highly resistant to the Gaussian smoothing. A hierarchy of the importance of the inflection points of a bend is required so that bends can be described in a more detailed manner.

Figure 6: An Example of Complex Shape Bends (characteristic points at levels 1 and 2)

Thus a complex bend could be described as a combination of simple bends. Existing representations of sections of curves, such as "codon" descriptors in computer vision (see (Hoffman et al, 82), (McWorth et al, 92), (Rosin, 93), (Ueda et al, 90)), seem to propose far too restricted classes of curves. Due to the complexity of geographic feature shapes, we need to extend the "codon" notion. Having pre-defined bend shape classes of codon descriptors (as in the first attempt proposed in figure 7), matching the codon descriptor of a particular bend against them may provide a qualitative structured description of bends.

Figure 7: Examples of Different Kinds of Bends (simple bend patterns, symmetric and non-symmetric, and examples of combinations of simple patterns)

The main difficulty for an implementation is likely to remain the determination of the corresponding characteristic points at the different levels of smoothing (required in order to build the hierarchy of the importance of the inflection points): inflection points are not located at the same positions when applying the different smoothings necessary to identify them.

4 Experimental Results
A first series of experiments has been conducted on a set of lines taken from the IGN BDCarto® data base, based on the measurements described above, and using a classical cluster analysis software package. The first objective consists in splitting lines into straight / sinuous / strongly sinuous sections.

4.1 Characteristic Points Detection and Segmentation

Some examples of inflection point and critical point detection on a series of 5 m resolution BDCarto® roads (scale 1:50,000) are presented in figure 8 below. For this experiment, the vertex between two inflection points has simply been estimated as the point which is the farthest from the anchor line defined by the two inflection points (see figure 5 above). The tolerance value σ of the smoothing is determined automatically for each line (see the empirical rule in section 3.2). First of all, these experiments show that even with a rough homogeneity criterion, the variations in distances between inflection points are sufficient to determine a first-level set of homogeneous sections.

4.2 Clustering Homogeneous Sections

An experiment was then conducted in order to qualify the sinuosity of each segmented section by classifying it. For this first experiment, we chose a small set of measurements based on the simple measurements h, l and base (see figure 5 in section 3.3). For each segmented section, we calculated the median values of:
- the ratio l/base between the curve length l from I1 to I2 and the Euclidean distance base,
- the ratio h/base between the height h and the Euclidean distance base.
A set of 40 lines has been segmented and classified using the S-PLUS® cluster analysis package. The results show some deficiencies but, still, we come to a relevant first level of classification into barely sinuous / sinuous / highly sinuous sections (see figure 8 below).
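As a toy stand-in for the cluster analysis (the paper uses S-PLUS clustering; the thresholds below are hypothetical), the two median ratios could be mapped to the three first-level classes like this:

```python
def classify_section(l_over_base, h_over_base):
    """Toy stand-in for the cluster analysis: map the two median ratios of a
    section to a first-level sinuosity class. Thresholds are hypothetical."""
    if l_over_base < 1.05 and h_over_base < 0.15:
        return "barely sinuous"   # the curve is barely longer than its chords
    if l_over_base > 1.5 or h_over_base > 0.5:
        return "highly sinuous"   # long, deep bends
    return "sinuous"
```

A clustering approach, unlike fixed thresholds, lets the class boundaries adapt to the data set, which is presumably why the authors preferred it.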

Figure 8: Examples of results of automatic segmentation of BDCarto® roads (lines 4491, 4069, 3973, 4376 and 3104; sections S1 to S4; classes 1 to 3), with the classification of some of them. Small dots are the detected inflection points; big ones are critical points.

The main limitation is that different shapes (from a perceptual point of view) may produce similar values (see in figure 8 section S2, top left, vs. section S4, bottom right, of line 4376). A closer analysis of each section is required in order to differentiate their geometrical characteristics (see the qualitative description in section 3.3 as a possible solution). Yet another problem is related to the local positions of the inflection points, and thus of the critical points, most notably with complex bend shapes. A local refinement of these positions is required so that they can be used reliably and predictably within both the segmentation and measurement stages.

5 Potential Outcomes
The resulting geometrical description is a tree, which first needs to be validated by means of a bottom-up analysis. From a theoretical point of view, we can consider that this tree provides us with the knowledge base from which we will be able to choose adequate generalization operations. In order to guide such choices, a rule base is now needed. Among the factors that we have to take into account are the initial and targeted scales, the targeted map symbolization and the geometry (quantitative measurements and qualitative description). For instance, looking at two series of bends that perceptually seem to share similar geometrical characteristics, it has to be remembered that if their symbol widths or their semantic natures are different, the generalization operations can be different. Moreover, if two series of bends belong to the same geometrical class but their sizes are very different, different generalizations might be required: little details might be eliminated, while bigger ones are emphasized. How can we take such problems of size into account? Our segmentation and analysis process is actually not linked with generalization algorithms, but rather with human visual perception. Will it be possible, through formalized rules, to directly link the geometrical classes to the generalization operations, or even to the algorithms? In order to fully address these questions, further studies are required:

a) Assessing the effects of generalization algorithms and validating the quality of generalization alternatives

We need computational methods for assessing the effects of generalization algorithms on the geometry of linear features (shape degradation, topological degradation, quantitative measurements, comparison to manual generalization). Besides, a series of tests is required for each geometrical type in order to correctly determine which algorithms or sequences of algorithms are suited to each operation.
In terms of shape maintenance, a potential outcome of the description tree might be to compare two versions of a line in terms of shapes: it should be easier to compare the description trees than the raw geometries of the line and its generalized version. The results at the first level correctly distribute line sections into 'rather straight', 'smoothly sinuous' and 'significantly sinuous' classes, so that we know where to faithfully apply the Douglas algorithm or a smoothing using Gaussian filtering, with tolerance values related to the sinuosity type. It is clear that a problem arises in handling curves at the critical points when using different algorithms or sequences of algorithms on a segmented line.

b) Formalizing rules for generalization decisions (operations and / or algorithms)

In addition, there is a need to formalize cartographic rules from geometrical knowledge. Thus, in order to extract the necessary information from the line description at several levels (Plazanet, 95), further analysis of the tree is required, such as:
- shape levels (relative levels of shapes within shapes: (Buttenfield, 87)),
- shape environment (relative positions of shapes within shapes, how shapes fit into the more global shapes of upper levels),
- shape or bend repetitions,
- intrinsic conflict areas of the line.

Some other cartographic rules clearly intervene in generalization decisions related to map specifications: symbolization, scale reduction factor, theme of the map. We need to formalize this knowledge by means of rules. A promising way to acquire generalization procedural knowledge (following the terminology of (Armstrong, 91)) by means of techniques of computational intelligence is presented in (Weibel et al, 95). We may also formalize simple rules such as, for instance, for a particular bend or series of bends delimited by two inflection points (example of section S42 in figure 9):
- rule 1: if the distance between these points is smaller than the symbolization width, then an internal conflict is detected;
- rule 2: if these points delimit an isolated hairpin bend inside a globally straight section, then this bend is a salient bend which must be maintained;
- rule 3: a salient bend which gives rise to an internal conflict must be amplified, if it is a road.
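Rules 1 to 3 can be sketched as a small decision function; the attribute names (base, isolated_hairpin, theme) and the returned operation labels are hypothetical:

```python
def choose_operation(bend, symbol_width):
    """Sketch of rules 1-3 for one bend. The attribute names (base,
    isolated_hairpin, theme) and operation labels are hypothetical."""
    conflict = bend["base"] < symbol_width         # rule 1: internal conflict
    salient = bend.get("isolated_hairpin", False)  # rule 2: salient bend
    if salient and conflict and bend.get("theme") == "road":
        return "enhance"                           # rule 3: amplify it
    if salient:
        return "maintain"                          # rule 2: keep it as is
    if conflict:
        return "simplify"
    return "keep"
```

Under these assumptions, an isolated hairpin on a road whose base is narrower than the symbol width falls into the "enhance" case, consistent with the Enhancement operation chosen for section S42 in figure 10.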

An example is proposed below to show how the description tree can be used in order to choose sequences of generalization operations and adequate algorithms. Each node of the tree carries a descriptor composed of qualitative criteria values and measurements, a section class and a bend shape class code, both of which may be empty (see figure 9 below).

Section classes: SC1: very sinuous; SC2: barely sinuous; SC3: nearly straight. Bend shape classes: BC1, BC2.

Figure 9: an example of a descriptive tree of a BDCarto® road (Obj-class: road A, Obj-id: 4376; root qualified SC1, non-homogeneous; children S1 to S4 carrying section classes, bend shape classes (BC1, BC2 with 6 bends), homogeneity flags and measures (base, height, length); the non-homogeneous section S4 is further segmented into S41 (SC3), S42 (SC1, BC1), S43 (SC2, BC1) and S44 (SC2, BC1), all homogeneous)

Now figure 10 below illustrates the use of this geometrical knowledge in cartographic generalization decisions where map specifications and knowledge on generalization operations and sequences of algorithms are integrated.

The decision process combines four inputs:

- geometrical knowledge: the description tree of figure 9;
- structural knowledge: rule 1, rule 2, rule 3, ...;
- map specifications: symbolization for road A, scale reduction, theme of the map;
- knowledge on generalization operations and algorithms, held in a rule base (e.g. Douglas on SC3 sections, cubic arcs on BC1 shapes, ...).

These lead to choices of adequate local generalization. One possible solution:

SECTION   OPERATIONS                    ALGORITHMS
S1        Schematization                ???
S2        Simplification + Smoothing    Douglas (80 m) + Gaussian smoothing
S3        Schematization                ???
S41       Smoothing                     Gaussian smoothing ?
S42       Enhancement                   ???
S43       Simplification + Smoothing    Douglas + Gaussian smoothing
S44       Smoothing                     Gaussian smoothing ?

An example of a manual realization applies the same operations section by section (S1 and S3: schematization; S2 and S43: simplification + smoothing; S41 and S44: smoothing; S42: enhancement).

Figure 10: Example of use of the description tree
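The rule base of figure 10 could be sketched as a simple lookup from a section's descriptor to an (operation, algorithm) pair. The function and rule priorities below are assumptions for illustration; they follow the spirit of figure 10 ("Douglas on SC3 sections", "cubic arcs on BC1 shapes") without reproducing its solution in every case:

```python
# Illustrative sketch of a rule base mapping a section descriptor to a
# generalization operation and a candidate algorithm. Names are assumptions.
def choose_generalization(section_class, bend_class=None, conflict=False):
    """Return (operation, algorithm) for one section of the description tree."""
    if conflict and bend_class == "BC1":
        return ("enhancement", "cubic arcs")      # e.g. S42, a salient bend in conflict
    if section_class == "SC3":                    # nearly straight
        return ("smoothing", "Gaussian smoothing")
    if section_class == "SC2":                    # barely sinuous
        return ("simplification + smoothing", "Douglas + Gaussian smoothing")
    if section_class == "SC1":                    # very sinuous: no adequate algorithm yet
        return ("schematization", None)
    raise ValueError(f"unknown section class {section_class}")

# Applying the rule base to the leaves of S4 from figure 9.
plan = {name: choose_generalization(sc, bc, conflict)
        for name, (sc, bc, conflict) in {
            "S41": ("SC3", None, False),
            "S42": ("SC1", "BC1", True),
            "S43": ("SC2", "BC1", False),
            "S44": ("SC2", "BC1", False),
        }.items()}
print(plan["S42"])  # ('enhancement', 'cubic arcs')
```

A real rule base would also consult the node's measures (bend base, height, length) and the map specifications before committing to an algorithm and its parameters, which is why several entries of figure 10 remain open ("???").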

References
(Affholder, 93) J.G. Affholder, 1993. Road modelling for generalization. NCGIA Initiative 8 Specialist Meeting, Buffalo. Unpublished; available by anonymous ftp from ftp.ign.fr.
(Armstrong, 91) M. Armstrong, 1991. Knowledge classification and organization. In: B.P. Buttenfield and R.B. McMaster (eds.), Map Generalization, chapter 2, p. 86-102. Longman Scientific & Technical, London.
(Attneave, 54) F. Attneave, 1954. Some informational aspects of visual perception. Psychological Review, Vol. 61, No. 3.
(Babaud et al, 86) J. Babaud, A. Witkin, M. Baudin and R. Duda, 1986. Uniqueness of the Gaussian kernel for scale-space filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 1, p. 26-33.
(Ballard, 86) D.H. Ballard, 1986. Strip trees: a hierarchical representation for curves. Communications of the ACM.
(Beard, 91) K. Beard, 1991. Theory of the cartographic line revisited. Cartographica 28(4), p. 32-58.
(Buttenfield, 87) B.P. Buttenfield, 1987. Automating the identification of cartographic lines. The American Cartographer, Vol. 14, No. 1, p. 7-20.
(Buttenfield, 91) B.P. Buttenfield, 1991. A rule for describing line feature geometry. In: B.P. Buttenfield and R.B. McMaster (eds.), Map Generalization, chapter 3, p. 150-171. Longman Scientific & Technical, London.
(Freeman, 78) H. Freeman, 1978. Shape description via the use of critical points. Pattern Recognition, Vol. 10, p. 2-14.
(Herbert et al, 92) G. Herbert, E.M. Joao and D. Rhind, 1992. Use of an artificial intelligence approach to increase user control of automatic line generalisation. EGIS 1992, p. 554-563.
(Hoffman et al, 82) D.D. Hoffman and W.A. Richards, 1982. Representing smooth plane curves for visual recognition. AAAI 82, p. 5-8.
(Jenks, 89) G.F. Jenks, 1989. Geographic logic in line generalisation. Cartographica, p. 27-42.
(Lagrange, 94) J.P. Lagrange and A. Ruas, 1994. Geographic information modelling: GIS and generalisation. Proc. SDH'94, Vol. 2, p. 1099-1117.
(McMaster, 86) R.B. McMaster, 1986. A statistical analysis of mathematical measures for linear simplification. The American Cartographer 13(2), p. 103-117.
(McMaster, 89) R.B. McMaster, 1989. The integration of simplification and smoothing algorithms in line generalization. Cartographica 26, p. 101-121.
(McMaster, 93) R.B. McMaster, 1993. Knowledge acquisition for cartographic generalization: experimental methods. ESF GISDATA Workshop, Compiègne, France, Dec. 1993. In: GIS and Generalization: Methodology and Practice. Taylor & Francis, London.
(Mokhtarian et al, 92) F. Mokhtarian and A.K. Mackworth, 1992. A theory of multi-scale, curvature-based shape representation for planar curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, p. 789-805.
(Plazanet et al, 94) C. Plazanet, J.P. Lagrange, A. Ruas and J.G. Affholder, 1994. Représentation et analyse de formes pour l'automatisation de la généralisation cartographique. EGIS'94 Proc., Vol. 2, p. 1112-1121.
(Plazanet, 95) C. Plazanet, 1995. Measurements, characterisation and classification for automated linear features generalisation. AUTOCARTO 12 Proc., Vol. 4, p. 59-68.
(Plazanet, 96) C. Plazanet, J.G. Affholder and E. Fritsch, 1996. The importance of geometric modelling in linear feature generalisation. To appear in a special issue of the CaGIS journal on map generalization.
(Rosin, 93) P.L. Rosin, 1993. Multiscale representation and matching of curves using codons. SPIE Conference on Computer Vision, Graphics and Image Processing, p. 66-77.
(Thapa, 89) K. Thapa, 1989. Data compression and critical points detection using normalized symmetric scattered matrix. AUTOCARTO 9, p. 78-89.
(Ueda et al, 90) N. Ueda and S. Suzuki, 1990. Automatic shape model acquisition using multiscale segment matching. Proc. 10th ICPR, p. 897-902.
(Weger, 93) G. Weger, 1993. Cours de cartographie. IGN internal report.
(Weibel, 89) R. Weibel, 1989. Konzepte und Experimente zur Automatisierung der Reliefgeneralisierung. PhD thesis, Geographisches Institut, Universität Zürich.
(Weibel et al, 95) R. Weibel, S. Keller and T. Reichenbacher, 1995. Overcoming the knowledge acquisition bottleneck in map generalization: the role of interactive systems and computational intelligence. To appear in COSIT'95.

