
Fast Line Detection Algorithms Based on Combinatorial Optimization

Marco Mattavelli (1), Vincent Noel (1), Edoardo Amaldi (2)

(1) Integrated Systems Laboratory - ISL - EPFL, Swiss Federal Institute of Technology, CH-1015 Lausanne, Switzerland
Marco.Mattavelli@epfl.ch, Vincent.Noel@epfl.ch
(2) DEI, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy
amaldi@elet.polimi.it

Abstract. In this paper we present a new class of algorithms for detecting lines in digital images. The approach is based on a general combinatorial optimization formulation aimed at estimating piecewise linear models. A linear system is constructed with the coordinates of all contour points in the image as coefficients and the line parameters as unknowns. The resulting linear system is then partitioned into a close-to-minimum number of consistent subsystems using a greedy strategy based on a thermal variant of the perceptron algorithm. The partition into consistent subsystems yields the classification of the corresponding image points into a close-to-minimum number of lines. A comparison with the standard Hough Transform and the Randomized Hough Transform shows the considerable advantages of our combinatorial optimization approach in terms of memory requirements, time complexity, robustness with respect to noise, possibility of introducing "a priori" knowledge, and quality of the solutions regardless of the algorithm parameter settings.

1 Introduction

The Hough Transform (HT) and its numerous variants are the classical approaches used to detect and recognize straight lines in digital images [1], [2]. The various HT variants have been developed to try to overcome the major drawbacks of the standard HT, namely its high time complexity and large memory requirements. In some cases, variants such as the randomized, probabilistic and hierarchical HT [7], [8] achieve an effective complexity reduction. In others, however, they face serious difficulties and fail to provide solutions of the desired quality. This happens, for instance, when several line segments need to be detected simultaneously or when there is a relatively high level of noise [1], [2]. In general, selecting small values for the thresholds of these HT variants may yield erroneous solutions, while selecting larger values may substantially increase the computational load and therefore jeopardize their attractive features of reduced time complexity and lower memory requirements. Thus, in the presence of several lines to be detected, one has to find a delicate trade-off between time/memory requirements and quality of solutions. In this paper we present a new approach for detecting lines in digital images that differs from the HT-based ones.

The basic idea is to formulate the problem as the one we introduced in [3] for estimating general piecewise linear models, namely as the combinatorial optimization problem of partitioning an inconsistent linear system into a close-to-minimum number of consistent subsystems. The method we devise to find approximate solutions to this problem formulation provides results of equivalent or even higher quality than the HT, and it compares very favorably in terms of time and memory requirements as well as robustness. The paper is organized as follows: Section 2 describes the combinatorial optimization formulation, some of its properties, and a greedy strategy to find good approximate solutions in a short amount of time. Section 3 presents some details of the algorithms as well as their convergence and projection strategy, while some possible optimizations are presented in Section 4. Some typical results are reported in Section 5 and compared with those provided by the basic and randomized HT. The paper concludes with some general remarks and perspectives.

2 The MIN-PCS Based Formulation of Line Detection

The problem of classifying the points of an image into line segments can be formulated as that of partitioning an inconsistent linear system into consistent subsystems. Indeed, the coordinates of points belonging to a line segment satisfy a simple linear system whose solution corresponds to the parameters of the line. In the presence of several lines and of noise distributed in the image, the linear system is inconsistent, i.e. there exists no solution satisfying the equations corresponding to all image points. In such cases, regression and robust regression based approaches are clearly inadequate. The breakdown point of classical robust regression methods limits, for instance, their applicability to a very restricted type of situation: there must be a dominant subsystem containing at least 50% of the data (the breakdown limit) [6], while for other approaches [10] the solution is guaranteed only for uniform or "a priori" known noise distributions. In the case of inconsistent systems corresponding to several "unknown" consistent subsystems and noise with "unknown" distributions, other approaches have to be found. For these reasons, alternative approaches, generally based on the HT and its numerous variants, have been extensively investigated [1], [2], [7]. Although these approaches tend to provide reasonable results and to be relatively robust with respect to noise, they have high time and memory requirements and they are quite sensitive to the threshold settings. In this paper we show that accurate solutions to the problem of line detection can be found by considering the following combinatorial optimization problem, which we introduced in [3]. MIN PCS: Given an inconsistent linear system Ax = b, where A is an m×n matrix, x an n-dimensional vector and b an m-dimensional vector, find a Partition of the set of equations into a MINimum number of Consistent Subsystems.
In the case of line detection, the coefficients of each row of the inconsistent linear system correspond to the coordinates of one of the contour points at hand. Any partition into a number s of consistent subsystems is then clearly equivalent to a partition of all contour points into s line segments. Given the choice of the objective function, we look for the simplest set - i.e. the smallest number - of line segments that

account for all contour points. To cope with noise and quantization or acquisition errors, it suffices to replace each equation a^k x = b_k, where a^k is the k-th row of A and b_k is the k-th component of b, by the two complementary inequalities:

    a^k x ≤ b_k + ε
    a^k x ≥ b_k − ε                                                        (1)

where ε is the maximum tolerable error. See [3] for the description of a simple geometric interpretation of this version of MIN PCS. In the present setting, it amounts to finding a minimum number of hyperslabs in n dimensions whose width is proportional to ε and such that each point corresponding to the coordinates of one contour point is contained in at least one hyperslab. Although we proved in [3], [4] that MIN PCS is an NP-hard problem, and hence it is unlikely that any algorithm is guaranteed to find an optimal solution in polynomial time, we have developed a heuristic which works well in practice and finds good approximate solutions in a short amount of time. The results obtained for other problems and for time series modeling clearly confirm this assertion [3], [4]. Since in practical applications we are interested in finding close-to-minimum partitions rapidly, we adopt a greedy strategy in which the original problem is subdivided into a sequence of smaller subproblems. Several projection schemes can be used for the solution of each subproblem depending on its nature. Here we introduce one scheme designed for line detection (i.e. MIN-PCS problems with two variables). Starting from the original inconsistent system of pairs of inequalities (1), close-to-maximum consistent subsystems are extracted iteratively. Clearly, the iterative extraction of consistent subsystems yields the partition into consistent subsystems. The extraction of close-to-maximum consistent subsystems is performed by using a thermal variant of the perceptron procedure that originally comes from machine learning (see [4], [5], [6] and the references therein). The algorithm can be described as follows (see also [3]):

- Problem: Given a system Ax = b with rows a^k ∈ R^n, and a maximum admissible error ε > 0, look for x_max such that the pair of complementary inequalities

      a^k x_max ≤ b_k + ε,    a^k x_max ≥ b_k − ε

  is satisfied for the maximum number of indices k ∈ {1,…,m}.

- Initialization: Take an arbitrary x_0 ∈ R^n, set c := 0 and the initial temperature t := t_0; select a predefined number of cycles C as well as a function g(c, C), decreasing in c and such that g(C, C) = 0.

  begin
    i ← 0;
    repeat
      c ← c + 1;  t ← t_0 · g(c, C);
      S ← {1,…,m};
      while S ≠ ∅ do
        pick s ∈ S randomly and remove s from S;  k_i ← s;
        E_i := b_{k_i} − a^{k_i} · x_i;
        δ_i := exp(−|E_i| / t);
        if a^{k_i} x_i ≤ b_{k_i} − ε then x_{i+1} := x_i + δ_i a^{k_i};
        else if a^{k_i} x_i ≥ b_{k_i} + ε then x_{i+1} := x_i − δ_i a^{k_i};
        i ← i + 1;  project the current solution onto the unit cylinder;
    while c < C;
    take x_{i+1} as an estimate of x_max
  end

where t_0 is determined by the average deviation from consistency (the average inequality error) of the current solution x_i at the beginning of each cycle. Intuitively, the behavior of the algorithm can be explained as follows. At high normalized temperature t/t_0, equations with either high or low deviations from consistency lead to a significant correction of the current solution x_i. Conversely, at low temperatures, only the equations with small deviations from consistency yield relevant corrections to the current solution. Convergence of the procedure is guaranteed because, when t is decreased to zero, the amplitude of the corrections tends to zero. In our experiments we have used exponentially decreasing functions for t, from an initial t_0 to 0, over a predefined maximum number of cycles C through the equation set. See [3], [4] for more details on the algorithm and the annealing schedules.
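As an illustration, the construction of the inequality system (1) and the thermal relaxation sketched above can be put together as follows. This is a minimal sketch, not the authors' implementation: the slope-intercept line model, the row-norm scaling of the correction step, and the exponential annealing schedule are our assumptions.

```python
import numpy as np

def build_system(points):
    """One row per contour point (x, y): with the (assumed) slope-intercept
    model m*x + q = y, the unknowns are w = (m, q)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], np.ones(len(pts))])
    return A, pts[:, 1]

def thermal_extract(A, b, eps, cycles=200, rng=None):
    """Extract a close-to-maximum consistent subsystem of A·w = b under
    tolerance eps: equations are visited in random order and the correction
    amplitude is damped by exp(-|deviation| / t) as t is annealed to zero."""
    rng = np.random.default_rng() if rng is None else rng
    m = A.shape[0]
    w = rng.standard_normal(A.shape[1])
    t0 = np.mean(np.abs(A @ w - b))          # average deviation from consistency
    for c in range(cycles):
        t = max(t0 * np.exp(-5.0 * c / cycles), 1e-12)   # assumed schedule
        for k in rng.permutation(m):
            e = b[k] - A[k] @ w              # signed deviation of equation k
            if abs(e) <= eps:
                continue                     # already inside the hyperslab
            delta = np.exp(-abs(e) / t)      # thermal damping factor
            w = w + delta * e * A[k] / (A[k] @ A[k])  # damped Kaczmarz-style step
    return w, np.abs(A @ w - b) <= eps       # solution and membership mask

# ten points on y = 2x + 1 plus two gross outliers
pts = [(x, 2 * x + 1) for x in range(10)] + [(3.0, 40.0), (7.0, -20.0)]
A, b = build_system(pts)
w, inside = thermal_extract(A, b, eps=0.1, rng=np.random.default_rng(0))
```

The damping makes equations with large deviations (the outliers) contribute negligible corrections once the temperature has dropped, so the returned mask marks the dominant collinear subset.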

3 Convergence Analysis of MIN-PCS Based Line Detection Algorithms

In this section we study the convergence behavior of the proposed algorithms. To clarify the convergence issues, let us consider a linear system Ax = b composed of only two equations in two variables:

    a_11 x_1 + a_12 x_2 = b_1
    a_21 x_1 + a_22 x_2 = b_2                                              (2)

These two equations represent two straight lines in R². Assuming that the two lines are not parallel, the relaxation algorithm described before can be illustrated as in Figure 1 (see also [3], [6]). The angle α between the two lines and the convergence rate are related as follows:

    ‖x_{i+1} − x*‖ / ‖x_i − x*‖ = cos²α
    ‖x_{i+n} − x*‖ / ‖x_i − x*‖ = cos^{2n}α                                (3)

It is clear that when the two lines are nearly parallel, i.e. when α is close to 0, the algorithm converges very slowly (since cos α is close to 1). These situations occur very frequently in line detection problems; it can easily be shown that this is the case for relatively short image segments located far from the (conventional) coordinate origin of the binary image. Conversely, when α is close to π/2, the algorithm converges very quickly. In practice, one should try to make the row vectors in the system of linear equations mutually orthogonal by using classical techniques, see for instance [9]. Unfortunately, this kind of orthogonalization procedure cannot be applied in the case of inconsistent systems (i.e. several consistent subsystems) because the "line pencil" is not unique: there are several consistent subsystems corresponding to different lines, as illustrated in Figure 1.


Figure 1. Left: graphical representation of the relaxation algorithms to extract close-to-maximum consistent subsystems; center: example of convergence for the case of a consistent system; right: example for the case of an inconsistent system, i.e. two consistent subsystems.
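The dependence (3) of the convergence rate on the angle α can be checked numerically. A minimal sketch (the helper names are ours), projecting alternately onto two lines through the origin, so the searched solution is the origin itself:

```python
import numpy as np

def project(x, a):
    """Orthogonal projection of x onto the line a·x = 0 (a is the line's normal)."""
    return x - (a @ x) / (a @ a) * a

def alternate(a1, a2, x0, iters):
    """Alternate projections onto the two lines; their intersection is the origin."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = project(project(x, a1), a2)
    return x

x0 = (5.0, 5.0)
slow = alternate(np.array([1.0, 0.0]), np.array([1.0, 0.1]), x0, 20)  # α ≈ 5.7°
fast = alternate(np.array([1.0, 0.0]), np.array([0.0, 1.0]), x0, 20)  # α = 90°
# nearly parallel lines retain most of the initial error after 20 iterations,
# while orthogonal lines eliminate it immediately
print(np.linalg.norm(slow), np.linalg.norm(fast))
```

With orthogonal lines the solution is reached after a single pair of projections; with an angle of a few degrees the residual error shrinks only by the factor cos²α per iteration.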

So as to devise a fast line detection algorithm, we have to guarantee fast convergence for all possible input data (i.e. line positions in the image). The idea is to find a suitable surface on which to project the current solution x_i that is as orthogonal as possible to all possible lines, so as to speed up the convergence and to constrain the solution to a desired region of the space. With this objective, the working space (parameter space) has been extended to a three-dimensional space. Each point X = (x, y) in R² (image space) is mapped into a plane in the three-dimensional parameter space R³ according to the linear equation ax + by + c = 0. Then each line to be extracted in R², defined by {(x, y) ∈ R² : a_i x + b_i y + c_i = 0}, corresponds to a line in R³ defined by {(a, b, c) ∈ R³ : ∃γ ∈ R, (a, b, c) = γ(a_i, b_i, c_i)}. Actually, in the case of an inconsistent linear system we have several "plane pencils", corresponding respectively to each straight line in the image, and each intersects in a line (all these lines contain the origin since all the linear equations are homogeneous). Thus, the problem is that in three dimensions each solution line contains the origin (i.e. (a, b, c) = (0, 0, 0) is always a solution), so that the algorithm always converges to the trivial solution 0. To avoid this, at each solution update (correction) we perform a projection onto the unit cylinder {(a, b, c) ∈ R³ : a² + b² = 1}. For each image point we alternate a projection onto the corresponding solution plane and one onto the unit cylinder. Hence, this procedure constrains the current iterative solution to remain close to the unit cylinder and its intersecting planes. Specifically, the following non-orthogonal projection is performed:

    a ← a / √(a² + b²),    b ← b / √(a² + b²),    c ← c / √(a² + b²)        (4)

Applying this procedure, the speed of convergence can now be expressed as:

    cos α = (n1 · n2) / (‖n1‖ ‖n2‖)                                         (5)

with n1 and n2 the normal vectors corresponding to two consecutive planes. A rigorous study of the convergence speed in the general case of an inconsistent system is obviously much more complex. This type of analysis must also take into account the statistical behavior of the distribution of the consistent subsystems and of the corrections versus the temperature scheme.
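A sketch of this alternating scheme, i.e. a plane projection followed by the cylinder normalization (4); the point set and function names are ours. For points on the line x + y − 1 = 0, the iterates approach the normalized parameter vector (1/√2, 1/√2, −1/√2):

```python
import numpy as np

def project_to_plane(w, p):
    """Orthogonally project w = (a, b, c) onto the plane a*x + b*y + c = 0
    associated with the image point p = (x, y); its normal is n = (x, y, 1)."""
    n = np.array([p[0], p[1], 1.0])
    return w - (n @ w) / (n @ n) * n

def project_to_cylinder(w):
    """Projection (4) onto the unit cylinder a^2 + b^2 = 1 (scales the whole
    vector by 1/sqrt(a^2 + b^2), so c is rescaled consistently)."""
    return w / np.hypot(w[0], w[1])

pts = [(0.0, 1.0), (1.0, 0.0), (0.5, 0.5)]   # three points of x + y - 1 = 0
w = np.array([0.3, 0.8, 0.1])                # arbitrary starting parameters
for _ in range(50):
    for p in pts:
        w = project_to_cylinder(project_to_plane(w, p))
print(w)   # ≈ (0.7071, 0.7071, -0.7071)
```

All the solution planes contain the direction (1, 1, −1), so the plane projections contract every other component while the cylinder normalization keeps the iterate away from the trivial solution 0; the result is the line's homogeneous parameters normalized to a² + b² = 1.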


Fig. 2. 3-D representation of the projection scheme in the case of a consistent system (three aligned points in the image correspond to the pencil defined by the planes P1, P2 and P3).

4 Line Detection Algorithms for Shape Detection and Tracking

In the basic scheme described in the previous section, consistent subsystems (i.e. lines in an image) are extracted iteratively by the MIN-PCS greedy strategy, starting from a random initial solution and letting the relaxation algorithm converge. This is the most generic approach, suitable for applications in which no "a priori" knowledge is available (number of lines to be extracted, probable positions, object shape, geometrical relations between lines, etc.). In many applications, additional information such as the object shape, an approximate position, the number of lines to extract, or a fixed angle (or distance) between lines might indeed be available. The nature of the approach allows the addition of various types of "a priori" information that can dramatically improve robustness and performance. This "a priori" information can easily be embedded into the kernel of the MIN-PCS based algorithms, while classical approaches have to handle it in a post-processing stage.

Multi-Solution Algorithms. An example of such inclusion of "a priori" information is the following. If the number of lines is known (or at least a minimum number), several solutions can be combined in the same temperature annealing scheme, saving the computation of the temperature decrease scheme and the associated post-processing stage for each line. There are two possible options to perform the correction:
- For each geometrical point randomly picked in the image (in the inner loop), every solution is updated. This is the simplest implementation but, because of their independence, some solutions could merge and converge to the same value. This option is powerful if a good approximation of the solution is known (for instance in tracking applications).
- Only the closest solution is updated. This option avoids the possible merging of solutions.

Geometrical Constraints. The projection scheme introduced previously yields directly the Hough parameters of the line (solutions are constrained to the unit cylinder) as follows:

    cos(α_h) = d_c · a
    sin(α_h) = d_c · b        with  d_c = 1 if c ≥ 0,  d_c = −1 if c < 0    (6)
    d_h = d_c · c = |c|

with α_h and d_h the Hough parameters of the line. This approach allows an easy implementation of additional geometrical constraints in the inner loop. For instance, in the case of two simultaneous lines, the second line can be forced to be parallel to the first one by imposing that:
- only the nearest solution is projected,
- the other is updated so as to be parallel (just a copy of the a and b parameters).
The same strategy can be applied to more than two parallel lines, or to lines that are perpendicular or at any given angle. Although the general strategy can always be used with very good results, a correction strategy suited to the application at hand can considerably improve the overall performance. For instance, when MIN-PCS methods are used for initial feature detection, it is only important that the temperature is high enough, while the choice of an initial random solution is irrelevant; the only drawback is the need for more computation to let the algorithm converge. Conversely, for tracking applications, the "a priori" information available from past images (i.e. the previous position of the features we are interested in) can be used to dramatically reduce the computation

load. The initial solution is then not chosen randomly but corresponds to one of the probable solutions, and the initial temperature is set to a lower value compatible with the maximum admissible distance from the past solution. If the current solution is very near to the past solution, the algorithm converges very rapidly; if not, the algorithm is capable of finding the new one within the admissible variation range, thus still saving computation compared to the general detection algorithm that does not use any "a priori" knowledge.
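Following (6), the Hough parameters can be read directly off a cylinder-constrained solution; a minimal sketch (function name ours):

```python
import math

def hough_params(a, b, c):
    """Map a solution (a, b, c) of a*x + b*y + c = 0 with a**2 + b**2 = 1
    to the Hough parameters (alpha_h, d_h) of (6); the sign factor d_c
    makes the distance d_h = |c| non-negative."""
    d_c = 1.0 if c >= 0 else -1.0
    return math.atan2(d_c * b, d_c * a), d_c * c

# the line x + y - 1 = 0, normalized so that a^2 + b^2 = 1
alpha_h, d_h = hough_params(2 ** -0.5, 2 ** -0.5, -(2 ** -0.5))
print(alpha_h, d_h)   # d_h = 1/sqrt(2), the line's distance from the origin
```

No accumulator array is involved: the parameters come out of the relaxation itself, which is what allows geometric constraints (parallelism, fixed angles) to be imposed directly on the solutions.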

5 Some Simulation Results

The generic MIN PCS algorithm has been applied to several synthetic and natural test images with different levels of noise. Fig. 3 reports an example of results for images without noise and with 2% of the total image points (or, equivalently, 118% of the contour points) as randomly distributed noise. The same images have been processed by the classical HT and by the randomized HT (RHT) [1], [7] with a 256×256 accumulator array equal to the image resolution. Table 1 summarizes the results of a profiling analysis of the HT, RHT and MIN PCS algorithms on a SUN UltraSparc workstation. All timing results are normalized so that 1 corresponds to 3.57 seconds. The lower part of the table indicates the (subjective) quality of the obtained results.

Noise % (*)  (**)   HT     RHT    MIN-PCS      HT    RHT   MIN-PCS
0            0      1      0.08   0.06         ++    ++    ++
1            59     1.16   0.13   0.49         ++    ++    ++
2            118    1.29   0.14   0.93         ++    +     ++
5            295    1.77   0.46   2.19         +
10           590    4.79   3.30   11.2         /           +
15           885    4.76   4.01   28.0         /     /     -

Table 1. Comparison of speed (normalized times) and quality of the results obtained with the HT, RHT and MIN PCS algorithms for the image of Fig. 3 at different levels of noise. (*) Noise % with respect to the total image points (256×256). (**) Noise % with respect to the total contour points. [++] all information is correctly detected; [+] all information can be extracted by a simple post-processing stage; [-] information is missing; [/] no results can be obtained.

As confirmed by all our experiments, the MIN PCS approach provides the highest quality results under all noise conditions. Moreover, its time requirements are much lower than those of the HT for low levels of noise and comparable at higher noise levels, while yielding higher quality results. At high noise levels no results can be obtained with the HT, and the RHT shows its limits, failing to provide any results even for medium levels of noise. It has to be pointed out that the comparison would be even more favorable for larger image sizes. For instance, considering the same image at double resolution (512×512 pixels instead of 256×256), the HT memory requirements and processing time increase by a factor of 4, while the time and space complexity of our algorithm only increase with the number of contour points and the number of subsystems. A more detailed analysis of the memory requirements and complexity of MIN PCS algorithms versus the HT is omitted here for brevity. Another result that demonstrates the excellent performance and robustness to noise of the MIN PCS based

algorithm is reported in Fig. 4. The image contains 500 points obtained by adding Gaussian noise of σ = 10 pixels to two original segments. For the various noise levels, the MIN PCS approach always recovers the two original segments, in 3.65 seconds of processing time. As shown in Fig. 4 (right), the RHT never provides a correct result, and 65% of the segments determined by the HT (in 6.25 seconds) are either not correctly grouped or do not have the correct parameters.

Fig. 3. From left to right: original gray-scale test image; original binary image (after basic edge detection); results of MIN PCS based line detection; results with 295% of the points as random noise (i.e. 5% of the total number of points).

Fig. 4. From left to right: MIN-PCS approach (original image: two lines whose points have been displaced by a quantity with a Gaussian distribution); MIN-PCS with 250% of additional random noise (w.r.t. the points of the original image); HT with 250% of additional noise; RHT with 250% of additional noise.

Fig. 5. Left: synthetic image composed of 10 randomly distributed lines. Right: MIN PCS solution when 50% of the points are the original image points and 50% are noise (randomly distributed points).

Another example of results is reported in Fig. 5. These tests are based on images containing 10 randomly distributed segments with additional speckle noise corresponding to 50% of the line points. The MIN PCS approach always provides the

correct results. Since on average each subsystem contains only 3-5% of the total points, no method based on robust regression techniques can be applied.

Conclusion

We have presented a class of algorithms, based on a general combinatorial optimization formulation, capable of detecting lines in images. The problem is formulated as that of partitioning an inconsistent linear system into a minimum number of consistent subsystems. The linear system is obtained from the contour points extracted from the images. A generic algorithm, as well as possible variants able to include "a priori" information available in some applications, has been described. A projection strategy has also been developed that avoids the critically slow convergence occurring for some data distributions, namely short segments located far from the conventional origin of the reference system. The MIN PCS approach can be applied to a variety of other problems for which piecewise linear models are valuable. Since higher degree polynomials can be viewed as linear functions of their coefficients, the approach can also be extended to the estimation of piecewise polynomial models with submodels of bounded degree, and thus to the detection of shapes other than lines.
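For instance (our illustration, with an ordinary least-squares solve standing in for the partitioning machinery), points on a parabola yield rows that are linear in the polynomial coefficients, so the same linear-system construction applies unchanged:

```python
import numpy as np

# points on y = x^2 - 3x + 2: each gives a row [x^2, x, 1] that is linear
# in the unknown coefficients (p2, p1, p0), even though the curve is not.
xs = np.arange(5, dtype=float)
ys = xs ** 2 - 3 * xs + 2
A = np.column_stack([xs ** 2, xs, np.ones_like(xs)])
coeffs, *_ = np.linalg.lstsq(A, ys, rcond=None)
print(coeffs)   # recovers the coefficients 1, -3, 2
```

A MIN PCS partition of such rows would likewise classify points into a close-to-minimum number of polynomial arcs.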

References

[1] P. Kultanen, L. Xu, and E. Oja, "A new curve detection method: Randomized Hough transform (RHT)," Pattern Recognition Letters, vol. 11, no. 5, pp. 331-338, 1990.
[2] E. Zapata, N. Guil, and J. Villalba, "A fast Hough transform for segment detection," IEEE Trans. on Image Processing, vol. 4, no. 11, pp. 1541-1548, 1995.
[3] M. Mattavelli and E. Amaldi, "A combinatorial optimization approach to extract piecewise linear structure in nonlinear data and an application to optical flow segmentation," Technical Report, School of Operations Research and Industrial Engineering, Cornell University, 1997.
[4] M. Mattavelli, "Motion analysis and estimation: from ill-posed discrete inverse linear problems to MPEG-2 coding," Ph.D. thesis, Department of Communication Systems, EPFL - Swiss Federal Institute of Technology, Lausanne, 1997.
[5] M. Mattavelli, V. Noel, and E. Amaldi, "An efficient line detection algorithm based on a new combinatorial optimization formulation," presented at ICIP98, Chicago, 1998.
[6] P. Meer, D. Mintz, and A. Rosenfeld, "Robust regression methods for computer vision: a review," International Journal of Computer Vision, vol. 6, no. 1, pp. 59-70, 1991.
[7] E. Oja, H. Kalviainen, and P. Hirvonen, "Houghtool: a software package for Hough transform calculation," presented at the 9th Scandinavian Conf. on Image Analysis, Uppsala, Sweden, 1995.
[8] P. Palmer, M. Petrou, and J. Kittler, "A Hough transform algorithm with a 2-D hypothesis testing kernel," CVGIP: Image Understanding, vol. 58, no. 2, pp. 221-234, 1993.
[9] H. Stark and Y. Yang, Vector Space Projections: A Numerical Approach to Signal and Image Processing, Neural Nets, and Optics, John Wiley & Sons, New York, 1998.
[10] C. V. Stewart, "MINPRAN: a new robust estimator for computer vision," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, pp. 925-938, 1995.
