
MASSACHUSETTS INSTITUTE OF TECHNOLOGY ARTIFICIAL INTELLIGENCE LABORATORY

A.I. Memo No. 1373 July, 1992

Distance Metric between 3D Models and 2D Images for Recognition and Classification

Ronen Basri and Daphna Weinshall

Abstract

Similarity measurements between 3D objects and 2D images are useful for the tasks of object recognition and classification. We distinguish between two types of similarity metrics: metrics computed in image-space (image metrics) and metrics computed in transformation-space (transformation metrics). Existing methods typically use image metrics; namely, metrics that measure the difference in the image between the observed image and the nearest view of the object. An example of such a measure is the Euclidean distance between feature points in the image and their corresponding points in the nearest view. (Computing this measure is equivalent to solving the exterior orientation calibration problem.) In this paper we introduce a different type of metric: transformation metrics. These metrics penalize for the deformations applied to the object to produce the observed image.

We present a transformation metric that optimally penalizes for "affine deformations" under weak-perspective. A closed-form solution, together with the nearest view according to this metric, is derived. The metric is shown to be equivalent to the Euclidean image metric, in the sense that they bound each other from both above and below. For the Euclidean image metric we offer a sub-optimal closed-form solution and an iterative scheme to compute the exact solution.

© Massachusetts Institute of Technology (1992)

This report describes research done at the Massachusetts Institute of Technology within the Artificial Intelligence Laboratory and the McDonnell-Pew Center for Cognitive Neuroscience.

Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-91-J-4038. Ronen Basri is supported by the McDonnell-Pew and the Rothschild post-doctoral fellowships. Daphna Weinshall is at IBM T.J. Watson Research Center, Hawthorne, NY.


1 Introduction

Object recognition is a process of selecting the object model that best matches the observed image. A common approach to recognition uses features (such as points or edges) to represent objects. An object is recognized in this approach if there exists a viewpoint from which the model features coincide with the corresponding image features, e.g. [Roberts, 1965, Fischler and Bolles, 1981, Lowe, 1985, Huttenlocher and Ullman, 1987, Basri and Ullman, 1988, Thompson and Mundy, 1987, Ullman and Basri, 1991]. Since images are often noisy and models are occasionally imperfect, it is rarely the case that a model aligns perfectly with the image.

Systems therefore look for a model that "reasonably" aligns with the image. Consequently, measures that assess the quality of a match become necessary.

Similarity measures between 3D objects and 2D images are needed for a range of applications:

The recognition of specific objects in noisy images, as described above.

The initial classification of novel objects. In this application a new object is associated with similar objects in the database. This way an image of, e.g., a Victorian chair is associated with models of (different) familiar chairs.

The recognition of non-rigid objects whose geometry is not fully specified. An example is the recognition of 3D hand gestures. In this task only the generic shape of the gesture is known, and the particular instances differ according to the specific physiology of the hand.

Existing recognition methods are usually tailored to solve the first of these applications, namely, the recognition of specific objects from noisy images. Many of these methods are sub-optimal (see Section 2 for a review), which may result in a large number of either mis-recognitions or false positives. When these methods are extended to handle problems such as classification and recognition of non-rigid objects, their performance may be even less predictable. The general problem of recognition therefore requires measures that provide a robust assessment of the similarity between objects and images. In this paper we describe two such measures, and develop a rigorous solution to the minimization problem that each measure entails.

A common measure for comparing 3D objects to 2D images is the Euclidean distance between feature points in the actual image and their corresponding points in the nearest view of the object. The assumption underlying this measure is that images are significantly less reliable than models, and so perturbations should be measured in the image plane. This assumption often suits recognition tasks. Other measures may better suit different assumptions. For example, when classifying objects, there is an inherent uncertainty in the structure of the classified object. One may therefore attempt to minimize the amount of deformation applied to the object to account for this uncertainty. Such a distance is measured in transformation space rather than in image space. A definition of these two types of measures is given in Section 3.


Measures to compare 3D models and 2D images are generally desired to have metrical properties; that is, they should monotonically increase with the difference between the measured entities. (A more exact definition is given in Appendix A.) The Euclidean distance between the image and the nearest view defines a metric. (We refer to this measure as the image metric.) The difficulty with employing this measure is that a closed-form solution to the problem has not yet been found, and therefore numerical methods currently must be employed to compute the measure. A common way to achieve a closed-form metric is to extend the set of transformations that objects are allowed to undergo from the rigid to the affine one. The problem with this measure is that it bounds the rigid measure from below, but not from above.

Other methods either achieve only sub-optimal distances, or they do not define a metric. The existing approaches are reviewed in Section 2.

This paper presents a closed-form distance metric to compare 3D models and 2D images.

The metric penalizes for the non-rigidities induced by the optimal affine transformation that aligns the model to the image under weak-perspective projection. The metric is shown to bound the least-squares distance between the model and the image both from above and below. We foresee three ways to use the metric developed in this paper:

1. Obtain a direct assessment of the similarity between 3D models and 2D images.

2. Obtain lower and upper bounds on the image metric. In many cases such bounds may suffice to unequivocally determine the identity of the observed object.

3. Provide an initial guess to be used by a numerical procedure that computes the image distance.

The rest of this paper is organized as follows: In Section 2 we review related work. In Section 3 we define the concepts used in this paper. In Section 4 we summarize the main results of this paper. These results are discussed in detail and proved in Section 5 for the transformation metric, and Section 6 for the image metric. Sections 5 and 6 can be omitted on a first reading. Finally, in Section 7 we compare the distances between 3D objects and 2D images obtained by alignment to our results.

2 Previous approaches

Previous approaches to the problem of model and image comparison using point features are divided into three major categories:

1. Least-square minimization in image space.

2. Sub-optimal methods using correspondence subsets.

3. Invariant functions.


The traditional photogrammetric approach to the problem of model and image comparison involves retrieving a view of the object that minimizes the least-squares distance to the image.

This problem is referred to as the exterior orientation calibration problem (or the recovery of the hand-eye transform) and is defined as follows. Given a set of n 3D points (model points) and a corresponding set of n 2D points (image points), find the rigid transformation that minimizes the distance in the image plane between the transformed model points and the image points.

An analytic solution to this problem has not yet been found. (Analytic solutions to the absolute orientation problem, the least-squares distance between pairs of 3D objects, have been found; see [Horn, 1987, Horn, 1991]. An analytic solution to the least-squares distance between pairs of 2D images has not yet been found.) Consequently, numerical methods are employed (see reviews in [Tsai, 1987, Yuan, 1989]). Such solutions often suffer from stability problems; they are computationally intensive and require a good initial guess.

To avoid using numerical methods, the object is frequently allowed to undergo affine transformations instead of just rigid ones. Affine transformations are composed of general linear transformations (rather than rotations) and translations, and they include, in addition to the rigid transformations, also reflection, stretch, and shear. The solution in the affine case is simpler than that of the rigid case because the quadratic constraints imposed in the rigid case are not taken into account, enabling the construction of a closed-form solution. At least six points are required to find an affine solution under perspective projection [Fischler and Bolles, 1981], and four are required under orthographic projection [Ullman and Basri, 1991].

The affine measure bounds the rigid measure from below. The rigid measure, however, is not bounded from above, as is demonstrated by the following example. Consider the case of matching four model points to four image points under weak-perspective. Since in this case there always exists a unique affine solution, the affine distance between the model and the image is zero. On the other hand, since three points uniquely determine the rigid transformation that aligns the model to the image, by perturbing one point we can increase the rigid distance without bound.
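The four-point claim can be checked numerically: after centering, the image coordinate vectors of four generic points always lie in the column space of the centered 4×3 model matrix, so a least-squares affine fit is exact. A minimal sketch with made-up random data (all names and the random points are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Four generic (non-coplanar) 3D model points and four arbitrary 2D image points.
P = rng.standard_normal((4, 3))
q = rng.standard_normal((4, 2))

# Center both sets so the translation component can be ignored.
P = P - P.mean(axis=0)
q = q - q.mean(axis=0)

# Least-squares affine fit: one 3-vector per image coordinate.
a, *_ = np.linalg.lstsq(P, q, rcond=None)

# The centered image vectors lie in the 3-dimensional zero-sum subspace,
# which equals the column space of the centered P, so the residual vanishes.
affine_dist = np.linalg.norm(q - P @ a) ** 2
print(affine_dist)  # ~0
```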

A second approach to comparing models to images involves selecting a small subset of correspondences (an alignment key), solving for the transformation using this subset, and then transforming the remaining points and measuring their distance from the corresponding image points. Three [Fischler and Bolles, 1981, Rives et al., 1981, Haralick et al., 1991] or four [Horaud et al., 1989] points are required under perspective projection, and three points under weak perspective [Ullman, 1989, Huttenlocher and Ullman, 1987]. The obtained distance depends critically on the choice of alignment key. Different choices produce different distance measures between the model and the image. The results are almost always sub-optimal, since it is generally better to match all points with small errors than to exactly match a subset of points and project all the errors onto the others.

A third approach involves the application of invariant functions. Such functions return a constant value when applied to any image of a particular model. Invariant functions have been successfully used only with special kinds of models, such as planar objects (e.g., [Lamdan et al., 1987, Forsyth et al., 1991]). More general objects can be recognized using model-based invariant functions [Weinshall, 1993]. For noise-free data, model-based invariant functions return zero if the image is an exact instance of the object. To account for noise, the output of these functions is usually required to be below some fixed threshold. In general, very little research has been conducted to characterize the behavior of these functions when the model and the image do not align perfectly. The result of thresholding therefore becomes arbitrary.

3 Definitions and notation

In the following discussion we assume weak-perspective projection. Namely, the object undergoes a 3D transformation that includes rotation, translation, and scale, and is then orthographically projected onto the image. Perspective distortions are not accounted for and are treated as noise.

In order to define a similarity measure for comparing 3D objects to 2D images, as discussed in Section 1, we first define the best-view of a 3D object given a 2D image:

Definition 1 [best-view]: Let d denote a difference measure between two 2D images of n features. Given a 2D image of an object composed of n features, the best-view of a 3D object (model) composed of n corresponding features is the view for which the smallest value of d is obtained. The minimization is performed over all possible views of the model; the views are obtained by applying a transformation T, taken from the set of permitted transformations, followed by a projection Π.

We compute d, the difference between two 2D images of n features, in two ways:

Image metric: we measure position differences in the image; namely, the Euclidean distance between corresponding points in the two images, summed over all points.

Transformation metric: the images are considered to be instances of a single 3D object. The metric measures the difference between the two transformations that align the object with the two images. This difference can be measured, for instance, by computing the Euclidean distance between the matrices that represent the two transformations (when the two transformations are linear).

As mentioned above, the measure d is applied to the given image and to the views of the given model. These views are generated by applying a transformation from a set of permitted transformations. The view that minimizes the distance d to the image is considered the best view, and the distance between the best view and the actual image is considered the distance between the object and the image.

We consider in this paper two families of transformations, rigid transformations and affine transformations, and we discuss the following metrics:


N_im: a metric that measures the image distance between the given image and the best rigid view of the object.

N_af: a metric that measures the image distance between the given image and the best affine view of the object.

N_tr: a transformation metric. We assume that the image is an affine view of the object. (When it is not, we substitute the best affine view for the image.) We look for the rigid view of the object that minimizes the difference between the two transformations: the affine transformation (between the object and the image) and the rigid transformation (between the object and its possible rigid views). In other words, we look for a view that minimizes the amount of "affine deformations" applied to the object.

To illustrate the difference between image metrics and transformation metrics, Figure 1 shows an example of three 2D images whose similarity relations reverse depending on which kind of metric is used. Consider the planar object in Figure 1(b) as a reference object, and assume the set of permitted transformations is the set of rigid transformations in 2D. The images in (a) and (c) are obtained by stretching the object horizontally (by 9/7) and vertically (by 3/2), respectively. (The image in (b) is obtained by applying a unit matrix to the object.)

Figure 1: The 2D image shown in (b) is closer to the image in (a) when the difference is computed in transformation space, and closer to the image in (c) when the difference is the Euclidean difference between the two images.

The image metric between the images in (b) and (a) is 4: two pixels at each of the left corners of the rectangle. The image metric between the images in (b) and (c) is 2: one pixel at each of the upper corners of the rectangle. Therefore, according to the image metric, Figure (c) is closer to (b) than (a) is.


To compute the transformation metric, consider the planar object illustrated in (b). We compute the difference between the matrices that represent the affine transformations from (b) to (a) and to (c) and the matrix that represents the best rigid transformation (in this case the unit matrix):

(a) is obtained from (b) by a horizontal stretch of 9/7. The transformation metric between (a) and (b) is therefore 2/7 = 9/7 - 1.

(c) is obtained from (b) by a vertical stretch of 3/2. The transformation metric in this case is 1/2 = 3/2 - 1.

Therefore, according to the transformation metric, Figure (a) is closer to (b) than (c) is.

It is interesting to note that in this example the solution obtained by minimizing the transformation metric seems to correlate better with human perception than the solution obtained by minimizing the image metric.
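Treating the deformations of this example as 2×2 linear maps, the transformation-space distances above can be reproduced with a Frobenius norm; a small sketch, where the diagonal stretch matrices and the Frobenius norm are one concrete choice of "Euclidean distance between matrices":

```python
import numpy as np

I = np.eye(2)
A_a = np.diag([9/7, 1.0])   # horizontal stretch producing image (a)
A_c = np.diag([1.0, 3/2])   # vertical stretch producing image (c)

# Distance of each affine map from the best rigid map (here the identity).
d_a = np.linalg.norm(A_a - I)   # 2/7
d_c = np.linalg.norm(A_c - I)   # 1/2
print(d_a, d_c, d_a < d_c)      # (a) is closer to (b) in transformation space
```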

3.1 Derivation of N_im and N_af

We now define the rigid and the affine image metrics explicitly. Under weak-perspective projection, the position in the image, q_i = (x_i, y_i), of a model point p_i = (X_i, Y_i, Z_i) following a rigid transformation is given by

$$\vec{q}_i = \Pi(R\vec{p}_i + \vec{t})$$

where R is a scaled 3×3 rotation matrix, t is a translation vector, and Π represents an orthographic projection. More explicitly, denote by r_1^T and r_2^T the top two row vectors of R, and denote t = (t_x, t_y, t_z); we have that

$$x_i = \vec{r}_1^T\vec{p}_i + t_x$$
$$y_i = \vec{r}_2^T\vec{p}_i + t_y \quad (1)$$

where

$$\vec{r}_1^T\vec{r}_2 = 0$$
$$\vec{r}_1^T\vec{r}_1 = \vec{r}_2^T\vec{r}_2 \quad (2)$$

The rigid metric, N_im, minimizes the difference between the two sides of Eq. (1) subject to the constraints (2).

When the object is allowed to undergo affine transformations, the rotation matrix R is replaced by a general 3×3 linear matrix (denoted by A) and the constraints (2) are ignored. That is,

$$\vec{q}_i = \Pi(A\vec{p}_i + \vec{t})$$

Denoting by a_1^T and a_2^T the top two row vectors of A, we obtain

$$x_i = \vec{a}_1^T\vec{p}_i + t_x$$
$$y_i = \vec{a}_2^T\vec{p}_i + t_y \quad (3)$$


The affine metric, N_af, minimizes the difference between the two sides of Eq. (3).

To define the rigid and the affine metrics, we first note that the translation component of both the best rigid and the best affine transformations can be ignored if the centroids of both the model and the image points are moved to the origin. In other words, we begin by translating the model and image points so that

$$\sum_{i=1}^{n}\vec{p}_i = \sum_{i=1}^{n}\vec{q}_i = 0 \quad (4)$$

We claim that now t = 0. The proof is given in Appendix C.

Denote by

$$P = \begin{pmatrix} X_1 & Y_1 & Z_1 \\ \vdots & \vdots & \vdots \\ X_n & Y_n & Z_n \end{pmatrix}$$

the matrix of model point coordinates, and by

$$\vec{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \qquad \vec{y} = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} \quad (5)$$

the location vectors of the corresponding image points. A rigid metric that reflects the desired minimization is given by

$$N_{im} = \min_{\vec{r}_1,\vec{r}_2 \in R^3} \|\vec{x} - P\vec{r}_1\|^2 + \|\vec{y} - P\vec{r}_2\|^2 \quad \text{s.t.} \quad \vec{r}_1^T\vec{r}_2 = 0,\; \vec{r}_1^T\vec{r}_1 = \vec{r}_2^T\vec{r}_2 \quad (6)$$

The corresponding affine metric is given by

$$N_{af} = \min_{\vec{a}_1,\vec{a}_2 \in R^3} \|\vec{x} - P\vec{a}_1\|^2 + \|\vec{y} - P\vec{a}_2\|^2 \quad (7)$$

In the affine case the solution is simple. We assume that the rank of P is 3 (the case for general, non-coplanar 3D objects). Denote by P^+ = (P^T P)^{-1} P^T the pseudo-inverse of P; we obtain

$$\vec{a}_1 = P^+\vec{x}$$
$$\vec{a}_2 = P^+\vec{y} \quad (8)$$

and the affine distance is given by

$$N_{af} = \|(I - PP^+)\vec{x}\|^2 + \|(I - PP^+)\vec{y}\|^2 \quad (9)$$

Since the solution in the rigid case is significantly more difficult than the solution in the affine case, often the affine solution is considered, and the rigidity constraints are used only for verification (e.g. [Ullman and Basri, 1991, Weinshall, 1993, DeMenthon and Davis, 1992]).
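Equations (8) and (9) translate directly into a few lines of linear algebra; the sketch below, using made-up points, synthesizes an exact affine view, recovers a_1 and a_2 with the pseudo-inverse, and confirms that N_af vanishes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Centered model matrix P (n x 3), rank 3 for a generic non-coplanar object.
n = 8
P = rng.standard_normal((n, 3))
P -= P.mean(axis=0)

# Synthesize an exact affine view: x = P a1, y = P a2.
a1_true, a2_true = rng.standard_normal(3), rng.standard_normal(3)
x, y = P @ a1_true, P @ a2_true

# Eq. (8): a1 = P+ x, a2 = P+ y, with P+ = (P^T P)^{-1} P^T.
P_plus = np.linalg.inv(P.T @ P) @ P.T
a1, a2 = P_plus @ x, P_plus @ y

# Eq. (9): the affine distance is the residual outside the column space of P.
PPp = P @ P_plus
N_af = (np.linalg.norm((np.eye(n) - PPp) @ x) ** 2
        + np.linalg.norm((np.eye(n) - PPp) @ y) ** 2)

print(np.allclose(a1, a1_true), np.allclose(a2, a2_true), N_af)
```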


The constraints (2) (substituting a_i for r_i, and using Eq. (8)) can be rewritten as

$$\vec{x}^T(P^+)^T P^+\vec{y} = 0$$
$$\vec{x}^T(P^+)^T P^+\vec{x} = \vec{y}^T(P^+)^T P^+\vec{y}$$

Denote

$$B = (P^+)^T P^+ \quad (10)$$

We obtain

$$\vec{x}^T B\vec{y} = 0$$
$$\vec{x}^T B\vec{x} = \vec{y}^T B\vec{y} \quad (11)$$

where B is an n×n symmetric, positive-semidefinite matrix of rank 3. (The rank would be smaller if the object points were coplanar.)

We call B the characteristic matrix of the object. B is a natural extension of the 3×3 model-based invariant matrix defined in [Weinshall, 1993]. A more general definition, and its efficient computation from images, is discussed in Appendix B.
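The rigidity constraints (11) can be checked numerically for an exactly rigid view; a sketch with hypothetical data, where a scaled rotation supplies orthogonal rows r_1, r_2 of equal norm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Centered model matrix, rank 3.
n = 8
P = rng.standard_normal((n, 3))
P -= P.mean(axis=0)

# A scaled rotation: r1, r2 orthogonal and of equal length.
s = 1.7
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
r1, r2 = s * Q[0], s * Q[1]

# Rigid weak-perspective image of the model.
x, y = P @ r1, P @ r2

# Eq. (10): characteristic matrix B = (P+)^T P+.
P_plus = np.linalg.inv(P.T @ P) @ P.T
B = P_plus.T @ P_plus

# Eq. (11): both constraints hold for a rigid image.
c_orth = x @ B @ y              # ~0
c_norm = x @ B @ x - y @ B @ y  # ~0
print(c_orth, c_norm)
```

The check works because P+ P = I for rank-3 P, so x^T B y collapses to r_1^T r_2.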

3.2 Derivation of N_tr

We can now define a transformation metric as follows. Consider the affine solution. The nearest "affine view" of the object is obtained by applying the model matrix P to the pair of vectors a_1 and a_2 defined in Eq. (8). In general this solution is not rigid, and so the rigidity constraints (2) do not hold for these vectors. The metric described here is based on the following rule: we look for another pair of vectors, r_1 and r_2, which satisfy the rigidity constraints and minimize the Euclidean distance to the affine vectors a_1 and a_2. P r_1 and P r_2 define the best rigid view of the object under the defined metric. The metric N_tr is defined by

$$N_{tr} = \min_{\vec{r}_1,\vec{r}_2 \in R^3} \|\vec{a}_1 - \vec{r}_1\|^2 + \|\vec{a}_2 - \vec{r}_2\|^2 \quad \text{s.t.} \quad \vec{r}_1^T\vec{r}_2 = 0,\; \vec{r}_1^T\vec{r}_1 = \vec{r}_2^T\vec{r}_2 \quad (12)$$

where a_1 and a_2 constitute the optimal affine solution; therefore

$$N_{tr} = \min_{\vec{r}_1,\vec{r}_2 \in R^3} \|P^+\vec{x} - \vec{r}_1\|^2 + \|P^+\vec{y} - \vec{r}_2\|^2 \quad \text{s.t.} \quad \vec{r}_1^T\vec{r}_2 = 0,\; \vec{r}_1^T\vec{r}_1 = \vec{r}_2^T\vec{r}_2 \quad (13)$$

In Section 5 we present a closed-form solution for this metric, and in Section 6 we show how this metric can be used to bound the image metric from both above and below.

4 Summary of results

In the rest of the paper we prove the following results:


4.1 Transformation space

The transformation metric defined in Eq. (13) has the following solution:

$$N_{tr} = \frac{1}{2}\left(\vec{x}^T B\vec{x} + \vec{y}^T B\vec{y} - 2\sqrt{\vec{x}^T B\vec{x}\;\vec{y}^T B\vec{y} - (\vec{x}^T B\vec{y})^2}\right)$$

where B is defined in Eq. (10), and x, y in Eq. (5). The best view according to this metric is given by

$$\vec{x}^* = PP^+(\alpha_1\vec{x} + \alpha_2\vec{y})$$
$$\vec{y}^* = PP^+(\beta_1\vec{x} + \beta_2\vec{y})$$

where the coefficients α_1, α_2, β_1, β_2 are defined in Appendix D.

4.2 Image space

Using N_tr we can bound the image metric from both above and below. Denoting

$$N_{af} = \|(I - PP^+)\vec{x}\|^2 + \|(I - PP^+)\vec{y}\|^2$$

we show that

$$N_{af} + \lambda_1 N_{tr} \;\le\; N_{im} \;\le\; N_{af} + \lambda_3 N_{tr}$$

where λ_1 ≤ λ_2 ≤ λ_3 are the eigenvalues of P^T P. A sub-optimal solution to N_im is given by

$$N_{af} + \frac{2\lambda_1\lambda_2}{\lambda_1 + \lambda_2} N_{tr}$$

where the computation of these quantities is described in Appendix E. A tighter upper bound is deduced from this sub-optimal solution:

$$N_{im} \;\le\; N_{af} + H.M.\{\lambda_2,\lambda_3\}\, N_{tr} \;\le\; N_{af} + 2\lambda_2 N_{tr}$$

where $H.M.\{\lambda_2,\lambda_3\} = \frac{2\lambda_2\lambda_3}{\lambda_2 + \lambda_3}$ is the harmonic mean of λ_2 and λ_3. The sub-optimal solution is proposed as an initial guess for an iterative algorithm to compute N_im.

5 Closed-form solution in transformation space

We now present a metric to compare 3D models and 2D images under weak-perspective projection. The metric is a closed-form solution to the transformation metric N_tr defined in Eq. (13). We use the notation developed in Section 3: B is the n×n characteristic matrix of the object, and the vectors x, y ∈ R^n contain the x- and y-coordinates of the image features. The metric is given by

$$N_{tr} = \frac{1}{2}\left(\vec{x}^T B\vec{x} + \vec{y}^T B\vec{y} - 2\sqrt{\vec{x}^T B\vec{x}\;\vec{y}^T B\vec{y} - (\vec{x}^T B\vec{y})^2}\right) \quad (14)$$

This metric penalizes for the non-rigidities of the optimal affine transformation. Note that N_tr = 0 if the two rigidity constraints in Eq. (11) are satisfied. Otherwise, N_tr > 0 represents the optimal penalty for the deviation from satisfying the two constraints.
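Eq. (14) is cheap to evaluate once B is known; the sketch below (hypothetical data) shows N_tr vanishing for a rigid view, and taking the value (1/2)(1.5 − 1)² = 0.125 for a view stretched by 1.5 along r_1:

```python
import numpy as np

def n_tr(B, x, y):
    """Closed-form transformation metric of Eq. (14)."""
    xx, yy, xy = x @ B @ x, y @ B @ y, x @ B @ y
    # Clamp to guard against tiny negative values from round-off.
    return 0.5 * (xx + yy - 2.0 * np.sqrt(max(xx * yy - xy**2, 0.0)))

rng = np.random.default_rng(3)
n = 8
P = rng.standard_normal((n, 3))
P -= P.mean(axis=0)

P_plus = np.linalg.inv(P.T @ P) @ P.T
B = P_plus.T @ P_plus

# Orthonormal r1, r2 from a random rotation.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
r1, r2 = Q[0], Q[1]

t_rigid = n_tr(B, P @ r1, P @ r2)            # rigid view: 0
t_stretch = n_tr(B, P @ (1.5 * r1), P @ r2)  # stretched view: 0.125
print(t_rigid, t_stretch)
```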


Derivation of the results:

In the rest of this section we prove that the expression for N_tr, given by Eq. (14), is indeed the solution to the transformation metric defined in Eq. (13). The proof proceeds as follows:

Theorem 1 computes the minimal solution when r_1 and r_2 are restricted to the plane spanned by a_1 and a_2; Theorem 2 extends this result to three-space.

Figure 2: The vectors a_1, a_2, r_1, and r_2 in the coordinate system specified in Theorem 1. a_1 and a_2 represent the solution for the affine case. r_1 and r_2 are constrained to lie in the same plane as a_1 and a_2, to be orthogonal, and to share the same norm.

Theorem 1: When r_1 and r_2 are limited to span{a_1, a_2}, N_tr is given by Eq. (14).

Proof: We first define a new coordinate system in which

$$\vec{a}_1 = w_1(1, 0)$$
$$\vec{a}_2 = w_2(\cos\gamma, \sin\gamma)$$
$$\vec{r}_1 = s(\cos\theta, -\sin\theta)$$
$$\vec{r}_2 = s(\sin\theta, \cos\theta)$$

(see Figure 2). γ is the angle between a_1 and a_2; w_1 and w_2 are the lengths of a_1 and a_2, respectively; s is the common length of the two rotation vectors r_1 and r_2; and -θ is the angle between a_1 and r_1. Without loss of generality it is assumed below that 0 ≤ γ ≤ 180° and -90° ≤ θ ≤ 90°. Notice that w_1, w_2, and γ are given, and that s and θ are unknown.


Denote by f the term to be minimized, that is,

$$f(\theta, s) = \|\vec{a}_1 - \vec{r}_1\|^2 + \|\vec{a}_2 - \vec{r}_2\|^2$$

then

$$f(\theta, s) = (w_1 - s\cos\theta)^2 + s^2\sin^2\theta + (s\sin\theta - w_2\cos\gamma)^2 + (s\cos\theta - w_2\sin\gamma)^2$$
$$= w_1^2 + w_2^2 + 2s^2 - 2s\left([w_1 + w_2\sin\gamma]\cos\theta + w_2\cos\gamma\sin\theta\right)$$

The partial derivatives of f are given by

$$f_\theta = 2s\left([w_1 + w_2\sin\gamma]\sin\theta - w_2\cos\gamma\cos\theta\right)$$
$$f_s = 4s - 2\left([w_1 + w_2\sin\gamma]\cos\theta + w_2\cos\gamma\sin\theta\right)$$

To find possible minima we equate these derivatives to zero:

$$f_\theta = 0, \qquad f_s = 0$$

Solutions with s = 0 are not optimal: in this case f(θ, 0) = w_1² + w_2², and we show below that solutions with s > 0 always imply smaller values of f.

When s ≠ 0, f_θ = 0 implies

$$\tan\theta_{min} = \frac{w_2\cos\gamma}{w_1 + w_2\sin\gamma}$$

therefore

$$\cos\theta_{min} = \frac{1}{\sqrt{1 + \tan^2\theta_{min}}} = \frac{w_1 + w_2\sin\gamma}{\sqrt{w_1^2 + w_2^2 + 2w_1w_2\sin\gamma}}$$

and f_s = 0 implies

$$s_{min} = \frac{1}{2}\left([w_1 + w_2\sin\gamma]\cos\theta_{min} + w_2\cos\gamma\sin\theta_{min}\right)$$

Notice the similarity of this expression to the expression for f. At the minimum point f can be rewritten as

$$f_{min} = w_1^2 + w_2^2 - 2(s_{min})^2 \quad (15)$$

(from which it is apparent that any solution of f with s ≠ 0 is smaller than the solution with s = 0). Substituting for θ_min we obtain

$$s_{min} = \frac{1}{2}\left([w_1 + w_2\sin\gamma]\cos\theta_{min} + w_2\cos\gamma\sin\theta_{min}\right)$$
$$= \frac{1}{2}\cos\theta_{min}\left(w_1 + w_2\sin\gamma + w_2\cos\gamma\tan\theta_{min}\right)$$
$$= \frac{w_1 + w_2\sin\gamma}{2\sqrt{w_1^2 + w_2^2 + 2w_1w_2\sin\gamma}}\left(w_1 + w_2\sin\gamma + \frac{w_2^2\cos^2\gamma}{w_1 + w_2\sin\gamma}\right)$$
$$= \frac{1}{2}\sqrt{w_1^2 + w_2^2 + 2w_1w_2\sin\gamma}$$


and therefore

$$f_{min} = w_1^2 + w_2^2 - 2(s_{min})^2 = w_1^2 + w_2^2 - \frac{1}{2}\left(w_1^2 + w_2^2 + 2w_1w_2\sin\gamma\right)$$

or

$$f_{min} = \frac{1}{2}\left(w_1^2 + w_2^2 - 2w_1w_2\sin\gamma\right)$$

Recall that w_1 and w_2 are the lengths of a_1 and a_2, that is,

$$w_1^2 = \vec{a}_1^T\vec{a}_1 = \vec{x}^T B\vec{x}$$
$$w_2^2 = \vec{a}_2^T\vec{a}_2 = \vec{y}^T B\vec{y}$$

and γ is the angle between the two vectors, namely

$$w_1 w_2\sin\gamma = \sqrt{w_1^2 w_2^2(1 - \cos^2\gamma)} = \sqrt{\vec{x}^T B\vec{x}\;\vec{y}^T B\vec{y} - (\vec{x}^T B\vec{y})^2}$$

We obtain that

$$f_{min} = \frac{1}{2}\left(\vec{x}^T B\vec{x} + \vec{y}^T B\vec{y} - 2\sqrt{\vec{x}^T B\vec{x}\;\vec{y}^T B\vec{y} - (\vec{x}^T B\vec{y})^2}\right) \qquad \Box$$
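The closed-form minimum of Theorem 1 can be checked against a brute-force grid search over (θ, s); a sketch with arbitrary (made-up) values of w_1, w_2, and γ:

```python
import numpy as np

# Arbitrary lengths and angle for a1, a2 (assumptions for this check).
w1, w2, gamma = 1.3, 0.8, np.deg2rad(70.0)

def f(theta, s):
    """Objective of Theorem 1 in its expanded form."""
    return (w1**2 + w2**2 + 2*s**2
            - 2*s*((w1 + w2*np.sin(gamma))*np.cos(theta)
                   + w2*np.cos(gamma)*np.sin(theta)))

# Closed form: f_min = (w1^2 + w2^2 - 2 w1 w2 sin(gamma)) / 2.
f_closed = 0.5 * (w1**2 + w2**2 - 2*w1*w2*np.sin(gamma))

# Brute force over a fine grid of theta in [-90, 90] degrees and s >= 0.
thetas = np.linspace(-np.pi/2, np.pi/2, 2001)
ss = np.linspace(0.0, 3.0, 2001)
f_grid = f(thetas[:, None], ss[None, :]).min()

print(f_closed, f_grid)  # agree to grid resolution
```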

In Theorem 1 we proved that if r_1 and r_2 are restricted to the plane spanned by a_1 and a_2, the metric N_tr is given by Eq. (14). In Theorem 2 below we prove that any other solution for r_1 and r_2 results in a larger value of f; therefore the minimum of f is obtained inside the plane, implying that N_tr is indeed given by Eq. (14).

Theorem 2: The optimal r_1 and r_2 lie in the plane spanned by a_1 and a_2.

Proof: Assume, by way of contradiction, that r_1, r_2 ∉ span{a_1, a_2}; we show that the corresponding value of f is not minimal.

Consider first the plane spanned by r_2 and a_1, and assume, by way of contradiction, that r_1 ∉ span{r_2, a_1}; we show that there exists a vector r_1' such that

$$\|\vec{r}_1'\| = \|\vec{r}_2\|, \qquad \vec{r}_1' \perp \vec{r}_2, \qquad \|\vec{r}_1' - \vec{a}_1\| < \|\vec{r}_1 - \vec{a}_1\|$$

contradicting the optimality of f.


Assume ||r_2|| = s, and denote by r_1' a vector of length s in the direction (r_2 × a_1) × r_2. This vector lies in span{r_2, a_1} and satisfies

$$\|\vec{r}_1'\| = \|\vec{r}_2\|, \qquad \vec{r}_1' \perp \vec{r}_2$$

(There exist two such vectors, opposite in direction; we consider the one nearer to a_1.) We now show that

$$\|\vec{r}_1' - \vec{a}_1\| < \|\vec{r}_1 - \vec{a}_1\|$$

Denote the angle between a_1 and r_1' by α, and the angle between r_1' and r_1 by β. Also denote w_1 = ||a_1|| and s = ||r_1|| = ||r_2|| = ||r_1'||. We can rotate the coordinate system so as to obtain

$$\vec{r}_1' = s(1, 0, 0)$$
$$\vec{r}_2 = s(0, 1, 0)$$
$$\vec{a}_1 = w_1(\cos\alpha, \sin\alpha, 0)$$
$$\vec{r}_1 = s(\cos\beta, 0, \sin\beta)$$

Now,

$$\|\vec{r}_1' - \vec{a}_1\|^2 = (s - w_1\cos\alpha)^2 + w_1^2\sin^2\alpha = w_1^2 + s^2 - 2sw_1\cos\alpha$$
$$\|\vec{r}_1 - \vec{a}_1\|^2 = (s\cos\beta - w_1\cos\alpha)^2 + w_1^2\sin^2\alpha + s^2\sin^2\beta = w_1^2 + s^2 - 2sw_1\cos\alpha\cos\beta$$

and therefore, when α ≠ 0 and β ≠ 0 (when β = 0, r_1 and r_1' coincide),

$$\|\vec{r}_1' - \vec{a}_1\| < \|\vec{r}_1 - \vec{a}_1\|$$

contradicting the minimality property. Therefore r_1 ∈ span{r_2, a_1}. Similarly it can be shown that r_2 ∈ span{r_1, a_2}; therefore all four vectors a_1, a_2, r_1, and r_2 lie in a single plane. □

Corollary 3: The transformation metric is given by

$$N_{tr} = \frac{1}{2}\left(\vec{x}^T B\vec{x} + \vec{y}^T B\vec{y} - 2\sqrt{\vec{x}^T B\vec{x}\;\vec{y}^T B\vec{y} - (\vec{x}^T B\vec{y})^2}\right)$$

and the best view for this metric is

$$\vec{x}^* = PP^+(\alpha_1\vec{x} + \alpha_2\vec{y})$$
$$\vec{y}^* = PP^+(\beta_1\vec{x} + \beta_2\vec{y})$$
