COMPOSITIO MATHEMATICA

I. J. SCHOENBERG and ANNE WHITNEY

A theorem on polygons in n dimensions with applications to variation-diminishing and cyclic variation-diminishing linear transformations

Compositio Mathematica, tome 9 (1951), p. 141-160.
<http://www.numdam.org/item?id=CM_1951__9__141_0>

© Foundation Compositio Mathematica, 1951, all rights reserved.
A theorem on polygons in n dimensions with applications to variation-diminishing and cyclic variation-diminishing linear transformations

by

I. J. Schoenberg and Anne Whitney

Philadelphia, Pa.
Introduction and statement of results.
1. Let

(1)   y_i = a_{i1} x_1 + a_{i2} x_2 + ... + a_{in} x_n,   (i = 1, ..., m),

be a general real linear transformation which, in terms of its matrix A, may also be written as (y) = A(x). Throughout this paper we denote the rank of A by r. Let v(y) denote the number of variations of sign in the sequence of the numbers y_1, y_2, ..., y_m. If we pick a set of r linearly independent y's, and call them y_1', y_2', ..., y_r', it is clear that we may assign to them arbitrary values alternating in sign; for such a set of x's and corresponding y's we now have that v(y) ≥ v(y') = r - 1. This proves the inequality

(2)   sup_x v(y) ≥ r - 1.
We now inquire as to when the equality sign holds in (2). In other words: when do we always have the inequality

(3)   v(y) ≤ r - 1,

for arbitrary x's? The answer is conveniently described in terms of the following concepts.
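Throughout the paper v(·) counts variations of sign, with vanishing terms discarded (the authors later arrange, in § 1, that no y_v vanishes). A minimal sketch of this count, in Python (ours, not part of the paper):

```python
def v(seq):
    """Number of variations of sign in a sequence, zero entries discarded."""
    signs = [1 if t > 0 else -1 for t in seq if t != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

print(v([1, -2, 3, 4]))   # 2: the signs +, -, +, + change twice
print(v([1, 0, -1]))      # 1: the zero is ignored
```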
We say that a real matrix is definite if it has no two elements of opposite signs. We say that a matrix has definite columns if each one of its columns is a definite one-column matrix. Let r be the rank of A; if 1 ≤ i ≤ r, we denote by A^(i) the matrix, of (m choose i) rows and (n choose i) columns, whose elements are the i-th order minors of A, where all minors formed from the same set of i rows of A appear in the same row of A^(i), and the same rule holds regarding columns. It is well known that A^(r) has rank unity, so that its columns are proportional.
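The compound matrix A^(i) can be computed mechanically from this description. A small Python sketch (ours), with rows and columns of A^(i) ordered lexicographically as in the text; the sample 3 × 2 matrix is hypothetical:

```python
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion along the first row (exact, small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def compound(a, i):
    """The matrix A^(i) of all i-th order minors of A."""
    rows, cols = len(a), len(a[0])
    return [[det([[a[p][q] for q in cs] for p in rs])
             for cs in combinations(range(cols), i)]
            for rs in combinations(range(rows), i)]

A = [[1, 0], [1, 1], [1, 2]]       # rank 2, so A^(2) is its rank-order compound
print(compound(A, 2))              # [[1], [2], [1]]: a single definite column
```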
The answer to the question raised above may now be described as follows:

THEOREM 1. The inequality (3) always holds if and only if the matrix A^(r) has definite columns.

A proof is given in § 1.

2. As mentioned in the
title, our result may be interpreted geometrically. Let us interpret the rows of A as points P_i = (a_{i1}, ..., a_{in}), (i = 1, ..., m), in the space E_n, the vectors OP_i spanning an r-flat. Now v(y) represents the number of times the hyperplane x_1 a_1 + ... + x_n a_n = 0 (of coefficients x_1, ..., x_n and running coordinates a_1, ..., a_n) is crossed by the polygonal line Π = P_1 P_2 ... P_m. The inequality (2) shows that appropriate hyperplanes through the origin are crossed by Π at least r - 1 times, while (3) requires that there be no such hyperplanes which are crossed by Π more than r - 1 times. By Theorem 1 we have a configuration satisfying (3) if and only if the matrix A^(r) has definite columns. Let us
apply our result to the following related situation. Let Q_i = (b_{i1}, ..., b_{in}), (i = 1, ..., m), be m points in E_n and let r be the dimension of the least flat manifold containing them; the dimension r is determined by the rank of the matrix

(4)   B = || 1  b_{i1}  b_{i2}  ...  b_{in} ||,   (i = 1, ..., m),

which must be r + 1. As before we find that appropriate (unrestricted) hyperplanes will be crossed by the polygonal line Π* = Q_1 Q_2 ... Q_m at least r times, since for appropriate values of the variables (x) the m linear forms

y_i = x_0 + b_{i1} x_1 + ... + b_{in} x_n,   (i = 1, ..., m),

will show a number of variations of signs v(y) ≥ r. Applying Theorem 1, we may state the following.
COROLLARY. Let the polygonal line Π* = Q_1 Q_2 ... Q_m in E_n be such that the matrix (4) is of rank r + 1. Then Π* crosses no hyperplane of E_n more than r times if and only if the matrix B^(r+1) has definite columns. In the particular case when r = n, the requirement reduces to the condition that all non-singular minors of B, of order n + 1, be of the same sign.
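For r = n = 2 the last condition of the Corollary is easy to test numerically: the plane polygonal line is of order 2 exactly when all non-singular order-3 minors of the matrix (4) share one sign. A Python sketch (ours), with hypothetical vertices:

```python
from itertools import combinations

def det3(m):
    """Determinant of a 3 x 3 matrix, expanded directly."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def order_two(points):
    """True iff all non-singular order-3 minors of ||1, b_i1, b_i2|| share one sign."""
    b = [[1, x, y] for x, y in points]
    minors = [det3([b[i] for i in rs]) for rs in combinations(range(len(b)), 3)]
    nonzero = [d for d in minors if d != 0]
    return all(d > 0 for d in nonzero) or all(d < 0 for d in nonzero)

print(order_two([(0, 0), (1, 0), (1, 1), (0, 1)]))   # True: a convex quadrilateral
print(order_two([(0, 0), (1, 1), (1, 0), (0, 1)]))   # False: a self-crossing one
```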
Such polygonal lines Π* are said to be of order r. If r = 2, Π* is an ordinary plane convex polygonal line. This geometric interpretation of Theorem 1 will not be referred to in the sequel; it was mentioned explicitly because of its possible usefulness in the theory of arcs of order r in the sense of Juel, Haupt, and Scherk (see [4]).
3. Let us return to the transformation (1) and let us denote by v(x) and v(y) the number of variations of sign in the sequence of the variables (x) and (y), respectively. In a number of analytical investigations those transformations (1) are of importance which have the property that we always have the inequality

(5)   v(y) ≤ v(x),

for arbitrary values of the x's. Special instances of such transformations were found by Laguerre, A. Hurwitz, Pólya and Fekete (see [2]); Pólya called them variation-diminishing transformations. The problem of characterizing such transformations was attacked by Schoenberg in 1930 with only partial success (see [5]); it was solved in 1933 by Th. Motzkin (see [3]) as follows.
THEOREM 2 (Motzkin). The transformation (1) is variation-diminishing if and only if its matrix A, of rank r, enjoys the following two properties:

A. The matrices A, A^(2), ..., A^(r-1) are definite.
B. The matrix A^(r) has definite columns.

In § 2 we show as a first application of Theorem 1 how Motzkin's Theorem 2 may be readily derived from it.

4. As a second
application of Theorem 1 we solve in § 3 the related problem of cyclic variation-diminishing transformations. By the number v_c(x) of the cyclic variations of sign of the numbers x_1, ..., x_n, we mean the following: place the numbers x_1, ..., x_n, in order, at the vertices of a regular n-gon and let v_c(x) denote the number of variations of signs counted along the circumscribed circle.
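The cyclic count differs from the linear one only by the wrap-around pair. A Python sketch of both counts (ours; zero entries discarded, as before):

```python
def v(seq):
    """Linear variations of sign, zeros discarded."""
    signs = [1 if t > 0 else -1 for t in seq if t != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def vc(seq):
    """Cyclic variations: the linear count with the wrap-around pair included."""
    signs = [1 if t > 0 else -1 for t in seq if t != 0]
    if not signs:
        return 0
    return sum(1 for a, b in zip(signs, signs[1:] + signs[:1]) if a != b)

print(v([1, -2, 3, 4]), vc([1, -2, 3, 4]))   # 2 2
print(v([1, -1]), vc([1, -1]))               # 1 2: vc is the least even number >= v
```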
Clearly v_c(x) is always an even number; more exactly

(6)   v(x) ≤ v_c(x) ≤ v(x) + 1.

Thus v_c(x) is the least even number ≥ v(x). We call the transformation (1) cyclic variation-diminishing if and only if we always have the inequality

(7)   v_c(y) ≤ v_c(x),

for arbitrary x's. Since (5) implies (7), in view of (6), it is seen that a variation-diminishing transformation is also cyclic variation-diminishing; the converse, however, is not true. A convenient formulation of the main result requires the following.
DEFINITION 1. Let r = 2s be an even positive integer. We say that the matrix B = ||b_{αβ}||, of r + 2 rows and r columns, is a separable matrix, or an S-matrix, or B ∈ S, if and only if its r + 2 consecutive cyclic block-minors of order r alternate strictly in sign.
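Definition 1 is mechanical to verify: form the r + 2 minors of order r taken from cyclically consecutive rows and check that their signs alternate strictly around the cycle. A Python sketch (ours; the 4 × 2 sample matrix is hypothetical):

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def cyclic_block_minors(b):
    """The r + 2 minors of order r formed from r cyclically consecutive rows."""
    m, r = len(b), len(b[0])
    assert m == r + 2
    return [det([b[(i + k) % m] for k in range(r)]) for i in range(m)]

def is_s_matrix(b):
    d = cyclic_block_minors(b)
    return all(x * y < 0 for x, y in zip(d, d[1:] + d[:1]))

B = [[1, 1], [1, -1], [1, 2], [1, 0]]   # r = 2, r + 2 = 4 rows
print(cyclic_block_minors(B))           # [-2, 3, -2, 1]
print(is_s_matrix(B))                   # True
```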
Cyclic variation-diminishing transformations are described by the following theorem.
THEOREM 3. The transformation (1) is cyclic variation-diminishing, i.e. the inequality (7) always holds, if and only if its matrix A, of rank r, enjoys the following three properties:

a. For every odd i, 1 ≤ i < r, the matrix A^(i) is definite.
b. If r is odd, then A^(r) has definite columns.
c. If r is even, then A should have no submatrix B, of r + 2 rows and r columns, which is an S-matrix (see Definition 1).
The conditions of Theorem 3 are better understood if we realize that they must be invariant with respect to cyclic permutations of rows, or of columns, performed on A. This remark explains the role played by the parity of the orders of minors of A: notice the obvious fact that an odd order determinant never changes sign if we permute its rows cyclically, and that this is no longer true for even orders.
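This parity fact is quickly checked: cycling the rows of an order-n determinant is a permutation of sign (-1)^(n-1). A Python sketch (ours):

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def cycled(m):
    return m[1:] + m[:1]   # rows 2, ..., n, 1

M3 = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]   # odd order: value preserved
M2 = [[2, 1], [0, 3]]                    # even order: sign reversed
print(det(M3), det(cycled(M3)))          # 7 7
print(det(M2), det(cycled(M2)))          # 6 -6
```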
The conditions of Theorem 3 simplify considerably in the special case when m = r; then only condition (a) remains to be required, since conditions (b) and (c) are vacuously satisfied.

A final remark concerns the purpose of a
study of cyclic variation-diminishing transformations. One of the authors has studied the convolution transformation

(8)   g(x) = ∫ Λ(x - t) f(t) dt   (the integral extended over the whole real axis),

where Λ(x) is a given summable function, and has found the necessary and sufficient conditions which Λ(x) is to satisfy in order that the transformation (8) should be variation-diminishing. By this we mean that for every bounded f(t) the transformed function g(x) should have no more variations in sign than the original function f(t) (see [6]). Motzkin's Theorem 2 was one of the tools used in the solution of this problem.
Let now Φ(x) be a given function of period 2π which is summable over a period. If f(t) is a bounded function of period 2π then

(9)   g(x) = ∫_0^{2π} Φ(x - t) f(t) dt

is a continuous function of period 2π. The following periodic analogue of the previous problem was proposed orally by H. Cartan: for which functions Φ(x) will (9) be cyclic variation-diminishing in the sense that we shall always have the inequality

v_c(g) ≤ v_c(f)?

For this problem, to which we hope to return at a future date, Theorem 3 should prove to be an important
tool.

§ 1. A proof of Theorem 1.

5. PROOF OF NECESSITY. We assume that the system (y) = A(x) always implies that

(1.1)   v(y) ≤ r - 1,

and we are to show that the columns of the matrix A^(r) are definite. We first dispose of the special case when r = n = m - 1, in which case our transformation becomes

y_i = a_{i1} x_1 + ... + a_{in} x_n,   (i = 1, ..., n + 1),

with a matrix of rank n. Since we always have that v(y) ≤ n - 1, we see that v(y) = n is impossible, so that the y's will never strictly alternate in sign. However, with an obvious notation for minors, the relation

(1.2)   A_1 y_1 - A_2 y_2 + ... + (-1)^n A_{n+1} y_{n+1} = 0,

where A_v denotes the minor of order n obtained from the matrix by deleting its row v, is the only constraint among the y's. Hence no two among the minors of order n, A_1, A_2, ..., A_{n+1}, can be of opposite signs, for otherwise we could clearly satisfy the relation (1.2) with values such that (-1)^{v-1} y_v > 0, for all v, which we know to be impossible.
We pass now to the general case. Take a set of r linearly independent columns of A, say C_1, C_2, ..., C_r. Setting equal to zero the x's corresponding to the remaining columns, we obtain the system

(y) = || C_1, C_2, ..., C_r || (x_1, ..., x_r),

which inherits the property that v(y) ≤ r - 1, for arbitrary values of x_1, ..., x_r. We are to show that all minors of order r, of its matrix, are of the same sign. More precisely: all the non-singular among them are positive, or all are negative. Let us denote by A_r a generic non-singular minor of order r and let A_r' and A_r'' be any two such distinct minors. By a known theorem (see [3], page 64) these two minors may be made to be the first and last of a finite sequence of A_r's such that two adjacent members of the sequence have precisely r - 1 rows in common. Let A_r^(1), A_r^(2) be adjacent members of the sequence. The system formed only of those r + 1 equations corresponding to rows of A_r^(1) or A_r^(2) also enjoys the property (1.1); by the previous case already settled we conclude that A_r^(1) · A_r^(2) > 0. Applying this to all pairs of adjacent elements of the sequence, we conclude that A_r' · A_r'' > 0 and the necessity proof is completed.
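The single constraint (1.2) underlying the special case r = n = m - 1 can be verified numerically: whatever the x's, the alternating sum of the products A_v y_v vanishes. A Python sketch with a hypothetical 3 × 2 matrix (exact arithmetic):

```python
from fractions import Fraction

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 2], [3, 1], [0, 4]]   # m = 3, n = 2, rank 2
x = [Fraction(5), Fraction(-2)]
y = [sum(Fraction(A[i][j]) * x[j] for j in range(2)) for i in range(3)]

# A_v: the order-2 minor obtained from A by deleting its row v
minors = [det2([A[i] for i in range(3) if i != v]) for v in range(3)]
s = sum((-1) ** v * minors[v] * y[v] for v in range(3))
print(s)   # 0, as relation (1.2) requires
```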
6. PROOF OF SUFFICIENCY. Let us assume that

(1.3)   A^(r) has definite columns,

and let us show that (y) = A(x) always implies (1.1). We start by adding a number of additional assumptions without thereby losing generality.

A. Without loss of generality we may assume that r = n. Indeed, in view of the relation (y) = A(x), the column of y's is a linear combination of the columns of A. Therefore the column of y's may also be expressed as a linear combination of a set of r linearly independent columns, C_{i_1}, ..., C_{i_r}, say. With appropriate x_1', ..., x_r' we therefore have that

(1.4)   (y) = || C_{i_1}, ..., C_{i_r} || (x_1', ..., x_r').

Now (1.4) is a linear transformation with r = n, whose matrix has minors of order r of one sign only. Hence if our theorem is already established for this particular case, then (1.1) holds and everything is settled. Let us therefore assume that A is of rank n and we are to prove that

(1.5)   v(y) ≤ n - 1.

B. Without loss
of generality we may and do assume that none of the rows of A has only zero elements. Indeed, any such rows, if present, may simply be struck out without impairing the assumptions on A.

C. Without loss of generality we may and do assume that none of the y's vanish. Indeed, if some of the y's vanish, we may so slightly alter the x's as to make those y's non-vanishing, without changing the signs of the non-vanishing y's. This operation can only increase v(y), so that if (1.5) is true after the change, it certainly was true before.

7. At this
point it is more convenient to invert the argument and to show that our assumptions are incompatible with the additional assumption that

(1.6)   v(y) ≥ n.

To this end we first prove the very simple

LEMMA 1. The quantities y_1, ..., y_m having fixed non-vanishing values, we define the new quantities z_v by

(1.7)   z_v = α_v y_v + β_v y_{v+1},   (v = 1, ..., m - 1),

concerning which we claim the following:

1. If

(1.8)   v(y) = m - 1,

we can always find positive coefficients α_v, β_v, (v = 1, ..., m - 1), such that

(1.9)   v(z) = m - 2.

2. If

(1.10)   v(y) ≤ m - 2,

we can always find positive coefficients α_v, β_v, such that

(1.11)   v(z) ≥ v(y).

The first statement is clearly true, for (1.8) means that the y's alternate in sign; if we choose all β_v = 1 and all α_v = N, then sgn z_v = sgn y_v, (v = 1, ..., m - 1), provided N is large enough, and (1.9) follows.

The second statement is proved by induction with respect to m. Being trivially true for m = 2, let us assume the statement true for m - 1 rather than m. Let us strike out the last equation (1.7). There are two cases:

α) y_1, y_2, ..., y_{m-1} alternate in sign. By the first part of Lemma 1 we may choose α_v, β_v positive (v = 1, ..., m - 2) such that sgn z_v = sgn y_v, (v = 1, ..., m - 2), obtaining v(z_1, ..., z_{m-2}) = v(y_1, ..., y_{m-1}) - 1. However, by (1.10) we must have y_{m-1} y_m > 0, hence also sgn z_{m-1} = sgn y_{m-1} = - sgn y_{m-2} = - sgn z_{m-2}. Hence in the end we have v(z) = v(y) no matter how the positive α_{m-1}, β_{m-1} are chosen.

β) y_1, y_2, ..., y_{m-1} do not alternate in sign. By our induction assumption we can choose z_1, ..., z_{m-2} so that

v(z_1, ..., z_{m-2}) ≥ v(y_1, ..., y_{m-1}).

If we now consider y_m there are two possibilities: if y_{m-1} y_m < 0, we can arrange to have z_{m-1} of either sign and we choose it so as to produce a variation of sign in the z's. If y_{m-1} y_m > 0, we cannot help achieving the desired inequality (1.11).
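The first part of Lemma 1 is concrete enough to run: with all β_v = 1 and all α_v = N large, each z_v = α_v y_v + β_v y_{v+1} inherits the sign of y_v. A Python sketch (ours; N = 10 is large enough for the sample y):

```python
def v(seq):
    """Number of variations of sign, zeros discarded."""
    signs = [1 if t > 0 else -1 for t in seq if t != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

y = [3, -1, 2, -5]                       # alternating: v(y) = m - 1 = 3
N = 10
z = [N * y[i] + y[i + 1] for i in range(len(y) - 1)]
print(z, v(z))                           # [29, -8, 15] 2, i.e. v(z) = m - 2
```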
8. We now return to the assumptions of Article 6 and we are to show that they are incompatible with the additional assumption (1.6). Clearly (1.6) implies that m ≥ n + 1. We proceed by induction in m. If m = n + 1, then (1.6) implies v(y) = n, so that the y's alternate in sign. However, the y's again satisfy the relation (1.2), which again turns out to be impossible, and for the same reason as in Article 5.

Let us assume now that m > n + 1 and that the impossibility of (1.6) has already been established for lesser values of m. In order to prove its impossibility for m, we determine the positive coefficients α_v, β_v in (1.7) so as to satisfy the conditions (1.9), or (1.11), of Lemma 1. In terms of the matrix

(1.12)   M,   of m - 1 rows and m columns, whose row v has the elements α_v, β_v in its positions v, v + 1 and zeros elsewhere,

the transformation (1.7) may be written as (z) = M(y), and combining it with (y) = A(x) we obtain the linear transformation

(1.13)   (z) = MA(x).
Let us assume for the moment that the matrix MA, of dimensions (m - 1) by n, has all the properties enjoyed by A, namely:

1. MA is of rank n.
2. All non-singular minors of order n of MA have the same sign.

If these properties are granted for the moment, we may easily complete the induction argument by operating with the transformation (1.13) instead of (y) = A(x). Indeed, if (1.8) holds, then also (1.9) is true. Hence

v(z) = m - 2 ≥ n,

against our induction assumption, for (1.13) has only m - 1 equations. However, if (1.10) holds, then (1.11) follows, so that by (1.6) we find

v(z) ≥ v(y) ≥ n,

again in contradiction to our induction assumption.
Finally, a proof of the properties 1 and 2 of MA will follow from the following properties of the matrix M:

(i) All minors of M, of all orders, are ≥ 0.
(ii) The columns of every set of n columns of M are linearly independent.

The property (i) is easily shown by induction in m. As to the second, we remark that n ≤ m - 1 and that the columns of every set of m - 1 columns of M are linearly independent, because the corresponding determinant of order m - 1 is positive (it either reduces to the product of its positive diagonal terms or else it splits into the product of two minors having this property).

The properties 1 and 2 of the matrix MA are now established as follows:

1. MA is of rank n; indeed, let a non-singular, say positive, minor of A be formed from the rows j_1, ..., j_n. Consider in M the set of columns j_1, ..., j_n; these being independent, by (ii), we can select in M rows i_1, ..., i_n such that the corresponding minor of M is positive. We may now identify a positive minor, of order n, of MA: by the theorem of Cauchy-Binet, the minor of MA formed from the rows i_1, ..., i_n is a sum of products of corresponding minors of M and A, all of which are ≥ 0, while the term arising from the columns j_1, ..., j_n is positive; hence the sum is (non-negative terms) + (positive term) > 0.

2. All non-singular minors of order n of MA have the same sign, since both factors M and A have this property, which is preserved by multiplication.
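The sign argument for MA rests on the multiplicativity of compound matrices (the theorem of Cauchy-Binet): (MA)^(i) = M^(i) A^(i). A Python sketch (ours), with a banded M of non-negative minors of the kind used in (1.7), and a hypothetical A whose order-2 minors are all positive:

```python
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def compound(a, i):
    """The i-th compound: all order-i minors, lexicographically arranged."""
    rows, cols = len(a), len(a[0])
    return [[det([[a[p][q] for q in cs] for p in rs])
             for cs in combinations(range(cols), i)]
            for rs in combinations(range(rows), i)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

M = [[1, 1, 0], [0, 2, 1]]     # all minors of M are >= 0
A = [[1, 0], [1, 1], [1, 3]]   # order-2 minors: 1, 3, 2, all positive
print(compound(matmul(M, A), 2))                 # [[7]]
print(matmul(compound(M, 2), compound(A, 2)))    # [[7]], the same value
```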
This concludes a proof of Theorem 1.

§ 2. A proof of Theorem 2 concerning variation-diminishing transformations.

9. PROOF OF NECESSITY. Let us assume that (1), (y) = A(x), always implies that

(2.1)   v(y) ≤ v(x).

We show first that

(2.2)   A^(i)   (1 ≤ i ≤ r)   has definite columns.

Indeed, let C_1, C_2, ..., C_i, say, be a set of i linearly independent columns of A. Setting x_{i+1} = ... = x_n = 0 we obtain the transformation

(2.3)   (y) = || C_1, ..., C_i || (x_1, ..., x_i),

of rank i, which is also variation-diminishing, so that we always have the inequalities

v(y) ≤ v(x_1, ..., x_i) ≤ i - 1.

By Theorem 1 we learn that all non-singular minors of order i, of the matrix of (2.3), have the same sign; this proves (2.2).
There remains to show that if 1 ≤ i < r, then not only are the columns of A^(i) definite, but that A^(i) itself is definite. For this purpose we consider the special case when

(2.4)   m = n = r,   i = n - 1,

and show that A^(n-1) is definite. It is clear that the y's may now be chosen arbitrarily; let them have alternating values such that

(-1)^v y_v > 0,   (v = 1, ..., n).

Our transformation being variation-diminishing, we have v(y) = n - 1 ≤ v(x), hence v(x) = n - 1, so that the corresponding x's also alternate in sign. On solving for the x's, we find by Cramer's rule the relations

x_i = D^{-1} Σ_v (-1)^{v+i} A_{vi} y_v,   (i = 1, ..., n),

where D = det A and A_{vi} denotes the minor of order n - 1 obtained from A by deleting the row v and the column i. Since (-1)^v y_v > 0, and because we know already that all A_{v1} have the same sign, that all A_{v2} also have the same sign, and so forth, the alternation in sign of the x's implies that all non-vanishing among the A_{vi} have the same sign, i.e. A^(n-1) is definite.

We return now to the
general case to show that A^(i) is definite if 1 ≤ i < r. Let Γ, Γ' be two sets of i independent columns of A such that they have i - 1 columns in common, differing in the last columns, and such that the combined set of i + 1 columns (Γ + Γ') is linearly independent. Let A_{i+1} be a non-singular minor of Γ + Γ' of order i + 1. By setting equal to 0 the x's which correspond to columns other than those of Γ + Γ', and by ignoring the equations corresponding to rows other than those of A_{i+1}, we obtain a variation-diminishing transformation (y) = A_{i+1}(x). Here we have the special case corresponding to (2.4). We conclude that the minors of Γ and those of Γ' must have the same sign, for A_{i+1} must contain non-singular minors of order i of Γ and also such of Γ'. If the two sets Γ, Γ' do not satisfy the additional conditions assumed above, then by a theorem of Motzkin we know that they can be joined by a chain of sets of i independent columns, two adjacent members of which do satisfy those conditions (see [3], page 64). The above argument, applied consecutively to adjacent members of the chain, shows that A^(i) is indeed definite.

10. PROOF OF SUFFICIENCY. We need the following
LEMMA 2. If the matrix A satisfies the conditions of Theorem 2, then so does the matrix B obtained from A by striking out a column, and also the matrix A* obtained from A by replacing two adjacent columns of A by their linear combination with positive coefficients.

PROOF: The statement concerning B is clear, for if s is the rank of B and i ≤ s ≤ r, then B^(i) is a submatrix of A^(i). To prove the second part let us assume that

A = || C_1, ..., C_j, C_{j+1}, ..., C_n ||,   A* = || C_1, ..., αC_j + βC_{j+1}, ..., C_n ||,   (α > 0, β > 0),

and let r and r* (≤ r) be their ranks.

1. r* = r. If i < r* = r, it is clear that A*^(i) is definite, for a minor of order i of A* is either one of A or else a positive linear combination of two such, hence all are of the same sign. Also the minors of order r, in r independent columns of A*, are of the same sign, because they are either identical with similar minors of A, or else they are the elements of a linear combination of two columns of A^(r), which are definite and proportional.

2. r* < r. In this case A*^(i) is obviously definite if i ≤ r*, since i < r. This completes a proof.
Let us now assume that A satisfies the conditions of Theorem 2 and let us show that the inequality (2.1) must hold for an arbitrary but fixed set of x's. Without losing generality we may assume that not all x's are zero. We now perform on the matrix A the following operations:

1. We strike out those columns of A corresponding to vanishing x's.
2. We replace two adjacent columns, corresponding to x's of the same sign, by their positive linear combination according to the equation

x_j C_j + x_{j+1} C_{j+1} = 1 · (x_j C_j + x_{j+1} C_{j+1}),

the new column x_j C_j + x_{j+1} C_{j+1} receiving the value 1 of its variable.

By repeating the second operation sufficiently often, we reach a stage where the x's alternate in sign. By Lemma 2, the conditions of Theorem 2 are thereby preserved; thus without loss of generality we may assume that v(x) = n - 1, i.e. that the x's alternate in sign to start with. Since A^(r) has definite columns, by Theorem 1 we now conclude that

v(y) ≤ r - 1 ≤ n - 1 = v(x).

§
3. A proof of Theorem 3 concerning cyclic variation-diminishing transformations.

11. The conditions of Theorem 3 will be more conveniently handled in terms of the following definitions.

DEFINITION 2. We say that A, of rank r, is a Γ-matrix, or A ∈ Γ, if and only if the following two conditions are satisfied:

1. For every odd i, i < r, the matrix A^(i) is definite.
2. If r is odd, then A^(r) has definite columns.

DEFINITION 3. We say that A is a Δ-matrix, or A ∈ Δ, if and only if the following two conditions are satisfied:

1. A ∈ Γ (see Definition 2).
2. If r is even, we require that A should have no submatrix B, of r + 2 rows and r columns, which is an S-matrix (see Definition 1, Article 4).
We may now restate Theorem 3 as

THEOREM 3'. The transformation (y) = A(x) is cyclic variation-diminishing, i.e. we always have

(3.1)   v_c(y) ≤ v_c(x),

if and only if its matrix A is a Δ-matrix.

12. We need a number of
auxiliary theorems. The role played by S-matrices is revealed by the following

LEMMA 3. Let r be a positive even number and let A = ||a_{ij}|| be an S-matrix of r + 2 rows and r columns. The linear transformation (y) = A(x) has the property

(3.2)   sup_x v(y) = r + 1.

In other words: for appropriate values of x_1, ..., x_r, the quantities y_1, ..., y_{r+2} will alternate in sign.

The statement (3.2) has a simple geometric interpretation: if we consider in the space E_r the r + 2 points P_i = (a_{i1}, a_{i2}, ..., a_{ir}), (i = 1, ..., r + 2), and if we denote by Π the closed polygon P_1 P_2 ... P_{r+2} P_1, then (3.2) means that some appropriate hyperplane through the origin O, of E_r, will be crossed (strictly) by Π r + 2 times. Such a hyperplane will separate strictly the points P_1, P_3, ..., P_{r+1} from the points P_2, P_4, ..., P_{r+2}; this remark explains why A is called a "separable matrix" (Definition 1).

The proof will be geometric, and free use will be made of fundamental notions concerning convex polyhedral domains. We shall denote by K(V_1, V_2, ..., V_s) the convex extension of the points V_1, ..., V_s.
PROOF OF LEMMA 3: 1. The assumptions of Lemma 3 imply that the origin O is not a point of the polyhedron

K = K(P_1, -P_2, P_3, -P_4, ..., P_{r+1}, -P_{r+2}),

where -P_i = (-a_{i1}, ..., -a_{ir}). Indeed, suppose that O ∈ K; then it follows by a known theorem (see [1], page 607) that O is already a point of the convex extension of some r + 1 among the r + 2 points spanning K. This will be shown to be impossible by considering simultaneously the matrices

A = || P_1, P_2, ..., P_{r+2} ||,   B = || Q_1, Q_2, ..., Q_{r+2} ||,

where the symbols indicate rows. Notice now the following facts:

a. Since A is an S-matrix, also B is an S-matrix, because all cyclic block-minors of A either agree in value, or agree with changed sign, with the corresponding cyclic block-minors of B. Let Q_1 = P_1, Q_2 = -P_2, ..., Q_{r+2} = -P_{r+2}, so that we may write

K = K(Q_1, Q_2, ..., Q_{r+2}).

There now remains to prove the following statement.

b. No matter what group of r + 1 among the points Q_v we choose, their convex extension never contains the origin.

Indeed, we get such a group by striking out one of the r + 2 points Q_v. The property of B of being an S-matrix is clearly invariant by cyclic permutations of the points Q_v, and so is the property (b) which we wish to prove. We therefore lose no generality if we only prove that

(3.3)   O is not a point of K(Q_1, Q_2, ..., Q_{r+1}).

This statement depends on the signs of the solutions of a system of r homogeneous equations in r + 1 unknowns:

ρ_1 Q_1 + ρ_2 Q_2 + ... + ρ_{r+1} Q_{r+1} = O.

Hence there is, up to a factor, only one solution, proportional to the minors of the matrix || Q_1, Q_2, ..., Q_{r+1} || with successively changed signs. However, since B ∈ S, we know that the two minors D_1 and D_{r+1}, obtained by striking out the first and the last of the Q's respectively, are of opposite signs. Since r is even, the solutions are given by the relations

ρ_v = (-1)^{v-1} D_v c,   (c ≠ 0),

which show that ρ_1 and ρ_{r+1} are indeed of opposite signs. This establishes (3.3) and concludes a proof of our statement 1.

2. Let
K_1 be the convex cone spanned by the vectors OP_1, OP_3, ..., OP_{r+1}, and let K_2 be the convex cone spanned by OP_2, OP_4, ..., OP_{r+2}. These two cones do not have a ray in common. For if they had a ray in common, they would have a common point R ≠ 0 such that

R = λ_1 P_1 + λ_3 P_3 + ... + λ_{r+1} P_{r+1} = μ_2 P_2 + μ_4 P_4 + ... + μ_{r+2} P_{r+2},

with non-negative coefficients. However, these relations imply, by subtraction, that

λ_1 P_1 - μ_2 P_2 + λ_3 P_3 - ... - μ_{r+2} P_{r+2} = O,

a relation which contradicts our statement 1.

A proof of Lemma 3 is now readily concluded. Indeed, the cones K_1 and K_2, having no common ray, may be strictly separated by a certain hyperplane

x_1 a_1 + x_2 a_2 + ... + x_r a_r = 0,

where a_1, ..., a_r are the running coordinates. However, this means that for this choice of the x's, in the system (y) = A(x), the y's will alternate in sign.
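Lemma 3 can be watched in action on a small example. The hypothetical 4 × 2 matrix below (r = 2) has cyclic block-minors -2, 3, -2, 1, hence is an S-matrix, and the indicated x makes the four y's alternate, so that v(y) = r + 1:

```python
from fractions import Fraction

def v(seq):
    """Number of variations of sign, zeros discarded."""
    signs = [1 if t > 0 else -1 for t in seq if t != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

B = [[1, 1], [1, -1], [1, 2], [1, 0]]   # an S-matrix with r = 2
x = [Fraction(-1, 2), Fraction(1)]
y = [B[i][0] * x[0] + B[i][1] * x[1] for i in range(4)]
print([str(t) for t in y])   # ['1/2', '-3/2', '3/2', '-1/2']: alternating
print(v(y))                  # 3 = r + 1
```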
13. We pass to a couple of theorems which show the invariance of the Δ-property with respect to certain simple operations.

LEMMA 4. If A ∈ Δ and B is a submatrix of A, then also B ∈ Δ.

PROOF: Let r and s be the ranks of A and B, respectively, s ≤ r. If s = r there is nothing to prove, for s independent columns of B remain a fortiori independent if continued into A, and then B ∈ Δ is clear if s = r is odd and equally so if it is even. If s < r, there are two cases. If s is odd, the matter is again clear. Assume s to be even. Let C_1', ..., C_s' be independent columns of B and C_1, ..., C_s the same columns continued into A, where they are also independent. Finally, let C be a new column of A, so selected that C_1, ..., C_s, C are s + 1 independent columns of A. The number s + 1 being odd, the matrix

B* = || C_1, ..., C_s, C ||

is such that B*^(s+1) has definite columns, by the conditions of Definition 2 applied to A, provided the columns of B* appear in the same order as in A and not in the artificial order of this defining equation. By Theorem 1, the corresponding system (y) = B*(x) has the property v(y) ≤ (s + 1) - 1 = s. If we set equal to 0 the variable x corresponding to the column C, and drop all equations not formed with rows of B, we see that also the system (y) = B(x) has the property v(y) ≤ s, hence v_c(y) ≤ s (s being even), always. By Lemma 3 we conclude that B ∈ Δ: indeed, the presence of an S-submatrix of B would imply that sup v_c(y) ≥ s + 2, against our last statement that v_c(y) ≤ s, always.
LEMMA 5. If A is a Δ-matrix, then so is a matrix A* obtained from A by replacing two adjacent columns of A by their linear combination with positive coefficients.

PROOF: To fix the ideas let

A = || C_1, ..., C_j, C_{j+1}, ..., C_n ||,   A* = || C_1, ..., αC_j + βC_{j+1}, ..., C_n ||,   (α > 0, β > 0),

and let r and r* be their respective ranks. We have that r - 1 ≤ r* ≤ r. Comparing with the corresponding proof of Lemma 2 (Article 10), we see that the only case requiring discussion is when r is odd and r* = r - 1 is even. Suppose, then, that A* were not a Δ-matrix, i.e. that A* does have a submatrix, of r* + 2 rows and r* columns, which is an S-matrix. But then, by Lemma 3, we conclude that the system (y) = A*(x) has the property

sup v_c(y) ≥ r* + 2,

and the same conclusion remains valid for the system (y) = A(x). But v_c(y) ≥ r* + 2 = r + 1 implies that v(y) ≥ r, in contradiction to Theorem 1.

14. Our last
auxiliary theorem is an analogue of Theorem 1 which we state as

THEOREM 4. Let A be a Γ-matrix (Definition 2, Article 11) of rank r. The corresponding system (y) = A(x) has the property

(3.4)   v_c(y) ≤ r,   for arbitrary x's,

if and only if A is a Δ-matrix (Definition 3, Article 11).

PROOF OF NECESSITY: The necessity of the condition A ∈ Δ is clear, for if A were not a Δ-matrix, then Lemma 3 would imply that sup v_c(y) ≥ r + 2, in contradiction to the assumption (3.4).

PROOF OF SUFFICIENCY: Let us now assume that A ∈ Δ and let us prove that (3.4) holds. This result is clear if r is odd, because then A^(r) has definite columns, so that Theorem 1 implies that v(y) ≤ r - 1, hence v_c(y) ≤ r - 1 (because r - 1 is even), and a fortiori (3.4).
We may therefore limit ourselves exclusively to even values of r. The statement (3.4) being trivial if r = 0, we use induction and assume (3.4) to be true for all smaller even values of r. Assume now, for the remainder of the proof, that (3.4) is wrong and that, on the contrary, for a certain set of x's we have

(3.5)   v_c(y) ≥ r + 2.

By expressing, if necessary, all columns as linear combinations of a set of r linearly independent columns (as in Article 6, A) we see that we may assume that n = r without loss of generality. Select r + 2 y's having alternating signs, which is possible in view of (3.5). In this way we get a system which, on changing notations, may be written as

(3.6)   y_v = b_{v1} x_1 + ... + b_{vr} x_r,   (v = 1, ..., r + 2),

whose matrix B is a Δ-matrix (by Lemma 4), and such that

(3.7)   v(y) = r + 1,

i.e. the y's alternate in sign. Let r* be the rank of B. We distinguish two cases:

1. r* < r. If r* is odd, Theorem 1 implies that

v(y) ≤ r* - 1 < r + 1,

in contradiction to (3.7). Let r* be even; by our induction assumption concerning (3.4), we conclude that v_c(y) ≤ r*, hence v(y) ≤ r* < r + 1, again contradicting (3.7).
2. r* = r. We claim:

I. None of the r + 2 cyclic block-minors of order r, of B, can vanish. Indeed, suppose that one of these were zero. By cyclic permutations of rows, which still preserve the property B ∈ Δ, we may assume that the vanishing block-minor is the one formed from the first r rows. But then, for the partial system

y_v = b_{v1} x_1 + ... + b_{vr} x_r,   (v = 1, ..., r),

of rank r', we have r' < r, v(y_1, ..., y_r) = r - 1, v_c(y_1, ..., y_r) = r. If r' is odd we have a contradiction with Theorem 1; if r' is even, the induction assumption shows that v_c(y) ≤ r' < r, in contradiction to v_c(y) = r.

II. The consecutive
cyclic block-minors of B alternate in sign (hence B ∈ S, in contradiction to A ∈ Δ). Indeed, in (3.6) we must have x_1 ≠ 0, because r* = r. We now write down the r + 2 systems of r equations, in r unknowns, from among the equations (3.6), whose matrices are respectively the r + 2 cyclic block-minors of B. Solving each of these systems for x_1 by Cramer's rule, we obtain the relations

x_1 D_i = Σ_{k=1}^{r} (-1)^{k-1} y_{i+k-1} B^{(i)}_{k1},   (i = 1, ..., r + 2; the indices of the y's reduced mod r + 2),

where (-1)^{v-1} y_v > 0, say, while all the minors B^{(i)}_{k1}, being all of the same odd order r - 1, are necessarily of the same sign (or zero). Hence the factors (-1)^{k-1} y_{i+k-1} all have the sign (-1)^{i-1}, so that the terms of each sum are of one and the same sign, a sign which alternates with i. Since none of the D's vanishes, by our previous statement I, and x_1 ≠ 0, we conclude that the D's must indeed alternate in sign. As already remarked, this result contradicts our assumption that A ∈ Δ, and the proof is completed.
We can finally turn to a proof of Theorem 3' (Article 11).

15. PROOF THAT THE CONDITION OF THEOREM 3' IS NECESSARY. Let us assume that the inequality (3.1) always holds and let us prove that

(3.8)   A ∈ Δ.

We prove first that

(3.9)   A ∈ Γ.

The proof runs along the lines of the arguments of Article 9. Let i be odd, 1 ≤ i ≤ r, and let C_1, ..., C_i, say, be i linearly independent columns of A. The system (2.3) is also cyclic variation-diminishing, so that

v_c(y) ≤ v_c(x_1, ..., x_i).

However, i being odd, we have v_c(x_1, ..., x_i) ≤ i - 1, hence v_c(y) ≤ i - 1, and a fortiori v(y) ≤ i - 1. By Theorem 1 we conclude that A^(i) has definite columns. Again i being odd, let 1 ≤ i < r, and let us show that A^(i) is definite. Again we consider the special case (2.4) and make (-1)^v y_v > 0, (v = 1, ..., n). Since n = i + 1 is even, we find for these particular values that

v_c(y) = n,

hence