Séminaire Équations aux dérivées partielles – École Polytechnique
V. CASELLES
B. COLL
J.-M. MOREL
Partial differential equations and image smoothing
Séminaire Équations aux dérivées partielles (Polytechnique) (1995-1996), exp. nº 21, p. 1-30
<http://www.numdam.org/item?id=SEDP_1995-1996____A21_0>
© Séminaire Équations aux dérivées partielles (École Polytechnique), 1995-1996, all rights reserved.
Access to the archives of the seminar Équations aux dérivées partielles (http://sedp.cedram.org) implies agreement with the general conditions of use (http://www.numdam.org/conditions).
Any commercial use or systematic printing constitutes a criminal offence. Any copy or printing of this file must contain this copyright notice.
EQUATIONS AUX DERIVEES PARTIELLES

PARTIAL DIFFERENTIAL EQUATIONS AND IMAGE SMOOTHING

V. CASELLES, B. COLL and J.-M. MOREL

Centre de Mathématiques, Unité de Recherche Associée D 0169, École Polytechnique, F-91128 Palaiseau Cedex (France). Séminaire 1995-1996.
Partial differential equations and image smoothing
Vicent Caselles, Bartomeu Coll and Jean-Michel Morel
July 12, 1996
Abstract

We try to answer the following question, crucial in digital image processing: is image smoothing possible? We give an essentially negative answer, based on the physics of image generation and the consequent invariance requirements, which we prove cannot be matched by any local smoothing operation. We temper this negative result by showing that local image smoothing is possible provided the singular points of the image have been previously detected and preserved. We define the associated degenerate partial differential equation and sketch the proof of its mathematical validity. Our analysis uses the phenomenological theory of image perception of Gaetano Kanizsa, whose mathematical translation yields an algorithm for the detection of the discontinuity points of digital images.
1 Introduction
Images, in a continuous model, are functions u(x) defined on a domain Ω of the plane, generally a rectangle. Let us consider, without much loss of generality, grey level images, that is, images where u(x) is a real number between (say) 0 and 255 denoting the "grey level" or brightness. In practice, images are digitized, which means that the values of u(x) are given only on a discrete grid of a rectangle. It is, however, very convenient to discuss the geometry of images in the continuous framework of functional analysis. This passage to a continuous model is valid provided it is shown that algorithms defined in the continuous model can be made effective on their digitized trace. We shall in this conference sketch the process of image formation and deduce which kind of geometric structure images are then expected to have. We shall be particularly interested in the singularities of the image which are inherent to the formation process. We shall then discuss whether the smoothing procedures which have been proposed in the past in order to smooth the image (and thereafter compute derivatives of the smoothed image) are compatible or not with the image formation process. In short: can a smoothing process be performed which does not destroy what it ought to detect, namely the image structure? We shall see that the answer is close to negative, and show that in fact singularities in the image must be detected prior to any smoothing procedure.
An image smoothing operator, in a very restrictive way, can then be defined, and we shall explain how.

It would be very useful to compute derivatives of any order on an image u(x). The datum being at first sight discontinuous everywhere, this looks desperate unless some notion of scale is introduced. The scale parameter, which we denote by t, measures the degree of smoothing we allow in an image. Scale t = 0 corresponds to the actual image, and we denote by u(t, x) the image smoothed at scale t. Let us, without criticism for this first review, list the smoothing processes proposed so far. In all of these processes, described by parabolic P.D.E.'s, we set u(0, x) = u(x), the initial datum. In the forthcoming formulas,

$$\Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}$$

denotes the Laplacian of u(t, x, y) with respect to the space variables (x, y), Du = (∂u/∂x, ∂u/∂y) its spatial gradient, and

$$\mathrm{curv}(u) = \mathrm{div}\left(\frac{Du}{|Du|}\right)$$

the curvature of the level line, where |Du| is the euclidean norm of Du and div(v₁, v₂) = ∂v₁/∂x + ∂v₂/∂y. The main smoothing equations we have in mind are:

• The heat equation [Wi]:

$$\frac{\partial u}{\partial t} = \Delta u.$$

• The mean curvature motion (Osher-Sethian equation [OS]):

$$\frac{\partial u}{\partial t} = |Du| \, \mathrm{curv}(u).$$

• The A.M.S.S. model ([AGLM1], [ST1]):

$$\frac{\partial u}{\partial t} = |Du| \, (\mathrm{curv}(u))^{1/3}.$$
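To make the first of these models concrete, here is a minimal pure-Python sketch of explicit heat-equation smoothing on a grid. The function names, the grid conventions (unit spacing, Neumann boundary by edge replication) and the time step dt = 0.2, chosen to satisfy the usual stability bound dt ≤ 1/4, are our illustrative choices, not the paper's.

```python
def heat_step(u, dt=0.2):
    """One explicit Euler step of du/dt = Laplacian(u), using the 5-point
    stencil with Neumann boundary conditions (edge replication)."""
    h, w = len(u), len(u[0])
    out = [row[:] for row in u]
    for i in range(h):
        for j in range(w):
            up    = u[max(i - 1, 0)][j]
            down  = u[min(i + 1, h - 1)][j]
            left  = u[i][max(j - 1, 0)]
            right = u[i][min(j + 1, w - 1)]
            lap = up + down + left + right - 4.0 * u[i][j]
            out[i][j] = u[i][j] + dt * lap
    return out

def smooth(u, t, dt=0.2):
    """Smooth the image u up to scale t by iterating heat_step."""
    for _ in range(int(t / dt)):
        u = heat_step(u)
    return u

# A white dot on a black background diffuses into a blur:
u0 = [[0.0] * 5 for _ in range(5)]
u0[2][2] = 255.0
u1 = smooth(u0, 1.0)
```

One checks that the total brightness is conserved while the extremum is flattened, as expected from a diffusion.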
Other diffusion equations have been proposed for image smoothing, but the ones above will suffice for the discussion. It has been proven that the heat equation is the only linear, isotropic image smoothing model, and that the A.M.S.S. model is the only contrast invariant, affine invariant image smoothing model. The Osher-Sethian equation is somehow an intermediate case, being both quasilinear and contrast invariant. The heat equation performs beautifully for the aim mentioned at the beginning, the computation of derivatives of any order in an image. However, it is not contrast invariant. Contrast invariance means that the smoothing operator T_t : u → u(t) = T_t u commutes with contrast changes g. Any nondecreasing real function g is an admissible contrast change, and the observed values u(x) are in practice defined up to an unknown contrast change. The third model above has been proposed as the most invariant smoothing model in [ST1]. We refer to the literature for a very synthetic proof of its uniqueness. There is, however, a significant objection to the second and third models: in order to make sense mathematically, they require that the initial image be continuous! We shall see in the next section that this is not a realistic requirement for images. If we remove this requirement, we can still draw the same conclusions as in [AGLM], but only locally, at points where the image is not discontinuous (the points of discontinuity are what we call singular points).
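The non-commutation with contrast changes can be checked numerically. The following pure-Python sketch is our own illustration, not the paper's: a 3x3 median filter, a classical morphological (hence contrast invariant) operator, commutes with a nondecreasing contrast change g, while one explicit heat-equation step does not.

```python
def median3(u):
    """3x3 median filter: a contrast invariant (morphological) operator.
    Only interior pixels are updated in this sketch."""
    h, w = len(u), len(u[0])
    out = [row[:] for row in u]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(u[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]
    return out

def heat_step(u, dt=0.2):
    """One explicit step of the (linear) heat equation."""
    h, w = len(u), len(u[0])
    return [[u[i][j] + dt * (u[max(i - 1, 0)][j] + u[min(i + 1, h - 1)][j]
                             + u[i][max(j - 1, 0)] + u[i][min(j + 1, w - 1)]
                             - 4.0 * u[i][j])
             for j in range(w)] for i in range(h)]

def g(s):
    """A nondecreasing but nonlinear contrast change (our choice)."""
    return s * s / 255.0

def apply_g(u):
    return [[g(v) for v in row] for row in u]

u = [[0.0, 50.0, 100.0], [150.0, 200.0, 250.0], [10.0, 60.0, 110.0]]
```

Since g is nondecreasing, the median of the g-transformed window is g of the median, so the median filter commutes exactly; the linear heat step does not, because g is nonlinear.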
This will become clear from the discussion below. Our plan is as follows. In Section 2, we show how the image formation process entails the generation of singularities and leads to invariance requirements for the image operators. In Section 3, we deduce which are the largest invariant objects (or "atoms") in an image which can be taken as initial data for smoothing operations: we prove that they consist in pieces of level lines of the image joining singular points. Section 4 is devoted to the effective computation of "atoms" and singular points of the image and to first experiments. In Section 5 we finally attain our aim, which is to propose a P.D.E. model with the right boundary conditions at the singular points of the image. We prove experimentally that the visual aspect of images is not altered by such a structure-preserving smoothing process. An experiment shows how, thanks to the smoothing thus defined, we can transform an image into a topographic map revealing the structure of the image and its essential singularities. This conference text presents some of the arguments, experiments and proofs which are extensively developed in an article to appear ([CCM]). The axiomatic theory of image smoothing which we use is the object of a book to appear ([GM]).
2 How images are generated: occlusion and transparency as basic operations
In this section, we make two concrete assumptions about the kind of images under consideration. We first assume that those images are obtained by a photographic device, that is, a camera sensitive to the visible spectrum. We restrict the study to the case of grey level images. Second, we assume that the photographs thus taken come from a natural, human or animal, environment and therefore roughly correspond to common retinal images. This may seem a very vague assertion, but we shall give it a precise sense in the following.
2.1 Occlusion as a basic operation on images
The natural human visual world is made of objects of many sizes and heights which are spread out on the ground. Since the observer or the camera is assumed to be placed close to ground level, those objects tend to occlude each other. As pointed out by the phenomenologist Kanizsa [Ka], we generally see only parts of the objects in front of us because they occlude each other. Let us formalize this by assuming that objects have been added one by one to a scene. Given an object A in front of the camera, we call Ã the region of the image onto which it is projected by the camera. We call u_A the part of the image thus generated, which is defined in Ã. Assuming now that the object A is added to a real scene R of the world whose image was v, we observe a new image which depends upon which part of A is in front of objects of R, and which part is behind. Assuming that A occludes objects of R and is occluded by no object of R, we get a new image u_{R∪A} defined by

$$u_{R\cup A}(x) = u_A(x) \text{ if } x \in \tilde A, \qquad u_{R\cup A}(x) = v(x) \text{ if } x \notin \tilde A. \tag{1}$$
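Formula (1) is straightforward to implement. The following pure-Python sketch (the function names and the boolean-mask representation of Ã are our choices) composes a new object over a background image:

```python
def occlude(v, u_A, region):
    """Formula (1): the composed image equals u_A on the region onto which
    the new object projects (region[i][j] True) and v elsewhere."""
    h, w = len(v), len(v[0])
    return [[u_A[i][j] if region[i][j] else v[i][j]
             for j in range(w)] for i in range(h)]

# A dark 2x2 square occluding part of a brighter background:
v = [[100 for _ in range(4)] for _ in range(4)]
u_A = [[200 for _ in range(4)] for _ in range(4)]
region = [[1 <= i <= 2 and 1 <= j <= 2 for j in range(4)] for i in range(4)]
u_new = occlude(v, u_A, region)
```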
Of course, we do not take into account in this basic model the fact that objects in R may intercept light falling on A, and conversely. In other terms, we have omitted the shadowing effects, which will now be considered.

Figure: T-junctions and occlusions. The lefthand figure, composed of two regions A and B, is interpreted as the superposition of two objects A' and B', which are represented on the right by their contours. The completion thus effectuated by a low level vision process consists in extending, beyond the T-junctions a and b, the boundary of the object B', which apparently "stops" when it meets the object A'. This interruption is interpreted as an occlusion.

2.2 Transparency (or shadowing) as a second basic operation on images
Let us assume that the light source is a point in euclidean space, and that an object A is interposed between a scene R, whose image is v, and the light source. We call S the shadow spot of A and S̃ the region it occupies in the image plane. The resulting image u is defined by

$$u(x) = g(v(x)) \text{ if } x \in \tilde S, \qquad u(x) = v(x) \text{ if } x \notin \tilde S. \tag{2}$$
Here, g denotes a contrast change function due to the shadowing, which is assumed to be uniform in S. Clearly, we must have g(s) ≤ s, because the brightness decreases inside a shadow, but we do not know in general how g looks. The only assumption justifying the introduction of g is that points with equal grey level s before shadowing get a new, but the same, grey level g(s) after shadowing. Of course, this model is not true on the boundary of the shadow, which can be blurry because of diffraction effects or because the light source is not really reducible to a point. Another problem is that g in fact depends upon the kind of physical surface which is shadowed, so that it may well be different on each of the shadowed objects. This is no real restriction, since it only means that the shadow spot S̃ must be divided into as many regions as there are shadowed objects in the scene; we only need to iterate the application of the preceding model accordingly.
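Formula (2) admits the same one-line implementation as the occlusion formula. In this pure-Python sketch (the attenuation g(s) = 0.6 s is a hypothetical choice satisfying g(s) ≤ s), one can check the two properties just stated: brightness decreases inside the shadow spot, and points with equal grey level before shadowing keep equal grey levels after.

```python
def shade(v, spot, g):
    """Formula (2): grey levels inside the shadow spot (spot True) are
    remapped by the contrast change g; the image is unchanged elsewhere."""
    return [[g(x) if m else x for x, m in zip(row, mrow)]
            for row, mrow in zip(v, spot)]

def g(s):
    """Hypothetical shadow attenuation; any nondecreasing g with
    g(s) <= s would do."""
    return 0.6 * s

v = [[100, 100, 200], [100, 200, 200], [50, 50, 50]]
spot = [[True, True, False], [True, True, False], [False, False, False]]
u = shade(v, spot, g)
```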
A variant of the shadowing effect which has been discussed in perception psychology is transparency. In the transparency phenomenon, a transparent homogeneous object S (in glass, for instance) is interposed between part of the scene and the observer. Since S intercepts part of the light sent by the scene, we still get a relation like (2), so that transparency and shadowing are simply equivalent from the image processing viewpoint. If transparency (or shadowing) occurs uniformly on the whole scene, Relation (2) reduces to u = g(v) on the whole image plane, which means that the grey-level scale of the image is altered by a nondecreasing contrast change function g.

2.3 Requirements for image analysis operators
Of course, we do not know a priori, when we look at an image, which physical objects have left a visual trace in it. We know, however, that the operations having led to the actual image are given by formulas (1)-(2). Thus, any processing of the image should avoid destroying the image structure resulting from (1)-(2): this structure conveys the information about shadows, facets and occlusions, which are clues to the physical organization of the scene in space. The identity and shape of objects must be recovered from the image by means which are stable with respect to these operations. As a basic and important example, let us recall how the Mathematical Morphology school has insisted on the fact that image analysis operations must be invariant with respect to any contrast change g. In the following, we shall say that an operation T on an image u is morphological if

$$T(g \circ u) = g \circ T(u)$$

for any nondecreasing contrast change g.

3 Consequences of image formation: basic objects of image analysis
We call basic objects a class of mathematical objects, simpler to handle than the whole image, but into which any image can be decomposed and from which it can be reconstructed. The classical examples of image decompositions are:

• Additive decompositions into simple waves. The basic objects of Fourier analysis are cosine and sine functions, the basic objects of Wavelet analysis are wavelets or wavelet packets, and the basic objects of Gabor analysis are windowed (gaussian modulated) sines and cosines. In all of these cases the decomposition is an additive one, not adapted to the structure of images, except perhaps for restoration processes in the presence of additive noise. Indeed, the operations leading to the construction of real world images are strongly nonlinear, and the simplest of them, the contrast change, does not preserve additive decompositions: if u = u₁ + u₂, then it is not true that g(u) = g(u₁) + g(u₂) when the contrast change g is nonlinear.
• Next, we have the representation of the image by a segmentation, that is, a decomposition of the image into homogeneous regions separated by boundaries, or "edges". The criteria for the creation of edges or boundaries are the strength of the contrast across the edges, along with the homogeneity of the regions. Both of these criteria are simply not invariant with respect to contrast changes. Indeed, ∇(g(u)) = g'(u) ∇u, so that we can alter the value of the gradient by simply changing the contrast.
• Finally, we can decompose, as proposed by the Mathematical Morphology school, an image into its binary images (or level sets) obtained by thresholding. We set X_λu(x) = white if u(x) ≥ λ and X_λu(x) = black otherwise. The white set is then called the level set of u at level λ. It is easily seen that, under fairly general conditions, an image can be reconstructed from its level sets by the formula

$$u(x) = \sup\{\lambda,\ X_\lambda u(x) = \text{white}\}.$$

The decomposition therefore is nonlinear, and reduces the image to a family of plane sets (X_λ).
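Both the thresholding decomposition and the reconstruction formula can be checked directly. The sketch below is pure Python with our own naming (booleans stand for white); it also verifies the contrast invariance discussed below, namely that an increasing g changes the levels but not the family of level sets.

```python
def level_set(u, lam):
    """X_lam u: True (white) where u(x) >= lam."""
    return [[v >= lam for v in row] for row in u]

def reconstruct(u, levels):
    """u(x) = sup{lam, X_lam u(x) = white}, over a discrete set of levels."""
    sets = {lam: level_set(u, lam) for lam in levels}
    h, w = len(u), len(u[0])
    return [[max(lam for lam in levels if sets[lam][i][j])
             for j in range(w)] for i in range(h)]

def family(u, levels):
    """The family of level sets, each identified by its set of white pixels."""
    h, w = len(u), len(u[0])
    return {frozenset((i, j) for i in range(h) for j in range(w)
                      if u[i][j] >= lam) for lam in levels}

u = [[0, 50, 100], [200, 50, 0], [100, 200, 50]]
```

For an integer-valued image and the levels 0, ..., 255, the reconstruction is exact; and g(s) = 2s + 3, an increasing contrast change, leaves the family of level sets unchanged.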
In addition, if we assume that the level sets are Caccioppoli subsets of ℝ², that is, sets whose boundary has finite length, then a classical theorem asserts that their boundary is a union of closed Jordan curves (see [MS], Chapter 6). In the discrete framework, this theorem is obvious and we can associate with each level set a finite set of Jordan curves which define its boundary. Conversely, the level set is uniquely defined from those Jordan curves. We shall call them the level curves of the image.
Are level sets and level curves the sought-for basic objects of image processing? In some sense they are better than all the "basic objects" discussed above, because they are invariant under contrast changes. To be more precise, if we transform an image u into g∘u, where g is an increasing continuous function (understood as a contrast change), then it is easily seen that the set of level sets of g∘u is equal to the set of level sets of u. However, level sets are drastically altered by occlusion or shadowing. Let us discuss this point.
One can see in Figure 4 an elementary example of an image generated by occlusion. A grey disk is partly occluded by a darker square (a). In (b) we display a perspective view of the image graph. In (c) and (d) we see two of the four distinct level sets of the image, the other ones being the empty set and the whole plane. It is easily seen that none of the level sets (c) and (d) corresponds to the physical objects in the image. Indeed, one of them results from their union and the other one from their set difference. The same thing is true for the level lines (e) and (f): they appear as the result of some "cut and paste" process applied to the level lines of the original objects. It is nonetheless true, because of the invariance with respect to contrast changes, that the basic objects of image processing must be somehow parts of the level lines. This leads us to our proposition.
• Our proposition: Basic objects are all junctions of level lines (particularly: T-junctions, X-junctions, Y-junctions) and the parts of level lines joining them.

Figure 4: Image, level sets, level lines and T-junctions.
The terms contained in this proposition will be explicated one by one, starting with T-junctions. We again refer to Figure 4 for a first simple (but, in fact, general) example. The level lines (e) and (f) represent two level lines at two different levels, and in (g) we have superposed them in order to put in evidence how they are organized with respect to each other. We have displayed one of them as a thin line, the other one as a bold line, and the common part in grey. The T-junctions can be defined in such an ideal case as the points where two level lines corresponding to two different levels meet and remain together. Let us start with some simple phenomenology of junctions.

Figure 5: The two kinds of T-junctions, depending on the ordering of grey levels.

3.1 T-junctions
As shown in Figure 5, there are two possibilities, which depend upon the order of the grey levels around the T-junction. In the first case, the T-junction appears as two joint level lines taking at the junction two opposite directions. In the second case, one of the level lines is locally straight and the other one has two branches, one of which coincides with the first level line. Thus a T-junction at a point x can be (a little bit naively) described in the following way.

• Two level lines meet at the point x.

• The half level lines (or branches) issued from x are thus four in number: two of them coincide and two take opposite directions. (The branches which coincide belong to different level lines. The branches which take opposite directions may also belong to the same level line or not.)
3.2 Transparencies and X-junctions
The next case of singularity is caused by the transparency phenomenon, and we shall call it the X-junction. In the transparency phenomenon, we have assumed that inside a spot S, an original image v is changed into u = g(v), where g is a contrast change. We keep u = v outside the spot S̃. As a consequence, a level line with level λ outside S̃ becomes a level line with level g(λ) inside the spot, so that the level lines inside S̃ are continuations of level lines outside S̃. Fuchs showed that this results in the perceptual phenomenon of continuation: we tend to see the visible edges of v crossing the boundary of S̃ [Fu]. Of course, this illusion is based on the continuity of direction. In the same way, of course, the boundary of S̃ is seen as crossing the level lines of v. In fact, level lines cannot cross, and the behaviour of level lines is analyzed in Figure 9. In the transparency phenomenon, the apparent crossing, which we shall call an X-junction, consists in the meeting at a point x of two level lines. These level lines locally create four angular regions. Without loss of generality, we can assume that the angular regions have grey levels 1, 2, 3 and 4. Indeed, the grey levels need not be constant inside each region, but we know that the ranges of the four angular regions are ordered; this is an obvious consequence of the fact that each pair of regions is separated by a level line. Then we see that three cases may occur, which are shown in Figure 9.

Figure 9: The three kinds of X-junctions. We first display (above) three kinds of visual experiments, which lead respectively to checkerboard, shadowing and transparency sensations. The second and third rows focus on the resulting X-junctions and the local grey level values. The last row shows the resulting configuration of level lines. By "=2.5", we denote the level lines separating the regions 1 and 2 from the regions 3 and 4.

• First case: "checkerboard". The higher grey levels 3 and 4 are in two diagonally opposite regions, and the lower grey levels 1 and 2 in the other two opposite regions. (The perfect checkerboard case corresponds to 3 = 4 and 1 = 2, but we display a more general case with the same structure for level lines.) The checkerboard singularity does not seem to correspond to many physical phenomena, but is nonetheless frequent in the human world, probably for the designer's pleasure of showing a paradoxical, physically impossible, visual situation. In this case, we can see how an intermediate level line forms a cross.
• Second case: "shadowing". The lower regions 1 and 2 are adjacent, and so are the regions 3 and 4. In addition, 1 is adjacent to 3 and 2 to 4. Somehow, this situation is the most stable and least paradoxical of all, since it can be interpreted as the crossing of a shadow line with an edge. It is worth noticing that in this case the level line which separates the sets {x, u(x) ≤ 2} and {x, u(x) ≥ 3} can be interpreted indifferently as the shadow line or the edge line. The other (shadow or edge) line has no existence as a level line, but is obtained by concatenating two branches of two distinct level lines. The level lines bounding the sets {u > 1} and {u > 3} only meet at the point x and form the "X-junction" which we shall trace in experiments on real images.
• Third case: "transparency". The regions 1 and 2 with lower grey-level and the higher grey-level regions 3 and 4 are again adjacent, but the pairs of extreme regions also are adjacent: 1 to 4 and 2 to 3. The transparent material is adding its own color, which is assumed here to be light grey, so that white becomes light grey and black becomes dark grey. The interpretation is not twofold in this case: the original edge (horizontal in the figure) must be the level line bounding the set {u > 2}. The shadow line (vertical in the figure) is built with two branches of distinct level lines.
3.3 Discussion of the decomposition into basic objects
Let us summarize our proposition for the basic objects, or "atoms", of image processing. It consists in arguing about the invariance of the image analysis with respect to the accidents of the image generation.

Argument 1 (Invariance Argument)

• Since the image formation may include an unknown and non recoverable contrast change: we reduce the image to its parts invariant with respect to contrast changes, that is, the level lines.

• Since, every time we observe two level lines (or more) joining at a point, this can be the result of an occlusion or of a shadowing, we must break the level lines at this point: indeed, the branches of level lines arriving at a junction are likely to be parts of different visual objects (real objects or shadows). As a consequence, every junction is a possible cue to occlusion or shadowing.

A remarkable point about Argument 1 is that it needs absolutely no assumption about the physical objects, but only on the "final" part of image generation by contrast changes, occlusions and shadowing. The conclusion of Argument 1 coincides with what is stated by phenomenology [Ka]. Indeed, Gaetano Kanizsa proved the main structuring role of T- and X-junctions in our interpretation of images. Of course, he does not give indications on how these junctions should be detected, but Argument 1 shows that there is a canonical way to do the detection.

4 Computation of basic objects in a digital image
4.1 Algorithm computing basic objects
In this section, we discuss how the atoms discussed above can be detected in digital images, and we present experimental results. The above description of T- and X-junctions is based on the assumptions that:

• Level sets and level lines can be computed.

• The meeting of two level lines with different levels can be detected.

In a digital image, the level sets are computed by simple thresholding. A level set {u(x) ≥ λ} can be immediately displayed in white on a black background. In the present technology, λ = 0, 1, ..., 255, so that we can associate with an image 255 level sets. The Jordan curves limiting the level sets are easily computed by a straightforward algorithm: they are chains of vertical and horizontal segments limiting the pixels. In the numerical experiments, these chains are represented as chains of pixels by simply inserting "boundary pixels" between the actual image pixels, which amounts to refining the image grid by a factor 2.
We define "junctions" in general as every point of the image plane where two level lines (with different levels) meet. The meeting of two level lines can be considered as physically meaningful if and only if the level lines diverge significantly from each other. In order to distinguish between the true junctions and the ones due to quantization effects, we decide to take a T-junction into account if and only if:

• the area of the occluding object is large enough,

• the apparent area of the occluded object is large enough, and

• the area of the background is large enough too.

This threshold on area must of course be as low as possible, because there is a risk of losing T-junctions between small objects. Significant objects of small size appear very often in natural images: they may simply be objects seen at a large distance. In any case, the area threshold only depends on the quantization parameters of the digital image and tends ideally to zero when the accuracy of the image tends to infinity. The algorithm for the elimination of dubious junctions
is as follows.

Junction Detection Algorithm

• Fix an area threshold n (in practice, n = 40 pixels seems sufficient to eliminate junctions due to quantization effects).

• Fix a grey level threshold b (in practice, b = 2 is sufficient to avoid grey level quantization effects).

• At every point x where two level lines meet, define λ₀ and μ₀ as the minimum and maximum values of u in the four neighboring pixels of x.

• Denote by L_λ the connected component of x in the set {y, u(y) ≤ λ} and by M_μ the connected component of x in the set {y, u(y) ≥ μ}. Find the smallest λ ≥ λ₀ such that the area of L_λ is larger than n; call this value λ₁. Find the largest μ ≤ μ₀ such that the area of M_μ is larger than n; call this value μ₁.

• If λ₁ and μ₁ have been found, if μ₁ − λ₁ ≥ 2b, and if the set {y, μ₁ − b > u(y) > λ₁ + b} has a connected component containing x with area larger than n, then retain x as a valid junction.
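The filtering step of this algorithm can be sketched in pure Python. This is our own implementation of our reading of the steps above; the BFS flood fill, the early stopping once the area exceeds the cap, and the handling of the candidate point through its 4-neighborhood are implementation choices, not the authors' code.

```python
from collections import deque

def area_near(u, x, pred, cap):
    """Area (stopping early once it exceeds cap) of the region satisfying
    pred that is 4-connected to the candidate point x or its 4-neighbors."""
    h, w = len(u), len(u[0])
    i, j = x
    seeds = [(i, j), (i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    seen, queue = set(), deque()
    for si, sj in seeds:
        if 0 <= si < h and 0 <= sj < w and pred(u[si][sj]) \
                and (si, sj) not in seen:
            seen.add((si, sj))
            queue.append((si, sj))
    while queue and len(seen) <= cap:
        ci, cj = queue.popleft()
        for ni, nj in ((ci + 1, cj), (ci - 1, cj), (ci, cj + 1), (ci, cj - 1)):
            if 0 <= ni < h and 0 <= nj < w and (ni, nj) not in seen \
                    and pred(u[ni][nj]):
                seen.add((ni, nj))
                queue.append((ni, nj))
    return len(seen)

def valid_junction(u, x, n=40, b=2):
    """Retain a candidate junction x (assumed interior) if the dark, bright
    and intermediate regions around it all have area larger than n."""
    i, j = x
    neigh = [u[i + 1][j], u[i - 1][j], u[i][j + 1], u[i][j - 1]]
    lam0, mu0 = min(neigh), max(neigh)
    lam1 = next((lam for lam in range(lam0, 256)
                 if area_near(u, x, lambda v: v <= lam, n) > n), None)
    mu1 = next((mu for mu in range(mu0, -1, -1)
                if area_near(u, x, lambda v: v >= mu, n) > n), None)
    if lam1 is None or mu1 is None or mu1 - lam1 < 2 * b:
        return False
    return area_near(u, x, lambda v: lam1 + b < v < mu1 - b, n) > n

# Three flat regions (levels 0, 100, 200) meeting at a T-junction near (5, 6):
u = [[0] * 6 + [100] * 6 for _ in range(6)] + [[200] * 12 for _ in range(6)]
```

On this toy image, the point (5, 6) where the three regions meet is retained, while a point on the straight edge between two regions, such as (5, 2), is rejected because no third region of significant area surrounds it.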
4.2 Experiments
The experiments have been made on a SUN IPC workstation, with the image processing environment MegaWave II, whose author is Jacques Froment.

• Experiment 1: the boolean structure of occlusion retrieved by level set analysis in two successive images of a sequence. The eight images u₁, ..., u₈ of the experiment are displayed from left to right and from top to bottom. The images u₁ and u₂ are grey-level images, and the other ones are binary images (black and white) which display level sets and logical operations effectuated on them. The coding convention is black = 0 = false, white = 255 = true, so that we identify level sets with binary images, union with max, intersection with min, and set difference with "-". We first see el caminante (a walking researcher, photographed by Paco Perales) in two successive positions u₁ and u₂. Then el caminante without skirting-board: u₃ = {x, u₁(x) > 101}, and el caminante with skirting-board: u₄ = {x, u₁(x) > 70}. Next, u₅ = u₄ - u₃, that is, the skirting-board without the walking man, and u₆ = {x, u₂(x) > 70}, el caminante with skirting-board in Position 2. Finally u₇ = max(u₃, u₆): the resulting black parts display the part of the skirting-board occluded in Position 1, plus the part of the ground which remains occluded by the feet of the caminante in both images. In u₈ = min(u₇, u₅), we display the reconstruction of the whole skirting-board by adding the occluded part in Position 1 deduced from Position 2.
• Experiment 2: detection of T-junctions as meeting points of level lines. The original grey-level image is u₁. In u₂ we see el caminante with skirting-board: u₁ > 60, and u₃ is el caminante without skirting-board. We display in u₄ the level lines of u₂ in white and the level lines of u₃ in black, with their common parts in dark grey. Candidates to T-junctions generated by the main occlusions can be seen. They are characterized as meeting points of a black, a white and a grey line. In u₅, we see all T-junctions detected from those two level sets, after a filtering of spurious T-junctions has been made, with area threshold n = 80.

Experiment 1.
5 Image filtering preserving Kanizsa singularities

5.1 Image and level curve smoothing models
In this section, we push a little further the exploration of the invariant image or level line filtering methods which have been recently proposed in [AGLM1] and [ST1]. According to the axiomatic presentation of iterative image smoothing (or "pyramidal filtering", or "scale space") proposed in [AGLM1], all iterative filtering methods depending on a scale parameter t can be modelized by a diffusive partial differential equation of the kind

$$\frac{\partial u}{\partial t} = F(Du, D^2u),$$

where u₀(x, y) is the initial image, u(t, x, y) is the image filtered at scale t, and Du, D²u denote the first and second derivatives of u with respect to (x, y). (We set Du(x, y) = (∂u/∂x (x, y), ∂u/∂y (x, y)), and in the same way we define D²u(x, y) as the symmetric matrix of second partial derivatives u_xx, u_xy, u_yx, u_yy of u with respect to x and y.)
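From these definitions, curv(u) = div(Du/|Du|) can be computed pointwise by finite differences; expanding the divergence gives curv(u) = (u_xx u_y² - 2 u_xy u_x u_y + u_yy u_x²) / |Du|³. A pure-Python sketch (grid spacing 1; the regularization eps, which avoids division by zero at critical points, is our choice):

```python
def curvature(u, i, j, eps=1e-8):
    """curv(u) = div(Du/|Du|) at an interior pixel, by central differences:
    (u_xx u_y^2 - 2 u_xy u_x u_y + u_yy u_x^2) / |Du|^3."""
    ux  = (u[i][j + 1] - u[i][j - 1]) / 2.0
    uy  = (u[i + 1][j] - u[i - 1][j]) / 2.0
    uxx = u[i][j + 1] - 2.0 * u[i][j] + u[i][j - 1]
    uyy = u[i + 1][j] - 2.0 * u[i][j] + u[i - 1][j]
    uxy = (u[i + 1][j + 1] - u[i + 1][j - 1]
           - u[i - 1][j + 1] + u[i - 1][j - 1]) / 4.0
    grad2 = ux * ux + uy * uy
    return (uxx * uy * uy - 2.0 * uxy * ux * uy + uyy * ux * ux) \
        / (grad2 ** 1.5 + eps)

# u(x, y) = x^2 + y^2: the level lines are circles of radius r = sqrt(u),
# whose curvature is 1/r; a linear ramp has straight level lines (curv 0).
bowl = [[float(i * i + j * j) for j in range(9)] for i in range(9)]
ramp = [[float(j) for j in range(5)] for i in range(5)]
```

At the point (3, 4), which lies on the circle of radius 5, this returns 1/5, as expected; on the ramp it returns 0.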
The most invariant "scale space" proposed in [AGLM1] is the Affine Morphological Scale Space, that is, an equation of the preceding kind which is invariant with respect to contrast changes of the image and affine transforms of the image plane:

$$\frac{\partial u}{\partial t} = |Du| \, (\mathrm{curv}(u))^{1/3},$$

where curv(u) is the curvature of the level line passing through (x, y). The interpretation of the AMSS model is given in [AGLM1] and corresponds to the following evolution equation for curves, proposed independently in [ST1], which we call (ASS) (Affine Scale Space of curves):

$$\frac{\partial x}{\partial t} = (\mathrm{curv}(x))^{1/3} \, \vec n,$$

where