In this section we consider G = (V, E) to be any transient graph. Let Λ be a finite subset of V. The Gaussian Free Field (or GFF) with 0 boundary conditions on Λ is defined to be the random (Gaussian) field ϕ = (ϕ_x : x ∈ Λ) in R^Λ with distribution

dP_Λ[ϕ] := (1/Z_Λ) exp[−D_Λ(ϕ)] dϕ,

where Z_Λ is a normalizing constant, dϕ stands for the Lebesgue measure on R^Λ, and D_Λ(ϕ) is the Dirichlet energy given by

D_Λ(ϕ) := (1/2) ∑_{xy∈E : {x,y}∩Λ≠∅} (ϕ_x − ϕ_y)²,

where ϕ is extended to every vertex of V by setting ϕ_x = 0 for every x ∈ Λ^c. Under the assumption of transience of the graph G, which follows from (H_d) for d > 2, one can extend the GFF to Λ = V by taking the weak limit P of the measures P_Λ as Λ tends to V. The measure P is simply the centered Gaussian vector with covariance matrix given by the Green function G; see [30]. Expectation with respect to P (resp. P_Λ) is denoted by E (resp. E_Λ). The main result of this section is the following.
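Before stating the result, here is a minimal numerical sketch of the finite-volume definition (our own toy example, not part of the argument): on a path segment with zero boundary, the density exp[−D_Λ(ϕ)] is that of a centered Gaussian vector whose covariance is the inverse of the Dirichlet Laplacian, i.e. the Green function of the walk killed on leaving Λ. All names in the snippet are ours.

```python
import numpy as np

# Dirichlet (zero boundary) Laplacian of the path {1,...,5} inside {0,...,6}:
# the Dirichlet energy reads D(phi) = phi^T L phi / 2, so the GFF density
# exp[-D(phi)] is that of a centered Gaussian with covariance G = L^{-1}.
n = 5
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
G = np.linalg.inv(L)

# Exact Green function of this tridiagonal Laplacian (with 1-based vertices k, m):
# G[k, m] = min(k, m) * (n + 1 - max(k, m)) / (n + 1).
i, j = np.indices((n, n))
G_exact = (np.minimum(i, j) + 1) * (n - np.maximum(i, j)) / (n + 1)

# Monte Carlo check: the empirical covariance of GFF samples approximates G.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(n), G, size=200_000)
emp_cov = samples.T @ samples / len(samples)

print(np.abs(emp_cov - G).max())
```

This only illustrates the finite-volume measure P_Λ; the infinite-volume limit requires the transience assumption above.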

Proposition 4.2.1. Let G be a transient graph. Then for any finite subset S of V one has

E[P_{p(ϕ)}(S ↮ ∞)] ≤ exp[−(1/2) cap(S)],   (4.2.1)

where p(ϕ)_{xy} := 1 − exp[−2(ϕ_x + 1)_+(ϕ_y + 1)_+] for every xy ∈ E.

Note that for S = {x}, since cap({x}) > 0, the right-hand side of (4.2.1) is strictly smaller than 1, so x is connected to infinity with positive annealed probability. One may wonder why we added 1 to the GFF: we refer to the remarks at the end of this section for a discussion of this technical trick.
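To give a feel for the quantitative bound, the following hedged sketch estimates cap({0}) for the simple random walk on Z^3 by Monte Carlo, using e_S(x) = d(x)·P[no return to S] with d(0) = 6 (see the proof of Proposition 4.2.1 below for this formula). Walks are truncated at a fixed length, so this is only an approximation, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n_walks, n_steps = 2000, 3000

# The 6 unit steps of the simple random walk on Z^3.
steps = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]])

returns = 0
for _ in range(n_walks):
    # One walk started at the origin, truncated after n_steps steps.
    path = np.cumsum(steps[rng.integers(0, 6, size=n_steps)], axis=0)
    if np.any(np.all(path == 0, axis=1)):
        returns += 1

escape = 1 - returns / n_walks   # estimates P[X_k != 0 for all k >= 1 | X_0 = 0]
cap_hat = 6 * escape             # e_{{0}}(0) = d(0) * escape prob = cap({0})
upper = np.exp(-cap_hat / 2)     # the bound (4.2.1) for S = {0}
print(escape, cap_hat, upper)
```

The resulting bound exp[−cap({0})/2] is strictly below 1, matching the observation that a single vertex is connected to infinity with positive annealed probability.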

The key step in the proof of Proposition 4.2.1 is the following lemma.

Lemma 4.2.2. Let G be a transient graph and fix a finite subset S of V and t ∈ R^S. If X_S^t(ϕ) := exp[−∑_{x∈S} t_x(ϕ_x + 1)], then

E[P_{p(ϕ)}(S ↮ ∞)] ≤ E[X_S^t(ϕ)].

Before proving this lemma, let us show how it implies Proposition 4.2.1.

Proof of Proposition 4.2.1. Since ∑_{x∈S} t_x(ϕ_x + 1) is a Gaussian random variable with mean ∑_{x∈S} t_x and variance ∑_{x,y∈S} t_x t_y G(x,y), we deduce that

E[X_S^t(ϕ)] = exp[−∑_{x∈S} t_x + (1/2) ∑_{x,y∈S} t_x t_y G(x,y)].
Now, we choose t according to the equilibrium measure of S, namely

t_x = e_S(x) := d(x) P[X_k ∉ S for all k ≥ 1 | X_0 = x]

(which turns out to be the optimal choice of t). This gives the result by observing that cap(S) = ∑_{x∈S} e_S(x) and that ∑_{y∈S} e_S(y) G(x,y) = 1 for all x ∈ S (which can be deduced in a straightforward way via a decomposition of the random walk started at x in terms of its last visit to S). Indeed, with this choice the exponent above equals −cap(S) + (1/2) cap(S) = −(1/2) cap(S).
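The Gaussian computation at the heart of this proof can be sanity-checked numerically. The sketch below (with an arbitrary small positive-definite matrix standing in for the Green function; all names are ours) compares a Monte Carlo estimate of E[X_S^t(ϕ)] with the closed form exp[−∑ t_x + (1/2) ∑ t_x t_y G(x,y)].

```python
import numpy as np

# A stand-in 2x2 "Green function" (any positive-definite matrix works here).
G = np.array([[1.0, 0.3],
              [0.3, 1.0]])
t = np.array([0.2, 0.1])

# Closed form: E[exp(-t.(phi + 1))] = exp(-sum(t) + t.G.t / 2) for phi ~ N(0, G).
exact = np.exp(-t.sum() + 0.5 * t @ G @ t)

# Monte Carlo estimate of the same expectation.
rng = np.random.default_rng(2)
phi = rng.multivariate_normal(np.zeros(2), G, size=500_000)
mc = np.mean(np.exp(-(phi + 1) @ t))

print(exact, mc)
```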

Let us now turn to the proof of the lemma.

Proof of Lemma 4.2.2. The proof proceeds in three steps. The first one relates the GFF on Λ to an Ising model on Λ with + boundary conditions and random coupling constants. The second one relates the Ising model to Bernoulli percolation via the Edwards-Sokal coupling. The last step consists in taking the limit as Λ tends to V.

In the first two steps, we fix a finite subset Λ of V and consider P_Λ. We also define

|ϕ + 1| := (|ϕ_x + 1|)_{x∈V},
σ(ϕ) := (sgn(ϕ_x + 1))_{x∈V},   (4.2.2)
J(ϕ)_{xy} := |ϕ_x + 1| · |ϕ_y + 1|.

Step 1: Conditionally on |ϕ + 1|, the random variable σ(ϕ) is distributed according to the Ising model on Λ with + boundary conditions and coupling constants J(ϕ).

Let us mention that this is an observation that was already made in the literature (see e.g. [103]). Recall that the Ising model on Λ with + boundary conditions and coupling constants J = (J_{xy}) is defined on configurations σ = (σ_x : x ∈ Λ) in {−1,+1}^Λ by

μ^+_{Λ;J}(σ) := (1/Z̃_Λ) exp[−H_{Λ,J}(σ)],

where Z̃_Λ is a normalizing constant and H_{Λ,J}(σ) is the Hamiltonian given by

H_{Λ,J}(σ) := −∑_{xy∈E : {x,y}∩Λ≠∅} J_{xy} σ_x σ_y,

where σ is extended to V by setting σ_x = +1 for every x ∈ Λ^c.
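As a concrete illustration of this definition, the following sketch (our own toy example, with hypothetical couplings) enumerates the measure μ^+_{Λ;J} exactly for a two-vertex Λ inside a 4-cycle with + boundary conditions, and checks that it is a probability measure favouring agreement with the boundary.

```python
import itertools
import numpy as np

# Toy graph: 4-cycle 0-1-2-3-0; Lambda = {0, 1}, boundary {2, 3} frozen to +1.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
J = {e: 0.5 for e in edges}          # hypothetical couplings J_xy
Lambda = [0, 1]

def hamiltonian(spins):
    # -sum of J_xy sigma_x sigma_y over edges with an endpoint in Lambda
    # (here every edge touches Lambda except (2, 3)).
    return -sum(J[(x, y)] * spins[x] * spins[y]
                for (x, y) in edges if x in Lambda or y in Lambda)

weights = {}
for s0, s1 in itertools.product([-1, +1], repeat=2):
    spins = {0: s0, 1: s1, 2: +1, 3: +1}   # + boundary conditions
    weights[(s0, s1)] = np.exp(-hamiltonian(spins))

Z = sum(weights.values())                  # the normalizing constant
mu = {cfg: w / Z for cfg, w in weights.items()}
print(mu)
```

With positive couplings and + boundary, the all-plus configuration carries the largest probability, as expected from ferromagnetism.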

Now, the fact that ϕ_x = 0 for every x outside Λ obviously implies σ(ϕ)_x = +1 there. In addition, we have that

D_Λ(ϕ) = (1/2) ∑_{xy∈E : {x,y}∩Λ≠∅} (ϕ_x − ϕ_y)² = F(|ϕ + 1|) + H_{Λ,J(ϕ)}(σ(ϕ)),

where F is some function on R^Λ. This implies that

dP_Λ[ϕ] = G(|ϕ + 1|) μ^+_{Λ;J(ϕ)}(σ(ϕ)) dϕ

for all ϕ ∈ R^Λ, where G is some function on R^Λ. Since ϕ ↦ (|ϕ + 1|, σ(ϕ)) is a bijection from (R \ {−1})^Λ (which has full Lebesgue measure) to (R_{>0})^Λ × {−1,+1}^Λ, the above equation implies Step 1 readily.

Step 2: For S ⊂ Λ, one has that E_Λ[P_{p(ϕ)}(S ↮ Λ^c)] ≤ E_Λ[X_S^t(ϕ)].

This step relies on the Edwards-Sokal coupling (see [71] for details), which we recall for completeness. Sample a configuration σ on Λ according to the Ising model with + boundary conditions and coupling constants J_{xy}. Then, construct a configuration ω on the edges in E intersecting Λ as follows: for every edge xy, let ω_{xy} be an independent Bernoulli random variable with parameter 1 − exp(−2J_{xy} 1{σ_x = σ_y}). Note that ω_{xy} = 0 automatically if σ_x ≠ σ_y. Below, P_J denotes the law of (σ, ω) and E_J the expectation with respect to P_J. (We only use this notation in this step.)

The percolation process ω thus obtained is called the random-cluster model with cluster-weight q = 2 and wired boundary condition, but this will be irrelevant for us.

The important feature of this coupling will be that, conditionally on ω, σ is sampled as follows:

— every vertex connected to Λ^c receives the spin +1;

— for every connected component C of ω not intersecting Λ^c, choose a spin σ_C equal to +1 or −1 with probability 1/2, independently for each connected component, and set σ_x = σ_C for every x ∈ C.
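The edge-opening direction of the Edwards-Sokal coupling described above can be sketched in code. Below is a minimal illustration on a toy graph (the graph, couplings, and spin configuration are ours): given σ, each edge xy opens independently with probability 1 − exp(−2J_{xy} 1{σ_x = σ_y}), so edges between disagreeing spins are always closed.

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy 4-cycle with hypothetical positive couplings J_xy.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
J = {e: 0.8 for e in edges}

sigma = np.array([+1, +1, -1, -1])  # a fixed spin configuration for illustration

# Edwards-Sokal: open edge xy with probability 1 - exp(-2 J_xy) if the spins
# agree, and keep it closed otherwise.
omega = {}
for (x, y) in edges:
    p = 1 - np.exp(-2 * J[(x, y)]) if sigma[x] == sigma[y] else 0.0
    omega[(x, y)] = rng.random() < p

# Edges between disagreeing spins are closed automatically.
assert all(not omega[(x, y)] for (x, y) in edges if sigma[x] != sigma[y])
print(omega)
```

Resampling σ from ω (uniform signs on components not touching the boundary, + on the others) is exactly the description in the two bullet points above.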

Given a realization of the GFF ϕ, we construct ω as above for J(ϕ) and σ(ϕ) as defined in (4.2.2) (recall from Step 1 that σ(ϕ) is indeed an Ising model with coupling constants J(ϕ)). As a direct consequence of the construction, conditionally on ω, the law of the σ_x for the vertices which are not connected to Λ^c is symmetric under a global sign flip. Applying these observations, we deduce that

E_Λ[X_S^t(ϕ) | |ϕ + 1|] ≥ E_{J(ϕ)}[E_{J(ϕ)}(X_S^t(ϕ) | ω) 1{S ↮ Λ^c in ω} | |ϕ + 1|]
≥ P_{J(ϕ)}[S ↮ Λ^c in ω].   (4.2.3)

In the last inequality we used that, conditionally on |ϕ + 1| and the event that S is not connected to Λ^c in ω, log(X_S^t(ϕ)) = −∑_{x∈S} t_x |ϕ_x + 1| σ_x has mean 0, so that E_{J(ϕ)}(X_S^t(ϕ) | ω) ≥ 1 by Jensen's inequality.

Now, conditionally on σ, the only vertices that can potentially be connected to Λ^c in ω are those which are connected by a path of pluses in σ. For an edge xy with at least one endpoint of this type, one has 1 − exp(−2J(ϕ)_{xy} 1{σ(ϕ)_x = σ(ϕ)_y}) = p(ϕ)_{xy}. This observation together with the Edwards-Sokal coupling implies

P_{J(ϕ)}[S ↮ Λ^c in ω | σ] = P_{p(ϕ)}[S ↮ Λ^c].

Integrating over σ (given |ϕ + 1|) and using Step 1 gives

P_{J(ϕ)}[S ↮ Λ^c in ω] = E_Λ[P_{p(ϕ)}(S ↮ Λ^c) | |ϕ + 1|].   (4.2.4)

Step 2 follows readily by putting (4.2.4) into (4.2.3) and then integrating with respect to |ϕ + 1|.

Step 3: Passing to the infinite volume.

Step 2 implies that for every S ⊂ T ⊂ Λ,

E_Λ[P_{p(ϕ)}(S ↮ T^c)] ≤ E_Λ[X_S^t(ϕ)].   (4.2.5)

Since S and T are finite, the random variables considered in the previous inequality are continuous local observables of ϕ. Letting Λ tend to V, by weak convergence we obtain

E[P_{p(ϕ)}(S ↮ T^c)] ≤ E[X_S^t(ϕ)].

Letting T tend to V concludes the proof.

Remark 4.2.3. Notice that in this section, we did not fully use the assumption that (H_d) holds for d > 4, from Theorem 4.1.1. The only property we needed from G was its transience (implied by (H_d) for d > 2), so that we could consider the GFF in infinite volume.

Remark 4.2.4. If we were only interested in proving E[P_{p(ϕ)}(x ←→ ∞)] > 0, but not the quantitative bound (4.1.1), we could have proceeded as follows. The Edwards-Sokal coupling implies that for any x,

P_{J(ϕ)}[x ←→ Λ^c in ω] = E_{J(ϕ)}[σ_x].

Using the above in place of (4.2.3), and subsequently applying (4.2.4) and integrating with respect to |ϕ + 1|, we deduce that

E_Λ[P_{p(ϕ)}(x ←→ Λ^c)] = E_Λ[sgn(ϕ_x + 1)].

Proceeding as in the third step, we obtain

E[P_{p(ϕ)}(x ←→ ∞)] ≥ E[sgn(ϕ_x + 1)] > 0,

where the last inequality holds since ϕ_x is a centered Gaussian, so sgn(ϕ_x + 1) = +1 with probability strictly larger than 1/2.

Remark 4.2.5. In this remark, we explain an alternative approach to Proposition 4.2.1 based on recent developments in the study of the GFF on the cable system. Consider the (extended) GFF ϕ̃ on the metric graph G̃ constructed by interpreting each edge of G as an interval on which the field varies continuously. By comparing with [102], one can deduce that the clusters of the annealed percolation model with random parameters p(ϕ) exactly correspond to the connected components (when restricted to the vertices of G) of the super level-set {y ∈ G̃ : ϕ̃_y + 1 > 0}. This connection follows from the following observations: first, the field ϕ̃ can be constructed from ϕ by simply putting independent Brownian bridges on each edge, interpolating between the values at its endpoints; second, the probability that a Brownian bridge between a and b stays above −1 is exactly 1 − exp[−2(a + 1)_+(b + 1)_+] (see [102] for details). One can then prove that the super level-set {y ∈ G̃ : ϕ̃_y + 1 > 0} stochastically dominates a random interlacement of parameter 1/2 (see for example Theorem 3 of [102]). Let us mention that the argument of Lupu is based on uniqueness of the infinite sign-cluster, which is currently written only for Z^d. Even though the uniqueness may fail for the graphs that we are studying, the domination mentioned above should still be true in our context. Taking this result for granted, and observing that the probability that S intersects the random interlacement is equal to 1 − exp[−(1/2) cap(S)], this would provide an alternative proof of Proposition 4.2.1.

Remark 4.2.6. In the same spirit as in the previous remark, Bernoulli percolation with random parameters given by q(ϕ)_{xy} := 1 − exp[−2(ϕ_x)_+(ϕ_y)_+] corresponds to the super level-set {y ∈ G̃ : ϕ̃_y > 0}. Also, using the strong Markov property for ϕ̃, Lupu proved in [102] that the sign of ϕ̃ can be sampled by assigning independent uniform signs to each excursion of |ϕ̃|. As a consequence, one has

E[P_{q(ϕ)}(x ←→ y)] = (1/2) E[sgn(ϕ_x) sgn(ϕ_y)] = (1/π) arcsin( G(x,y) / √(G(x,x) G(y,y)) )   (4.2.6)

for every x, y ∈ V. One can easily deduce from this identity that the (annealed) percolation on the random environment q(ϕ) has infinite susceptibility, i.e., for every x ∈ V one has ∑_{y∈V} E[P_{q(ϕ)}(x ←→ y)] = +∞.
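The second equality in (4.2.6) rests on the classical Gaussian identity E[sgn(ϕ_x) sgn(ϕ_y)] = (2/π) arcsin ρ, where ρ is the correlation of the pair. This can be checked numerically; the sketch below uses an arbitrary correlation, and all names are ours.

```python
import numpy as np

rho = 0.5
cov = np.array([[1.0, rho],
                [rho, 1.0]])

rng = np.random.default_rng(4)
phi = rng.multivariate_normal(np.zeros(2), cov, size=1_000_000)

# Monte Carlo estimate of E[sgn(phi_x) sgn(phi_y)] vs the arcsine formula.
mc = np.mean(np.sign(phi[:, 0]) * np.sign(phi[:, 1]))
exact = (2 / np.pi) * np.arcsin(rho)   # equals 1/3 for rho = 1/2

print(mc, exact)
```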

Remark 4.2.7. The previous remark shows that the two-point connectivity probabilities of the model with edge-parameters q(ϕ) tend to zero when the distance between the points diverges. This explains why we shift the GFF by 1: the edge-parameters p(ϕ) are such that the two-point connectivity probabilities do not tend to zero, hence suggesting the existence of an infinite connected component.