

A new proof of the pigeonhole principle, for weak monotone systems

Anupam Das, University of Bath

Wessex Theory Seminar, Swansea University

January 24, 2013


Outline

1 Proof complexity and cut-elimination

2 The pigeonhole principle and threshold functions in propositional logic

3 A high-level proof of the pigeonhole principle

4 Generalisation to arbitrary permutations

5 Conclusions


The complexity of proofs and cut-elimination


Propositional proof complexity

A propositional proof system (PPS) is a polynomial-time decidable surjection {0,1}* → TAUT.

We can ask, given a tautology, what is the smallest proof of it. This is a fundamental question in proof complexity.

Theorem (Cook)

There is a PPS in which every tautology has a proof of size polynomial in its length iff NP = co-NP.

Textbook systems such as Frege and Gentzen calculi are amongst the strongest PPSs known. In particular, there are no classes of tautologies known to require superpolynomial-size proofs in these systems.

In order to simplify the problem of finding lower bounds, proof complexity theorists have considered imposing various restrictions on proofs in these systems.


The complexity of cut-elimination

Traditionally, proof complexity theorists consider systems where the depth of cut-formulae is bounded, whereas in structural proof theory one restricts the behaviour of the structural rules, i.e. negation, weakening and contraction.

By imposing structural restrictions on cuts in different combinations we obtain a class of PPSs of varying strengths, ranging from unrestricted Gentzen to cut-free Gentzen.

Question

How much of cut-elimination can we do efficiently?

The exponential blowup in cut-elimination is primarily due to interaction with the contraction rules, and so it is these rules, in particular, that we will target.


Weak monotone systems

The monotone sequent calculus, MLK, is the fragment of Gentzen where cuts are restricted to negation-free formulae.

This is equivalent, from the point of view of proof complexity, to disallowing cuts between descendants of a ¬-left and a ¬-right rule.

Theorem (Atserias et al.)

MLK quasipolynomially simulates Frege systems.

What larger classes of cuts can we still eliminate efficiently?

Definition

By weak monotone calculus we mean the fragment of Gentzen free of cuts between descendants of any structural rules.


Weak monotone systems

The reason for considering this particular fragment is twofold:

1 It is these cuts that seem crucial to the argument of Atserias et al., and so it is pertinent to ask what happens when they are removed.

2 There is a known correspondence between these systems and deep inference systems, where a lot is already known about normalisation.

Question

What is the complexity of proofs in the weak monotone calculus?

The system seems surprisingly strong, and we will see this by considering a particular case study.


Normalisation via deep inference

Due to the locality of rules in deep inference, there are more powerful normalisation procedures that operate independently and in parallel on each atom in a proof.

The crucial case, for complexity, corresponds to the elimination of cuts between left and right contraction steps:

    a ∨ a                        a ∨ a
    ----- c↓                ----------------- c↑, c↑
      a             ⟹       (a ∧ a) ∨ (a ∧ a)
    ----- c↑                ----------------- m
    a ∧ a                   (a ∨ a) ∧ (a ∨ a)
                            ----------------- c↓, c↓
                                  a ∧ a

If there are too many of these cuts in a proof, then this normalisation procedure can take exponential time.


Normalisation via deep inference

Theorem

If the path of any atom through a (monotone) deep inference proof of size S has at most l alternations of left and right contraction steps, then the proof normalises in time O(S^l).

Corollary

If the number of such alternations in a proof is polylogarithmic in its size, then the proof normalises in quasipolynomial time.

Intuitively, one can think of the condition above as requiring that any induction involved in the construction of the proof terminates in logarithmic time, i.e. using a divide-and-conquer strategy.


The pigeonhole principle and threshold functions in propositional logic

The propositional pigeonhole principle

The pigeonhole principle is a fundamental tool in combinatorics. It states:

"if there are n+1 pigeons in n holes, then there must be two pigeons in the same hole".

It can be encoded as a sequence of propositional tautologies, indexed by n, as follows:

PHP_n := ⋀_{i=0}^{n} ⋁_{j=1}^{n} p_{ij} → ⋁_{j=1}^{n} ⋁_{i ≠ i'} (p_{ij} ∧ p_{i'j})

(NB: the above encoding allows the mapping from pigeons to holes to be many-to-many.)
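As a sanity check on this encoding (not part of the talk itself), the following sketch enumerates all assignments for small n and confirms that PHP_n is a tautology; the names `php` and `is_tautology` are ours.

```python
import itertools

def php(n, p):
    """Evaluate PHP_n under assignment p: dict mapping (pigeon i, hole j) -> bool."""
    # Premise: each of the n+1 pigeons (i = 0..n) sits in some hole (j = 1..n).
    premise = all(any(p[i, j] for j in range(1, n + 1)) for i in range(n + 1))
    # Conclusion: some hole holds two distinct pigeons.
    conclusion = any(p[i, j] and p[i2, j]
                     for j in range(1, n + 1)
                     for i in range(n + 1)
                     for i2 in range(n + 1) if i2 != i)
    return (not premise) or conclusion  # premise -> conclusion

def is_tautology(n):
    """Brute-force check over all 2^{n(n+1)} assignments (feasible for tiny n)."""
    vars_ = [(i, j) for i in range(n + 1) for j in range(1, n + 1)]
    return all(php(n, dict(zip(vars_, bits)))
               for bits in itertools.product([False, True], repeat=len(vars_)))
```

Already at n = 3 this enumerates 2^12 assignments, which is why proof-theoretic upper bounds, rather than truth tables, are the interesting question.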


The propositional pigeonhole principle

PHP_n is a benchmark class of tautologies in proof complexity; many PPSs have only exponential-size proofs, e.g. cut-free Gentzen, Resolution and bounded-depth Frege.

These systems seem unable to formulate basic counting arguments. This is arguably because boolean counting functions cannot, in general, be expressed by short formulae of bounded depth.

On the other hand, MLK has quasipolynomial-size proofs.

What about the weak monotone calculus?


Threshold functions

Threshold functions are a class of boolean functions TH^n_k : {0,1}^n → {0,1} defined by σ ↦ 1 iff Σ_i σ_i ≥ k.

We can construct monotone formulae of quasipolynomial size that compute these functions using the following divide-and-conquer identity:

TH^{2n}_k(a, b) = ⋁_{i+j=k} (TH^n_i(a) ∧ TH^n_j(b))

Clearly the threshold functions are symmetric, but what is the complexity of proving this? I.e.

Question

Given a permutation π ∈ S_n, what is the size of a proof of TH^n_k(a_{π(1)}, …, a_{π(n)}) ⇒ TH^n_k(a_1, …, a_n) in a given PPS?
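The divide-and-conquer identity can be checked semantically with a short sketch (ours, not the slides' formula construction, which builds quasipolynomial-size monotone formulae rather than evaluating): `th_dc` recurses on halves exactly as the identity prescribes, and agrees with direct counting.

```python
def th_direct(k, xs):
    """TH^n_k evaluated by counting the true inputs."""
    return sum(xs) >= k

def th_dc(k, xs):
    """TH^n_k via the identity TH^{2n}_k(a,b) = OR_{i+j=k} (TH^n_i(a) AND TH^n_j(b)).
    Splitting unevenly for odd lengths is harmless: the identity only needs
    the two halves to be thresholded independently."""
    if len(xs) <= 1:
        return sum(xs) >= k
    half = len(xs) // 2
    a, b = xs[:half], xs[half:]
    return any(th_dc(i, a) and th_dc(k - i, b) for i in range(k + 1))
```

Note that symmetry is immediate for the *function* (only the sum matters); the talk's question is about the size of *proofs* of that symmetry for the formulae built from this recursion.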


A high-level proof of the pigeonhole principle


Proof idea

We give the following outline of a proof of PHP_n:

1 Since each pigeon is in a hole, and there are n+1 pigeons, we have that TH^{n(n+1)}_{n+1}(p_{ij}) holds, where the inputs are ordered by pigeons then holes.

2 Since each TH^n_k is symmetric, we have that TH^{n(n+1)}_{n+1}(p_{ji}) holds, i.e. where the inputs are ordered by holes then pigeons.

3 Evaluating every possibility of assignments of pigeons to holes, given by the divide-and-conquer identity, yields that there must be some hole with two pigeons (by the pigeonhole principle).

(1) and (3) have (fairly) simple proofs. We focus on (2).

Atserias et al. provided proofs of (2) by decomposing permutations into products of transpositions. But an induction on such a decomposition takes too long for our weak systems, so we take a different approach.


Decomposition of matrix transposition

The specific permutation required, from ordering by pigeons to ordering by holes, corresponds to the transposition of a matrix.

Observation

If B and C are matrices of equal dimensions, then

    ( A B )ᵀ   ( Aᵀ Cᵀ )
    ( C D )  = ( Bᵀ Dᵀ )

Attempting a divide-and-conquer, we obtain

    ( A B )ᵀ = ( Aᵀ )     and     ( C D )ᵀ = ( Cᵀ )
               ( Bᵀ )                        ( Dᵀ )

To achieve the full transposition from these, we need to interleave the rows of these two matrices.
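The block identity above can be sketched directly as a recursive transpose on nested lists (an illustration of the decomposition, assuming nothing beyond the observation itself; the splitting into quarters and the function name are ours):

```python
def transpose(M):
    """Transpose a matrix (list of rows) by the block identity
    [[A, B], [C, D]]^T = [[A^T, C^T], [B^T, D^T]]."""
    r, c = len(M), len(M[0])
    if r == 1 or c == 1:
        # Base case: a single row or column transposes directly.
        return [[M[i][j] for i in range(r)] for j in range(c)]
    hr, hc = r // 2, c // 2
    A = [row[:hc] for row in M[:hr]]; B = [row[hc:] for row in M[:hr]]
    C = [row[:hc] for row in M[hr:]]; D = [row[hc:] for row in M[hr:]]
    At, Bt, Ct, Dt = map(transpose, (A, B, C, D))
    # Glue A^T next to C^T on top, B^T next to D^T below.
    top = [ra + rc for ra, rc in zip(At, Ct)]
    bottom = [rb + rd for rb, rd in zip(Bt, Dt)]
    return top + bottom
```

In the proof-theoretic setting the glueing step is exactly where row interleaving is needed, which is what the next slide addresses.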


Decomposition of matrix transposition

Let a ‖ b denote the interleaving of vectors a, b of the same length.

Observation

(a, b) ‖ (c, d) = (a ‖ c, b ‖ d)

... and this yields a divide-and-conquer strategy for interleaving vectors.

Theorem

There are weak monotone proofs transposing the inputs of threshold formulae for TH^n_k of size O(n^{log² n}).

(The log² n comes from the fact that we have used two divide-and-conquer inductions, one nested inside the other.)


Arbitrary permutations

Interleavings, by themselves, do not form a generating set for the symmetric group. However, a generalisation of them, the set of all riffle shuffles on a deck of cards, does form such a set.

Think of this as just merge sort in reverse. The decomposition is attained by sorting the inverse of a permutation.

Proposition

Every permutation can be decomposed into a product of riffle shuffles whose underlying circuit has logarithmic depth.

Corollary

There are quasipolynomial-size proofs permuting the inputs of threshold formulae, for any permutation.
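The "merge sort in reverse" idea can be sketched as follows (our illustration, not the slides' formal construction): sorting a permutation bottom-up takes ⌈log₂ n⌉ merge passes, and each pairwise merge preserves the relative order within its two blocks, i.e. it is an (inverse) riffle shuffle on that segment. Recording the passes therefore decomposes the permutation into logarithmically many riffle-shuffle rounds.

```python
def one_merge_pass(arr, width):
    """One bottom-up merge-sort pass over blocks of size `width`.
    Each pairwise merge keeps the relative order within its two blocks,
    so it acts as an (inverse) riffle shuffle on that segment."""
    n, out, moves = len(arr), [], []
    for start in range(0, n, 2 * width):
        i, mid = start, min(start + width, n)
        j, end = mid, min(start + 2 * width, n)
        while i < mid or j < end:
            take_left = j >= end or (i < mid and arr[i] <= arr[j])
            src = i if take_left else j
            moves.append(src)          # moves[t] = source index of output slot t
            out.append(arr[src])
            if take_left:
                i += 1
            else:
                j += 1
    return out, moves

def riffle_decomposition(p):
    """Sort permutation p with ceil(log2 n) merge passes, recording each pass.
    Replaying the passes sorts p, so p is realised by logarithmically many
    riffle-shuffle rounds."""
    arr, passes, width = list(p), [], 1
    while width < len(arr):
        arr, moves = one_merge_pass(arr, width)
        passes.append(moves)
        width *= 2
    return passes

p = [3, 0, 7, 4, 6, 1, 5, 2]
passes = riffle_decomposition(p)  # 3 passes for n = 8
```

Replaying the recorded passes on p yields the sorted sequence, which is the sense in which the decomposition's underlying circuit has logarithmic depth.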


Conclusions


Summary

We constructed quasipolynomial-size proofs of PHP_n in the fragment of Gentzen free of cuts between descendants of any structural rules.

The normalisation procedures from deep inference appear crucial for controlling the complexity of these proofs.

Procedures internal to the Gentzen formalism seem inadequate.

While the existence of small proofs of PHP_n is encouraging, the relative complexity of weak monotone systems is open.

This problem seems fundamentally related to the relationship between divide-and-conquer induction and regular induction in the absence of negation, something which could be studied in an arithmetic setting.

The Gentzen formulation of our proofs still contains some cuts. The complexity of (partially) eliminating these cuts is the topic of further study.


Further work

We can also use this method to prove the generalised pigeonhole principle: if there are nk+1 pigeons in n holes then there are k+1 pigeons in some hole.

This result is equivalent to saying that the maximum value of a random variable is at least its expected value, a principle at the heart of many probabilistic arguments.

It would be interesting to see whether this could be used to formalise probabilistic proofs of certain combinatorial principles in weak systems.

It is open whether any of the quasipolynomial upper bounds mentioned can be improved to a polynomial.
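The equivalence with expectation can be made precise by a standard averaging argument (a sketch of the usual formulation, not taken from the slides):

```latex
% Distribute nk+1 pigeons into n holes, with c_i pigeons in hole i,
% so that c_1 + c_2 + ... + c_n = nk + 1.
% Let X be the occupancy of a hole chosen uniformly at random. Then
\[
  \max_{1 \le i \le n} c_i \;\ge\; \mathbb{E}[X]
  \;=\; \frac{1}{n}\sum_{i=1}^{n} c_i
  \;=\; \frac{nk+1}{n} \;>\; k,
\]
% and since each c_i is an integer, some hole holds at least k+1 pigeons.
```

Read in the other direction, the generalised pigeonhole principle is exactly the statement that an integer-valued random variable attains a value at least as large as its mean.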


(65)

Thank you.
