A new proof of the pigeonhole principle, for weak monotone systems
Anupam Das, University of Bath
Wessex Theory Seminar, Swansea University
January 24, 2013
Outline
1 Proof complexity and cut-elimination
2 The pigeonhole principle and threshold functions in propositional logic
3 A high-level proof of the pigeonhole principle
4 Generalisation to arbitrary permutations
5 Conclusions
The complexity of proofs and cut-elimination
Propositional proof complexity
• A propositional proof system (PPS) is a polynomial-time decidable surjection {0,1}* → TAUT.
• We can ask, given a tautology, what is the smallest proof of it. This is a fundamental question in proof complexity.
Theorem (Cook)
There is a PPS in which every tautology has a proof of size polynomial in its length iff NP = co-NP.
• Textbook systems such as Frege and Gentzen calculi are amongst the strongest PPSs known. In particular, no classes of tautologies are known to require superpolynomial-size proofs in these systems.
• To simplify the problem of finding lower bounds, proof complexity theorists have considered imposing various restrictions on proofs in these systems.
The complexity of cut-elimination
• Traditionally, proof complexity theorists consider systems where the depth of cut-formulae is bounded, whereas in structural proof theory one restricts the behaviour of the structural rules, i.e. negation, weakening and contraction.
• By imposing structural restrictions on cuts in different combinations we obtain a class of PPSs of varying strengths, ranging from unrestricted Gentzen to cut-free Gentzen.
Question
How much of cut-elimination can we do efficiently?
• The exponential blowup in cut-elimination is primarily due to interaction with the contraction rules, and so it is these rules, in particular, that we will target.
Weak monotone systems
• The monotone sequent calculus, MLK, is the fragment of Gentzen where cuts are restricted to negation-free formulae.
• This is equivalent, from the point of view of proof complexity, to disallowing cuts between descendants of a ¬-left and a ¬-right rule.
Theorem (Atserias et al.)
MLK quasipolynomially simulates Frege systems.
• What larger classes of cuts can we still eliminate efficiently?
Definition
By weak monotone calculus we mean the fragment of Gentzen free of cuts between descendants of any structural rules.
Weak monotone systems
The reason for considering this particular fragment is twofold:
1 It is these cuts that seem crucial to the argument of Atserias et al., and so it is pertinent to ask what happens when they are removed.
2 There is a known correspondence between these systems and deep inference systems, where a lot is already known about normalisation.
Question
What is the complexity of proofs in the weak monotone calculus?
The system seems surprisingly strong, and we will see this by considering a particular case study.
Normalisation via deep inference
• Due to the locality of rules in deep inference, there are more powerful normalisation procedures that operate independently and in parallel on each atom in a proof.
• The crucial case, for complexity, corresponds to the elimination of cuts between left and right contraction steps:

      a∨a                   a∨a
     ─────             ─────────────
       a         →     (a∧a)∨(a∧a)
     ─────             ─────────────
      a∧a              (a∨a)∧(a∨a)
                       ─────────────
                           a∧a

  (i.e. duplicate each disjunct, apply a medial step, then contract each conjunct)
• If there are too many of these cuts in a proof, then this normalisation procedure can take exponential time.
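Both sides of this rewrite are sound monotone implications; a quick brute-force check (an illustrative Python sketch, not part of the deep inference formalism) confirms each inference step, including the middle step, which is an instance of the deep inference medial rule:

```python
def implies(f, g):
    """Check that f -> g holds under every assignment to the single atom a."""
    return all((not f(a)) or g(a) for a in (False, True))

# Left-hand derivation: contraction a∨a -> a, then cocontraction a -> a∧a.
assert implies(lambda a: a or a, lambda a: a)
assert implies(lambda a: a, lambda a: a and a)

# Right-hand derivation: duplicate each disjunct, a medial step, then contract.
assert implies(lambda a: a or a, lambda a: (a and a) or (a and a))
assert implies(lambda a: (a and a) or (a and a), lambda a: (a or a) and (a or a))
assert implies(lambda a: (a or a) and (a or a), lambda a: a and a)
```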
Normalisation via deep inference
Theorem
If the path of any atom through a (monotone) deep inference proof of size S has at most l alternations of left and right contraction steps, then the proof normalises in time O(S^l).
Corollary
If the number of such alternations in a proof is polylogarithmic in its size, then the proof normalises in quasipolynomial time.
Intuitively one can think of the condition above as imposing that any induction involved in the construction of the proof terminates in logarithmic time, i.e. uses a divide-and-conquer strategy.
The pigeonhole principle and threshold functions in propositional logic
The propositional pigeonhole principle
• The pigeonhole principle is a fundamental tool in combinatorics. It states:
“if there are n+1 pigeons in n holes, then there must be two pigeons in the same hole”.
• It can be encoded as a sequence of propositional tautologies, indexed by n, as follows:

    PHP_n := ⋀_{i=0}^{n} ⋁_{j=1}^{n} p_ij → ⋁_{j=1}^{n} ⋁_{i≠i′} (p_ij ∧ p_i′j)

(NB: the above encoding allows the mapping from pigeons to holes to be many-many.)
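As a sanity check on this encoding, one can verify by brute force that PHP_n is a tautology for small n (an illustrative script, not part of the proof; `php` is a hypothetical name, and p[i, j] means pigeon i is in hole j):

```python
from itertools import product

def php(n, p):
    """Evaluate PHP_n under assignment p, where p[i, j] means pigeon i (0..n)
    is in hole j (1..n); the encoding allows many-many mappings."""
    each_pigeon_placed = all(any(p[i, j] for j in range(1, n + 1))
                             for i in range(n + 1))
    two_share = any(p[i, j] and p[i2, j]
                    for j in range(1, n + 1)
                    for i in range(n + 1) for i2 in range(n + 1) if i2 != i)
    return (not each_pigeon_placed) or two_share

# exhaustive check over all assignments for n = 1, 2
for n in (1, 2):
    keys = [(i, j) for i in range(n + 1) for j in range(1, n + 1)]
    assert all(php(n, dict(zip(keys, bits)))
               for bits in product((False, True), repeat=len(keys)))
```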
The propositional pigeonhole principle
• PHP_n is a benchmark class of tautologies in proof complexity; many PPSs have only exponential-size proofs of it, e.g. cut-free Gentzen, Resolution and bounded-depth Frege.
• These systems seem unable to formulate basic counting arguments. This is arguably because boolean counting functions cannot, in general, be expressed by short formulae of bounded depth.
• On the other hand, MLK has quasipolynomial-size proofs.
• What about the weak monotone calculus?
Threshold functions
• Threshold functions are the boolean functions TH^n_k : {0,1}^n → {0,1} defined by σ ↦ 1 iff Σ_i σ_i ≥ k.
• We can construct monotone formulae of quasipolynomial size that compute these functions using the following divide-and-conquer identity:

    TH^{2n}_k(a, b) = ⋁_{i+j=k} TH^n_i(a) ∧ TH^n_j(b)

• Clearly the threshold functions are symmetric, but what is the complexity of proving this? I.e.
Question
Given a permutation π ∈ S_n, what is the size of a proof of TH^n_k(a_i) ⇒ TH^n_k(a_iπ) in a given PPS?
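The divide-and-conquer identity can be prototyped directly. The following sketch (illustrative only; the function name `TH` is an assumption) computes thresholds by splitting the input in half, exactly as the identity prescribes:

```python
from itertools import product

def TH(k, xs):
    """Threshold: true iff at least k of the inputs xs are 1, computed via the
    divide-and-conquer identity TH_k(a,b) = OR over i+j=k of TH_i(a) AND TH_j(b)."""
    n = len(xs)
    if n == 1:
        return k <= 0 or (k == 1 and bool(xs[0]))
    a, b = xs[:n // 2], xs[n // 2:]
    return any(TH(i, a) and TH(k - i, b) for i in range(k + 1))

# agrees with direct counting on all inputs of length 5
for xs in product((0, 1), repeat=5):
    for k in range(7):
        assert TH(k, list(xs)) == (sum(xs) >= k)
```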
A high-level proof of the pigeonhole principle
Proof idea
• We give the following outline of a proof of PHP_n:
1 Since each pigeon is in a hole, and there are n+1 pigeons, we have TH^{n(n+1)}_{n+1}(p_ij), where the inputs are ordered by pigeons then holes.
2 Since each TH^n_k is symmetric, we have TH^{n(n+1)}_{n+1}(p_ji), i.e. where the inputs are ordered by holes then pigeons.
3 Evaluating every possibility of assignments of pigeons to holes, given by the divide-and-conquer identity, yields that there must be some hole with two pigeons (by the pigeonhole principle).
• (1) and (3) have (fairly) simple proofs. We focus on (2).
• Atserias et al. provided proofs of (2) by decomposing permutations into products of transpositions. But an induction on such a decomposition takes too long for our weak systems, so we take a different approach.
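Step 3 boils down to a simple combinatorial fact: if at least n+1 of the n(n+1) inputs are set and they are grouped into n hole-columns of length n+1, then some column contains two set inputs. A brute-force check for small n (illustrative only; the function name is hypothetical):

```python
from itertools import product

def two_share_a_hole(n):
    """If at least n+1 of the n(n+1) inputs are set (i.e. TH^{n(n+1)}_{n+1} holds),
    then some hole-column of length n+1 contains two set inputs."""
    for bits in product((0, 1), repeat=n * (n + 1)):
        if sum(bits) >= n + 1:  # the threshold from step 2 holds
            cols = [bits[j * (n + 1):(j + 1) * (n + 1)] for j in range(n)]
            if not any(sum(c) >= 2 for c in cols):
                return False
    return True

assert two_share_a_hole(2)
```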
Decomposition of matrix transposition
• The specific permutation required, from ordering by pigeons to ordering by holes, corresponds to the transposition of a matrix.
Observation
If B and C are matrices of equal dimensions, then

    [ A B ]ᵀ   [ Aᵀ Cᵀ ]
    [ C D ]  = [ Bᵀ Dᵀ ]

• Attempting a divide-and-conquer we obtain [A B]ᵀ = [Aᵀ; Bᵀ] and [C D]ᵀ = [Cᵀ; Dᵀ] (where ; stacks blocks vertically).
• To achieve the full transposition from these, we need to interleave the rows of these two matrices.
Decomposition of matrix transposition
• Let a ∥ b denote the interleaving of vectors a, b of the same length.
Observation
(a, b) ∥ (c, d) = (a∥c, b∥d)
• ... and this yields a divide-and-conquer strategy for interleaving vectors.
Theorem
There are weak monotone proofs transposing the inputs of threshold formulae TH^n_k of size n^{O(log² n)}.
• (The log² n comes from the fact that we have used two divide-and-conquer inductions, one nested inside the other.)
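The interleaving identity can be checked directly, reading (a, b) as concatenation (illustrative sketch; the function name is hypothetical):

```python
def interleave(a, b):
    """a ∥ b: alternate the elements of equal-length vectors a and b."""
    out = []
    for x, y in zip(a, b):
        out += [x, y]
    return out

a, b, c, d = [1, 2], [3, 4], [5, 6], [7, 8]
# (a,b) ∥ (c,d) = (a∥c, b∥d), where (a,b) denotes concatenation
assert interleave(a + b, c + d) == interleave(a, c) + interleave(b, d)
```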
Arbitrary permutations
• Interleavings, by themselves, do not form a generating set for the symmetric group. However a generalisation of them, the set of all riffle shuffles on a deck of cards, does form such a set.
• Think of this as merge sort in reverse: the decomposition is obtained by sorting the inverse of the permutation.
Proposition
Every permutation can be decomposed into a product of riffle shuffles whose underlying circuit has logarithmic depth.
Corollary
There are quasipolynomial-size proofs permuting the inputs of threshold formulae, for any permutation.
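A riffle shuffle cuts the deck into two packets and interleaves them, preserving each packet's internal order. That riffle shuffles generate the symmetric group can be verified exhaustively for small n (illustrative sketch with hypothetical names; the log-depth decomposition itself is merge sort run in reverse):

```python
from itertools import combinations, permutations

def riffles(n):
    """All permutations of range(n) obtainable by one riffle shuffle."""
    out = set()
    for k in range(n + 1):
        top, bottom = list(range(k)), list(range(k, n))
        for pos in combinations(range(n), k):  # positions taken by the top packet
            res = [None] * n
            for i, x in zip(pos, top):
                res[i] = x
            rest = iter(bottom)
            res = [x if x is not None else next(rest) for x in res]
            out.add(tuple(res))
    return out

# closure of riffles under composition is all of S_4
n = 4
gen = riffles(n)
reach, frontier = set(gen), set(gen)
while frontier:
    new = {tuple(p[q[i]] for i in range(n)) for p in frontier for q in gen} - reach
    reach |= new
    frontier = new
assert reach == set(permutations(range(n)))
```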
Conclusions
Summary
• We constructed quasipolynomial-size proofs of PHP_n in the fragment of Gentzen free of cuts between descendants of any structural rules.
• The normalisation procedures from deep inference appear crucial for controlling the complexity of these proofs. Procedures internal to the Gentzen formalism seem inadequate.
• While the existence of small proofs of PHP_n is encouraging, the relative complexity of weak monotone systems is open.
• This problem seems fundamentally related to the relationship between divide-and-conquer induction and regular induction in the absence of negation, something which could be studied in an arithmetic setting.
• The Gentzen formulation of our proofs still contains some cuts. The complexity of (partially) eliminating these cuts is the topic of further study.
Further work
• We can also use this method to prove the generalised pigeonhole principle: if there are nk+1 pigeons in n holes then there are k+1 pigeons in some hole.
• This result is equivalent to saying that the maximum value of a random variable is at least its expected value, a principle at the heart of many probabilistic arguments.
• It would be interesting to see whether this could be used to formalise probabilistic arguments for certain combinatorial principles in weak systems.
• It is open whether any of the quasipolynomial upper bounds mentioned can be improved to a polynomial.
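The link to expectations is elementary: with nk+1 pigeons in n holes, the average occupancy exceeds k, so the maximum does too. A toy exhaustive check (illustrative only; the function name is hypothetical):

```python
from itertools import product

def generalised_php(n, k):
    """Check: nk+1 pigeons in n holes forces k+1 pigeons in some hole,
    and the maximum occupancy is at least the average occupancy."""
    for holes in product(range(n), repeat=n * k + 1):  # hole of each pigeon
        counts = [holes.count(j) for j in range(n)]
        if max(counts) < k + 1 or max(counts) < sum(counts) / n:
            return False
    return True

assert generalised_php(3, 2)
```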