* COMMENT el
# (findistfile "glyphs.el")
#
# (load "~/LATEX/istanbulglyphs.el")
# (eevucsetcomposes)
# (findxpdfpage "~/LATEX/2014istanbula.pdf")
* defs
# (findistfile "quotes.tex" "\\def\\BF")
\def\BF#1{\noindent{\bf#1}\quad}
\def\liml{\underleftarrow {\lim}{}}
\def\limr{\underrightarrow{\lim}{}}
\def\frakCat{\mathfrak{Cat}}
\def\frakTop{\mathfrak{Top}}
\def\elephantpage#1{((page #1))}
\def\ob{{\operatorname{ob}}}
\def\sh{\mathbf{sh}}
\def\Sh{\mathbf{Sh}}
\def\Sp{\mathbf{Sp}}
\def\Lop{\mathbf{Lop}}
\def\Hom{\mathrm{Hom}}
\def\sdd{\ssk{\scriptsize (...)}\ssk}
\def\mdd{\msk{\scriptsize (...)}\msk}
* title
{\bf Logic and Categories, or:
Heyting Algebras for Children}
A tutorial presented at the UniLog 2015 conference (Istanbul),
June 20-22, 2015
\msk
By Eduardo Ochs
{\tt eduardoochs@gmail.com}
\url{http://angg.twu.net/math-b.html#istanbul}
\msk
Version: 2015jun22
* 
* TEXED :big: (Nonmathematical) introduction
\bsk
** TEXED Why study CT {\sl now}?
\includegraphics[width=8.5cm]{garota_de_ipanema.jpg}
\par Public education in Brazil is being dismantled -
\par maybe we should be doing better things than studying
\par very technical \& inaccessible subjects
\par with no research grants 
** TEXED Category theory should be more accessible
\par Most texts about CT are for specialists in research universities...
\par {\sl Category theory should be more accessible.}
\msk
\par To whom?...
\begin{itemize}
\item Nonspecialists (in research universities)
\item Grad students (in research universities)
\item Undergrads (in research universities)
\item Nonspecialists (in conferences - where we have to be quick)
\item Undergrads (e.g., in CompSci in teaching colleges) (``Children'')
\end{itemize}
** DONE What do we mean by "accessible"?
1) Done on very firm grounds:
   mathematical objects made from numbers, sets and tuples;
   FINITE, SMALL mathematical objects whenever possible;
   avoid references to nonmathematical things like
   windows, cars and pizzas (like the OO people do);
   avoid references to Physics - avoid QM at all costs;
   time is difficult to draw - prefer static rather than changing.
2) People have very short attention spans nowadays
3) Self-contained, but not isolated or isolating -
our material should make the literature more accessible
4) We learn better by doing.
Our material should have lots of space for exercises.
5) Most people with whom I interact are not from Maths/CS/etc
6) Proving general cases is relatively hard.
Checking and calculating is much easier.
People can believe that something can be generalized after
seeing a convincing particular case.
(Sometimes leave them to look for the right generalization
by themselves)
** DONE Generalizations
 General case               Full things
     |    ^                     |    ^
     |    :                     |    :
 projection : lifting      projection : lifting
     |    :                     |    :
     v    :                     v    :
 Particular case            Things for children
Paper:
Internal Diagrams and Archetypal Reasoning in Category Theory
Eduardo Ochs 2013 - Logica Universalis
http://angg.twu.net/ <- my home page
http://angg.twu.net/math-b.html
http://angg.twu.net/math-b.html#idarct
The type-theoretical view of the formalization of all that
is something that I can discuss with very few people =(
The parts "for children" are more fun
(esp. because I don't have to do them alone),
so I have to get good excuses for working on that
instead of on the very technical type-theoretical parts.
** DONE Yesterday and slides
To do: apologize for yesterday
I overestimated my TeXnical skills!!! =(
Some of my diagrams need LuaLaTeX extensions
that took me _much_ longer to write than I expected,
and the emergency tools that let me use ascii+utf8 slides
also took me one (_very_ frantic) day more than I had.
...so things were not ready yesterday - and as I could not
connect my computer directly to the projector, and I did not
have enough material in PDF form, I had to cancel.
These ascii slides will be at
http://angg.twu.net/math-b.html
soon.
Nice (Lua)(La)TeXed slides should be ready soon,
for some adequate, notverysmall value of "soon".
* 
* :big: The very basics
** DONE Evaluation: steps
# (findes "tex" "underbrace")
How do we evaluate an expression like 2*3+4*5
to obtain its value, 26?
This involves a series of steps...
There are several ways to represent these steps:
2*3+4*5          2*3+4*5 --> 2*3+20
 \_/ \_/            |           |
  6   20            v           v
   \ /            6+4*5 --> 6+20 --> 26
   26

2*3+4*5 = 6+20       2*3+4*5 = 2*3+20
        = 26                 = 6+20
                             = 26
The technical term for the arrows is ``reductions''.
Informally, they are ``evaluation steps''.
** DONE Evaluation sequences
2*3+4*5          2*3+4*5 --> 2*3+20
 \_/ \_/            |           |
  6   20            v           v
   \ /            6+4*5 --> 6+20 --> 26
   26

2*3+4*5 = 6+20       2*3+4*5 = 2*3+20
        = 26                 = 6+20
                             = 26
To evaluate is to reduce, and reduce, and reduce,
until there is nothing more to do - until we reach
something that is ``fully evaluated''.
Note that two ``evaluation sequences'' starting
at the same expression may ``diverge'' at some point,
but they necessarily ``converge'' later - this is
what assures us that the result of evaluation is
unique.
(Technical terms: ``confluence'', ``diamond lemma'',
``normalization''...)
** DONE Algebra: tables
Some expressions, e.g. (a+b)(a-b) and a*a-b*b,
return equal values for all inputs...
We can /test/ whether (a+b)(a-b)=a*a-b*b
in a /finite number/ of cases using tables,

  a  b  (a+b)(a-b)  a*a-b*b  (a+b)(a-b)=a*a-b*b
 -----------------------------------------------
  0  0       0          0             T
  2  3      -5         -5             T
 10  1      99         99             T

but we can not /know/ that (a+b)(a-b)=a*a-b*b
just by checking a finite number of table lines...
** DONE Algebra: reductions
To understand why (a+b)(a-b)=a*a-b*b we need /algebra/,
which includes reductions like these, which use distributivity
always in the "expansion" direction, i.e., x(y+z)->xy+xz:

(a+b)(a-b) --> (a+b)a-(a+b)b --> (a+b)a-ab-bb
    |               |                 |
    |               v                 v
    |          aa+ba-(a+b)b ---> aa+ba-ab-bb
    |               :                 :
    v               :                 :
a(a-b)+b(a-b) --> a(a-b)+ba-bb        :
    |               |                 :
    v               v                 v
aa-ab+b(a-b) --> aa-ab+ba-bb ......> aa-bb

...but then we don't get confluence!
To get the two "..>" arrows we need
commutativity of + and ·, plus x-x=0...
We need several new things to learn how to do
"calculations with letters"!...
We can teach evaluation to small children.
We can only teach algebra to /older/ children,
who have already learned evaluation.
The important thing:
evaluation comes first,
algebra comes later.
** DONE Logic: evaluation and tautologies
Logic is simpler than algebra (for children),
because we can prove lots of things just by
looking at tables with a finite number of lines.
For example, we can check that P&Q->P∨Q just by
calculating the result of P&Q->P∨Q in all four cases.

(P -> Q)∨(Q -> P)      P Q   P->Q  Q->P  (P->Q)∨(Q->P)
(0 -> 1)∨(1 -> 0)      0 0    1     1         1
   \__/     \__/       0 1    1     0         1
    1        0         1 0    0     1         1
     \______/          1 1    1     1         1
        1

P Q   P&Q  P∨Q  P->Q
0 0    0    0    1
0 1    0    1    1
1 0    0    1    0
1 1    1    1    1
When we teach logic to children it is
good karma to spend a LONG time on this:
what propositions are tautologies?
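These checks can be done mechanically. Here is a minimal sketch in Python (mine, not part of the original slides; the function names `IMP` and `is_tautology` are hypothetical): a proposition is a tautology iff its truth table has a 1 on every line.

```python
from itertools import product

def IMP(p, q):
    # The usual table for P -> Q over {0,1}: false only at (1,0).
    return 0 if (p == 1 and q == 0) else 1

def is_tautology(f, nvars):
    # Check all 2^nvars lines of the truth table.
    return all(f(*vals) == 1 for vals in product((0, 1), repeat=nvars))

# P & Q -> P ∨ Q: true in all four cases
print(is_tautology(lambda p, q: IMP(p & q, p | q), 2))          # True
# (P -> Q) ∨ (Q -> P): also a classical tautology
print(is_tautology(lambda p, q: max(IMP(p, q), IMP(q, p)), 2))  # True
```

Spending a LONG time on examples like these is exactly the "good karma" mentioned above.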
** DONE A glimpse at other logics
The truth values of classical logic are {0,1}.
The truth values of O(2)-logic are {00,01,11}.
Some propositions, like ¬¬P->P, are tautologies
in classical logic but not in O(2)-logic...

P  Q   P&Q  P∨Q  P->Q       P   ¬P  ¬¬P  ¬¬P->P
00 00   00   00   11        00  11   00    11
00 01   00   01   11        01  00   11    01
00 11   00   11   11        11  00   11    11
01 00   00   01   00
01 01   01   01   11
01 11   01   11   11
11 00   00   11   00
11 01   01   11   01
11 11   11   11   11
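The tables above can be generated by brute force. A sketch in Python (mine, not the author's code), assuming the encoding of O(2)-logic as the three-element chain 00 ≤ 01 ≤ 11:

```python
# Truth values of O(2)-logic, ordered by inclusion (00 <= 01 <= 11).
OMEGA = ["00", "01", "11"]

def leq(p, q):
    return OMEGA.index(p) <= OMEGA.index(q)

def AND(p, q): return p if leq(p, q) else q       # meet = min
def OR(p, q):  return q if leq(p, q) else p       # join = max

def IMP(p, q):
    # P -> Q is the largest R such that R & P <= Q.
    return max((r for r in OMEGA if leq(AND(r, p), q)), key=OMEGA.index)

def NOT(p):
    return IMP(p, "00")

# ¬¬P -> P is not a tautology: it gives 01, not 11, at P = 01
print([IMP(NOT(NOT(p)), p) for p in OMEGA])
```

This reproduces the ¬¬P->P column of the table above: 11, 01, 11.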
** DONE $λ$-calculus: subtopics
Lambda-calculus, even for children, is a big topic...
We will see it as this series of subtopics:
  * the usual way of defining (named) functions
  * the meaning of `:' and `->' in f:A->B
  * the meaning of `|->'
  * evaluation, a.k.a. reduction
  * lambda-notation
  * beta-reduction
  * the diamond lemma
  * name clashes
  * pairing and projection
  * types
  * contexts, pre-judgments and judgments
  * unions of contexts
  * type inference
  * Set as a model for lambda-1
** DONE Defining named functions
The usual notation for defining functions is like this:

  f: N -> R                      (*)
     n |-> 2+sqrt(n)

  name: domain -> codomain
        variable |-> expression

It creates _named_ functions.
To use this notation we have to fill up five slots,
and use a ":", a "->" and a "|->" in the right places.
After stating (*) we can "reduce" expressions with f, like this:

  f(4+5) --> 2+sqrt(4+5)
    :             :
    v             v
  f(9)   --> 2+sqrt(9) --> 2+3 --> 5
** DONE $λ$-calculus: defining unnamed functions
Now compare:

  name: domain -> codomain
        variable |-> expression     name = \variable.expression

  g: N -> R
     a |-> a*a+4                    h = \a.a*a+4

  g(2+3) ------> g(5)        h(2+3) ------------> h(5)
     |             |            |                   |
     v             |            v                   v
     |             |   (\a.a*a+4)(2+3) --> (\a.a*a+4)(5)
     v             |            |                   |
  (2+3)*(2+3)+4    |            v                   |
     |   \         |      (2+3)*(2+3)+4             |
     |    v        |         |   \                  |
     | (2+3)*5+4   |         |    v                 |
     |      \      |         | (2+3)*5+4            |
     v       v     v         v      \               v
  5*(2+3)+4 --> 5*5+4     5*(2+3)+4 -----------> 5*5+4

Note that the parentheses are important...
  2+3*2+3+4 != (2+3)*(2+3)+4
Note also that in h = \a.a*a+4 we have used
neither the domain nor the codomain...
** DONE Functions as lists of `$\mapsto$'s
Functions can be regarded as sets of (input,output) pairs.
For example,
  f: {4,5,6} -> R
         a |-> 10a
_is_ the set {(4,40),(5,50),(6,60)}.
Just as we saw how to fully evaluate expressions like 2·3+4·5 -
the _value_ of 2·3+4·5 is 26 because on numbers there are
no further evaluation steps to be done -
the _value_ of the f above is {(4,40),(5,50),(6,60)}.
Alternative notations:
\csm{(4,40), && (5,50), && (6,60)} =
\csm{4↦40, && 5↦50, && 6↦60} =
\csm{4↦40 && 5↦50 && 6↦60} =
\sm {4↦40 && 5↦50 && 6↦60}
We will be sloppy with commas in the {}/↦ notation.
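In a programming language this view of functions is literal. A tiny sketch (mine, not from the slides): a finite function is just a dict of ↦'s.

```python
# f: {4,5,6} -> R,  a |-> 10a, stored as its set of (input, output) pairs.
f = {4: 40, 5: 50, 6: 60}

print(f[5])               # applying f: f(5) = 50
print(sorted(f.items()))  # the function *is* its list of pairs
```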
** DONE Trees
We are going to use trees for lots of things, e.g.:
  represent constructions
  compare general and particular cases (with parallel trees)
  add and erase information
  track free variables / undischarged hypotheses
  understand translations (including Curry-Howard)

     p                       (2,5)                    (2,5)
  --------                 ---------                 -------
  p    π'p   f        (2,5)  π'(2,5)  sqrt       (2,5)   5   sqrt
  --   ---------      ------ -------------       -----  ---------
  πp    f(π'p)        π(2,5)  sqrt(π'(2,5))        2     sqrt(5)
  --------------      ----------------------     ----------------
   (πp,f(π'p))        (π(2,5),sqrt(π'(2,5)))       (2,sqrt(5))

   p:A×B                           (2,5)∈Z×N
  -------                         -----------
  p:A×B  π'p:B   f:B->C      (2,5)∈Z×N  5∈N   sqrt:N->R
  -----  --------------      ---------  ---------------
   πp:A    f(π'p):C            2∈Z        sqrt(5)∈R
  ---------------------      --------------------------
    (πp,f(π'p)):A×C              (2,sqrt(5))∈Z×R
** DONE Trees: free variables, discharges, sequents, CurryHoward...

   [p]^1                   p⊢p                       [P&Q]^1
  -------                 -------                    -------
  [p]^1  π'p   f        p⊢p  p⊢π'p   f⊢f       [P&Q]^1   Q   Q->R
  -----  ---------      ----  ------------     -------   --------
   πp     f(π'p)        p⊢πp  f,p⊢f(π'p)          P         R
  ----------------      ------------------     ---------------
    (πp,f(π'p))          f,p⊢(πp,f(π'p))             P&R
  ----------------1     ------------------     ---------------1
   λp.(πp,f(π'p))        f⊢λp.(πp,f(π'p))         P&Q->P&R

   p:A×B
  -------
  p:A×B  π'p:B   f:B->C      p:A×B⊢π'p:B    f:B->C
  -----  --------------     ------------------------
   πp:A    f(π'p):C         p:A×B⊢πp:A  p:A×B⊢f(π'p):C
  ---------------------     ---------------------------
    (πp,f(π'p)):A×C           p:A×B⊢(πp,f(π'p)):A×C

  π':A×B->B    f:B->C
  --------------------
  π:A×B->A   f∘π':A×B->C
  -----------------------
  \ang{π,f∘π'}:A×B->A×C
** DONE Alice's logic and Bob's logic
(For students of Discrete Mathematics)
Our friends Alice and Bob have both published books on logic,
with some stylistic differences...
For Alice, Classical Propositional Logic is this,
(Ω,⊤,⊥,∧,∨,→,¬,↔) =
({0,1}, 1, 0, {(0,0)↦0,  {(0,0)↦0,  {(0,0)↦1,  {0↦1,  {(0,0)↦1,
               (0,1)↦0,   (0,1)↦1,   (0,1)↦1,   1↦0},  (0,1)↦0,
               (1,0)↦0,   (1,0)↦1,   (1,0)↦0,          (1,0)↦0,
               (1,1)↦1},  (1,1)↦1},  (1,1)↦1},         (1,1)↦1})
In Bob's book, Classical Propositional Logic is
(Ω,⊤,∧,¬) = ...
with all the other operators being defined from these...
How do we translate from one of these notions to the other?
Suppose that Alice's book has a theorem that is not in Bob's book.
We may need to translate both the /theorem/ and the /proof/.
At one point, when we were studying Alice's book and feeling that
we were on shaky ground, we thought of "∧" as being the 4th
component of the structure above...
A very low-level translation uses things like "4th component".
A higher-level translation refers to components by their names.
* 
* :big: (Planar) Heyting Algebras for children
** DONE A game - on $N^2$
Let's consider a game played
on an infinite board, N^2,
with black and white pieces.
* This is a black piece.
It is solid. It is heavy.
It tends to sink.
/Black pieces move down/.
O This is a white piece.
It is hollow. It is light.
It tends to float.
/White pieces move up/.
Black moves:
     *             (x,y)               (0,y)
    /|\            / | \                | \
   v v v          v  v  v               v  v
         (x-1,y-1) (x,y-1) (x+1,y-1) (0,y-1) (1,y-1)

White moves:
         (x-1,y+1) (x,y+1) (x+1,y+1) (0,y+1) (1,y+1)
   ^ ^ ^          ^  ^  ^               ^  ^
    \|/            \ | /                | /
     O             (x,y)               (0,y)

Note that moves are only possible
between /valid positions/ -
this rules out positions with x=-1.
** DONE A game - on subsets of $N^2$
Let's consider a game played
on an infinite board, N^2,
with black and white pieces...
Note that moves are only possible
between /valid positions/ -
this rules out positions with x=-1.
We may restrict the
valid positions even more.
Let A ⊆ N^2.
Let BM(A) be the set of black moves
starting and ending on points of A, and
let WM(A) be the set of white moves
starting and ending on points of A.
Then BM(A) and WM(A) are sets of arrows,
and (A,BM(A)) and (A,WM(A)) are graphs.
(A,BM(A)) and (A,WM(A)) are directed,
acyclic graphs - DAGs.
** DONE A game - on $K$
Let K ("Kite") be this subset of N^2:
K = {(1,3),(0,2),(2,2),(1,1),(1,0)}
Then (K,BM(K)) is the DAG at the left below,
which is isomorphic to the DAG K' at the right...
   (1,3)            1
   /   \           / \
  v     v         v   v
(0,2) (2,2)      2     3
   \   /          \   /
    v v            v v
   (1,1)            4
     |              |
     v              v
   (1,0)            5

(K,BM(K)) = ({(1,3), (0,2),(2,2), (1,1), (1,0)},
             {((1,3),(0,2)), ((1,3),(2,2)),
              ((0,2),(1,1)), ((2,2),(1,1)),
              ((1,1),(1,0))})
K' = ({1, 2,3, 4, 5},
      {(1,2),(1,3), (2,4),(3,4), (4,5)})
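The set BM(K) above can be computed mechanically. A sketch in Python (mine, not the author's zha.lua; the name `BM` follows the slides):

```python
def BM(A):
    # A black move goes from (x,y) to (x-1,y-1), (x,y-1) or (x+1,y-1);
    # both endpoints must be points of A.
    return {((x, y), (x + dx, y - 1))
            for (x, y) in A
            for dx in (-1, 0, 1)
            if (x + dx, y - 1) in A}

K = {(1, 3), (0, 2), (2, 2), (1, 1), (1, 0)}
print(sorted(BM(K)))
# 5 arrows: (1,3)->(0,2), (1,3)->(2,2), (0,2)->(1,1),
#           (2,2)->(1,1), (1,1)->(1,0)
```

WM(A) would be the same with `y + 1` instead of `y - 1`.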
** DONE A game - definition aborted
When I talked about ``game'' that was a trick...
I just wanted to define BM(A) and WM(A) on an A ⊆ N^2.
We don't need notions like ``initial position'',
``turn'', ``winning'', etc.
From now on we will forget the idea of a ``game''.
** DONE Transitivereflexive closure
Let (A,R) be a (directed) graph; note that R ⊆ A^2.
We will write
(A,R^+) for the transitive closure of (A,R), and
(A,R^*) for the transitivereflexive closure of (A,R).
(A,R^+) is (A,R) ``plus all the composites'',
(A,R^*) is (A,R^+) ``plus all the identity arrows''.
Remember that when A ⊆ N^2
then (A,BM(A)) and (A,WM(A)) are DAGs...
When we pass to (A,BM(A)^*) and (A,WM(A)^*)
we don't get cycles - except for the identity arrows!
(A,BM(A)^*) and (A,WM(A)^*) are directed,
``almost acyclic'' graphs...
More precisely, (A,BM(A)^*) and (A,WM(A)^*)
are /partial orders/!
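The closure R^* can be computed by iterating "add composites" until nothing changes, then adding the identity arrows. A sketch (mine; the name `star` is hypothetical):

```python
def star(A, R):
    # Transitive-reflexive closure of the relation R on the set A.
    R = set(R)
    while True:
        composites = {(a, c) for (a, b) in R for (b2, c) in R if b == b2}
        if composites <= R:       # nothing new: R is now transitive
            break
        R |= composites
    return R | {(a, a) for a in A}   # add the identity arrows

A = {1, 2, 3}
R = {(1, 2), (2, 3)}
print(sorted(star(A, R)))
# [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
```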
** DONE Well-positioned subsets of $N^2$
Let A ⊆ N^2, and let (Δx,Δy) ∈ Z^2.
Let A' = {(x+Δx,y+Δy)  (x,y)∈A}.
Then (A,BM(A)) and (A',BM(A')) are isomorphic!
For example, if A=K and (Δx,Δy)=(80,90),
then (A,BM(A)) and (A',BM(A')) are:
   (1,3)              (81,93)
   /   \              /     \
  v     v            v       v
(0,2) (2,2)      (80,92)  (82,92)
   \   /              \     /
    v v                v   v
   (1,1)              (81,91)
     |                   |
     v                   v
   (1,0)              (81,90)
We are only interested in the /graphs/
generated by subsets of N^2 - so let's
create a notion of ``canonical position''...
Also, we are only interested in the graphs
generated by /finite/, /nonempty/ subsets of N^2...
** DONE Well-positioned subsets of $N^2$ (2)
Definition:
a subset A of N^2 is /well-positioned/
when it contains a point of the form (0,y)
and a point of the form (x,0), i.e.,
it touches both axes, i.e.,
it can't be translated left or down
without going outside of N^2...
For any nonempty subset A of N^2,
there is exactly one (Δx,Δy)∈N^2
such that A-(Δx,Δy) is well-positioned.
   (1,3)              (81,93)
   /   \              /     \
  v     v            v       v
(0,2) (2,2)      (80,92)  (82,92)
   \   /              \     /
    v v                v   v
   (1,1)              (81,91)
     |                   |
     v                   v
   (1,0)              (81,90)
** DONE ZSets
Definition:
A /ZSet/ is a finite, nonempty,
well-positioned subset of N^2.
The nice thing about ZSets is that
we can represent them unambiguously
using a bullet notation. For example,
 *
* *
 *
*
is {(1,3), (0,2), (2,2), (1,1), (0,0)}.
** DONE ZSets, ZDAGs, ZPosets
Definitions:
A /ZSet/ is a finite, nonempty, well-positioned subset of N^2.
A /ZDAG/ is a DAG (A,BM(A)) or (A,WM(A)), where A is a ZSet.
A /ZPoset/ is a poset (A,BM(A)^*) or (A,WM(A)^*), where A is a ZSet.
Examples:
Let H ("House") be this ZSet here:
 *
* *
* *
then:
(H, BM(H)) and (H, WM(H)) are ZDAGs
(H, BM(H)^*) and (H, WM(H)^*) are ZPosets
** DONE Zfunctions
A Zfunction is a function whose domain is a ZSet.
We can represent Zfunctions using a positional notation.
For example, with K={(1,3), (0,2),(2,2), (1,1), (1,0)},
the function f:K->N such that
f = {((1,3),1), ((0,2),2), ((2,2),3), ((1,1),4), ((1,0),5)}
can be represented as:
 1
2 3
 4
 5
Note that its domain, K, can be inferred from the shape
 *
* *
 *
 *
but its codomain has to be deduced from the context.
** DONE The reading order
The /reading order/ for a ZSet A
is the bijection from A to {1, 2, ..., |A|}
that enumerates the elements of A from top
to bottom, going from left to right in each line.
Here are some reading orders:
                       1
 1         1          2 3
2 3       2 3         4 5
 4       4 5       6 7 8 9
The reading order can be used to formalize
a way to ``read aloud'' Zfunctions. For example,
we can read aloud
 3
2 40
1 50
as 3, 2, 40, 1, 50.
(And this gives a kind of a lexicographic order on
Zfunctions on the same ZSet)
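The reading order is easy to compute: sort the points from top to bottom, and from left to right within each line. A sketch (mine; the name `reading_order` is hypothetical):

```python
def reading_order(A):
    # Enumerate the points of a ZSet top-to-bottom, left-to-right,
    # i.e. sorted by (-y, x); return the bijection A -> {1,...,|A|}.
    pts = sorted(A, key=lambda p: (-p[1], p[0]))
    return {p: i + 1 for i, p in enumerate(pts)}

K = {(1, 3), (0, 2), (2, 2), (1, 1), (1, 0)}
ro = reading_order(K)
print([ro[p] for p in [(1, 3), (0, 2), (2, 2), (1, 1), (1, 0)]])
# [1, 2, 3, 4, 5]
```

Reading a Z-function aloud is then just listing its values in this order.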
** DONE ZHAs
A Heyting Algebra, as we will see later,
is a structure (a logic!) (Ω,≤,⊤,⊥,∧,∨,→,¬)
in which ≤,⊤,⊥,∧,∨,→,¬ obey the axioms
of /intuitionistic/ propositional calculus.
A ZHA is a ZPoset having:
a bottom point and a top point,
a left wall and a right wall,
all the sameparity points between the two walls,
and no other points.
Some examples of ZHAs:
(five diagrams drawn with `o's, named:
 dot, diamond5, L, squiggle, weird)
** DONE ZHAs (2)
A ZHA is a ZPoset having:
a bottom point and a top point,
a left wall and a right wall,
all the sameparity points between the two walls,
and no other points.
(diagrams: three piles of `o's; the first two
are ZHAs, the third is NOT a ZHA)
** DONE ZHAs (formally)
A ZHA is a ZPoset having:
a bottom point and a top point,
a left wall and a right wall,
all the sameparity points between the two walls,
and no other points.
A ZHA D is generated by:
  a height maxy ≥ 0
  a bottom point (x_0, 0)
  left wall and right wall functions L,R:{0,1,...,maxy}->N
obeying:
  L(0)=R(0)=x_0       (bottom point)
  L(maxy) = R(maxy)   (top point)
  0 ≤ L(y) ≤ R(y)     (left wall ≤ right wall)
  L(y+1) = L(y)±1     (left wall changes in unit steps)
  R(y+1) = R(y)±1     (right wall changes in unit steps)
  L(y)=0 for some y   (to ensure well-positionedness)
then
  D ∩ {(x,y) | x ∈ N} is empty when y>maxy,
  D ∩ {(x,y) | x ∈ N} = {L(y), L(y)+2, L(y)+4, ..., R(y)}
  when y∈{0,1,...,maxy}.
** DONE ZHAs (formally) (2)
A ZHA is a ZPoset having:
a bottom point and a top point,
a left wall and a right wall,
all the sameparity points between the two walls,
and no other points.
 o        L(9)=1 R(9)=1   maxy=9   L(maxy)=R(maxy)
o o
 o
  o
   o
  o o
 o o o
  o o
   o o    L(1)=3 R(1)=5
    o     L(0)=4 R(0)=4   L(0)=R(0)=x_0=4
** DONE ZHAs: the (l,r) coordinates
# (finddn5 "zha.lua" "ZHAtests")
We can define the (l,r) coordinates
on a ZHA /informally/ in this way:
the bottom point has (l,r)=(0,0);
when we go one unit northwest,
i.e., from (x,y) to (x-1,y+1),
the l coordinate increases by 1;
when we go one unit northeast,
i.e., from (x,y) to (x+1,y+1),
the r coordinate increases by 1.
We usually write the (l,r) coordinates
of a point as just lr - two digits together,
without the `(', `,', `)'.
Here are two examples:
      46
     45 36            63
     35 26           62 53
     34 25            52 43
      24             51 42 33
     23             50 41 32 23
    22               40 31 22
   21                 30 21 12
  20 11                20 11 02
   10 01                10 01
    00                   00
** DONE ZHAs are Heyting Algebras (introduction)
Magic fact / big theorem:
ZHAs are Heyting Algebras.
Version for children: we will learn
1. how to interpret ≤,⊤,⊥,∧,∨,→,¬,↔ in any given ZHA
2. what properties that ≤,⊤,⊥,∧,∨,→,¬,↔ have to obey in a HA
3. that our ZHA operations from (1) obey the properties in (2)
The properties are these:
(id) ∀P.(P≤P)
(comp) ∀P,Q,R.(P≤Q)∧(Q≤R)→(P≤R)
(⊤) ∀P.(P≤⊤)
(⊥) ∀P.(⊥≤P)
(∧) ∀P,Q,R.(P≤Q∧R)↔(P≤Q)∧(P≤R)
(∨) ∀P,Q,R.(P∨Q≤R)↔(P≤R)∧(Q≤R)
(→) ∀P,Q,R.(P≤Q→R)↔(P∧Q≤R)
(¬) ¬P := P→⊥
(↔) P↔Q := (P→Q)∧(Q→P)
** DONE Operations in ZHAs: $≤, ⊤, ⊥, ∧$ and $∨$
In a ZHA H,
⊤ is the top element of H,
⊥ is the bottom element (00), and
P≤Q is true iff P_L≤Q_L and P_R≤Q_R, or, equivalently,
P≤Q is true iff the ZPoset (H,WM(H)^*) has an arrow P->Q.
Also:
(P_L,P_R)&(Q_L,Q_R) := (min(P_L,Q_L),min(P_R,Q_R))
(P_L,P_R)∨(Q_L,Q_R) := (max(P_L,Q_L),max(P_R,Q_R))
Visually, this is:
(diagram)
** DONE Operations in ZHAs: $→$
In a ZHA H,
P≤Q returns 0 or 1, (≤:H×H→{0,1})
P→Q returns an element of H (→:H×H→H).
The definition of →:H×H→H is by cases.
(Note: before I explain this to people I usually
ask them to remember how weird they felt the
definition of →:{0,1}×{0,1}→{0,1} was when
they first saw it...)
Q→R := if Q is below R then ⊤ < case 1
elseif Q at left of R then (↗)(Q∧R) < case 2
elseif Q at right of R then (↖)(Q∧R) < case 3
elseif Q is above R then R < case 4
end
Note that the order matters!
If R=22, this is λQ.(which case we use to calculate Q→R):
4
4 4
2 4 3
2 2 3 3
2 2 1 3 3
2 1 1 3
1 1 1
1 1
1
# (finddn5 "zha.lua" "ZHAtestsimp")
# (λQ.Q>22) =
# (λR.22>R) =
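Before internalizing the four cases, one can compute Q->R by brute force, straight from the property (→): Q->R is the largest X with Q∧X ≤ R. A sketch in Python (mine, not the zha.lua library), using (l,r) pairs for the points of a small example ZHA:

```python
# A 10-element example ZHA, as a set of (l,r) coordinates.
H = {(3,2), (2,2), (2,1), (1,2), (2,0), (1,1), (0,2), (1,0), (0,1), (0,0)}

def leq(p, q):
    # In (l,r) coordinates: P <= Q iff both coordinates are <=.
    return p[0] <= q[0] and p[1] <= q[1]

def meet(p, q):
    return (min(p[0], q[0]), min(p[1], q[1]))

def imp(q, r):
    # Q -> R := the largest X in H with Q ∧ X <= R.
    # In a ZHA the candidates are closed under join, so the
    # element with the largest l+r is their maximum.
    return max((x for x in H if leq(meet(q, x), r)),
               key=lambda x: x[0] + x[1])

print(imp((1, 0), (0, 1)))   # (0, 2): in lr notation, 10 -> 01 = 02
print(imp((0, 0), (0, 0)))   # (3, 2): ⊥ -> ⊥ = ⊤
```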
** DONE Operations in ZHAs: $¬$ and $↔$
The operations ¬ and <-> in a ZHA are
defined in terms of the others:
¬P := P->⊥
P<->Q := (P->Q)&(Q->P)
** DONE Some non-tautologies: De Morgan
# (findist "all.lua" "ubstests")
In classical logic both De Morgan laws hold:
¬(P&Q)<->(¬P∨¬Q)
¬(P∨Q)<->(¬P&¬Q)
In intuitionistic logic these three implications are provable,
(¬P∨¬Q)->¬(P&Q)
¬(P∨Q)->(¬P&¬Q)    (¬P&¬Q)->¬(P∨Q)
while this one,
¬(P&Q)->(¬P∨¬Q),
is false in some ZHAs, for example here:
  32
  22
 21 12
20 11 02
 10 01
  00
¬(P  &  Q) -> (¬ P  ∨  ¬ Q)
¬(10 & 01) -> (¬10  ∨  ¬01)
    ¬00    -> ( 02  ∨   20)
     32    ->       22
           22
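This counterexample can be found by brute force over the ZHA above. A sketch (mine, not the author's Lua code), with all operations defined from the (l,r) coordinates:

```python
H = {(3,2), (2,2), (2,1), (1,2), (2,0), (1,1), (0,2), (1,0), (0,1), (0,0)}
TOP, BOT = (3, 2), (0, 0)

def leq(p, q):  return p[0] <= q[0] and p[1] <= q[1]
def meet(p, q): return (min(p[0], q[0]), min(p[1], q[1]))
def join(p, q): return (max(p[0], q[0]), max(p[1], q[1]))
def imp(q, r):
    # Q -> R = largest X with Q ∧ X <= R (unique maximum in a ZHA).
    return max((x for x in H if leq(meet(q, x), r)),
               key=lambda x: x[0] + x[1])
def neg(p):     return imp(p, BOT)

# Pairs (P,Q) where ¬(P&Q) -> (¬P ∨ ¬Q) is not ⊤:
bad = [(p, q) for p in H for q in H
       if imp(neg(meet(p, q)), join(neg(p), neg(q))) != TOP]
print(((1, 0), (0, 1)) in bad)   # True: fails at P=10, Q=01
```

The converse implication, (¬P∨¬Q)->¬(P&Q), evaluates to ⊤ for every pair, as the intuitionistic provability above predicts.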
** DONE Some non-tautologies: $¬¬P≤P$
Exercise (homework!):
check that in the obvious ZHA 
the same as in the previous slide 
we have:
               00
               00
              00 00
 (λP.¬P)  =  02 00 20
              02 20
               32

               32
               32
              32 32
 (λP.¬¬P) =  20 32 02
              20 02
               00

                 1
                 0
                0 0
 (λP.¬¬P=P)  = 1 0 1
 (λP.¬¬P≤P)     0 0
                 1
** DONE Lindenbaum Algebras
We've been working with algebras of truth values.
What happens when we try to pass to an algebra of propositions?
"Wff" = "wellformed formula".
Let A = Wffs(⊤,⊥,∧,∨,→,¬,↔,P,Q,R) be the set of wffs built from
the constants ⊤,⊥, the operators ∧,∨,→,¬,↔ and the
atomic propositions P,Q,R  plus parentheses.
Let D be the set of triples (P,Q,δ) where
δ is an /intuitionistic/ proof of P⊢Q. <- what is that????
Let D^ be the set of pairs (P,Q) obtained by deleting
the third component of each triple (P,Q,δ)∈D.
Then (A,D^) is a directed graph.
(A,D^) is reflexive and transitive - it is a /preorder/ on A.
It induces a quotient ~ on A.
(A/~,D^/~) is a partial order - the Lindenbaum Algebra
generated by etc etc etc etc.
** DONE Lindenbaum Algebras (2)
Let A = Wffs(⊤,⊥,&,∨,→,P,Q,R).
Let D be the set of triples (P,Q,δ) where
δ is an intuitionistic proof of P⊢Q (P,Q∈A).
Let D^ be the set of pairs (P,Q) obtained by deleting
the third component of each triple (P,Q,δ)∈D.
Then (A,D^) is a directed graph,
and the partial order (A/~,D^/~) is the Lindenbaum Algebra (...)
Lindenbaum Algebras are (free) Heyting Algebras.
We can visualize Lindenbaum Algebras by writing some
wffs in A and
1) treating them as vertices in a directed graph
2) drawing the arrows in D between them
3) making each wff stand for its equivalence class.
** DONE Lindenbaum Algebra for classical logic
Let A = Wffs(⊤,⊥,&,∨,→,¬,↔,P) be the set of wffs...
Let D be the set of triples (P,Q,δ) where
δ is a /classical/ proof of P⊢Q.
Then the Lindenbaum Algebra (A/~,D^/~) looks like this -
more precisely, it is the transitive-reflexive closure of this:
     ⊤
    ^ ^
   /   \
  P     ¬P
   ^   ^
    \ /
     ⊥
This is a Heyting Algebra - but it is not free.
It is a Boolean Algebra - the free one on one generator.
** DONE Lindenbaum Algebras for classical logic (2)
Let A = Wffs(⊤,⊥,&,∨,→,¬,↔,P,Q) be the set of wffs...
Let D be the set of triples (P,Q,δ) where
δ is a /classical/ proof of P⊢Q.
Then the Lindenbaum Algebra (A/~,D^/~) looks like this -
more precisely, it is the transitive-reflexive closure of
this hypercube:
      F
     7 E
    B 6 D
   3 5 A C
    2 9 4
     1 8
      0
This is the free Boolean Algebra on two generators.
(Hint: take P=3 and Q=5)
** DONE Lindenbaum Algebras for IPL are infinite
# https://en.wikipedia.org/wiki/Intuitionistic_logic
# https://en.wikipedia.org/wiki/File:RiegerNishimura.svg
Let I_1 = Wffs(⊤,⊥,&,∨,→,¬,↔,A) be the set of wffs...
Let D be the set of triples (P,Q,δ) where
δ is an /intuitionistic/ proof of P⊢Q.
Then the Lindenbaum Algebra (I_1/~,D^/~) has at least
these distinct points:
  43        G          G:=F∨F'
 42 33     F' F        F:=E∨E'   F':=F->E
 32 23     E E'        E:=D∨D'   E':=E->D
 31 22     D' D        D:=C∨C'   D':=D->C
 21 12     C C'        C:=B∨B'   C':=C->B
 20 11     B' B        B:=A∨A'   B':=B->A
 10 01     A A'        A:=10     A':=¬A
  00        00
A, A', B, B', ... are all distinct in the ZHA at the left above,
so there cannot be an intuitionistic proof that any two of them -
say, E and C - are equal.
^ This is an argument for adults!!!
We will make it children-friendly soon.
** DONE A computer library
# (finddn5 "zha.lua" "ZHA")
# (finddn5 "zha.lua" "ZHAtests")
The "spec" for a ZHA uses one char per line...
> ZHA.fromspeclr("123LLR432L1"):drawf()
    64
   63 54
    53 44
   52 43 34      <- 3
  51 42 33 24    <- 4
   41 32 23      <- R (means 3 again, but moved right)
  40 31 22       <- L (means 3 again, but moved left)
   30 21 12      <- L (means 3 again, but moved left)
    20 11 02     <- 3
     10 01       <- 2
      00         <- 1  <- first char of the spec
>
** DONE A computer library (2)
# (finddn5 "zha.lua" "ZHA")
# (finddn5 "zha.lua" "ZHAtests")
We can define functions using a very compact lambda-notation,
and use them to draw the values of Zfunctions.
L0 "P Eq(P, Not(Not(P)))"
> "function (P) return Eq(P, Not(Not(P))) end"
The ":drawL" method draws Zfunctions.
The global functions Eq, Not, And, Imp, etc
use the global variable "z"  that's why the "z =".
> z = ZHA.fromspeclr("123LLRR21"):drawf()
(...)
> z:drawL "P Not(P) "
(...)
> z:drawL "P Not(Not(P)) "
(...)
> z:drawL "P Eq(P, Not(Not(P)))"
     11
    00 00
   00 00 00
  00 00 00
 11 00 00
  00 00 00
   00 00 11
    00 00
     11
>
** TEXED When in doubt use brute force
# (finddn5 "zha.lua" "ZHAtestsbf")
# (findist "quotes.tex" "fourman")
\begin{quote}
{\sl When in doubt, use brute force.}
-- Ken Thompson
\end{quote}
In the beginning, and for several years, I had very little
intuition about how the $→$ behaved in a Heyting Algebra...
And to make things worse, there are lots of things
that are important in Topos Theory -
namely: the $¬¬$ operator, modalities, sheaves, forcing -
about which I had no intuition whatsoever!
It {\sl had} to be possible to visualize them...
\msk
That was my main motivation for working with ZSets, ZHAs, etc.
\msk
Status of the project:
now I know how to visualise {\sl most} of the operations and theorems
in Fourman/Scott79 -
and {\sl some} things about sheaves and geometric morphisms,
mainly from Peter Johnstone's ``Sketches of an Elephant'' -
and now I am in a position to get help from adult toposophers! =) =) =)
# (findxdvipage "~/LATEX/istanbulquotes.dvi" 1 "Fourman")
# (findxdvipage "~/LATEX/istanbulquotes.dvi" 6 "Elephant")
* 
* :big: Categories
** DONE Categories
# (find854 "" "moreonwd" "is a tuple:")
# (find854page 25 "is a tuple:")
A category
C = (C_0, Hom_C, id_C, o_C; idL_C, idR_C, assoc_C)
is a directed graph on steroids.
C_0 is the set (<- version for children) of ``objects'' of C.
Hom_C(A,B) returns the set (<- same)
of arrows (``morphisms'', ``maps'') from A to B.
id_C(A) returns an ``identity map'' id_A:A->A.
o_C tells how ``composable'' maps (whose tails and heads meet)
compose; we use ``;'' to write the composition in ``diagrammatic order''.
     f     g
  A --> B --> C
   \_________/
   (g o f)=(f;g)
idL_C and idR_C tell us that identity maps are cancellable
at the left and at the right: (id_A;f)=f, (f;id_B)=f.
assoc_C tells us that composition is associative: (f;g);h=f;(g;h).
A protocategory is just this:
C^ = (C_0, Hom_C, id_C, o_C)
Note (for adults!!!):
the ``properties'' idL_C, idR_C, assoc_C only make sense
when we know how to compare morphisms for equality.
** DONE Set is a category
This is our archetypal category:
Set = (Set_0, Hom_Set, id_Set, o_Set; idL_Set, idR_Set, assoc_Set)
The objects of Set are sets  so, for example,
{0,1}, {2,3,4,5,6}, {7,8,9}, \empty, \N, \R ∈ Set_0.
The morphisms of Set are the maps between sets - so, for example,
Hom_Set({0,1}, {2,3,4,5,6}) has 5^2 elements (functions!),
Hom_Set({2,3,4,5,6}, {7,8,9}) has 3^5 elements.
Composition in Set is composition of functions. For example:
                 2|->7
                 3|->8
                 4|->9
      0|->4      5|->9
      1|->2      6|->9
{0,1} --> {2,3,4,5,6} --> {7,8,9}
     \___________________/
          0|->9
          1|->7
And id_{7,8,9} = {7|->7, 8|->8, 9|->9}.
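With functions-as-dicts the composite above is one line of code. A sketch (mine, not from the slides), using diagrammatic order:

```python
def compose(f, g):
    # f;g = "f, then g": follow f's |->'s, then g's |->'s.
    return {a: g[f[a]] for a in f}

f = {0: 4, 1: 2}                      # {0,1} -> {2,3,4,5,6}
g = {2: 7, 3: 8, 4: 9, 5: 9, 6: 9}    # {2,3,4,5,6} -> {7,8,9}
print(compose(f, g))                  # {0: 9, 1: 7}
```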
** DONE ZPosets are categories
A ZPoset, like K_↓*
(the "kite" K with the transitive-reflexive closure of BM(K))
can be seen as a category, in which
Hom_{K_↓*}(A,B) = / {(A,B)}  when there is an arrow from A to B,
                  \ {}       when there isn't an arrow from A to B.
For example:
       ((1,3),(0,2))       ((0,2),(1,0))
(1,3) -------------> (0,2) -------------> (1,0)
      \_________________________________/
               ((1,3),(1,0))
Posets can be seen as categories in which
each Homset has at most one element.
Warning (for children!):
The ``can be seen'' is important!!!
Many slides ago we saw K_↓* as a pair
made of a ZSet K and a subset of K×K...
But K_↓* ``as a category'' is a 7-uple!
Abuse of language!
Also, morphisms here are funny things 
they are not functions, they are /pairs/!
** DONE Categories: associativity
Associativity assures us that
diagrams whose cells commute
commute fully.
Example:
A --> B --> C
|     |     |
v     v     v
D --> E --> F

ab;bc;cf = ab;(bc;cf)
         = ab;(be;ef)
         = (ab;be);ef
         = (ad;de);ef
** DONE Categories: iso(morphisms) and identities
An iso is a map f:A->B
for which there is an f^-1:B->A
such that (f;f^-1)=id_A and (f^-1;f)=id_B.
        f
   A ------> B
   |       / |
 id_A  f^-1  id_B
   |  /      |
   v v       v
   A ------> B
        f
Inverses are unique. When we have both
(f;g)=id_A and (g;f)=id_B and
(f;h)=id_A and (h;f)=id_B
then we can prove that g=h:
       g,h
   B ------> A       g = g;id_A
   |       / |         = g;(f;h)
 id_B   f    id_A      = (g;f);h
   |  /      |         = id_B;h
   v v       v         = h
   B ------> A
       g,h
Note that we used identities (id_C) to define isos,
and the properties (idL_C, idR_C)
that say that identities are neutral elements
to prove that inverses are unique.
** DONE Functors
A functor F: C → D is:
F = (F_0, F_1; respids_F, respcomp_F)
it is made of an _action on objects_, F_0,
that takes objects of C to objects of D,
and an _action on morphisms_, F_1,
that takes morphisms in C to morphisms in D...
and ``properties'' (that appear after the ";"):
respids_F = ∀A∈C_0. id_D(F_0(A)) = F_1(id_C(A))
respcomp_F = ∀A,B,C∈\C_0. ∀f:A>B, g:B>C. F_1(f;g)=F_1(f);F_1(g)
** DONE Functors  on diagrams
A functor F:\C>\D takes
1. composable maps in \C to composable maps in \D
2. diagrams in \C to diagrams in \D
3. commutative diagrams in \C to commutative diagrams in \D
4. isos in \C to isos in \D
Exercise / homework:
show that if the diagram at the left below commutes
then its image by F commutes too, and F_1(f^-1)=F_1(f)^-1.
Note: it has lots of (common!) shorthands...
You will have to figure them all out.
      f                     f
  A -----> B          FA -----> FB
  |      / |           |      /  |
 id  f^-1  id     F   id  f^-1   id
  |  /     |    |->    |  /      |
  v v      v           v v       v
  A -----> B          FA -----> FB
  |   f    |           |    f    |
  g        h           g         h
  v        v           v         v
  C -----> D          FC -----> FD
      k                     k
** DONE Functors: $(A×)$ and $({0,1}×)$
  B |---> FB          B |---> F_0(B)
  |        |          |          |
  f |---> Ff          f |---> F_1(f)
  v        v          v          v
  C |---> FC          C |---> F_0(C)
      F                    F
  \C ---> \D          \C ---> \D

  B |---> (A×)B      B |---> (A×)_0(B)      B |---> A×B
  |          |       |           |          |        |
  f |---> (A×)f      f |---> (A×)_1(f)      f |---> λp.(πp,f(π'p))
  v          v       v           v          v        v
  C |---> (A×)C      C |---> (A×)_0(C)      C |---> A×C
      (A×)                (A×)                  (A×)
 \Set ---> \Set      \Set ---> \Set        \Set ---> \Set

  {2,3}   |---> {0,1}×{2,3}
    |                |
  2|->4   |--->  (0,2)|->(0,4), (1,2)|->(1,4),
  3|->6          (0,3)|->(0,6), (1,3)|->(1,6)
    v                v
 {4,5,6}  |---> {0,1}×{4,5,6}
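The {0,1}× example can be run directly on finite sets-as-dicts. A sketch (mine; `times_obj` and `times_mor` are hypothetical names for the two actions of the functor (A×)):

```python
def times_obj(A, B):
    # (A×)_0: takes the object B to A×B.
    return {(a, b) for a in A for b in B}

def times_mor(A, f):
    # (A×)_1: takes f:B->C to A×B -> A×C, (a,b) |-> (a, f(b)),
    # i.e. λp.(πp, f(π'p)).
    return {(a, b): (a, f[b]) for a in A for b in f}

A = {0, 1}
f = {2: 4, 3: 6}        # f: {2,3} -> {4,5,6}
Af = times_mor(A, f)
print(Af[(1, 3)])       # (1, 6), matching the diagram above
```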
** TEXED Functors: why $(A×)_1(f) = λp.(πp,f(π'p))$ ?
%: p:? p:A{×}B
%:  
%: p:? π'p:? f:B→C p:A{×}B π'p:B f:B→C
%:    
%: πp:? f(π'p):? πp:A f(π'p):C
%:  
%: (πp,f(π'p)):? (πp,f(π'p)):A{×}C
%:  
%: λp.(πp,f(π'p)):A{×}B→A{×}C λp.(πp,f(π'p)):A{×}B→A{×}C
%:
%: ^why1 ^why2
\pu
$$\ded{why1} \quad \ded{why2}$$
How would someone {\sl find} the expression $λp.(πp,f(π'p))$?
Idea: let's try to construct a function $?:A×B→A×C$ from $f:B→C$...
%: ?:A{×}B p:A{×}B
%: π' 
%: ?:A{×}B ?:B f:B→C p:A{×}B π'p:B f:B→C
%: π app  
%: ?:A ?:C πp:A f(π'p):C
%: \ang{,} 
%: ?:A{×}C (πp,f(π'p)):A{×}C
%: λ 
%: ?:A{×}B→A{×}C λp.(πp,f(π'p)):A{×}B→A{×}C
%:
%: ^why3 ^why4
\pu
$$\ded{why3}$$
% $$\ded{why3} \qquad \ded{why4}$$
** DONE Categories with
Almost all the time in Category Theory
we work with categories with extra structure
(and extra properties).
For example, Set, and each ZHA, is
(1) a category with a terminal object
(2) a category with binary products
(3) a category with finite products (= 1+2)
(4) a category with exponentials (needs 2)
(5) a cartesian-closed category (= all the above)
(6) a category with an initial object
(7) a category with binary coproducts (a.k.a. binary sums)
(8) a category with finite coproducts (a.k.a. finite sums; = 6+7)
(9) a cartesian-biclosed category (= all the above)
** DONE BiCCCs
A cartesian-biclosed category has these operations
(we use some nested uples for clarity):
(C_0, Hom, id, ∘,
(1,!),(×,π,π', <,>),(→,♭,♯),
(0,¡),(+,ι,ι', [,]))
We are omitting the ``properties'' that these operations obey.
Trick: a (bi)Heyting Algebra has exactly the same structure
as a (bi)CCC.
A (bi)Heyting Algebra is a (bi)CCC in which each Homset
has at most one element.
(See the handouts!)
* 
* :big: Logic on subsets of ZDAGs
** DONE Order topologies
Let (A,R) be a graph.
Example: (A,R) = ({1,2,3}, {(1,3),(2,3)}).
1 2
\ /
v v
3
A _topology_ on a set X is a set of subsets of X (the "open sets").
A graph R ⊆ A×A induces a topology on A.
Here's how:
O(A) = {U⊆A | (1∈U → 3∈U) ∧ (2∈U → 3∈U)}
\/ \/
condition condition
induced by induced by
1>3 2>3
If we order the open sets by inclusion
we get a partial order, (O(A),⊆).
Let's draw it as a graph.
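These five open sets can be computed by brute force; here is a minimal sketch in Python (the graph and the openness test are the ones from the example above):

```python
from itertools import combinations

# Order topology of the graph ({1,2,3}, {(1,3),(2,3)}):
# U is open iff every arrow a->b with a in U also has b in U.
A = {1, 2, 3}
R = {(1, 3), (2, 3)}

def opens(A, R):
    subsets = [set(c) for n in range(len(A) + 1)
                      for c in combinations(sorted(A), n)]
    return [U for U in subsets if all(b in U for (a, b) in R if a in U)]

O_A = opens(A, R)
print(sorted(sorted(U) for U in O_A))
# -> [[], [1, 2, 3], [1, 3], [2, 3], [3]]
```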
** DONE Order topologies (2)
{1,2,3} 1 1 ⊤
^ ^ 1 ^ ^
/ \ ^ ^ / \
/ \ / \ / \
1 2 {1,3} {2,3} 1 0 0 1 P Q
\ / ^ ^ 1 1 ^ ^
\ / \ / ^ ^ \ /
v v \ / \ / \ /
3 {3} 0 0 P∧Q
^ 1 ^
 ^ 
  
{} 0 0 ⊥
0
(A,R) (O(A),⊆) (O(A),⊆)
** DONE Topologies are Heyting Algebras
Look into almost any CT book
and you find this:
for any topological space (X,O(X))
the poset (O(X),⊆) is a Heyting Algebra.
A HA is a structure (Ω,≤,⊤,⊥,∧,∨,→,¬)
obeying certain properties.
What are Ω,≤,⊤,⊥,∧,∨,→,¬?
Ω := O(X) the open sets are the truth values.
⊤ := X because ⊤ is true everywhere
⊥ := {} because ⊥ is true nowhere
≤ := ⊆
P∧Q := {x∈X | x∈P ∧ x∈Q} P∧Q is true where both P and Q are
P∨Q := {x∈X | x∈P ∨ x∈Q} P∨Q is true where either P or Q are
P→Q := {x∈X | x∈P → x∈Q} P→Q is true where P implies Q < oops...
Problem: the definition of P→Q does not always work --
it _sometimes_ produces a non-open output even when
both its inputs are open... example:
0 0 1
0 0 > 1 0 = 1 1
1 1 1 0 1 0
(open) (open) (nonopen)
** DONE Order topologies: non-open subsets
A subset B⊆A is non-open when it has a 1 above a 0
with an arrow 1>0 between them... for example,
{2} is not open:
1 2 0 1
\ / \ /
v v v v
3 0
** DONE Order topologies: stable subsets
When I explain these things to children I find it better
to start using the term ``stable subset'' (or stable function)
instead of ``open subset''.
The idea:
`1's are ``things'' (which are heavy),
`0's are ``empty spaces''.
The `1's want to fall down.
For example, 010 is unstable:
0 1 0 0
\ / \ /
v v   > v v
0 1
Then
P(* *) = {0 0, 0 0, 0 1, 0 1, 1 0, 1 0, 1 1, 1 1}
* 0 1 0 1 0 1 0 1
O(* *) = {0 0, 0 0, 0 1, 1 0, 1 1}
* 0 1 1 1 1
The `P' means ``parts'', but
the `O' means ``open sets'',
for technical reasons that we will understand much later...
(Children don't know topology -- I mean, usually)
** DONE Fixing the implication
The fix:
P→Q := {x∈X | x∈P → x∈Q} < bad
P→Q := {x∈X | x∈P → x∈Q}° < good ("°" means "interior")
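A sketch of the fixed implication in Python, on the order topology of ({1,2,3}, {(1,3),(2,3)}); the concrete P and Q below are my own choice of example, not the one in the slides:

```python
from itertools import combinations

# The fixed implication: take the raw set {x | x in P -> x in Q}
# and then its interior (the largest open subset).
A = {1, 2, 3}
R = {(1, 3), (2, 3)}

def opens(A, R):
    subs = [set(c) for n in range(len(A) + 1)
                   for c in combinations(sorted(A), n)]
    return [U for U in subs if all(b in U for (a, b) in R if a in U)]

def interior(S):
    # union of all open sets contained in S
    I = set()
    for U in opens(A, R):
        if U <= S:
            I |= U
    return I

def imp(P, Q):
    raw = {x for x in A if x not in P or x in Q}
    return interior(raw)

# The raw set for P={3}, Q={} is {1,2}, which is NOT open;
# its interior is {}, so {3} -> {} = {} here.
print(imp({3}, set()))    # -> set()
```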
** DONE Modal logic for children (...but not now)
We can do MUCH better than just adding an "°".
Let (A,R) be a graph, and (A,O(A)) its order topology.
These are two logics:
I = (Ω_I, ≤_I, ⊤_I, ⊥_I, ∧_I, ∨_I, →_I, ¬_I)
M = (Ω_M, ≤_M, ⊤_M, ⊥_M, ∧_M, ∨_M, →_M, ¬_M, □_M, ◇_M)
"I" is the intuitionistic logic we've been working on.
"M" is a modal logic (S4).
Ω_I = O(A)
Ω_M = P(A)
P →_I Q := □_M (P →_M Q)
This gives us
1) Modal Logic for children
2) Gödel's embedding of intuitionistic logic into S4,
for children
But we won't go that way now!!!
** DONE Some order topologies are ZHAs
For example,
1
1 1
1 1
0
1 1
1 1
0 0
1 1 0 0 1
/ \ 1 1 1 1
v v 0 0 0
2 3 > 1 0 0 0 0 1
  1 0 1 1 0 1
v v 0 0
4 5 0 0 0 0
1 0 0 1
0
0 0
0 0
** DONE The generators of a ZHA
Let (A,BM(A)) be a ZDAG, and
let (H,WM(H)) be a ZHA such that (H,WM(H)^*)≃(O(A),⊆).
Let G=Gen(H) -- the ``generators'' of H -- be the points of H
with exactly one arrow going into them (in the ZDAG).
Then Gen(H) is /exactly/ the image of $A$ in $H$ by the inclusion '↓'!
Magic! =)
1
1 1
1 1
0
1 1
1 1
0 0
1 1 0 0 1
/ \ 1 1 1 1
v v 0 0 0
2 3 > 1 0 0 0 0 1
  1 0 1 1 0 1
v v 0 0
4 5 0 0 0 0
1 0 0 1
0
0 0
0 0
Can we recover $(A,BM(A))$ from $G⊆H$?
Well, sort of.
** DONE The generators of a ZHA (2)
Let (A,BM(A)) be a ZDAG, and
let (H,WM(H)) be a ZHA such that (H,WM(H)^*)≃(\Opens(A),⊆).
Let G=Gen(H).
Can we recover $(A,BM(A))$ from $G⊆H$?
Almost  we can make G inherit an order from H...
Important: we need to look at H as a ZPoset, not as a ZDAG,
and we need to invert the direction of the arrows.
** DONE The generators of a ZHA (3)
# (findxpdfpage "~/LATEX/2014sfcslides2.pdf" 12)
Look: (A,BM(A)^*) ≃ (G,R^*).
33 .. 32 13
32 23 32 ..  \ 
32 13 22 13 .. 13 v v v
20 12 21 12 .. 12 20 12
10 01 20 11 20 ..  / 
10 01 10 01 v v v
00 .. 10 01
(A,BM(A)) (H,WM(H)) (G,R)
≃(\Opens(A),⊆)
Now let's divide the generators in G⊆H into two classes
according to the direction of the (only) arrow
that points to them. In the example above we have
Gennw(H)={10,20,32} and Genne(H)={01,12,13}.
The order on G is the columns 32>20>10 and 13>12>01,
plus some intercolumn arrows.
** HALF The generators of a ZHA (4)
What happens when we "omit the other digit",
and write 1_, 2_, 3_... for the left-wall generators
and _1, _2, _3... for the right-wall generators?
33 .. 3_ _3
3_ 23 3_ ..  \ 
3_ _3 22 _3 .. _3 v v v
2_ _2 21 _2 .. _2 2_ _2
1_ _1 2_ 11 2_ ..  / 
1_ _1 1_ _1 v v v
00 .. 1_ _1
(A,BM(A)) (H,WM(H)) (G,R)
≃(\Opens(A),⊆)
It becomes easier to create homework problems!!!
We can specify the two-column graphs as
(height of the left column,
height of the right column,
inter-wall arrows from left to right,
inter-wall arrows from right to left)
The ZHA above is generated by the two-column graph
(3, 3, {3>2}, {2>1}).
** HALF The generators of a ZHA: homework ideas
1) Draw the ZHA corresponding to each of the 2-column
graphs specified below:
(3, 4, {}, {})
(3, 4, {}, {_4>_4})
(3, 4, {}, {_4>_3})
(3, 4, {}, {_4>_2})
(3, 4, {}, {_3>_2})
(3, 4, {}, {_4>_3, _3>_2})
(3, 4, {_2,_1}, {})
(3, 4, {_2,_1}, {_4>_3})
(3, 4, {_2,_1}, {_4>_3, _3>_2})
2) Draw the ZHA etc etc for these string specs:
"123RRLL2L321"
(...)
3) Find the 2column graph and the string spec that
generate each of the ZHAs below:
(...)
4) For each of the ZPosets below draw its topology as a graph.
o o
o o o o o o o o o
o o o o
(They are not going to be ZDAGish or ZPosetish,
because when we have three independent points the
topology contains a cube)
* 
* :big: Modalities for children
** DONE The axioms
# (findbooks "__cats/__cats.el" "fourman")
A modality on Omega is an operation J: Ω > Ω obeying these axioms:
M1: P <= J(P)
M2: J(J(P)) <= J(P)
M3: J(P&Q) = J(P)&J(Q)
What do these axioms _mean_ (e.g., geometrically, on ZDAGs)?
Equivalent elements form diamond-shaped regions
Stable elements form a sub-ZDAG
To understand and prove this we start with _consequences_ of M1, M2, M3
What familiar operations obey these axioms?
I(P) := P (the "identity" modality)
T(P) := ⊤ (the "true" modality)
O_A(P) := A or P (the "or" modality)
E_A(P) := A > P (the "implication" (exponential) modality)
B_A(P) := (P > A) > A ("boolean quotient"  general case)
B(P) := B_⊥(P) ("boolean quotient"  particular case))
= (P > ⊥) > ⊥
= ¬¬P
What other operations obey these axioms?
Modalities on ZDAGs correspond to dividing by diagonal lines
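The familiar modalities listed above can be checked by brute force on a small order topology. A sketch in Python; `Aprime` is an arbitrarily chosen open set standing for the A in O_A and E_A:

```python
from itertools import combinations

# Brute-force check of M1-M3 on Omega = O(A) for the graph
# ({1,2,3}, {(1,3),(2,3)}).
A = {1, 2, 3}
R = {(1, 3), (2, 3)}

def opens(A, R):
    subs = [frozenset(c) for n in range(len(A) + 1)
                         for c in combinations(sorted(A), n)]
    return [U for U in subs if all(b in U for (a, b) in R if a in U)]

Omega = opens(A, R)
top = frozenset(A)

def interior(S):
    # largest open set contained in S
    I = frozenset()
    for U in Omega:
        if U <= S:
            I = I | U
    return I

def imp(P, Q):
    return interior(frozenset(x for x in A if x not in P or x in Q))

def neg(P):
    return imp(P, frozenset())

def obeys_M123(J):
    return all(P <= J(P)                      # M1
               and J(J(P)) <= J(P)            # M2
               and J(P & Q) == J(P) & J(Q)    # M3
               for P in Omega for Q in Omega)

Aprime = frozenset({3})
modalities = {
    "identity": lambda P: P,
    "true":     lambda P: top,
    "or":       lambda P: Aprime | P,
    "imp":      lambda P: imp(Aprime, P),
    "not-not":  lambda P: neg(neg(P)),
}
for name, J in modalities.items():
    print(name, obeys_M123(J))    # each line should end in True
```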
** TEXED Fourman/Scott1979: ``Sheaves and Logic''
# (findist "quotes.tex" "fourman")
This is an excerpt -- pages 329-331 -- from M.P. Fourman and D.S.
Scott's ``Sheaves and Logic'', that was published in SLNM0753
(``Applications of Sheaves: Proceedings of the Research Symposium on
Applications of Sheaf Theory to Logic, Algebra and Analysis -- Durham,
July 9-21, 1977'').
\bsk
\BF{2.18. ELEMENTARY J-OPERATORS.} In these examples $Ω$ is a given cHa.
(i) {\sl The closed quotient.} The operator is defined by
%
$$J_a p = a ∨ p.$$
%
This is obviously a J-operator, and the congruence relation is:
%
$$a∨p = a∨q.$$
%
The set of fixed points (quotient lattice) is:
%
$$\setofst{p ∈ Ω}{a≤p}.$$
Classically speaking in the spatial case where $Ω=\Opens(X)$, the
quotient corresponds to the topology on the {\sl closed} subspace
complementary to the open set $a$.
This quotient makes the element $a$ ``false'' and is the least such.
** TEXED Fourman/Scott1979 (2)
(ii) {\sl The open quotient.} The operator is defined by:
%
% (findslnm0753page (+ 14 330))
%
$$J^a p = a→p.$$
%
The congruence relation is:
%
$$a∧p = a∧q \qquad \text{(equivalently, $a ≤ p↔q$)}.$$
%
The set of fixed points is thus isomorphic to
%
$$Ω_a = \setofst{p∈Ω}{p≤a}.$$
Intuitionistically speaking in the spatial case, this quotient
corresponds to the topology on the open subspace $a$. This quotient
makes $a$ ``true'' and is the least such.
** TEXED Fourman/Scott1979 (3)
(iii) {\sl The Boolean quotient}. The operator is defined by:
%
$$B_a p = (p→a)→a.$$
%
The congruence relation is:
%
$$p→a = q→ a.$$
%
The set of fixed points is
%
$$\setofst{p∈Ω}{(p→a)→a≤p}.$$
In case $a=⊥$, this is the well-known $¬¬$-quotient giving the
(complete) Boolean algebra of ``stable'' elements. For arbitrary $a$,
we could first form $Ω/J_a$ and follow this by the $¬¬$-quotient to
obtain $Ω/B_a$. (In general, if $J≤K$, then $Ω/K$ is a quotient of
$Ω/J$.)
We remark that in general $Ω/J$ is a cBa iff $J=B_{J_⊥}$. Further, if
$Ω$ is already Boolean, then {\sl every} J-operator on $Ω$ is of the
form $B_a = J_a$.
** TEXED Fourman/Scott1979 (4)
(iv) {\sl The forcing quotient}. The operator is a combination of
previous ones:
%
$$(J_a∧J^b)p = (a∨p)∧(b→p).$$
%
The congruence relation is a conjunction:
%
$$a∨p=a∨q \quad \text{and} \quad b∧p=b∧q.$$
%
The set of fixed points is:
%
$$\setofst{p∈Ω}{b→p≤ a→p}.$$
The point of the quotient is that it provides the {\sl least}
J-operator such that $Ja≤Jb$; that is, we take the least quotient that
``forces'' $a→b$ to be true. If we want to force a sequence of
statements $a_i→b_i$, for $i$ (...)
** DONE Stable and equivalent truthvalues
A modality on Ω is an operation J: Ω > Ω obeying these axioms:
M1: P ≤ J(P)
M2: J(J(P)) ≤ J(P)
M3: J(P∧Q) = J(P)∧J(Q)
M1 means that applying J once moves up,
M2+M1 mean that applying J a second time does not move further up.
(What is M3?)
Let's say that P is _stable_ when P = J(P),
and that P and Q are _equivalent_ when J(P) = J(Q).
Formally:
St(P) := (P = J(P)) (returns true or false)
Sts := {P in Om | St(P)} (returns a set)
P==Q := (J(P) = J(Q)) (returns true or false)
E(P) := {Q in Om | P==Q} (returns a set)
Es := {E(P) | P in Om} (returns a partition)
** DONE Stable and equivalent truthvalues (2)
# (findbooks "__cats/__cats.el" "fourman")
Let's look at an example.
When J(P) := B(P) = not not P, we have
33 33 ..
32 23 33 33 .. ..
22 13 33 33 .. ..
Om == 12 03 (P>J(P)) == 33 03 Sts == .. 03
11 02 33 03 .. ..
10 01 10 03 10 ..
00 00 10
and the equivalence classes are:
.. .. .. 33
.. .. .. .. .. .. 32 23
.. .. .. .. .. .. 22 13
.. .. .. .. .. 03 12 ..
.. .. .. .. .. 02 11 ..
.. .. 10 .. .. 01 .. ..
00 .. .. ..
i.e., Es = {{00}, {10}, {01,02,03}, {11,12,13,22,23,32,33}}
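This computation can be reproduced mechanically. A sketch in Python, with the elements of the ZHA as pairs (a,b) for the two-digit codes "ab"; the implication is computed by brute force as the join of all R with R∧P ≤ Q, which is legitimate because the ZHA is closed under binary joins:

```python
from functools import reduce

# The ZHA Om from the example, as pairs (a, b) for the codes "ab".
Om = [(3,3),(3,2),(2,3),(2,2),(1,3),(1,2),(0,3),
      (1,1),(0,2),(1,0),(0,1),(0,0)]
bot = (0, 0)

def leq(P, Q):  return P[0] <= Q[0] and P[1] <= Q[1]
def meet(P, Q): return (min(P[0], Q[0]), min(P[1], Q[1]))
def join(P, Q): return (max(P[0], Q[0]), max(P[1], Q[1]))

def imp(P, Q):
    # Heyting implication: the largest R in Om with R ∧ P ≤ Q
    return reduce(join, [R for R in Om if leq(meet(R, P), Q)])

def neg(P): return imp(P, bot)
def J(P):   return neg(neg(P))

# group the elements of Om by the value of J = not-not
classes = {}
for P in Om:
    classes.setdefault(J(P), []).append(P)
print({t: sorted(c) for t, c in classes.items()})
# four classes: {00}, {10}, {01,02,03}, and the seven elements above 11
```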
** DONE What is axiom M3? Introduction
Remember: a modality on Ω is an operation J: Ω > Ω obeying these axioms:
M1: P ≤ J(P)
M2: J(J(P)) ≤ J(P)
M3: J(P∧Q) = J(P)∧J(Q)
M3 implies _monotonicity_, which is:
Mo: P ≤ Q implies J(P) ≤ J(Q)
but this is not obvious. Let's prove that in several steps.
..
.. ..
.. .. ..
.. .. .. ..
.. 64 .. 46 .. J(P) J(Q)
.. .. .. .. .. .. ^ ^
.. .. .. 44 .. .. .. \ /
.. .. .. .. .. .. .. .. J(P)&J(Q)
.. .. .. .. .. .. ..
.. .. .. .. .. ..
.. 31 .. 13 .. P Q
.. .. .. .. ^ ^
.. 11 .. \ /
.. .. P&Q
..
Let P=31, Q=13, J(P)=64, J(Q)=46.
Then P∧Q=11 and J(P)∧J(Q)=44, but without M3 we wouldn't know
where J(P∧Q)=J(11) is... it could be anything above 11.
M3 tells us that J(11)=J(P∧Q)=J(P)∧J(Q)=64∧46=44,
which is below J(P)=64 and J(Q)=46...
(Actually J(11) is the topmost value below J(P) and J(Q).)
** DONE What is axiom M3? Monotonicity for products
M3 implies that J(P&Q)=J(P)&J(Q),
so let's rewrite the "J(P)&J(Q)" as "J(P&Q)" in our diagram.
M3 implies that when we apply J to all elements of a
product diagram P < P&Q > Q, we do have arrows
in its image J(P) < J(P&Q) > J(Q).
Formally, this is
Mp: J(P&Q) <= J(P), and J(P&Q) <= J(Q)
("monotonicity for products")
..
.. ..
.. .. ..
.. .. .. ..
.. 64 .. 46 .. J(P) J(Q)
.. .. .. .. .. .. ^ ^
.. .. .. 44 .. .. .. \ /
.. .. .. .. .. .. .. .. J(P&Q)
.. .. .. .. .. .. ..
.. .. .. .. .. ..
.. 31 .. 13 .. P Q
.. .. .. .. ^ ^
.. 11 .. \ /
.. .. P&Q
..
** DONE What is axiom M3? Monotonicity
When P <= Q we have P = P&Q,
and a part of the previous diagram collapses...
If P=P&Q=11, Q=13, J(P)=J(P&Q)=44, J(Q)=46,
and we have:
..
.. ..
.. .. ..
.. .. .. ..
.. .. .. 46 .. J(Q)
.. .. .. .. .. .. ^
.. .. .. 44 .. .. .. /
.. .. .. .. .. .. .. .. J(P) = J(P&Q)
.. .. .. .. .. .. ..
.. .. .. .. .. ..
.. .. .. 13 .. Q
.. .. .. .. ^
.. 11 .. /
.. .. P = P&Q
..
** DONE What is axiom M3? Monotonicity (2)
Formally:
M3 
J(P&Q)=J(P)&J(Q) J(P)&J(Q)<=J(Q)
Mp := 
J(P&Q)<=J(Q) J(P&Q)<=J(Q)
P<=Q
=====
P=P&Q
 Mp
P<=Q J(P)=J(P&Q) J(P&Q)<=J(Q)
Mo := 
J(P)<=J(Q) J(P)<=J(Q)
** DONE The sandwich lemma
From
M1: P <= J(P)
M2: J(J(P)) <= J(P)
M3: J(P&Q) = J(P)&J(Q)
We can prove that if P <= Q <= J(P) then P and Q (and J(P)) are equivalent.
More formally ("sandwich lemma"):
SL: If P <= Q <= J(P) then P==Q.
Proof:
Q<=J(P)
Mo M2
P<=Q J(Q)<=J(J(P)) J(J(P))<=J(P)
Mo 
P<=Q Q<=J(P) J(P)<=J(Q) J(Q)<=J(P)
SL := 
J(P)=J(Q) J(P)=J(Q)
** DONE The sandwich lemma: equivalence classes don't have holes
Sandwich lemma:
SL: If P<=Q<=J(P) then P==Q.
What does that mean? Let's look again at our example.
This was an equivalence class,
33
32 23
22 13
12 ..
11 ..
.. ..
..
and J(12) = 33.
Making P=12 and Q=22 in the sandwich lemma,
we have P <= Q <= J(P), i.e., 12 <= 22 <= 33,
and, by SL, P==Q, i.e., 12==22, i.e., J(12)=J(22)=33.
The sandwich lemma is very powerful.
If we know that J(11)=33, then,
using it for all elements between 11 and 33,
we get that J(12)=J(13)=J(22)=J(23)=J(32)=33.
The sandwich lemma says that equivalence classes do not have "holes".
** DONE Equivalence classes: the bottom element lemma
The axiom
M3: J(P&Q) = J(P)&J(Q)
can be extended to P&Q&R, P&Q&R&S, etc. Look:
J(P&Q&R) = J(P&(Q&R))
= J(P)&J(Q&R)
= J(P)&J(Q)&J(R)
What happens when we take the "&" of all the truthvalues in an
equivalence class, and then we apply J to it?
Let's look at an example.
Let E := {11,12,13,22,23,32,33}
and &E := 11&12&13&22&23&32&33 (= 11).
33
32 23
22 13
12 ..
11 ..
.. ..
..
J(&E) = J(11 & 12 & 13 & 22 & 23 & 32 & 33)
= J(11) & J(12) & J(13) & J(22) & J(23) & J(32) & J(33)
= 33 & 33 & 33 & 33 & 33 & 33 & 33
= 33
** DONE Equivalence classes: the bottom element lemma (2)
Let E := {11,12,13,22,23,32,33}
and &E := 11&12&13&22&23&32&33 (= 11).
33
32 23
22 13
12 ..
11 ..
.. ..
..
J(&E) = J(11 & 12 & 13 & 22 & 23 & 32 & 33)
= J(11) & J(12) & J(13) & J(22) & J(23) & J(32) & J(33)
= 33 & 33 & 33 & 33 & 33 & 33 & 33
= 33
So, &E is ==equivalent to all other elements of E,
and &E is below all other elements of E...
That means that &E is the bottom element of that equivalence class,
and, by the sandwich lemma, the equivalence class is made of
all elements between &E and J(&E).
TYPESET THIS IN RED:
(That only works for _finite_ conjunctions, but we are working only
with finite cases...)
** DONE Equivalence classes: the bottom element lemma (3)
A small modification of that previous argument leads to this:
let E be any nonempty set of equivalent elements of Om.
Then the diamond-shaped region
E' := D(&E,J(&E))
having &E as its bottom element and J(&E) as its top element
contains E, and all elements in E' are equivalent.
For example, in
33
32 23
22 13
12 ..
11 ..
.. ..
..
let E := {13, 22}; note that J(13)=J(22)=33, and that &E = 13&22 = 12.
Then
J(&E) = J(13 & 22)
= J(13) & J(22)
= 33 & 33
= 33...
** DONE Equivalence classes: the bottom element lemma (4)
let E := {13, 22}; note that J(13)=J(22)=33, and that &E = 13&22 = 12.
Then
J(&E) = J(13 & 22)
= J(13) & J(22)
= 33 & 33
= 33,
and all elements in the diamond-shaped region
E' := D(&E,J(&E)) = D(12,33) = {12,13,22,23,32,33}
are equivalent; this new lemma takes any arbitrary nonempty set of
equivalent elements, and extends it to a diamond-shaped set of
equivalent elements. In our example it extends E={13,22} to
E'={12,13,22,23,32,33}, i.e.,
.. 33
.. .. 32 23
22 13 22 13
.. .. to: 12 ..
.. .. .. ..
.. .. .. ..
.. ..
** DONE An operation dual to J
Let Eq(P) be the equivalence class of P.
Formally,
Eq(P) := {Q  J(P)=J(Q)}.
We know that J(P) returns the top element of Eq(P).
We saw, by the bottom element lemma, that each Eq(P) has a bottom element.
Let's denote that bottom element by B(P).
Formally, B(P) is the "&" of all elements of Eq(P).
(A finite conjunction because we are in the finite case!)
We have that Eq(P) is made of all elements between B(P) and J(P),
and nothing else...
We will return to this later.
** DONE How J interacts with $∧$
We know (proofs later!) that "&" is monotone in both its inputs,
and that P<=J(P) and that J is monotone, i.e.,
P<=P' Q<=Q' P<=Q
   
P&Q<=P'&Q P&Q<=P&Q' P<=J(P) J(P)<=J(Q)
These things mean that for any truthvalues P and Q we know some
relations between P&Q, J(P)&Q, ..., J(J(P)&J(Q)):
J(J(P)&J(Q))
^ ^ ^
/  \
/  \
J(J(P)&Q) J(P)&J(Q) J(P&J(Q))
^ ^ ^ ^ ^ ^
 \/ \/ 
 /\ /\ 
J(P)&Q J(P&Q) P&J(Q)
^ ^ ^
\  /
\  /
P&Q
But M3 tells us that J(P&Q)=J(P)&J(Q), so at least the two middle
vertices in the cubeshaped diagram above yield the same result...
Actually we can prove more: J(P&Q) is equal to all the four upper
truthvalues in the diagram above. Proofs:
J(J(P)&Q) = J(J(P))&J(Q) = J(P)&J(Q)
J(P&J(Q)) = J(P)&J(J(Q)) = J(P)&J(Q)
J(J(P)&J(Q)) = J(J(P))&J(J(Q)) = J(P)&J(Q)
** DONE How J interacts with $∨$
We know (proofs later!) that "∨" is monotone in both its inputs,
and (again!) that P<=J(P) and that J is monotone, i.e.,
P<=P' Q<=Q' P<=Q
   
PvQ<=P'vQ PvQ<=PvQ' P<=J(P) J(P)<=J(Q)
These things mean that for any truthvalues P and Q we know these
relations between P∨Q, J(P)∨Q, ..., J(J(P)∨J(Q)):
J(J(P)∨J(Q))
^ ^ ^
/  \
/  \
J(J(P)∨Q) J(P)∨J(Q) J(P∨J(Q))
^ ^ ^ ^ ^ ^
 \/ \/ 
 /\ /\ 
J(P)∨Q J(P∨Q) P∨J(Q)
^ ^ ^
\  /
\  /
P∨Q
But we know more. We can prove that we have J(J(P)∨J(Q))<=J(P∨Q),
** DONE How J interacts with $∨$ (2)
But we know more. We can prove that we have J(J(P)∨J(Q))<=J(P∨Q),
 
P<=P∨Q Q<=P∨Q
Mo Mo
J(P)<=J(P∨Q) J(Q)<=J(P∨Q)

J(P)∨J(Q)<=J(P∨Q)
Mo M2
J(J(P)∨J(Q))<=J(J(P∨Q)) J(J(P∨Q))=J(P∨Q)

J(J(P)∨J(Q))<=J(P∨Q)
and this implies that all the 4 truthvalues in the upper lozenge are
equal...
** DONE How ZHAs are cut into equivalence classes
We saw that the equivalence classes of J are diamond-shaped regions,
but we don't know yet which ways of splitting a ZHA into
diamond-shaped regions correspond to "J"s that obey M1, M2 and M3...
For example, does this correspond to a valid J?
/\
55 / \
55 55 / /\
55 55 35 / / \
55 55 35 35 /\ /\ \
51 55 33 35 35 /\ \/ \ \ (*)
50 51 33 33 35 35 \ \ \ /\ /
50 51 33 13 35 \ \/\/ \/
50 21 13 13 \/ / /
21 13 13 \/ /
13 13 \ /
13 \/
It turns out that the answer is no.
Look at the frontiers between equivalence classes.
We will see two separate arguments.
** DONE How ZHAs are cut into equivalence classes (2)
The first, based on the fact that J(Q)=J(R) implies J(P&Q)=J(P&R),
shows that borders that make lambdas and mirrored lambdas are not
possible,
/\ /\ /\ /\
/\ \ / /\ /\/\ /\/\
\/\/ \/\/ \/ / \ \/
\/ \/ \/ \/
(lambda) (mirrored lambda) (y) (mirrored y)
and a second one, based on the fact that St(P) and St(Q) implies
St(P&Q), that shows that borders that make "y"s and mirrored "y"s are
not possible.
** DONE Borders like lambdas are not possible
Lemma: if Q==R then P&Q==P&R ("`&' preserves equivalence").
Proof:
J(Q)=J(R)

J(Q)=J(R) J(P)&J(Q)=J(P)&J(R)
&pe := ===================
J(P&Q)=J(P&R) J(P&Q)=J(P&R)
Diagrammatically, we can use "P&" to "move southwest", as here:
P Q 52 34
\ / \ \ / \
P&Q R 32 14
\ / \ /
Q&R 12
In the diagram (*) we have J(34)=J(14)=35, but J(32)=33 and J(12)=13;
if we take P=52, Q=34, R=14, the lemma &pe would imply that J(32)=J(12),
which does not hold. Diagrammatically, the lemma "&pe" implies that
the lines marking the frontiers between different equivalence classes
cannot make either a "lambda" or its mirror image...
/\ /\
/\ \ / /\
\/\/ \/\/
\/ \/
(lambda) (mirrored lambda)
** DONE Borders like "y"s are not possible
# (findequailfile "sgmlinput.el" ";; LEFT SEMIDIRECT PRODUCT")
# (findequailfile "sgmlinput.el" ";; RIGHT SEMIDIRECT PRODUCT")
Lemma: St(P) and St(Q) implies St(P&Q) ("& preserves stability")
Proof:
P=J(P) Q=J(Q)
============== M3
P=J(P) Q=J(Q) P&Q=J(P)&J(Q) J(P)&J(Q)=J(P&Q)
&ps := 
P&Q=J(P&Q) P&Q=J(P&Q)
We start with the case in which our ZHA is a lozenge.
We know that equivalence classes are diamondshaped regions.
When the ZHA is a lozenge, the equivalence classes are lozenges too.
Let's look at this example. Here are a ZHA H and a function J: H > H
on it; J obeys M1 and M2 -- remember the geometrical meaning of M1 and
M2! -- and we want to show that that J cannot obey M3 too.
33 33 /\
32 23 33 33 / \
31 22 13 31 33 13 /\ /\
30 21 12 03 31 31 13 13 / \/ \
20 11 02 31 13 13 \ / /
10 01 13 13 \/ /
00 13 \ /
\/
** DONE Borders like "y"s are not possible (2)
Let's look at this example. Here are a ZHA H and a function J: H > H
on it; J obeys M1 and M2 -- remember the geometrical meaning of M1 and
M2! -- and we want to show that that J cannot obey M3 too.
33 33 /\
32 23 33 33 / \
31 22 13 31 33 13 /\ /\
30 21 12 03 31 31 13 13 / \/ \
20 11 02 31 13 13 \ / /
10 01 13 13 \/ /
00 13 \ /
\/
Let P' be the truthvalue at the left of the vertex of the "y",
and Q' be the truthvalue at the right of the vertex of the "y".
In this example, P'=12 and Q'=21.
Let P be J(P'),
and Q be J(Q').
The line from P' to P points exactly northwest, and
the line from Q' to Q points exactly northeast;
because of this, P'&Q' is equal to P&Q, and P'&Q' is the truthvalue
just below the vertex of the "y".
Let R be P&Q = P'&Q'.
P and Q are stable, and so, by "&ps", R=P&Q=P'&Q' must be stable too.
But, by hypothesis, the top element of the equivalence class of R is Q,
so R is not stable, and thus the J in our example cannot obey M3.
** COMMENT Borders like "y"s are not possible (3)
When our ZHA is not a lozenge, it is not trivial to show that the path
from P' to P is exactly northwest, and that the path from Q' to Q is
exactly northeast.
An example:
(...)
[I don't know how to complete the details]
** HALF Stable elements and diamond-shaped regions
Two slogans:
Equivalent elements form diamond-shaped regions
Stable elements form a sub-ZHA
Definitions:
P==Q := (P* = Q*) (P==Q is true or false)
St(P) := (P = P*) (St(P) is true or false)
Sts := {P in Omega | P = P*} (Sts is a set)
Di(P) := {Q in Omega | P == Q} (Di(P) is a set)
Dis := {Di(P) | P in Omega} (Dis is a partition of Omega)
33 33 33
32 23 33 33 33 33
22 13 33 13 33 33
12 03 13 13 33 03
11 02 11 13 33 03
10 01 11 11 10 03
00 11 00
Pid PJ Pnotnot
** TODO Modalities induce Zquotients
** TODO Zquotients are modalities
** TODO Modalities correspond to morphisms of HAs
* 
* :big: Sheaves
# (finddjvupage "$S/http/angg.twu.net/MINICATS/sheaves_for_children__1_to_7.djvu")
# (findbooks "__cats/__cats.el" "moerdijk")
** DONE Restriction, compatibility, glueing
2>20 3>300
3>300 4>4000
{2,3,4} {3,4,5} 4>4000 5>50
^ ^  
\ / \ /
\ / v v
{3,4} 3>300
4>4000
The domain of 20,300,4000 is {2,3,4}.
The domain of 300,4000,50 is {3,4,5}.
The common domain of 20,300,4000 and 300,4000,50
is {2,3,4}∩{3,4,5} = {3,4}.
The restriction of 20,300,4000 to {3,4} is 300,4000.
The restriction of 300,4000,50 to {3,4} is 300,4000. ← same!
20,300,4000 and 300,4000,50 are compatible.
We are moving down, toward smaller sets, using intersections.
Is there a way to move up  to unions?
Yes, by glueing.
But we have to see some other things first.
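Restriction, compatibility and glueing are easy to model with finite functions as Python dicts; a sketch, using the f and g of the example:

```python
# Finite functions as dicts: domain = key set, restriction = key slice.
def restrict(f, S):
    return {x: f[x] for x in f if x in S}

def compatible(f, g):
    # f and g are compatible when they agree on the common domain
    common = f.keys() & g.keys()
    return restrict(f, common) == restrict(g, common)

def glue(f, g):
    # only defined for compatible functions
    assert compatible(f, g)
    return {**f, **g}

f = {2: 20, 3: 300, 4: 4000}    # the function 2->20, 3->300, 4->4000
g = {3: 300, 4: 4000, 5: 50}    # the function 3->300, 4->4000, 5->50
print(compatible(f, g))          # -> True
print(glue(f, g))                # the glueing, with domain {2,3,4,5}
```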
** DONE Compatible families
Let's write the domain of a function in its subscript.
Suppose that f_A compat g_B compat h_C compat k_D,
and that E⊆A∩B∩C∩D.
Then the restrictions of f_A, g_B, h_C, k_D to E all coincide!!!
The diagram below should make this clear:
A B C D f_A g_B h_C k_D
A∩B B∩C C∩D ?_A∩B ?_B∩C ?_C∩D
^ ^ ^ \  /
\  / \  /
\  / \  /
\  / v v v
E ?_E
?_A∩B = f_A|A∩B = g_B|A∩B f_A|E = f_A|A∩B|E = g_B|A∩B|E = g_B|E
?_B∩C = g_B|B∩C = h_C|B∩C g_B|E = g_B|B∩C|E = h_C|B∩C|E = h_C|E
?_C∩D = h_C|C∩D = k_D|C∩D h_C|E = h_C|C∩D|E = k_D|C∩D|E = k_D|E
We say that the family {f_A, g_B, h_C, k_D} is
/pairwisecompatible/ when any two elements in it
are compatible  f_A compat g_B compat h_C compat k_D
plus f_A compat h_C, g_B compat k_D, f_A compat k_D.
** DONE Compatible families: naming
A notation trick...
Compare:
f_A g_B h_C k_D f_A f_B f_C f_D
?_A∩B ?_B∩C ?_C∩D f_A∩B f_B∩C f_C∩D
\  / \  /
\  / \  /
\  / \  /
v v v v v v
?_E ?_E
?_A∩B := f_A|A∩B = g_B|A∩B f_A∩B := f_A|A∩B = f_B|A∩B
?_B∩C := g_B|B∩C = h_C|B∩C f_B∩C := f_B|B∩C = f_C|B∩C
?_C∩D := h_C|C∩D = k_D|C∩D f_C∩D := f_C|C∩D = f_D|C∩D
?_E := f_A|E = g_B|E = h_C|E = k_D|E f_E := f_A|E = f_B|E = f_C|E = f_D|E
A compatible family {f_A,f_B,f_C,f_D}
induces a bigger family having one f_U
for each nonempty set U that is smaller
than _some_ of the sets A, B, C, D...
And these `f_U's are well-defined:
all ways of defining them from the
original {f_A,f_B,f_C,f_D} by restriction
yield the same result.
** DONE Saturating downwards
Remember that we had an operation `↓' on DAGs
that regarded each `1' as black paint that was too liquid
and that would flow down, painting the `0's below it
in black...
0 0
0 0 0 0
0 0 0 0 0 0
↓ 0 0 1 0 = 0 0 1 0
1 0 0 1 1 1
0 0 1 1
0 1
Let's use the same symbol, ↓, for an operation that
starts with a compatible family, say {f_A,f_B,f_C,f_D},
and produces a (possibly much bigger!) family made of
all the restrictions of f_A, f_B, f_C, f_D.
Also:
dom f_A = A
dom {f_A, f_B, f_C, f_D} = {A, B, C, D}
** DONE An operation for all kinds of restrictions
This operation simplifies things magically.
(I learned it from Simmons's paper on Omega Sets).
U · V = U∧V
U · g_V = (error)
f_U · V = f_U|U∧V
f_U · g_V = f_U|U∧V
So: U · V = V · U is true always
f_U · g_V = g_V · f_U is true iff f_U and g_V
are compatible
Now: U · {V, W} = {U·V, U·W} = {U∩V, U∩W}
U · O(X) = all (open) subsets of U = ↓{U}
f_U · O(X) = all restrictions of f_U
{f_U,g_V,h_W}·O(X) = all restrictions of f_U, g_V, h_W,
which yields something nice when
f_U, g_V, h_W are compatible,
and something messy when they are not.
We can define ↓A := A·O(X), and this works for all interesting cases!
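Here is one way the `·' operation might be modelled in Python (the type dispatch is my own rendering, not Simmons's): open sets are frozensets, functions are dicts.

```python
# The four cases of "·": U·V = U∧V, f_U·V = f_U|(U∧V),
# f_U·g_V = f_U|(U∧V), and U·g_V is an error.
def dot(x, y):
    xf, yf = isinstance(x, dict), isinstance(y, dict)
    if xf and yf:                    # f_U · g_V
        return {k: x[k] for k in x.keys() & y.keys()}
    if xf:                           # f_U · V
        return {k: x[k] for k in x.keys() & y}
    if yf:                           # U · g_V is an error
        raise TypeError("U · g_V is not defined")
    return x & y                     # U · V = U∧V

f = {2: 20, 3: 300, 4: 4000}
g = {3: 300, 4: 4000, 5: 50}
U = frozenset({3, 4})
print(dot(f, U))                     # restriction of f to {3,4}
print(dot(f, g) == dot(g, f))        # True iff f and g are compatible
```

With this, `dot(f, g) == dot(g, f)` reproduces the compatibility test, and ↓A = A·O(X) would be `dot` applied pointwise to a collection of open sets.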
** DONE Sheaves of functions
For each open set U⊆R,
C∞(U,R) is the set of smooth functions from U to R.
Warning: here (a,b) sometimes means an open interval, not a pair!
(4,6) > C∞((4,6),R) f_{(4,6)}:(4,6)>R
^  
incl restr restr
 v v
(4,5) > C∞((4,5),R) f_{(4,5)}:(4,5)>R
O(R)     > Set
We can regard C∞(-,R) as a (contravariant) functor.
Hey:
C∞(-,R): O(R)^op > Set
is a sheaf because compatible families can be glued
in a unique way!
** DONE Limited functions are not sheaves
Let L(-,R): O(R)^op > Set
be the contravariant functor that returns
the _limited_ functions on open sets.
Certain families of limited functions
are compatible, but their glueing is not limited,
so it "does not exist".
L(-,R) violates one of the conditions for being a sheaf:
that glueings must exist.
The other condition is that glueings must be unique.
To violate that we need to go away from the realm of
the presheaves of functions, and use arbitrary functors.
** DONE The evil presheaf
E_1 {0,1}
/ \ / \ 0>3
/ \ / \ 1>4
v v v v
E = E_2 E_3 = {2} {3,4}
\ / \ /
\ / \ /
v v v v
E_4 {5}
 
  5>6
v v
E_5 {6,7}
The compatible family {2,3} has two different glueings: 0 and 1.
The compatible family {2,4} can't be glued.
The compatible family {} has two different glueings: 6 and 7.
** DONE Sheaves in $Set^{K^\op}$
...with the "obvious" notion of sheafness, are the ones that obey:
A_1 = A_2 ×_A_4 A_3
/ \
/ \
v v
A = A_2 A_3
\ /
\ /
v v
A_4


v
A_5 = 1
So: A_1 and A_5 carry only redundant information,
and sheaves on Set^K^op are equivalent to objects
of Set^V !!!
* 
* :big: Toposes
** TODO A formal definition for children
A _pullback_ for the diagram A>C<B is an object D,
together with morphisms A<D>B, that behaves like this:
{(a,b)∈A×B | f(a)=g(b)} > B
 
  g
v f v
A > C
A _subobject_ of A is a diagram A' > A
that behaves like an inclusion.
Notation: A' >> A means that the arrow is a monic.
A _subobject classifier_ is an object Ω
that behaves like {0,1},
plus an arrow 1 >> Ω (called "true")
that makes the curved arrow at the left have an inverse.
{a∈A | P(a)} > 1 A' > 1
v v v v
 <\   \ 
   i  
v  P v v v v
A > Ω A > Ω
λa.a∈i(A)
A _topos_ is a CCC that has pullbacks and a subobject classifier
(> extra structure and extra properties).
** DONE The main theorems
1) CCCs are exactly the categories in which we can interpret
(the system λ1 of simply-typed) λ-calculus
2) HAs are exactly the categories in which we can interpret
(intuitionistic) propositional logic
3) Every HA is a CCC
4) Set is a CCC
5) Every ZHA is a HA
6) Every (O(X),⊆) is a HA
7) Every ZTopos is a CCC < ZToposes are in the next slide
8) Every ZTopos is a topos
9) Toposes are exactly the categories in which we can interpret
(the system IST of intuitionistic, typed) set theory
There are many other ways of constructing toposes,
but I believe that ZToposes are the perfect tool for
doing Topos Theory for Children 
** HALF ZToposes
When we did logic on subsets of a ZHA -- K, say --
each truthvalue P was in fact a K-shaped diagram, e.g.,
0 P_1
P = 1 0 = P_2 P_3
1 P_4
1 P_5
We are going to do the same with sets.
An object A of Set^(K^op) is a K-shaped diagram of sets
and maps.
A_1
/ \
v v
A = A_2 A_3
\ /
v v
A_4

v
A_5
A _notion of sheafness_ on Set^K^op is something that tells us
which objects of Set^K^op "are" sheaves.
The "sheaves" of Set^K^op form a subcategory of Set^K^op
which is also a topos. < Magic!!!
In the case of ZToposes these subcategories of sheaves are
not something weird and new -- we have this:
for any ZDAG D,
for each notion of sheafness on Set^D^op,
there is a ZDAG D'
such that Sh(Set^D^op)≅Set^D'^op.
** TODO Quotes from the Elephant
** COMMENT Pullbacks
** COMMENT Subobjects
** COMMENT Classifier
** COMMENT λcalculus in a topos
** COMMENT Logic in a topos
** COMMENT Sequents
** COMMENT Quantifiers
** COMMENT Mixed sequents
** COMMENT Geometrical morphisms
** COMMENT Maps between ZDAGs induce geometrical morphisms
** COMMENT Sheaves: saturated covers
** COMMENT Sheaves: glueing a cover
** COMMENT Sheaves in general
* 
* 
* 
* 
* Partitions into contiguous classes
A partition of {0,...,n} into contiguous classes (a "picc") is one in
which this holds: if a,b,c∈{0,...,n}, a<b<c, and a and c are in the
same class, then b is in that class too. Each picc P determines an
equivalence relation ~_P, equivalence classes [a]_P, a function
·^P: {0,...,n}->{0,...,n}
a |-> max [a]_P
that takes each element to the top element in its class, a set St_P :=
{a∈{0,...,n} | a^P=a} of the "stable" elements of {0,...,n}, and a
graphical representation with a bar between a and a+1 when they are in
different classes:
01|2|345 == {{0,1},{2},{3,4,5}},
which will be our favourite notation for partitions into contiguous
classes from now on.
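The bar notation translates into code directly; a minimal sketch in Python (the function names are mine):

```python
# A picc as its set of cuts: a cut at c means a bar between c and c+1.
def parse_picc(s):
    cuts, pos = set(), -1
    for ch in s:
        if ch == '|':
            cuts.add(pos)
        else:
            pos += 1
    return cuts, pos          # (set of cuts, n)

def up(a, cuts, n):
    # a^P: the first c >= a with a cut just after c, or n if none
    for c in range(a, n):
        if c in cuts:
            return c
    return n

cuts, n = parse_picc("01|2|345")
print([up(a, cuts, n) for a in range(n + 1)])   # -> [1, 1, 2, 5, 5, 5]
```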
* The algebra of piccs
When P and P' are two piccs on {0,...,n} we say that P≤P' when
∀a∈{0,...,n}.a^P≤a^P'. The intuition is that P≤P' means that the graph
of the function ·^P is under the graph of ·^P':
a^P a^P a^P a^P
^ ^ ^ ^
5  * 5  * * 5  * * * * 5 * * * * * *
4  * 4  4  4 
3  * <= 3  * * <= 3  <= 3 
2  * 2  2  2 
1  * 1 * * 1 * * 1 
0 *> a 0 +> a 0 +> a 0 +> a
0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5
0|1|2|3|4|5 <= 01|23|45 <= 01|2345 <= 012345
This yields a partial order on piccs, whose bottom element is the
identity function 0|1|...|n, and the top element is 01...n, that takes
all elements to n.
It turns out that the piccs form a (Heyting!) algebra, in which we can
define ⊤, ⊥, ∧, ∨, and even →.
01234 ⊤
^ ^
 
01234 P∨Q
^ ^ ^ ^
/ \ / \
01234 01234 P Q
^ ^ ^ ^
\ / \ /
01234 P∧Q
^ ^
 
01234 ⊥
The operations ∧ and ∨ are easy to understand in terms of cuts (the
"|"s):
Cuts(P∨Q) = Cuts(P)∩Cuts(Q)
Cuts(P∧Q) = Cuts(P)∪Cuts(Q)
The stable elements of a picc on {0,...,n} are the ones at the left of
a cut plus the n, so we have:
St(P∨Q) = St(P)∩St(Q)
St(P∧Q) = St(P)∪St(Q)
Here is a case that shows how ·^(P∨Q) can be problematic. The result
of a^(P∨Q) must be stable by both P and Q. Let:
E := 0|12|34|56|78|9
O := 01|23|45|67|89
E∨O = 0123456789
E∧O = 0|1|2|3|4|5|6|7|8|9
We can define a^(P∧Q) := a^P∧a^Q, and this always works. But
a^(P∨Q) := a^P∨a^Q does not; we may have to do something like
iterating the two functions many times:
O∨E = 0123456
2^(O∨E)=6
a^(O∨E)=(((a^O)^E)^O)^E
O=01|23|45|6 E=0|12|34|56
O∧E=0|1|2|3|4|5|6
2^(O∧E)=2^O∧2^E=3∧2=2
a^(O∧E)=a^O∧a^E
The "→" in the algebra of piccs will not be relevant to us, so we will
not discuss it.
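Even though we skip the `→', the `∧' and `∨' of piccs can be tested mechanically. A sketch; the piccs O and E are the ones of the 7-element example as I reconstructed them from context, so treat them as an assumption:

```python
# Piccs via their cut sets: Cuts(P∨Q) = Cuts(P) ∩ Cuts(Q) and
# Cuts(P∧Q) = Cuts(P) ∪ Cuts(Q).
def parse_picc(s):
    cuts, pos = set(), -1
    for ch in s:
        if ch == '|':
            cuts.add(pos)
        else:
            pos += 1
    return cuts, pos

def up(a, cuts, n):
    # a^P: the first c >= a with a cut just after c, or n if none
    for c in range(a, n):
        if c in cuts:
            return c
    return n

# assumed reading of the 7-element example
(cO, n), (cE, _) = parse_picc("01|23|45|6"), parse_picc("0|12|34|56")
c_or, c_and = cO & cE, cO | cE      # cuts of O∨E and of O∧E

# a^(O∧E) = a^O ∧ a^E holds pointwise:
assert all(up(a, c_and, n) == min(up(a, cO, n), up(a, cE, n))
           for a in range(n + 1))

def up_join(a):
    # a^(O∨E): iterate ·^O and ·^E until a fixpoint is reached
    while True:
        b = up(up(a, cO, n), cE, n)
        if b == a:
            return a
        a = b

print(up_join(2))    # -> 6, while the naive 2^O ∨ 2^E = 3 ∨ 2 = 3
```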
* ZQuotients
A _ZQuotient_ for a ZHA with top element 46 is a partition of
{0,...,4} into contiguous classes (a "partition of the left wall"),
plus a partition of {0,...,6} into contiguous classes (a "partition of
the right wall"). Our favourite short notation for ZQuotients is with
"/"s and "\"s, like this, "4321/0 0123\45\6", because we regard the
cuts in a ZQuotient as diagonal cuts on the ZHA. The graphical
notation is this:
      46
    45  36
      35  26
    34  25
      24
    23  14
  22  13  04
    12  03
  11  02
    01
  00
4321/0 0123\45\6
which makes clear how we can adapt the definitions of ·~_P·, [·]_P,
·^P, St_P, which were on (one-dimensional!) piccs in section ..., to
their two-dimensional counterparts on ZQuotients. If P is the
ZQuotient of the figure above, then
34 ~_P 25 is true
23 ~_P 24 is false
[12]_P = {11, 12, 13, 22, 23}
22^P = 23
St_P = {03, 04, 23, 45, 46}
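The examples above can be checked mechanically. A Python sketch, assuming the element set of the ZHA with top 46 and the ZQuotient "4321/0 0123\45\6"; we take x^P to be the element of [x]_P with the largest coordinate sum, which is the top of its class in these examples:

```python
# The ZHA with top 46, as a set of two-digit strings "ab".
ZHA = set("00 01 02 03 04 11 12 13 14 22 23 24 25 26 34 35 36 45 46".split())

LEFT  = ["0", "1234"]        # left-wall partition:  4321/0
RIGHT = ["0123", "45", "6"]  # right-wall partition: 0123\45\6

def cls(blocks, digit):
    """The class (block) of a wall partition containing a digit."""
    return next(b for b in blocks if digit in b)

def equiv(x, y):
    """x ~_P y: same left-wall class and same right-wall class."""
    return (cls(LEFT, x[0]) == cls(LEFT, y[0]) and
            cls(RIGHT, x[1]) == cls(RIGHT, y[1]))

def eqclass(x):
    """[x]_P, inside the ZHA."""
    return {y for y in ZHA if equiv(x, y)}

def up(x):
    """x^P: the top of [x]_P (unique maximum in these examples)."""
    return max(eqclass(x), key=lambda y: int(y[0]) + int(y[1]))

print(equiv("34", "25"), equiv("23", "24"))   # True False
print(sorted(eqclass("12")))                  # ['11', '12', '13', '22', '23']
print(up("22"))                               # 23
print(sorted(x for x in ZHA if up(x) == x))   # ['03', '04', '23', '45', '46']
```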
* The algebra of ZQuotients
The ideas of the last section apply to ZQuotients, with a few
adjustments. The equality
St(P∧Q) = St(P)∪St(Q)
would mean, for P = 54/32/10 and Q = 01/23/45, that
St(P_cuts) = St(
* 2-column graphs
A 2-column graph is a graph like the one in the picture below,
composed of a _left column_ 4_ -> 3_ -> 2_ -> 1_ -> 0_, a _right
column_ _6 -> _5 -> _4 -> _3 -> _2 -> _1 -> _0, and some intercolumn
arrows: three going from the left column to the right column, and two
going from the right column to the left column.
(example)
A compact way to specify a 2-column graph is this: (left height, right
height, left-to-right arrows, right-to-left arrows). In the graph of
the example this is (5, 7, {...}, {...}).
We need to attribute a precise meaning to 0_,...,4_ and _0,...,_6.
For the moment let's use this one:
  vdots            vdots
(0,3) = 3_      _3 = (2,3)
(0,2) = 2_      _2 = (2,2)
(0,1) = 1_      _1 = (2,1)
(0,0) = 0_      _0 = (2,0)
Later we will see a different way of identifying the points of the
columns with points of R^2, that is more complex but quite convenient.
Let's write (5, 7, {...}, {...})^* for the transitive-reflexive
closure of the graph, and O(5, 7, {...}, {...}) for its order
topology. O(5, 7, {}, {}) is a rectangle.
Every open set of a 2-column graph is made of a pile of $a$ elements
at the bottom of the left column and a pile of $b$ elements at the
bottom of the right column. If we write these ``2-piles'' as $ab$ then
the connection with ZHAs becomes clear:
     0
     0
0    0
0    0
0    1
1    1
1    1   = 23
                                  48
                                47  38
                              46  37  28
                            45  36  27  18
                          44  35  26  17  08
                        43  34  25  16  07
                      42  33  24  15  06
                    41  32  23  14  05
O(5, 7, {}, {}) = 40  31  22  13  04
                    30  21  12  03
                      20  11  02
                        10  01
                          00
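Without intercolumn arrows the open sets are exactly all the pile pairs, so the "rectangle" claim can be checked by brute force. A tiny Python sketch:

```python
from itertools import product

# With no intercolumn arrows every pair of pile heights is open, so
# O(l, r, {}, {}) is the rectangle {0,...,l} x {0,...,r} of 2-piles ab.
def opens_rect(l, r):
    return [f"{a}{b}" for a, b in product(range(l + 1), range(r + 1))]

print(opens_rect(2, 3))
# ['00', '01', '02', '03', '10', '11', '12', '13', '20', '21', '22', '23']
```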
What happens when we add intercolumn arrows? Let A := O(5, 7, {}, {}).
If we add 2_>_1, we get A' := O(5, 7, {2_>_1}, {}), where 31 ==
(diagram) and all 2-piles of the form (diagram), which were open in A,
become non-open in A', and we get this: a dent in the left wall.
                               48
                             47  38
                           46  37  28
                         45  36  27  18
                       44  35  26  17  08
                     43  34  25  16  07
                       33  24  15  06
                         23  14  05
O(5, 7, {2_>_1}, {}) = 22  13  04
                     21  12  03
                   20  11  02
                     10  01
                       00
If we add 3_>_1, which is in the transitive closure of (5, 7,
{2_>_1}, {}), nothing changes: A'' := O(5, 7, {2_>_1, 3_>_1}, {}) is
equal to A'.
If we add _____, we get another dent in the left wall; if we add ____,
we get a dent in the right wall. We get this:
               48
             47  38
           46  37
             36
           35  26
         34  25  16
       33  24  15  06
         23  14  05
A''' = 22  13  04
     21  12  03
   20  11  02
     10  01
       00
Up to this point all of A, A', ..., A''' were ZHAs. Let's now add
the arrow .... . It creates a cycle in the 2-column graph, which was
previously acyclic; the new dent in the right wall touches a previous
dent in the left wall, making A'''' disconnected.
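The effect of intercolumn arrows on the open sets can be sketched in Python. One assumption here: we number the points 1_,...,l_ and _1,...,_r from the bottom (so the left pile contains x_ iff a >= x), which matches the dn computations of the later sections; the demo uses the example graph (5, 6, {2_>_3, 5_>_6}, {1_<_4}) from the wiggly section:

```python
from itertools import product

def opens(l, r, larrows=(), rarrows=()):
    """Open sets of a 2-column graph as pile pairs (a, b).  An arrow
    x_ -> _y forces b >= y whenever a >= x; an arrow _y -> x_ forces
    a >= x whenever b >= y."""
    return [(a, b)
            for a, b in product(range(l + 1), range(r + 1))
            if all(b >= y for (x, y) in larrows if a >= x)
            and all(a >= x for (y, x) in rarrows if b >= y)]

A = opens(5, 6, larrows=((2, 3), (5, 6)), rarrows=((4, 1),))
print(len(A))        # 24: the elements of the corresponding ZHA
print((2, 0) in A)   # False: part of the dent caused by 2_ -> _3
print((1, 4) in A)   # True
```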
* Proper 2-column graphs
Let's call a 2-column graph _proper_ if it has no redundant arrows,
like the ... that we added between ... and ..., and no cycles, like
the one that was created by adding ... after ... . It turns out that a
2-column graph has no redundant arrows iff its left-to-right arrows
can be put in a list (l1_>_r1, l2_>_r2, ..., ln_>_rn) in which
l1<...<ln and r1<...<rn, and its right-to-left arrows in a list
(_r1'>l1'_, ..., _rm'>lm'_) in which r1'<...<rm' and l1'<...<lm'; and
it has a cycle iff there are an arrow l_>_r and an arrow _r'>l'_
with r>=r' and l'>=l.
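These two conditions are easy to test mechanically. A Python sketch, under our reading of the statement above; arrows l_>_r are encoded as pairs (l, r) and arrows _r>l_ as pairs (r, l):

```python
def no_redundant(larrows):
    """No redundancies among left-to-right arrows (l, r): sorted by l,
    the r's must be strictly increasing, since an arrow (l, r) is
    implied by any (l', r') with l' <= l and r' >= r."""
    arr = sorted(larrows)
    return all(l1 < l2 and r1 < r2
               for (l1, r1), (l2, r2) in zip(arr, arr[1:]))

def no_cycles(larrows, rarrows):
    """A cycle needs l_>_r and _r'>l'_ with r >= r' and l' >= l."""
    return not any(r >= r2 and l2 >= l
                   for (l, r) in larrows for (r2, l2) in rarrows)

print(no_redundant({(2, 3), (5, 6)}))          # True
print(no_cycles({(2, 3), (5, 6)}, {(4, 1)}))   # True: a proper graph
print(no_cycles({(2, 3)}, {(3, 2)}))           # False: 3>=3 and 2>=2
```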
It should now be clear how to convert between proper 2column graphs
and ZHAs, and that they are in bijection. The following figure may
help.
         /--> _6                  /--> 16
   5_ --/     _5            56 --/     15
   4_     .-- _4            43     .-- 14
   3_     |/> _3            33     |/> 03
   2_ ---/|   _2            23 ---/|   02
   1_ <---'   _1            10 <---'   01

       56                       56
         46                       ..
       45  36                   ..  ..
     44  35  26               ..  ..  ..
   43  34  25  16           43  ..  ..  16
     33  24  15               33  ..  15
       23  14                   23  14
         13                       ..
       12  03                   ..  03
     11  02                   ..  02
   10  01                   10  01
     00                       ..
* Wiggly 2-column graphs
Let A be a proper 2column graph. It is better to work on an example,
so let A be (5, 6, {2_>_3, 5_>_6}, {1_<_4}), which is:
         /--> _6
   5_ --/     _5
   4_     .-- _4
   3_     |/> _3
   2_ ---/|   _2
   1_ <---'   _1
For each point in its two columns, we can put a 0 in its "_", regard
the result as a 2-pile, find the smallest open set containing it, and
represent that open set in the 2-digit notation. For example, if we
start with 4_, we get:
              /--> _6
        5_ --/     _5
        4_     .-- _4
40 ==   3_     |/> _3
        2_ ---/|   _2
        1_ <---'   _1
             /--> 0                 /--> 0
       0 --/      0           0 --/      0
       1     .--- 0           1     .--- 0
dn     1     |/>  0     =     1     |/>  1     =  43
       1 ---/|    0           1 ---/|    1
       1 <---'    0           1 <---'    1
Earlier we said that we were going to interpret a_ as (0,a) and _b as
(2,b), but that this was just one way. What happens when we interpret
a_ as dn a_ and _b as dn _b? _Lots_ of good things! See:
(...)
We get an embedding of A into O(A), in a way that takes each a_ to the
a-th generator (in the sense of sec ___) of the left wall, and each _b
to the b-th generator of the right wall; also, all the intercolumn
arrows become either straight southeast or straight southwest, and
when we abbreviate a_>_b and a_<_b as ab, like this,
(5, 6, {2_>_3, 5_>_6}, {1_<_4}) = (5, 6, {23, 56}, {14})
we see that 23 and 56 are exactly the generators of the left wall that
come immediately above a dent, or, equivalently, the first ones in
which the value of their r-coordinate changes:
      (56)                      56
         46                       ..
       45  36                   ..  ..
     44  35  26               ..  ..  ..
   (43) 34  25 (16)         43  ..  ..  16
     (33) 24 (15)             33  ..  15
       (23)(14)                 23  14
          13                      ..
        12 (03)                 ..  03
      11 (02)                 ..  02
    (10)(01)                10  01
        00                      ..
(with arrows)
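The map dn can be sketched in Python for the example graph (5, 6, {2_>_3, 5_>_6}, {1_<_4}); a minimal version, assuming that dn just closes a pile pair under the intercolumn arrows:

```python
# The wiggly example: A = (5, 6, {2_>_3, 5_>_6}, {1_<_4}).
LARROWS = {(2, 3), (5, 6)}   # x_ -> _y
RARROWS = {(4, 1)}           # _y -> x_

def dn(point):
    """dn of a column point, as the 2-pile 'ab' of the smallest open
    set containing it.  point = ('L', x) for x_ or ('R', y) for _y."""
    a = point[1] if point[0] == 'L' else 0
    b = point[1] if point[0] == 'R' else 0
    changed = True
    while changed:               # close under the intercolumn arrows
        changed = False
        for (x, y) in LARROWS:
            if a >= x and b < y:
                b, changed = y, True
        for (y, x) in RARROWS:
            if b >= y and a < x:
                a, changed = x, True
    return f"{a}{b}"

print([dn(('L', x)) for x in range(1, 6)])
# ['10', '23', '33', '43', '56']: the generators of the left wall
print([dn(('R', y)) for y in range(1, 7)])
# ['01', '02', '03', '14', '15', '16']: the generators of the right wall
```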
* 
* 
* 
* COMMENT :big: Intuitionistic (propositional) logic
** COMMENT Gödel's translation
# http://plato.stanford.edu/entries/goedel/
# http://plato.stanford.edu/entries/goedel/#GodWorIntLogAri
# http://plato.stanford.edu/entries/goedel/#IntProLogIntS4
# http://plato.stanford.edu/entries/goedel/#PriSou [1933f]
# http://plato.stanford.edu/entries/logic-modal-origins/#KriPosWorSem
# http://plato.stanford.edu/entries/logic-intuitionistic/
# (find-books "__modal/__modal.el")
** COMMENT How did I arrive at ZHAs?
** COMMENT Where in the literature is this?
# (find-books "__alg/__alg.el" "gratzer")
# (find-gratzerpage (+ 31 35) "3.5 Intervals")
"ZHAs are distributive lattices": Gratzer (sort of)
"ZHAs are Heyting Algebras": Gratzer (sort of)
"All planar HAs are isomorphic to ZHAs", or, better,
"If a ZPoset is a HA then it is isomorphic to a ZHA":
maybe I can claim authorship on that =) (?!?!)
(I'll sketch proofs of the missing parts soon)
(L,r) coordinates, bullet notation, Zfunction notation:
maybe I can claim authorship on that too =)
But what is really interesting to explain
is _how I arrived_ at ZHAs!
** COMMENT ZHAs are topologies
Let H ("house") be this ZSet:
 *
* *
* *
Here is the graph obtained from H by using black moves
and the reading order:
  1
 / \
v   v
2   3
|   |
v   v
4   5
BM(H) induces a topology on H.
A _topology_ on H is a set of subsets of H (the "open sets").
Notation:
O(H) ⊆ P(H)
(opens) (parts)
We will work on {1,2,3,4,5} for convenience
instead of on H = {(1,2), (0,1),(2,1), (0,0),(2,0)}.
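The topology induced by BM(H) can be enumerated by brute force. A Python sketch, assuming the open sets are the down-closed subsets under the arrows 1->2, 1->3, 2->4, 3->5:

```python
from itertools import chain, combinations

# Arrows of BM(H) in the reading order: 1->2, 1->3, 2->4, 3->5.
ARROWS = {(1, 2), (1, 3), (2, 4), (3, 5)}
POINTS = {1, 2, 3, 4, 5}

def is_open(s):
    """Open sets of the order topology: the down-closed subsets."""
    return all(y in s for (x, y) in ARROWS if x in s)

subsets = chain.from_iterable(combinations(sorted(POINTS), k)
                              for k in range(len(POINTS) + 1))
OPENS = [set(s) for s in subsets if is_open(set(s))]
print(len(OPENS))        # 10 open sets
print({4, 5} in OPENS)   # True: down-closed
print({2} in OPENS)      # False: 2 needs 4
```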
** COMMENT Topologies are Heyting Algebras
This is very well-known:
For any topological space (X,\Opens(X)),
the poset (\Opens(X), ⊆) is a Heyting Algebra.
We take ⊤:=X, ⊥:=\empty, and if P,Q,R∈\Opens(X), then
P&Q := {x∈X | (x∈P)&(x∈Q)}
     = P∩Q
P∨Q := {x∈X | (x∈P)∨(x∈Q)}
     = P∪Q
Q→R := {x∈X | (x∈Q)→(x∈R)}^\int
     = {x∈X | ¬(x∈Q)∨(x∈R)}^\int
     = ((X\bsl Q)∪R)^\int
     = \bigcup {P∈\Opens(X) | P&Q⊆R}
If we define Ω:=\Opens(X) and ⊤,⊥,&,∨,→ as above
then (Ω,⊤,⊥,&,∨,→) is a HA.
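The last equality gives a direct way to compute →. A Python sketch on a tiny finite topology, taken here (as an assumption) to be the down-sets of the DAG 1 -> 3 <- 2:

```python
# A tiny topology on X = {1, 2, 3}: the down-sets of the DAG 1 -> 3 <- 2.
OPENS = [frozenset(s) for s in (set(), {3}, {1, 3}, {2, 3}, {1, 2, 3})]

def imp(Q, R):
    """Q -> R = the union of all opens P with P & Q ⊆ R."""
    result = set()
    for P in OPENS:
        if P & Q <= R:
            result |= P
    return frozenset(result)

print(sorted(imp(frozenset({1, 3}), frozenset({3}))))   # [2, 3]
print(sorted(imp(frozenset(), frozenset())))            # [1, 2, 3]: ⊥→⊥ = ⊤
```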
** COMMENT Finite topologies
As I wanted to visualize how &,∨,→ functioned,
I started to work with the set of open sets ``generated by''
a few open sets, typically just U and V, or U, V and W,
maybe with some extra equations...
The ``top'' ⊤ would be on top,
the ``bottom'' ⊥ would be at the bottom.
  X=U∪V        ⊤=P∨Q
   ^ ^          ^ ^
  /   \        /   \
 U     V      P     Q
  ^   ^        ^   ^
   \ /          \ /
  U∩V          P&Q
    ^            ^
    |            |
 \empty          ⊥
** COMMENT Order topologies and their posets of open sets
Let's write ``↓B'' for ``the smallest open set containing B''.
Then ↓{1}={1,3}, ↓{2}={2,3}, ↓{3}={3}, or, better:
↓{1}=↓100=101, ↓{2}=↓010=011, ↓{3}=↓001=001.
`↓{}' takes each point of the DAG (A,R)
to a /different/ open set in (A,\Opens(A)),
in a way that inverts the arrows!
                111
                ^ ^
               /   \
 1   2       101   011
  \ /          ^   ^
  v v           \ /
   3            001
                 ^
                 |
                000
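The map ↓ can be computed by closing a set under the arrows. A Python sketch for the DAG above, using the 3-digit notation (one digit per point 1, 2, 3):

```python
# The DAG 1 -> 3 <- 2 and the 3-digit notation for its subsets.
ARROWS = {(1, 3), (2, 3)}

def down(B):
    """↓B: the smallest open set containing B (close under the arrows)."""
    s = set(B)
    while True:
        extra = {y for (x, y) in ARROWS if x in s} - s
        if not extra:
            return s
        s |= extra

def bits(s):
    return "".join("1" if p in s else "0" for p in (1, 2, 3))

print(bits(down({1})))   # 101
print(bits(down({2})))   # 011
print(bits(down({3})))   # 001
```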
** COMMENT When is $(\Opens(A),⊆)$ a ZPoset?
# (findLATEXgrep "grep niH e guill *.tex")
# (findLATEX "2014sfcslides2.tex" "unpriming")
# (findxpdfpage "~/LATEX/2014sfcslides2.pdf" 12)
The diagram below blew my mind, and motivated everything.
Every time that I started with a ZDAG that was
small enough to handle, (\Opens(A),⊆) turned out to
be a ZPoset. But why? And what exactly was ``small enough''?
(diagrams)
** COMMENT All intuitionistic theorems are tautologies in a ZHA
** COMMENT Intuitionistic theorems
** COMMENT Intuitionistic derivations
** COMMENT Intuitionistic theorems are true in Heyting Algebras
** COMMENT Intuitionistic derivations are constructions in HAs
** COMMENT Natural Deduction
* COMMENT :big: Heyting Algebras by diagrams
** TODO A translation diagram for Heyting Algebras
# (find854 "" "CCCs")
** TODO A translation diagram for BiCCCs
# (find854 "" "CCCs")
** HALF Do our operations $∧,∨,→$ on ZHAs really deserve their names?
Many slides ago we talked about Alice's logic and Bob's logic...
Alice's logic was this,
(copy)
and we were sort of leaving implicit that any other similar tuple, like
(random logic)
would be ``a logic'' in Alice's bookworld...
But things are not like that!!!
I just explained a version for children first!!!!!!!!
What's missing:
structure properties (new!)
/\ /\
a logic = (Om,and,or,...;eq_T,eq...)
where eq_T, ... are the properties (equations) that T,...
have to obey to ``deserve their names'', to ``behave as expected''.
They are:
(...)
** HALF The properties of $⊥,⊤,∧,∨,→$: a diagram
[diagrammatic mnemonic]
[deduction rules (sequents with le)]
[forallish formulation]
We will return to this diagram /many/ times...
** COMMENT ZPosets, ZToposes
** COMMENT Elephant
(All from section A4, in vol.1)
# (find-books "__cats/__cats.el" "johnstoneelephant")
** Definition 4.1.1
# (find-elephantpage (+ 17 161) "A4 Geometric Morphisms - Basic Theory")
# (find-elephantpage (+ 17 161) "Definition 4.1.1")
# (find-elephantpage (+ 17 163) "Example 4.1.4")
# (find-elephantpage (+ 17 164) "Lemma 4.1.5")
# (find-elephantpage (+ 17 165) "Example 4.1.8")
# (find-elephantpage (+ 17 165) "Example 4.1.10")
# (find-elephantpage (+ 17 180) "Example 4.2.6 (ii) and (iii)")
# (find-elephantpage (+ 17 181) "Example 4.2.7")
# (find-elephantpage (+ 17 182) "inclusion")
# (find-elephantpage (+ 17 182) "Lemma 4.2.9")
# (find-elephantpage (+ 17 183) "Theorem 4.2.10")
# (find-elephantpage (+ 17 184) "Examples 4.2.12 (b) and (c)")
# (find-elephantpage (+ 17 189) "Grothendieck coverages")
# (find-elephantpage (+ 17 190) "(b) (i) and (ii)")
# (find-elephantpage (+ 17 192) "Theorem 4.3.9")
# (find-elephantpage (+ 17 205) "Example 4.5.2")
# (find-elephantpage (+ 17 209) "Proposition 4.5.8 (i)")
# (find-elephantpage (+ 17 211) "Example 4.5.9")
# (find-elephantpage (+ 17 211) "We write Lop(E)")
# (find-elephantpage (+ 17 211) "Lemma 4.5.10")
# (find-elephantpage (+ 17 215) "(e)")
# (find-elephantpage (+ 17 219) "Corollary 4.5.20")
# (find-elephantpage (+ 17 219) "the local operator ¬¬ of 4.5.9 is dense")
# (find-elephantpage (+ 17 224) "Examples 4.6.2 (a), (c), (f)")
# (find-elephantpage (+ 17 226) "Theorem 4.6.5")
# (find-elephantpage (+ 17 226) "Proposition 4.6.6 (i) and (iv)")
# (find-elephantpage (+ 17 231) "Proposition 4.6.10 (i) and (iii)")
Definition 4.1.1 (a) Let \scrE and \scrF be toposes. A /geometric
morphism/ f: \scrF → \scrE consists of a pair of functors f_*: \scrF → \scrE
* 
* COMMENT eev-composes-utf8
(eepitch-shell)
(eepitch-kill)
(eepitch-shell)
# (find-eev "eev-compose.el")
# (find-evardescr 'eev-composes-all)
recode l1..u8 < ~/eev-current/eev-compose.el | awk '91