% (findangg "LATEX/istanbul1.tex")
% (findangg "LATEX/istanbul1.lua")
% (defun c () (interactive) (findLATEXsh "lualatex istanbul1.tex"))
% (defun c () (interactive) (findLATEXsh "lualatex outputformat=dvi istanbul1.tex"))
% (defun d () (interactive) (findxpdfpage "~/LATEX/istanbul1.pdf"))
% (defun d () (interactive) (findxdvipage "~/LATEX/istanbul1.dvi"))
% (defun e () (interactive) (findLATEX "istanbul1.tex"))
% (defun l () (interactive) (findLATEX "istanbul1.lua"))
% (findxpdfpage "~/LATEX/istanbul1.pdf")
% (findxdvipage "~/LATEX/istanbul1.dvi")
% (findfline "~/LATEX/" "istanbul1.pdf")
% http://angg.twu.net/LATEX/istanbul1.pdf
% «.geometry» (to "geometry")
% «.updatebody» (to "updatebody")
\documentclass[oneside]{book}
\usepackage[utf8]{inputenc}
% «geometry» (to ".geometry")
% (findangg ".emacs.papers" "latexgeom")
\usepackage[%total={6.5in,4in},
textwidth=4in, paperwidth=4.5in,
textheight=5in, paperheight=4.5in,
top=0.05in, left=0.2in%, includefoot
]{geometry}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{tikz}
\usepackage{luacode}
\usepackage[colorlinks]{hyperref} % (findes "tex" "hyperref")
\usepackage{proof} % For derivation trees ("%:" lines)
\input diagxy % For 2D diagrams ("%D" lines)
\xyoption{curve} % For the ".curve=" feature in 2D diagrams
\usepackage{edrx15} % (findLATEX "edrx15.sty")
\input istanbuldefs % (findist "defs.tex")
\begin{document}
%\input edrxrepl % (findLATEX "edrxrepl.tex")
\catcode`\^^J=10 % (findes "luatex" "spuriousomega")
%\directlua{dofile "\jobname.lua"} % (findLATEX "2015logicandcats.lua")
\directlua{dofile "2015logicandcats.lua"}
\directlua{dofile "istanbulall.lua"} % (findist "all.lua")
\directlua{getword = getword_utf8} % (finddn5 "dednat6.lua" "utf8")
\directlua{tf = TexFile.read(tex.jobname..".tex")}
\directlua{output = printboth}
\directlua{output = mytexprint}
\directlua{tf:processuntil()}
%
\def\pu{\directlua{tf:processuntil()}}
%%L output(preamble1)
\pu
\def\myttchars{\basicttchars}
% «updatebody» (to ".updatebody")
% Produce an updated body: (findist "all.lua" "orgblocktotexexport")
% Mark the body, then kill it by hand: (eek " 2* CSPC M> 8*")
% Insert a new body: (insertfile "~/LATEX/istanbul1body.tex")
%
% defs
\def\BF#1{\noindent{\bf#1}\quad}
\def\liml{\underleftarrow {\lim}{}}
\def\limr{\underrightarrow{\lim}{}}
\def\frakCat{\mathfrak{Cat}}
\def\frakTop{\mathfrak{Cat}}
\def\elephantpage#1{((page #1))}
\def\ob{{\operatorname{ob}}}
\def\sh{\mathbf{sh}}
\def\Sh{\mathbf{Sh}}
\def\Sp{\mathbf{Sp}}
\def\Lop{\mathbf{Lop}}
\def\Hom{\mathrm{Hom}}
\def\sdd{\ssk{\scriptsize (...)}\ssk}
\def\mdd{\msk{\scriptsize (...)}\msk}
%
% title
{\bf Logic and Categories, or:
Heyting Algebras for Children}
A tutorial presented at the UniLog 2015 conference (Istanbul),
June 2022, 2015
\msk
By Eduardo Ochs
{\tt eduardoochs@gmail.com}
\url{http://angg.twu.net/mathb.html#istanbul}
\msk
Version: 2015jun22
%
% 
\newpage
1. (Nonmathematical) introduction
\bsk
\newpage
1.1. Why study CT {\sl now}?
%\includegraphics[width=8.5cm]{garota_de_ipanema.jpg}
\par Public education in Brazil is being dismantled 
\par maybe we should be doing better things than studying
\par very technical \& inaccessible subjects
\par with no research grants 
\newpage
1.2. Category theory should be more accessible
\par Most texts about CT are for specialists in research universities...
\par {\sl Category theory should be more accessible.}
\msk
\par To whom?...
\begin{itemize}
\item Nonspecialists (in research universities)
\item Grad students (in research universities)
\item Undergrads (in research universities)
\item Nonspecialists (in conferences  where we have to be quick)
\item Undergrads (e.g., in CompSci  in teaching colleges)  (``Children'')
\end{itemize}
\newpage
1.3. What do we mean by "accessible"?
{\myttchars
\footnotesize
\begin{verbatim}
1) Done on very firm grounds:
mathematical objects made from numbers, sets and tuples
FINITE, SMALL mathematical objects whenever possible
avoid references to nonmathematical things like
windows, cars and pizzas (like the OO people do)
avoid reference to Physics  avoid QM at all costs
time is difficult to draw  prefer static rather than changing
2) People have very short attention spans nowadays
3) Selfcontained, but not isolated or isolating 
our material should make the literature more accessible
4) We learn better by doing.
Our material should have lots of space for exercises.
5) Most people with whom I interact are not from Maths/CS/etc
6) Proving general cases is relatively hard.
Checking and calculating is much easier.
People can believe that something can be generalized after
seeing a convincing particular case.
(Sometimes leave them to look for the right generalization
by themselves)
\end{verbatim}
}
\newpage
1.4. Generalizations
{\myttchars
\footnotesize
\begin{verbatim}
General case Full things
 ^  ^
projection  : lifting projection  : lifting
v : v :
Particular case Things for children
Paper:
Internal Diagrams and Archetypal Reasoning in Category Theory
Eduardo Ochs 2103  Logica Universalis
http://angg.twu.net/ < my home page
http://angg.twu.net/mathb.html
http://angg.twu.net/mathb.html#idarct
The typetheorical view of the formalization of all that
is something that I can discuss with very few people =(
The parts "for children" are more fun
(esp. because I don't have to do them alone)
so I have to get good excuses for working on that
instead of on the very technical typetheoretical parts
\end{verbatim}
}
\newpage
1.5. Yesterday and slides
{\myttchars
\footnotesize
\begin{verbatim}
To do: apologize for yesterday
I overestimated my TeXnical skills!!! =(
Some of my diagrams need LuaLaTeX extensions
that took me _much_ longer to write than I expected,
and the emergency tools that let me use ascii+utf8 slides
also took me one (_very_ frantic) day more than I had 
...so things were not ready yesterday  and as I could not
connect my computer directly to the projector, and I did not
have enough material in PDF form, I had to cancel 
These ascii slides will be at
http://angg.twu.net/mathb.html
soon.
Nice (Lua)(La)TeXed slides should be ready soon,
for some adequate, notverysmall value of "soon".
\end{verbatim}
}
%
% 
\newpage
2. The very basics
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
\newpage
2.1. Evaluation: steps
{\myttchars
\footnotesize
\begin{verbatim}
How do we evaluate an expression like 2*3+4*5
to obtain its value, 26?
This involves a series of steps...
There are several ways to represent these steps:
2*3+4*5 2*3+4*5 > 2*3+20
\/ \/  
6 20 v v
\/ 6+4*5 > 6+20 > 26
26
2*3+4*5 = 2*3+20
2*3+4*5 = 6+20 = 6+20
= 26 = 26
The technical term for the arrows is ``reductions''.
Informally, they are ``evaluation steps''.
\end{verbatim}
}
\newpage
2.2. Evaluation sequences
{\myttchars
\footnotesize
\begin{verbatim}
2*3+4*5 2*3+4*5 > 2*3+20
\/ \/  
6 20 v v
\/ 6+4*5 > 6+20 > 26
26
2*3+4*5 = 2*3+20
2*3+4*5 = 6+20 = 6+20
= 26 = 26
To evaluate is to reduce, and reduce, and reduce
until there is nothing more to do  until we reach
someting that is ``fully evaluated''.
Note that two ``evaluation sequences'' starting
at the same expression may ``diverge'' at some point,
but they necessarily ``converge'' later  this is
what assures us that the result of evaluation is
unique.
(Technical terms: ``confluence'', ``diamond lemma'',
``normalization''...)
\end{verbatim}
}
\newpage
2.3. Algebra: tables
{\myttchars
\footnotesize
\begin{verbatim}
Some expressions, e.g. (a+b)(ab) and a*ab*b,
return equal values for all inputs...
We can /test/ whether (a+b)(ab)=a*ab*b
in a /finite number/ of cases using tables,
a b (a+b)(ab) a*ab*b (a+b)(ab)=a*ab*b

0 0 0 0 T
2 3 5 5 T
10 1 99 99 T
but we can not /know/ that (a+b)(ab)=a*ab*b
just by checking a finite number of table lines...
\end{verbatim}
}
\newpage
2.4. Algebra: reductions
{\myttchars
\footnotesize
\begin{verbatim}
To understand why (a+b)(ab)=a*ab*b we need /algebra/,
which includes reductions like these, which use associativity
always in the "expansion" direction, i.e., x(y+z)>xy+xz,
(a+b)(ab) > (a+b)a(a+b)b > (a+b)aabbb
  
 v v
 aa+ba(a+b)b > aa+baabbb
 :
v :
a(ab)+b(ab) > a(ab)+babb :
  :
v v v
aaab+b(ab) > aaab+babb   > aabb
...but then we don't get confluence!
To get the two "  >" arrows we need
commutativity of + and ·, plus xx=0...
We need several new things to learn how to do
"calculations with letters"!...
We can teach evaluation to small children.
We can only teach algebra to /older/ children,
who have already learned evaluation.
The important thing:
evaluation comes first,
algebra comes later.
\end{verbatim}
}
\newpage
2.5. Logic: evaluation and tautologies
{\myttchars
\footnotesize
\begin{verbatim}
Logic is simpler than algebra (for children),
because we can prove lots of things just by
looking at tables with a finite number of lines.
For example, we can check that P&Q>P∨Q just by
calculating the result of P&Q>P∨Q in all four cases.
(P > Q)∨(Q > P) P Q P>Q Q>P (P>Q)∨(Q>P)
\/ \/ \/ \/ 
0 1 1 0 0 0 1 1 1
\/ \/ 0 1 1 0 1
1 0 1 0 0 1 1
\/ 1 1 1 1 1
1
P Q P&Q P∨Q P>Q

0 0 0 0 1
0 1 0 1 1
1 0 0 1 0
1 1 1 1 1
When we teach logic to children it is
good karma to spend a LONG time on this:
what propositions are tautologies?
\end{verbatim}
}
\newpage
2.6. A glimpse at other logics
{\myttchars
\footnotesize
\begin{verbatim}
The truth values of classical logic are {0,1}.
The truth values of O(2)logic are {00,01,11}.
Some propositions, like ¬¬P>P, are tautologies
in classical logic but not in O(2)logic...
P Q P&Q P∨Q P>Q P ¬P ¬¬P ¬¬P>P
 
00 00 00 00 11 00 11 00 11
00 01 00 01 11 01 00 11 01
00 11 00 11 11 11 00 11 11
01 00 00 01 00
01 01 01 01 11
01 11 01 11 11
11 00 00 11 00
11 01 01 11 01
11 11 11 11 11
\end{verbatim}
}
\newpage
2.7. $λ$calculus: subtopics
{\myttchars
\footnotesize
\begin{verbatim}
Lambdacalculus, even for children, is a big topic...
We will see it as this series of subtopics:
* the usual way of defining (named) functions
* the meaning of `:' and `>' in f:A>B
* the meaning of `>'
* evaluation, a.k.a., reduction
* lambdanotation
* betareduction
* the diamond lemma
* name clashes
* pairing and projection
* types
* contexts, prejudgments and judgments
* unions of contexts
* type inference
* Set as model for lambda1
\end{verbatim}
}
\newpage
2.8. Defining named functions
{\myttchars
\footnotesize
\begin{verbatim}
The usual notation for defining functions is like this:
f: N > R (*)
n > 2+sqrt(n)
name: domain > codomain
variable > expression
It creates _named_ functions.
To use this notation we have to fill up five slots,
and use a ":", an ">" and a ">" in the right places.
After stating (*) we can "reduce" expressions with f, like this:
f(4+5) > 2+sqrt(4+5)
: :
: :
v v
f(9) > 2+sqrt(9) > 2+3 > 5
\end{verbatim}
}
\newpage
2.9. $λ$calculus: defining unnamed functions
{\myttchars
\footnotesize
\begin{verbatim}
Now compare
name: domain > codomain
variable > expression name = \variable.expression
g: N > R
a > a*a+4 h = \a.a*a+4
g(2+3) > g(5) h(2+3) > h(5)
   
  v v
  (\a.a*a+4)(2+3) > (\a.a*a+4)(5)
   
v  v 
(2+3)*(2+3)+4  (2+3)*(2+3)+4 
 \   \ 
 v   v 
 (2+3)*5+4   (2+3)*5+4 
 \   \ 
v v v v v v
5*(2+3)+4 > 5*5+4 5*(2+3)+4 > 5*5+4
Note that the parentheses are important...
2+3*2+3+4 != (2+3)*(2+3)+4
Note also that in h = \a.a*a+4 we have not used
neither the domain nor the codomain...
\end{verbatim}
}
\newpage
2.10. Functions as lists of `$\mapsto$'s
{\myttchars
\footnotesize
\begin{verbatim}
Functions can be regarded as sets of (input,output) pairs.
For example,
f:{4,5,6} > R
a > 10a
_is_ the set {(4,40),(5,50),(6,60)}.
Just as we saw to fully evaluate expressions like 2·3+4·5 
the _value_ of 2·3+4·5 is 26 because on numbers there are
no futher evaluation steps to be done 
the _value_ of the f above is {(4,40),(5,50),(6,60)}.
Alternative notations:
\csm{(4,40), && (5,50), && (6,60)} =
\csm{4↦40, && 5↦50, && 6↦60} =
\csm{4↦40 && 5↦50 && 6↦60} =
\sm {4↦40 && 5↦50 && 6↦60}
We will be sloppy with commas in the {}/↦ notation.
\end{verbatim}
}
\newpage
2.11. Trees
{\myttchars
\footnotesize
\begin{verbatim}
We are going to use tree for lots of things, e.g.:
represent constructions
compare general and particular cases (with parallel trees)
add and erase information
track free variables / undischarged hypotheses
understand translations (including CurryHoward)
p (2,5) (2,5)
  
p π'p f (2,5) π'(2,5) sqrt (2,5) 5 sqrt
     
πp f(π'p) π(2,5) sqrt(π'(2,5)) 2 sqrt(5)
  
(πp,f(π'p)) (π(2,5),sqrt(π'(2,5))) (2,sqrt(5))
p:A×B (2,5)∈Z×N
 
p:A×B π'p:B f:B>C (2,5)∈Z×N 5∈N sqrt:N>R
   
πp:A f(π'p):C 2∈Z sqrt(5)∈R
 
(πp,f(π'p)):A×C (2,sqrt(5))∈Z×R
\end{verbatim}
}
\newpage
2.12. Trees: free variables, discharges, sequents, CurryHoward...
{\myttchars
\footnotesize
\begin{verbatim}

[p]^1 pp [P&Q]^1
    
[p]^1 π'p f pp pπ'p ff [P&Q]^1 Q Q>R
     
πp f(π'p) pπp f,pf(π'p) P R
  
(πp,f(π'p)) f,p(πp,f(π'p)) P&R
1  1
λp.(πp,f(π'p)) fλp.(πp,f(π'p)) P&Q>P&R
p:A×B
 
p:A×B π'p:B f:B>C p:A×Bπ'p:B f:B>C
   
πp:A f(π'p):C p:A×Bπp:A p:A×Bf(π'p):C
 
(πp,f(π'p)):A×C p:A×B(πp,f(π'p)):A×C

π':A×B>B f:B>C

π:A×B>A f∘π':A×B>C

\ang{π,f∘π):A×B>A×C
\end{verbatim}
}
\newpage
2.13. Alice's logic and Bob's logic
{\myttchars
\footnotesize
\begin{verbatim}
(For students of Discrete Mathematics)
Our friends Alice and Bob have both publish books on logic,
with some stylistic differences...
For Alice, Classical Propositional Logic is this,
(Ω,⊤,⊥,∧,∨,→,¬,↔) =
({0,1}, 1, 0, {(0,0)↦0, {(0,0)↦0, {(0,0)↦1, {0↦1, {(0,0)↦1, }
(0,1)↦0, (0,1)↦1, (0,1)↦1, 1↦0}, (0,1)↦0,
(1,0)↦0, (1,0)↦1, (1,0)↦0, (1,0)↦0,
(1,1)↦1 }, (1,1)↦1 }, (1,1)↦1 }, (1,1)↦1 }
In Bob's book, Classical Propositional Logic is
(Ω,⊤,∧,¬) = ...
with all the other operators being defined from this one...
How do we translate from one of these notions to another?
Suppose that Alice's book has a theorem that is not in Bob's book.
We may need to translate both the /theorem/ and the /proof/.
At one point, when we were studying Alice's book and feeling that
we were on shaky ground, we thought of "∧" as being the 4th
component of the structure above...
A very lowlevel translation uses things like "4th component".
A higherlevel translation refers to components by their names.
\end{verbatim}
}
%
% 
\newpage
3. (Planar) Heyting Algebras for children
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
\newpage
3.1. A game  on $N^2$
{\myttchars
\footnotesize
\begin{verbatim}
Let's consider a game played
on an infinite board, N^2,
with black and white pieces.
* This is a black piece.
It is solid. It is heavy.
It tends to sink.
/Black pieces move down/.
O This is a white piece.
It is hollow. It is light.
It tends to float.
/White pieces move up/.
Black moves: * (x,y) (0,y)
/\ /  \  \
v v v v v v v v
(x1,y1) (x,y1) (x1,y1) (0,y1) (1,y1)
White moves: (x1,y+1) (x,y+1) (x1,y+1) (0,y+1) (1,y+1)
^ ^ ^ ^ ^ ^ ^ ^
\/ \  /  /
O (x,y) (0,y)
Note that moves are only possible
between /valid positions/ 
this rules out positions with x=1.
\end{verbatim}
}
\newpage
3.2. A game  on subsets of $N^2$
{\myttchars
\footnotesize
\begin{verbatim}
Let's consider a game played
on an infinite board, N^2,
with black and white pieces...
Note that moves are only possible
between /valid positions/ 
this rules out positions with x=1.
We may restrict the
valid positions even more.
Let A ⊆ N^2.
Let BM(A) be the set of black moves
starting and ending on points of A, and
let WM(A) be the set of white moves
starting and ending on points of A.
Then BM(A) and WM(A) are sets of arrows,
and (A,BM(A)) and (A,WM(A)) are graphs.
(A,BM(A)) and (A,WM(A)) are directed,
acyclical graphs  DAGs.
\end{verbatim}
}
\newpage
3.3. A game  on $K$
{\myttchars
\footnotesize
\begin{verbatim}
Let K ("Kite") be this subset of N^2:
K = {(1,3),(0,2),(2,2),(1,1),(1,0)}
Then (K,BM(K)) is the DAG at the left below,
which is isomorphic to the DAG K' at the right...
(1,3) 1
/ \ / \
v v v v
(0,2) (2,2) 2 3
\ / \ /
v v v v
(1,1) 4
 
v v
(0,0) 5
(K,BM(K)) = ({(1,3), (0,2),(2,2), (1,1), (1,0)},
{((1,3),(0,2)), ((1,3),(2,2)),
((0,2),(1,1)), ((2,2),(1,1)),
((1,1),(0,0))})
K' = ({1, 2,3, 4, 5},
{(1,2),(1,3), (2,4),(3,4), (4,5)})
\end{verbatim}
}
\newpage
3.4. A game  definition aborted
{\myttchars
\footnotesize
\begin{verbatim}
When I talked about ``game'' that was a trick...
I just wanted to define BM(A) and WM(A) on an A ⊆ N^2.
We don't need notions like ``initial position'',
``turn'', ``winning'', etc.
From now on we will forget the idea of a ``game''.
\end{verbatim}
}
\newpage
3.5. Transitivereflexive closure
{\myttchars
\footnotesize
\begin{verbatim}
Let (A,R) be a (directed) graph; note that R ⊆ A^2.
We will write
(A,R^+) for the transitive closure of (A,R), and
(A,R^*) for the transitivereflexive closure of (A,R).
(A,R^+) is (A,R) ``plus all the identity arrows'',
(A,R^*) is (A,R^+) ``plus all the composites''.
Remember that when A ⊆ N^2
then (A,BM(A)) and (A,WM(A)) are DAGs...
When we pass to (A,BM(A)^*) and (A,WM(A)^*)
we don't get cycles  except for the identity maps!
(A,BM(A)^*) and (A,WM(A)^*) are directed,
``almost acyclical'' graphs...
More precisely, (A,BM(A)^*) and (A,WM(A)^*)
are /partial orders/!
\end{verbatim}
}
\newpage
3.6. Wellpositioned subsets of $N^2$
{\myttchars
\footnotesize
\begin{verbatim}
Let A ⊆ N^2, and let (Δx,Δy) ∈ Z^2.
Let A' = {(x+Δx,y+Δy)  (x,y)∈A}.
Then (A,BM(A)) and (A',BM(A')) are isomorphic!
For example, if A=K and (Δx,Δy)=(80,90),
then (A,BM(A)) and (A',BM(A')) are:
(1,3) (81,93)
/ \ / \
v v v v
(0,2) (2,2) (80,92) (82,92)
\ / \ /
v v v v
(1,1) (81,91)
 
v v
(0,0) (80,90)
We are only interested in the /graphs/
generated by subsets of N^2  so let's
create a notion of ``canonical position''...
Also, we are only interested in the graphs
generated by /finite/, /nonempty/ subsets of N^2...
\end{verbatim}
}
\newpage
3.7. Wellpositioned subsets of $N^2$ (2)
{\myttchars
\footnotesize
\begin{verbatim}
Definition:
a subset A of N^2 is /wellpositioned/
when it contains a point of the form (0,y)
and a point of the form (x,0), i.e.,
it touches both axes, i.e.,
it can't be translated left or down
without going outside of N^2...
For any nonempty subset A of N^2,
there is exactly one (Δx,Δy)∈N^2
such that A(Δx,Δy) is wellpositioned.
(1,3) (81,93)
/ \ / \
v v v v
(0,2) (2,2) (80,92) (82,92)
\ / \ /
v v v v
(1,1) (81,91)
 
v v
(0,0) (80,90)
\end{verbatim}
}
\newpage
3.8. ZSets
{\myttchars
\footnotesize
\begin{verbatim}
Definition:
A /ZSet/ is a finite, nonempty,
wellpositioned subset of N^2.
The nice thing about ZSets is that
we can represent them unambiguously
using a bullet notation. For example,
*
* *
*
*
is {(1,3), (0,2), (2,2), (1,1), (0,0)}.
\end{verbatim}
}
\newpage
3.9. ZSets, ZDAGs, ZPosets
{\myttchars
\footnotesize
\begin{verbatim}
Definitions:
A /ZSet/ is a finite, nonempty, wellpositioned subset of N^2.
A /ZDAG/ is a DAG (A,BM(A)) or (A,WM(A)), where A is a ZSet.
A /ZPoset/ is a poset (A,BM(A)^*) or (A,WM(A)^*), where A is a ZSet.
Examples:
Let H ("House") be this zset here:
*
* *
* *
then:
(H, BM(H)) and (H, WM(H)) are ZDAGs
(H, BM(H)^*) and (H, WM(H)^*) are ZPosets
\end{verbatim}
}
\newpage
3.10. Zfunctions
{\myttchars
\footnotesize
\begin{verbatim}
A Zfunction is a function whose domain is a ZSet.
We can represent Zfunctions using a positional notation.
For example, with K={(1,3), (0,2),(2,2), (1,1), (1,0)},
the function f:K>N such that
f = {((1,3),1), ((0,2),2),((2,2),3), ((1,1),4), ((1,0),5)}
can be represented as:
1
2 3
4
5
Note that its domain, K, can be inferred from the shape
*
* *
*
*
but its codomain has to be deduced from the context.
\end{verbatim}
}
\newpage
3.11. The reading order
{\myttchars
\footnotesize
\begin{verbatim}
The /reading order/ for a ZSet A
is the bijection from A to {1, 2, ..., A}
that enumerates the elements of A from top
to bottom, going from left to right in each line.
Here are some reading orders:
1
1 1 2 3
2 3 2 3 4 5
4 4 5 6 7 8 9
The reading order can be used to formalize
a way to ``read aloud'' Zfunctions. For example,
we can read aloud
3
2 40
1 50
as 3, 2, 40, 1, 50.
(And this gives a kind of a lexicographic order on
Zfunctions on the same ZSet)
\end{verbatim}
}
\newpage
3.12. ZHAs
{\myttchars
\footnotesize
\begin{verbatim}
A Heyting Algebra, as we will see later,
is a structure (a logic!) (Ω,≤,⊤,⊥,∧,∨,→,¬)
in which ≤,⊤,⊥,∧,∨,→,¬ obey the axioms
of /intuitionistic/ propositional calculus.
A ZHA is a ZPoset having:
a bottom point and a top point,
a left wall and a right wall,
all the sameparity points between the two walls,
and no other points.
Some examples of ZHAs:
o o
o o o o o
o o o o o o o o
o o o o o o o o o
o o o o o o o o o o o
o o o o o o o o o
o o o o o o o o
o o o o o o
o o o
(dot) (diamond5) (L) (squiggle) (weird)
\end{verbatim}
}
\newpage
3.13. ZHAs (2)
{\myttchars
\footnotesize
\begin{verbatim}
A ZHA is a ZPoset having:
a bottom point and a top point,
a left wall and a right wall,
all the sameparity points between the two walls,
and no other points.
o o
o o o o o
o o o o
o o o o o o
o o o o o o
o o o o o o o
o o o o o o o
o o o o o o
o o o o o o
o o o o o o
o o o
This is This is This is
a ZHA a ZHA NOT a ZHA
\end{verbatim}
}
\newpage
3.14. ZHAs (formally)
{\myttchars
\footnotesize
\begin{verbatim}
A ZHA is a ZPoset having:
a bottom point and a top point,
a left wall and a right wall,
all the sameparity points between the two walls,
and no other points.
A ZHA D is generated by:
a height maxy ≥ 0
a bottom point (x_0, 0)
left wall and right wall functions L,R:{0,1,...,maxy}>N
obeying:
L(0)=R(0)=x_0 (bottom point)
L(maxy) = R(maxy) (top point)
0 ≤ L(i) ≤ R(i) (left wall ≤ right wall)
L(y+1) = L(y)±1 (left wall changes in unit steps)
R(y+1) = R(y)±1 (right wall changes in unit steps)
L(y)=0 for some y (to ensure wellpositionedness)
then
D ∩ {(x,y)x ∈ N} is empty when y>maxy,
D ∩ {(x,y)x ∈ N} = {L(y), L(y)+2, L(y)+4, ..., R(y)}
when y∈{0,1,...,maxy}.
\end{verbatim}
}
\newpage
3.15. ZHAs (formally) (2)
{\myttchars
\footnotesize
\begin{verbatim}
A ZHA is a ZPoset having:
a bottom point and a top point,
a left wall and a right wall,
all the sameparity points between the two walls,
and no other points.
o L(9)=1 R(9)=1 maxy=9 L(maxy)=R(maxy)
o o
o
o
o
o o
o o o
o o
o o L(1)=3 R(1)=5
o L(0)=4 R(0)=4 L(0)=R(0)=x_0=4
\end{verbatim}
}
\newpage
3.16. ZHAs: the (l,r) coordinates
{\myttchars
\footnotesize
\begin{verbatim}
We can define the (l,r) coordinates
on a ZHA /informally/ in this way:
the bottom point has (l,r)=(0,0);
when we go one unit northwest,
i.e., from (x,y) to (x1,y+1),
the l coordinate increases by 1;
when we go one unit northeast,
i.e., from (x,y) to (x+1,y+1),
the r coordinate increases by 1;
We usually write the (l,r) coordinates
of a point as just lr  two digits together,
without the `(', `,', `)'.
Here are two examples:
46
45 36 63
35 26 62 53
34 25 52 43
24 51 42 33
23 50 41 32 23
22 40 31 22
21 30 21 12
20 11 20 11 02
10 01 10 01
00 00
\end{verbatim}
}
\newpage
3.17. ZHAs are Heyting Algebras (introduction)
{\myttchars
\footnotesize
\begin{verbatim}
Magic fact / big theorem:
ZHAs are Heyting Algebras.
Version for children: we will learn
1. how to interpret ≤,⊤,⊥,∧,∨,→,¬,↔ in any given ZHA
2. what properties that ≤,⊤,⊥,∧,∨,→,¬,↔ have to obey in a HA
3. that our ZHA operations from (1) obey the properties in (2)
The properties are these:
(id) ∀P.(P≤P)
(comp) ∀P,Q,R.(P≤Q)∧(Q≤R)→(P≤R)
(⊤) ∀P.(P≤⊤)
(⊥) ∀P.(⊥≤P)
(∧) ∀P,Q,R.(P≤Q∧R)↔(P≤Q)∧(P≤R)
(∨) ∀P,Q,R.(P∨Q≤R)↔(P≤R)∧(Q≤R)
(→) ∀P,Q,R.(P≤Q→R)↔(P∧Q≤R)
(¬) ¬P := P→⊥
(↔) P↔Q := (P→Q)∧(Q→P)
\end{verbatim}
}
\newpage
3.18. Operations in ZHAs: $≤, ⊤, ⊥, ∧$ and $∨$
{\myttchars
\footnotesize
\begin{verbatim}
In a ZHA H,
⊤ is the top element of H,
⊥ is the bottom element (00), and
P≤Q is true iff P_L≤Q_L and P_R≤Q_R, or, equivalently,
P≤Q is true iff the ZPoset (H,WM(H)^*) has an arrow P>Q.
Also:
(P_L,P_R)&(Q_L,Q_R) := (min(P_L,Q_L),min(P_R,Q_R))
(P_L,P_R)∨(Q_L,Q_R) := (max(P_L,Q_L),max(P_R,Q_R))
Visually, this is:
(diagram)
\end{verbatim}
}
\newpage
3.19. Operations in ZHAs: $→$
{\myttchars
\footnotesize
\begin{verbatim}
In a ZHA H,
P≤Q returns 0 or 1, (≤:H×H→{0,1})
P→Q returns an element of H (→:H×H→H).
The definition of →:H×H→H is by cases.
(Note: before I explain this to people I usually
ask them to remember how weird they felt the
definition of →:{0,1}×{0,1}→{0,1} was when
they first saw it...)
Q→R := if Q is below R then ⊤ < case 1
elseif Q at left of R then (↗)(Q∧R) < case 2
elseif Q at right of R then (↖)(Q∧R) < case 3
elseif Q is above R then R < case 4
end
Note that the order matters!
If R=22, this is λQ.(which case we use to calculate Q→R):
4
4 4
2 4 3
2 2 3 3
2 2 1 3 3
2 1 1 3
1 1 1
1 1
1
\end{verbatim}
}
\newpage
3.20. Operations in ZHAs: $¬$ and $↔$
{\myttchars
\footnotesize
\begin{verbatim}
The operations ¬ and <> in a ZHA are
defined in terms of others.
¬P := P>⊥
P<>Q := (P>Q)&(Q>P)
\end{verbatim}
}
\newpage
3.21. Some nontautologies: DeMorgan
{\myttchars
\footnotesize
\begin{verbatim}
In classical logic both DeMorgan laws hold:
¬(P&Q)<>(¬P∨¬Q)
¬(P∨Q)<>(¬P&¬Q)
In intuitionistic logic these three implications are provable,
(¬P∨¬Q)>¬(P&Q)
¬(P∨Q)>(¬P&¬Q) (¬P&¬Q)>¬(P∨Q)
While this one
¬(P&Q)>(¬P∨¬Q)
is false in some ZHAs, for example here:
32
22
21 12
20 11 02
10 01
00
¬(P & Q)>(¬ P ∨ ¬ Q)
   
10 01 10 01
  
00 02 20
 
32 22

22
\end{verbatim}
}
\newpage
3.22. Some nontautologies: $¬¬P≤P$
{\myttchars
\footnotesize
\begin{verbatim}
Exercise (homework!):
check that in the obvious ZHA 
the same as in the previous slide 
we have:
00
00
00 00
(λP.¬P) = 02 00 20
02 20
32
32
32
32 32
(λP.¬¬P) = 20 32 02
20 02
00
1
0
(λP.¬¬P=P) = 0 0
(λP.¬¬P≤P) = 1 0 1
0 0
1
\end{verbatim}
}
\newpage
3.23. Lindenbaum Algebras
{\myttchars
\footnotesize
\begin{verbatim}
We've been working with algebras of truth values.
What happens when we try to pass to an algebra of propositions?
"Wff" = "wellformed formula".
Let A = Wffs(⊤,⊥,∧,∨,→,¬,↔,P,Q,R) be the set of wffs built from
the constants ⊤,⊥, the operators ∧,∨,→,¬,↔ and the
atomic propositions P,Q,R  plus parentheses.
Let D be the set of triples (P,Q,δ) where
δ is a /intuitionistic/ proof of P⊢Q. < what is that????
Let D^ be the set of pairs (P,Q) obtained by deleting
the third component of each triple (P,Q,δ)∈D.
Then (A,D^) is a directed graph.
(A,D^) is reflexive and transitive  it is a /preorder/ on A.
It induces a quotient ~ on A.
(A/~,D^/~) is a partial order  the Lindenbaum Algebra
geerated by etc etc etc etc.
\end{verbatim}
}
\newpage
3.24. Lindenbaum Algebras (2)
{\myttchars
\footnotesize
\begin{verbatim}
Let A = Wffs(⊤,⊥,&,∨,→,P,Q,R).
Let D be the set of triples (P,Q,δ) where
δ is a intuitionistic proof of P⊢Q (P,Q∈A).
Let D^ be the set of pairs (P,Q) obtained by deleting
the third component of each triple (P,Q,δ)∈D.
Then (A,D^) is a directed graph,
and the partial order (A/~,D^/~) is the Lindenbaum Algebra (...)
Lindenbaum Algebras are (free) Heyting Algebras.
We can visualize Lindenbaum Algebras by writing some
wffs in A and
1) treating them as vertices in a directed graph
2) drawing the arrows in D between them
3) making each wff stand for its equivalence class.
\end{verbatim}
}
\newpage
3.25. Lindenbaum Algebra for classical logic
{\myttchars
\footnotesize
\begin{verbatim}
Let A = Wffs(⊤,⊥,&,∨,→,¬,↔,P) be the set of wffs...
Let D be the set of triples (P,Q,δ) where
δ is a /classical/ proof of P⊢Q.
Then the Lindenbaum Algebra (A/~,D^/~) looks like this 
more precisely, it is transitivereflexive closure of this:
⊤
P ¬P
⊥
This is a Heyting Algebra  but it is not free.
It is a Boolean Algebra  the free one on one generator.
\end{verbatim}
}
\newpage
3.26. Lindenbaum Algebras for classical logic (2)
{\myttchars
\footnotesize
\begin{verbatim}
Let A = Wffs(⊤,⊥,&,∨,→,¬,↔,P,Q) be the set of wffs...
Let D be the set of triples (P,Q,δ) where
δ is a /classical/ proof of P⊢Q.
Then the Lindenbaum Algebra (A/~,D^/~) looks like this 
more precisely, it is the transitivereflexive closure of
this hypercube:
F
7 E
B 6 D
3 5 A C
2 9 4
1 8
0
This is the free Boolean Algebra on two generators.
(Hint: take P=3 and Q=5)
\end{verbatim}
}
\newpage
3.27. Lindenbaum Algebras for IPL are infinite
{\myttchars
\footnotesize
\begin{verbatim}
Let I_1 = Wffs(⊤,⊥,&,∨,→,¬,↔,A) be the set of wffs...
Let D be the set of triples (P,Q,δ) where
δ is an /intuitionistic/ proof of P⊢Q.
Then the Lindenbaum Algebra (I_1/~,D^/~) has at least
these distinct points:
43 G G:=F∨F'
42 33 F' F F:=E∨E' F':=F→E
32 23 E E' E:=D∨D' E':=E→D
31 22 D' D D:=C∨C' D':=D→C
21 12 C C' C:=B∨B' C':=C→B
20 11 B' B B:=A∨A' B':=B→A
10 01 A A' A:=10 A':=¬A
00 00
A, A', B, B', ... are all distinct in the ZHA at the left above,
so there cannot be an intuitionistic proof that any two of them 
say, E and C  are equal.
^ This is an argument for adults!!!
We will make it childrenfriendly soon.
\end{verbatim}
}
\newpage
3.28. A computer library
{\myttchars
\footnotesize
\begin{verbatim}
The "spec" for a ZHA uses one char per line...
> ZHA.fromspeclr("123LLR432L1"):drawf()
64
63 54
53 44
52 43 34 < 3
51 42 33 24 < 4
41 32 23 < R (means 3 again, but moved right)
40 31 22 < L (means 3 again, but moved left)
30 21 12 < L (means 3 again, but moved left)
20 11 02 < 3
10 01 < 2
00 < 1 < first char of the spec
>
\end{verbatim}
}
\newpage
3.29. A computer library (2)
{\myttchars
\footnotesize
\begin{verbatim}
We can define functions using a very compact lambdanotation,
and use them to draw the values of Zfunctions.
L0 "P Eq(P, Not(Not(P)))"
> "function (P) return Eq(P, Not(Not(P))) end"
The ":drawL" method draws Zfunctions.
The global functions Eq, Not, And, Imp, etc
use the global variable "z"  that's why the "z =".
> z = ZHA.fromspeclr("123LLRR21"):drawf()
(...)
> z:drawL "P Not(P) "
(...)
> z:drawL "P Not(Not(P)) "
(...)
> z:drawL "P Eq(P, Not(Not(P)))"
11
00 00
00 00 00
00 00 00
11 00 00
00 00 00
00 00 11
00 00
11
>
\end{verbatim}
}
\newpage
3.30. When in doubt use brute force
\begin{quote}
{\sl When in doubt use brute force.}
Dennis Ritchie
\end{quote}
In the beginning, and for several years, I had very little
intuition about how the $→$ behaved in a Heyting Algebra...
And to make things worse, there are lots of things
that are important in Topos Theory 
namely: the $¬¬$ operator, modalities, sheaves, forcing 
about which I had no intuition whatsoever!
It {\sl had} to be possible to visualize them 
\msk
That was my main motivation for working with ZSets, ZHAs, etc.
\msk
Status of the project:
now I know how to visualise {\sl most} of the operations and theorems
in Fourman/Scott79 
and {\sl some} things about sheaves and geometric morphisms,
mainly from Peter Johnstone's ``Sketches of an Elephant''
and now I am in a position to get help from adult toposophers! =) =) =)
%
% 
\newpage
4. Categories
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
\newpage
4.1. Categories
{\myttchars
\footnotesize
\begin{verbatim}
A category
C = (C_0, Hom_C, id_C, o_C; idL_C, idR_C, assoc_C)
is a directed graph on steroids.
C_0 is the set (< version for children) of ``objects'' of C.
Hom_C(A,B) returns the set (< same)
of arrows (``morphisms'', ``maps'') from A to B.
id_C(A) returns and ``identity map'' id_A:A>A.
o_C tells how ``composable'' maps (whose tails and heads meet)
compose; we use ``;'' to write the composition in ``diagrammatic order''.
f g
A > B > C
>
(gof)=(f;g)
idL_C and idR_C tell us that identity maps are cancellable
at the left and at the right: (id_A;f)=f, (f;id_B)=f.
assoc_C tells us that composition is associative: (f;g);h=f;(g;h).
A protocategory is just this:
C^ = (C_0, Hom_C, id_C, o_C)
Note (for adults!!!):
the ``properties'' idL_C, idR_C, assoc_C only make sense
when we know how to compare morphisms for equality.
\end{verbatim}
}
\newpage
4.2. Set is a category
{\myttchars
\footnotesize
\begin{verbatim}
This is our archetypal category:
Set = (Set_0, Hom_Set, id_Set, o_Set; idL_Set, idR_Set, assoc_Set)
The objects of Set are sets  so, for example,
{0,1}, {2,3,4,5,6}, {7,8,9}, \empty, \N, \R ∈ Set_0.
The morphisms of Set are the maps between sets  so, for example,
Hom_Set({0,1}, {2,3,4,5,6}) has 2^5 elements (functions!),
Hom_Set({2,3,4,5,6}, {7,8,9}) has 5^3 elements.
Composition in Set is compostion of functions. For example:
2>7
3>8
4>9
0>4 5>9
1>2 6>9
{0,1} > {2,3,4,5,6} > {7,8,9}
>
0>9
1>7
And id_{7,8,9} = {7>7, 8>8, 9>9}.
\end{verbatim}
}
\newpage
4.3. ZPosets are categories
{\myttchars
\footnotesize
\begin{verbatim}
A ZPoset, like K_↓*
(the "kite" with the (transitivereflexive closure of) BM(K))
can be seen as a category, in which
Hom (A,B) = / {(A,B)} when there is an arrow from A to B,
K_↓* 
\ empty when there isn't an arrow from A to B.
For example:
((1,3),(0,2)) ((0,2),(1,0))
(1,3) > (0,2) > (1,0)
>
((1,3),(1,0))
Posets can be seen as categories in which
each Homset has at most one element.
Warning (for children!):
The ``can be seen'' is important!!!
Many slides ago we saw K_↓* as a pair
made of a ZSet K and subset of K×K...
But K_↓* ``as a category'' is a 7uple!
Abuse of language!
Also, morphisms here are funny things 
they are not functions, they are /pairs/!
\end{verbatim}
}
\newpage
4.4. Categories: associativity
{\myttchars
\footnotesize
\begin{verbatim}
Associativity assures us that
diagrams whose cells commute
commute fully.
Example:
A > B > C
  
v v v
D > E > F
ab;bc;cf = ab;(bc;cf)
= ab;(be;ef)
= (ab;be);ef
= (ad;de);ef
\end{verbatim}
}
\newpage
4.5. Categories: iso(morphisms) and identities
{\myttchars
\footnotesize
\begin{verbatim}
An iso is a map f:A>B
for which there is a f¹:B>A
such that (f;f¹)=id_A and (f¹;f)=id_B.
f
A > B
\  \
id_A\ f¹ \id_B
v v v
A > B
f
Inverses are unique. When we have both
(f;g)=id_A and (g;f)=id_B and
(f;h)=id_A and (h;f)=id_B
then we can prove that g=h:
g,h
B > A g = g;id_A
\  \ = g;(f;h)
id_B\ f \id_A = (g;f);h
v v v = id_B;h
B > A = h
g,h
Note that we used identities (id_C) to define isos
and the properties (idL_C, idR_C)
that say that they identities are neutral elements
to prove that inverses are unique.
\end{verbatim}
}
\newpage
4.6. Functors
{\myttchars
\footnotesize
\begin{verbatim}
A functor F: C → D is:
F = (F_0, F_1; respids_F, respcomp_F)
it is made of an _action on objects_, F_0,
that takes objects of C to objects of D,
and an _action of morphisms_, F_1,
that takes morphisms in C to morphisms in D...
and ``properties'' (that appear after the ";"):
respids_F = ∀A∈C_0. id_D(F_0(A)) = F_1(id_C(A))
respcomp_F = ∀A,B,C∈\C_0. ∀f:A>B, g:B>C. F_1(f;g)=F_1(f);F_1(g)
\end{verbatim}
}
\newpage
4.7. Functors  on diagrams
{\myttchars
\footnotesize
\begin{verbatim}
A functor F:\C>\D takes
1. composable maps in \C to composable maps in \D
2. diagrams in \C to diagrams in \D
3. commutative diagrams in \C to commutative diagrams in \D
4. isos in \C to isos in \D
Exercise / homework:
show that if the diagram at the left below commutes
then its image by F commutes too, and F_1(f¹)=F_1(f)¹.
Note: it has lots of (common!) shorthands...
You will have to figure them all out.
f f
A > B FA > FB
\  \ \  \
id\ f¹ \id id\ f¹ \id
v v v F v v v
A > B > FA > FB
 f   f 
g h g h
v v v v
C > D FC > FD
k k
\end{verbatim}
}
\newpage
4.8. Functors: $(A×)$ and $({0,1}x)$
{\myttchars
\footnotesize
\begin{verbatim}
B > FB B > F_0(B)
   
f > Ff f > F_1(f)
v v v v
C > FC C > F_0(C)
F F
\C > \D \C > \D
B > (A×)B B > (A×)_0(B) B > A×B
     
f > (A×)f f > (A×)_1(f) f > λp.(πp,f(π'p))
v v v v v v
C > (A×)C C > (A×)_0(C) C > A×C
(A×) (A×) (A×)
\Set > \Set \Set > \Set \Set > \Set
{2,3} > {0,1}×{2,3}
 
2>4  >  (0,2)>(0,4), (1,2)>(1,4),
3>6   (0,3)>(0,6), (1,3)>(1,6)
v v
{4,5,6} > {0,1}×{4,5,6}
\end{verbatim}
}
\newpage
4.9. Functors: why $(A×)_1(f) = λp.(πp,f(π'p))$ ?
%: p:? p:A{×}B
%:  
%: p:? π'p:? f:B→C p:A{×}B π'p:B f:B→C
%:    
%: πp:? f(π'p):? πp:A f(π'p):C
%:  
%: (πp,f(π'p)):? (πp,f(π'p)):A{×}C
%:  
%: λp.(πp,f(π'p)):A{×}B→A{×}C λp.(πp,f(π'p)):A{×}B→A{×}C
%:
%: ^why1 ^why2
\pu
$$\ded{why1} \quad \ded{why2}$$
How would someone {\sl find} the expression $λp.(πp,f(π'p))$?
Idea: let's try to construct a function $?:A×B→A×C$ from $f:B→C$...
%: ?:A{×}B p:A{×}B
%: π' 
%: ?:A{×}B ?:B f:B→C p:A{×}B π'p:B f:B→C
%: π app  
%: ?:A ?:C πp:A f(π'p):C
%: \ang{,} 
%: ?:A{×}C (πp,f(π'p)):A{×}C
%: λ 
%: ?:A{×}B→A{×}C λp.(πp,f(π'p)):A{×}B→A{×}C
%:
%: ^why3 ^why4
\pu
$$\ded{why3}$$
% $$\ded{why3} \qquad \ded{why4}$$
\newpage
4.10. Categories with
{\myttchars
\footnotesize
\begin{verbatim}
Almost all the time in Category Theory
we work with categories with extra structure
(and extra properties).
For example, Set, and each ZHA, is
(1) a category with a terminal object
(2) a category with binary products
(3) a category with finite products (= 1+2)
(4) a category with exponentials (needs 2)
(5) a cartesianclosed category (= all the above)
(6) a category with an initial object
(7) a category with binary coproducts (a.k.a. binary sums)
(8) a category with finite coproducts (a.k.a. finite sums; = 6+7)
(9) a cartesianbiclosed category (= all the above)
\end{verbatim}
}
\newpage
4.11. BiCCCs
{\myttchars
\footnotesize
\begin{verbatim}
A Cartesianbiclosed category has these operations
(we use some nested uples for clarity):
(C_0, Hom, id, ∘,
(1,!),(×,π,π', <,>),(→,♭,♯),
(0,¡),(+,ι,ι', [,]))
We are omitting the ``properties'' that these operations obey.
Trick: a (bi)Heyting Algebra has exactly the same structure
as a (bi)CCC.
A (bi)Heyting Algebra is a (bi)CCC in which each Homset
has at most one element.
(See the handouts!)
\end{verbatim}
}
%
% 
\newpage
5. Logic on subsets of ZDAGs
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
\newpage
5.1. Order topologies
{\myttchars
\footnotesize
\begin{verbatim}
Let (A,R) be a graph.
Example: (A,R) = ({1,2,3}, {(1,3),(2,3)}).
1 2
\ /
v v
3
A _topology_ on a set X is a set of subsets of X (the "open sets").
A graph R ⊆ A×A induces a topology on A.
Here's how:
O(A) = {U⊆A  (1∈U → 3∈U) ∧ (2∈U → 3∈U)}
\/ \/
condition condition
induced by induced by
1>3 2>3
If we order the open sets by inclusion
we get a partial order, (O(A),⊆).
Let's draw it as a graph.
\end{verbatim}
}
\newpage
5.2. Order topologies (2)
{\myttchars
\footnotesize
\begin{verbatim}
{1,2,3} 1 1 ⊤
^ ^ 1 ^ ^
/ \ ^ ^ / \
/ \ / \ / \
1 2 {1,3} {2,3} 1 0 0 1 P Q
\ / ^ ^ 1 1 ^ ^
\ / \ / ^ ^ \ /
v v \ / \ / \ /
3 {3} 0 0 P∧Q
^ 1 ^
 ^ 
  
{} 0 0 ⊥
0
(A,R) (O(A),⊆) (O(A),⊆)
\end{verbatim}
}
\newpage
5.3. Topologies are Heyting Algebras
{\myttchars
\footnotesize
\begin{verbatim}
Look into almost any CT book
and you find this:
for any topological space (X,O(X))
the poset (O(X),⊆) is a Heyting Algebra.
A HA is a structure (Ω,≤,⊤,⊥,∧,∨,→,¬)
obeying certain properties.
What are Ω,≤,⊤,⊥,∧,∨,→,¬?
Ω := O(X) the open sets are the truth values.
⊤ := X because ⊤ is true everywhere
⊥ := {} because ⊥ is true everywhere
≤ := ⊆
P∧Q := {x∈X  x∈P ∧ x∈Q} P∧Q is true where both P and Q are
P∨Q := {x∈X  x∈P ∨ x∈Q} P∨Q is true where either P or Q are
P→Q := {x∈X  x∈P → x∈Q} P→Q is true where P implies Q < oops...
Problem: the definition of P→Q does not work always 
it _sometimes_ produces a nonopen output even when
both its inputs are open... example:
0 0 1
0 0 > 1 0 = 1 1
1 1 1 0 1 0
(open) (open) (nonopen)
\end{verbatim}
}
\newpage
5.4. Order topologies: nonopen subsets
{\myttchars
\footnotesize
\begin{verbatim}
A subset B⊆A is nonopen when it has a 1 above a 0
with an arrow 1>0 between them... for example,
{2} is not open:
1 2 0 1
\ / \ /
v v v v
3 0
\end{verbatim}
}
\newpage
5.5. Order topologies: stable subsets
{\myttchars
\footnotesize
\begin{verbatim}
When I explain these things to children I find it better
to start using the term ``stable subset'' (or stable function)
instead of ``open subset''.
The idea:
`1's are ``things'' (which are heavy),
`0's are ``empty spaces''.
The `1's want to fall down.
For example, 010 is unstable:
0 1 0 0
\ / \ /
v v   > v v
0 1
Then
P(* *) = {0 0, 0 0, 0 1, 0 1, 1 0, 1 0, 1 1, 1 1}
* 0 1 0 1 0 1 0 1
O(* *) = {0 0, 0 0, 0 1, 1 0, 1 1}
* 0 1 1 1 1
The `P' means ``parts'', but
The `O' means ``open sets'',
for technical reasons that we will understand much later...
(Children don't know topology  I mean, usually)
\end{verbatim}
}
\newpage
5.6. Fixing the implication
{\myttchars
\footnotesize
\begin{verbatim}
The fix:
P→Q := {x∈X  x∈P → x∈Q} < bad
P→Q := {x∈X  x∈P → x∈Q}° < good ("°" means "interior")
\end{verbatim}
}
\newpage
5.7. Modal logic for children (...but not now)
{\myttchars
\footnotesize
\begin{verbatim}
We can do MUCH better than just adding an "°".
Let (A,R) be a graph, and (A,O(A)) its order topology.
These are two logics:
I = (Ω , ≤ , ⊤ , ⊥ , ∧ , ∨ , → , ¬ )
I I I I I I I I
M = (Ω , ≤ , ⊤ , ⊥ , ∧ , ∨ , → , ¬ , ⋅ , ⋄ )
M M M M M M M M M M
"I" is the intuitionistic logic we've been working on.
"M" is a modal logic (S4).
Ω_I = O(A)
Ω_M = P(A)
P > Q := ⋅ (P > Q)
I M M
This gives us
1) Modal Logic for children
2) Gödels embedding of intuitionistic logic into S4,
for children
But we won't go that way now!!!
\end{verbatim}
}
\newpage
5.8. Some order topologies are ZHAs
{\myttchars
\footnotesize
\begin{verbatim}
For example,
1
1 1
1 1
0
1 1
1 1
0 0
1 1 0 0 1
/ \ 1 1 1 1
v v 0 0 0
2 3 > 1 0 0 0 0 1
  1 0 1 1 0 1
v v 0 0
4 5 0 0 0 0
1 0 0 1
0
0 0
0 0
\end{verbatim}
}
\newpage
5.9. The generators of a ZHA
{\myttchars
\footnotesize
\begin{verbatim}
Let (A,BM(A)) be a ZDAG, and
let (H,WM(H)) be a ZHA such that (H,WM(H)^*)≃(O(A),⊆).
Let G=Gen(H)  the ``generators'' of H  be the points of H
with exactly one arrow going into them (in the ZDAG).
Then Gen(H) is /exactly/ the image of $A$ in $H$ by the inclusion '↓'!
Magic! =)
1
1 1
1 1
0
1 1
1 1
0 0
1 1 0 0 1
/ \ 1 1 1 1
v v 0 0 0
2 3 > 1 0 0 0 0 1
  1 0 1 1 0 1
v v 0 0
4 5 0 0 0 0
1 0 0 1
0
0 0
0 0
Can we recover $(A,BM(A))$ from $G⊆H$?
Well, sort of.
\end{verbatim}
}
\newpage
5.10. The generators of a ZHA (2)
{\myttchars
\footnotesize
\begin{verbatim}
Let (A,BM(A)) be a ZDAG, and
let (H,WM(H)) be a ZHA such that (H,WM(H)^*)≃(\Opens(A),⊆).
Let G=Gen(H).
Can we recover $(A,BM(A))$ from $G⊆H$?
Almost  we can make G inherit an order from H...
Important: we need to look at H as a ZPoset, not as a ZDAGs,
and we need to invert the direction of the arrows.
\end{verbatim}
}
\newpage
5.11. The generators of a ZHA (3)
{\myttchars
\footnotesize
\begin{verbatim}
Look: (A,BM(A)^*) ≃ (G,R^*).
33 .. 32 13
32 23 32 ..  \ 
32 13 22 13 .. 13 v v v
20 12 21 12 .. 12 20 12
10 01 20 11 20 ..  / 
10 01 10 01 v v v
00 .. 10 01
(A,BM(A)) (H,WM(H)) (G,R)
≃(\Opens(A),⊆)
Now let's divide the generators in G⊆H into two classes
according to the direction of the (only) arrow
that points to them. In the example above we have
Gennw(H)={10,20,32} and Genne(H)={01,12,13}.
The order on G is the columns 32>20>10 and 13>12>01,
plus some intercolumn arrows.
\end{verbatim}
}
\newpage
5.12. The generators of a ZHA (4)
{\myttchars
\footnotesize
\begin{verbatim}
What happens when we "omit the other digit",
and write 1_, 2_, 3_... for the leftwall generators
and _1, _2, _3... for the rightwall generators?
33 .. 3_ _3
3_ 23 3_ ..  \ 
3_ _3 22 _3 .. _3 v v v
2_ _2 21 _2 .. _2 2_ _2
1_ _1 2_ 11 2_ ..  / 
1_ _1 1_ _1 v v v
00 .. 1_ _1
(A,BM(A)) (H,WM(H)) (G,R)
≃(\Opens(A),⊆)
It becomes easier to create homework problems!!!
We can specify the twocolumn graphs as
(height of the left column,
height of the right column,
interwall arrows from left to right,
interwall arrows from right to right)
The ZHA above is generated by the twocolumn graph
(3, 3, {3>2}, {2>1}).
\end{verbatim}
}
\newpage
5.13. The generators of a ZHA: homework ideas
{\myttchars
\footnotesize
\begin{verbatim}
1) Draw the ZHA correponding to each of the 2column
graphs specified below:
(3, 4, {}, {})
(3, 4, {}, {_4>_4})
(3, 4, {}, {_4>_3})
(3, 4, {}, {_4>_2})
(3, 4, {}, {_3>_2})
(3, 4, {}, {_4>_3, _3>_2})
(3, 4, {_2,_1}, {})
(3, 4, {_2,_1}, {_4>_3})
(3, 4, {_2,_1}, {_4>_3, _3>_2})
2) Draw the ZHA etc etc for these string specs:
"123RRLL2L321"
(...)
3) Find the 2column graph and the string spec that
generate each of the ZHAs below:
(...)
4) For each of the ZPosets below draw its topology as a graph.
o o
o o o o o o o o o
o o o o
(They are not going to be ZDAGish or ZPosetish,
because when we have three independent points the
topology contains a cube)
\end{verbatim}
}
%
% 
\newpage
6. Modalities for children
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
\newpage
6.1. The axioms
{\myttchars
\footnotesize
\begin{verbatim}
A modality on Omega is an operation J: Ω > Ω obeying these axioms:
M1: P <= J(P)
M2: J(J(P)) <= J(P)
M3: J(P&Q) = J(P)&J(Q)
What do these axioms _mean_ (e.g., geometrically, on ZDAGs)?
Equivalent elements form diamondshaped regions
Stable elements form a subZDAG
To understand and prove this we start with _consequences_ of M1, M2, M3
What familiar operations obey these axioms?
I(P) := P (the "identity" modality)
T(P) := ⊤ (the "true" modality)
O_A(P) := A or P (the "or" modality)
E_A(P) := A > P (the "implication" (exponential) modality)
B_A(P) := (P > A) > A ("boolean quotient"  general case)
B(P) := B_⊥(P) ("boolean quotient"  particular case))
= (P > ⊥) > ⊥
= ¬¬P
What other operations obey these axioms?
Modalities on ZDAGs correspond to dividing by diagonal lines
\end{verbatim}
}
\newpage
6.2. Fourman/Scott1979: ``Sheaves and Logic''
This is an excerpt  pages 329331  from M.P. Fourman and D.S.
Scott's ``Sheaves and Logic'', that was published in SLNM0753
(``Applications of Sheaves: Proceedings of the Research Symposium on
Applications of Sheaf Theory to Logic, Algebra and Analysis  Durham,
july 921, 1977'').
\bsk
\BF{2.18. ELEMENTARY JOPERATORS.} In these examples $Ω$ is a given cHa.
(i) {\sl The closed quotient.} The operator is defined by
%
$$J_a p = a ∨ p.$$
%
This is obviously a Joperator, and the congruence relation is:
%
$$a∨p = a∨q.$$
%
The set of fixed points (quotient lattice) is:
%
$$\setofst{p ∈ Ω}{a≤p}.$$
Classically speaking in the spatial case where $Ω=\Opens(X)$, the
quotient corresponds to the topology on the {\sl closed} subspace
complementary to the open set $a$.
This quotient makes the element $a$ ``false'' and is the least such.
\newpage
6.3. Fourman/Scott1979 (2)
(ii) {\sl The open quotient.} The operator is defined by:
%
% (findslnm0753page (+ 14 330))
%
$$J^a p = a→p.$$
%
The congruence relation is:
%
$$a∧p = a∧q \qquad \text{(equivalently, $a ≤ p↔q$)}.$$
%
The set of fixed points is thus isomorphic to
%
$$Ω_a = \setofst{p∈Ω}{p≤a}.$$
Intuitionistically speaking in the spatial case, this quotient
corresponds to the topology on the open subspace $a$. This quotient
makes $a$ ``true'' and is the leat such.
\newpage
6.4. Fourman/Scott1979 (3)
(iii) {\sl The Boolean quotient}. The operator is defined by:
%
$$B_a p = (p→a)→a.$$
%
The congruence relation is:
%
$$p→a = q→ a.$$
%
The set of fixed points is
%
$$\setofst{p∈Ω}{(p→a)→a≤p}.$$
In case $a=⊥$, this is the wellknown $¬¬$quotient giving the
(complete) Boolean algebra of ``stable'' elements. For arbitrary $a$,
we could first form $Ω/J_a$ and follow this by the $¬¬$quotient to
obtain $Ω/B_a$. (In general, if $J≤K$, then $Ω/K$ is a quotient of
$Ω/J$.)
We remark that in general $Ω/J$ is a cBa iff $J=B_{J_⊥}$. Further, is
$Ω$ is already Boolean, then {\sl every} Joperator on $Ω$ is of the
form $B_a = J_a$.
\newpage
6.5. Fourman/Scott1979 (4)
(iv) {\sl The forcing quotient}. The operator is a combination of
previous ones:
%
$$(J_a∧J^b)p = (a∨b)∧(b→p).$$
%
The congruence relation is a conjunction:
%
$$a∨b=a∨q \quad \text{and} \quad b∧p=b∧q.$$
%
The set of fixed points is:
%
$$\setofst{p∈Ω}{b→p≤ a→p}.$$
The point of the quotient is that it provides the {\sl least}
Joperator such that $Ja≤Jb$; that is, we take the least quotient that
``forces'' $a→b$ to be true. If we want to force a sequence of
statements $a_i→b_i$, for $i Ω obeying these axioms:
M1: P ≤ J(P)
M2: J(J(P)) ≤ J(P)
M3: J(P∧Q) = J(P)∧J(Q)
M1 means that applying J once moves up,
M2+M1 mean that applying J a second time does not move further up.
(What is M3?)
Let's say that P is _stable_ when P = J(P),
and that P and Q are _equivalent_ when J(P) = J(Q).
Formally:
St(P) := (P = J(P)) (returns true or false)
Sts := {P in Om  St(P) (returns a set)
P==Q := (J(P) = J(Q)) (returns true or false)
E(P) := {Q in Om  P==Q) (returns a set)
Es := {E(P)  P in Om} (returns a partition)
\end{verbatim}
}
\newpage
6.11. Stable and equivalent truthvalues (2)
{\myttchars
\footnotesize
\begin{verbatim}
Let's look at an example.
When J(P) := B(P) = not not P, we have
33 33 ..
32 23 33 33 .. ..
22 13 33 33 .. ..
Om == 12 03 (P>J(P)) == 33 03 Sts == .. 03
11 02 33 03 .. ..
10 01 10 03 10 ..
00 00 10
and the equivalence classes are:
.. .. .. 33
.. .. .. .. .. .. 32 23
.. .. .. .. .. .. 22 13
.. .. .. .. .. 03 12 ..
.. .. .. .. .. 02 11 ..
.. .. 10 .. .. 01 .. ..
00 .. .. ..
i.e., Es = {{00}, {10}, {01,02,03}, {11,12,13,22,23,32,33}}
\end{verbatim}
}
\newpage
6.12. What is axiom M3? Introduction
{\myttchars
\footnotesize
\begin{verbatim}
Remember: a modality on Ω is an operation J: Ω > Ω obeying these axioms:
M1: P ≤ J(P)
M2: J(J(P)) ≤ J(P)
M3: J(P∧Q) = J(P)∧J(Q)
M3 implies _monotonicity_, which is:
Mo: P ≤ Q implies J(P) ≤ J(Q)
but this is not obvious. Let's prove that in several steps.
..
.. ..
.. .. ..
.. .. .. ..
.. 64 .. 46 .. J(P) J(Q)
.. .. .. .. .. .. ^ ^
.. .. .. 44 .. .. .. \ /
.. .. .. .. .. .. .. .. J(P)&J(Q)
.. .. .. .. .. .. ..
.. .. .. .. .. ..
.. 31 .. 13 .. P Q
.. .. .. .. ^ ^
.. 11 .. \ /
.. .. P&Q
..
Let P=31, Q=13, J(P)=64, J(Q)=46.
Then P∧Q=11 and J(P)∧J(Q)=44, but without M3 we wouldn't know
where is J(P∧Q)=J(11)... it could be anything above 11.
M3 tells us that J(11)=J(P∧Q)=J(P)∧J(Q)=64∧46=44,
which is below J(P)=64 and J(Q)=46...
(Actually J(11) is the topmost value below J(P) and J(Q).)
\end{verbatim}
}
\newpage
6.13. What is axiom M3? Monotonicity for products
{\myttchars
\footnotesize
\begin{verbatim}
M3 implies that J(P&Q)=J(P)&J(Q),
so let's rewrite the "J(P)&J(Q)" as "J(P&Q)" in our diagram.
M3 implies that when we apply J to all elements of a
product diagram P < P&Q > Q, we do have arrows
in its image J(P) < J(P&Q) > J(Q).
Formally, this is
Mp: J(P&Q) <= J(P), and J(P&Q) <= J(Q)
("monotonicy for products")
..
.. ..
.. .. ..
.. .. .. ..
.. 64 .. 46 .. J(P) J(Q)
.. .. .. .. .. .. ^ ^
.. .. .. 44 .. .. .. \ /
.. .. .. .. .. .. .. .. J(P&Q)
.. .. .. .. .. .. ..
.. .. .. .. .. ..
.. 31 .. 13 .. P Q
.. .. .. .. ^ ^
.. 11 .. \ /
.. .. P&Q
..
\end{verbatim}
}
\newpage
6.14. What is axiom M3? Monotonicity
{\myttchars
\footnotesize
\begin{verbatim}
When P <= Q we have P = P&Q,
and a part of the previous diagram collapses..
If P=P&Q=11, Q=13, J(P)=J(P&Q)=44, J(Q)=46,
and we have:
..
.. ..
.. .. ..
.. .. .. ..
.. .. .. 46 .. J(Q)
.. .. .. .. .. .. ^
.. .. .. 44 .. .. .. /
.. .. .. .. .. .. .. .. J(P) = J(P&Q)
.. .. .. .. .. .. ..
.. .. .. .. .. ..
.. .. .. 13 .. Q
.. .. .. .. ^
.. 11 .. /
.. .. P = P&Q
..
\end{verbatim}
}
\newpage
6.15. What is axiom M3? Monotonicity (2)
{\myttchars
\footnotesize
\begin{verbatim}
Formally:
M3 
J(P&Q)=J(P)&J(Q) J(P)&P(Q)<=J(Q)
Mp := 
J(P&Q)<=J(Q) J(P&Q)<=J(Q)
P<=Q
=====
P=P&Q
 Mp
P<=Q J(P)=J(P&Q) J(P&Q)<=J(Q)
Mo := 
J(P)<=J(Q) J(P)<=J(Q)
\end{verbatim}
}
\newpage
6.16. The sandwich lemma
{\myttchars
\footnotesize
\begin{verbatim}
From
M1: P <= J(P)
M2: J(J(P)) <= J(P)
M3: J(P&Q) = J(P)&J(Q)
We can prove that if P <= Q <= J(P) then P and Q (and J(P)) are equivalent.
More formally ("sandwich lemma"):
SL: If P <= Q <= J(P) then P==Q.
Proof:
Q<=J(P)
Mo M2
P<=Q J(Q)<=J(J(P)) J(J(P))<=J(P)
Mo 
P<=Q Q<=J(P) J(P)<=J(Q) J(Q)<=J(P)
SL := 
J(P)=J(Q) J(P)=J(Q)
\end{verbatim}
}
\newpage
6.17. The sandwich lemma: equivalence classes don't have holes
{\myttchars
\footnotesize
\begin{verbatim}
Sandwich lemma:
SL: If P<=Q<=J(P) then P==Q.
What does that mean? Let's look again at our example.
This was an equivalence class,
33
32 23
22 13
12 ..
11 ..
.. ..
..
and J(12) = 33.
Making P=12 and Q=22 in the sandwich lemma,
we have P <= Q <= J(P), i.e., 12 <= 22 <= 33,
and, by C1, P==Q, i.e., 12==22, i.e., J(12)=J(22)=33.
The sandwich lemma is very powerful.
If we know that J(11)=33, then,
using it for all elements between 11 and 33,
we get that J(12)=J(13)=J(22)=J(23)=J(32)=33.
The sandwich lemma says that equivalence classes do not have "holes".
\end{verbatim}
}
\newpage
6.18. Equivalence classes: the bottom element lemma
{\myttchars
\footnotesize
\begin{verbatim}
The axiom
M3: J(P&Q) = J(P)&J(Q)
can be extended to P&Q&R, P&Q&R&S, etc. Look:
J(P&Q&R) = J(P&(Q&R))
= J(P)&J(Q&R)
= J(P)&J(Q)&J(R)
What happens when we take the "&" of all the truthvalues in an
equivalence class, and then we apply J to it?
Let's look at an example.
Let E := {11,12,13,22,23,32,33}
and &E := 11&12&13&22&23&32&33 (= 11).
33
32 23
22 13
12 ..
11 ..
.. ..
..
J(&E) = J(11 & 12 & 13 & 22 & 23 & 32 & 33)
= J(11) & J(12) & J(13) & J(22) & J(23) & J(32) & J(33)
= 33 & 33 & 33 & 33 & 33 & 33 & 33
= 33
\end{verbatim}
}
\newpage
6.19. Equivalence classes: the bottom element lemma (2)
{\myttchars
\footnotesize
\begin{verbatim}
Let E := {11,12,13,22,23,32,33}
and &E := 11&12&13&22&23&32&33 (= 11).
33
32 23
22 13
12 ..
11 ..
.. ..
..
J(&E) = J(11 & 12 & 13 & 22 & 23 & 32 & 33)
= J(11) & J(12) & J(13) & J(22) & J(23) & J(32) & J(33)
= 33 & 33 & 33 & 33 & 33 & 33 & 33
= 33
So, &E is ==equivalent to all other elements of E,
and &E is below all other elements of E...
That means that &E is the bottom element of that equivalence class,
and, by the sandwich lemma, the equivalence class is made of
all elements between &E and J(&E).
TYPESET THIS IN RED:
(That only works for _finite_ conjunctions, but we are working only
with finite cases...)
\end{verbatim}
}
\newpage
6.20. Equivalence classes: the bottom element lemma (3)
{\myttchars
\footnotesize
\begin{verbatim}
A small modification of that previous argument leads to this:
let E be any nonempty set of equivalent elements of Om.
Then the diamondshaped region
E' := D(&E,J(&E))
having &E as its bottom element and J(&E) as its top element
contains E, and all elements in E' are equivalent.
For example, in
33
32 23
22 13
12 ..
11 ..
.. ..
..
let E := {13, 22}; note that J(13)=J(22)=33, and that &E = 13&22 = 12.
Then
J(&E) = J(13 & 22)
= J(13) & J(22)
= 33 & 33
= 33...
\end{verbatim}
}
\newpage
6.21. Equivalence classes: the bottom element lemma (4)
{\myttchars
\footnotesize
\begin{verbatim}
let E := {13, 22}; note that J(13)=J(22)=33, and that &E = 13&22 = 12.
Then
J(&E) = J(13 & 22)
= J(13) & J(22)
= 33 & 33
= 33,
and all elements in the diamondshaped region
E' := D(&E,J(&E)) = D(12,33) = {12,13,22,23,32,33}
are equivalent; this new lemma takes any arbitrary nonempty set of
equivalent elements, and extends it to a diamondshaped set of
equivalent elements. In our example it extends E={13,22} to
E={12,13,22,23,32,33}, ie.,
.. 33
.. .. 32 23
22 13 22 13
.. .. to: 12 ..
.. .. .. ..
.. .. .. ..
.. ..
\end{verbatim}
}
\newpage
6.22. An operation dual to J
{\myttchars
\footnotesize
\begin{verbatim}
Let Eq(P) be the equivalence class of P.
Formally,
Eq(P) := {Q  J(P)=J(Q)}.
We know that J(P) returns the top element of Eq(P).
We saw, by the bottom element lemma, that each Eq(P) has a bottom element.
Let's denote that bottom element by B(P).
Formally, B(P) is the "&" of all elements of Eq(P).
(A finite conjunction because we are in the finite case!)
We have that Eq(P) is made of all elements between B(P) and J(P),
and nothing else...
We will return to this later.
\end{verbatim}
}
\newpage
6.23. How J interacts with $∧$
{\myttchars
\footnotesize
\begin{verbatim}
We know (proofs later!) that "&" is monotonous on both its inputs,
and that P<=J(P) and that J is monotonous, i.e.,
P<=P' Q<=Q' P<=Q
   
P&Q<=P'&Q P&Q<=P&Q' P<=J(P) J(P)<=J(Q)
These things mean that for any truthvalues P and Q we know some
relations between P&Q, J(P)&Q, ..., J(J(P)&J(Q)):
J(J(P)&J(Q))
^ ^ ^
/  \
/  \
J(J(P)&Q) J(P)&J(Q) J(P&J(Q))
^ ^ ^ ^ ^ ^
 \/ \/ 
 /\ /\ 
J(P)&Q J(P&Q) P&J(Q)
^ ^ ^
\  /
\  /
P&Q
But M3 tells us that J(P&Q)=J(P)&J(Q), so at least the two middle
vertices in the cubeshaped diagram above yield the same result...
Actually we can prove more: J(P&Q) is equal to all the four upper
truthvalues in the diagram above. Proofs:
J(J(P)&Q) = J(J(P))&J(Q) = J(P)&J(Q)
J(P&J(Q)) = J(P)&J(J(Q)) = J(P)&J(Q)
J(J(P)&J(Q)) = J(J(P))&J(J(Q)) = J(P)&J(Q)
\end{verbatim}
}
\newpage
6.24. How J interacts with $∨$
{\myttchars
\footnotesize
\begin{verbatim}
We know (proofs later!) that "∨" is monotonous on both its inputs,
and (again!) that P<=J(P) and that J is monotonous, i.e.,
P<=P' Q<=Q' P<=Q
   
PvQ<=P'vQ PvQ<=PvQ' P<=J(P) J(P)<=J(Q)
These things mean that for any truthvalues P and Q we know these
relations between PvQ, J(P)vQ, ..., J(J(P)&J(Q)):
J(J(P)∨J(Q))
^ ^ ^
/  \
/  \
J(J(P)∨Q) J(P)∨J(Q) J(P∨J(Q))
^ ^ ^ ^ ^ ^
 \/ \/ 
 /\ /\ 
J(P)∨Q J(P∨Q) P∨J(Q)
^ ^ ^
\  /
\  /
P∨Q
But we know more. We can prove that we have J(J(P)∨J(Q))<=J(P∨Q),
\end{verbatim}
}
\newpage
6.25. How J interacts with $∨$ (2)
{\myttchars
\footnotesize
\begin{verbatim}
But we know more. We can prove that we have J(J(P)∨J(Q))<=J(P∨Q),
 
P<=P∨Q Q<=P∨Q
Mo Mo
J(P)<=J(P∨Q) J(Q)<=J(P∨Q)

J(P)∨J(Q)<=J(P∨Q)
Mo M2
J(J(P)∨J(Q))<=J(J(P∨Q)) J(J(P∨Q))=J(P∨Q)

J(J(P)∨J(Q))<=J(P∨Q)
and this implies that all the 4 truthvalues in the upper lozenge are
equal...
\end{verbatim}
}
\newpage
6.26. How ZHAs are cut into equivalence classes
{\myttchars
\footnotesize
\begin{verbatim}
We saw that the equivalence classes of J are diamondshaped regions,
but we don't know yet how which ways of splitting a ZHA into
diamondshaped regions correspond to "J"s that obey M1, M2 and M3...
For example, does this correspond a valid J?
/\
55 / \
55 55 / /\
55 55 35 / / \
55 55 35 35 /\ /\ \
51 55 33 35 35 /\ \/ \ \ (*)
50 51 33 33 35 35 \ \ \ /\ /
50 51 33 13 35 \ \/\/ \/
50 21 13 13 \/ / /
21 13 13 \/ /
13 13 \ /
13 \/
It turns out that no.
Look at the frontiers between equivalence classes.
We will see two separate arguments.
\end{verbatim}
}
\newpage
6.27. How ZHAs are cut into equivalence classes (2)
{\myttchars
\footnotesize
\begin{verbatim}
The first, based on the fact that J(Q)=J(R) implies J(P&Q)=J(P&R),
shows that borders that make lambda and mirrored lambdas are not
possible,
/\ /\ /\ /\
/\ \ / /\ /\/\ /\/\
\/\/ \/\/ \/ / \ \/
\/ \/ \/ \/
(lambda) (mirrored lambda) (y) (mirrored y)
and a second one, based on the fact that St(P) and St(Q) implies
St(P&Q), that shows that borders that make "y"s and mirrored "y"s are
not possible.
\end{verbatim}
}
\newpage
6.28. Borders like lambdas are not possible
{\myttchars
\footnotesize
\begin{verbatim}
Lemma: if Q==R then P&Q==P&R ("`&' preserves equivalence").
Proof:
J(Q)=J(R)

J(Q)=J(R) J(P)&J(Q)=J(P)&J(R)
&pe := ===================
J(P&Q)=J(P&R) J(P&Q)=J(P&R)
Diagrammatically, we can use "P&" to "move southwest", as here:
P Q 52 34
\ / \ \ / \
P&Q R 32 14
\ / \ /
Q&R 12
In the diagram (*) we have J(34)=J(14)=35, but J(32)=33 and J(12)=13;
if we take P=52, Q=34, R=14, the lemma &pe would imply that J(32)=J(12),
which does not hold. Diagrammatically, the lemma "&pe" implies that
the lines marking the frontiers between different equivalence classes
cannot make either a "lambda" or its mirror image...
/\ /\
/\ \ / /\
\/\/ \/\/
\/ \/
(lambda) (mirrored lambda)
\end{verbatim}
}
\newpage
6.29. Borders like "y"s are not possible
{\myttchars
\footnotesize
\begin{verbatim}
Lemma: St(P) and St(Q) implies St(P&Q) ("& preserves stability")
Proof:
P=J(P) Q=J(Q)
============== M3
P=J(P) Q=J(Q) P&Q=J(P)&J(Q) J(P)&J(Q)=J(P&Q)
&ps := 
P&Q=J(P&Q) P&Q=J(P&Q)
We start with the case in which our ZHA is a lozenge.
We know that equivalence classes are diamondshaped regions.
When the ZHA is a lozenge, the equivalence classes are lozenges too.
Let's look at this example. Here are a ZHA H and a function J: H > H
on it; J obeys M1 and M2  remember the geometrical meaning of M1 and
M2!  and we want to show that that J cannot obey M3 too.
33 33 /\
32 23 33 33 / \
31 22 13 31 33 13 /\ /\
30 21 12 03 31 31 13 13 / \/ \
20 11 02 31 13 13 \ / /
10 01 13 13 \/ /
00 13 \ /
\/
\end{verbatim}
}
\newpage
6.30. Borders like "y"s are not possible (2)
{\myttchars
\footnotesize
\begin{verbatim}
Let's look at this example. Here are a ZHA H and a function J: H > H
on it; J obeys M1 and M2  remember the geometrical meaning of M1 and
M2!  and we want to show that that J cannot obey M3 too.
33 33 /\
32 23 33 33 / \
31 22 13 31 33 13 /\ /\
30 21 12 03 31 31 13 13 / \/ \
20 11 02 31 13 13 \ / /
10 01 13 13 \/ /
00 13 \ /
\/
Let P' be the truthvalue at the left of the vertex of the "y",
and Q' be the truthvalue at the right of the vertex of the "y".
In this example, P'=12 and Q'=21.
Let P be J(P'),
and Q be J(Q').
The line from P' to P points exactly northwest, and
the line from Q' to Q points exactly northeast;
because of this, P'&Q' is equal to P&Q, and P'&Q' is the truthvalue
just below the vertex of the "y".
Let R be P&Q = P'&Q'.
P and Q are stable, and so, by "&ps", R=P&Q=P'&Q' must be stable too.
But, by hypothesis, the top element of the equivalence class of R is Q,
so R is not stable, and thus the J in our example cannot obey M3.
\end{verbatim}
}
\newpage
6.31. Stable elements and diamondshaped regions
{\myttchars
\footnotesize
\begin{verbatim}
Two slogans:
Equivalent elements form diamondshaped regions
Stable elements form a subZHA
Definitions:
P==Q := P* <> Q* (P==Q is true or false)
St(P) := P == P* (St(P) is true or false)
Sts := {P in Omega  P == P*} (Sts is a set)
Di(P) := {Q in Omega  P == Q} (Di(P) is a set)
Dis := {Di(P)  P in Omega} (Dis is a partition of Omega)
33 33 33
32 23 33 33 33 33
22 13 33 13 33 33
12 03 13 13 33 03
11 02 11 13 33 03
10 01 11 11 10 03
00 11 00
Pid PJ Pnotnot
\end{verbatim}
}
\newpage
6.32. Modalities induce Zquotients
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
\newpage
6.33. Zquotients are modalities
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
\newpage
6.34. Modalities correspond to morphisms of HAs
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
%
% 
\newpage
7. Sheaves
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
\newpage
7.1. Restriction, compatibility, glueing
{\myttchars
\footnotesize
\begin{verbatim}
2>20 3>300
3>300 4>4000
{2,3,4} {3,4,5} 4>4000 5>50
^ ^  
\ / \ /
\ / v v
{3,4} 3>300
4>4000
The domain of 20,300,4000 is {2,3,4}.
The domain of 300,4000,50 is {3,4,5}.
The common domain of 20,300,4000 and 300,4000,50
is {2,3,4}∩{3,4,5} = {3,4}.
The restriction of 20,300,4000 to {3,4} is 300,4000.
The restriction of 300,4000,50 to {3,4} is 300,4000. ← same!
20,300,4000 and 300,4000,50 are compatible.
We are moving down, toward smaller sets, using intersections.
Is there a way to move up  to unions?
Yes, by glueing.
But we have to see some other things first.
\end{verbatim}
}
\newpage
7.2. Compatible families
{\myttchars
\footnotesize
\begin{verbatim}
Let's write the domain of a function in its subscript.
Suppose that f_A compat g_B compat h_C compat k_D,
and that E⊆A∩B∩C∩D.
Then the restrictions of f_A, g_B, h_C, k_D to E all coincide!!!
The diagram below should make
A B C D f_A g_B h_C k_D
A∩B B∩C C∩D ?_A∩B ?_B∩C ?_C∩D
^ ^ ^ \  /
\  / \  /
\  / \  /
\  / v v v
E ?_E
?_A∩B = f_AA∩B = g_BA∩B f_AE = f_AA∩BE = g_BA∩BE = g_BE
?_B∩C = g_BB∩C = h_CB∩C g_BE = g_BB∩CE = h_CB∩CE = h_CE
?_C∩D = h_CC∩D = k_DC∩D h_CE = h_CC∩DE = k_DC∩DE = k_DE
We say that the family {f_A, g_B, h_C, k_D} is
/pairwisecompatible/ when any two elements in it
are compatible  f_A compat g_B compat h_C compat k_D
plus f_A compat h_C, g_B compat k_D, f_A compat k_D.
\end{verbatim}
}
\newpage
7.3. Compatible families: naming
{\myttchars
\footnotesize
\begin{verbatim}
A notation trick...
Compare:
f_A g_B h_C k_D f_A f_B f_C f_D
?_A∩B ?_B∩C ?_C∩D f_A∩B f_B∩C f_C∩D
\  / \  /
\  / \  /
\  / \  /
v v v v v v
?_E ?_E
?_A∩B := f_AA∩B = g_BA∩B f_A∩B := f_AA∩B = f_BA∩B
?_B∩C := g_BB∩C = h_CB∩C f_B∩C := f_BB∩C = f_CB∩C
?_C∩D := h_CC∩D = k_DC∩D f_C∩D := f_CC∩D = f_DC∩D
?_E := f_AE = g_BE = h_CE = k_DE f_E := f_AE = f_BE = f_CE = f_DE
A compatible family {f_A,f_B,f_C,f_D}
induces a bigger family having one f_U
for each empty set U that is smaller
than _some_ of the sets A, B, C, D...
And these `f_U's are welldefined,
all ways of defining them from the
original {f_A,f_B,f_C,f_D} by restriction
yield the same result.
\end{verbatim}
}
\newpage
7.4. Saturating downwards
{\myttchars
\footnotesize
\begin{verbatim}
Remember that we had and operation `↓' on DAGs
that regarded each `1' as black paint that was too liquid
and that would flow down, painting the `0's below it
in black...
0 0
0 0 0 0
0 0 0 0 0 0
↓ 0 0 1 0 = 0 0 1 0
1 0 0 1 1 1
0 0 1 1
0 1
Let's use the same symbol, ↓, for an operation that
starts with a compatible family, say {f_A,f_B,f_C,f_D},
and produces a (possible much bigger!) family made of
all the restrictions of f_A, f_B, f_C, f_D.
Also:
dom f_A = A
dom {f_A, f_B, f_C, f_D} = {A, B, C, D}
\end{verbatim}
}
\newpage
7.5. An operation for all kinds of restrictions
{\myttchars
\footnotesize
\begin{verbatim}
This operation simplifies things magically.
(I learned it from Simmons's paper on Omega Sets).
U · V = U∧V
U · g_V = (error)
f_U · V = f_UU∧V
f_U · g_V = f_UU∧V
So: U · V = V · U is true always
f_U · g_V = g_V · f_U is true iff f_U and g_V
are compatible
Now: U · {V, W} = {U·V, U·W} = {U∩V, U∩W}
U · O(X) = all (open) subsets of U = ↓{U}
f_U · O(X) = all restrictions of f_U
{f_U,g_V,h_W}·O(X) = all restrictions of f_U, g_V, h_W,
which yields somthing nice when
f_U, g_V, h_W are compatible,
and something messy when they are not.
We can define ↓A := A·O(X), and this works for all interesting cases!
\end{verbatim}
}
\newpage
7.6. Sheaves of functions
{\myttchars
\footnotesize
\begin{verbatim}
For each open set U⊆R,
C∞(U,R) is the set of smooth functions from U to R.
Warning: here (a,b) sometimes mean an open interval, not a pair!
(4,6) > C∞((4,6),R) f_{(4,6)}:(4,6)>R
^  
incl restr restr
 v v
(5,6) > C∞((5,6),R) f_{(4,5)}:(4,5)>R
O(R)     > Set
We can regard C∞(,R) as a (contravariant) functor.
Hey:
C∞(,R): O(R)^op > Set
is a sheaf because compatible families can be glued
in a unique way!
\end{verbatim}
}
\newpage
7.7. Limited functions are not sheaves
{\myttchars
\footnotesize
\begin{verbatim}
Let L(,R): O(R)^op > Set
be the contravariant functor that returns
the _limited_ functions on open sets.
Certain families of limited functions
are compatible, but their glueing is not limited,
so it "does not exist".
L(,R) violates one of the conditions for being a sheaf:
that glueings must exist.
The other condition is that glueings must be unique.
To violate that we need to go away from the realm of
the presheaves of functions, and use arbitrary functors.
\end{verbatim}
}
\newpage
7.8. The evil presheaf
{\myttchars
\footnotesize
\begin{verbatim}
E_1 {0,1}
/ \ / \ 0>3
/ \ / \ 1>4
v v v v
E = E_2 E_3 = {2} {3,4}
\ / \ /
\ / \ /
v v v v
E_4 {5}
 
  5>6
v v
E_5 {6,7}
The compatible family {2,3} has two different glueings: 0 and 1.
The compatible family {2,4} can't be glued.
The compatible family {} has two different glueings: 6 and 7.
\end{verbatim}
}
\newpage
7.9. Sheaves in $Set^{K^\op}$
{\myttchars
\footnotesize
\begin{verbatim}
...with the "obvious" notion of sheafness, are the ones that obey:
A_1 = A_2 ×_A_4 A_3
/ \
/ \
v v
A = A_2 A_3
\ /
\ /
v v
A_4


v
A_5 = 1
So: A_1 and A_5 carry only redundant information,
and sheaves on Set^K^op are equivalent to objects
of Set^V !!!
\end{verbatim}
}
%
% 
\newpage
8. Toposes
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
\newpage
8.1. A formal definition for children
{\myttchars
\footnotesize
\begin{verbatim}
A _pullback_ for the diagram A>C<B is an object D,
together with morphisms A<D>B, that behaves like this:
{(a,b)∈A×B  f(a)=g(b)} > B
 
  g
v f v
A > C
A _subobject_ of A is a diagram A' > A
that behaves like an inclusion.
Notation: A' >> A means that the arrow is a monic.
A _subobject classifier_ is an object Ω
that behaves like {0,1},
plus an arrow 1 >> Ω (called "true")
that makes the curved arrow at the left have an inverse.
{a∈A  P(a)} > 1 A' > 1
v v v v
 <\   \ 
   i  
v  P v v v v
A > Ω A > Ω
λa.a∈i(A)
A _topos_ is a CCC that has pullbacks and a subobject classifier
(> extra structure and extra properties).
\end{verbatim}
}
\newpage
8.2. The main theorems
{\myttchars
\footnotesize
\begin{verbatim}
1) CCCs are exactly the categories in which we can interpret
(the system λ1 os simplytyped) λcalculus
2) HAs are exactly the categories in which we can interpret
(intuitionistic) propositional logic
3) Every HA is a CCC
4) Set is a CCC
5) Every ZHA is a HA
6) Every (O(X),⊆) is a HA
7) Every ZTopos is a CCC < ZToposes are in the next slide
8) Every ZTopos is a topos
9) Topos are exactly the categories in which we can interpret
(the system IST of intuitionistic, typed) set theory
There are many other ways of constructing toposes,
but I believe that ZToposes are the perfect tool for
doing Topos Theory for Children 
\end{verbatim}
}
\newpage
8.3. ZToposes
{\myttchars
\footnotesize
\begin{verbatim}
When we did logic on subsets of a ZHA  K, say 
each truthvalue P was in fact a Kshaped diagram, e.g.,
0 P_1
P = 1 0 = P_2 P_3
1 P_4
1 P_5
We are going to do the same with sets.
An object A of Set^(K^op) is a Kshaped diagram of sets
and maps.
A_1
/ \
v v
A = A_2 A_3
\ /
v v
A_4

v
A_5
A _notion of sheafness_ on Set^K^op is something that tells us
which objects of Set^K^op "are" sheaves.
The "sheaves" of Set^K^op form a subcategory of Set^K^op
which is also a topos. < Magic!!!
In the case of ZToposes these subcategories of sheaves are
not something weird and new  we have this:
for any ZDAG D,
for notion of sheafness on Set^D^op,
there is a ZDAG D'
such that Sh(Set^D^op)≅Set^D'^op.
\end{verbatim}
}
\newpage
8.4. Quotes from the Elephant
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
%
% 
\newpage
8.5. A translation diagram for Heyting Algebras
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
\newpage
8.6. A translation diagram for BiCCCs
{\myttchars
\footnotesize
\begin{verbatim}
\end{verbatim}
}
\newpage
8.7. Do our operations $∧,∨,→$ on ZHAs really deserve their names?
{\myttchars
\footnotesize
\begin{verbatim}
Many slides ago we talked about Alice's logic and Bob's logic...
Alice's logic was this,
(copy)
and we were sort of leaving implicit that any other similar tuple, like
(random logic)
would be ``a logic'' in Alice's bookworld...
But things are not like that!!!
I just explained a version for children first!!!!!!!!
What's missing:
structure properties (new!)
/\ /\
a logic = (Om,and,or,...;eq_T,eq...)
where eq_T, ... are the properties (equations) that T,...
have to obey to ``deserver their names'', to ``behave as expected''.
They are:
(...)
\end{verbatim}
}
\newpage
8.8. The properties of $⊥,⊤,∧,∨,→$: a diagram
{\myttchars
\footnotesize
\begin{verbatim}
[diagrammatic mnemonic]
[deduction rules (sequents with le)]
[forallish formulation]
We will return to this diagram /many/ times...
\end{verbatim}
}
\newpage
8.9. Definition 4.1.1
{\myttchars
\footnotesize
\begin{verbatim}
Definition 4.1.1 (a) Let \scrE and \scrF be toposes. A /geometric
morphism/ f:\scrF > \scrE consists of a pair of functors f_*:\scrF>\scrE
\end{verbatim}
}
%
% 
%
% 
%L print(); print(table.concat(sectionnames, "\n"))
\pu
\end{document}
%
% Local Variables:
% coding: utf8unix
% eeanchorformat: "«%s»"
% End: