% (find-angg       "LATEX/2014istanbul-a.tex")
% (find-xpdfpage "~/LATEX/2014istanbul-a.pdf")
% (defun o () (interactive) (find-angg "ORG/istanbul.org"))
% (defun e () (interactive) (find-LATEX "2015logicandcats.tex"))
% (defun l () (interactive) (find-LATEX "2015logicandcats.lua"))

Logic in O(LL)
Logic in G

* --- 0. Introduction: a view from the top
** What do we mean by "children"?
** The girl from Ipanema
# (find-LATEX "2014sfc-slides.tex")
# (find-xpdfpage "~/LATEX/2014sfc-slides.pdf" 4)
# (find-xpdfpage "~/LATEX/2014sfc-slides.pdf" 5)
# (find-LATEX "2014sfc-slides.tex" "CT-why-now")

** Generalization / specialization / the right example
** Generalization as lifting
** Never underestimate the power of the Sierpinski topos
** Warning for mathematicians
Warning for mathematicians:
We will see particular cases first,
and the general definitions later.
If you are wondering what the definition is
for something mentioned in slide n,
it will usually appear in slide n+1.

Sometimes we will explain things using very explicit examples.

Some people (mathematicians!) may find our examples too explicit, and
kindergarten-ish... if you get bored, try to guess what is going on at
the meta-level! One of the themes of this tutorial is a technique for
doing general cases in parallel with "archetypal" particular cases,
and doing parts of proofs in the one side (general or particular) and
parts in the other...

** Alice's Logic and Bob's Logic
** Structure, properties, layers, type theory
** Layers in logic(s): structure vs properties
** HALF Layers in CT
The base layer is constructions and diagrams.
The second layer is properties, equations, proving equalities and commutativities.
Working on the base layer is fun, and feels like play.
Working on the second layer is boring, and feels like heavy, grudging work.

One meaning for "Categories for children" is that
we will do the "play" part,
and leave the "work" part for (much) later.

* ---
* --- 1. Classical logic for children
** DONE Evaluation, and evaluation sequences
# (find-es "tex" "underbrace")

How do we evaluate an expression like 2*3+4*5
to obtain its value, 26?
This involves a series of steps...
There are several ways to represent these steps:

2*3+4*5        2*3+4*5 -> 2*3+20        2*3+4*5 = 2*3+20
\-/ \-/          |          |                   = 6+20
 6   20          v          v                   = 26
\-----/        6+4*5 ---> 6+20 --> 26
   26

The technical term for the arrows is "reductions".
Informally, they are "evaluation steps".

To evaluate is to reduce, and reduce, and reduce
until there is nothing more to do - until we reach
something that is "fully evaluated".

Note that two "evaluation sequences" starting
at the same expression may "diverge" at some point,
but they necessarily "converge" later - this is
what assures us that the result of evaluation is
unique.

(Technical terms: "confluence", "diamond lemma",
"normalization"...)

** DONE Algebra: tables
Some expressions, e.g. (a+b)(a-b) and a*a-b*b,
return equal values for all inputs...

We can /test/ whether (a+b)(a-b)=a*a-b*b
in a /finite number/ of cases using tables,

a  b   (a+b)(a-b)   a*a-b*b   (a+b)(a-b)=a*a-b*b
------------------------------------------------
0  0        0          0              T
2  3       -5         -5              T
10 1       99         99              T

but we cannot /know/ that (a+b)(a-b)=a*a-b*b
just by checking a finite number of table lines...
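The table-filling idea above can be sketched in Python; this only provides evidence for finitely many cases, not a proof:

```python
# Testing (a+b)(a-b) = a*a - b*b on the same cases as the table above.
def lhs(a, b):
    return (a + b) * (a - b)

def rhs(a, b):
    return a * a - b * b

for a, b in [(0, 0), (2, 3), (10, 1)]:
    print(a, b, lhs(a, b), rhs(a, b), lhs(a, b) == rhs(a, b))
```

No matter how many lines we add to the loop, this never becomes a proof that the identity holds for all inputs.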

** DONE Algebra: reductions etc

To understand why (a+b)(a-b)=a*a-b*b we need /algebra/,
which includes reductions like these, which use distributivity
always in the "expansion" direction, i.e., x(y+z)->xy+xz,

(a+b)(a-b) ---> (a+b)a-(a+b)b ---> (a+b)a-ab-bb
    |               |                   |
    |               v                   v
    |          aa+ba-(a+b)b ----> aa+ba-ab-bb
    |                                   :
    v                                   :
a(a-b)+b(a-b) ---> a(a-b)+ba-bb         :
    |                  |                :
    v                  v                v
aa-ab+b(a-b) ---> aa-ab+ba-bb - - -> aa-bb

/but then we don't get confluence/ -

We need to learn how to use commutativities, which let us see
that -ab+ba = -ab+ab = 0... and, of course, we need to learn
how to do "calculations with letters"...

We can teach evaluation to small children.
We can only teach algebra to older children.

The important thing:
evaluation comes first,
algebra comes later.

** Evaluating expressions with and, or, imp, not
** Tautologies: some and/or/imp/not expressions are always true
** A glimpse of a logic in which P<->¬¬P is not a tautology
** What is "a" logic? (First attempt)
** What is "a" logic? Names versus values
** Alice's logic vs Bob's logic, and how names help
** What properties do we expect from and/or/imp/not?
Structure
Properties
"To deserve a name"

* ---
* --- 2. Lambda-calculus
** DONE Introduction
Lambda-calculus, even for children, is a big topic...
We will see it as this series of subtopics:
* the usual way of defining (named) functions
* the meaning of ':' and '->' in f:A->B
* the meaning of '|->'
* evaluation, a.k.a., reduction
* lambda-notation
* beta-reduction
* the diamond lemma
* name clashes
* pairing and projection
* types
* contexts, pre-judgments and judgments
* unions of contexts
* type inference
* Set as model for lambda1

** DONE Defining named functions

The usual notation for defining functions is like this:

f: N -> R                      (*)
   n |-> sqrt(n)

name: domain -> codomain
   variable |-> expression

It creates _named_ functions.

To use this notation we have to fill up five slots, and use a ":", an
"->" and a "|->" in the right places.

After stating (*) we can "reduce" expressions with f, like this:

f(4+5) ---> sqrt(4+5)
  :            :
  :            :
  v            v
f(9)   ---> sqrt(9) ---> 3

2+f(4+5) ---> 2+sqrt(4+5)
   :              :
   :              :
   v              v
2+f(9)   ---> 2+sqrt(9) ---> 2+3 ---> 5

** DONE Defining unnamed functions

Now compare

name: domain -> codomain
   variable |-> expression         name = \variable.expression

g: N -> R
   a |-> a*a+4                     h = \a.a*a+4

g(2+3) ----------> g(5)      h(2+3) ---------------------> h(5)
  |                 |          |                            |
  |                 |          v                            v
  |                 |     (\a.a*a+4)(2+3) ---> (\a.a*a+4)(5)
  |                 |          |                            |
  v                 |          v                            |
(2+3)*(2+3)+4       |     (2+3)*(2+3)+4                     |
  |    \            |          |    \                       |
  |     v           |          |     v                      |
  |  (2+3)*5+4      |          |  (2+3)*5+4                 |
  |         \       |          |         \                  |
  v          v      v          v          v                 v
5*(2+3)+4 ------> 5*5+4      5*(2+3)+4 -----------------> 5*5+4

Note that the parentheses are important...
2+3*2+3+4 != (2+3)*(2+3)+4

Note also that in h = \a.a*a+4 we have used
neither the domain nor the codomain...
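The two styles can be sketched in Python, where `lambda` plays the role of `\`; no domain or codomain needs to be written in either style:

```python
# A named function, in the style "g: N -> R, a |-> a*a+4".
def g(a):
    return a * a + 4

# An unnamed function, in the style "h = \a.a*a+4".
h = lambda a: a * a + 4

print(g(2 + 3), h(2 + 3))  # 29 29
```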

** DONE Sets versus types: '->'

When we say

g: N  -> R
a |-> a*a+4

we mean that g in R^N, where R^N is the set of all functions from N to
R... if we define that

A -> B   :=   B^A

then

N -> R   :=   R^N

and

g: N -> R

means that:

g in N -> R

Let's define a precise meaning for ':' now.
We will have to do that in several steps.
In the beginning ':' will be the same as 'in',
then we will list the few allowed uses for ':',
and only at the end we will have a formal meaning for ':',
where usually ':' is weaker than 'in', but that allows
type-inference.

** DONE Sets versus types: pairs and application

Suppose that A={1,2}, B={3,4}, C={5,6,7}
(to make things very concrete!)
and that:

a:A
b:B
f:B->C
p:AxB

Now - i.e., below the four ':'-expressions above - we are in a context
where:

a is a variable "ranging over A", i.e., either a=1 or a=2,
b is a variable ranging over B, i.e., either b=3 or b=4,
f is a function from B to C, and, as a function can be represented by
its input-output pairs, f is either
{(3,5),(4,5)}, or
{(3,5),(4,6)}, or
{(3,5),(4,7)}, or
{(3,6),(4,5)}, or ..., or {(3,7),(4,7)},
p is a pair made of an a and a b (in that order!), so p is either
(1,3) or (1,4) or (2,3) or (2,4).

In this context,

f(b):C         (function application)
pi p:A         (first projection)
pi'p:B         (second projection)
(a,b):AxB      (the operation (_,_) forms pairs)

** DONE Sets versus types: pairs and application 2

Suppose that A={1,2}, B={3,4}, C={5,6,7} as before, and:

a:A          a = 2
b:B          b = 3
f:B->C       f = {(3,6),(4,7)}
p:AxB        p = (1,4)

Then (a,f(pi'p)):AxC - which means:
* we know the value of the expression (a,f(pi'p))
* its result is of type AxC, i.e., belongs to AxC

"We know the value of (a,f(pi'p))" means:
we know how to calculate it without syntax errors, type errors, or
stumbling into undefined variables.

We can "prove" that (a,f(pi'p)):AxC using the derivation below:

p:AxB                           (1,4)
------pi'                       -----pi'
pi'p:B     f:B->C                 4      {(3,6),(4,7)}
-----------------app            ----------------------app
a:A     f(pi'p):C               2              7
-----------------(,)            ------------------(,)
(a,f(pi'p)):AxC                      (2,7)
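The right-hand side of the derivation can be sketched in Python, representing the finite function f by its input-output pairs (a dict):

```python
# A={1,2}, B={3,4}, C={5,6,7}; concrete values as in the slide above.
a = 2                   # a : A
f = {3: 6, 4: 7}        # f : B -> C, as input-output pairs
p = (1, 4)              # p : A x B

pi  = lambda pair: pair[0]   # first projection
pi_ = lambda pair: pair[1]   # second projection (pi')

print((a, f[pi_(p)]))   # (2, 7)
```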

** DONE Judgments

If A, B, C are types
and we know the values of
the variables a, b, f, p,
with these types,

a:A
b:B
f:B->C
p:AxB

then we know

(a,f(pi'p)):AxC

i.e., we know the value of (a,f(pi'p))
and we know its type - AxC.

We can write this more compactly.
This is a _judgment_:

a:A, b:B, f:B->C, p:AxB |- (a,f(pi'p)):AxC

But what is a judgment, precisely?

** DONE What is a judgment, precisely?

This is a (valid!) judgment

a:A, b:B, f:B->C, p:AxB |- (a,f(pi'p)):AxC

in an (external) context that is just this: "A, B, and C are types".
From "A, B, and C are types" we can form many other types - for
example AxB, Bx((AxA)xA), B->C, (A->B)->(B->A),
(AxC)->(B->((CxA)x(A->B))), and so on.

The part

a:A, b:B, f:B->C, p:AxB

is the _context_ (of the judgment).
It is a list of _typings_ (of variables).
The context "declares" variables and their types.

On the right of the '|-' in the judgment we have

(a,f(pi'p)):AxC

This is the _conclusion_ (of the judgment).
It is a typing, but now of an expression (a "term")
that doesn't need to be just a variable.

The expression (a,f(pi'p)) was formed from a,b,f,p
using pairing, projections and function application,
just like we formed (AxC)->(B->((CxA)x(A->B)))
from A, B, C and x and ->.

** DONE Invalid judgments

This is an invalid judgment:

a:B, a:BxC, c:(A->B)xC |- (a,f(pi f)):C

It is syntactically right, but semantically wrong.
The variable a is defined twice (with different types!),
the variable f is not defined,
(a,f(pi f)) can't have the type C, etc...
However,
BxC and (A->B)xC are generated from A, B, C using x and ->,
and (a,f(pi f)) is generated from a and f using pairing,
the projection operations pi and pi', and function application.

** DONE Pre-judgments

This is a pre-judgment:

a:B, a:BxC, c:(A->B)xC |- (a,f(pi f)):C

Grammar for pre-judgments:
* lowercase letters are variables
* uppercase letters are types
* new types can be formed with x and ->
* variables are terms
* new terms can be formed with (,), pi, pi', and function application
* a _context_ is var:type, ..., var:type (0 or more typings)
* a _pre-judgment_ is context |- term:type

** DONE Pre-judgments and judgments (via derivation trees)

We just saw a grammar that generates all _pre-judgments_.
Some pre-judgments are "semantically right", and are _judgments_.
Some pre-judgments are "semantically wrong", and not judgments.
(An "invalid judgment" is a pre-judgment that is not a judgment).

The _judgments_ are the pre-judgments that can appear in derivation
trees like the one below at the right; note that the tree at the right
is essentially the one at the left - that we have seen before - with
contexts added, and with rules that construct more complex types from
more basic types and rules that create variables of given types; in
the tree at the right the only hypotheses are that A, B and C are
types.

The tree at the left:

p:AxB
------pi'
pi'p:B     f:B->C
-----------------app
a:A     f(pi'p):C
-----------------(,)
(a,f(pi'p)):AxC

The tree at the right:

         A type   B type
         ---------------x
            AxB type                  B type   C type
         ------------var              ---------------->
         p:AxB|-p:AxB                    B->C type
         ------------pi'              --------------var
A type   p:AxB|-pi'p:B                f:B->C|-f:B->C
------var  ------------------------------------------app
a:A|-a:A   p:AxB,f:B->C|-f(pi'p):C
--------------------------------------(,)
a:A,p:AxB,f:B->C|-(a,f(pi'p)):AxC

The tree at the right is a _derivation_ in system Lambda1.
Only pre-judgments that are judgments are derivable.

** TODO Union of contexts
** TODO Type inference
* ---
* --- 3. Intuitionistic logics for children
** DONE Positional notation

Let Lambda be {(1,1), (0,0), (2,0)} subset N^2, i.e., these points:

(1,1)
(0,0)     (2,0)

We can represent a function having Lambda as its domain
using a positional notation - for example, this one,

f: Lambda -> N
   (x,y) |-> { 3 if (x,y)=(1,1)
             { 4 if (x,y)=(0,0)
             { 5 if (x,y)=(2,0)

can be represented "positionally" as:

 3
4 5

Note that we have 8 functions from Lambda to {0,1},
namely:

000, 001, ..., 111

These things are going to be the truth-values in a certain logic.

In "standard logic" the set of truth-values is {0,1}.
In "Lambda-modal logic" the set of truth-values is
{000, 001, ..., 111}.

** DONE Logic on {0,1}

Let's call our set of truth-values Om.
In the usual logic Om = {0,1}, and
the operations and, or, not, imp, bij, T, false
are defined as functions:

and: Om x Om -> Om
or:  Om x Om -> Om
imp: Om x Om -> Om
bij: Om x Om -> Om
not: Om -> Om
T     in Om            (T:     Om^0 -> Om)
false in Om            (false: Om^0 -> Om)

We usually explain these to children using tables:

P  Q  P&Q  PvQ  P->Q  P<->Q     P  !P
---------------------------     -----
0  0   0    0    1     1        0   1
0  1   0    1    1     0        1   0
1  0   0    1    0     0
1  1   1    1    1     1

This is a "well-formed formula" (wff)
on and, or, not, imp, bij, P, Q:

!(PvQ) -> (!P)&(!Q)

If P is 0 and Q is 1, we can calculate the value of the wff above
doing this:

not (P or Q) -> (not P) and (not Q)
     0    1        0           1
    ------        -----       -----
       1            1           0
-----------      ------------------
     0                   0
-----------------------------------
                1

** DONE wffs as machines

We can also consider

not (P or Q) -> (not P) and (not Q)

as a machine that receives the values of P and Q,
and returns the value of not (P or Q) -> (not P) and (not Q).
We can represent its behavior in a table:

P  Q  not (P or Q) -> (not P) and (not Q)
-----------------------------------------
0  0               1
0  1               1
1  0               1
1  1               1

Note that it always returns 1 (true), for any inputs.
Expressions that always return true are called _tautologies_.
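The "machine plus table" idea can be sketched in Python as a brute-force tautology checker, which just fills up the whole truth table:

```python
# A minimal brute-force tautology checker, in the table-filling spirit
# of this section; a wff is any Python function from 0/1 inputs to 0/1.
from itertools import product

def imp(a, b):
    return int((not a) or b)

def is_tautology(wff, nvars=2):
    """Try all 2**nvars lines of the truth table."""
    return all(wff(*vals) for vals in product((0, 1), repeat=nvars))

# not (P or Q) -> (not P) and (not Q)
wff = lambda P, Q: imp(int(not (P or Q)), int(not P and not Q))
print(is_tautology(wff))                                # True
print(is_tautology(lambda P, Q: imp(P or Q, P and Q)))  # False: PvQ->P&Q
```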

** DONE Logic for children

When we explain this to children they learn
how to calculate the values of &v-><->TF-expressions
for given values of P and Q, and after that they learn
how to calculate the values of &v-><->TF-expressions
for _all_ values of P and Q...
With that they know that some expressions, like

(!!P) <-> P

are tautologies, and other ones, like PvQ->P&Q, are not.

Logic for them (them=children), at this point, is what they use
to calculate the truth-values of &v-><->TF-expressions,
just like they use arithmetic to calculate the results of things like:

(2*3)+(4*5)
 ---   ---
  6     20
-----------
    26

** DONE Logic, Arithmetic, tables

A tautology like (!!P)<->P is a &v-><->TF-expression that always
yields the same value - true - and we can test quickly
whether a &v-><->TF-expression is a tautology or not, as the
tables are not _too_ big.

An arithmetic expression like

((a+b)*(a-b))-((a*a)-(b*b))

always yields the same value - zero -, but to verify that
we would have to make an infinite number of tests, i.e.,
fill up all the positions in an infinite table.

Logic is more manageable than arithmetic
(because the tables are smaller)!

To prove that

((a+b)*(a-b))-((a*a)-(b*b))

is always zero we need _algebra_.

There is something similar to algebra for logic,
but we are postponing it - we want to start with just
the idea of filling up tables, and checking tautology-ness.

** DONE Modal logic on Lambda and on House: truth-values

We saw how to do logic when our truth-values are 0 and 1...
Let's now define four other logics:

LL-modal logic     LL-intuitionistic logic
HH-modal logic     HH-intuitionistic logic

We will see the modal ones first.
To make things pronounceable, LL is "lambda" and HH is "house".
Using single letters, LL-modal logic is also called L-modal logic, and so on.

In LL-modal logic our truth values are ways of
substituting 0s and 1s for the *s in LL...
In LL-modal logic the truth values are these:

(8 truth values)

In HH-modal logic, our truth values are ways of
substituting 0s and 1s for the *s in HH...
In H-modal logic the truth values are these (there are 32 of them):

(32 truth values, with "...")

Each "*" in LL or HH is called a "world".
If we number the worlds like this,

 1         1
2 3       2 3
          4 5

then 00101 is a HH-modal truth-value that is false (0)
in the worlds 1, 2 and 4, and true (1) in the worlds 3 and 5.
If P=00101, then

P_1=0
P_2=0   P_3=1
P_4=0   P_5=1

** DONE Modal Logic on Lambda and on House: &,v,->,<->

In HH-modal logic we have:

Om_HM={00000,00001,...,11111}    |Om_HM|=32

We have to define the functions

&_HM: Om_HM x Om_HM -> Om_HM
v_HM: Om_HM x Om_HM -> Om_HM
->_HM: Om_HM x Om_HM -> Om_HM
<->_HM: Om_HM x Om_HM -> Om_HM

A table showing how to calculate &_HM would have 32x32=1024 lines,
so tables are not practical here... We will define &_HM, v_HM, ->_HM,
by an algorithm. The idea is to ___calculate the result in each world
separately___; for example:

 0            0            1
0 0  ->_HM   0 1    =     1 1
1 1          0 1          0 1

because

P_1             Q_1             P_1->Q_1            (P->_HMQ)_1
P_2 P_3  ->_HM  Q_2 Q_3   =   P_2->Q_2 P_3->Q_3  =  (P->_HMQ)_2 (P->_HMQ)_3
P_4 P_5         Q_4 Q_5       P_4->Q_4 P_5->Q_5     (P->_HMQ)_4 (P->_HMQ)_5

 0            0           0->0              1
0 0  ->_HM   0 1   =    0->0  0->1   =     1 1
1 1          0 1        1->0  1->1         0 1

and the same thing for &, v, <->.
So, for example,

 0           0         0
0 0   &_HM  0 1   =   0 0
1 1         0 1       0 1

 0           0         0
0 0   v_HM  0 1   =   0 1
1 1         0 1       1 1

 0            0         1
0 0  <->_HM  0 1   =   1 0
1 1           0 1       0 1
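The "calculate in each world separately" algorithm can be sketched in Python, writing an HH-modal truth-value as a 5-character string read in the order 1; 2 3; 4 5 (so the example P above is "00011" and Q is "00101"):

```python
# Worldwise connectives for HH-modal logic; truth-values are strings
# like "00011", one character per world, in reading order.
def worldwise(op):
    """Lift a binary operation on {0,1} to truth-value strings."""
    return lambda p, q: "".join(str(op(int(a), int(b)))
                                for a, b in zip(p, q))

and_hm = worldwise(lambda a, b: a & b)
or_hm  = worldwise(lambda a, b: a | b)
imp_hm = worldwise(lambda a, b: int((not a) or b))
bij_hm = worldwise(lambda a, b: int(a == b))

print(imp_hm("00011", "00101"))  # 11101, as in the diagram above
```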

** DONE Intuitionistic Logic on Lambda and on House: truth-values

Now think that
"1" means "there's a something here" and
"0" means "there's nothing here"
and that things in LL and HH can fall through
these arrows:

   *            *
  / \          / \
 v   v        v   v
*     *      *     *
|     |
v     v
*     *

Then 11010 is unstable, because the top 1 wants to fall - 11010 wants
to become ..., ... wants to become either ... or ..., and so on.

In LL-_intuitionistic_ logic, the truth-values
are the ones from LL-modal logic that are stable. So,

Om_LM = {8 truth values}
Om_LI = {5 truth values}

and the same thing for H-intuitionistic logic. Om_HM has 32 truth values,
but Om_HI has just 10:

Om_HI = {10 truth values}

** DONE Intuitionistic Logic on Lambda and on House: nec

The most naive way to try to define &_HI, v_HI, ->_HI, <->_HI fails...
If we just apply the idea of "calculate the result in each world separately",
sometimes "good" inputs (stable truth values) will produce "bad" outputs
(unstable truth-values)! Remember that we saw that:

 0            0           0->0              1
0 0  ->_HM   0 1   =    0->0  0->1   =     1 1
1 1          0 1        1->0  1->1         0 1

Let's define an operation that keeps stable truth-values unchanged,
but converts unstable truth-values to stable truth-values.

Remember the idea that when we have a "1" above a "0"
the "1" wants to fall to the position of the "0" just below it
through one of the arrows...

   *            *
  / \          / \
 v   v        v   v
*     *      *     *
|     |
v     v
*     *

We will modify that idea. To calculate nec P (pron: "necessary P")
everything "1" that could fall into a "0" just disappears - instead
of the "1" and the "0" changing places, the "1" becomes a "0" - and
we repeat that idea as many times as needed until we get something
stable. For example,

1         1         0
1 1  -->  0 1  -->  0 1
0 1       0 1       0 1

and 00101 is stable; so nec 11101 = 00101.

We define:

P &_HI Q = nec (P &_HM Q)
P v_HI Q = nec (P v_HM Q)
P ->_HI Q = nec (P ->_HM Q)

Actually &_HM and v_HM always produce "good" (stable) outputs
when they receive "good" (stable) inputs, so we can simplify that to:

P &_HI Q = P &_HM Q
P v_HI Q = P v_HM Q
P ->_HI Q = nec (P ->_HM Q)
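The operation nec and the definitions above can be sketched in Python, assuming (as read off the diagrams) that the worlds are numbered 1; 2 3; 4 5 and the fall arrows are 1->2, 1->3, 2->4, 3->5:

```python
# nec and ->_HI for HH; truth-values are 5-character strings in the
# reading order 1; 2 3; 4 5.  ARROWS uses 0-based world indices.
ARROWS = [(0, 1), (0, 2), (1, 3), (2, 4)]

def stable(p):
    """Stable: no '1' sits directly above a '0' along an arrow."""
    return all(not (p[a] == "1" and p[b] == "0") for a, b in ARROWS)

def nec(p):
    """Turn every '1' that could fall into a '0' into '0', repeatedly."""
    while not stable(p):
        p = "".join("0" if p[i] == "1" and
                    any(a == i and p[b] == "0" for a, b in ARROWS)
                    else p[i]
                    for i in range(len(p)))
    return p

def imp_hm(p, q):
    return "".join(str(int((not int(a)) or int(b))) for a, b in zip(p, q))

def imp_hi(p, q):
    return nec(imp_hm(p, q))

def not_hi(p):
    return imp_hi(p, "00000")

print(nec("11101"))  # 00101, as in the example above
```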

** DONE Intuitionistic Logic on Lambda and on House: ! and <->

Usually in intuitionistic logic ! and <-> are not primitive operators,
they are defined from ->, &, bot like this:

!P    := P->bot
P<->Q := (P->Q)&(Q->P)

so we would have

P &_HI Q := P &_HM Q
P v_HI Q := P v_HM Q
P ->_HI Q := nec (P ->_HM Q)
!_HI P     := P ->_HI bot
            = nec (P ->_HM bot)
            = nec (P ->_HM 00000)
P <->_HI Q := (P ->_HI Q) &_HI (Q ->_HI P)
            = nec (P ->_HM Q) &_HM nec (Q ->_HM P)

but it turns out that the formulas for ! and <-> yield the same results
as these ones,

!_HI P     := nec (!_HM P)
P <->_HI Q := nec (P <->_HM Q)

which are slightly quicker to calculate.

** DONE !!P->P is not always a tautology

In H-intuitionistic logic, if P=00001, then

!!P->P

means

!_HI !_HI 00001 ->_HI 00001,

and

!_HI 00001 = nec (!_HM 00001) = nec 11110 = 01010
!_HI !_HI 00001 = !_HI 01010 = nec (!_HM 01010) = nec 10101 = 00101
!_HI !_HI 00001 ->_HI 00001 = 00101 ->_HI 00001 = nec (00101 ->_HM 00001) = nec 11011 = 01011

With a bit of practice we can calculate that like this:

! ! 00001 -> 00001
  -------
  01010
---------
00101
------------------
01011

So in H-intuitionistic logic, when P=00001, then

!!P->P = 01011

which is true in some worlds, but not all;
in this case !!P->P is not equal to T_HI = 11111.
Here are the tables with the values of !!P->P for all values of P,
in H-intuitionistic logic and in classical logic:

P  !!P->P     ...
---------
...

So in H-intuitionistic logic "!!P->P" is not a tautology, while in
classical logic it is.

What does that mean?
In H-intuitionistic logic "!!P<->P" is not a tautology.
In H-intuitionistic logic !!P can be different from P.

In L-intuitionistic logic this - "!!P can be different from P" - also
holds:

! ! 011 -> 011
  -----
  000
-------
111
--------------
011

** DONE !(P&Q)->(!Pv!Q) is not always a tautology

In classical logic we have the two DeMorgan laws,

!(PvQ)<->(!P&!Q)
!(P&Q)<->(!Pv!Q)

We can split each of those into two implications, and
in H-intuitionistic logic three of these implications are tautologies,
but the fourth one is not:

!(PvQ)<-(!P&!Q)   is a tautology
!(PvQ)->(!P&!Q)   is a tautology
!(P&Q)<-(!Pv!Q)   is a tautology
!(P&Q)->(!Pv!Q)   is _not_ a tautology

If we take P=00010 and Q=00001, then

!(00010 & 00001) -> (!00010 v !00001)
  -------------       ------   ------
      00000           00101    01010
  --------------      ---------------
      11111                01111
-------------------------------------
               01111
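This counterexample, and the claim that the other three implications always hold, can be checked by brute force in Python; this is a sketch, again assuming worlds 1; 2 3; 4 5 with arrows 1->2, 1->3, 2->4, 3->5:

```python
# Brute-force check of the De Morgan implications in HH-intuitionistic
# logic, over all stable truth-values.
from itertools import product

ARROWS = [(0, 1), (0, 2), (1, 3), (2, 4)]

def nec(p):
    p, changed = list(p), True
    while changed:
        changed = False
        for a, b in ARROWS:
            if p[a] == "1" and p[b] == "0":
                p[a], changed = "0", True
    return "".join(p)

def stable(p):
    return nec(p) == p

OM_HI = [v for bits in product("01", repeat=5)
         if stable(v := "".join(bits))]       # the 10 stable values

def ww(op, p, q):
    return "".join(str(op(int(a), int(b))) for a, b in zip(p, q))

and_hi = lambda p, q: ww(lambda a, b: a & b, p, q)
or_hi  = lambda p, q: ww(lambda a, b: a | b, p, q)
imp_hi = lambda p, q: nec(ww(lambda a, b: int((not a) or b), p, q))
not_hi = lambda p: imp_hi(p, "00000")

# The fourth implication fails at P=00010, Q=00001:
P, Q = "00010", "00001"
print(imp_hi(not_hi(and_hi(P, Q)),
             or_hi(not_hi(P), not_hi(Q))))   # 01111, not 11111
```

Looping over all pairs in OM_HI confirms that the other three implications always return 11111.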

* ---
* --- 4. ZSets, ZDAGs and ZHAs
** DONE N^2 as a chessboard

In the usual way of representing chess diagrams,
the black pieces start above, the white pieces start below,
the black pawns move down, the white pawns move up.

Let's think of N^2 as an infinite chessboard.
The "positions" of N^2 are the points (x,y), with x,y \in N.
The "positions" of a chessboard are points (x,y), with x,y \in {0,1,...,7}.
Note that in a chessboard we put the pieces in the middle of the squares,
but in N^2 we put them on points.

Diagram: (chessboard in initial position)     (N^2)

** DONE N^2 as a chessboard: pawns and monks


The points (x,y) in N^2 with x+y odd have one color - white, say.
The points (x,y) in N^2 with x+y even have the other color - black.

Let's define how some pieces - _inspired_ by chess - move.
We are only interested in how our pieces can _move_ - not in ideas
like "winning" or "losing".

Black pawns move from (x,y) to (x',y-1), where x' in {x-1, x, x+1}.
Black monks move from (x,y) to (x',y-1), where x' in {x-1, x+1}.
(Monks are like bishops, but less powerful).

          (x,y)                          (x,y)
         /  |  \                        /     \
        v   v   v                      v       v
(x-1,y-1) (x,y-1) (x+1,y-1)    (x-1,y-1)     (x+1,y-1)

(x-1,y+1) (x,y+1) (x+1,y+1)    (x-1,y+1)     (x+1,y+1)
        ^   ^   ^                      ^       ^
         \  |  /                        \     /
          (x,y)                          (x,y)

White pawns move from (x,y) to (x',y+1), where x' in {x-1, x, x+1}.
White monks move from (x,y) to (x',y+1), where x' in {x-1, x+1}.

** DONE N^2 as a chessboard: paths

Here are two black pawn's paths.
The one at the left is _not_ a black monk's path,
because it has some (two) vertical moves.
The one at the right _is_ a black monk's path.

(0,7)

(1,6)

(1,5)
(...)
(0,4)

(0,3)

(1,2)

(2,1)

** DONE ZSets

We say that a subset D \subset N^2 is _well-positioned_ when:
* it is non-empty,
* it is finite,
* it touches the vertical axis, i.e., there is an (x,y) in D with x=0,
* it touches the horizontal axis, i.e., there is an (x,y) in D with y=0.

Definition: a _ZSet_ is a subset of N^2 that is well-positioned.

We like ZSets (well-positioned subsets of N^2) because we can represent
them unambiguously using the _bullet notation_:

        *           {        (1,2),
H' :=   * *    ==      (0,1),      (2,1),
        * **           (0,0),      (2,0),(3,0)}

Later we will have some favourite ZSets, and fixed names for them:

V Vee
K Kite
H House

# (find-LATEX "2014sfc-slides.tex" "ZSets")

The _reading order_ on a ZSet D is how we would read aloud the points
of D as we would read a page of text: top line first, then each
subsequent line until the last; and in each line we go from left to
right.

Formally,

(1,2) << (0,1) << (2,1) << (0,0) << (2,0) << (3,0)

and this is H' with its points _listed_ in reading order:

((1,2), (0,1), (2,1), (0,0), (2,0), (3,0))
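The reading order can be sketched in Python: sort by y decreasing, then x increasing. Here H' is written out explicitly, with the bottom row (0,0), (2,0), (3,0) read off the bullet picture:

```python
# Reading order on a ZSet: top line first, left to right in each line.
def reading_order(points):
    return sorted(points, key=lambda xy: (-xy[1], xy[0]))

H_ = {(1, 2), (0, 1), (2, 1), (0, 0), (2, 0), (3, 0)}
print(reading_order(H_))
# [(1, 2), (0, 1), (2, 1), (0, 0), (2, 0), (3, 0)]
```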

** DONE Functions on ZSets (positional notation, and reading aloud)

What happens if we represent a function on a given ZSet - say, on

        *           {        (1,2),
H' :=   * *    ==      (0,1),      (2,1),
        * **           (0,0),      (2,0),(3,0)}

by writing its return values positionally?

Two (super-cool!) things. First, we get something very similar to the
bullet notation... For example, the first and the second projections
on H' can be denoted by just \dagH'102023 and \dagH'211000,

pi: H' -> N     ==   {    ((1,2),1), ... }   ==  \dagH'102023
    (x,y) |-> x

pi': H' -> N    ==   {    ((1,2),2), ... }   ==  \dagH'211000,
     (x,y) |-> y

And second, the reading order gives us a way to read aloud the outputs
of these functions in sequence - \dagH'102023 is one-zero-two-etc.

As a bonus, this gives us a way to order functions from a ZSet D to A
when we have an order on A... for example, here are the 8 functions
from Vee to {0,1}:

000 001 010 011 etc

They are listed "in order", because:

000 < 001 < 010

** DONE BPM(D)

If D is a ZSet, let's write BPM(D) for the set of black pawn's moves on
D - i.e., the black pawn's moves on N^2 that start and end on points
of D.

For any ZSet D, (D, BPM(D)) is a DAG - a directed acyclical graph,
acyclical because cycles would have to have arrows going up (which we
don't have) to compensate for the arrows going down.

For example, (H', BPM(H')) is this:

      (1,2)
     /     \
    v       v
(0,1)      (2,1)
  |          |  \
  v          v   v
(0,0)     (2,0)  (3,0)

Where if we represent BPM(H') as a set of arrows, it becomes this:

{((1,2),(0,1)), ((1,2),(2,1)),
...}

** DONE Transitive-reflexive closures

Let (A,R) be a graph.
R - that represents the arrows - is a relation on A, i.e., R \subseteq A^2.
The transitive closure of (A,R), is obtained by adding all composites of arrows.
The reflexive closure of (A,R) is obtained by adding all identity arrows.
We denote the transitive-reflexive closure of (A,R) by (A,R*).

For example, K ("kite") is this, \dagKite*****,
and (K,BPM(K)) and (K,BPM(K)*) are:

(diagram)

** DONE ZDAGs

A _ZDAG_ is a DAG of the form (D,BPM(D)), where D is a ZSet.
A _ZDAG*_ is a directed graph of the form (D,BPM(D)*), where D is a ZSet.
(A ZDAG* has cycles made of identity arrows - it is not acyclical).

Some DAGs are isomorphic to ZDAGs, some DAGs are not...

   1
  / \
 v   v
 2    3   == K         the pentagon is not iso to a ZDAG
  \   /
   v v
    4
    |
    v
    5

Some categories - some posets - are isomorphic to ZDAG*s...
For example,

(Kite* and its Hom-sets)

** DONE ZHAs

A ZHA is a ZSet obeying some extra conditions.
A ZHA is a ZSet D that has:
* a top element T_D,
* a bottom element bot_D,
* a left wall, LW_D, that is a black monk's path from T_D to bot_D,
* a right wall, RW_D, that is a black monk's path from T_D to bot_D,
* all the same-color elements between LW_D and RW_D, and nothing else.

This is a ZHA:               This is not a ZHA:

   *                            * *
  * *                          *   *
   *
   *                           * *** *
  * *                           * * *
  * *                            *
 * * *
  * *                        It doesn't have a top element,
   *                         its left wall (the red points)
                             is not a black monk's path,
The red arrows               it has points of both colors,
join the points of LW_D,     and it has lines without all the
the blue arrows              same-color points between the
join the points of RW_D.     left wall and the right wall.

This is why we like ZHAs:
if D is a ZHA, then Dup* is a Heyting Algebra,
i.e., we have operations and, or, imp, bij: DxD -> D
that behave as expected.
(What does that mean? We will see what
we expect from and, or, imp, bij, T, F soon...)

This - that ZHAs are Heyting Algebras,
and so we can interpret intuitionistic propositional
calculus on them - is an amazing fact that is
not well-known.

** DONE ZHAs: the (l,r) coordinates

The (l,r) coordinates make it very easy to define &, v, ->, etc on
ZHAs. Let's use H=... as our example. Here it is with the (x,y)
coordinates of each of its points, and the x and y axes,

y
^
9  +
8  +        (2,8)
7  +     (1,7) (3,7)
6  +        (2,6)
5  +     (1,5)
4 (0,4)     (2,4)
3  +     (1,3) (3,3)
2 (0,2)     (2,2) (4,2)
1  +     (1,1) (3,1)
0  +--+-(2,0)-+--+--+--+--> x
   0  1  2  3  4  5  6

And this is it with the l and r axes drawn on; note that the point
(0,0) in the (l,r) coordinates is the bottom point of H, bot_H=(2,0),
and that when we are talking about the (l,r) coordinates we write lr
instead of (l,r):

l          *          r              44
 ^        * *        ^             43  34
  \        *        /                33
   6 X    *      X 6               32
    5 X  * *    X 5              31  22
     4 X  * *  X 4                 21  12
      3  * * *  3                20  11  02
       2  * *  2                   10  01
        1  *  1                      00
         0   0

Formally, the change of coordinates can be done by these formulas.
In a ZHA whose bottom point is (x0,0),

(x,y) = (x0-l+r, l+r)
(l,r) = ((y-(x-x0))/2, (y+(x-x0))/2)
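The change of coordinates can be sketched in Python; note that l and r here follow the diagram (20 on the left wall, 02 on the right wall), and integer division is safe because x-x0 and y always have the same parity for points of the ZHA:

```python
# Change of coordinates for a ZHA whose bottom point is (x0, 0);
# for H, bot_H = (2,0), so x0 = 2.
def lr_to_xy(l, r, x0):
    return (x0 - l + r, l + r)

def xy_to_lr(x, y, x0):
    return ((y - (x - x0)) // 2, (y + (x - x0)) // 2)

print(lr_to_xy(2, 0, 2))   # (0, 2): the point 20 of H
print(xy_to_lr(2, 8, 2))   # (4, 4): the top of H
```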

** DONE ZHAs: True, false, and, or

In a ZHA D,
T_D is the "top" of D,
bot_D is the "bottom" of D (which is always 00),
PvQ is the lowest point above both P and Q,
P&Q is the topmost point below both P and Q.

In H, T_H=44, bot_H=00, and if P=31 and Q=12 then
PvQ=32 and P&Q=11:

    44                 top
  43  34            ...   ...
    33                 ...
  32                 PvQ
31  22             .P.  ...
  21  12             ...  .Q.
20  11  02         ...  P&Q  ...
  10  01             ...  ...
    00                 bot

Formally, if P=(l_P,r_P) and Q=(l_Q,r_Q)
then PvQ=(max(l_P,l_Q),max(r_P,r_Q))
and P&Q=(min(l_P,l_Q),min(r_P,r_Q))
(in the (l,r) coordinates).
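
In code this is just componentwise max and min; a Python sketch
(the names join and meet are ours):

```python
# PvQ and P&Q in (l,r) coordinates: componentwise max and min.
# (For points of a ZHA the results stay in the ZHA.)

def join(P, Q):          # PvQ: the lowest point above both P and Q
    return (max(P[0], Q[0]), max(P[1], Q[1]))

def meet(P, Q):          # P&Q: the uppermost point below both P and Q
    return (min(P[0], Q[0]), min(P[1], Q[1]))

# The example from the text, in H: P=31, Q=12.
P, Q = (3, 1), (1, 2)
print(join(P, Q))        # -> (3, 2), i.e. PvQ = 32
print(meet(P, Q))        # -> (1, 1), i.e. P&Q = 11
```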

** DONE ZHAs: <= and ->

In a ZHA D we will have two different notions of "implies".
One - "external" - always returns either "true" or "false", and
another - "internal" - always returns another element of D.

The "external" is written as P<=Q,
and defined as:
P<=Q is true iff l_P<=l_Q and r_P<=r_Q
Intuitively, P<=Q is true iff there is an arrow from P to Q
in Dup*.

The "internal" is written as P->Q,
and we will define it by cases.
(The reasons for this definition will appear later).
If P<=Q, then P->Q=T_D;
otherwise, if P>=Q, then P->Q=Q;
otherwise, let R:=P&Q, and P->Q will be the topmost point in D
belonging to the line that contains both R and Q.

.	          .
. .	         . .
. . .        . . .
. . .        . . .
. . . >      > . . .
. P . . .    . Q . . .
. . Q .      . . P .
. & .        . & .
. .          . .
.            .
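
Here is a brute-force Python sketch of both notions on H. Instead of
the case analysis above we compute P->Q as the topmost R with
P&R <= Q (the usual Heyting Algebra characterization, whose agreement
with the case analysis is one of the "reasons that will appear later");
the three prints exercise the three cases:

```python
# External and internal "implies" on the example ZHA H, by brute force.

H = [(0,0),(1,0),(0,1),(2,0),(1,1),(0,2),(2,1),(1,2),
     (3,1),(2,2),(3,2),(3,3),(4,3),(3,4),(4,4)]

def le(P, Q):            # external: returns True or False
    return P[0] <= Q[0] and P[1] <= Q[1]

def meet(P, Q):
    return (min(P[0], Q[0]), min(P[1], Q[1]))

def imp(P, Q):           # internal: returns another element of H
    cands = [R for R in H if le(meet(P, R), Q)]
    top = max(cands, key=lambda R: R[0] + R[1])
    assert all(le(R, top) for R in cands)   # the candidates have a top
    return top

print(imp((1,1), (2,2)))   # P<=Q, so P->Q = T_H   -> (4, 4)
print(imp((3,2), (3,1)))   # P>=Q, so P->Q = Q     -> (3, 1)
print(imp((3,1), (1,2)))   # incomparable case     -> (1, 2)
```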

** TODO Intuitionistic logic on the ZHAs G, L, S
** TODO Logic on the open subsets of a ZDAG
** TODO Logic on the points of a ZHA
** TODO Sometimes the open subsets of a ZDAG form a ZHA
* ---
* --- 5. A parenthesis: some logics on DAGs (for adults)
** DONE Towards intuitionistic logic

!(P&Q)->(!Pv!Q) is _not_ a tautology in H-intuitionistic logic, but
!(P&Q)->(!Pv!Q) _is_ a tautology in L-intuitionistic logic!

This is easy to check - the table for !(P&Q)->(!Pv!Q) in
L-intuitionistic logic has just 5x5=25 lines:

(table)

Standard logic has lots of tautologies.
L-intuitionistic logic has fewer tautologies.
H-intuitionistic logic has even fewer.
Pyr-intuitionistic logic has fewer still.
At the limit we have _intuitionistic logic_, which is defined in a
totally different way, through deduction rules...

But we have this:
if phi(P,Q,...,Z) is not a theorem of intuitionistic logic
then we have a diagram D and truth-values for P,...,Z in
D-intuitionistic logic that make phi(P,Q,...,Z)!=T...
For every non-theorem of intuitionistic (propositional) logic
we have a _finite counter-model_ for it.

This - that every non-theorem of IPL has a finite countermodel - is
not for children (too abstract!), but we are going to see some
examples of how to use tableau methods to produce countermodels; this
should make the meaning of "every non-theorem of IPL has a finite
countermodel" somewhat easier for children to grasp.

** TODO Why do our intuitionistic &,v,-> deserve to be called &,v,->?

Our intuitionistic &,v,->,<->, etc obey _most_ of the properties that
we learned to expect from the standard &,v,->,<->... but not all - for
example, !!P may be different from P.

** TODO standard logic = classical propositional calculus
** TODO A positional notation for subsets
** DONE Tarski, Gödel, Kripke, etc (for adults)

A presentation of this for adults could start in this way:

* Every topological space (X,O(X)) is a model for S4 (Tarski, 1932); nec
P is interpreted as the interior of P.

* The order topology on

GUILL

(or any other ZDAG) is Alexandroff, and its open sets are exactly
the stable sets, i.e., the ones that don't have a 1 above a 0.

* The Kripke frame on the transitive-reflexive closure of the ZDAG
above (or any other) is a model of S4; nec P corresponds to the
interior of P in the order topology above.

* There is an embedding of intuitionistic logic into the modal logic
S4 (Gödel, 1942) in which the intuitionistic connectives are
translated to modal connectives in this way:

P  &_I Q := P &_M Q
P  v_I Q := P v_M Q
P ->_I Q := nec (P ->_M Q)

We can connect these four (VERY BIG!) facts, and then conclude that
O(G) is a Heyting Algebra... and we also have this:

* For every non-theorem Phi in intuitionistic propositional logic
there is a ZDAG (finite, planar, etc!...) on which we can build a
countermodel for Phi - we need to use tableau methods.

** DONE Tarski, Gödel, Kripke, etc (for children)

...I apologize to the adults in the audience, but the ideas sketched
in the previous slide are not how we are going to do things here... we
will do everything here "for children", with brief explanations at
some points of what Kripke frames are in general, what S4 is in
general, logic in arbitrary topologies, tableaux, and so on.

* ---
* --- What properties do we expect of &,v,->,etc?
* ---
* --- Logic in ZHAs
* TODO ! and <->
* ---
* --- Topologies and S4
* TODO Modal and intuitionistic truth-values on a ZDAG
* TODO The order topology on a ZDAG
* TODO The open sets of G correspond to the points of a ZDAG
* TODO The interior of a subset of D
* TODO Some O(D)s are also ZSets

Let G ("Guill") be this ZSet: ,
and let's see if we can represent its set of open subsets, O(G),
in a nice way.

* ---
* --- Priming
Let D be a ZSet, and let's write

* ---
* DONE Interlude (for adults): categories, posets, toposes

Lawvere often says:

Never underestimate the power of the Sierpinski topos.

The Sierpinski topos is the category of functors Set^(*->*)
(a presheaf category).

The category *->* "is" (Two,BPM(Two)^*).

The Sierpinski topos "is" Set^Two.

Toposes of the form Set^bfD, when bfD is a ZDAG seen as a poset, are
great for when we want to work out examples fully, because their
objects and morphisms are easy to draw explicitly.

Mathematicians love general things, infinite things, and continuous
things.

"Children" love finite things.

We are going to do categories for children, and toposes for children,
by using categories like

Set^Two, Set^HTwo, Set^V, Set^K...

as archetypal examples for the general theory.

* DONE Interlude (for adults): NSA

In Non-Standard Analysis we construct a universe with
infinitesimals, Set^*, from the standard universe, Set, using an
ultrapower trick, and we do our proofs half in Set, half in Set^*,
using the transfer theorems to move between the two...

We are going to do something _slightly similar_ to that here.

For example, we are going to do constructions and proofs in
Set^Vee and in a general topos E, and we will move between Set^Vee and
E - but we will do that by drawing two diagrams with similar shapes,
one corresponding to the general case in E, and the other to the
particular case in Set^V.

This will happen only at the end.

Before that we will apply the same idea to Heyting Algebras.

The part on Heyting Algebras will be much more accessible and much
easier to understand... I mentioned toposes first only because they
are more famous, and people get more excited when we mention toposes
than when we mention Heyting Algebras... sorry! =)

_End of the interlude_

* TODO ZHAs are Heyting Algebras
* TODO Subsets / Modal truth values
* TODO Open subsets / intuitionistic truth values
* TODO Interior
* ---
* --- Modalities for children
* DONE The axioms

A modality on Omega is an operation J: Omega -> Omega obeying these axioms:
M1: P <= J(P)
M2: J(J(P)) <= J(P)
M3: J(P&Q) = J(P)&J(Q)

What do these axioms _mean_ (e.g., geometrically, on ZDAGs)?
* Equivalent elements form diamond-shaped regions
* Stable elements form a sub-ZDAG
To understand and prove this we start with _consequences_ of M1, M2, M3

What familiar operations obey these axioms?
* I(P)   := P              (the "identity" modality)
* T(P)   := T              (the "true" modality)
* O_A(P) := A or P         (the "or" modality)
* E_A(P) := A -> P         (the "implication" (exponential) modality)
* B_A(P) := (P -> A) -> A  ("boolean quotient" - general case)
* B(P)   := B_bot(P)       ("boolean quotient" - particular case)
           = (P -> bot) -> bot
           = not not P

What other operations obey these axioms?
* Modalities on ZDAGs correspond to dividing by diagonal lines

How do different modalities interact?
(There is an algebra of modalities!)
How does a modality interact with &, or, ->, etc?

What interesting things can we do with modalities?
* Quotients
* Sheafness, sheaves, sheafification (from topos theory)
* Geometric morphisms (from topos theory)
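
The axioms can also be tested by brute force. A Python sketch that
checks M1, M2, M3 for J = not not on the 12-point ZHA Om used in the
examples below (imp is computed by brute force as the topmost R with
P&R <= Q; all function names are ours):

```python
# Brute-force check that J(P) = not not P = (P -> bot) -> bot
# obeys M1, M2, M3 on the 12-point ZHA Om.

Om = [(0,0),(1,0),(0,1),(1,1),(0,2),(1,2),(0,3),(1,3),
      (2,2),(2,3),(3,2),(3,3)]
bot = (0, 0)

def le(P, Q):
    return P[0] <= Q[0] and P[1] <= Q[1]

def meet(P, Q):
    return (min(P[0], Q[0]), min(P[1], Q[1]))

def imp(P, Q):
    cands = [R for R in Om if le(meet(P, R), Q)]
    top = max(cands, key=lambda R: R[0] + R[1])
    assert all(le(R, top) for R in cands)    # the candidates have a top
    return top

def J(P):                                    # J(P) = not not P
    return imp(imp(P, bot), bot)

assert all(le(P, J(P)) for P in Om)                                    # M1
assert all(J(J(P)) == J(P) for P in Om)                                # M2 (with M1)
assert all(J(meet(P, Q)) == meet(J(P), J(Q)) for P in Om for Q in Om)  # M3
print("J = not not obeys M1, M2, M3 on Om")
```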

* DONE Diamond-shaped regions

A region (i.e., a subset) of a ZDAG is _diamond-shaped_ if it has
* a top element
* a bottom element
* all elements between the top and the bottom, and nothing else

Formally, the _diamond between A and B_ is
Db(A,B) := {P in Omega | A <= P <= B}

Example: in

33                                  ..
32  23      			      ..  ..
22  13    			        22  ..
12  03     we have Db(01,22) =      12  ..
11  02    	  {01,02,11,12,22} =    11  02
10  01      			      ..  01
00        			        ..
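
A Python sketch of Db on the ZHA of the example, with the points
written as (l,r) pairs (the names Om, le and Db are ours):

```python
# The diamond Db(A,B) = {P in Omega | A <= P <= B}, by brute force,
# on the 12-point ZHA of the example.

Om = [(0,0),(1,0),(0,1),(1,1),(0,2),(1,2),(0,3),(1,3),
      (2,2),(2,3),(3,2),(3,3)]

def le(P, Q):
    return P[0] <= Q[0] and P[1] <= Q[1]

def Db(A, B):
    return {P for P in Om if le(A, P) and le(P, B)}

print(sorted(Db((0,1), (2,2))))
# -> [(0, 1), (0, 2), (1, 1), (1, 2), (2, 2)], i.e. {01,02,11,12,22}
```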

* DONE Stable and equivalent truth-values

Remember: a modality on Om is an operation J: Om -> Om obeying these axioms:
M1: P <= J(P)
M2: J(J(P)) <= J(P)
M3: J(P&Q) = J(P)&J(Q)

M1 means that applying J once moves up,
M2+M1 mean that applying J a second time does not move further up.
(What is M3?)

Let's say that P is _stable_ when P = J(P),
and that P and Q are _equivalent_ when J(P) = J(Q).

Formally:
St(P) := (P = J(P))        (returns true or false)
Sts   := {P in Om | St(P)} (returns a set)
P==Q  := (J(P) = J(Q))     (returns true or false)
E(P)  := {Q in Om | P==Q}  (returns a set)
Es    := {E(P) | P in Om}  (returns a partition)

Let's look at an example.
When J(P) := B(P) = not not P, we have

      33                      33                  ..
      32  23                  33  33              ..  ..
      22  13                  33  33              ..  ..
Om == 12  03  (P|->J(P)) ==   33  03    Sts ==    ..  03
      11  02                  33  03              ..  ..
      10  01                  10  03              10  ..
      00                      00                  00

and the equivalence classes are:

..           ..          ..          33
..  ..       ..  ..      ..  ..      32  23
..  ..       ..  ..      ..  ..      22  13
..  ..       ..  ..      ..  03      12  ..
..  ..       ..  ..      ..  02      11  ..
..  ..       10  ..      ..  01      ..  ..
00           ..          ..          ..

i.e., Es = {{00}, {10}, {01,02,03}, {11,12,13,22,23,32,33}}
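
The example can be checked in a few lines of Python; here the table
above for J = not not is transcribed into a dict, with points written
as strings '00', '10', ... (a sketch):

```python
# J = not not on the 12-point ZHA Om, tabulated from the example above.
J = {'00':'00', '10':'10', '01':'03', '02':'03', '03':'03',
     '11':'33', '12':'33', '13':'33', '22':'33', '23':'33',
     '32':'33', '33':'33'}

Sts = {P for P in J if J[P] == P}
Es  = {frozenset(Q for Q in J if J[Q] == J[P]) for P in J}

print(sorted(Sts))   # -> ['00', '03', '10', '33']
print(sorted(sorted(E) for E in Es))
# -> [['00'], ['01', '02', '03'], ['10'],
#     ['11', '12', '13', '22', '23', '32', '33']]
```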

* DONE What is axiom M3? Introduction
Remember: a modality on Om is an operation J: Om -> Om obeying these axioms:
M1: P <= J(P)
M2: J(J(P)) <= J(P)
M3: J(P&Q) = J(P)&J(Q)

M3 implies _monotonicity_, which is:
Mo: P <= Q implies J(P) <= J(Q)
but this is not obvious. Let's prove that in several steps.

..
..  ..
..  ..  ..
..  ..  ..  ..
..  64  ..  46  ..            J(P)      J(Q)
..  ..  ..  ..  ..  ..              ^    ^
..  ..  ..  44  ..  ..  ..             \  /
..  ..  ..  ..  ..  ..  ..  ..         J(P)&J(Q)
..  ..  ..  ..  ..  ..  ..
..  ..  ..  ..  ..  ..
..  31  ..  13  ..              P        Q
..  ..  ..  ..                 ^      ^
..  11  ..                     \   /
..  ..                        P&Q
..

Let P=31, Q=13, J(P)=64, J(Q)=46.
Then P&Q=11 and J(P)&J(Q)=44, but without M3 we wouldn't know
where J(P&Q)=J(11) is... it could be anything above 11.

M3 tells us that J(11)=J(P&Q)=J(P)&J(Q)=64&46=44,
which is below J(P)=64 and J(Q)=46...
(Actually J(11) is the topmost value below J(P) and J(Q).)

* DONE What is axiom M3? Monotonicity for products
M3 implies that J(P&Q)=J(P)&J(Q),
so let's rewrite the "J(P)&J(Q)" as "J(P&Q)" in our diagram.
M3 implies that when we apply J to all elements of a
product diagram P <- P&Q -> Q, we do have arrows
in its image J(P) <- J(P&Q) -> J(Q).

Formally, this is
Mp: J(P&Q) <= J(P), and J(P&Q) <= J(Q)
("monotonicity for products")

..
..  ..
..  ..  ..
..  ..  ..  ..
..  64  ..  46  ..            J(P)      J(Q)
..  ..  ..  ..  ..  ..              ^    ^
..  ..  ..  44  ..  ..  ..             \  /
..  ..  ..  ..  ..  ..  ..  ..          J(P&Q)
..  ..  ..  ..  ..  ..  ..
..  ..  ..  ..  ..  ..
..  31  ..  13  ..              P        Q
..  ..  ..  ..                 ^      ^
..  11  ..                     \   /
..  ..                        P&Q
..

* DONE What is axiom M3? Monotonicity
When P <= Q we have P = P&Q,
and a part of the previous diagram collapses.
If P=P&Q=11, Q=13, J(P)=J(P&Q)=44, and J(Q)=46,
then we have:

..
..  ..
..  ..  ..
..  ..  ..  ..
..  ..  ..  46  ..                      J(Q)
..  ..  ..  ..  ..  ..                   ^
..  ..  ..  44  ..  ..  ..                /
..  ..  ..  ..  ..  ..  ..  ..   J(P) = J(P&Q)
..  ..  ..  ..  ..  ..  ..
..  ..  ..  ..  ..  ..
..  ..  ..  13  ..                       Q
..  ..  ..  ..                        ^
..  11  ..                         /
..  ..                    P = P&Q
..

Formally:

                    ----------------M3  ---------------
                    J(P&Q)=J(P)&J(Q)    J(P)&J(Q)<=J(Q)
------------Mp  :=  -----------------------------------
J(P&Q)<=J(Q)                 J(P&Q)<=J(Q)

                      P<=Q
                      =====
                      P=P&Q
                  -----------  ------------Mp
P<=Q              J(P)=J(P&Q)  J(P&Q)<=J(Q)
----------Mo  :=  -------------------------
J(P)<=J(Q)               J(P)<=J(Q)

* DONE The sandwich lemma
From
M1: P <= J(P)
M2: J(J(P)) <= J(P)
M3: J(P&Q) = J(P)&J(Q)

We can prove that if P <= Q <= J(P) then P and Q (and J(P)) are equivalent.
More formally ("sandwich lemma"):

SL: If P <= Q <= J(P) then P==Q.

Proof:
                                  Q<=J(P)
                                  -------------Mo   -------------M2
                    P<=Q          J(Q)<=J(J(P))     J(J(P))<=J(P)
                    ----------Mo  -------------------------------
                    J(P)<=J(Q)              J(Q)<=J(P)
P<=Q  Q<=J(P)       ----------------------------------
-------------SL  :=             J(P)=J(Q)
  J(P)=J(Q)

* DONE The sandwich lemma: equivalence classes don't have holes

Sandwich lemma:

SL: If P<=Q<=J(P) then P==Q.

What does that mean? Let's look again at our example.
This was an equivalence class,

33
32  23
22  13
12  ..
11  ..
..  ..
..

and J(12) = 33.
Making P=12 and Q=22 in the sandwich lemma,
we have P <= Q <= J(P), i.e., 12 <= 22 <= 33,
and, by SL, P==Q, i.e., 12==22, i.e., J(12)=J(22)=33.

The sandwich lemma is very powerful.
If we know that J(11)=33, then, using it for all elements between 11 and 33,
we get that J(12)=J(13)=J(22)=J(23)=J(32)=33.

The sandwich lemma says that equivalence classes do not have "holes".
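
The lemma can be checked by brute force on the running example, with J
transcribed from its table (a Python sketch; names are ours):

```python
# Brute-force check of SL on the 12-point example: whenever
# P <= Q <= J(P), P and Q are equivalent (J(P) = J(Q)).

J = {(0,0):(0,0), (1,0):(1,0), (0,1):(0,3), (0,2):(0,3), (0,3):(0,3),
     (1,1):(3,3), (1,2):(3,3), (1,3):(3,3), (2,2):(3,3), (2,3):(3,3),
     (3,2):(3,3), (3,3):(3,3)}

def le(P, Q):
    return P[0] <= Q[0] and P[1] <= Q[1]

for P in J:
    for Q in J:
        if le(P, Q) and le(Q, J[P]):
            assert J[P] == J[Q]          # the sandwich lemma
print("SL holds on the example")
```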

* DONE Equivalence classes: the bottom element lemma
The axiom
M3: J(P&Q) = J(P)&J(Q)
can be extended to P&Q&R, P&Q&R&S, etc. Look:

J(P&Q&R) = J(P&(Q&R))
         = J(P)&J(Q&R)
         = J(P)&J(Q)&J(R)

What happens when we take the "&" of all the truth-values in an
equivalence class, and then we apply J to it?
Let's look at an example.
Let  E := {11,12,13,22,23,32,33}
and &E := 11&12&13&22&23&32&33     (= 11).

33
32  23
22  13
12  ..
11  ..
..  ..
..

J(&E) = J(11 & 12 & 13 & 22 & 23 & 32 & 33)
      = J(11) & J(12) & J(13) & J(22) & J(23) & J(32) & J(33)
      = 33 & 33 & 33 & 33 & 33 & 33 & 33
      = 33

So, &E is ==-equivalent to all other elements of E,
and &E is below all other elements of E...
That means that &E is the bottom element of that equivalence class,
and, by the sandwich lemma, the equivalence class is made of
all elements between &E and J(&E).

TYPESET THIS IN RED:
(That only works for _finite_ conjunctions, but we are working only
with finite cases...)
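
A small Python check of the lemma on the example class (a sketch;
andE stands for &E, computed as the componentwise min):

```python
# The bottom element lemma on the example class E = {11,...,33}:
# &E is below every element of E, J(&E) = 33, and E is exactly the
# set of elements of Om between &E and J(&E).

Om = [(0,0),(1,0),(0,1),(1,1),(0,2),(1,2),(0,3),(1,3),
      (2,2),(2,3),(3,2),(3,3)]
E  = [(1,1),(1,2),(1,3),(2,2),(2,3),(3,2),(3,3)]

andE = (min(l for l, r in E), min(r for l, r in E))   # &E, componentwise
print(andE)                                           # -> (1, 1), i.e. 11

def le(P, Q):
    return P[0] <= Q[0] and P[1] <= Q[1]

JandE = (3, 3)        # J(&E) = 33, computed in the text
assert set(E) == {P for P in Om if le(andE, P) and le(P, JandE)}
```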

* DONE Equivalence classes: the bottom element lemma (2)
A small modification of that previous argument leads to this:
let E be any non-empty set of equivalent elements of Om.
Then the diamond-shaped region

E' := Db(&E,J(&E))

having &E as its bottom element and J(&E) as its top element
contains E, and all elements in E' are equivalent.

For example, in

33
32  23
22  13
12  ..
11  ..
..  ..
..

let E := {13, 22}; note that J(13)=J(22)=33, and that &E = 13&22 = 12.
Then

J(&E) = J(13 & 22)
      = J(13) & J(22)
      = 33 & 33
      = 33,

and all elements in the diamond-shaped region

E' := Db(&E,J(&E)) = Db(12,33) = {12,13,22,23,32,33}

are equivalent; this new lemma takes an arbitrary non-empty set of
equivalent elements, and extends it to a diamond-shaped set of
equivalent elements. In our example it extends E={13,22} to
E'={12,13,22,23,32,33}, i.e.,

..                  33
..  ..    	      32  23
22  13  	        22  13
..  ..    to:       12  ..
..  ..  	        ..  ..
..  ..    	      ..  ..
..      	        ..

* DONE An operation dual to J

Let Eq(P) be the equivalence class of P.
Formally,

Eq(P) := {Q | J(P)=J(Q)}.

We know that J(P) returns the top element of Eq(P).
We saw, by the bottom element lemma, that each Eq(P) has a bottom element.
Let's denote that bottom element by B(P).
Formally, B(P) is the "&" of all elements of Eq(P).
(A finite conjunction because we are in the finite case!)

We have that Eq(P) is made of all elements between B(P) and J(P),
and nothing else...

* DONE How J interacts with &

We know (proofs later!) that "&" is monotone in both of its inputs,
and that P<=J(P) and that J is monotone, i.e.,

P<=P'       Q<=Q'                   P<=Q
---------   ---------   -------    ----------
P&Q<=P'&Q   P&Q<=P&Q'   P<=J(P)    J(P)<=J(Q)

These things mean that for any truth-values P and Q we know some
relations between P&Q, J(P)&Q, ..., J(J(P)&J(Q)):

             J(J(P)&J(Q))
               ^   ^   ^
              /    |    \
             /     |     \
   J(J(P)&Q)  J(P)&J(Q)  J(P&J(Q))
      ^  ^      ^  ^      ^  ^
      |   \    /    \    /   |
      |    \  /      \  /    |
      |     \/        \/     |
      |     /\        /\     |
   J(P)&Q      J(P&Q)      P&J(Q)
        ^         ^        ^
         \        |       /
          \       |      /
              P&Q

But M3 tells us that J(P&Q)=J(P)&J(Q), so at least the two middle
vertices in the cube-shaped diagram above yield the same result...
Actually we can prove more: J(P&Q) is equal to all the four upper
truth-values in the diagram above. Proofs:

J(J(P)&Q)    = J(J(P))&J(Q)    = J(P)&J(Q)
J(P&J(Q))    = J(P)&J(J(Q))    = J(P)&J(Q)
J(J(P)&J(Q)) = J(J(P))&J(J(Q)) = J(P)&J(Q)
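
These equalities can be confirmed by brute force on the running
12-point example, with J transcribed from its table (a Python sketch):

```python
# On the 12-point example, J(J(P)&Q), J(P&J(Q)) and J(J(P)&J(Q))
# all equal J(P&Q).

J = {(0,0):(0,0), (1,0):(1,0), (0,1):(0,3), (0,2):(0,3), (0,3):(0,3),
     (1,1):(3,3), (1,2):(3,3), (1,3):(3,3), (2,2):(3,3), (2,3):(3,3),
     (3,2):(3,3), (3,3):(3,3)}

def meet(P, Q):
    return (min(P[0], Q[0]), min(P[1], Q[1]))

for P in J:
    for Q in J:
        t = J[meet(P, Q)]                # J(P&Q)
        assert J[meet(J[P], Q)] == t
        assert J[meet(P, J[Q])] == t
        assert J[meet(J[P], J[Q])] == t
print("the four J-ed values agree")
```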

* DONE How J interacts with or

We know (proofs later!) that "v" is monotone in both of its inputs,
and (again!) that P<=J(P) and that J is monotone, i.e.,

P<=P'       Q<=Q'                   P<=Q
---------   ---------   -------    ----------
PvQ<=P'vQ   PvQ<=PvQ'   P<=J(P)    J(P)<=J(Q)

These things mean that for any truth-values P and Q we know these
relations between PvQ, J(P)vQ, ..., J(J(P)vJ(Q)):

             J(J(P)vJ(Q))
               ^   ^   ^
              /    |    \
             /     |     \
   J(J(P)vQ)  J(P)vJ(Q)  J(PvJ(Q))
      ^  ^      ^  ^      ^  ^
      |   \    /    \    /   |
      |    \  /      \  /    |
      |     \/        \/     |
      |     /\        /\     |
   J(P)vQ      J(PvQ)      PvJ(Q)
        ^         ^        ^
         \        |       /
          \       |      /
              PvQ

But we know more. We can prove that we have J(J(P)vJ(Q))<=J(PvQ),

------           ------
P<=PvQ           Q<=PvQ
------------Mo   ------------Mo
J(P)<=J(PvQ)     J(Q)<=J(PvQ)
-----------------------------
      J(P)vJ(Q)<=J(PvQ)
-----------------------Mo   ----------------M2
J(J(P)vJ(Q))<=J(J(PvQ))     J(J(PvQ))=J(PvQ)
--------------------------------------------
          J(J(P)vJ(Q))<=J(PvQ)

and this implies that all the 4 truth-values in the upper lozenge are
equal...
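
The equality of the four J-ed values can be confirmed on the same
12-point example (a Python sketch, with J transcribed from its table):

```python
# On the 12-point example, J(PvQ), J(J(P)vQ), J(PvJ(Q)) and
# J(J(P)vJ(Q)) all coincide.

J = {(0,0):(0,0), (1,0):(1,0), (0,1):(0,3), (0,2):(0,3), (0,3):(0,3),
     (1,1):(3,3), (1,2):(3,3), (1,3):(3,3), (2,2):(3,3), (2,3):(3,3),
     (3,2):(3,3), (3,3):(3,3)}

def join(P, Q):
    return (max(P[0], Q[0]), max(P[1], Q[1]))

for P in J:
    for Q in J:
        t = J[join(P, Q)]                # J(PvQ)
        assert J[join(J[P], Q)] == t
        assert J[join(P, J[Q])] == t
        assert J[join(J[P], J[Q])] == t
print("the upper lozenge collapses under J")
```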

* DONE How ZHAs are cut into equivalence classes

We saw that the equivalence classes of J are diamond-shaped regions,
but we don't yet know which ways of splitting a ZHA into
diamond-shaped regions correspond to "J"s that obey M1, M2 and M3...
For example, does this correspond to a valid J?

/\
55                   /  \
55  55                /   /\
55  55  35             /   /  \
55  55  35  35          /\  /\   \
51  55  33  35  35       /\ \/  \   \        (*)
50  51  33  33  35  35     \ \ \  /\  /
50  51  33  13  35        \ \/\/  \/
50  21  13  13           \/ /   /
21  13  13              \/   /
13  13                 \  /
13                    \/

It turns out that it doesn't.
Look at the frontiers between equivalence classes.
We will see two separate arguments.

The first, based on the fact that J(Q)=J(R) implies J(P&Q)=J(P&R),
shows that borders that make lambdas and mirrored lambdas are not
possible,

/\         /\                  /\     /\
/\ \	     / /\                /\/\   /\/\
\/\/	     \/\/                \/ /   \ \/
\/ 	      \/                  \/     \/

(lambda)  (mirrored lambda)     (y)   (mirrored y)

and the second, based on the fact that St(P) and St(Q) imply
St(P&Q), shows that borders that make "y"s and mirrored "y"s are
not possible.

* DONE Borders like lambdas are not possible

Lemma: if Q==R then P&Q==P&R  ("&" preserves equivalence).
Proof:

                      J(Q)=J(R)
                      -------------------
J(Q)=J(R)             J(P)&J(Q)=J(P)&J(R)
-------------&pe  :=  ===================
J(P&Q)=J(P&R)            J(P&Q)=J(P&R)

Diagrammatically, we can use "P&" to "move southwest", as here:

P     Q         52      34
\  /  \          \  /   \
P&Q    R          32      14
\  /             \   /
P&R

In the diagram (*) we have J(34)=J(14)=35, but J(32)=33 and J(12)=13;
if we take P=52, Q=34, R=14, the lemma &pe would imply that J(32)=J(12),
which does not hold. Diagrammatically, the lemma "&pe" implies that
the lines marking the frontiers between different equivalence classes
cannot make either a "lambda" or its mirror image...

/\         /\
/\ \	     / /\
\/\/	     \/\/
\/ 	      \/

(lambda)  (mirrored lambda)

* DONE Borders like "y"s are not possible
# (find-equailfile "sgml-input.el" ";; LEFT SEMIDIRECT PRODUCT")
# (find-equailfile "sgml-input.el" ";; RIGHT SEMIDIRECT PRODUCT")

Lemma: St(P) and St(Q) imply St(P&Q)    ("&" preserves stability)
Proof:

                       P=J(P)  Q=J(Q)
                       ==============   ----------------M3
P=J(P)  Q=J(Q)         P&Q=J(P)&J(Q)    J(P)&J(Q)=J(P&Q)
--------------&ps  :=  ---------------------------------
  P&Q=J(P&Q)                    P&Q=J(P&Q)

We start with the case in which our ZHA is a lozenge.
We know that equivalence classes are diamond-shaped regions.
When the ZHA is a lozenge, the equivalence classes are lozenges too.

Let's look at this example. Here are a ZHA H and a function J: H -> H
on it; J obeys M1 and M2 - remember the geometrical meaning of M1 and
M2! - and we want to show that J cannot obey M3 too.

33                 33	          /\
32  23	         33  33	         /  \
31  22  13	       31  33  13	/\  /\
30  21  12  03     31  31  13  13    /  \/  \
20  11  02	       31  13  13      \  /   /
10  01	         13  13	        \/   /
00                 13            \  /
\/

Let P' be the truth-value at the left of the vertex of the "y",
and Q' be the truth-value at the right of the vertex of the "y".
In this example, P'=12 and Q'=21.
Let P be J(P'),
and Q be J(Q').
The line from P' to P points exactly northwest, and
the line from Q' to Q points exactly northeast;
because of this, P'&Q' is equal to P&Q, and P'&Q' is the truth-value
just below the vertex of the "y".
Let R be P&Q = P'&Q'.

P and Q are stable, and so, by "&ps", R=P&Q=P'&Q' must be stable too.
But, by hypothesis, the top element of the equivalence class of R is Q,
so R is not stable, and thus the J in our example cannot obey M3.

* TODO Borders like "y"s are not possible (2)

When our ZHA is not a lozenge, it is not trivial to show that the path
from P' to P is exactly northwest, and that the path from Q' to Q is
exactly northeast.

An example:

(...)

[I don't know how to complete the details]

* HALF Equivalence classes are diamond-shaped regions

A region (i.e., a subset) of a ZDAG is _diamond-shaped_ if it has
* a top element
* a bottom element
* all elements between the top and the bottom, and nothing else

* HALF Stable elements and diamond-shaped regions
Two slogans:
Equivalent elements form diamond-shaped regions
Stable elements form a sub-ZDAG

Definitions:

P==Q  := P* <-> Q*                (P==Q  is true or false)
St(P) := P == P*                  (St(P) is true or false)
Sts   := {P in Omega | P == P*}   (Sts   is a set)
Di(P) := {Q in Omega | P == Q}    (Di(P) is a set)
Dis   := {Di(P) | P in Omega}     (Dis   is a partition of Omega)

33             33              33
32  23         33  33          33  33
22  13         33  13          33  33
12  03         13  13          33  03
11  02         11  13          33  03
10  01         11  11          10  03
00             11              00

Pid            PJ              Pnotnot

* TODO Modalities induce Z-quotients
* TODO Z-quotients are modalities
* TODO Modalities correspond to morphisms of HAs
* ---
* --- Categories for children
* ZDAGs as graphs
* ZDAGs as categories
* K as a category
# (find-xpdfpage "~/LATEX/2014istanbul-a.pdf")

* Set as our archetypal category
* Posets, relations, RT-closure
* Products
* Judgments
* Internal language
* ZHAs as categories
* ---
* --- Toposes
* Ztoposes
* ---
* Garbage
* ZDAGs: abuse of language (delete this)

Sometimes, by abuse of language, we will say that things like (D,BPM(D)*) are ZDAGs...
A (D,BPM(D)*) is not acyclic, but its only cycles are made of identity arrows
(so we may think that they "don't really count" as cycles...)

We will sometimes use the bullet notation for ZDAGs -
both for the things like (D,BPM(D)) and for the things like (D,BPM(D)*),
and sometimes for (D,BMM(D)), (D,BMM(D)*),
and for (D,BPM(D)*) and (D,BMM(D)*) seen as categories, too...

and see the disambiguation tricks.

Note that all these things are isomorphic,

(K,BPM(K)) == (antiK,WPM(antiK)) == (smashedantiK,WPM(smashedantiK)), 12345

but this DAG is not isomorphic to a ZDAG:

(pentagon)

* Local variables
# (find-es "org" "TODO-and-DONE")
#+TODO: TODO | DONE | HALF

# Local Variables:
# coding: raw-text-unix
# coding: utf-8-unix
# modes: (org-mode emacs-lisp-mode fundamental-mode)
# End: