# User:Jon Awbrey/PEIRCE LIST ARCHIVE

## Work Area

### Factoring Propositions Via Coordinate Maps

```
Nota Bene, for possible future use.  In the larger current of work
with respect to which this meander of a conduit was initially both
diversionary and tributary, before those high and dry regensquirm
years when it turned into an intellectual interglacial oxbow lake,
I once had in mind a scape in which expressions in a definitional
lattice were ordered according to their simplicity on some scale
or another, and in this setting the word "sense" was actually an
acronym for "semantically equivalent next-simplest expression".

| If this is starting to sound a little bit familiar,
| it may be because the relationship between the two
| kinds of pictures of propositions, namely:
|
| 1.  Propositions about things in general, here,
|     about the times when certain facts are true,
|     having the form of functions f : X -> B,
|
| 2.  Propositions about the resulting codes, namely,
|     the bit-vector labels on venn diagram cells,
|     having the form of functions f' : B^k -> B,
|
| is an epically old story, one that I, myself,
| have related once or twice upon a time before,
| to wit, at least, at the following two cites:
|
| http://suo.ieee.org/email/msg01251.html
| http://suo.ieee.org/email/msg01293.html
|
| There, and now here, once more, and again, it may be observed
| that the relation is one whereby the proposition f : X -> B,
| the one about things and times and mores in general, factors
| into a coding function c : X -> B^k, followed by a derived
| proposition f' : B^k -> B that judges the resulting codes.
|
|                         f
|                   X o------>o B
|                      \     ^
|   c = <x_1, ..., x_k> \   / f'
|                        v /
|                         o
|                        B^k
|
| You may remember that this was supposed to illustrate
| the "factoring" of a proposition f : X -> B = {0, 1}
| into the composition f'(c(x)), where c : X -> B^k is
| the "coding" of each x in X as a k-bit string in B^k,
| and where f' is the mapping of codes into a co-domain
| that we interpret as t-f-values, B = {0, 1} = {F, T}.

In short, there is the standard equivocation ("systematic ambiguity"?) as to
whether we are talking about the "applied" and concretely typed proposition
f : X -> B or the "pure" and abstractly typed proposition f' : B^k -> B.
Or we can think of the latter object as the approximate code icon of
the former object.

Anyway, these types of formal objects are the sorts of things that
I take to be the denotational objects of propositional expressions.
These objects, along with their invarious and insundry mathematical
properties, are the orders of things that I am talking about when
I refer to the "invariant structures in these objects themselves".

"Invariant" means "invariant under a suitable set of transformations",
in this case the translations between various languages that preserve
the objects and the structures in question.  In extremest generality,
this is what universal constructions in category theory are all about.

In summation, the functions f : X -> B and f' : B^k -> B have invariant, formal,
mathematical, objective properties that any adequate language might eventually
evolve to express, only some languages express them more obscurely than others.

To be perfectly honest, I continue to be surprised that anybody in this group
has trouble with this.  There are perfectly apt and familiar examples in the
contrast between roman numerals and arabic numerals, or the contrast between
redundant syntaxes, like those that use the pentalphabet {~, &, v, =>, <=>},
and trimmer syntaxes, like those used in existential and conceptual graphs.
Every time somebody says "Let's take {~, &, v, =>, <=>} as an operational
basis for logic" it's just like that old joke that mathematicians tell on
engineers where the ingenue in question says "1 is a prime, 2 is a prime,
3 is a prime, 4 is a prime, ..." -- and I know you think that I'm being
hyperbolic, but I'm really only up to parabolas here ...

I have already refined my criticism so that it does not apply to
the spirit of FOL or KIF or whatever, but only to the letters of
specific syntactic proposals.  There is a fact of the matter as
to whether a concrete language provides a clean or a cluttered
basis for representing the identified set of formal objects.
And it shows up in pragmatic realities like the efficiency
of real time concept formation, concept use, learnability,
reasoning power, and just plain good use of real time.
These are the dire consequences that I learned in my
very first tries at mathematically oriented theorem
automation, and the only factor that has obscured
them in mainstream work since then is the speed
with which folks can now do all of the same
old dumb things that they used to do on
their way to kludging out the answers.
```

## Zeroth Order Logic

### ZOL. Note 1

```
Here is a scaled-down version of one of my very first applications,
having to do with the demographic variables in a survey data base.

This Example illustrates the use of 2-variate logical forms
for expressing and reasoning about the logical constraints
that are involved in the following types of situations:

1.  Distinction:     A =/= B
    Also known as:   logical inequality, exclusive disjunction
    Represented as:  ( A , B )
    Graphed as:
|
|   A   B
|   o---o
|    \ /
|     @

2.  Equality:        A = B
    Also known as:   logical equivalence, if and only if, A <=> B
    Represented as:  (( A , B ))
    Graphed as:
|
|   A   B
|   o---o
|    \ /
|     o
|     |
|     @

3.  Implication:     A => B
    Also known as:   entailment, if-then
    Represented as:  ( A ( B ))
    Graphed as:
|
|   A   B
|   o---o
|   |
|   @

Example of a proposition expressing a "zeroth order theory" (ZOT):

Consider the following text, written in what I am calling "Ref Log",
also known as the "Cactus Language" syntax for propositional logic:

|   ( male  , female )
|   (( boy  , male child ))
|   (( girl , female child ))
|   ( child ( human ))

Graphed as:

|                   boy   male     girl   female
|                     o---o child     o---o child
|  male   female       \ /             \ /          child   human
|     o---o             o               o               o---o
|      \ /              |               |               |
|       @               @               @               @

Nota Bene.  Due to graphic constraints -- no, the other
kind of graphic constraints -- of the immediate medium,
I am forced to string out the logical conjuncts of the
actual cactus graph for this situation, one that might
sufficiently be reasoned out from the exhibit supra by
fusing together the four roots of the severed cactus.

Either of these expressions, text or graph, is equivalent to
what would otherwise be written in a more ordinary syntax as:

|  male  =/=  female
|  boy   <=>  male child
|  girl  <=>  female child
|  child  =>  human

This is actually a single proposition, a conjunction of four lines:
one distinction, two equations, and one implication.  Together these
amount to a set of definitions conjointly constraining the logical
compatibility of the six feature names that appear.  They may be
thought of as sculpting out a space of models that is some subset
of the 2^6 = 64 possible interpretations, and thereby shaping some
universe of discourse.

Once this backdrop is defined, it is possible to "query" this universe,
simply by conjoining additional propositions in further constraint of
the underlying set of models.  This has many uses, as we shall see.

We are considering an Example of a propositional expression
that is formed on the following "alphabet" or "lexicon" of
six "logical features" or "boolean variables":

$A$  =  {"boy", "child", "female", "girl", "human", "male"}.

The expression is this:

|   ( male  , female )
|   (( boy  , male child ))
|   (( girl , female child ))
|   ( child ( human ))

Putting it very roughly -- and putting off a better description
of it till later -- we may think of this expression as notation
for a boolean function f : %B%^6 -> %B%.  This is what we might
call the "abstract type" of the function, but we will also find
it convenient on many occasions to represent the points of this
particular copy of the space %B%^6 in terms of the positive and
negative versions of the features from $A$ that serve to encase
them as logical "cells", as they are called in the venn diagram
picture of the corresponding universe of discourse X = [$A$].

Just for concreteness, this form of representation begins and ends:

<0,0,0,0,0,0>  =  (boy)(child)(female)(girl)(human)(male),
<0,0,0,0,0,1>  =  (boy)(child)(female)(girl)(human) male ,
<0,0,0,0,1,0>  =  (boy)(child)(female)(girl) human (male),
<0,0,0,0,1,1>  =  (boy)(child)(female)(girl) human  male ,
...
<1,1,1,1,0,0>  =   boy  child  female  girl (human)(male),
<1,1,1,1,0,1>  =   boy  child  female  girl (human) male ,
<1,1,1,1,1,0>  =   boy  child  female  girl  human (male),
<1,1,1,1,1,1>  =   boy  child  female  girl  human  male .
```
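The whole list of cell labels can be generated mechanically.  This sketch (mine, for illustration) writes a coordinate tuple of %B%^6 as its cell, parenthesizing each negated feature in the manner shown above.

```python
A = ("boy", "child", "female", "girl", "human", "male")

def cell(bits):
    """Render a point of B^6 as a cell label, with (name) marking negation."""
    return " ".join(name if bit else "(" + name + ")"
                    for name, bit in zip(A, bits))

print(cell((0, 0, 0, 0, 0, 1)))  # -> "(boy) (child) (female) (girl) (human) male"
print(cell((1, 1, 1, 1, 1, 1)))  # -> "boy child female girl human male"
```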

### ZOL. Note 2

```
I continue with the previous Example, that I bring forward and sum up here:

|                  boy   male          girl   female
|                    o---o child          o---o child
|  male   female      \ /                  \ /               child  human
|     o---o            o                    o                    o--o
|      \ /             |                    |                    |
|       @              @                    @                    @
|
| (male , female)((boy , male child))((girl , female child))(child (human))

For my master's piece in Quantitative Psychology (Michigan State, 1989),
I wrote a program, "Theme One" (TO) by name, that among its other duties
operates to process the expressions of the cactus language in many of the
most pressing ways that we need in order to be able to use it effectively
as a propositional calculus.  The operational component of TO where one
does the work of this logical modeling is called "Study", and the core
of the logical calculator deep in the heart of this Study section is
a suite of computational functions that evolve a particular species
of "normal form", analogous to a "disjunctive normal form" (DNF),
from whatever expression they are tendered as their input.

This "canonical", "normal", or "stable" form of logical expression --
I'll refine the distinctions among these subforms all in good time --
permits succinct depiction as an "arboreal boolean expansion" (ABE).

Once again, the graphic limitations of this space prevail against
any disposition that I might have to lay out a really substantial
case before you, of the brand that might have a chance to impress
you with the aptitude of this ilk of ABE in rooting out the truth
of many a complexly obscurely subtly adamant whetstone of our wit.

So let me just illustrate the way of it with one conjunct of our Example.
What follows will be a sequence of expressions, each one after the first
being logically equal to the one that precedes it:

Step 1

|    g    fc
|     o---o
|      \ /
|       o
|       |
|       @

Step 2

|                   o
|         fc        |   fc
|     o---o         o---o
|      \ /           \ /
|       o             o
|       |             |
|     g o-------------o--o g
|        \           /
|         \         /
|          \       /
|           \     /
|            \   /
|             \ /
|              @

Step 3

|      f c
|       o
|       |            f c
|       o             o
|       |             |
|     g o-------------o--o g
|        \           /
|         \         /
|          \       /
|           \     /
|            \   /
|             \ /
|              @

Step 4

|           o
|           |
|     c o   o c           o
|       |   |             |
|       o   o       c o   o c
|       |   |         |   |
|     f o---o--o f  f o---o--o f
|        \ /           \ /
|       g o-------------o--o g
|          \           /
|           \         /
|            \       /
|             \     /
|              \   /
|               \ /
|                @

Step 5

|           o       c o
|       c   |         |
|     f o---o--o f  f o---o--o f
|        \ /           \ /
|       g o-------------o--o g
|          \           /
|           \         /
|            \       /
|             \     /
|              \   /
|               \ /
|                @

Step 6

|                                       o
|                                       |
|           o                       o   o
|           |                       |   |
|     c o---o--o c      o         c o---o--o c
|        \ /            |            \ /
|       f o-------------o--o f      f o-------------o--o f
|          \           /               \           /
|           \         /                 \         /
|            \       /                   \       /
|             \     /                     \     /
|              \   /                       \   /
|               \ /                         \ /
|              g o---------------------------o--o g
|                 \                         /
|                  \                       /
|                   \                     /
|                    \                   /
|                     \                 /
|                      \               /
|                       \             /
|                        \           /
|                         \         /
|                          \       /
|                           \     /
|                            \   /
|                             \ /
|                              @

Step 7

|           o                       o
|           |                       |
|     c o---o--o c      o         c o---o--o c
|        \ /            |            \ /
|       f o-------------o--o f      f o-------------o--o f
|          \           /               \           /
|           \         /                 \         /
|            \       /                   \       /
|             \     /                     \     /
|              \   /                       \   /
|               \ /                         \ /
|              g o---------------------------o--o g
|                 \                         /
|                  \                       /
|                   \                     /
|                    \                   /
|                     \                 /
|                      \               /
|                       \             /
|                        \           /
|                         \         /
|                          \       /
|                           \     /
|                            \   /
|                             \ /
|                              @

This last expression is the ABE of the input expression.
It can be transcribed into ordinary logical language as:

| either girl and
|        either female and
|               either child and true
|               or not child and false
|        or not female and false
| or not girl and
|        either female and
|               either child and false
|               or not child and true
|        or not female and true

The expression "((girl , female child))" is sufficiently evaluated
by considering its logical values on the coordinate tuples of %B%^3,
or its indications on the cells of the associated venn diagram that
depicts the universe of discourse, namely, on these eight arguments:

<1, 1, 1>  =   girl  female  child ,
<1, 1, 0>  =   girl  female (child),
<1, 0, 1>  =   girl (female) child ,
<1, 0, 0>  =   girl (female)(child),
<0, 1, 1>  =  (girl) female  child ,
<0, 1, 0>  =  (girl) female (child),
<0, 0, 1>  =  (girl)(female) child ,
<0, 0, 0>  =  (girl)(female)(child).

The ABE output expression tells us the logical values of
the input expression on each of these arguments, doing so
by attaching the values to the leaves of a tree, and acting
as an "efficient" or "lazy" evaluator in the sense that the
process that generates the tree follows each path only up to
the point in the tree where it can determine the values on the
entire subtree beyond that point.  Thus, the ABE tree tells us:

girl  female  child   -> 1
girl  female (child)  -> 0
girl (female)         -> 0
(girl) female  child  -> 0
(girl) female (child) -> 1
(girl)(female)        -> 1

Picking out the interpretations that yield the truth of the expression,
and expanding the corresponding partial argument tuples, we arrive at
the following interpretations that satisfy the input expression:

 girl  female  child  -> 1
(girl) female (child) -> 1
(girl)(female) child  -> 1
(girl)(female)(child) -> 1

In sum, if it's a female and a child, then it's a girl,
and if it's either not a female or not a child or both,
then it's not a girl.
```

### ZOL. Note 3

```
Brief Automata

By way of providing a simple illustration of Cook's Theorem,
that "Propositional Satisfiability is NP-Complete", here is
an exposition of one way to translate Turing Machine set-ups
into propositional expressions, employing the Ref Log Syntax
for Prop Calc that I described in a couple of earlier notes:

Notation:

Stilt(k)  =  Space and Time Limited Turing Machine,
             with k units of space and k units of time.

Stunt(k)  =  Space and Time Limited Turing Machine,
             for computing the parity of a bit string,
             with Number of Tape cells of input equal to k.

I will follow the pattern of the discussion in the book of
Herbert Wilf, 'Algorithms & Complexity' (1986), pages 188-201,
but translate into Ref Log, which is more efficient with respect
to the number of propositional clauses that are required.

Parity Machine

|                    1/1/+1
|                   ------->
|               /\ /        \ /\
|      0/0/+1  ^  0          1  ^  0/0/+1
|               \/|\        /|\/
|                 | <------- |
|         #/#/-1  |  1/1/+1  |  #/#/-1
|                 |          |
|                 v          v
|                 #          *

o-------o--------o-------------o---------o------------o
| State | Symbol | Next Symbol | Ratchet | Next State |
|   Q   |   S    |     S'      |   dR    |     Q'     |
o-------o--------o-------------o---------o------------o
|   0   |   0    |     0       |   +1    |     0      |
|   0   |   1    |     1       |   +1    |     1      |
|   0   |   #    |     #       |   -1    |     #      |
|   1   |   0    |     0       |   +1    |     1      |
|   1   |   1    |     1       |   +1    |     0      |
|   1   |   #    |     #       |   -1    |     *      |
o-------o--------o-------------o---------o------------o

The TM has a "finite automaton" (FA) as its component.
Let us refer to this particular FA by the name of "M".

The "registers" are also called "tape-cells" or "tape-squares".

In order to consider how the finitely "stilted" rendition of this TM
can be translated into the form of a purely propositional description,
one now fixes k and limits the discussion to talking about a Stilt(k),
which is really not a true TM anymore but a finite automaton in disguise.

In this example, for the sake of a minimal illustration, we choose k = 2,
and discuss Stunt(2).  Since the zeroth tape cell and the last tape cell
are occupied with bof and eof marks "#", this amounts to only one digit
of significant computation.

To translate Stunt(2) into propositional form we use
the following collection of propositional variables:

For the "Present State Function" QF : P -> Q,

{p0_q#, p0_q*, p0_q0, p0_q1,
 p1_q#, p1_q*, p1_q0, p1_q1,
 p2_q#, p2_q*, p2_q0, p2_q1,
 p3_q#, p3_q*, p3_q0, p3_q1}

The propositional expression of the form "pi_qj" says:

| At the point-in-time p_i,
| the finite machine M is in the state q_j.

For the "Present Register Function" RF : P -> R,

{p0_r0, p0_r1, p0_r2, p0_r3,
 p1_r0, p1_r1, p1_r2, p1_r3,
 p2_r0, p2_r1, p2_r2, p2_r3,
 p3_r0, p3_r1, p3_r2, p3_r3}

The propositional expression of the form "pi_rj" says:

| At the point-in-time p_i,
| the tape-head H is on the tape-cell r_j.

For the "Present Symbol Function" SF : P -> (R -> S),

{p0_r0_s#, p0_r0_s*, p0_r0_s0, p0_r0_s1,
 p0_r1_s#, p0_r1_s*, p0_r1_s0, p0_r1_s1,
 p0_r2_s#, p0_r2_s*, p0_r2_s0, p0_r2_s1,
 p0_r3_s#, p0_r3_s*, p0_r3_s0, p0_r3_s1,
 p1_r0_s#, p1_r0_s*, p1_r0_s0, p1_r0_s1,
 p1_r1_s#, p1_r1_s*, p1_r1_s0, p1_r1_s1,
 p1_r2_s#, p1_r2_s*, p1_r2_s0, p1_r2_s1,
 p1_r3_s#, p1_r3_s*, p1_r3_s0, p1_r3_s1,
 p2_r0_s#, p2_r0_s*, p2_r0_s0, p2_r0_s1,
 p2_r1_s#, p2_r1_s*, p2_r1_s0, p2_r1_s1,
 p2_r2_s#, p2_r2_s*, p2_r2_s0, p2_r2_s1,
 p2_r3_s#, p2_r3_s*, p2_r3_s0, p2_r3_s1,
 p3_r0_s#, p3_r0_s*, p3_r0_s0, p3_r0_s1,
 p3_r1_s#, p3_r1_s*, p3_r1_s0, p3_r1_s1,
 p3_r2_s#, p3_r2_s*, p3_r2_s0, p3_r2_s1,
 p3_r3_s#, p3_r3_s*, p3_r3_s0, p3_r3_s1}

The propositional expression of the form "pi_rj_sk" says:

| At the point-in-time p_i,
| the tape-cell r_j bears the mark s_k.

o~~~~~~~~~o~~~~~~~~~o~~INPUTS~~o~~~~~~~~~o~~~~~~~~~o

Here are the Initial Conditions
for the two possible inputs to the
Ref Log redaction of this Parity TM:

o~~~~~~~~~o~~~~~~~~~o~INPUT~0~o~~~~~~~~~o~~~~~~~~~o

Initial Conditions:

p0_q0

p0_r1

p0_r0_s#
p0_r1_s0
p0_r2_s#

The Initial Conditions are given by a logical conjunction
that is composed of 5 basic expressions, altogether stating:

| At the point-in-time p_0, M is in the state q_0, and
| At the point-in-time p_0, H is on the cell  r_1, and
| At the point-in-time p_0, cell r_0 bears the mark "#", and
| At the point-in-time p_0, cell r_1 bears the mark "0", and
| At the point-in-time p_0, cell r_2 bears the mark "#".

o~~~~~~~~~o~~~~~~~~~o~INPUT~1~o~~~~~~~~~o~~~~~~~~~o

Initial Conditions:

p0_q0

p0_r1

p0_r0_s#
p0_r1_s1
p0_r2_s#

The Initial Conditions are given by a logical conjunction
that is composed of 5 basic expressions, altogether stating:

| At the point-in-time p_0, M is in the state q_0, and
| At the point-in-time p_0, H is on the cell  r_1, and
| At the point-in-time p_0, cell r_0 bears the mark "#", and
| At the point-in-time p_0, cell r_1 bears the mark "1", and
| At the point-in-time p_0, cell r_2 bears the mark "#".

o~~~~~~~~~o~~~~~~~~~o~PROGRAM~o~~~~~~~~~o~~~~~~~~~o

And here, yet again, just to store it nearby,
is the logical rendition of the TM's program:

Mediate Conditions:

( p0_q#  ( p1_q# ))
( p0_q*  ( p1_q* ))

( p1_q#  ( p2_q# ))
( p1_q*  ( p2_q* ))

Terminal Conditions:

(( p2_q# )( p2_q* ))

State Partition:

(( p0_q0 ),( p0_q1 ),( p0_q# ),( p0_q* ))
(( p1_q0 ),( p1_q1 ),( p1_q# ),( p1_q* ))
(( p2_q0 ),( p2_q1 ),( p2_q# ),( p2_q* ))

Register Partition:

(( p0_r0 ),( p0_r1 ),( p0_r2 ))
(( p1_r0 ),( p1_r1 ),( p1_r2 ))
(( p2_r0 ),( p2_r1 ),( p2_r2 ))

Symbol Partition:

(( p0_r0_s0 ),( p0_r0_s1 ),( p0_r0_s# ))
(( p0_r1_s0 ),( p0_r1_s1 ),( p0_r1_s# ))
(( p0_r2_s0 ),( p0_r2_s1 ),( p0_r2_s# ))

(( p1_r0_s0 ),( p1_r0_s1 ),( p1_r0_s# ))
(( p1_r1_s0 ),( p1_r1_s1 ),( p1_r1_s# ))
(( p1_r2_s0 ),( p1_r2_s1 ),( p1_r2_s# ))

(( p2_r0_s0 ),( p2_r0_s1 ),( p2_r0_s# ))
(( p2_r1_s0 ),( p2_r1_s1 ),( p2_r1_s# ))
(( p2_r2_s0 ),( p2_r2_s1 ),( p2_r2_s# ))

Interaction Conditions:

(( p0_r0 ) p0_r0_s0 ( p1_r0_s0 ))
(( p0_r0 ) p0_r0_s1 ( p1_r0_s1 ))
(( p0_r0 ) p0_r0_s# ( p1_r0_s# ))

(( p0_r1 ) p0_r1_s0 ( p1_r1_s0 ))
(( p0_r1 ) p0_r1_s1 ( p1_r1_s1 ))
(( p0_r1 ) p0_r1_s# ( p1_r1_s# ))

(( p0_r2 ) p0_r2_s0 ( p1_r2_s0 ))
(( p0_r2 ) p0_r2_s1 ( p1_r2_s1 ))
(( p0_r2 ) p0_r2_s# ( p1_r2_s# ))

(( p1_r0 ) p1_r0_s0 ( p2_r0_s0 ))
(( p1_r0 ) p1_r0_s1 ( p2_r0_s1 ))
(( p1_r0 ) p1_r0_s# ( p2_r0_s# ))

(( p1_r1 ) p1_r1_s0 ( p2_r1_s0 ))
(( p1_r1 ) p1_r1_s1 ( p2_r1_s1 ))
(( p1_r1 ) p1_r1_s# ( p2_r1_s# ))

(( p1_r2 ) p1_r2_s0 ( p2_r2_s0 ))
(( p1_r2 ) p1_r2_s1 ( p2_r2_s1 ))
(( p1_r2 ) p1_r2_s# ( p2_r2_s# ))

Transition Relations:

( p0_q0  p0_r1  p0_r1_s0  ( p1_q0  p1_r2  p1_r1_s0 ))
( p0_q0  p0_r1  p0_r1_s1  ( p1_q1  p1_r2  p1_r1_s1 ))
( p0_q0  p0_r1  p0_r1_s#  ( p1_q#  p1_r0  p1_r1_s# ))
( p0_q0  p0_r2  p0_r2_s#  ( p1_q#  p1_r1  p1_r2_s# ))

( p0_q1  p0_r1  p0_r1_s0  ( p1_q1  p1_r2  p1_r1_s0 ))
( p0_q1  p0_r1  p0_r1_s1  ( p1_q0  p1_r2  p1_r1_s1 ))
( p0_q1  p0_r1  p0_r1_s#  ( p1_q*  p1_r0  p1_r1_s# ))
( p0_q1  p0_r2  p0_r2_s#  ( p1_q*  p1_r1  p1_r2_s# ))

( p1_q0  p1_r1  p1_r1_s0  ( p2_q0  p2_r2  p2_r1_s0 ))
( p1_q0  p1_r1  p1_r1_s1  ( p2_q1  p2_r2  p2_r1_s1 ))
( p1_q0  p1_r1  p1_r1_s#  ( p2_q#  p2_r0  p2_r1_s# ))
( p1_q0  p1_r2  p1_r2_s#  ( p2_q#  p2_r1  p2_r2_s# ))

( p1_q1  p1_r1  p1_r1_s0  ( p2_q1  p2_r2  p2_r1_s0 ))
( p1_q1  p1_r1  p1_r1_s1  ( p2_q0  p2_r2  p2_r1_s1 ))
( p1_q1  p1_r1  p1_r1_s#  ( p2_q*  p2_r0  p2_r1_s# ))
( p1_q1  p1_r2  p1_r2_s#  ( p2_q*  p2_r1  p2_r2_s# ))

o~~~~~~~~~o~~~~~~~~~o~INTERPRETATION~o~~~~~~~~~o~~~~~~~~~o

Interpretation of the Propositional Program:

Mediate Conditions:

( p0_q#  ( p1_q# ))
( p0_q*  ( p1_q* ))

( p1_q#  ( p2_q# ))
( p1_q*  ( p2_q* ))

In Ref Log, an expression of the form "( X ( Y ))"
expresses an implication or an if-then proposition:
"Not X without Y",  "If X then Y",  "X => Y",  etc.

A text string expression of the form "( X ( Y ))"
parses to a graphical data-structure of the form:

X   Y
o---o
|
@

All together, these Mediate Conditions state:

| If at p_0  M is in state q_#, then at p_1  M is in state q_#, and
| If at p_0  M is in state q_*, then at p_1  M is in state q_*, and
| If at p_1  M is in state q_#, then at p_2  M is in state q_#, and
| If at p_1  M is in state q_*, then at p_2  M is in state q_*.

Terminal Conditions:

(( p2_q# )( p2_q* ))

In Ref Log, an expression of the form "(( X )( Y ))"
expresses a disjunction "X or Y" and it parses into:

X   Y
o   o
 \ /
  o
  |
  @

In effect, the Terminal Conditions state:

| At p_2,  M is in state q_#, or
| At p_2,  M is in state q_*.

State Partition:

(( p0_q0 ),( p0_q1 ),( p0_q# ),( p0_q* ))
(( p1_q0 ),( p1_q1 ),( p1_q# ),( p1_q* ))
(( p2_q0 ),( p2_q1 ),( p2_q# ),( p2_q* ))

In Ref Log, an expression of the form "(( e_1 ),( e_2 ),( ... ),( e_k ))"
expresses the fact that "exactly one of the e_j is true, for j = 1 to k".
Expressions of this form are called "universal partition" expressions, and
they parse into a type of graph called a "painted and rooted cactus" (PARC):

e_1   e_2   ...   e_k
 o     o           o
 |     |           |
 o-----o--- ... ---o
  \               /
   \             /
    \           /
     \         /
      \       /
       \     /
        \   /
         \ /
          @

The State Partition expresses the conditions that:

| At each of the points-in-time p_i, for i = 0 to 2,
| M can be in exactly one state q_j, for j in the set {0, 1, #, *}.

Register Partition:

(( p0_r0 ),( p0_r1 ),( p0_r2 ))
(( p1_r0 ),( p1_r1 ),( p1_r2 ))
(( p2_r0 ),( p2_r1 ),( p2_r2 ))

The Register Partition expresses the conditions that:

| At each of the points-in-time p_i, for i = 0 to 2,
| H can be on exactly one cell  r_j, for j = 0 to 2.

Symbol Partition:

(( p0_r0_s0 ),( p0_r0_s1 ),( p0_r0_s# ))
(( p0_r1_s0 ),( p0_r1_s1 ),( p0_r1_s# ))
(( p0_r2_s0 ),( p0_r2_s1 ),( p0_r2_s# ))

(( p1_r0_s0 ),( p1_r0_s1 ),( p1_r0_s# ))
(( p1_r1_s0 ),( p1_r1_s1 ),( p1_r1_s# ))
(( p1_r2_s0 ),( p1_r2_s1 ),( p1_r2_s# ))

(( p2_r0_s0 ),( p2_r0_s1 ),( p2_r0_s# ))
(( p2_r1_s0 ),( p2_r1_s1 ),( p2_r1_s# ))
(( p2_r2_s0 ),( p2_r2_s1 ),( p2_r2_s# ))

The Symbol Partition expresses the conditions that:

| At each of the points-in-time p_i, for i in {0, 1, 2},
| in each of the tape-registers r_j, for j in {0, 1, 2},
| there can be exactly one sign s_k, for k in {0, 1, #}.

Interaction Conditions:

(( p0_r0 ) p0_r0_s0 ( p1_r0_s0 ))
(( p0_r0 ) p0_r0_s1 ( p1_r0_s1 ))
(( p0_r0 ) p0_r0_s# ( p1_r0_s# ))

(( p0_r1 ) p0_r1_s0 ( p1_r1_s0 ))
(( p0_r1 ) p0_r1_s1 ( p1_r1_s1 ))
(( p0_r1 ) p0_r1_s# ( p1_r1_s# ))

(( p0_r2 ) p0_r2_s0 ( p1_r2_s0 ))
(( p0_r2 ) p0_r2_s1 ( p1_r2_s1 ))
(( p0_r2 ) p0_r2_s# ( p1_r2_s# ))

(( p1_r0 ) p1_r0_s0 ( p2_r0_s0 ))
(( p1_r0 ) p1_r0_s1 ( p2_r0_s1 ))
(( p1_r0 ) p1_r0_s# ( p2_r0_s# ))

(( p1_r1 ) p1_r1_s0 ( p2_r1_s0 ))
(( p1_r1 ) p1_r1_s1 ( p2_r1_s1 ))
(( p1_r1 ) p1_r1_s# ( p2_r1_s# ))

(( p1_r2 ) p1_r2_s0 ( p2_r2_s0 ))
(( p1_r2 ) p1_r2_s1 ( p2_r2_s1 ))
(( p1_r2 ) p1_r2_s# ( p2_r2_s# ))

In briefest terms, the Interaction Conditions merely express the fact
that the mark in a tape-cell cannot change between two points-in-time
unless the tape-head is over the cell in question at the initial one
of those points-in-time.  All that we have to do is to see how they
manage to say this.

In Ref Log, an expression of the following form:

"(( p<i>_r<j> ) p<i>_r<j>_s<k> ( p<i+1>_r<j>_s<k> ))",

and which parses as the graph:

   p<i>_r<j> o   o p<i+1>_r<j>_s<k>
              \ /
p<i>_r<j>_s<k> o
               |
               @

can be read in the form of the following implication:

| If
| at the point-in-time p<i>, the tape-cell r<j> bears the mark s<k>,
| but it is not the case that
| at the point-in-time p<i>, the tape-head is on the tape-cell r<j>.
| then
| at the point-in-time p<i+1>, the tape-cell r<j> bears the mark s<k>.

Folks among us of a certain age and a peculiar manner of acculturation will
recognize these as the "Frame Conditions" for the change of state of the TM.

Transition Relations:

( p0_q0  p0_r1  p0_r1_s0  ( p1_q0  p1_r2  p1_r1_s0 ))
( p0_q0  p0_r1  p0_r1_s1  ( p1_q1  p1_r2  p1_r1_s1 ))
( p0_q0  p0_r1  p0_r1_s#  ( p1_q#  p1_r0  p1_r1_s# ))
( p0_q0  p0_r2  p0_r2_s#  ( p1_q#  p1_r1  p1_r2_s# ))

( p0_q1  p0_r1  p0_r1_s0  ( p1_q1  p1_r2  p1_r1_s0 ))
( p0_q1  p0_r1  p0_r1_s1  ( p1_q0  p1_r2  p1_r1_s1 ))
( p0_q1  p0_r1  p0_r1_s#  ( p1_q*  p1_r0  p1_r1_s# ))
( p0_q1  p0_r2  p0_r2_s#  ( p1_q*  p1_r1  p1_r2_s# ))

( p1_q0  p1_r1  p1_r1_s0  ( p2_q0  p2_r2  p2_r1_s0 ))
( p1_q0  p1_r1  p1_r1_s1  ( p2_q1  p2_r2  p2_r1_s1 ))
( p1_q0  p1_r1  p1_r1_s#  ( p2_q#  p2_r0  p2_r1_s# ))
( p1_q0  p1_r2  p1_r2_s#  ( p2_q#  p2_r1  p2_r2_s# ))

( p1_q1  p1_r1  p1_r1_s0  ( p2_q1  p2_r2  p2_r1_s0 ))
( p1_q1  p1_r1  p1_r1_s1  ( p2_q0  p2_r2  p2_r1_s1 ))
( p1_q1  p1_r1  p1_r1_s#  ( p2_q*  p2_r0  p2_r1_s# ))
( p1_q1  p1_r2  p1_r2_s#  ( p2_q*  p2_r1  p2_r2_s# ))

The Transition Conditions merely serve to express,
by means of 16 complex implication expressions,
the data of the TM table that was given above.

o~~~~~~~~~o~~~~~~~~~o~~OUTPUTS~~o~~~~~~~~~o~~~~~~~~~o

And here are the outputs of the computation,
as emulated by its propositional rendition,
and as actually generated within that form
of transmogrification by the program that
I wrote for finding all of the satisfying
interpretations (truth-value assignments)
of propositional expressions in Ref Log:

o~~~~~~~~~o~~~~~~~~~o~OUTPUT~0~o~~~~~~~~~o~~~~~~~~~o

Output Conditions:

p0_q0
p0_r1
p0_r0_s#
p0_r1_s0
p0_r2_s#
p1_q0
p1_r2
p1_r2_s#
p1_r0_s#
p1_r1_s0
p2_q#
p2_r1
p2_r0_s#
p2_r1_s0
p2_r2_s#

The Output Conditions amount to the sole satisfying interpretation,
that is, a "sequence of truth-value assignments" (SOTVA) that make
the entire proposition come out true, and they state the following:

| At the point-in-time p_0, M is in the state q_0,       and
| At the point-in-time p_0, H is on the cell  r_1,       and
| At the point-in-time p_0, cell r_0 bears the mark "#", and
| At the point-in-time p_0, cell r_1 bears the mark "0", and
| At the point-in-time p_0, cell r_2 bears the mark "#", and
|
| At the point-in-time p_1, M is in the state q_0,       and
| At the point-in-time p_1, H is on the cell  r_2,       and
| At the point-in-time p_1, cell r_0 bears the mark "#", and
| At the point-in-time p_1, cell r_1 bears the mark "0", and
| At the point-in-time p_1, cell r_2 bears the mark "#", and
|
| At the point-in-time p_2, M is in the state q_#,       and
| At the point-in-time p_2, H is on the cell  r_1,       and
| At the point-in-time p_2, cell r_0 bears the mark "#", and
| At the point-in-time p_2, cell r_1 bears the mark "0", and
| At the point-in-time p_2, cell r_2 bears the mark "#".

In brief, the output for our sake being the symbol that rests
under the tape-head H when the machine M gets to a rest state,
we are now amazed by the remarkable result that Parity(0) = 0.

o~~~~~~~~~o~~~~~~~~~o~OUTPUT~1~o~~~~~~~~~o~~~~~~~~~o

Output Conditions:

p0_q0
p0_r1
p0_r0_s#
p0_r1_s1
p0_r2_s#
p1_q1
p1_r2
p1_r2_s#
p1_r0_s#
p1_r1_s1
p2_q*
p2_r1
p2_r0_s#
p2_r1_s1
p2_r2_s#

The Output Conditions amount to the sole satisfying interpretation,
that is, a "sequence of truth-value assignments" (SOTVA) that makes
the entire proposition come out true, and they state the following:

| At the point-in-time p_0, M is in the state q_0,       and
| At the point-in-time p_0, H is on the cell  r_1,       and
| At the point-in-time p_0, cell r_0 bears the mark "#", and
| At the point-in-time p_0, cell r_1 bears the mark "1", and
| At the point-in-time p_0, cell r_2 bears the mark "#", and
|
| At the point-in-time p_1, M is in the state q_1,       and
| At the point-in-time p_1, H is on the cell  r_2,       and
| At the point-in-time p_1, cell r_0 bears the mark "#", and
| At the point-in-time p_1, cell r_1 bears the mark "1", and
| At the point-in-time p_1, cell r_2 bears the mark "#", and
|
| At the point-in-time p_2, M is in the state q_*,       and
| At the point-in-time p_2, H is on the cell  r_1,       and
| At the point-in-time p_2, cell r_0 bears the mark "#", and
| At the point-in-time p_2, cell r_1 bears the mark "1", and
| At the point-in-time p_2, cell r_2 bears the mark "#".

In brief, the output for our purposes being the symbol that rests
under the tape-head H when the machine M comes to a rest state,
we are now amazed by the remarkable result that Parity(1) = 1.

I realized after sending that last bunch of bits that there is room
for confusion about what is the input/output of the Study module of
the Theme One program as opposed to what is the input/output of the
"finitely approximated Turing automaton" (FATA).  So here is a better
delineation of what's what.  The input to Study is a text file that
is known as LogFile(Whatever) and the output of Study is a sequence
of text files that summarize the various canonical and normal forms
that it generates.  For short, let us call these NormFile(Whatelse).
With that in mind, here are the actual IO's of Study, excluding the
glosses in square brackets:

o~~~~~~~~~o~~~~~~~~~o~~INPUT~~o~~~~~~~~~o~~~~~~~~~o

[Input To Study = FATA Initial Conditions + FATA Program Conditions]

[FATA Initial Conditions For Input 0]

p0_q0

p0_r1

p0_r0_s#
p0_r1_s0
p0_r2_s#

[FATA Program Conditions For Parity Machine]

[Mediate Conditions]

( p0_q#  ( p1_q# ))
( p0_q*  ( p1_q* ))

( p1_q#  ( p2_q# ))
( p1_q*  ( p2_q* ))

[Terminal Conditions]

(( p2_q# )( p2_q* ))

[State Partition]

(( p0_q0 ),( p0_q1 ),( p0_q# ),( p0_q* ))
(( p1_q0 ),( p1_q1 ),( p1_q# ),( p1_q* ))
(( p2_q0 ),( p2_q1 ),( p2_q# ),( p2_q* ))

[Register Partition]

(( p0_r0 ),( p0_r1 ),( p0_r2 ))
(( p1_r0 ),( p1_r1 ),( p1_r2 ))
(( p2_r0 ),( p2_r1 ),( p2_r2 ))

[Symbol Partition]

(( p0_r0_s0 ),( p0_r0_s1 ),( p0_r0_s# ))
(( p0_r1_s0 ),( p0_r1_s1 ),( p0_r1_s# ))
(( p0_r2_s0 ),( p0_r2_s1 ),( p0_r2_s# ))

(( p1_r0_s0 ),( p1_r0_s1 ),( p1_r0_s# ))
(( p1_r1_s0 ),( p1_r1_s1 ),( p1_r1_s# ))
(( p1_r2_s0 ),( p1_r2_s1 ),( p1_r2_s# ))

(( p2_r0_s0 ),( p2_r0_s1 ),( p2_r0_s# ))
(( p2_r1_s0 ),( p2_r1_s1 ),( p2_r1_s# ))
(( p2_r2_s0 ),( p2_r2_s1 ),( p2_r2_s# ))

[Interaction Conditions]

(( p0_r0 ) p0_r0_s0 ( p1_r0_s0 ))
(( p0_r0 ) p0_r0_s1 ( p1_r0_s1 ))
(( p0_r0 ) p0_r0_s# ( p1_r0_s# ))

(( p0_r1 ) p0_r1_s0 ( p1_r1_s0 ))
(( p0_r1 ) p0_r1_s1 ( p1_r1_s1 ))
(( p0_r1 ) p0_r1_s# ( p1_r1_s# ))

(( p0_r2 ) p0_r2_s0 ( p1_r2_s0 ))
(( p0_r2 ) p0_r2_s1 ( p1_r2_s1 ))
(( p0_r2 ) p0_r2_s# ( p1_r2_s# ))

(( p1_r0 ) p1_r0_s0 ( p2_r0_s0 ))
(( p1_r0 ) p1_r0_s1 ( p2_r0_s1 ))
(( p1_r0 ) p1_r0_s# ( p2_r0_s# ))

(( p1_r1 ) p1_r1_s0 ( p2_r1_s0 ))
(( p1_r1 ) p1_r1_s1 ( p2_r1_s1 ))
(( p1_r1 ) p1_r1_s# ( p2_r1_s# ))

(( p1_r2 ) p1_r2_s0 ( p2_r2_s0 ))
(( p1_r2 ) p1_r2_s1 ( p2_r2_s1 ))
(( p1_r2 ) p1_r2_s# ( p2_r2_s# ))

[Transition Relations]

( p0_q0  p0_r1  p0_r1_s0  ( p1_q0  p1_r2  p1_r1_s0 ))
( p0_q0  p0_r1  p0_r1_s1  ( p1_q1  p1_r2  p1_r1_s1 ))
( p0_q0  p0_r1  p0_r1_s#  ( p1_q#  p1_r0  p1_r1_s# ))
( p0_q0  p0_r2  p0_r2_s#  ( p1_q#  p1_r1  p1_r2_s# ))

( p0_q1  p0_r1  p0_r1_s0  ( p1_q1  p1_r2  p1_r1_s0 ))
( p0_q1  p0_r1  p0_r1_s1  ( p1_q0  p1_r2  p1_r1_s1 ))
( p0_q1  p0_r1  p0_r1_s#  ( p1_q*  p1_r0  p1_r1_s# ))
( p0_q1  p0_r2  p0_r2_s#  ( p1_q*  p1_r1  p1_r2_s# ))

( p1_q0  p1_r1  p1_r1_s0  ( p2_q0  p2_r2  p2_r1_s0 ))
( p1_q0  p1_r1  p1_r1_s1  ( p2_q1  p2_r2  p2_r1_s1 ))
( p1_q0  p1_r1  p1_r1_s#  ( p2_q#  p2_r0  p2_r1_s# ))
( p1_q0  p1_r2  p1_r2_s#  ( p2_q#  p2_r1  p2_r2_s# ))

( p1_q1  p1_r1  p1_r1_s0  ( p2_q1  p2_r2  p2_r1_s0 ))
( p1_q1  p1_r1  p1_r1_s1  ( p2_q0  p2_r2  p2_r1_s1 ))
( p1_q1  p1_r1  p1_r1_s#  ( p2_q*  p2_r0  p2_r1_s# ))
( p1_q1  p1_r2  p1_r2_s#  ( p2_q*  p2_r1  p2_r2_s# ))

o~~~~~~~~~o~~~~~~~~~o~~OUTPUT~~o~~~~~~~~~o~~~~~~~~~o

[Output Of Study = FATA Output For Input 0]

p0_q0
p0_r1
p0_r0_s#
p0_r1_s0
p0_r2_s#
p1_q0
p1_r2
p1_r2_s#
p1_r0_s#
p1_r1_s0
p2_q#
p2_r1
p2_r0_s#
p2_r1_s0
p2_r2_s#
```
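
The propositional encoding above can be cross-checked by simulating the parity machine directly.  The following Python sketch is not Theme One code; the transition table is simply my transcription of the Transition Relations in the Log file, with states, head moves, and symbols read off the p, q, r, s labels.

```python
RULES = {
    # (state, symbol read) : (next state, head move)
    ("q0", "0"): ("q0", +1),    # even so far, read 0, stay even, go right
    ("q0", "1"): ("q1", +1),    # even so far, read 1, flip to odd, go right
    ("q0", "#"): ("q#", -1),    # blank: rest in q#, reporting parity 0
    ("q1", "0"): ("q1", +1),
    ("q1", "1"): ("q0", +1),
    ("q1", "#"): ("q*", -1),    # blank: rest in q*, reporting parity 1
}

def parity_machine(bits):
    """Run the machine M on an input written in cells r1 onward."""
    tape = ["#"] + list(bits) + ["#"]   # cell r0, input cells, end marker
    state, head = "q0", 1               # start in state q0 on cell r1
    while state in ("q0", "q1"):        # run until a rest state is reached
        state, move = RULES[(state, tape[head])]
        head += move
    return tape[head], state            # symbol under H, and the rest state
```

On the one-bit inputs treated above, parity_machine("0") rests in q# over the symbol "0" and parity_machine("1") rests in q* over the symbol "1", agreeing with the two satisfying interpretations found by Study.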

### ZOL. Note 4

```Turing automata, finitely approximated or not, make my head spin and
my tape go loopy, and I believe that it were a far better thing I do
if I drop back a bit, just for a while, and work up to that level of
computational complexity in a much more gracefully graduated manner.
So, let us return to that humble Example that I took up initially:

|                  boy   male          girl   female
|                    o---o child          o---o child
|  male   female      \ /                  \ /               child  human
|     o---o            o                    o                    o--o
|      \ /             |                    |                    |
|       @              @                    @                    @
|
| (male , female)((boy , male child))((girl , female child))(child (human))

One section of the Theme One program that I wrote for exploring the integration
of "empirical" (adaptive learning) and "rational" (logical modelling) faculties
falls under the title of "Study", amounting to the "reasoning" or the "universe
of discourse modelling" component of the program.  "Study" is designed to work
with "log files" that contain propositional expressions in the cactus language.
The core of the Study module is a utility function that generates a species of
"disjunctive normal form" called the "arboreal boolean expansion" (ABE) of the
input proposition.  In a typical problem setting, one will often wish to focus
on the narrower implications of the initial problem expression.  This can be
done by conjoining a "query" to the input expression before submitting it to
analysis.  The query is really just an additional propositional expression
that imposes a further logical constraint on the input expression.

The Figure roughly sketches the conjuncts of the graph-theoretic
data structure that the parser would commit to memory on reading
the appropriate log file that contains the text along the bottom.

I will now explain the various sorts of things that the
Study utility can do with the log file that describes
the universe of discourse in our present Example.

Theme One Study is built around a suite of four successive generators
of "normal forms" for propositional expressions, just to use that term
in a very approximate way.  The functions that compute these normal forms
are called "Model", "Tenor", "Canon", and "Sense", and so we may refer
to their text-style outputs as the "mod", "ten", "can", and "sen" files.

Though it could be any propositional expression on the same vocabulary
A = {"boy", "child", "female", "girl", "human", "male"}, more usually
the query is a simple conjunction of one or more positive features that
we want to focus on or perhaps to filter out of the logical model space.
On our first run through this Example, we take the log file proposition
as it is, with no extra riders.

| Nota Bene.  Theme One Study Model displays a running tab of how much
| free memory space it has left.  On some of the harder problems that
| you may think of to give it, Model may run out of free memory and
| terminate, abnormally exiting Theme One.  Sometimes it helps to:
|
| 1.  Rephrase the problem in logically equivalent
|     but rhetorically more felicitous ways.
|
| 2.  Think of additional facts that are taken for granted but not
|     made explicit and that cannot be logically inferred by Model.

After Model has finished, it is ready to write out its mod file,
which you may choose to show on the screen or save to a named file.
Mod files are usually too long to see (or to care to see) all at once
on the screen, so it is very often best to save them for later replay.
In our Example the Model function yields a mod file that looks like so:

Model Output and
Mod File Example
o-------------------o
| male              |
|  female -         |  1
|  (female )        |
|   girl -          |  2
|   (girl )         |
|    child          |
|     boy           |
|      human *      |  3 *
|      (human ) -   |  4
|     (boy ) -      |  5
|    (child )       |
|     boy -         |  6
|     (boy ) *      |  7 *
| (male )           |
|  female           |
|   boy -           |  8
|   (boy )          |
|    child          |
|     girl          |
|      human *      |  9 *
|      (human ) -   | 10
|     (girl ) -     | 11
|    (child )       |
|     girl -        | 12
|     (girl ) *     | 13 *
|  (female ) -      | 14
o-------------------o

Counting the stars "*" that indicate true interpretations
and the bars "-" that indicate false interpretations of
the input formula, we can see that the Model function,
out of the 64 possible interpretations, has actually
gone through the work of making just 14 evaluations,
all in order to find the 4 models that are allowed
by the input definitions.

To be clear about what this output means, the starred paths
indicate all of the complete specifications of objects in the
universe of discourse, that is, all of the consistent feature
conjunctions of maximum length, as permitted by the definitions
that are given in the log file.
```
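
For readers who want to check the tally of models by brute force, here is a rough Python sketch, no relation to the actual Theme One code, which tests all 64 interpretations of the Example's vocabulary against the four conjuncts of the log file proposition.  It confirms that the 4 starred specifications cover 6 full interpretations once the free choice of "human" on the two non-child paths is counted in.

```python
from itertools import product

FEATURES = ["male", "female", "girl", "boy", "child", "human"]

def satisfies(v):
    """(male , female)((boy , male child))((girl , female child))(child (human))"""
    return ((v["male"] != v["female"])                  # exactly one of the two
        and v["boy"]  == (v["male"]   and v["child"])   # boy  <=> male child
        and v["girl"] == (v["female"] and v["child"])   # girl <=> female child
        and (not v["child"] or v["human"]))             # child => human

models = [dict(zip(FEATURES, bits))
          for bits in product([False, True], repeat=len(FEATURES))
          if satisfies(dict(zip(FEATURES, bits)))]

print(len(models))   # 6 interpretations, collapsing to the 4 starred paths
```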

### ZOL. Note 5

```Let's take a break from the Example in progress and
take a look at what we've been doing, as it appears
in the light of broader computational, logical, and
semiotic perspectives.  We should not let our focus
on and our fascination with this particular example
prevent us from recognizing the larger lessons that
it illustrates, the way that it points to much more
important paradigms and predicaments and principles,
as a glimmer of ultimately more significant objects.

I've had good reasons for taking so much trouble with this concrete example
and for setting out the chosen form of logical processing in so much detail.
The lion's share of those reasons have to do with demonstrating distinctive
features of Peirce's approach to logic.  I can discuss these features
under the following three headings:

1.  The semiotic embedding of logic.

2.  The nature of mathematical models.

3.  The nurture of equational inference.

In the pragmatic way of thinking about it, all thought takes place
within a setting of sign relations, as a so-called semiotic process.
This means that logical inference, as a form of controlled, critical,
deliberate, normative, reflective, and rule-governed thinking process,
is just one among many species of semiotic processes that inhabits the
same sign-relational environment.  As a consequence of this placement,
logic distinguishes itself "among" other types of semiosis with which
it can be put in comparison, not ruling "above" them as some kind of
unseen overseer.

Peirce's approaches to logic, from the very first to the very last,
were those of a first-rate mathematician who was also a first-rate
logician and philosopher of mathematics, like none have been since.
This meant that his logical method was always seamlessly integrated
with other available modes of effective description, formal modeling,
and mathematical specification.  However, much of this integration has
been lost in the logical formalisms that we have seen since Peirce's time,
leading to a wide gap between qualitative and quantitative modes of thought,
not to mention the proponents thereof.  As a result, the power of a unified
logical-mathematical formalism is a potential that remains largely untapped
to this day.

At least with regard to the propositional level, Peirce's axioms enable
equational inference rules, meaning that no information is lost in the
process of inference, and so the steps are reversible.  In particular,
it seems to have slipped the attention of many contemporary logicians
just how weak are the inference rules that they currently promote as
adequate.  Although they might be claimed to sketch the envelope of
mathematical reasoning, they are not generative of that reasoning,
and lack the power to support the prevailing modes of equational
reasoning in mathematics.
```
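
The contrast between equational and merely implicational rules can be checked in miniature.  The Python sketch below is my illustration, not anything drawn from Peirce or from Theme One: it verifies that double negation, ((x)) = x in cactus syntax, holds as a two-way equation over all interpretations, while modus ponens is sound in one direction only, and so loses information.

```python
from itertools import product

def equivalent(f, g, n):
    """Equational rules preserve information: f = g on every interpretation."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n))

def entails(f, g, n):
    """Implicational rules only promise: wherever f holds, g holds."""
    return all(g(*v) for v in product([False, True], repeat=n) if f(*v))

# Double negation, ((x)) = x, is a genuine equation,
# so rewriting in either direction loses nothing:
assert equivalent(lambda x: not (not x), lambda x: x, 1)

# Modus ponens, from x and x => y infer y, is sound but one-way:
assert entails(lambda x, y: x and (not x or y), lambda x, y: y, 2)
assert not entails(lambda x, y: y, lambda x, y: x and (not x or y), 2)
```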

### ZOL. Note 6

```Still taking up a reflective station on the train of thought in motion,
let us chart the progression that we have just passed through this way:

|                    Parse
|      Sign A  o-------------->o  Sign 1
|             ^                |
|            /                 |
|           /                  |
|          /                   |
| Object  o                    |  Transform
|          ^                   |
|           \                  |
|            \                 |
|             \                v
|      Sign B  o<--------------o  Sign 2
|                    Verse
|
| Figure 1.  Computation As Semiotic Transformation

In the present case, the Object is an objective situation
or a state of affairs, in effect, a particular pattern of
feature concurrences occurring to us in that world through
which we find ourselves most frequently faring, willy-nilly,
and the Signs are different tokens and different types of
data structures that we somehow or other find it useful
to devise or to discover for the sake of representing
current objects to ourselves on a recurring basis.

But not all signs, not even signs of a single object, are alike
in every other respect that one might name, not even with respect
to their powers of relating, significantly, to that common object.

And that is what our whole business of computation busies itself about,
when it minds its business best, that is, transmuting signs into signs
in ways that augment their powers of relating significantly to objects.

We have seen how the Model function and the mod output format
indicate all of the complete specifications of objects in the
universe of discourse, that is, all of the consistent feature
conjunctions of maximal specificity that are permitted by the
constraints or the definitions that are given in the log file.

To help identify these specifications of particular cells in
the universe of discourse, the next function and output format,
called "Tenor", edits the mod file to give only the true paths,
in effect, the "positive models", that are by default what we
usually mean when we say "models", and not the "anti-models"
or the "negative models" that fail to satisfy the formula
in question.

In the present Example the Tenor function
generates a Ten file that looks like this:

Tenor Output and
Ten File Example
o-------------------o
| male              |
|  (female )        |
|   (girl )         |
|    child          |
|     boy           |
|      human *      | <1>
|    (child )       |
|     (boy ) *      | <2>
| (male )           |
|  female           |
|   (boy )          |
|    child          |
|     girl          |
|      human *      | <3>
|    (child )       |
|     (girl ) *     | <4>
o-------------------o

As I said, the Tenor function just abstracts a transcript of the models,
that is, the satisfying interpretations, that were already interspersed
throughout the complete Model output.  These specifications, or feature
conjunctions, with the positive and the negative features listed in the
order of their actual budding on the "arboreal boolean expansion" twigs,
may be gathered and arranged in this antherypulogical flowering bouquet:

1.   male   (female ) (girl )  child    boy     human   *
2.   male   (female ) (girl ) (child ) (boy  )          *
3.  (male )  female   (boy  )  child    girl    human   *
4.  (male )  female   (boy  ) (child ) (girl )          *

Notice that Model, as reflected in this abstract, did not consider
the six positive features in the same order along each path.  This
is because the algorithm was designed to proceed opportunistically
in its attempt to reduce the original proposition through a series
of case-analytic considerations and the resulting simplifications.

Notice, too, that Model is something of a lazy evaluator, quitting work
when and if a value is determined by less than the full set of variables.
This is the reason why paths <2> and <4> are not ostensibly of the maximum
length.  According to this lazy mode of understanding, any path that leaves
a set of features unspecified really stands for the whole bundle of paths
that are derived by freely varying those features.  Thus, specifications
<2> and <4> summarize four models altogether, with the logical choice
between "human" and "not human" being left open at the point where
they leave off their branches in the relevant deciduous tree.

The last two functions in the Study section, "Canon" and "Sense",
extract further derivatives of the normal forms that are produced
by Model and Tenor.  Both of these functions take the set of model
paths and simply throw away the negative labels.  You may think of
these as the "rose colored glasses" or "job interview" normal forms,
in that they try to say everything that's true, so long as it can be
expressed in positive terms.  Generally, this would mean losing a lot
of information, and the result could no longer be expected to have the
property of remaining logically equivalent to the original proposition.

Fortunately, however, it seems that this type of positive projection of
the whole truth is just what is possible, most needed, and most clear in
many of the "natural" examples, that is, in examples that arise from the
domains of natural language and natural conceptual kinds.  In these cases,
where most of the logical features are redundantly coded, for example, in
the way that "adult" = "not child" and "child" = "not adult", the positive
feature-bearing redactions are often sufficiently expressive by themselves.

Canon merely censors its printing of the negative labels as it traverses the
model tree.  This leaves the positive labels in their original columns of the
outline form, giving it a slightly skewed appearance.  This can be misleading
unless you already know what you are looking for.  However, this Canon format
is computationally quick, and frequently suffices, especially if you already
have a likely clue about what to expect in the way of a question's outcome.

In the present Example the Canon function
generates a Can file that looks like this:

Canon Output and
Can File Example
o-------------------o
| male              |
|    child          |
|     boy           |
|      human        |
|  female           |
|    child          |
|     girl          |
|      human        |
o-------------------o

The Sense function does the extra work that is required
to place the positive labels of the model tree at their
proper level in the outline.

In the present Example the Sense function
generates a Sen file that looks like this:

Sense Output and
Sen File Example
o-------------------o
| male              |
|  child            |
|   boy             |
|    human          |
| female            |
|  child            |
|   girl            |
|    human          |
o-------------------o

The Canon and Sense outlines for this Example illustrate a certain
type of general circumstance that needs to be noted at this point.
Recall the model paths or the feature specifications that were
numbered <2> and <4> in the listing of the output for Tenor.
These paths, in effect, reflected Model's discovery that
the venn diagram cells for male or female non-children
and male or female non-humans were not excluded by
the definitions that were given in the Log file.
In the abstracts given by Canon and Sense, the
specifications <2> and <4> have been subsumed,
or absorbed unmarked, under the general topics
of their respective genders, male or female.
This happens because no purely positive
features were supplied to distinguish
the non-child and non-human cases.

That concludes, for the time being,
how long I don't know, our initial
preview of my humble 6-dimensional
Example.
```
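
The effect of Canon and Sense, abstracted away from the details of tree traversal, is simply to drop the negative labels from each model path.  A hypothetical Python rendering, with the Tenor paths transcribed from the listing above in a representation of my own devising, not Theme One's:

```python
# Model paths as (feature, polarity) lists, transcribed from the
# Tenor output for the 6-dimensional Example.
TENOR = [
    [("male", True), ("female", False), ("girl", False),
     ("child", True), ("boy", True), ("human", True)],          # <1>
    [("male", True), ("female", False), ("girl", False),
     ("child", False), ("boy", False)],                         # <2>
    [("male", False), ("female", True), ("boy", False),
     ("child", True), ("girl", True), ("human", True)],         # <3>
    [("male", False), ("female", True), ("boy", False),
     ("child", False), ("girl", False)],                        # <4>
]

def positive_projection(path):
    """Keep only the positively asserted features, as Canon and Sense do."""
    return [feature for feature, polarity in path if polarity]

for path in TENOR:
    print(positive_projection(path))
```

Paths <2> and <4> project down to the bare lists ["male"] and ["female"], which is why they are absorbed without remainder under the gender headings in the Canon and Sense outlines.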

### ZOL. Note 7

```Partition:  Genus and Species

Our next Example illustrates the use of the Cactus Language
for representing "absolute" and "relative" partitions, also
known as "complete" and "contingent" classifications of the
universe of discourse, all of which amounts to divvying up
the universe of discourse into mutually exclusive regions,
exhaustive or not, as one frequently needs in situations
involving a genus and its sundry species, and as one
occasionally pictures in the form of venn diagrams
that look just like the familiar "pie charts".

Example 2.  Gallia est omnis divisa in partes tres ...

The idea that one needs for expressing partitions
in cactus expressions can be summed up like this:

| If the propositional expression
|
|  a   b   c   ...
|  o---o---o...o
|   \         /
|    \       /
|     \     /
|      \   /
|       \ /
|        @
|
| (a,  b,  c,  ...)
|
| means that just one of a, b, c, ... is false,
|
| then the propositional expression
|
|  a   b   c  ...
|  o   o   o   o
|  |   |   |   |
|  o---o---o...o
|   \         /
|    \       /
|     \     /
|      \   /
|       \ /
|        @
|
| ((a),(b),(c), ...)
|
| means that just one of (a), (b), (c), ... is false,
|
| in other words, that just one of a, b, c, ... is true.

Thus we have an efficient means to express and to enforce
a partition of the space of models, in effect, to maintain
the condition that a number of features or propositions are
to be held in mutually exclusive and exhaustive disjunction.
This supplies a much needed bridge between the binary domain
of two values and every other discrete domain that ranges over
a finite number of monadic attribute values or feature options.

Another variation on this theme allows one to maintain the
subsumption of many separate species under an explicit genus.
To see this, let us examine the following form of expression:

|      a   b   c
|      o   o   o
|  g   |   |   |
|  o---o---o---o
|   \         /
|    \       /
|     \     /
|      \   /
|       \ /
|        @
|
| (g ,(a),(b),(c))

Now consider what it would mean for this to be true.  We see two cases:

1.  If the proposition g is true, then exactly one of
    the propositions (a), (b), (c) must be false, and
    so exactly one of the propositions a, b, c is true.

2.  If the proposition g is false, then every one of
    the propositions (a), (b), (c) must be true, and
    so every one of the propositions a, b, c is false.

Figures 1 and 2 illustrate this type of situation.

Figure 1 shows the venn diagram of the 4-dimensional universe of
discourse X = [g, a, b, c], conventionally named after the gang
of four logical features that generate it.  Strictly speaking,
X is made up of two layers, the position space X of abstract
type B^4, and the proposition space X^ = (X -> B) of abstract
type B^4 -> B, but it is commonly lawful enough to sign the
signature of both of these spaces with the same X, and thus
to give the power of attorney for the propositions to the
so-indicted position space thereof.

Figure 1 also makes use of the convention whereby the
regions or the subsets of the universe of discourse that
correspond to the basic features g, a, b, c are labelled
with the parallel set of upper case letters G, A, B, C.

|                        o
|                       / \
|                      /   \
|                     /     \
|                    /       \
|                   o         o
|                  /%\       /%\
|                 /%%%\     /%%%\
|                /%%%%%\   /%%%%%\
|               /%%%%%%%\ /%%%%%%%\
|              o%%%%%%%%%o%%%%%%%%%o
|             / \%%%%%%%/ \%%%%%%%/ \
|            /   \%%%%%/   \%%%%%/   \
|           /     \%%%/     \%%%/     \
|          /       \%/       \%/       \
|         o         o         o         o
|        / \       /%\       / \       / \
|       /   \     /%%%\     /   \     /   \
|      /     \   /%%%%%\   /     \   /     \
|     /       \ /%%%%%%%\ /       \ /       \
|    o         o%%%%%%%%%o         o         o
|    .\       / \%%%%%%%/ \       / \       /.
|    . \     /   \%%%%%/   \     /   \     / .
|    .  \   /     \%%%/     \   /     \   /  .
|    .   \ /       \%/       \ /       \ /   .
|    .    o         o         o         o    .
|    .    .\       / \       / \       /.    .
|    .    . \     /   \     /   \     / .    .
|    .    .  \   /     \   /     \   /  .    .
|    . G  .   \ /       \ /       \ /   .  C .
|    ..........o         o         o..........
|         .     \       /%\       /     .
|         .      \     /%%%\     /      .
|         .       \   /%%%%%\   /       .
|         . A      \ /%%%%%%%\ /      B .
|         ..........o%%%%%%%%%o..........
|                    \%%%%%%%/
|                     \%%%%%/
|                      \%%%/
|                       \%/
|                        o
|
| Figure 1.  Genus G and Species A, B, C

Figure 2 presents another form of venn diagram that one often uses
in this sort of situation, where one collapses the unindited cells
out of the picture and leaves only the models of the proposition
in question.  Some people would call the transformation that
changes from the first form to the next form an operation
of "taking the quotient", but I tend to think of it as
the "soap bubble picture", or rather more exactly the
"wire-frame and thread-loop and soap-film" model of
the universe of discourse, where one pops out of
consideration the regions of the soap film that
stretch across the anti-model regions of space.

o-------------------------------------------------o
|`X```````````````````````````````````````````````|
|`````````````````````````````````````````````````|
|````````````````````````o````````````````````````|
|```````````````````````/|\```````````````````````|
|``````````````````````/|||\``````````````````````|
|`````````````````````/|||||\`````````````````````|
|````````````````````/|||||||\````````````````````|
|```````````````````/|||||||||\```````````````````|
|``````````````````o|||| A ||||o``````````````````|
|`````````````````//\|||||||||/\\`````````````````|
|````````````````////\|||||||/\\\\````````````````|
|```````````````//////\|||||/\\\\\\```````````````|
|``````````````////////\|||/\\\\\\\\``````````````|
|`````````````//////////\|/\\\\\\\\\\`````````````|
|````````````/////////// G \\\\\\\\\\\````````````|
|```````````/////////////|\\\\\\\\\\\\\```````````|
|``````````//////////////|\\\\\\\\\\\\\\``````````|
|`````````//////// B ////|\\\\ C \\\\\\\\`````````|
|````````////////////////|\\\\\\\\\\\\\\\\````````|
|```````/////////////////|\\\\\\\\\\\\\\\\\```````|
|``````o-----------------o-----------------o``````|
|`````````````````````````````````````````````````|
|`````````````````````````````````````````````````|
o-------------------------------------------------o

Figure 2.  Genus G and Species A, B, C

Incidental Musements:

http://ccat.sas.upenn.edu/jod/texts/bg1.html
http://patriot.net/~lillard/cp/caes.gall.1.html
http://www.alesia.asso.fr/alesia/pg/BellumGallicum/BGUk/BGUk-1.htm
http://www.alesia.asso.fr/alesia/pg/BellumGallicum/BGUk/Home-BGUK.htm
```
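
The two cases argued for the form (g ,(a),(b),(c)) can be verified mechanically.  In the Python sketch below, an illustration of my own rather than anything in the Theme One code, the lobe operator is rendered as a "just one false" predicate and the genus-species form is checked against the case analysis over all 16 interpretations:

```python
from itertools import product

def just_one_false(*xs):
    """The cactus lobe (x_1, ..., x_k): exactly one argument is false."""
    return sum(1 for x in xs if not x) == 1

def genus_species(g, a, b, c):
    """(g ,(a),(b),(c)) : genus g subsuming species a, b, c."""
    return just_one_false(g, not a, not b, not c)

# Case 1: g true forces exactly one of a, b, c true.
# Case 2: g false forces every one of a, b, c false.
for g, a, b, c in product([False, True], repeat=4):
    expected = (sum([a, b, c]) == 1) if g else (sum([a, b, c]) == 0)
    assert genus_species(g, a, b, c) == expected
```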

### ZOL. Note 8

```Partition:  Genus and Species (cont.)

Last time we considered in general terms how the forms
of complete partition and contingent partition operate
to maintain mutually disjoint and possibly exhaustive
categories of positions in a universe of discourse.

This time we contemplate another concrete example of
near minimal complexity, designed to demonstrate how
the forms of partition and subsumption can interact
in structuring a space of feature specifications.

Example 3.  Things in General

In this Example, we describe a universe of discourse
in terms of the following vocabulary of five features:

| L.  living_thing
|
| N.  non_living
|
| A.  animal
|
| V.  vegetable
|
| M.  mineral

Let us construe these features as being subject to four constraints:

| 1.  Everything is either a living_thing or non_living, but not both.
|
| 2.  Everything is either animal, vegetable, or mineral,
|     but no two of these together.
|
| 3.  A living_thing is either animal or vegetable, but not both,
|     and everything animal or vegetable is a living_thing.
|
| 4.  Everything mineral is non_living.

These notions and constructions are expressed in the Log file shown below:

Logical Input File
o-------------------------------------------------o
|                                                 |
|   ( living_thing , non_living )                 |
|                                                 |
|   (( animal ),( vegetable ),( mineral ))        |
|                                                 |
|   ( living_thing ,( animal ),( vegetable ))     |
|                                                 |
|   ( mineral ( non_living ))                     |
|                                                 |
o-------------------------------------------------o

A text file like this parses into dynamic pointer graphs
whose relevant structure is shown, very roughly, like so:

Dynamic Parse Graph
o-------------------------------------------------o
|                                                 |
|                                                 |
|                A V M        A V                 |
|                o o o        o o                 |
|     L   N      | | |      L | |        M  N     |
|     o---o      o-o-o      o-o-o        o--o     |
|      \ /        \ /        \ /         |        |
|       @==========@==========@==========@        |
|                                                 |
|                                                 |
o-------------------------------------------------o

Here, to ease my transcripture through a dielectural medium,
I continue the convention that I instituted on the Diff Log
suture, using the artifices @==@==@ and o==o==o to identify
several nodes into one.

The cactus string in the text file and the cactus graph in dynamic memory
are the dialectically variant expressions of a "zeroth order theory" (ZOT),
one that paraphrases into the "usual technical interchange language" (UTIL)
to say:

Translation
o-------------------------------------------------o
|                                                 |
|   living_thing  =/=  non_living                 |
|                                                 |
|   par : all -> {animal, vegetable, mineral}     |
|                                                 |
|   par : living_thing -> {animal, vegetable}     |
|                                                 |
|   mineral => non_living                         |
|                                                 |
o-------------------------------------------------o

Here, "par : all -> {p, q, r}" is short for an assertion
that the universe as a whole is partitioned into subsets
P, Q, R that correspond to the logical features p, q, r.

Also, "par : q -> {r, s}" asserts that Q partitions into R and S.

It is probably enough just to list the outputs of Model, Tenor, and Sense
when run on the preceding Log file.  Using the same format and labeling as
before, we may note that Model, out of the 2^5 = 32 possible interpretations,
has made just 11 evaluations in order to find the 3 models that are allowed
by the constraints imposed by the logical input file.

Model Outline
o------------------------o
| living_thing           |
|  non_living -          |  1
|  (non_living )         |
|   mineral -            |  2
|   (mineral )           |
|    animal              |
|     vegetable -        |  3
|     (vegetable ) *     |  4 *
|    (animal )           |
|     vegetable *        |  5 *
|     (vegetable ) -     |  6
| (living_thing )        |
|  non_living            |
|   animal -             |  7
|   (animal )            |
|    vegetable -         |  8
|    (vegetable )        |
|     mineral *          |  9 *
|     (mineral ) -       | 10
|  (non_living ) -       | 11
o------------------------o

Tenor Outline
o------------------------o
| living_thing           |
|  (non_living )         |
|   (mineral )           |
|    animal              |
|     (vegetable ) *     | <1>
|    (animal )           |
|     vegetable *        | <2>
| (living_thing )        |
|  non_living            |
|   (animal )            |
|    (vegetable )        |
|     mineral *          | <3>
o------------------------o

Sense Outline
o------------------------o
| living_thing           |
|  animal                |
|  vegetable             |
| non_living             |
|  mineral               |
o------------------------o

And that's just how it is with things in general.
```
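The partition semantics in the note above lends itself to a brute-force check.  The sketch below is an illustrative Python paraphrase of my own (the names `FEATS` and `satisfies` are mine, and this is not the Model program itself): it enumerates the 2^5 = 32 interpretations over the five features and keeps those satisfying the four constraints of the Translation box.

```python
from itertools import product

FEATS = ["living_thing", "non_living", "mineral", "animal", "vegetable"]

def satisfies(v):
    """Check one interpretation (a dict of feature -> bool) against
    the four constraints in the Translation box."""
    living, non, mineral, animal, vegetable = (v[f] for f in FEATS)
    # living_thing =/= non_living
    if living == non:
        return False
    # par : all -> {animal, vegetable, mineral}: exactly one holds
    if [animal, vegetable, mineral].count(True) != 1:
        return False
    # par : living_thing -> {animal, vegetable}:
    # a living thing is exactly one of the two, and each implies living
    if living and [animal, vegetable].count(True) != 1:
        return False
    if (animal or vegetable) and not living:
        return False
    # mineral => non_living
    if mineral and not non:
        return False
    return True

models = [v for v in (dict(zip(FEATS, bits))
                      for bits in product([False, True], repeat=5))
          if satisfies(v)]
print(len(models))  # 3
```

The three interpretations that survive, namely, living_thing & animal, living_thing & vegetable, and non_living & mineral, are exactly the starred models of the Tenor outline.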

### ZOL. Note 9

```Example 4.  Molly's World

I think that we are finally ready to tackle a more respectable example.
The Example known as "Molly's World" is borrowed from the literature on
computational learning theory, adapted with a few changes from the example
called "Molly's Problem" in the paper "Learning With Hints" by Dana Angluin.
By way of setting up the problem, I quote Angluin's motivational description:

| Imagine that you have become acquainted with an alien named Molly from the
| planet Ornot, who is currently employed in a day-care center.  She is quite
| good at propositional logic, but a bit weak on knowledge of Earth.  So you
| decide to formulate the beginnings of a propositional theory to help her
| label things in her immediate environment.
|
| Angluin, Dana, "Learning With Hints", pages 167-181, in:
| David Haussler & Leonard Pitt (eds.), 'Proceedings of the 1988 Workshop
| on Computational Learning Theory', Morgan Kaufmann, San Mateo, CA, 1989.

The purpose of the writer's quaint pretext is to make sure that the reader
appreciates the rules of play, namely, that no extra-logical savvy is fair,
and that all facts must be presented in or deduced from the immediate premisses.

Here is my rendition of the initial knowledge base delimiting Molly's World:

Logical Input File:  Molly.Log
o---------------------------------------------------------------------o
|                                                                     |
| ( object ,( toy ),( vehicle ))                                      |
| (( small_size ),( medium_size ),( large_size ))                     |
| (( two_wheels ),( three_wheels ),( four_wheels ))                   |
| (( no_seat ),( one_seat ),( few_seats ),( many_seats ))             |
| ( object ,( scooter ),( bike ),( trike ),( car ),( bus ),( wagon )) |
| ( two_wheels    no_seat            ,( scooter ))                    |
| ( two_wheels    one_seat    pedals ,( bike ))                       |
| ( three_wheels  one_seat    pedals ,( trike ))                      |
| ( four_wheels   few_seats   doors  ,( car ))                        |
| ( four_wheels   many_seats  doors  ,( bus ))                        |
| ( four_wheels   no_seat     handle ,( wagon ))                      |
| ( scooter           ( toy  small_size ))                            |
| ( wagon             ( toy  small_size ))                            |
| ( trike             ( toy  small_size ))                            |
| ( bike  small_size  ( toy ))                                        |
| ( bike  medium_size ( vehicle ))                                    |
| ( bike  large_size  )                                               |
| ( car               ( vehicle  large_size ))                        |
| ( bus               ( vehicle  large_size ))                        |
| ( toy               ( object ))                                     |
| ( vehicle           ( object ))                                     |
|                                                                     |
o---------------------------------------------------------------------o

All of the logical forms that are used in the preceding Log file
will probably be familiar from earlier discussions.  The purpose
of one or two constructions may, however, be a little obscure,
so I will insert a few words of additional explanation here:

The rule "( bike large_size )", for example, merely
says that nothing can be both a bike and large_size.

The rule "( three_wheels one_seat pedals ,( trike ))" says that anything
with all the features of three_wheels, one_seat, and pedals is excluded
from being anything but a trike.  In short, anything with just those
three features is equivalent to a trike.

Recall that the form "( p , q )" may be interpreted to assert either
the exclusive disjunction or the logical inequivalence of p and q.

The rules have been stated in this particular way simply
to imitate the style of rules in the reference example.

This last point does bring up an important issue, the question
of "rhetorical" differences in expression and their potential
impact on the "pragmatics" of computation.  Unfortunately,
I will have to abbreviate my discussion of this topic for
now, and only mention in passing the following facts.

Logically equivalent expressions, even though they must lead
to logically equivalent normal forms, may have very different
characteristics when it comes to the efficiency of processing.

For instance, consider these four logically equivalent forms:

o-------------------------------------------------------o
|                                                       |
|                                                       |
|                                                       |
|    p   q             q         p             q   p    |
|    o---o             o         o             o---o    |
|     \ /          p   |         |   q          \ /     |
|      o           o---o         o---o           o      |
|      |            \ /           \ /            |      |
|      @             @             @             @      |
|                                                       |
|  ((p , q))      (p ,(q))     ((p), q)      ((q , p))  |
|                                                       |
o-------------------------------------------------------o

All of these are equally succinct ways of maintaining that
p is logically equivalent to q, yet each can have different
effects on the route that Model takes to arrive at an answer.
In practical terms, some equalities are more equal than others.

These effects occur partly because the algorithm splits into cases
on the variables in leftmost, shallowest first order, but their impact
can be complicated by the interactions that each expression has with
the context that it occupies.  The main lesson to take away from all
of this, at least, for the time being, is that it is probably better
not to bother too much about these problems, but just to experiment
with different ways of expressing equivalent pieces of information
until you get a sense of what works best in various situations.

I think that you will be happy to see only the
ultimate Sense of Molly's World, so here it is:

Sense Outline:  Molly.Sen
o------------------------o
| object                 |
|  two_wheels            |
|   no_seat              |
|    scooter             |
|     toy                |
|      small_size        |
|   one_seat             |
|    pedals              |
|     bike               |
|      small_size        |
|       toy              |
|      medium_size       |
|       vehicle          |
|  three_wheels          |
|   one_seat             |
|    pedals              |
|     trike              |
|      toy               |
|       small_size       |
|  four_wheels           |
|   few_seats            |
|    doors               |
|     car                |
|      vehicle           |
|       large_size       |
|   many_seats           |
|    doors               |
|     bus                |
|      vehicle           |
|       large_size       |
|   no_seat              |
|    handle              |
|     wagon              |
|      toy               |
|       small_size       |
o------------------------o

This outline is not the Sense of the unconstrained Log file,
but the result of running Model with a query on the single
feature "object".  Using this focus helps the Modeler
to make more relevant Sense of Molly's World.

In preparation for a contingently possible future discussion,
I need to attach a few parting thoughts to the case workup
of Molly's World that may not seem terribly relevant to
the present setting, but whose pertinence I hope will
become clearer in time.

The logical paradigm from which this Example was derived is that
of "Zeroth Order Horn Clause Theories".  The clauses at issue
in these theories are allowed to be of just three kinds:

| 1.  p & q & r & ... => z
|
| 2.  z
|
| 3.  ~[p & q & r & ...]

Here, the proposition letters "p", "q", "r", ..., "z"
are restricted to being single positive features, not
themselves negated or otherwise complex expressions.

In the Cactus Language or Existential Graph syntax
these forms would take on the following appearances:

| 1.  ( p q r ... ( z ))
|
| 2.    z
|
| 3.  ( p q r ... )

The style of deduction in Horn clause logics is essentially
proof-theoretic in character, with the main burden of proof
falling on implication relations ("=>") and on "projective"
forms of inference, that is, information-losing inferences
like modus ponens and resolution.  Cf. [Llo], [MaW].

In contrast, the method used here is substantially model-theoretic,
the stress being to start from more general forms of expression for
laying out facts (for example, distinctions, equations, partitions)
and to work toward results that maintain logical equivalence with
their origins.

What all of this has to do with the output above is this:
From the perspective that is adopted in the present work,
almost any theory, for example, the one that is founded
on the postulates of Molly's World, will have far more
models than the implicational and inferential mode of
reasoning is designed to discover.  We will be forced
to confront them, however, if we try to run Model on
a large set of implications.

The typical Horn clause interpreter gets around this
difficulty only by a stratagem that takes clauses to
mean something other than what they say, that is, by
distorting the principles of semantics in practice.
Our Model, on the other hand, has no such finesse.

This explains why it was necessary to impose the
prerequisite "object" constraint on the Log file
for Molly's World.  It supplied no more than what
we usually take for granted, in order to obtain
a set of models that we would normally think of
as being the intended import of the definitions.
```
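Both of the cactus-language readings discussed in the note above are easy to verify mechanically.  The sketch below is my own illustrative Python rendering, taking a lobe "( x_1 , ..., x_k )" to be true just when exactly one of its arguments is false, with juxtaposed terms conjoined: the four equivalent forms for "p is equivalent to q" agree on every assignment, and the Horn clause form "( p q r ( z ))" agrees with the implication p & q & r => z.

```python
from itertools import product

def lobe(*args):
    """Cactus lobe ( x_1 , ..., x_k ): true iff exactly one argument
    is false.  With a single argument this reduces to plain negation,
    since (x) is true iff x is false."""
    return sum(1 for x in args if not x) == 1

# The four equivalent forms for "p is equivalent to q":
forms = {
    "((p , q))": lambda p, q: lobe(lobe(p, q)),
    "(p ,(q))" : lambda p, q: lobe(p, lobe(q)),
    "((p), q)" : lambda p, q: lobe(lobe(p), q),
    "((q , p))": lambda p, q: lobe(lobe(q, p)),
}
for p, q in product([False, True], repeat=2):
    assert all(f(p, q) == (p == q) for f in forms.values())

# The Horn clause form ( p q r ( z )) read as p & q & r => z:
for p, q, r, z in product([False, True], repeat=4):
    implication = (not (p and q and r)) or z
    assert lobe(p and q and r and lobe(z)) == implication
```

The same evaluator also confirms the reading of a rule like "( three_wheels one_seat pedals ,( trike ))" as a biconditional, since a two-argument lobe with one side negated is true exactly when the two sides agree.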

### ZOL. Note 10

```Example 5.  Jets and Sharks

The propositional calculus based on the boundary operator, that is,
the multigrade logical connective of the form "( , , , ... )" can be
interpreted in a way that resembles the logic of activation states and
competition constraints in certain neural network models.  One way to do
this is by interpreting the blank or unmarked state as the resting state
of a neural pool, the bound or marked state as its activated state, and
by representing a mutually inhibitory pool of neurons p, q, r by means
of the expression "(p, q, r)".  To illustrate this possibility, let us
transcribe into cactus language expressions a notorious example from
the "parallel distributed processing" (PDP) paradigm [McR] and work
through two of the associated exercises as portrayed in this format.

Logical Input File:  JAS  =  ZOT(Jets And Sharks)
o----------------------------------------------------------------o
|                                                                |
|  (( art    ),( al   ),( sam  ),( clyde ),( mike  ),            |
|   ( jim    ),( greg ),( john ),( doug  ),( lance ),            |
|   ( george ),( pete ),( fred ),( gene  ),( ralph ),            |
|   ( phil   ),( ike  ),( nick ),( don   ),( ned   ),( karl ),   |
|   ( ken    ),( earl ),( rick ),( ol    ),( neal  ),( dave ))   |
|                                                                |
|  ( jets , sharks )                                             |
|                                                                |
|  ( jets ,                                                      |
|    ( art    ),( al   ),( sam  ),( clyde ),( mike  ),           |
|    ( jim    ),( greg ),( john ),( doug  ),( lance ),           |
|    ( george ),( pete ),( fred ),( gene  ),( ralph ))           |
|                                                                |
|  ( sharks ,                                                    |
|    ( phil ),( ike  ),( nick ),( don ),( ned  ),( karl ),       |
|    ( ken  ),( earl ),( rick ),( ol  ),( neal ),( dave ))       |
|                                                                |
|  (( 20's ),( 30's ),( 40's ))                                  |
|                                                                |
|  ( 20's ,                                                      |
|    ( sam    ),( jim  ),( greg ),( john ),( lance ),            |
|    ( george ),( pete ),( fred ),( gene ),( ken   ))            |
|                                                                |
|  ( 30's ,                                                      |
|    ( al   ),( mike ),( doug ),( ralph ),                       |
|    ( phil ),( ike  ),( nick ),( don   ),                       |
|    ( ned  ),( rick ),( ol   ),( neal  ),( dave ))              |
|                                                                |
|  ( 40's ,                                                      |
|    ( art ),( clyde ),( karl ),( earl ))                        |
|                                                                |
|  (( junior_high ),( high_school ),( college ))                 |
|                                                                |
|  ( junior_high ,                                               |
|    ( art  ),( al    ),( clyde  ),( mike  ),( jim ),            |
|    ( john ),( lance ),( george ),( ralph ),( ike ))            |
|                                                                |
|  ( high_school ,                                               |
|    ( greg ),( doug ),( pete ),( fred ),( nick ),               |
|    ( karl ),( ken  ),( earl ),( rick ),( neal ),( dave ))      |
|                                                                |
|  ( college ,                                                   |
|    ( sam ),( gene ),( phil ),( don ),( ned ),( ol ))           |
|                                                                |
|  (( single ),( married ),( divorced ))                         |
|                                                                |
|  ( single ,                                                    |
|    ( art   ),( sam  ),( clyde ),( mike ),                      |
|    ( doug  ),( pete ),( fred  ),( gene ),                      |
|    ( ralph ),( ike  ),( nick  ),( ken  ),( neal ))             |
|                                                                |
|  ( married ,                                                   |
|    ( al  ),( greg ),( john ),( lance ),( phil ),               |
|    ( don ),( ned  ),( karl ),( earl  ),( ol   ))               |
|                                                                |
|  ( divorced ,                                                  |
|    ( jim ),( george ),( rick ),( dave ))                       |
|                                                                |
|  (( bookie ),( burglar ),( pusher ))                           |
|                                                                |
|  ( bookie ,                                                    |
|    ( sam  ),( clyde ),( mike ),( doug ),                       |
|    ( pete ),( ike   ),( ned  ),( karl ),( neal ))              |
|                                                                |
|  ( burglar ,                                                   |
|    ( al     ),( jim ),( john ),( lance ),                      |
|    ( george ),( don ),( ken  ),( earl  ),( rick ))             |
|                                                                |
|  ( pusher ,                                                    |
|    ( art   ),( greg ),( fred ),( gene ),                       |
|    ( ralph ),( phil ),( nick ),( ol   ),( dave ))              |
|                                                                |
o----------------------------------------------------------------o

We now apply Study to the proposition that
defines the Jets and Sharks knowledge base,
that is to say, the knowledge that we are
given about the Jets and Sharks, not the
knowledge that the Jets and Sharks have.

With a query on the name "ken" we obtain the following
output, giving all of the features associated with Ken:

Sense Outline:  JAS & Ken
o---------------------------------------o
| ken                                   |
|  sharks                               |
|   20's                                |
|    high_school                        |
|     single                            |
|      burglar                          |
o---------------------------------------o

With a query on the two features "college" and "sharks"
we obtain the following outline of all of the features
that satisfy these constraints:

Sense Outline:  JAS & College & Sharks
o---------------------------------------o
| college                               |
|  sharks                               |
|   30's                                |
|    married                            |
|     bookie                            |
|      ned                              |
|     burglar                           |
|      don                              |
|     pusher                            |
|      phil                             |
|      ol                               |
o---------------------------------------o

From this we discover that all college Sharks
are 30-something and married.  Furthermore,
we have a complete listing of their names
broken down by occupation, as I have no
doubt that all of them will be in time.

Reference:

| McClelland, James L. & Rumelhart, David E.,
|'Explorations in Parallel Distributed Processing:
| A Handbook of Models, Programs, and Exercises',
| MIT Press, Cambridge, MA, 1988.
```
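The two queries in the note above can be replayed by hand.  The sketch below is my own ad hoc Python transcription of the group memberships in the JAS Log file, not the Study program's data structures; the helper names `features_of` and `members_with` are mine.

```python
# Group memberships transcribed from the JAS Log file above.
GROUPS = {
    "jets": {"art", "al", "sam", "clyde", "mike", "jim", "greg", "john",
             "doug", "lance", "george", "pete", "fred", "gene", "ralph"},
    "sharks": {"phil", "ike", "nick", "don", "ned", "karl", "ken",
               "earl", "rick", "ol", "neal", "dave"},
    "20's": {"sam", "jim", "greg", "john", "lance", "george", "pete",
             "fred", "gene", "ken"},
    "30's": {"al", "mike", "doug", "ralph", "phil", "ike", "nick",
             "don", "ned", "rick", "ol", "neal", "dave"},
    "40's": {"art", "clyde", "karl", "earl"},
    "junior_high": {"art", "al", "clyde", "mike", "jim", "john",
                    "lance", "george", "ralph", "ike"},
    "high_school": {"greg", "doug", "pete", "fred", "nick", "karl",
                    "ken", "earl", "rick", "neal", "dave"},
    "college": {"sam", "gene", "phil", "don", "ned", "ol"},
    "single": {"art", "sam", "clyde", "mike", "doug", "pete", "fred",
               "gene", "ralph", "ike", "nick", "ken", "neal"},
    "married": {"al", "greg", "john", "lance", "phil", "don", "ned",
                "karl", "earl", "ol"},
    "divorced": {"jim", "george", "rick", "dave"},
    "bookie": {"sam", "clyde", "mike", "doug", "pete", "ike", "ned",
               "karl", "neal"},
    "burglar": {"al", "jim", "john", "lance", "george", "don", "ken",
                "earl", "rick"},
    "pusher": {"art", "greg", "fred", "gene", "ralph", "phil", "nick",
               "ol", "dave"},
}

def features_of(name):
    """All group features to which a given individual belongs."""
    return {g for g, members in GROUPS.items() if name in members}

def members_with(*features):
    """All individuals belonging to every one of the given groups."""
    return set.intersection(*(GROUPS[f] for f in features))
```

Here `features_of("ken")` yields sharks, 20's, high_school, single, and burglar, matching the first Sense outline, while `members_with("college", "sharks")` yields don, ned, ol, and phil, all of whom fall under 30's and married, matching the second.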

### ZOL. Note 11

```One of the issues over which I have pondered, weak and weary,
burning not a few barrels of midnight oil these past elventeen
years or so, is the relationship among divers and sundry "styles
of inference", by which I mean particular choices of inference
paradigms, rules, or schemata.  The chief breakpoint seems to lie between
the information-keeping and information-losing modes of
inference, also called "equational" and "implicational",
or "preservative" and "projective" brands, respectively.

Since it appears to be mostly the implicational and projective
styles of inference that are more familiar to folks hereabouts,
I will start off this subdiscussion by introducing a number of
risibly simple but reasonably manageable examples of the other
brand of inference, treated as equational reasoning approaches
to problems about satisfying "zeroth order constraints" (ZOC's).

| Applications of a Propositional Calculator:
| Constraint Satisfaction Problems.
| Jon Awbrey, April 24, 1995.

The Four Houses Puzzle

Constructed on the model of the "Five Houses Puzzle" in [VaH, 132-136].

Problem Statement.  Four people with different nationalities live in the
first four houses of a street.  They practice four distinct professions,
and each of them has a favorite animal, all of them different.  The four
houses are painted different colors.  The following facts are known:

|  1.  The Englander lives in the first house on the left.
|  2.  The doctor lives in the second house.
|  3.  The third house is painted red.
|  4.  The zebra is a favorite in the fourth house.
|  5.  The person in the first house has a dog.
|  6.  The Japanese lives in the third house.
|  7.  The red house is on the left of the yellow one.
|  8.  They breed snails in the house to the right of the doctor.
|  9.  The Englander lives next to the green house.
| 10.  The fox is in the house next to the diplomat.
| 11.  The Spaniard likes zebras.
| 12.  The Japanese is a painter.
| 13.  The Italian lives in the green house.
| 14.  The violinist lives in the yellow house.
| 15.  The dog is a pet in the blue house.
| 16.  The doctor keeps a fox.

The problem is to find all of the assignments of
features to houses that satisfy these requirements.

Logical Input File:  House^4.Log
o---------------------------------------------------------------------o
|                                                                     |
|  eng_1  doc_2  red_3  zeb_4  dog_1  jap_3                           |
|                                                                     |
|  (( red_1  yel_2 ),( red_2  yel_3 ),( red_3  yel_4 ))               |
|  (( doc_1  sna_2 ),( doc_2  sna_3 ),( doc_3  sna_4 ))               |
|                                                                     |
|  (( eng_1  gre_2 ),                                                 |
|   ( eng_2  gre_3 ),( eng_2  gre_1 ),                                |
|   ( eng_3  gre_4 ),( eng_3  gre_2 ),                                |
|                    ( eng_4  gre_3 ))                                |
|                                                                     |
|  (( dip_1  fox_2 ),                                                 |
|   ( dip_2  fox_3 ),( dip_2  fox_1 ),                                |
|   ( dip_3  fox_4 ),( dip_3  fox_2 ),                                |
|                    ( dip_4  fox_3 ))                                |
|                                                                     |
|  (( spa_1 zeb_1 ),( spa_2 zeb_2 ),( spa_3 zeb_3 ),( spa_4 zeb_4 ))  |
|  (( jap_1 pai_1 ),( jap_2 pai_2 ),( jap_3 pai_3 ),( jap_4 pai_4 ))  |
|  (( ita_1 gre_1 ),( ita_2 gre_2 ),( ita_3 gre_3 ),( ita_4 gre_4 ))  |
|                                                                     |
|  (( yel_1 vio_1 ),( yel_2 vio_2 ),( yel_3 vio_3 ),( yel_4 vio_4 ))  |
|  (( blu_1 dog_1 ),( blu_2 dog_2 ),( blu_3 dog_3 ),( blu_4 dog_4 ))  |
|                                                                     |
|  (( doc_1 fox_1 ),( doc_2 fox_2 ),( doc_3 fox_3 ),( doc_4 fox_4 ))  |
|                                                                     |
|  ((                                                                 |
|                                                                     |
|  (( eng_1 ),( eng_2 ),( eng_3 ),( eng_4 ))                          |
|  (( spa_1 ),( spa_2 ),( spa_3 ),( spa_4 ))                          |
|  (( jap_1 ),( jap_2 ),( jap_3 ),( jap_4 ))                          |
|  (( ita_1 ),( ita_2 ),( ita_3 ),( ita_4 ))                          |
|                                                                     |
|  (( eng_1 ),( spa_1 ),( jap_1 ),( ita_1 ))                          |
|  (( eng_2 ),( spa_2 ),( jap_2 ),( ita_2 ))                          |
|  (( eng_3 ),( spa_3 ),( jap_3 ),( ita_3 ))                          |
|  (( eng_4 ),( spa_4 ),( jap_4 ),( ita_4 ))                          |
|                                                                     |
|  (( gre_1 ),( gre_2 ),( gre_3 ),( gre_4 ))                          |
|  (( red_1 ),( red_2 ),( red_3 ),( red_4 ))                          |
|  (( yel_1 ),( yel_2 ),( yel_3 ),( yel_4 ))                          |
|  (( blu_1 ),( blu_2 ),( blu_3 ),( blu_4 ))                          |
|                                                                     |
|  (( gre_1 ),( red_1 ),( yel_1 ),( blu_1 ))                          |
|  (( gre_2 ),( red_2 ),( yel_2 ),( blu_2 ))                          |
|  (( gre_3 ),( red_3 ),( yel_3 ),( blu_3 ))                          |
|  (( gre_4 ),( red_4 ),( yel_4 ),( blu_4 ))                          |
|                                                                     |
|  (( pai_1 ),( pai_2 ),( pai_3 ),( pai_4 ))                          |
|  (( dip_1 ),( dip_2 ),( dip_3 ),( dip_4 ))                          |
|  (( vio_1 ),( vio_2 ),( vio_3 ),( vio_4 ))                          |
|  (( doc_1 ),( doc_2 ),( doc_3 ),( doc_4 ))                          |
|                                                                     |
|  (( pai_1 ),( dip_1 ),( vio_1 ),( doc_1 ))                          |
|  (( pai_2 ),( dip_2 ),( vio_2 ),( doc_2 ))                          |
|  (( pai_3 ),( dip_3 ),( vio_3 ),( doc_3 ))                          |
|  (( pai_4 ),( dip_4 ),( vio_4 ),( doc_4 ))                          |
|                                                                     |
|  (( dog_1 ),( dog_2 ),( dog_3 ),( dog_4 ))                          |
|  (( zeb_1 ),( zeb_2 ),( zeb_3 ),( zeb_4 ))                          |
|  (( fox_1 ),( fox_2 ),( fox_3 ),( fox_4 ))                          |
|  (( sna_1 ),( sna_2 ),( sna_3 ),( sna_4 ))                          |
|                                                                     |
|  (( dog_1 ),( zeb_1 ),( fox_1 ),( sna_1 ))                          |
|  (( dog_2 ),( zeb_2 ),( fox_2 ),( sna_2 ))                          |
|  (( dog_3 ),( zeb_3 ),( fox_3 ),( sna_3 ))                          |
|  (( dog_4 ),( zeb_4 ),( fox_4 ),( sna_4 ))                          |
|                                                                     |
|  ))                                                                 |
|                                                                     |
o---------------------------------------------------------------------o

Sense Outline:  House^4.Sen
o-----------------------------o
| eng_1                       |
|  doc_2                      |
|   red_3                     |
|    zeb_4                    |
|     dog_1                   |
|      jap_3                  |
|       yel_4                 |
|        sna_3                |
|         gre_2               |
|          dip_1              |
|           fox_2             |
|            spa_4            |
|             pai_3           |
|              ita_2          |
|               vio_4         |
|                blu_1        |
o-----------------------------o

Table 1.  Solution to the Four Houses Puzzle
o------------o------------o------------o------------o------------o
|            | House 1    | House 2    | House 3    | House 4    |
o------------o------------o------------o------------o------------o
| Nation     | England    | Italy      | Japan      | Spain      |
| Color      | blue       | green      | red        | yellow     |
| Profession | diplomat   | doctor     | painter    | violinist  |
| Animal     | dog        | fox        | snails     | zebra      |
o------------o------------o------------o------------o------------o

Fabric Knowledge Base

Based on the example in [MaW, pages 8-16].

Logical Input File:  Fab.Log
o---------------------------------------------------------------------o
|                                                                     |
| (has_floats , plain_weave )                                         |
| (has_floats ,(twill_weave ),(satin_weave ))                         |
|                                                                     |
| (plain_weave ,                                                      |
|  (plain_weave  one_color ),                                         |
|  (color_groups  ),                                                  |
|  (grouped_warps ),                                                  |
|  (some_thicker  ),                                                  |
|  (crossed_warps ),                                                  |
|  (plain_weave  flannel ))                                           |
|                                                                     |
| (plain_weave  one_color  cotton  balanced  smooth  ,(percale ))     |
| (plain_weave  one_color  cotton            sheer   ,(organdy ))     |
| (plain_weave  one_color  silk              sheer   ,(organza ))     |
|                                                                     |
| (plain_weave  color_groups  warp_stripe  fill_stripe ,(plaid   ))   |
| (plaid        equal_stripe                           ,(gingham ))   |
|                                                                     |
| (plain_weave  grouped_warps ,(basket_weave ))                       |
|                                                                     |
|  (type_2_to_1 ),                                                    |
|  (type_2_to_2 ),                                                    |
|  (type_4_to_4 ))                                                    |
|                                                                     |
| (basket_weave  typed  type_2_to_1   thicker_fill   ,(oxford      )) |
| (basket_weave  typed (type_2_to_2 ,                                 |
|                       type_4_to_4 ) same_thickness ,(monks_cloth )) |
| (basket_weave (typed )              rough  open    ,(hopsacking  )) |
|                                                                     |
|                                                                     |
| (basket_weave ,(oxford ),(monks_cloth ),(hopsacking ))              |
|                                                                     |
| (plain_weave   some_thicker ,(ribbed_weave ))                       |
|                                                                     |
| (ribbed_weave ,(small_rib ),(medium_rib ),(heavy_rib ))             |
| (ribbed_weave ,(flat_rib  ),(round_rib ))                           |
|                                                                     |
| (ribbed_weave  thicker_fill          ,(cross_ribbed ))              |
| (cross_ribbed  small_rib   flat_rib  ,(faille       ))              |
| (cross_ribbed  small_rib   round_rib ,(grosgrain    ))              |
| (cross_ribbed  medium_rib  round_rib ,(bengaline    ))              |
| (cross_ribbed  heavy_rib   round_rib ,(ottoman      ))              |
|                                                                     |
| (cross_ribbed ,(faille ),(grosgrain ),(bengaline ),(ottoman ))      |
|                                                                     |
| (plain_weave  crossed_warps ,(leno_weave  ))                        |
| (leno_weave   open          ,(marquisette ))                        |
| (plain_weave  loop_threads  ,(pile_weave ))                         |
|                                                                     |
| (pile_weave ,(fill_pile ),(warp_pile ))                             |
| (pile_weave ,(cut ),(uncut ))                                       |
|                                                                     |
| (pile_weave  warp_pile  cut                   ,(velvet    ))        |
| (pile_weave  fill_pile  cut    aligned_pile   ,(corduroy  ))        |
| (pile_weave  fill_pile  cut    staggered_pile ,(velveteen ))        |
| (pile_weave  fill_pile  uncut  reversible     ,(terry     ))        |
|                                                                     |
| (pile_weave  fill_pile  cut ( (aligned_pile , staggered_pile ) ))   |
|                                                                     |
| (pile_weave ,(velvet ),(corduroy ),(velveteen ),(terry ))           |
|                                                                     |
| (plain_weave ,                                                      |
|  (percale    ),(organdy     ),(organza    ),(plaid   ),             |
|  (oxford     ),(monks_cloth ),(hopsacking ),                        |
|  (faille     ),(grosgrain   ),(bengaline  ),(ottoman ),             |
|  (leno_weave ),(pile_weave  ),(plain_weave   flannel ))             |
|                                                                     |
| (twill_weave ,                                                      |
|  (warp_faced ),                                                     |
|  (filling_faced ),                                                  |
|  (even_twill ),                                                     |
|  (twill_weave  flannel ))                                           |
|                                                                     |
| (twill_weave  warp_faced  colored_warp  white_fill ,(denim ))       |
| (twill_weave  warp_faced  one_color                ,(drill ))       |
| (twill_weave  even_twill  diagonal_rib             ,(serge ))       |
|                                                                     |
| (twill_weave  warp_faced (                                          |
|  (one_color ,                                                       |
|   ((colored_warp )(white_fill )) )                                  |
| ))                                                                  |
|                                                                     |
| (twill_weave  warp_faced ,(denim ),(drill ))                        |
| (twill_weave  even_twill ,(serge ))                                 |
|                                                                     |
| ((                                                                  |
|     (  ((plain_weave )(twill_weave ))                               |
|        ((cotton      )(wool        )) napped ,(flannel ))           |
| ))                                                                  |
|                                                                     |
| (satin_weave ,(warp_floats ),(fill_floats ))                        |
|                                                                     |
| (satin_weave ,(satin_weave smooth ),(satin_weave napped ))          |
| (satin_weave ,(satin_weave cotton ),(satin_weave silk   ))          |
|                                                                     |
| (satin_weave  warp_floats  smooth         ,(satin    ))             |
| (satin_weave  fill_floats  smooth         ,(sateen   ))             |
| (satin_weave               napped  cotton ,(moleskin ))             |
|                                                                     |
| (satin_weave ,(satin ),(sateen ),(moleskin ))                       |
|                                                                     |
o---------------------------------------------------------------------o

References

| Maier, David & Warren, David S.,
|'Computing with Logic:  Logic Programming with Prolog',
| Benjamin/Cummings, Menlo Park, CA, 1988.

| Van Hentenryck, Pascal,
|'Constraint Satisfaction in Logic Programming',
| MIT Press, Cambridge, MA, 1989.
```

### ZOL. Note 12

```One of the things that Peircean semiotics will need as it develops
is what many other formal sciences and their applied studies have
found it useful to build up, namely, a stock of standard examples,
graded in complexity from the weaning to the teething to the wild
and the overweening.  The present series, for the most part being
borrowed from scattered subfields of AI and cognitive science and
transcribed into the "Alpha Plus" language for zeroth order logic,
is intended to provide an intermediate sampler of non-trivial but
still relatively simple examples of the logical kind.  This means
that the "signs" in the relevant sign relations are propositional
expressions in the formal language of choice, that the process of
semiotic metamorphosis that possesses the most interest for us is
one that transforms expressions into expressions while preserving
their logical equivalence, and that the space of expressions will
partition along the lines of our field interest into sundry sorts
of "logical equivalence classes" (LEC's).

I think that it might be a good idea to go back to a simpler example
of a constraint satisfaction problem, and to discuss the elements of
its expression as a ZOT in a less cluttered setting before advancing
onward once again to problems on the order of the Four Houses Puzzle.

| Applications of a Propositional Calculator:
| Constraint Satisfaction Problems.
| Jon Awbrey, April 24, 1995.

Graph Coloring

Based on the discussion in [Wil, page 196].

One is given three colors, say, orange, silver, indigo,
and a graph on four nodes that has the following shape:

|          1
|          o
|         / \
|        /   \
|     2 o-----o 4
|        \   /
|         \ /
|          o
|          3

The problem is to color the nodes of the graph
in such a way that no two nodes joined by
an edge get the same color.

The objective situation that is to be achieved can be represented
in a so-called "declarative" fashion, in effect, by employing the
cactus language as a very simple sort of declarative programming
language, and depicting the prospective solution to the problem
as a ZOT.

To do this, begin by declaring the following set of
twelve boolean variables or "zeroth order features":

{1_orange, 1_silver, 1_indigo,
 2_orange, 2_silver, 2_indigo,
 3_orange, 3_silver, 3_indigo,
 4_orange, 4_silver, 4_indigo}

The interpretation to keep in mind will be such that
a feature name of the form "<node i>_<color j>"
says that node i is assigned color j.

Logical Input File:  Color.Log
o----------------------------------------------------------------------o
|                                                                      |
|  (( 1_orange ),( 1_silver ),( 1_indigo ))                            |
|  (( 2_orange ),( 2_silver ),( 2_indigo ))                            |
|  (( 3_orange ),( 3_silver ),( 3_indigo ))                            |
|  (( 4_orange ),( 4_silver ),( 4_indigo ))                            |
|                                                                      |
|  ( 1_orange  2_orange )( 1_silver  2_silver )( 1_indigo  2_indigo )  |
|  ( 1_orange  4_orange )( 1_silver  4_silver )( 1_indigo  4_indigo )  |
|  ( 2_orange  3_orange )( 2_silver  3_silver )( 2_indigo  3_indigo )  |
|  ( 2_orange  4_orange )( 2_silver  4_silver )( 2_indigo  4_indigo )  |
|  ( 3_orange  4_orange )( 3_silver  4_silver )( 3_indigo  4_indigo )  |
|                                                                      |
o----------------------------------------------------------------------o

The first stanza of verses declares that
every node is assigned exactly one color.

The second stanza of verses declares that
no adjacent nodes get the very same color.

Each satisfying interpretation of this ZOT
that is also a program corresponds to what
graffitists call a "coloring" of the graph.

Theme One's Model interpreter, when we set
it to work on this ZOT, will array  before
our eyes all of the colorings of the graph.

Sense Outline:  Color.Sen
o-----------------------------o
| 1_orange                    |
|  2_silver                   |
|   3_orange                  |
|    4_indigo                 |
|  2_indigo                   |
|   3_orange                  |
|    4_silver                 |
| 1_silver                    |
|  2_orange                   |
|   3_silver                  |
|    4_indigo                 |
|  2_indigo                   |
|   3_silver                  |
|    4_orange                 |
| 1_indigo                    |
|  2_orange                   |
|   3_indigo                  |
|    4_silver                 |
|  2_silver                   |
|   3_indigo                  |
|    4_orange                 |
o-----------------------------o

Reference

| Wilf, Herbert S.,
|'Algorithms and Complexity',
| Prentice-Hall, Englewood Cliffs, NJ, 1986.

Nota Bene.  Some printings of this book contain
an erroneous Figure at this point, not exactly
matching the Example as described in the text.
```
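
The two stanzas of Color.Log are easy to replay by brute force. The sketch below is mine, not part of the original Example or of Theme One: it enumerates every assignment of the three colors to the four nodes, so "exactly one color per node" holds by construction, and keeps only the assignments in which no edge joins two like-colored nodes.

```python
from itertools import product

# Edges of the four-node graph in the Example:
# two triangles sharing the edge 2-4.
edges = [(1, 2), (1, 4), (2, 3), (2, 4), (3, 4)]
colors = ["orange", "silver", "indigo"]

# Assign one color to each of nodes 1..4 and keep the
# assignments in which no edge gets a repeated color.
colorings = [
    dict(zip((1, 2, 3, 4), pick))
    for pick in product(colors, repeat=4)
    if all(pick[i - 1] != pick[j - 1] for i, j in edges)
]

print(len(colorings))  # prints 6
```

This prints 6, matching the six satisfying interpretations arrayed in Color.Sen.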

### ZOL. Note 13

```Let us continue to examine the properties of the cactus language
as a minimal style of declarative programming language.  Even in
the likes of this zeroth order microcosm one can observe, and on
a good day still more clearly for the lack of other distractions,
many of the buzz words that will spring into full bloom, almost
as if from nowhere, to become the first order of business in the
latter day logical organa, plus combinators, plus lambda calculi.

By way of homage to the classics of the art, I can hardly pass
this way without paying my dues to the next sample of examples.

N Queens Problem

I will give the ZOT that describes the N Queens Problem for N = 5,
since that is the most that I and my old 286 could do when last I
wrote up this Example.

The problem is now to write a "zeroth order program" (ZOP) that
describes the following objective:  To place 5 chess queens on
a 5 by 5 chessboard so that no queen attacks any other queen.

It is clear that there can be at most one queen on each row
of the board and so by dint of regal necessity, exactly one
queen in each row of the desired array.  This gambit allows
us to reduce the problem to one of picking a permutation of
five things in five places, and this affords us sufficient
clue to begin down a likely path toward the intended object,
by recruiting the following phalanx of 25 logical variables:

Literal Input File:  Q5.Lit
o---------------------------------------o
|                                       |
|  q1_r1, q1_r2, q1_r3, q1_r4, q1_r5,   |
|  q2_r1, q2_r2, q2_r3, q2_r4, q2_r5,   |
|  q3_r1, q3_r2, q3_r3, q3_r4, q3_r5,   |
|  q4_r1, q4_r2, q4_r3, q4_r4, q4_r5,   |
|  q5_r1, q5_r2, q5_r3, q5_r4, q5_r5.   |
|                                       |
o---------------------------------------o

Thus we seek to define a function, of abstract type f : %B%^25 -> %B%,
whose fibre of truth f^(-1)(%1%) is a set of interpretations, each of
whose elements bears the abstract type of a point in the space %B%^25,
and whose reading will inform us of our desired set of configurations.

Logical Input File:  Q5.Log
o------------------------------------------------------------o
|                                                            |
|  ((q1_r1 ),(q1_r2 ),(q1_r3 ),(q1_r4 ),(q1_r5 ))            |
|  ((q2_r1 ),(q2_r2 ),(q2_r3 ),(q2_r4 ),(q2_r5 ))            |
|  ((q3_r1 ),(q3_r2 ),(q3_r3 ),(q3_r4 ),(q3_r5 ))            |
|  ((q4_r1 ),(q4_r2 ),(q4_r3 ),(q4_r4 ),(q4_r5 ))            |
|  ((q5_r1 ),(q5_r2 ),(q5_r3 ),(q5_r4 ),(q5_r5 ))            |
|                                                            |
|  ((q1_r1 ),(q2_r1 ),(q3_r1 ),(q4_r1 ),(q5_r1 ))            |
|  ((q1_r2 ),(q2_r2 ),(q3_r2 ),(q4_r2 ),(q5_r2 ))            |
|  ((q1_r3 ),(q2_r3 ),(q3_r3 ),(q4_r3 ),(q5_r3 ))            |
|  ((q1_r4 ),(q2_r4 ),(q3_r4 ),(q4_r4 ),(q5_r4 ))            |
|  ((q1_r5 ),(q2_r5 ),(q3_r5 ),(q4_r5 ),(q5_r5 ))            |
|                                                            |
|  ((                                                        |
|                                                            |
|  (q1_r1 q2_r2 )(q1_r1 q3_r3 )(q1_r1 q4_r4 )(q1_r1 q5_r5 )  |
|                (q2_r2 q3_r3 )(q2_r2 q4_r4 )(q2_r2 q5_r5 )  |
|                              (q3_r3 q4_r4 )(q3_r3 q5_r5 )  |
|                                            (q4_r4 q5_r5 )  |
|                                                            |
|  (q1_r2 q2_r3 )(q1_r2 q3_r4 )(q1_r2 q4_r5 )                |
|                (q2_r3 q3_r4 )(q2_r3 q4_r5 )                |
|                              (q3_r4 q4_r5 )                |
|                                                            |
|  (q1_r3 q2_r4 )(q1_r3 q3_r5 )                              |
|                (q2_r4 q3_r5 )                              |
|                                                            |
|  (q1_r4 q2_r5 )                                            |
|                                                            |
|  (q2_r1 q3_r2 )(q2_r1 q4_r3 )(q2_r1 q5_r4 )                |
|                (q3_r2 q4_r3 )(q3_r2 q5_r4 )                |
|                              (q4_r3 q5_r4 )                |
|                                                            |
|  (q3_r1 q4_r2 )(q3_r1 q5_r3 )                              |
|                (q4_r2 q5_r3 )                              |
|                                                            |
|  (q4_r1 q5_r2 )                                            |
|                                                            |
|  (q1_r5 q2_r4 )(q1_r5 q3_r3 )(q1_r5 q4_r2 )(q1_r5 q5_r1 )  |
|                (q2_r4 q3_r3 )(q2_r4 q4_r2 )(q2_r4 q5_r1 )  |
|                              (q3_r3 q4_r2 )(q3_r3 q5_r1 )  |
|                                            (q4_r2 q5_r1 )  |
|                                                            |
|  (q2_r5 q3_r4 )(q2_r5 q4_r3 )(q2_r5 q5_r2 )                |
|                (q3_r4 q4_r3 )(q3_r4 q5_r2 )                |
|                              (q4_r3 q5_r2 )                |
|                                                            |
|  (q3_r5 q4_r4 )(q3_r5 q5_r3 )                              |
|                (q4_r4 q5_r3 )                              |
|                                                            |
|  (q4_r5 q5_r4 )                                            |
|                                                            |
|  (q1_r4 q2_r3 )(q1_r4 q3_r2 )(q1_r4 q4_r1 )                |
|                (q2_r3 q3_r2 )(q2_r3 q4_r1 )                |
|                              (q3_r2 q4_r1 )                |
|                                                            |
|  (q1_r3 q2_r2 )(q1_r3 q3_r1 )                              |
|                (q2_r2 q3_r1 )                              |
|                                                            |
|  (q1_r2 q2_r1 )                                            |
|                                                            |
|  ))                                                        |
|                                                            |
o------------------------------------------------------------o

The vanguard of this logical regiment consists of two
stock'a'block platoons, the pattern of whose features
is the usual sort of array for conveying permutations.
Between the stations of their respective offices they
serve to warrant that all of the interpretations that
are left standing on the field of valor at the end of
the day will be ones that tell of permutations 5 by 5.
The rest of the ruck and the runt of the mill in this
regimental logos are there to cover the diagonal bias
against attacking queens that is our protocol to suit.

And here is the issue of the day:

Sense Output:  Q5.Sen
o-------------------o
| q1_r1             |
|  q2_r3            |
|   q3_r5           |
|    q4_r2          |
|     q5_r4         | <1>
|  q2_r4            |
|   q3_r2           |
|    q4_r5          |
|     q5_r3         | <2>
| q1_r2             |
|  q2_r4            |
|   q3_r1           |
|    q4_r3          |
|     q5_r5         | <3>
|  q2_r5            |
|   q3_r3           |
|    q4_r1          |
|     q5_r4         | <4>
| q1_r3             |
|  q2_r1            |
|   q3_r4           |
|    q4_r2          |
|     q5_r5         | <5>
|  q2_r5            |
|   q3_r2           |
|    q4_r4          |
|     q5_r1         | <6>
| q1_r4             |
|  q2_r1            |
|   q3_r3           |
|    q4_r5          |
|     q5_r2         | <7>
|  q2_r2            |
|   q3_r5           |
|    q4_r3          |
|     q5_r1         | <8>
| q1_r5             |
|  q2_r2            |
|   q3_r4           |
|    q4_r1          |
|     q5_r3         | <9>
|  q2_r3            |
|   q3_r1           |
|    q4_r4          |
|     q5_r2         | <A>
o-------------------o

The number at least checks with all of the best authorities,
so I can breathe a sigh of relief on that account, at least.
I am sure that there just has to be a more clever way to do
this, that is to say, within the bounds of ZOT reason alone,
but the above is the best that I could figure out with the
time that I had at the time.

References:  [BaC, 166], [VaH, 122], [Wir, 143].

[BaC]  Ball, W.W. Rouse, & Coxeter, H.S.M.,
'Mathematical Recreations and Essays',
13th ed., Dover, New York, NY, 1987.

[VaH]  Van Hentenryck, Pascal,
'Constraint Satisfaction in Logic Programming',
MIT Press, Cambridge, MA, 1989.

[Wir]  Wirth, Niklaus,
'Algorithms + Data Structures = Programs',
Prentice-Hall, Englewood Cliffs, NJ, 1976.

http://mathworld.wolfram.com/QueensProblem.html
http://www.research.att.com/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=000170
```
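
The permutation gambit described above can be replayed in a few lines of modern code. This sketch is an illustration of mine, not Theme One's method: a placement is a permutation p, with p[i] the column of the queen in row i, so the row and column clauses of Q5.Log hold by construction and only the diagonal clauses remain to check.

```python
from itertools import permutations

def queens(n):
    """Non-attacking placements of n queens, one queen per row."""
    return [
        p for p in permutations(range(n))
        # Queens in rows i < j clash iff they share a diagonal,
        # i.e. their column offset equals their row offset j - i.
        if all(abs(p[i] - p[j]) != j - i
               for i in range(n)
               for j in range(i + 1, n))
    ]

print(len(queens(5)))  # prints 10
```

This prints 10, agreeing with the ten solutions <1> through <A> of Q5.Sen, and with the sequence linked above.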

### ZOL. Note 14

```I turn now to another golden oldie of a constraint satisfaction problem
that I would like to give here a slightly new spin, but not so much for
the sake of these trifling novelties as from a sense of old time's ache
and a duty to -- well, what's the opposite of novelty?

Phobic Apollo

| Suppose Peter, Paul, and Jane are musicians.  One of them plays
| saxophone, another plays guitar, and the third plays drums.  As
| it happens, one of them is afraid of things associated with the
| number 13, another of them is afraid of cats, and the third is
| afraid of heights.  You also know that Peter and the guitarist
| skydive, that Paul and the saxophone player enjoy cats, and
| that the drummer lives in apartment 13 on the 13th floor.
|
| Soon we will want to use these facts to reason
| about whether or not certain identity relations
| hold or are excluded.  Assume X(Peter, Guitarist)
| means "the person who is Peter is not the person who
| plays the guitar".  In this notation, the facts become:
|
| 1.  X(Peter, Guitarist)
| 2.  X(Peter, Fears Heights)
| 3.  X(Guitarist, Fears Heights)
| 4.  X(Paul, Fears Cats)
| 5.  X(Paul, Saxophonist)
| 6.  X(Saxophonist, Fears Cats)
| 7.  X(Drummer, Fears 13)
| 8.  X(Drummer, Fears Heights)
|
| Exercise attributed to Kenneth D. Forbus, pages 449-450 in:
| Patrick Henry Winston, 'Artificial Intelligence', 2nd ed.,
| Addison-Wesley, Reading, MA, 1984.

Here is one way to represent these facts in the form of a ZOT
and use it as a logical program to draw a succinct conclusion:

Logical Input File:  ConSat.Log
o-----------------------------------------------------------------------o
|                                                                       |
|  (( pete_plays_guitar ),( pete_plays_sax ),( pete_plays_drums ))      |
|  (( paul_plays_guitar ),( paul_plays_sax ),( paul_plays_drums ))      |
|  (( jane_plays_guitar ),( jane_plays_sax ),( jane_plays_drums ))      |
|                                                                       |
|  (( pete_plays_guitar ),( paul_plays_guitar ),( jane_plays_guitar ))  |
|  (( pete_plays_sax    ),( paul_plays_sax    ),( jane_plays_sax    ))  |
|  (( pete_plays_drums  ),( paul_plays_drums  ),( jane_plays_drums  ))  |
|                                                                       |
|  (( pete_fears_13 ),( pete_fears_cats ),( pete_fears_height ))        |
|  (( paul_fears_13 ),( paul_fears_cats ),( paul_fears_height ))        |
|  (( jane_fears_13 ),( jane_fears_cats ),( jane_fears_height ))        |
|                                                                       |
|  (( pete_fears_13     ),( paul_fears_13     ),( jane_fears_13     ))  |
|  (( pete_fears_cats   ),( paul_fears_cats   ),( jane_fears_cats   ))  |
|  (( pete_fears_height ),( paul_fears_height ),( jane_fears_height ))  |
|                                                                       |
|  ((                                                                   |
|                                                                       |
|  ( pete_plays_guitar )                                                |
|  ( pete_fears_height )                                                |
|                                                                       |
|  ( pete_plays_guitar  pete_fears_height )                             |
|  ( paul_plays_guitar  paul_fears_height )                             |
|  ( jane_plays_guitar  jane_fears_height )                             |
|                                                                       |
|  ( paul_fears_cats )                                                  |
|  ( paul_plays_sax  )                                                  |
|                                                                       |
|  ( pete_plays_sax  pete_fears_cats )                                  |
|  ( paul_plays_sax  paul_fears_cats )                                  |
|  ( jane_plays_sax  jane_fears_cats )                                  |
|                                                                       |
|  ( pete_plays_drums  pete_fears_13 )                                  |
|  ( paul_plays_drums  paul_fears_13 )                                  |
|  ( jane_plays_drums  jane_fears_13 )                                  |
|                                                                       |
|  ( pete_plays_drums  pete_fears_height )                              |
|  ( paul_plays_drums  paul_fears_height )                              |
|  ( jane_plays_drums  jane_fears_height )                              |
|                                                                       |
|  ))                                                                   |
|                                                                       |
o-----------------------------------------------------------------------o

Sense Outline:  ConSat.Sen
o-----------------------------o
| pete_plays_drums            |
|  paul_plays_guitar          |
|   jane_plays_sax            |
|    pete_fears_cats          |
|     paul_fears_13           |
|      jane_fears_height      |
o-----------------------------o

It might be instructive to review various aspects
of how the Theme One Study function actually went.
Just to prove that my program and I really did do
our homework on that Phobic Apollo ConSat problem,
and didn't just provoke some Oracle or other data
base server to give it away, here is the middling
output of the Model function as run on ConSat.Log:

Model Outline:  ConSat.Mod
o-------------------------------------------------o
| pete_plays_guitar -                             |
| (pete_plays_guitar )                            |
|  pete_plays_sax                                 |
|   pete_plays_drums -                            |
|   (pete_plays_drums )                           |
|    paul_plays_sax -                             |
|    (paul_plays_sax )                            |
|     jane_plays_sax -                            |
|     (jane_plays_sax )                           |
|      paul_plays_guitar                          |
|       paul_plays_drums -                        |
|       (paul_plays_drums )                       |
|        jane_plays_guitar -                      |
|        (jane_plays_guitar )                     |
|         jane_plays_drums                        |
|          pete_fears_13                          |
|           pete_fears_cats -                     |
|           (pete_fears_cats )                    |
|            pete_fears_height -                  |
|            (pete_fears_height )                 |
|             paul_fears_13 -                     |
|             (paul_fears_13 )                    |
|              jane_fears_13 -                    |
|              (jane_fears_13 )                   |
|               paul_fears_cats -                 |
|               (paul_fears_cats )                |
|                paul_fears_height -              |
|                (paul_fears_height ) -           |
|          (pete_fears_13 )                       |
|           pete_fears_cats -                     |
|           (pete_fears_cats )                    |
|            pete_fears_height -                  |
|            (pete_fears_height ) -               |
|         (jane_plays_drums ) -                   |
|      (paul_plays_guitar )                       |
|       paul_plays_drums                          |
|        jane_plays_drums -                       |
|        (jane_plays_drums )                      |
|         jane_plays_guitar                       |
|          pete_fears_13                          |
|           pete_fears_cats -                     |
|           (pete_fears_cats )                    |
|            pete_fears_height -                  |
|            (pete_fears_height )                 |
|             paul_fears_13 -                     |
|             (paul_fears_13 )                    |
|              jane_fears_13 -                    |
|              (jane_fears_13 )                   |
|               paul_fears_cats -                 |
|               (paul_fears_cats )                |
|                paul_fears_height -              |
|                (paul_fears_height ) -           |
|          (pete_fears_13 )                       |
|           pete_fears_cats -                     |
|           (pete_fears_cats )                    |
|            pete_fears_height -                  |
|            (pete_fears_height ) -               |
|         (jane_plays_guitar ) -                  |
|       (paul_plays_drums ) -                     |
|  (pete_plays_sax )                              |
|   pete_plays_drums                              |
|    paul_plays_drums -                           |
|    (paul_plays_drums )                          |
|     jane_plays_drums -                          |
|     (jane_plays_drums )                         |
|      paul_plays_guitar                          |
|       paul_plays_sax -                          |
|       (paul_plays_sax )                         |
|        jane_plays_guitar -                      |
|        (jane_plays_guitar )                     |
|         jane_plays_sax                          |
|          pete_fears_13 -                        |
|          (pete_fears_13 )                       |
|           pete_fears_cats                       |
|            pete_fears_height -                  |
|            (pete_fears_height )                 |
|             paul_fears_cats -                   |
|             (paul_fears_cats )                  |
|              jane_fears_cats -                  |
|              (jane_fears_cats )                 |
|               paul_fears_13                     |
|                paul_fears_height -              |
|                (paul_fears_height )             |
|                 jane_fears_13 -                 |
|                 (jane_fears_13 )                |
|                  jane_fears_height *            |
|                  (jane_fears_height ) -         |
|               (paul_fears_13 )                  |
|                paul_fears_height -              |
|                (paul_fears_height ) -           |
|           (pete_fears_cats )                    |
|            pete_fears_height -                  |
|            (pete_fears_height ) -               |
|         (jane_plays_sax ) -                     |
|      (paul_plays_guitar )                       |
|       paul_plays_sax -                          |
|       (paul_plays_sax ) -                       |
|   (pete_plays_drums ) -                         |
o-------------------------------------------------o

This is just the traverse of the "arboreal boolean expansion" (ABE) tree
that the Model function germinates from the propositional expression we
planted in the file ConSat.Log, which works to describe the facts of the
situation in question.  Since there are 18 logical feature names in this
propositional expression, we are literally talking about a function that
enjoys the abstract type f : %B%^18 -> %B%.  If I had wanted to evaluate
this function by expressly writing out its truth table, then it would've
required 2^18 = 262144 rows.  Now I didn't bother to count, but I'm sure
that the above output does not have anywhere near that many lines, so it
must be that my program, and maybe even its author, has done a couple of
things along the way that are moderately intelligent.  At least, we hope.
```
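
The eight facts can also be checked independently of Theme One. The following sketch of mine simply brute-forces the 3! x 3! = 36 joint assignments of instruments and fears to the three musicians and keeps those consistent with facts 1 through 8.

```python
from itertools import permutations

people = ("pete", "paul", "jane")

solutions = []
for plays in permutations(("guitar", "sax", "drums")):
    inst = dict(zip(people, plays))
    # Invert the instrument assignment to name each musician by role.
    who = {instrument: person for person, instrument in inst.items()}
    for feared in permutations(("13", "cats", "height")):
        fear = dict(zip(people, feared))
        facts = (
            inst["pete"] != "guitar",         # 1. X(Peter, Guitarist)
            fear["pete"] != "height",         # 2. X(Peter, Fears Heights)
            fear[who["guitar"]] != "height",  # 3. X(Guitarist, Fears Heights)
            fear["paul"] != "cats",           # 4. X(Paul, Fears Cats)
            inst["paul"] != "sax",            # 5. X(Paul, Saxophonist)
            fear[who["sax"]] != "cats",       # 6. X(Saxophonist, Fears Cats)
            fear[who["drums"]] != "13",       # 7. X(Drummer, Fears 13)
            fear[who["drums"]] != "height",   # 8. X(Drummer, Fears Heights)
        )
        if all(facts):
            solutions.append((inst, fear))

print(solutions)
```

The search leaves exactly one assignment standing: Pete plays drums and fears cats, Paul plays guitar and fears 13, Jane plays sax and fears heights, just as ConSat.Sen reports.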

### ZOL. Note 15

```Planted three new roses in our garden today:
a Renoir, a Blueberry Hill, a Love & Peace,
a rose for every season of Man and Woman.

Let's now pause and reflect on the mix of abstract and concrete material
that we have cobbled together in spectacle of this "World Of Zero" (WOZ),
since I believe that we may have seen enough, if we look at it right, to
illustrate a few of the more salient phenomena that would normally begin
to weigh in as a major force only on a much larger scale.  Now, it's not
exactly like this impoverished sample, all by itself, could determine us
to draw just the right generalizations, or force us to see the shape and
flow of its immanent law -- it is much too sparse a scattering of points
to tease out the lines of its up and coming generations quite so clearly --
but it can be seen to exemplify many of the more significant themes that
we know evolve in more substantial environments, that is, On Beyond Zero,
since we have already seen them, "tho' obscur'd", in those higher realms.

One of the themes that I'd like to keep an eye on as the investigation
develops is the subject that might be called "computation as semiosis".

In this light, any calculus worth its salt must be capable of helping
us do two things, calculation, of course, but also analysis.  This is
probably one of the reasons why the ordinary sort of differential and
integral calculus over quantitative domains is frequently referred to
as "real analysis", or even just "analysis".  It seems quite clear to
me that any adequate logical calculus, in many ways expected to serve
as a qualitative analogue of analytic geometry in the way that it can
be used to describe configurations in logically circumscribed domains,
ought to qualify in both dimensions, namely, analysis and computation.

With all of these various features of the situation in mind, then, we come
to the point of viewing analysis and computation as just so many different
kinds of "sign transformations in respect of pragmata" (STIROP's).  Taking
this insight to heart, let us next work to assemble a comprehension of our
concrete examples, set in the medium of the abstract calculi that allow us
to express their qualitative patterns, that may hope to be an increment or
two less inchoate than we have seen so far, and that may even permit us to
catch the action of these fading fleeting sign transformations on the wing.

Here is how I picture our latest round of examples
as filling out the framework of this investigation:

o-----------------------------o-----------------------------o
|     Objective Framework     |   Interpretive Framework    |
o-----------------------------o-----------------------------o
|                                                           |
|                               o  s_1 = Logue(x)    |      |
|                              /                     |      |
|                             /                      |      |
|                            @                       |      |
|                          ·  \                      |      |
|                        ·     \                     |      |
|                      ·        o  i_1 = Model(x)    v      |
|                    ·          o  s_2 = Model(x)    |      |
|                  ·           /                     |      |
|                ·            /                      |      |
|  Object = x  o · · · · · · @                       |      |
|                ·            \                      |      |
|                  ·           \                     |      |
|                    ·          o  i_2 = Tenor(x)    v      |
|                      ·        o  s_3 = Tenor(x)    |      |
|                        ·     /                     |      |
|                          ·  /                      |      |
|                            @                       |      |
|                             \                      |      |
|                              \                     |      |
|                               o  i_3 = Sense(x)    v      |
|                                                           |
o-----------------------------------------------------------o
Figure 1.  Computation As Semiotic Transformation

The Figure shows three distinct sign triples of the form <o, s, i>, where
o = ostensible objective = the observed, indicated, or intended situation.

| A.  <x, Logue(x), Model(x)>
|
| B.  <x, Model(x), Tenor(x)>
|
| C.  <x, Tenor(x), Sense(x)>

Let us bring these several signs together in one place,
to compare and contrast their common and their diverse
characters, and to think about why we make such a fuss
about passing from one to the other in the first place.

Table 1.  Logue(x) = Consat.Log
o-----------------------------------------------------------------------o
|                                                                       |
|  (( pete_plays_guitar ),( pete_plays_sax ),( pete_plays_drums ))      |
|  (( paul_plays_guitar ),( paul_plays_sax ),( paul_plays_drums ))      |
|  (( jane_plays_guitar ),( jane_plays_sax ),( jane_plays_drums ))      |
|                                                                       |
|  (( pete_plays_guitar ),( paul_plays_guitar ),( jane_plays_guitar ))  |
|  (( pete_plays_sax    ),( paul_plays_sax    ),( jane_plays_sax    ))  |
|  (( pete_plays_drums  ),( paul_plays_drums  ),( jane_plays_drums  ))  |
|                                                                       |
|  (( pete_fears_13 ),( pete_fears_cats ),( pete_fears_height ))        |
|  (( paul_fears_13 ),( paul_fears_cats ),( paul_fears_height ))        |
|  (( jane_fears_13 ),( jane_fears_cats ),( jane_fears_height ))        |
|                                                                       |
|  (( pete_fears_13     ),( paul_fears_13     ),( jane_fears_13     ))  |
|  (( pete_fears_cats   ),( paul_fears_cats   ),( jane_fears_cats   ))  |
|  (( pete_fears_height ),( paul_fears_height ),( jane_fears_height ))  |
|                                                                       |
|  ((                                                                   |
|                                                                       |
|  ( pete_plays_guitar )                                                |
|  ( pete_fears_height )                                                |
|                                                                       |
|  ( pete_plays_guitar  pete_fears_height )                             |
|  ( paul_plays_guitar  paul_fears_height )                             |
|  ( jane_plays_guitar  jane_fears_height )                             |
|                                                                       |
|  ( paul_fears_cats )                                                  |
|  ( paul_plays_sax  )                                                  |
|                                                                       |
|  ( pete_plays_sax  pete_fears_cats )                                  |
|  ( paul_plays_sax  paul_fears_cats )                                  |
|  ( jane_plays_sax  jane_fears_cats )                                  |
|                                                                       |
|  ( pete_plays_drums  pete_fears_13 )                                  |
|  ( paul_plays_drums  paul_fears_13 )                                  |
|  ( jane_plays_drums  jane_fears_13 )                                  |
|                                                                       |
|  ( pete_plays_drums  pete_fears_height )                              |
|  ( paul_plays_drums  paul_fears_height )                              |
|  ( jane_plays_drums  jane_fears_height )                              |
|                                                                       |
|  ))                                                                   |
|                                                                       |
o-----------------------------------------------------------------------o

Table 2.  Model(x) = Consat.Mod
o-------------------------------------------------o
| pete_plays_guitar -                             |
| (pete_plays_guitar )                            |
|  pete_plays_sax                                 |
|   pete_plays_drums -                            |
|   (pete_plays_drums )                           |
|    paul_plays_sax -                             |
|    (paul_plays_sax )                            |
|     jane_plays_sax -                            |
|     (jane_plays_sax )                           |
|      paul_plays_guitar                          |
|       paul_plays_drums -                        |
|       (paul_plays_drums )                       |
|        jane_plays_guitar -                      |
|        (jane_plays_guitar )                     |
|         jane_plays_drums                        |
|          pete_fears_13                          |
|           pete_fears_cats -                     |
|           (pete_fears_cats )                    |
|            pete_fears_height -                  |
|            (pete_fears_height )                 |
|             paul_fears_13 -                     |
|             (paul_fears_13 )                    |
|              jane_fears_13 -                    |
|              (jane_fears_13 )                   |
|               paul_fears_cats -                 |
|               (paul_fears_cats )                |
|                paul_fears_height -              |
|                (paul_fears_height ) -           |
|          (pete_fears_13 )                       |
|           pete_fears_cats -                     |
|           (pete_fears_cats )                    |
|            pete_fears_height -                  |
|            (pete_fears_height ) -               |
|         (jane_plays_drums ) -                   |
|      (paul_plays_guitar )                       |
|       paul_plays_drums                          |
|        jane_plays_drums -                       |
|        (jane_plays_drums )                      |
|         jane_plays_guitar                       |
|          pete_fears_13                          |
|           pete_fears_cats -                     |
|           (pete_fears_cats )                    |
|            pete_fears_height -                  |
|            (pete_fears_height )                 |
|             paul_fears_13 -                     |
|             (paul_fears_13 )                    |
|              jane_fears_13 -                    |
|              (jane_fears_13 )                   |
|               paul_fears_cats -                 |
|               (paul_fears_cats )                |
|                paul_fears_height -              |
|                (paul_fears_height ) -           |
|          (pete_fears_13 )                       |
|           pete_fears_cats -                     |
|           (pete_fears_cats )                    |
|            pete_fears_height -                  |
|            (pete_fears_height ) -               |
|         (jane_plays_guitar ) -                  |
|       (paul_plays_drums ) -                     |
|  (pete_plays_sax )                              |
|   pete_plays_drums                              |
|    paul_plays_drums -                           |
|    (paul_plays_drums )                          |
|     jane_plays_drums -                          |
|     (jane_plays_drums )                         |
|      paul_plays_guitar                          |
|       paul_plays_sax -                          |
|       (paul_plays_sax )                         |
|        jane_plays_guitar -                      |
|        (jane_plays_guitar )                     |
|         jane_plays_sax                          |
|          pete_fears_13 -                        |
|          (pete_fears_13 )                       |
|           pete_fears_cats                       |
|            pete_fears_height -                  |
|            (pete_fears_height )                 |
|             paul_fears_cats -                   |
|             (paul_fears_cats )                  |
|              jane_fears_cats -                  |
|              (jane_fears_cats )                 |
|               paul_fears_13                     |
|                paul_fears_height -              |
|                (paul_fears_height )             |
|                 jane_fears_13 -                 |
|                 (jane_fears_13 )                |
|                  jane_fears_height *            |
|                  (jane_fears_height ) -         |
|               (paul_fears_13 )                  |
|                paul_fears_height -              |
|                (paul_fears_height ) -           |
|           (pete_fears_cats )                    |
|            pete_fears_height -                  |
|            (pete_fears_height ) -               |
|         (jane_plays_sax ) -                     |
|      (paul_plays_guitar )                       |
|       paul_plays_sax -                          |
|       (paul_plays_sax ) -                       |
|   (pete_plays_drums ) -                         |
o-------------------------------------------------o

Table 3.  Tenor(x) = Consat.Ten (Just The Gist Of It)
o-------------------------------------------------o
| (pete_plays_guitar )                            | <01> -
|  (pete_plays_sax )                              | <02> -
|   pete_plays_drums                              | <03> +
|    (paul_plays_drums )                          | <04> -
|     (jane_plays_drums )                         | <05> -
|      paul_plays_guitar                          | <06> +
|       (paul_plays_sax )                         | <07> -
|        (jane_plays_guitar )                     | <08> -
|         jane_plays_sax                          | <09> +
|          (pete_fears_13 )                       | <10> -
|           pete_fears_cats                       | <11> +
|            (pete_fears_height )                 | <12> -
|             (paul_fears_cats )                  | <13> -
|              (jane_fears_cats )                 | <14> -
|               paul_fears_13                     | <15> +
|                (paul_fears_height )             | <16> -
|                 (jane_fears_13 )                | <17> -
|                  jane_fears_height *            | <18> +
o-------------------------------------------------o

Table 4.  Sense(x) = Consat.Sen
o-------------------------------------------------o
| pete_plays_drums                                | <03>
|  paul_plays_guitar                              | <06>
|   jane_plays_sax                                | <09>
|    pete_fears_cats                              | <11>
|     paul_fears_13                               | <15>
|      jane_fears_height                          | <18>
o-------------------------------------------------o

As one proceeds through the subsessions of the Theme One Study session,
the computation transforms its larger "signs", in this case text files,
from one to the next, in the sequence:  Logue, Model, Tenor, and Sense.

Let us see if we can pin down, on sign-theoretic grounds,
why this very sort of exercise is so routinely necessary.

We'll worry later about the proper use of quotation marks
in discussing such a case, where the file name "Yada.Yak"
denotes a piece of text that expresses a proposition that
describes an objective situation or an intentional object.
Whatever the case, it is clear that we are knee and neck
deep in a sign relational situation of modest complexity.

I think that the right sort of analogy might help us
to sort it out, or at least to tell what's important
from the things that are less so.  The paradigm that
comes to mind for me is the type of context in maths
where we talk about the "locus" or the "solution set"
of an equation, and here we think of the equation as
denoting its solution set or describing a locus, say,
a point or a curve or a surface or so on up the scale.

In this figure of speech, we might say for instance:

| x
| is what "x^3 - 3x^2 + 3x - 1 = 0" denotes
| is what "(x-1)^3 = 0" denotes
| is what "1" denotes
| is 1.

Making explicit the assumptive interpretations
that the context probably enfolds in this case,
we assume this description of the solution set:

{x in the Reals : x^3 - 3x^2 + 3x - 1 = 0} = {1}.

In sign relational terms, we have the 3-tuples:

| <x, "x^3 - 3x^2 + 3x - 1 = 0", "(x-1)^3 = 0">
|
| <x, "(x-1)^3 = 0", "1">
|
| <x, "1", "1">

As it turns out we discover that the
object x was really just 1 all along.

But why do we put ourselves through the rigors of these
transformations at all?  If 1 is what we mean, why not
just say "1" in the first place and be done with it?
A person who asks a question like that has forgotten
how we keep getting ourselves into these quandaries,
and who it is that assigns the problems, for it is
Nature herself who is the taskmistress here and the
problems are set in the manner that she determines,
not in the style to which we would like to become
accustomed.  The best that we can demand of our
various and sundry calculi is that they afford
us with the nets and the snares more readily
to catch the shape of the problematic game
as it flies up before us on its own wings,
and only then to tame it to the amenable
demeanors that we find to our liking.

In sum, the first place is not ours to take.
We are but poor second players in this game.

That understood, I can now lay out our present Example
along the lines of this familiar mathematical exercise.

| x
| is what Consat.Log denotes
| is what Consat.Mod denotes
| is what Consat.Ten denotes
| is what Consat.Sen denotes.
```
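
The Logue -> Model -> Tenor -> Sense passage can be put in miniature. The Python sketch below is my own toy stand-in for that computation, not the Theme One program itself: it takes an obscure conjunction of constraints that happens to have a unique model, enumerates the satisfying interpretations by brute force, and projects out the positive facts that the model determines.

```python
from itertools import product

# A toy stand-in (my own sketch, not the Theme One program itself) for the
# Logue -> Model -> Tenor -> Sense passage described above:  begin with an
# obscure propositional expression, enumerate its satisfying interpretations,
# and project out the positive facts that the unique model determines.

# "Logue":  an unorganized description of the situation, here a conjunction
# of exactly-one-true constraints plus two extra clues, over three atoms.
def logue(p, q, r):
    exactly_one = (p or q or r) and not (p and q) and not (p and r) and not (q and r)
    return exactly_one and (not q) and (not r)

# "Model"/"Tenor":  the organized arrangement of satisfying interpretations,
# here found by scanning all 2^3 rows of the truth table.
models = [x for x in product([False, True], repeat=3) if logue(*x)]

# "Sense":  with a unique model, the maximally determinate positive
# implication is just the list of atoms true in that model.
atoms = ["p", "q", "r"]
sense = [a for a, v in zip(atoms, models[0]) if v]

print(models)  # [(True, False, False)]
print(sense)   # ['p']
```

In the Consat Example the corresponding scan would run over k = 18 atoms and 2^18 = 262144 rows, which is exactly why an organized arrangement of the interpretations, rather than a blind tabulation, earns its keep.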

### ZOL. Note 16

```It will be good to keep this picture before us a while longer:

o-----------------------------o-----------------------------o
|     Objective Framework     |   Interpretive Framework    |
o-----------------------------o-----------------------------o
|                                                           |
|                               o  s_1 = Logue(x)    |      |
|                              /                     |      |
|                             /                      |      |
|                            @                       |      |
|                          ·  \                      |      |
|                        ·     \                     |      |
|                      ·        o  i_1 = Model(x)    v      |
|                    ·          o  s_2 = Model(x)    |      |
|                  ·           /                     |      |
|                ·            /                      |      |
|  Object = x  o · · · · · · @                       |      |
|                ·            \                      |      |
|                  ·           \                     |      |
|                    ·          o  i_2 = Tenor(x)    v      |
|                      ·        o  s_3 = Tenor(x)    |      |
|                        ·     /                     |      |
|                          ·  /                      |      |
|                            @                       |      |
|                             \                      |      |
|                              \                     |      |
|                               o  i_3 = Sense(x)    v      |
|                                                           |
o-----------------------------------------------------------o
Figure 1.  Computation As Semiotic Transformation

The reason that I am troubling myself -- and no doubt you --
with the details of this Example is because it highlights
a number of the thistles that we will have to grasp if we
ever want to escape from the traps of YARNBOL and YARWARS
in which so many of our fairweather fiends are seeking to
ensnare us, and not just us -- the whole web of the world.

YARNBOL  =  Yet Another Roman Numeral Based Ontology Language.
YARWARS  =  Yet Another Representation Without A Reasoning System.

In order to avoid this, or to reverse the trend once it gets started,
we just have to remember what a dynamic living process a computation
really is, precisely because it is meant to serve as an iconic image
of dynamic, deliberate, purposeful transformations that we are bound
to go through and to carry out in a hopeful pursuit of the solutions
to the many real live problems that life and society place before us.
So I take it rather seriously.

Okay, back to the grindstone.

The question is:  "Why are these trips necessary?"

How come we don't just have one proper expression
for each situation under the sun, or all possible
suns, I guess, for some, and just use that on any
appearance, instance, occasion of that situation?

Why is it ever necessary to begin with an obscure description
of a situation? -- for that is exactly what the propositional
expression called "Logue(x)", for Example, the Consat.Log file,
really is.

Maybe I need to explain that first.

The first three items of syntax -- Logue(x), Model(x), Tenor(x) --
are all just so many different propositional expressions that
denote one and the same boolean-valued function p : X -> B,
and one whose abstract image we may describe well enough as
a boolean function that has the abstract type q : B^k -> B,
where k happens to be 18 in the present Consat Example.

If we were to write out the truth table for q : B^18 -> B
it would take 2^18 = 262144 rows.  Using a bold letter #x#
for a coordinate tuple, writing #x# = <x_1, ..., x_18>, each
row of the table would have the form <x_1, ..., x_18, q(#x#)>.
And the function q is such that all rows evaluate to 0 save 1.

Each of the four different formats expresses this fact about q
in its own way.  The first three are logically equivalent, and
the last one is the maximally determinate positive implication
of what the others all say.

From this point of view, the logical computation that we went through,
in the sequence Logue, Model, Tenor, Sense, was a process of changing
from an obscure sign of the objective proposition to a more organized
arrangement of its satisfying or unsatisfying interpretations, to the
most succinct possible expression of the same meaning, to an adequate
positive projection of it that is useful enough in the proper context.

This is the sort of mill -- it's called "computation" -- that we have
to be able to put our representations through on a recurrent, regular,
routine basis, that is, if we expect them to have any utility at all.
And it is only when we have started to do that in genuinely effective
and efficient ways, that we can even begin to think about facilitating
any bit of qualitative conceptual analysis through computational means.

And as far as the qualitative side of logical computation
and conceptual analysis goes, we have barely even started.

We are contemplating the sequence of initial and normal forms
for the Consat problem and we have noted the following system
of logical relations, taking the enchained expressions of the
objective situation x in a pairwise associated way, of course:

Logue(x)  <=>  Model(x)  <=>  Tenor(x)  =>  Sense(x).

If we continue to pursue the analogy that we made with the form
of mathematical activity commonly known as "solving equations",
then there are many salient features of this type of logical
problem solving endeavor that suddenly leap into the light.

First of all, we notice the importance of "equational reasoning"
in mathematics, by which I mean, not just the quantitative type
of equation that forms the matter of the process, but also the
qualitative type of equation, or the "logical equivalence",
that connects each expression along the way, right up to
the penultimate stage, when we are satisfied in a given
context to take a projective implication of the total
knowledge of the situation that we have been taking
some pains to preserve at every intermediate stage
of the game.

This general pattern or strategy of inference, working its way through
phases of "equational" or "total information preserving" inference and
phases of "implicational" or "selective information losing" inference,
is actually very common throughout mathematics, and I have in mind to
examine its character in greater detail and in a more general setting.

Just as the barest hint of things to come along these lines, you might
consider the question of what would constitute the equational analogue
of modus ponens, in other words the scheme of inference that goes from
x and x=>y to y.  Well the answer is a scheme of inference that passes
from x and x=>y to x&y, and then being reversible, back again.  I will
explore the rationale and the utility of this gambit in future reports.

One observation that we can make already at this point,
however, is that these schemes of equational reasoning,
or reversible inference, remain poorly developed among
our currently prevailing styles of inference in logic,
their potentials for applied logical software hardly
being broached in our presently available systems.
```
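
The equational analogue of modus ponens proposed above can be checked exhaustively. The following is my own sketch of that check, a four-row truth table confirming that the passage from x and x=>y to x&y is an equivalence, hence reversible, while the passage to y alone loses information.

```python
from itertools import product

# An exhaustive truth-table check (my own sketch) of the gambit above:
# passing from x and x=>y to y loses information, while passing from
# x and x=>y to x&y is a logical equivalence, and hence reversible.
def implies(x, y):
    return (not x) or y

for x, y in product([False, True], repeat=2):
    # Equational, information-preserving form:  x & (x => y)  =  x & y.
    assert (x and implies(x, y)) == (x and y)
    # Implicational, information-losing form:  x & (x => y)  =>  y.
    assert implies(x and implies(x, y), y)

print("equational modus ponens checked on all 4 rows")
```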

### ZOL. Note 17

```Unsystematic Pretexts On Zeroth Order Logic

Medias Res

Date:  Tue, 03 Oct 2000 01:52:23 -0400

Re: Jim Piat's question:

| Perhaps the liar's paradox cannot be reduced to diagrammatic form.
| Why does this paradox seem to resist deductive analysis?  Is there
| something in the purpose or context of the premises that is at the
| root of the problem?  Is there something about asserting one's own
| context, or denying one speaks from a context that is at the bottom

According to my understanding of it, the so-called Liar Paradox
is just the most simple-minded of fallacies, involving nothing
more mysterious than the acceptance of a false assumption,
from which anybody can prove anything at all.

Consider one of the forms in which it is commonly presented:

Somebody writes down:

| 1.  Statement 1 is false.

Then you are led to reason:  If Statement 1 is false then
by the principle that permits the substitution of equals
in a true statement to obtain yet another true statement,
one can derive the result:

"Statement 1 is false" is false.

Ergo, Statement 1 is true, and so on, and so on.

Where did you go wrong?  Where were you misled?

As it happens, graphical reasoning does help
to clear this up -- at least, it did for me --
if only because the process of translating
the purported reasoning into another form
gave me a clue where the wool was pulled.

Just here, to wit, where it is writ:

| 1.  Statement 1 is false.

What is this really saying?
Well, it's the same as writing:

| Statement 1.  Statement 1 is false.

And what the heck does this dot.comment say?

It is inducing you to accept this identity:

| "Statement 1"  =  "Statement 1 is false".

That appears to be a purely syntactic indexing,
the sort of thing you are led to believe that
you can do arbitrarily, with logical impunity.
But you cannot, for syntactic identity implies
logical equivalence, and that is liable to find
itself constrained by iron bands of logical law.

And you cannot just assume what this result says:

| "Statement 1"  =  "Negation of Statement 1".

To present the last step in the form that I like:

o--------------------------------------o
|                                      |
|                      Statement_1     |
|                      o               |
|                      |               |
|      Statement_1     |               |
|                o-----o               |
|                 \   /                |
|                  \ /                 |
|                   o                  |
|                   |                  |
|                   |                  |
|                   @                  |
|                                      |
|   (( Statement_1, (Statement_1) ))   |
|                                      |
o--------------------------------------o
| Figure 0.  Statement 0               |
o--------------------------------------o

And this, my friends, call it "Statement 0",
is purely and simply a false statement.

Statement 0 was slipped into your drink
before you were even beginning to think.

A bit before you were led to substitute
you should have examined more carefully
the locus proposed for the substitution!

For the principle that you rushed to use
does not permit you to substitute unequals
into a statement that is false to begin with,
not just in the first place, but even before,
in the zeroth place of argument, as it were,
and still expect to come up with a truth.

Now, let that be the end of that.

Postscript

I believe that this is roughly the core of Peirce's
analysis of the Liar Puzzle but he gives a far more
careful and intricate discussion of it, and I would
need to revisit it again before I can say for sure.
```
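
The diagnosis above submits to a mechanical check. In Python (my own sketch), the assumed identity amounts to requiring s <=> not(s), and a two-row truth table shows the requirement has no solution:

```python
# A two-row truth-table check (my own sketch) of the diagnosis above.
# The assumed identity "Statement 1" = "Statement 1 is false" amounts to
# requiring s <=> not(s), which is Statement 0 in another garb, and it
# is false under both assignments:  a false assumption, not a paradox.
statement_0 = [s == (not s) for s in (False, True)]
print(statement_0)  # [False, False] : unsatisfiable
```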

### ZOL. Note 18

```Unsystematic Pretexts On Zeroth Order Logic

Prelude

I like to think of ZOL as the ground floor or the lower bound
for any brand of useful logic.  It is obvious that we cannot
do everything that we want within its realm, but when we do
find that a problem reduces to this level, then it is quite
advantageous to have a syntax and a set of inference steps
that is as efficient and as flexible as possible.

Anyway, this is the conclusion that I reached in the middle 80's,
when I began my first trials and tribulations with what was then
called the "automatic theorem proving" (ATP) branch of AI, and
after my first experiences with one or two primitive tribes of
"general resolution theorem-provers" had led me to believe that
a lot of the thrashing around that they were doing then was due
to computational complexities that had not been managed properly
at the purely propositional level.  I know, NP-completeness and
all that, but at the time it was something of a revelation to me.
Now, in those times I was even more arrogant than I am today --
try to imagine! -- and I believed moreover that I held a secret
key to all of the previously recalcitrant doors, in the form of
Peirce's work on graphical logics.  Well, I would not say, even
today, that this unabashed and completely unwashed opinion was
entirely false, but, of course, nothing is ever quite that easy.
I am just telling you this so that maybe you won't imagine that
I made all of this stuff up just the other day, or something.

Here are some of the reasons that come to mind when I ask myself
why all of this business might still be worth bringing up in the
context of ongoing discussions, here and elsewhere.

1.  When you have a logical system for which complete algorithms
are available to find all of the satisfying interpretations
(truth value assignments to the variables) of an expression,
then several different styles of reasoning become feasible,
with features that are distinctly different from those that
we commonly take for granted.

2.  In particular, we acquire the option of using equational modes
of inference, with logical equivalences between the expressions
at each successive step.

3.  Given a decent calculus for propositions, where propositions are
modeled as functions of type f : B^k -> B, for some k, and where
these functions have "fibers" f^(-1)(1) that are subsets of B^k,
it becomes possible to exploit many partial analogies between the
"universes of discourse" of conjoint type <B^k, B^k -> B> and the
spaces and functions of conjoint type <R^k, R^k -> R>, R = Reals,
that are of interest in "real analysis", encompassing the subfields
of analytic geometry, functional analysis, and differential geometry.
There is in this the beginnings of a novel way -- novel in the sense
that it was new with Newton and Leibniz -- of dealing with change and
diversity in logical frameworks.

Exodium

May the whetstone of your wit
Not go un-whetted by this bit,
For we have much yet to carve,
All your wit's edges to crave.

Avvertimento

Coming Attractions:  "Propositional Equation Reasoning Systems" (PERS).
```
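
Point 3 above is easy to make concrete. The sketch below (the names and the example function are my own) models a proposition as a function f : B^k -> B and computes its fiber f^(-1)(1), the logical analogue of the locus or solution set of an equation over R^k:

```python
from itertools import product

# A minimal sketch (names and the example function are my own) of point 3:
# model a proposition as a boolean function f : B^k -> B and compute its
# "fiber" f^(-1)(1), the subset of B^k on which it evaluates to 1 -- the
# logical analogue of the locus or solution set of an equation over R^k.
def fiber(f, k):
    """Return f^(-1)(1) as a list of 0/1 coordinate tuples in B^k."""
    return [x for x in product((0, 1), repeat=k) if f(*x)]

# Example:  f(x1, x2, x3) = x1 xor x2, a proposition indifferent to x3.
models = fiber(lambda x1, x2, x3: x1 ^ x2, 3)
print(models)  # [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1)]
```

Complete enumeration is of course exponential in k, the NP-completeness caveat conceded above; the point is rather that once the fiber is in hand, the equational styles of inference listed in points 1 and 2 become available.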

### ZOL. References

```Extra Examples

1.  Propositional logic example.
Files:  Alpha.lex + Prop.log
Ref:    [Cha, 20, Example 2.12]

2.  Chemical synthesis problem.
Files:  Chem.*
Ref:    [Cha, 21, Example 2.13]

3.  N Queens problem.
Files:  Queen*.*,  Q8.*,  Q5.*
Refs:   [BaC, 166], [VaH, 122], [Wir, 143].
Notes:  Only the 5 Queens example will run in 640K memory.
Use the "Queen.lex" file to load the "Q5.eg*" log files.

4.  Five Houses puzzle.
Files:  House.*
Ref:    [VaH, 132].
Notes:  Will not run in 640K memory.

5.  Graph coloring example.
Files:  Color.*
Ref:    [Wil, 196].

6.  Examples of Cook's Theorem in computational complexity,
that propositional satisfiability is NP-complete.

Files:  StiltN.* = "Space and Time Limited Turing Machine",
with N units of space and N units of time.
StuntN.* = "Space and Time Limited Turing Machine",
for computing the parity of a bit string,
with Number of Tape cells of input equal to N.
Ref:    [Wil, 188-201].
Notes:  Can only run Turing machine example for input of size 2.
Since the last tape cell is used for an end-of-file marker,
this amounts to only one significant digit of computation.
Use the "Stilt3.lex" file  to load the "Stunt2.egN" files.
Their Sense file outputs appear on the "Stunt2.seN" files.

7.  Fabric knowledge base.
Files:  Fabric.*, Fab.*
Ref:    [MaW, 8-16].

8.  Constraint Satisfaction example.
Files:  Consat1.*, Consat2.*
Ref:    [Win, 449, Exercise 3-9].
Notes:  Attributed to Kenneth D. Forbus.
```

### ZOL. Bibliography

Angluin, Dana,
“Learning with Hints”,
Proceedings of the 1988 Workshop on Computational Learning Theory,
edited by D. Haussler and L. Pitt, Morgan Kaufmann, San Mateo, CA, 1989.

Ball, W.W. Rouse, and Coxeter, H.S.M.,
Mathematical Recreations and Essays, 13th ed.,
Dover, New York, NY, 1987.

Chang, Chin-Liang, and Lee, Richard Char-Tung,
Symbolic Logic and Mechanical Theorem Proving,
Academic Press, New York, NY, 1973.

Denning, Peter J., Dennis, Jack B., and Qualitz, Joseph E.,
Machines, Languages, and Computation,
Prentice-Hall, Englewood Cliffs, NJ, 1978.

Edelman, Gerald M.,
Topobiology : An Introduction to Molecular Embryology,
Basic Books, New York, NY, 1988.

Lloyd, J.W.,
Foundations of Logic Programming,
Springer-Verlag, Berlin, 1984.

Maier, David, and Warren, David S.,
Computing with Logic : Logic Programming with Prolog,
Benjamin/Cummings, Menlo Park, CA, 1988.

McClelland, James L., and Rumelhart, David E.,
Explorations in Parallel Distributed Processing : A Handbook of Models, Programs, and Exercises,
MIT Press, Cambridge, MA, 1988.

Peirce, Charles Sanders,
Collected Papers of Charles Sanders Peirce,
edited by Charles Hartshorne, Paul Weiss, and Arthur W. Burks,
Harvard University Press, Cambridge, MA, 1931-1960.

Peirce, Charles Sanders,
The New Elements of Mathematics,
edited by Carolyn Eisele, Mouton, The Hague, 1976.

Spencer Brown, George,
Laws of Form,
George Allen & Unwin, London, UK, 1969.

Van Hentenryck, Pascal,
Constraint Satisfaction in Logic Programming,
MIT Press, Cambridge, MA, 1989.

Wiener, Philip P. (ed.),
Charles S. Peirce : Selected Writings; Values in a Universe of Chance,
Dover, New York, NY, 1966.

Wilf, Herbert S.,
Algorithms and Complexity,
Prentice-Hall, Englewood Cliffs, NJ, 1986.

Winston, Patrick Henry,
Artificial Intelligence, 2nd ed.,
Addison-Wesley, Reading, MA, 1984.

Wirth, Niklaus,
Algorithms + Data Structures = Programs,
Prentice-Hall, Englewood Cliffs, NJ, 1976.

## Zeroth Order Logic • Discussion (Sep–Oct 2000)

### Comments on Whitten's Starter Ontology

```Subj:  Comments on Whitten's Starter Ontology
Date:  Fri, 29 Sep 2000 23:24:37 -0400

Chris, David,

I will not quote your dialogue since it will be some time before
I can work through it, but I do have some previously worked-out
thoughts on the relationship between assertions and expressions,
at least, in the sorts of languages that one intends to employ
for the purpose of describing some object domain.  In addition,
this approach to the question helps to smooth out some of the
wrinkles between the descriptive and the functional properties
of the logical expressions being used.  This is a little bit
reminiscent of the declarative/procedural dimension of choice,
since it is the functional properties of expressions that are
especially pertinent when you turn to the task of implementing
the language as a component of an active system, especially as
a model for implementing procedures.

Disclaimer:  I can only vouch for the serviceability of this POV
at the level of propositional calculus (AKA sentential logic or
"zeroth order logic" (ZOL)).  It's not much, but I always find
that a surprising fraction of the everyday problem complexity
is bound up at this level, so any easing of the bottleneck at
this level will usually help a lot with all the other tasks.

From this POV, propositional expressions are just notations
for boolean-valued functions f : X -> B, where B = {0, 1}.

Thus, the expression denotes a function.
As it is, such an f is just a function.

The distinction between assertions and expressions can now be treated
as not so much of a "logical" distinction as a "pragmatic" distinction,
in other words, not so much a question of what the expression denotes as
a question of how its denotation, namely, the function, is being used at
the particular moment in question.

When you assert an expression this is tantamount to asking your interpreter
to shift into a contemplation of the subset of X that is variously called:

1.  The "fiber of truth" -- I did not make this up! --
in the function f, namely, f^(-1)(1) ⊆ X,

2.  The "antecedent", "level set", or "pre-image" of truth,
or of the functional value 1, under the function f,

3.  The "indicated set" of the "indicator function" f.

In sum, assertion is a directive to "consider the models"
of the expression being asserted.

Anyway, this is how I more or less think about it.
```
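The functional reading above lends itself to a direct sketch.  The following Python snippet, with a toy domain and indicator function of my own choosing (nothing here is from the original note), computes the "fiber of truth" f^(-1)(1) that an assertion directs the interpreter to contemplate.

```python
# A proposition as a boolean-valued function f : X -> B, with B = {0, 1}.
# The domain X and the particular f are hypothetical, chosen only to
# illustrate the "fiber of truth" construction.

X = range(7)                           # a toy universe of discourse
f = lambda x: int(x % 2 == 0)          # an indicator function f : X -> B

# Asserting f amounts to contemplating its fiber of truth f^(-1)(1),
# the subset of X on which f takes the value 1.
fiber_of_truth = {x for x in X if f(x) == 1}

print(sorted(fiber_of_truth))          # -> [0, 2, 4, 6]
```

Asserting a different expression that denotes the same function picks out the same fiber, which is one way of seeing why assertion is pragmatic rather than syntactic.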

### All Liar, No Paradox 1

```Subj:  All Liar, No Paradox
Date:  Mon, 23 Oct 2000 17:14:02 -0400

I was reminded of this old note of mine, initially
addressed to some denizens of the Peirce List, by
Jay Halcomb's remarks on the need to distinguish
"syntactic", "semantic", and dare I add "pragmatic"
brands of inconsistency affecting our formal systems.
Just by way of tossing yet another file into the ointment,
I thought that this humble example, aside from booting the
Liar beyond the reaches of our Architectronic Bootstraps,
might also serve to illustrate just how tricky it can be
to perform the required dissection.

By the way, I forgot to include the Figure,
which was critical to seeing the solution,
so here it is:

|             S_1
|              o
|              |
|       S_1    |
|        o-----o
|         \   /
|          \ /
|           o
|           |
|           |
|           @
|
|    (( S_1, (S_1) ))

Figure 0.  Statement 0
```

### All Liar, No Paradox 2

```Subj:  All Liar, No Paradox
Date:  Tue, 03 Oct 2000 01:52:23 -0400

Re Jim's question:

| Perhaps the liar's paradox can not be reduced to diagrammatic form.
| Why does this paradox seem to resist deductive analysis?  Is there
| something in the purpose or context of the premises that is at the
| root of the problem?  Is there something about asserting one's own
| context, or denying one speaks from a context that is at the bottom

According to my understanding of it, the so-called Liar Paradox
is just the most simple-minded of fallacies, involving nothing
more mysterious than the acceptance of a false assumption,
from which anybody can prove anything at all.

Consider one of the forms in which it is commonly presented:

| Somebody writes down:
|
| > 1.  Statement 1 is false.
|
| Then you are led to reason:
| If Statement 1 is false then
| by the principle that permits
| the substitution of equals in
| a true statement to obtain
| yet another true statement,
| one can derive the result:
|
| "Statement 1 is false" is false.
| Ergo, Statement 1 is true,
| and so on, and so on,

Where did you go wrong?
Where were you misled?

As it happens, graphical reasoning does help
to clear this up -- at least, it did for me --
if only because the process of translating
the purported reasoning into another form
gave me a clue where the wool was pulled.

Just here, to wit, where it is writ:

> 1.  Statement 1 is false.

What is this really saying?

Well, it's the same as writing:

> Statement 1.  Statement 1 is false.

And what the heck does this dot.comment say?

It is inducing you to accept this identity:

> "Statement 1" = "Statement 1 is false".

That appears to be a purely syntactic indexing,
the sort of thing you are led to believe that
you can do arbitrarily, with logical impunity.
But you cannot, for syntactic identity implies
logical equivalence, and that is liable to find
itself constrained by iron bands of logical law.

And you cannot just assume what this result says:

> "Statement 1" = "Negation of Statement 1"

To write the last step in the form that I like:

> (( Statement_1 , ( Statement_1 ) ))

And this, my friends, call it "Statement 0",
is purely and simply a false statement.

Statement 0 was slipped into your drink
before you were even starting to think.

A bit before you were led to substitute
you should have examined more carefully
the site proposed for the substitution!

For the principle that you rushed to use
does not permit you to substitute unequals
into a statement that is false to begin with,
not just in the first place, but even before,
in the zeroth place of argument, as it were,
and still expect to come up with a truth.

Now, let that be the end of that.
```
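The diagnosis above is easy to check mechanically.  As a sketch (mine, not part of the original note), Statement 0, the biconditional "S if and only if not-S" written (( S , (S) )), comes out false under both possible assignments:

```python
# Statement 0 is the purported identity "Statement 1" = "Statement 1 is
# false", i.e. the biconditional S <=> (S), written (( S , (S) )).
# A two-row truth table shows it is false in every interpretation:
# all liar, no paradox.

def statement_0(s: bool) -> bool:
    return s == (not s)        # S <=> not-S

for s in (False, True):
    print(s, statement_0(s))   # the second column is False in both rows
```

An unsatisfiable premise proves anything, which is all the "paradox" amounts to.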

### In Praise of Zeroth Order Logic 1

```Subj:  In Praise of Zeroth Order Logic
Date:  Wed, 11 Oct 2000 09:12:14 -0400

Michael Gruninger's note had such a positive ring
to it that I thought I might try to chime in, though
in a lower register or a minor key, as befits my range.

It's true that thinking too much about all the stuff
that we want and can't have, all the things that we'd
like to do and can't, all this cant about "no go there"
which is all we seem to hear from logicians these days,
can be downright dispiriting, if not utterly paralyzing.

Of course there is no going back, no choice but to recognize
our limitations -- only disaster ensues from ignoring that --
but still, sometimes you need to think about all of the good
and useful things that you can do with even the simplest and
the most gratuitous of basic materials.

So let this song be taken in the spirit of the general enchoiry,
in praise of the splendid things that we can actually accomplish
with so humble an instrument or organon as propositional calculus.

Well, as often happens, I have used up my breath and my time,
for now, with all of this preliminary fanfare, so I will get
back to it later on, resting content, for now, that my theme
is well-augured, if none of its variations are yet opportuned.
```

### In Praise of Zeroth Order Logic 2

```Subj:  In Praise of Zeroth Order Logic
Date:  Mon, 16 Oct 2000 21:38:21 -0400

I wanted to get back to this hopeful thread before
I forgot what I had in mind, but the re-stringing
of the original harp may be a bit rough at first.

I like to think of ZOL as the ground floor or the lower bound
for any brand of useful logic.  It is obvious that we cannot
do everything that we want within its realm, but when we do
find that a problem reduces to this level, then it is quite
advantageous to have a syntax and a set of inference steps
that is as efficient and as flexible as possible.

Anyway, this is the conclusion that I reached in the middle 80's,
when I began my first trials and tribulations with what was then
called the "automatic theorem proving" (ATP) branch of AI, and
after my first experiences with one or two primitive tribes of
"general resolution theorem-provers" had led me to believe that
a lot of the thrashing around that they were doing then was due
to computational complexities that had not been managed properly
at the purely propositional level.  I know, NP-completeness and
all that, but at the time it was something of a revelation to me.
Now, in those times I was even more arrogant than I am today --
try to imagine! -- and I believed moreover that I held a secret
key to all of the previously recalculant doors, in the form of
Peirce's work on graphical logics.  Well, I would not say, even
today, that this unabashed and completely unwashed opinion was
entirely false, but, of course, nothing is ever quite that easy.
I am just telling you this so that maybe you won't imagine that
I made all of this stuff up just the other day, or something.

The reason why I think that all of this is worth bringing up in the
context of the discussions that we have been having lately is this:

1.  When you have a logical system for which complete algorithms
are available to find all of the satisfying interpretations
(truth value assignments to the variables) of an expression,
then several different styles of reasoning become feasible,
with features that are distinctly different from those that
we commonly take for granted.

2.  In particular, we acquire the option of using equational modes
of inference, with logical equivalences between the expressions
at each successive step.

3.  Given a decent calculus for propositions, modeled as functions of
type f : B^n -> B, whose "fibers" (f^(-1))(1) are subsets of B^n,
it becomes possible to exploit many partial analogies between the
"universes of discourse" of conjoint type <B^n, B^n -> B> and the
spaces and functions of conjoint type <R^n, R^n -> R>, R = Reals,
that are of interest in "real analysis", comprising the subfields
of analytic geometry, functional analysis, and differential geometry.
There is in this the beginnings of a novel way -- novel in the sense
that it was new with Newton -- of dealing with change and diversity
in logical frameworks.

Coming Attractions:  "Propositional Equation Reasoning Systems" (PERS).

May the whetstone of your wit
Not go un-whetted by this bit,
For we have much yet to carve,
All your wit's edges to crave.
```
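Point 1 above, a complete procedure for finding all the satisfying interpretations of an expression, is available at the propositional level by sheer brute force, feasible whenever 2^k stays manageable.  A minimal Python sketch (the example proposition and the function name are mine, purely illustrative):

```python
from itertools import product

def satisfying_interpretations(f, k):
    """Return every assignment in B^k on which the proposition f is true."""
    return [bits for bits in product((0, 1), repeat=k) if f(*bits)]

# Example: a hypothetical proposition f(x, y, z) = (x or y) and not z.
f = lambda x, y, z: int((x or y) and not z)

models = satisfying_interpretations(f, 3)
print(models)   # -> [(0, 1, 0), (1, 0, 0), (1, 1, 0)]
```

Having the whole model set in hand is what makes equational inference possible: two expressions are logically equivalent exactly when this function returns the same list for both.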

### In Praise of Zeroth Order Logic 3

```Subj:  In Praise of Zeroth Order Logic
Date:  Tue, 17 Oct 2000 14:28:39 -0400

I am about to put forth a claim of the following form:

| X is better than Y for the purpose Z.

I was going to leave this claim implicit, in the customary way
that all of us typically embed such claims within our conduct,
and only respond when challenged in the "Well, why would I be
doing this if I didna think it better!?" sort of way that all
of us starship engineers have no doubt been trained into by now.

More specifically, I am concerned with a claim of the form:

| Language X is better than Language Y for the purpose
| of representing the 'associated domain of objects' Z.

I can declare my stake in a claim of this form by indexing
it to myself, as an agent, interpreter, observer, whatever,
perhaps in something like the following form:

| Language X is better than Language Y for the purpose
| of representing the 'associated domain of objects' Z,
| so says I.

At this point I might choose to create a specialized bit of syntax:

| [I <: Agent] :=>
| [Representing Z <: Pragma] :
| [Language X] ->[Better <: Order]-> [Language Y].

Paraphrasing:

| I, the agent of this claim, assert that
| the aim, end, or object of representing Z
| makes Language X look better than Language Y.

This is ostensibly a four-place rheme,
but there is yet another modification
that would be customary to work on it.

More abstractly, in the sense of extracting the particular measure
of order, the one that is implied or signified by the word "better",
and withdrawing it to a parametric position within the claim form,
I might choose to write this alternative form:

| [ I <: Agent,
|   Better <: Order,
|   Representing Z <: Pragma
| ]
| :
| [Language X] ->- [Language Y].

Of course, a lot more could be said about what we might
conceivably mean by "better", but I will leave that to
your imagination for now, though, as I imagine you can
imagine, I will have a few bits to add to the account,
but later.  I just hope that my account in heaven is
not reckoned up in mod 2 arithmetic!
```

### In Praise of Zeroth Order Logic 4

```Subj:  In Praise of Zeroth Order Logic
Date:  Fri, 20 Oct 2000 17:00:46 -0400

Last time, I was preparing to make a claim of the form:

| Language X is better than Language Y for the purpose
| of representing the 'associated domain of objects' Z.

Now, on the face of it, such a claim seems wholly outlandish.
If I were to say "French is better than German for purpose Z",
you would probably suspect that my discourse was on the point
of degenerating into specious cliches, if not downright slurs.
From common experience, you know that one language may indeed
have the felicities of an especially apt phrase at one point,
where another language would require a clumsy circumlocution
in order to translate it, but that the advantage, if there
is one, will be very likely to shift from point to point
across any domain of discourse that anyone might choose.

And so it could only be in a very restricted area of discourse,
say, Propositional Calculus, where our Political Correctness
is not at risk, that I would even dare to make such a claim.

But then -- if I restrict my claim to the level of PC, SL, ZOL --
I now seem to have the opposite sorts of problems with making
it seem interesting or plausible at all:

1.  Who cares?  What is there to get excited about?
2.  Not a bit!  Isn't it rather obviously mistaken?

And this would seem clear from the fact that every syntax for
Propositional Calculus, that was even worthy of carrying the
name, would have to be as logically indiscernible from its
cohorts as yet another pebble on the shore, would it not?

Not a bit!

But it does become clear from this reflection that our discussion
must leave the realm of pure logic and dabble in the tidepools of
pragmatic issues.

The Art Is Long,
The Day Is Gone.
Later, And Anon,
There Will Come
A Time To Light
The Candle, Jon.
```

### In Praise of Zeroth Order Logic 5

```Subj:  In Praise of Zeroth Order Logic
Date:  Mon, 06 Nov 2000 00:30:54 -0500

In my qualified response (or hedged bet) with regard to Jim's question
about "What Language to Use", I showed how the partition constraints
that are implicit in John Sowa's "Top Level Categories" (TLC) can
be represented as a single proposition in a particular language
for propositional, sentential, or zeroth order logic (ZOL).
I have been calling this particular representation of ZOL
by the name of the "reflective extension of logical graphs",
or "RefLog", for short.

At the present time I am still not certain that I have read
the intentions of John's lattice representation correctly,
but, as far as its use in providing us with an example of
a certain level of complexity, it does not really matter.

I will now elaborate on the meanings of these types of expressions,
as a way of illustrating the properties of the formal language
in which they are written.

Let us give this proposition, the one that expresses the
feature constraints of TLC, a shorter name, say, "Tolc".
Thus, we have:

| "Tolc" =
|
| "
| (( Object      ),( Process       ),( Schema      ),( Script  ),
|  ( Juncture    ),( Participation ),( Description ),( History ),
|  ( Structure   ),( Situation     ),( Reason      ),( Purpose ))
|
| ((
|
| ( Independent   ,( Actuality  ),( Form        ))
| ( Relative      ,( Prehension ),( Proposition ))
| ( Mediating     ,( Nexus      ),( Intention   ))
|
| ( Physical      ,( Actuality ),( Prehension  ),( Nexus     ))
| ( Abstract      ,( Form      ),( Proposition ),( Intention ))
|
| ( Continuant    ,( Object      ),( Schema    ),( Juncture      ),
|                  ( Description ),( Structure ),( Reason        ))
| ( Occurrent     ,( Process     ),( Script    ),( Participation ),
|                  ( History     ),( Situation ),( Purpose       ))
|
| ( Actuality     ,( Object      ),( Process       ))
| ( Form          ,( Schema      ),( Script        ))
| ( Prehension    ,( Juncture    ),( Participation ))
| ( Proposition   ,( Description ),( History       ))
| ( Nexus         ,( Structure   ),( Situation     ))
| ( Intention     ,( Reason      ),( Purpose       ))
|
| ))
| "

Just by way of review:

| This expression makes use of two basic forms:
|
| 1.  An expression of the form "(( X1 ),( X2 ), ... ,( Xk ))"
|     says that the universe of discourse is partitioned into
|     k "mutually exclusive and exhaustive categories" (MEEC's),
|     those for which the propositions X1, X2, ..., Xk, respectively,
|     are true.
|
| 2.  An expression of the form "( Y ,( Y1 ),( Y2 ), ... ,( Yk ))"
|     says that the part of the universe of discourse where Y is true
|     is partitioned into k MEEC's, those for which the propositions
|     Y1, Y2, ..., Yk, respectively, are true.
|
| The "recessing" of the larger part of the expression within
| a logically otiose double negation "(( ... ))" is merely a trick
| that makes the processing more efficient for a particular program.

Here is the basic set-up.

Let A be a formal alphabet (lexicon, vocabulary) of 25 logical features,
which may be given nominal and verbose spellings in the following way:

| x<01>  =  Abstract,
| x<02>  =  Actuality,
| x<03>  =  Continuant,
| x<04>  =  Description,
| x<05>  =  Form,
| x<06>  =  History,
| x<07>  =  Independent,
| x<08>  =  Intention,
| x<09>  =  Juncture,
| x<10>  =  Mediating,
| x<11>  =  Nexus,
| x<12>  =  Object,
| x<13>  =  Occurrent,
| x<14>  =  Participation,
| x<15>  =  Physical,
| x<16>  =  Prehension,
| x<17>  =  Process,
| x<18>  =  Proposition,
| x<19>  =  Purpose,
| x<20>  =  Reason,
| x<21>  =  Relative,
| x<22>  =  Schema,
| x<23>  =  Script,
| x<24>  =  Situation,
| x<25>  =  Structure.

Given an arbitrary alphabet of logical features A = {x_1, ..., x_k},
let us employ the following notations to talk about various sorts of
objects that occur in and around the associated universe of discourse:

1.  <<A>>  =  <<x_1, ..., x_k>>  =
The "set of interpretations" (cells, points, vectors)
in the universe of discourse that is generated by A.

2.  A^  =  (<<A>> -> B)  =
The "set of propositions" (boolean functions) on <<A>>.

3.  [[A]]  =  [[x_1, ..., x_k]]  =
The "universe of discourse" that is generated by A,
comprising both of the previous sets, and accordingly
comprehended as the ordered pair (<<A>>, A^), made up
of all logical points and all logical functions that
are associated with the alphabet A.

As far as the types of these spaces go --
geometric, functional, and integrated --
we have the following specifications:

1.  <<A>> :  B^k, in other words, <<A>> is isomorphic to B^k.
2.    A^  : (B^k -> B), the space of functions {f : B^k -> B}.
3.  [[A]] : (B^k, (B^k -> B)), notated as (B^k +-> B) or [[B^k]].

In this setting, each proposition f in A^ has two kinds of typings:

a.  The "concrete typing" of the form f : <<A>> -> B
elicits the full richness of the qualitative features in A.

b.  The "abstract typing" of the form f : B^k -> B
demotes the concrete units to their quantitative codes in B^k.

Given all this, then, the proposition Tolc is (or can be interpreted as)
a function Tolc : <<A>> -> B, where A = {x<01>, ..., x<25>} as above, or
a function Tolc :  B^25 -> B, up to a level of "isotypical abstraction".

Next time I will discuss how a proposition like Tolc specifies an embedding --
not in the least bit procrustean! -- of a lattice like TLC within a certain
type of boolean lattice, as I briefly suggested, a little bit inaccurately,
in the following statement:

| This whole expression effectively tells one how to embed the lattice
| in B^25, where B = {0, 1}, a 25-dimensional universe of discourse --
| binary cube, truth table, venn diagram, or however you want to view it --
| 25 being the number of terms in the vocabulary, which are here interpreted
| as binary features or boolean variables.
```
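The two basic forms reviewed above can be modeled directly as boolean predicates.  Here is a minimal Python sketch (the function names are mine, purely illustrative) of the "exactly one true" reading of the universal partition and the genus-and-species reading of the relative partition:

```python
# Universal partition (( X1 ), ..., ( Xk )):
# exactly one of the arguments X1, ..., Xk is true.
def universal_partition(*xs):
    return sum(map(bool, xs)) == 1

# Relative partition ( Y ,( Y1 ), ..., ( Yk )):
# if the genus Y is true, exactly one species Yi is true;
# if Y is false, every species Yi is false as well.
def relative_partition(y, *ys):
    return sum(map(bool, ys)) == (1 if y else 0)

# One consistent reading: a category holds and its MEEC rivals do not.
assert universal_partition(True, False, False)
assert not universal_partition(True, True, False)

# Genus true with one species; outside the genus, outside all species.
assert relative_partition(True, True, False)
assert relative_partition(False, False, False)
assert not relative_partition(False, True, False)
```

The conjunction of all such constraints, one per line of the Tolc expression, is then a single proposition of type B^25 -> B.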

### In Praise of Zeroth Order Logic 6

```Subj:  In Praise of Zeroth Order Logic
Date:  Mon, 06 Nov 2000 20:48:01 -0500

I left off last time with this promise:

| Next time I will discuss how a proposition like Tolc specifies an embedding --
| not in the least bit procrustean! -- of a lattice like TLC within a certain
| type of boolean lattice, as I briefly suggested, a little bit inaccurately,
| in the following statement:
|
| | This whole expression effectively tells one how to embed the lattice
| | in B^25, where B = {0, 1}, a 25-dimensional universe of discourse --
| | binary cube, truth table, venn diagram, or however you want to view it --
| | 25 being the number of terms in the vocabulary, which are here interpreted
| | as binary features or boolean variables.

In order to see what's involved here without becoming too involved ourselves
in the largely irrelevant complications of visualizing 25 logical dimensions,
let us revert to an order of simpler examples that can suffice to illustrate
all of the pertinent issues.  As it happens, an example that I used a while
ago on the CG List has a pair of components that are about the right size.

Here is how I set up the example:

| Consider the "universal partition" expression
|
| "(( Animal ),( Vegetable ),( Mineral ))"
|
| conjoined with the "relative partition" expression
|
| "( Living_thing ,( Animal ),( Vegetable ))".
|
| Letting each term be abbreviated by its initial letter,
| one has the following pair of tree-like graphs that were
| earlier described as "painted and rooted cacti" (PARC's).
|
|    A  V  M
|    o  o  o
|    |  |  |
|    o--o--o
|     \   /
|      \ /
|       @
|
|
|       A  V
|       o  o
|    L  |  |
|    o--o--o
|     \   /
|      \ /
|       @
|
| Here, I am using the at-sign "@" for the root node.
| Also, conjoining these two cacti, signifying their
| logical conjunction under the existential reading,
| would be indicated by joining them at their roots,
| but that is beyond my graphic resources at present
| and so I will have to leave it to your imagination.
|
| Notice how these forms work.  The bracket or "lobe"
| of the form "( X1, ..., Xk )" says that exactly one
| of its arguments is false.  In particular, "(X)" is
| just our old friend negation.  So the expression of
| the form "(( X1 ), ..., ( Xk ))" says that just one
| among the "(X1)", ..., "(Xk)" is false, which is to
| say that just one among the "X1", ..., "Xk" is true.
|
| The relative partition expression, otherwise known
| as the "genus and species" or the "pie-chart" form,
| taking the shape "( Y ,( Y1 ), ..., ( Yk ))", can
| be grasped as follows:  If Y is true, then it is
| as if Y were not there, being blank, and the form
| reduces to the partition "(( Y1 ), ..., ( Yk ))".
| If Y is false, then it is the only false argument,
| and so the expression reduces to "(Y1)·...·(Yk)".
| In sum, if you are outside the genus, then you
| are outside each and every one of its species.
|
| There is a sort of graphical image that makes this
| particular form of evaluation very easy to remember.
| Suppose you have to evaluate a generic lobe like this:
|
|    X1  X2       Xk
|     o---o...o---o
|      \         /
|       \       /
|        \     /
|         \   /
|          \ /
|           @
|
| As soon as you find an argument that is false,
| without loss of generality, assume it to be X1,
| then it must be that all of the rest are true,
| so the expression reduces to a conjunction of
| the rest, namely, X2·...·Xk.  Graphically,
|
|              o
|    X1 = () = |
|              @
|
| which looks like a "spike", so you can picture
| the false value as puncturing the lobe on which
| it grows, thereby shrinking it down to a bunch
| of paints on a rooted node, which is a graphic
| picture of their conjunction.

This discussion is also available at:

http://www.virtual-earth.de/CG/cg-list/msg03379.html

For this instance, the alphabet is  X = {x1, x2, x3} = {A, V, M},
and let us not be too fussy about quotation marks for right now.
Given this alphabet of logical features, there are three further
types of logical or mathematical objects that one has to consider:

1.  The set of positions in the universe of discourse is <<X>> : B^3.

How many such "positions" (cells, interpretations, points, vectors) are there?
In general, if the cardinality of the alphabet is k, then there are 2^k points.
In the present case, Card(X) = |X| = 3, so there are 2^3 = 8 points in <<X>>.

The points of <<X>> can be represented in two ways:
There is the "multiplicative representation" of points as logical conjuncts, and
there is the "additive representation" of points as coordinate vectors, which we
can write in corresponding parallel columns as follows:

0.  |   (A)(V)(M)   |   <0, 0, 0>   |
1.  |   (A)(V) M    |   <0, 0, 1>   |
2.  |   (A) V (M)   |   <0, 1, 0>   |
3.  |   (A) V  M    |   <0, 1, 1>   |
4.  |    A (V)(M)   |   <1, 0, 0>   |
5.  |    A (V) M    |   <1, 0, 1>   |
6.  |    A  V (M)   |   <1, 1, 0>   |
7.  |    A  V  M    |   <1, 1, 1>   |

The positions of this space, isomorphic to B^3,
can be seen to form a boolean (cubic) lattice:

            111
             o
            /|\
           / | \
          /  |  \
         o   o   o
        110 101 011
         |\ / \ /|
         | X   X |
         |/ \ / \|
         o   o   o
        100 010 001
          \  |  /
           \ | /
            \|/
             o
            000

This lattice is nice enough, as far as it goes,
but this is not quite the lattice yet in which
we shall ultimately be interested here.

2.  The set of propositions (boolean functions)
is  X^ = {f : <<X>> -> B} = (<<X>> -> B) : (B^3 -> B).

How many such "propositions" are there?
If the cardinality of the alphabet is k, then there are 2^(2^k) propositions.
In the present case, Card(X) = |X| = 3, so there are 2^(2^3) = 256 props in X^.

The set of propositions on <<X>> is X^ : (<<X>> -> B) : (B^3 -> B).
These correspond to all of the different "truth functions" that one
could form-up in an 8-rowed "truth table".  The propositions of X^
can also be seen to form a lattice under the implication ordering
that comes natural to this species of propositions.  I will leave
the drawing of this lattice as an exercise for the reader.

3.  Finally, the complete universe of discourse can be comprehended
as consisting of two "layers", the positions of <<X>> plus the
propositions of X^, as in the form [[X]] = (<<X>>, (<<X>> -> B)).

For the present purpose, let us look a little more closely
at the conjunctive component that specifies this partition:

A  V  M
o  o  o
|  |  |
o--o--o
 \   /
  \ /
   @

(( Animal ),
 ( Vegetable ),
 ( Mineral ))

Let "S1" = "((A),(V),(M))", in other words,
let "S1" denote the same proposition that is denoted by
the sentence or the propositional expression "((A),(V),(M))".

Then S1 lives in the "proposition space" X^,
where  X^ = (<<X>> -> B) = {f : <<X>> -> B}.

Given any f in X^, it has an "inverse relation" f^(-1).

The set of elements in <<X>> given by (f^(-1))(y),
for y in B, is called the "fiber of y" in <<X>>.

What is the "fiber of truth" (f^(-1))(1) in <<X>>?
It is the set of points in <<X>> where f holds true.

As far as I can tell, performing the act or taking up
the propositional attitude of "asserting a proposition"
is just a matter of turning one's mind from the logical
function f to its fiber of truth (f^(-1))(1).

So, to make a long story short -- too late for that! --
what does it mean (to me) to assert the proposition S1?

S1 = ((A),(V),(M)) : <<A, V, M>> -> B = {0, 1} = {F, T}

This asserts that everything in the universe [[A, V, M]]
is either Animal, or Vegetable, or Mineral, exclusively.

Let us picture the assertion of S1 in terms of the transformation
that it effects on a "generic" venn diagram, that is, one where
all of the "circles" that correspond to the variables {A, V, M}
bisect each other "independently" to generate 2^3 = 8 cells.
It may help to imagine a soap-film and wire apparatus here.
The assertion of S1 means that one can safely ignore all of
the cells in the "fiber of falsity" under S1, as if to "pop"
these bits of the bubble that stretches across the universe.
What remains is topologically equivalent to a Neapolitan form
of universe, where all of the gelato is divided into three parts.

If you want to draw an ordered picture,
it shapes up as the following latt-ice:

   1
  /|\
 / | \
A  V  M
 \ | /
  \|/
   0

But now, within the present framework, where I have striven
to maintain a viable form of "functional interpretation" for
all of the symbols and all of the sentences in this particular
rendition of propositional calculus, there are a number of extra
observations that one can make with regard to this genre of picture.

First of all, by way of correcting my previous hasty statement,
this is the lattice that results from the lattice for X^ when
the assertion of S1 imposes its propositional constraints on
the initial universe of discourse.

Anyway, I think that I got it right this time.
But I will have to look at this again after
I am more rested.

Reference Materials:

Extensions Of Logical Graphs

01.  http://www.virtual-earth.de/CG/cg-list/old/msg03351.html
02.  http://www.virtual-earth.de/CG/cg-list/old/msg03352.html
03.  http://www.virtual-earth.de/CG/cg-list/old/msg03353.html
04.  http://www.virtual-earth.de/CG/cg-list/old/msg03354.html
05.  http://www.virtual-earth.de/CG/cg-list/old/msg03376.html
06.  http://www.virtual-earth.de/CG/cg-list/old/msg03379.html
07.  http://www.virtual-earth.de/CG/cg-list/old/msg03381.html
```
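The counts and the fiber worked out above can be confirmed in a few lines of Python (a sketch of my own, not part of the original message): the universe <<X>> over {A, V, M} has 2^3 = 8 points, and the fiber of truth of S1 = ((A),(V),(M)) consists of the three points where exactly one feature holds.

```python
from itertools import product

# The positions of <<X>> for X = {A, V, M}, isomorphic to B^3.
points = list(product((0, 1), repeat=3))

# S1 = ((A),(V),(M)): exactly one of A, V, M is true.
S1 = lambda a, v, m: int(a + v + m == 1)

fiber = [p for p in points if S1(*p)]

print(len(points))   # -> 8
print(fiber)         # -> [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```

These three surviving cells are precisely the Animal, Vegetable, and Mineral parts of the Neapolitan universe pictured above.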

### In Praise of Zeroth Order Logic 7

```Subj:  In Praise of Zeroth Order Logic
Date:  Tue, 07 Nov 2000 02:28:10 -0500

ZOL SIG:

Let me now clean up some of the backtracks and present
a more straightline account of how I can represent the
structure of John Sowa's "Top Level Categories" in the
extension of Peirce's Alpha Graphs that I usually call
the "reflective extension of logical graphs" (RefLog).
Until advised otherwise, I will continue to work under
the hopeful assumption that I have managed to read the
right sorts of partition constraints off the original
lattice diagram, which you may review at the location
that I will reference again shortly.

It will supply the defects of my ITM (intermediate term memory)
if I can iterate my note from the "What Language To Use" thread,
as I am already beginning to lose track of what I scribed there.
At any rate, I am abundantly cogscient of how negligible a risk
there is of it having any amount of impact, much less any order
of moment, within the settled-in setting of that inertial frame.
This time around, though, I will correct whatever errors I find.

Before I do that, it will serve our understanding of the situation
if I can return to our minimal example in three logical dimensions
and nail down all of the pertinent details of that simplified case.
So let me revert to the universe of discourse [[A, V, M]], where I
can lay in another layer of details over the text already laid out.

o~~~~~~~~~o~~~~~~~~~o~ELABORATION~o~~~~~~~~~o~~~~~~~~~o

Here is how I set up the example:

| Consider the "universal partition" expression
|
| "(( Animal ),( Vegetable ),( Mineral ))"
|
| conjoined with the "relative partition" expression
|
| "( Living_thing ,( Animal ),( Vegetable ))".
|
| Letting each term be abbreviated by its initial letter,
| one has the following pair of tree-like graphs that were
| earlier described as "painted and rooted cacti" (PARC's).
|
|    A  V  M
|    o  o  o
|    |  |  |
|    o--o--o
|     \   /
|      \ /
|       @
|
|
|       A  V
|       o  o
|    L  |  |
|    o--o--o
|     \   /
|      \ /
|       @
|
| Here, I am using the at-sign "@" for the root node.
| Also, conjoining these two cacti, signifying their
| logical conjunction under the existential reading,
| would be indicated by joining them at their roots,
| but that is beyond my graphic resources at present
| and so I will have to leave it to your imagination.
|
| Notice how these forms work.  The bracket or "lobe"
| of the form "( X1, ..., Xk )" says that exactly one
| of its arguments is false.  In particular, "(X)" is
| just our old friend negation.  So the expression of
| the form "(( X1 ), ..., ( Xk ))" says that just one
| among the "(X1)", ..., "(Xk)" is false, which is to
| say that just one among the "X1", ..., "Xk" is true.
|
| The relative partition expression, otherwise known
| as the "genus and species" or the "pie-chart" form,
| taking the shape "( Y ,( Y1 ), ..., ( Yk ))", can
| be grasped as follows:  If Y is true, then it is
| as if Y were not there, being blank, and the form
| reduces to the partition "(( Y1 ), ..., ( Yk ))".
| If Y is false, then it is the only false argument,
| and so the expression reduces to "(Y1)·...·(Yk)".
| In sum, if you are outside the genus, then you
| are outside each and every one of its species.
|
| There is a sort of graphical image that makes this
| particular form of evaluation very easy to remember.
| Suppose you have to evaluate a generic lobe like this:
|
|    X1  X2       Xk
|     o---o...o---o
|      \         /
|       \       /
|        \     /
|         \   /
|          \ /
|           @
|
| As soon as you find an argument that is false,
| without loss of generality, assume it to be X1,
| then it must be that all of the rest are true,
| so the expression reduces to a conjunction of
| the rest, namely, X2·...·Xk.  Graphically,
|
|              o
|    X1 = () = |
|              @
|
| which looks like a "spike", so you can picture
| the false value as puncturing the lobe on which
| it grows, thereby shrinking it down to a bunch
| of paints on a rooted node, which is a graphic
| picture of their conjunction.

This discussion is also available at:

http://www.virtual-earth.de/CG/cg-list/msg03379.html

For this instance, the alphabet is  X = {x1, x2, x3} = {A, V, M},
and let us not be too fussy about quotation marks for right now.
Given this alphabet of logical features, there are three further
types of logical or mathematical objects that one has to consider:

1.  The set of positions in the universe of discourse is <<X>> : B^3.

How many such "positions" (cells, interpretations, points, vectors) are there?
In general, if the cardinality of the alphabet is k, then there are 2^k points.
In the present case, Card(X) = |X| = 3, so there are 2^3 = 8 points in <<X>>.

The points of <<X>> can be represented in two ways:
There is the "multiplicative representation" of points as logical conjuncts, and
there is the "additive representation" of points as coordinate vectors, which we
can write in corresponding parallel columns as follows:

0.  |   (A)(V)(M)   |   <0, 0, 0>   |
1.  |   (A)(V) M    |   <0, 0, 1>   |
2.  |   (A) V (M)   |   <0, 1, 0>   |
3.  |   (A) V  M    |   <0, 1, 1>   |
4.  |    A (V)(M)   |   <1, 0, 0>   |
5.  |    A (V) M    |   <1, 0, 1>   |
6.  |    A  V (M)   |   <1, 1, 0>   |
7.  |    A  V  M    |   <1, 1, 1>   |

The positions of this space, isomorphic to B^3,
can be seen to form a boolean (cubic) lattice:

       111
        o
       /|\
      / | \
     /  |  \
110 o  101  o 011
    |\ / \ /|
    | \   / |
    |/ \ / \|
100 o  010  o 001
     \  |  /
      \ | /
       \|/
        o
       000

This lattice is nice enough, as far as it goes,
but there is still another lattice arising from
this context that we shall ultimately discover
to be of even more interest to us at this time.

2.  The set of propositions (boolean functions)
is  X^ = {f : <<X>> -> B} = (<<X>> -> B) : (B^3 -> B).

How many such "propositions" are there?

If the cardinality of the alphabet is k, then there are 2^(2^k) propositions.
In the present case, Card(X) = |X| = 3, so there are 2^(2^3) = 256 props in X^.

I probably should make a note here of one of the most important features of this
whole way of doing propositional logic, a benefit that is, in the long run, well
worth the extra effort and the investment of overhead that it takes to set it up
and to get it going.  Namely, I am taking some pains to maintain what I describe
as a "functional conception of propositional calculus", that is, to keep at the
ready a "functional interpretation" (FI), both for the syntax of sentences and
for the abstract formal entities that are, if the truth be told, perhaps more
properly known as "propositions".  No matter, let our motto be "Semper FI".

The reasons for striving to maintain this functional conception are many
and various, as I may have occasion to elaborate as the time wears on.
But there are a few implications of this functional perspective that
are pertinent to point out now:

2a.  The FI of the logical constants in B = {0, 1} = {false, true}.
The logical constants 0, 1 in B have alter-interpretations
as the so-called "constant functions (maps, propositions)",
where these are defined as follows:

0 : <<X>> -> B, such that 0(x) = 0, for all x in <<X>>.
1 : <<X>> -> B, such that 1(x) = 1, for all x in <<X>>.

2b.  The FI of the basic logical features given by the alphabet X.
The "symbols" or "variables" in the alphabet X = {A, V, M}
enjoy an alter-interpretation as functions in X^, namely,
as the so-called "coordinate functions (maps, projections,
or propositions)".  In general, these are defined like so:

If X = {x1, ..., xk}, then xj in X is also a function in X^, where
the "action" or the "value" of xj on x = <x1, ..., xk> in <<X>> is
defined as follows:

xj : <<X>> -> B, such that xj(<x1, ..., xj, ..., xk>) = xj.

Okay, that was deliberately stated in an overly elegant and tricky way --
do you suspect that it's too cute to be true? -- but I still think that,
given the proper interpretation of all of the symbols that are involved,
and however much the faithful interpretation may vary from place to place
in the definition, this is a reasonable definition of the needed idea.

The set of propositions on <<X>> is X^ : (<<X>> -> B) : (B^3 -> B).
These correspond to all of the different "truth functions" that one
could form up in an 8-rowed "truth table".  The propositions of X^
can also be seen to form a lattice under the implication ordering
that comes natural to this species of propositions.  I will leave
the drawing of this lattice as an exercise for the reader.

Let us look at just a few of the more rudimentary truth-functions
out of the 256 in X^.  Here is the form of the usual truth table:

    |               |    A  V  M    |  0  A  V  M  1
----|---------------|---------------|-----------------
0.  |   (A)(V)(M)   |   <0, 0, 0>   |  0  0  0  0  1
1.  |   (A)(V) M    |   <0, 0, 1>   |  0  0  0  1  1
2.  |   (A) V (M)   |   <0, 1, 0>   |  0  0  1  0  1
3.  |   (A) V  M    |   <0, 1, 1>   |  0  0  1  1  1
4.  |    A (V)(M)   |   <1, 0, 0>   |  0  1  0  0  1
5.  |    A (V) M    |   <1, 0, 1>   |  0  1  0  1  1
6.  |    A  V (M)   |   <1, 1, 0>   |  0  1  1  0  1
7.  |    A  V  M    |   <1, 1, 1>   |  0  1  1  1  1

This use of symbols is not something that I just now perversely made up.
All of it is totally standard in algebra and analysis, where
the only difference is that you would be more likely to see
R^k than B^k in the casual perusal of those subject areas.

3.  Finally, the complete universe of discourse can be comprehended
as consisting of two "layers", the positions of <<X>> plus the
propositions of X^, as in the form [[X]] = (<<X>>, (<<X>> -> B)).

For the present purpose, let us look a little more closely
at the conjunctive component that specifies this partition:

A  V  M
o  o  o
|  |  |
o--o--o
 \   /
  \ /
   @

(( Animal    ),
 ( Vegetable ),
 ( Mineral   ))

Let "S1" = "((A),(V),(M))", in other words,
let "S1" denote the same proposition that is denoted by
the sentence or the propositional expression "((A),(V),(M))".

Then S1 lives in the "proposition space" X^,
where  X^ = (<<X>> -> B) = {f : <<X>> -> B}.

Given any f in X^, it has an "inverse relation" f^(-1).

The set of elements in <<X>> given by (f^(-1))(y),
for y in B, is called the "fiber of y" in <<X>>.

What is the "fiber of truth" (f^(-1))(1) in <<X>>?
It is the set of points in <<X>> where f holds true.

As far as I can tell, performing the act or taking up
the propositional attitude of "asserting a proposition"
is just a matter of turning one's mind from the logical
function f to its fiber of truth (f^(-1))(1).

So, to make a long story short -- too late for that! --
what does it mean (to me) to assert the proposition S1?

S1 = ((A),(V),(M)) : <<A, V, M>> -> B = {0, 1} = {F, T}

This asserts that everything in the universe [[A, V, M]]
is either Animal, or Vegetable, or Mineral, exclusively.

Let us picture the assertion of S1 in terms of the transformation
that it effects on a "generic" venn diagram, that is, one where
all of the "circles" that correspond to the variables {A, V, M}
bisect each other "independently" to generate 2^3 = 8 cells.
It may help to imagine a soap-film and wire apparatus here.
The assertion of S1 means that one can safely ignore all of
the cells in the "fiber of falsity" under S1, as if to "pop"
these bits of the bubble that stretches across the universe.
What remains is topologically equivalent to a Neapolitan form
of universe, where all of the gelato is divided into three parts.

If you want to draw an ordered picture,
it shapes up as the following latt-ice:

   1
  /|\
 / | \
A  V  M
 \ | /
  \|/
   0

But now, within the present framework, where I have striven
to maintain a viable form of "functional interpretation" for
all of the symbols and all of the sentences in this particular
rendition of propositional calculus, there are a number of extra
observations that one can make with regard to this genre of picture.

First of all, this is the lattice that is begotten from the
lattice of propositions in X^, the one with the 256 elements,
when the assertion of S1 is allowed to impose its propositional
constraints on the initial universe of discourse.  This is more
properly described as a "quotient" of X^ modulo the "relations"
given by S1.  Sorry about the ambiguity -- those are just the
words that are commonly used, modulo the relation of my own
memory's narrative, of course.  In this sort of situation,
one commonly writes a meta-notation of the form "X^/S1"
to denote the "quotient of X^ by S1".

Second, one may note that each of the elements in the quotient lattice
still retains its FI, where 0 and 1 are the constant propositions and
where A, V, M are the coordinate projections in X^.

Third, notice that the disjunction "A or V or M" is always true in X^/S1.
In other words, the equation "((A)(V)(M)) = 1" holds true in this setting.

Anyway, I hope that this is a little closer to the truth.

Reference Materials:

Extensions Of Logical Graphs

01.  http://www.virtual-earth.de/CG/cg-list/old/msg03351.html
02.  http://www.virtual-earth.de/CG/cg-list/old/msg03352.html
03.  http://www.virtual-earth.de/CG/cg-list/old/msg03353.html
04.  http://www.virtual-earth.de/CG/cg-list/old/msg03354.html
05.  http://www.virtual-earth.de/CG/cg-list/old/msg03376.html
06.  http://www.virtual-earth.de/CG/cg-list/old/msg03379.html
07.  http://www.virtual-earth.de/CG/cg-list/old/msg03381.html
```
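The lobe semantics and the fiber-of-truth computation rehearsed above can be checked mechanically. The following Python sketch is my own illustration, not code from the archived post; the function names `lobe`, `neg`, `S1`, `S2` are hypothetical labels for the forms "( X1, ..., Xk )", "(X)", "((A),(V),(M))", and "( L ,( A ),( V ))" discussed in the text.

```python
from itertools import product

def lobe(*args):
    # The painted-cactus lobe "( X1, ..., Xk )":
    # true exactly when exactly one argument is false.
    return sum(1 for a in args if not a) == 1

def neg(x):
    # "(X)" is the one-argument lobe, i.e. plain negation.
    return lobe(x)

# The universe of discourse <<X>> for X = {A, V, M}:
# the 2^3 = 8 coordinate vectors of B^3.
points = list(product((0, 1), repeat=3))
assert len(points) == 8

def S1(a, v, m):
    # S1 = ((A),(V),(M)): exactly one of (A),(V),(M) is false,
    # hence exactly one of A, V, M is true.
    return lobe(neg(a), neg(v), neg(m))

# The "fiber of truth" (S1^(-1))(1): the venn diagram cells that survive.
fiber = [p for p in points if S1(*p)]
assert fiber == [(0, 0, 1), (0, 1, 0), (1, 0, 0)]

# On the quotient modulo S1, the disjunction ((A)(V)(M)) = "A or V or M"
# is identically true, as claimed above.
assert all(a or v or m for (a, v, m) in fiber)

def S2(l, a, v):
    # The genus-and-species form ( L ,( A ),( V )).
    return lobe(l, neg(a), neg(v))

# Outside the genus L, one is outside each and every species.
for l, a, v in product((0, 1), repeat=3):
    if not l:
        assert S2(l, a, v) == (not a and not v)
```

The eight-element `points` list plays the role of the boolean cube drawn above, and the three-element fiber corresponds to the three cells of the Neapolitan universe.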

## Zeroth Order Logic • Discussion (Peirce List, Before July 2002)

### ZOL-DIS. Note 1

```BU = Benjamin Udell
JA = Jon Awbrey

BU: Just a technical question, which I hope has an answer sufficiently
simple to be included in the body of a future exposition of yours:

BU: Is “Zeroth-Order Logic” just a nickname for 1st-order statement/proposition logic?
I understand the idea of the statement as a zero-place or “medadic” term, in case
there's some connection with that idea, but I would have thought that zeroth-order
logic would be that about which first-order logical statements talk -- the world
itself I guess.  Would logic that talks explicitly about zeroth-order logical
statements still be second-order logic?  If not, which-manieth-order logic
would it be?

JA: Ben, it's just prop calc, sentential logic, monadic predicates
(no quantifiers, no k-adic relations for k > 1), Peirce's alpha.
Actually a common use in some circles.

BU: Thanks for the reply.  I knew all that, I should have
emphasized “nickname”.  I was wondering -- not whether,
but why the characterization as “zeroth-order logic”,
i.e., is it JUST a nickname.

The point of this and the differential logic thread is to illustrate some
surprisingly interesting things that can be done even at the most rudimentary
levels of logic if one has the right kind of calculus.  This "cactus language",
or the "reflective extension of propositional logic" (ref-log), is a syntax that
grew out of Peirce's Alpha Graphs in the light of a peculiar type of "reflective"
heuristic, characteristic of Peirce, that I had been studying as an undergraduate,
and one day thought to apply to his Logical Graphs.  The result had the beneficial
effect of fixing a number of incidental problems with the computational efficiency
of these graphs, that had been bedeviling me for quite some time.

This is all just venn diagram world -- the problem is getting a calculus
that has some facility, both in logical analysis and in brute computation,
when the number of circles climbs into the hundreds and thousands and ...

The utility of the nickname is that it points to the initial level
of formal structure without saying too much, that is, without a lot
of distracting connotations that belong more properly to the level
of the interpretation and not to the form of the law itself.

For example, a circle in a venn diagram can stand for a monadic predicate,
like "Man", "Horse", "Kangaroo", or "Whale", or it can stand for sentence,
like "It is raining", "It is sandstorming", or "It is tornadoing", and
I have actually seen folks get bent out of shape if you apply the terms
"propositional calculus" and "monadic predicate logic" indiscriminately
to both of these.  Formally, it's all the same stuff though.
```
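The closing point of the note, that the sentential reading and the monadic-predicate reading share one and the same boolean form, can be illustrated with a small Python sketch. This is my own illustration, not part of the archived exchange; the predicates and individuals are hypothetical.

```python
def form(p, q):
    # An arbitrary propositional form, here "p and not q".
    return p and not q

# Sentential reading: the variables stand for whole sentences,
# e.g. p = "It is raining", q = "It is sandstorming".
assert form(True, False) is True
assert form(True, True) is False

# Monadic-predicate reading: the very same form is evaluated
# pointwise over a toy universe of individuals.
universe = ["Man", "Horse", "Kangaroo", "Whale"]
is_mammal = {x: True for x in universe}
is_aquatic = {x: (x == "Whale") for x in universe}

# The extension of the compound predicate "mammal and not aquatic".
extension = [x for x in universe if form(is_mammal[x], is_aquatic[x])]
assert extension == ["Man", "Horse", "Kangaroo"]
```

Formally it is the same calculus in both cases; only the interpretation of the circles changes.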

### ZOL-DIS. Note 2

```| the way of heaven and earth
| is to be long continued
| in their operation
| without stopping
|
| i ching, hexagram 32

BU = Benjamin Udell
JA = Jon Awbrey

JA: This is all just venn diagram world --

BU: Venn for all & all for venn.

Hurrah, hurrah, for Ursula LeGuin!

JA: the problem is getting a calculus that has some facility,
both in logical analysis and in brute computation, when
the number of circles climbs into the hundreds and
thousands and ...

BU: All right!  On the horizon, continuous logical structure (yet NOT fuzzy).
Has anybody attempted this before?  Any parallel to applications of calculus
to combinatorial coding (of which applications I know nothing, including whether
they exist)?

For my part I am content to remain discrete.
Everything that I mine from this logic vein
will be a slight adaptation and extension
of Peirce's Alpha Level of Logical Graphs.
I worked this out on bits of scratch
paper from the late (19)60's and all
through the (19)70's and then spent
the whole of the (19)80's writing
a program to parse and to process
what I described at various times
as the "Boundary Operator Syntax",
the "Reflective Extension of PC",
or the "Cactus Language", after
the species of graph that forms
its parse graph and constitutes
its basic data structure in the
computer.  The axioms that I use
are basically Peirce's for Alpha,
influenced by George Spencer Brown's
'Laws of Form', which to some degree
revived and extended Peirce's Graphs,
especially in their Entish rewooding,
during the last years of the 1960's.

JA: The utility of the nickname is that it points to the initial level
of formal structure without saying too much, that is, without a lot
of distracting connotations that belong more properly to the level
of the interpretation and not to the form of the law itself.

JA: For example, a circle in a venn diagram can stand for a monadic predicate,
like "Man", "Horse", "Kangaroo", or "Whale", or it can stand for sentence,
like "It is raining", "It is sandstorming", or "It is tornadoing", and
I have actually seen folks get bent out of shape if you apply the terms
"propositional calculus" and "monadic predicate logic" indiscriminately
to both of these.  Formally, it's all the same stuff though.

BU: I get it.  As long as one is focusing on sentences, or on monadic predicates,
but not on both, & not on quantification of the monadic predicates.  Logic of
(I.e., logic common to structures of zero-place terms/predicates & structures
of one-place terms/predicates.)

Exactly.  Things like "situations", that sentences are sometimes supposed
to denote, are different sorts of things from things like "things", that
monadic predictations are sometimes supposed to denote, but there is
a commonality of abstract structure that we seek to capture in the
right sorts of hypostatic abstractions, that formal propositional
expressions may be supposed to denote.  One tiny catch though,
if we limit the kinds of relations that we attempt to treat,
then there are surprising relationships between Order Zero
and Higher Orders that render ZOL of far greater utility
than one might otherwise suspect.

All in good time ...
```

### ZOL-DIS. Note 3

```BU = Benjamin Udell
JA = Jon Awbrey

JA: Let's take a break from the Example in progress and
take a look at what we've been doing, as it appears
in the light of broader computational, logical, and
semiotic perspectives.  We should not let our focus
on and our fascination with this particular example
prevent us from recognizing the larger lessons that
it illustrates, the way that it points to much more
important paradigms and predicaments and principles,
as a glimmer of ultimately more significant objects.

JA: I've had good reasons for taking so much trouble with this concrete example
and for setting out the chosen form of logical processing in so much detail.
The lion's share of those reasons have to do with demonstrating distinctive
features of Peirce's approach to logic.  I can discuss these features under

JA: 1.  The semiotic embedding of logic.

JA: 2.  The nature of mathematical models.

JA: 3.  The nurture of equational inference.

JA: In the pragmatic way of thinking about it, all thought takes place
within a setting of sign relations, as a so-called semiotic process.
This means that logical inference, as a form of controlled, critical,
deliberate, normative, reflective, and rule-governed thinking process,
is just one among many species of semiotic processes that inhabits the
same sign-relational environment.  As a consequence of this placement,
logic distinguishes itself "among" other types of semiosis with which
it can be put in comparison, not ruling "above" them as some kind of
unseen overseer.

JA: Peirce's approaches to logic, from the very first to the very last,
were those of a first-rate mathematician who was also a first-rate
logician and philosopher of mathematics, like none have been since.
This meant that his logical method was always seamlessly integrated
with other available modes of effective description, formal modeling,
and mathematical specification.  However, much of this integration has
been lost in the logical formalisms that we have seen since Peirce's time,
leading to a wide gap between qualitative and quantitative modes of thought,
not to mention the proponents thereof.  As a result, the power of a unified
logical-mathematical formalism is a potential that remains largely untapped
to this day.

JA: At least with regard to the propositional level, Peirce's axioms enable
equational inference rules, meaning that no information is lost in the
process of inference, and so the steps are reversible.  In particular,
it seems to have slipped the attention of many contemporary logicians
just how weak are the inference rules that they currently promote as
adequate.  Although they might be claimed to sketch the envelope of
mathematical reasoning, they are not generative of that reasoning,
and lack the power to support the prevailing modes of equational
reasoning in mathematics.

BU: Well of course I like a post that's so big-picture
that I don't squint at it even once.

BU: I've been thinking, in my hazy way, about calculation and reversible
or equivalential inference, this will definitely interest me.

BU: You're getting into logic of mathematics, the "other" meaning
of "mathematical logic" besides that of mathematics of logic.
If you wanted to, you could bring up issues of abstractionism
and formalism, semantic and syntactic etc.  If you want to.
I don't see that you have to, but it would be interesting,
in terms both of Peirce and what you’re doing.

It seems about time for another small pause in our hike through this
strangely familiar territory, so let us stop to contemplate what new
perspectives our exertions up to this point may have brought us upon
the surrounding scenery and upon that business far away in the agora.

For the sake of my personal orientation, let me recall these points:

| 1.  The semiotic embedding of logic.
|
| 2.  The nature of mathematical models.
|
| 3.  The nurture of equational inference.

For me, listing half-heartedly again to the humming,
all too humming murmurs of that hubbub about belief,
the most resonant motive and the most salient point
at once, all so paradogmatically in departure -- or
is it violation? -- of the principle of uncertainty,
would be the sign-relational embedding of logic, as
a new way of looking at intentional paradoxes rises
from that theme.  Indeed, they no longer retain the
mysteriosa of paradox in their retinues at all, but
come to be seen for what they are, unfit models and
wrong theories of the phenomena under view, no more.

If you troubled to read the passages from "The Simplest Mathematics"
that Joe Ransdell troubled himself to post for our benefit, you will
have encountered many doctrines, utterly the most favored notions of
mathematicians, who are, despite all popular beliefs in some circles,
the most simple-minded of all folk there be on the face of the earth,
though they tend to keep quiet about the lot, their simple notions I
mean, and maybe all that seemed unproblematic and beyond the shadows
of doubt to you as you read it in the isolation of your own chambers,
but I can tell you the hue and the cry that it raises if you venture
to hawk it at the stall of Academus or from the Porch, front or back.

Why is that?

Well, that'd be way too long a story to tell today.

The very sad middle of it -- naive optimist that I am,
I am hopeful that the end of it will be a bit happier --
is that Peirce was a logician who actually knew a lot
of mathematics, and more than that, he had the desire
and the ability to articulate what he knew of it, and
cast my mind as I can I cannot think of anybody since
who had all three of these rare categorical qualities.

But one of the side effects of the decline in logical literacy
over the last hundred years -- by that I mean the prevailing
and even the authoritatively promoted tendency to drop any
of the classical tasks to which logic used to be applied,
once it is suspected that a certain type of restricted
logical formalism would not be adequate to tackle it --
is that it's just about impossible to get it across
any more that some things are not coming across in
their purported "paraphrases" and "translations",
more like parapraxes and travesties they are.

The semiotic embedding of logic has a bearing here.
For if logical thought, like its more unco cousins,
has its proper setting in sign relations, in their
full 3-adic glory, and if you try, as a few people
never give up trying, to paraparse any fragment of
its most fit theory into a language that restricts
relations to two places, then something is just
bound to be lost in the bit.

All of the parties to the current disbelief about belief
seem to be taking it for granted that there is something
like a 2-adic "belief transaction" between the epistemic
agent and the proposition under view.  That's one theory.

But if you take the treble to take a good listen to the entire
untiring symphony of what Peirce composed on the theme of thought,
all of the basic derivatives and subdominant subsidiaries thereof,
I just can't imagine how any onhearer would ever imagine that any
forehearable 2-concert of 2-abolical instruments would be able to
play it.  In finale, it is more than easy to prove that they can't.

Sufficient unto 2-morrow is the 3-bell thereof.
```

## Zeroth Order Theories • Discussion (2002)

### N Queens Problem (SeqFan List)

```AK = Antti Karttunen
JA = Jon Awbrey

AK: Am I (and other SeqFanaticians) missing something from this thread?

AK: Your previous message on seqfan (headers below) is a bit of the same topic,
but does it belong to the same thread?  Where I could obtain the other
messages belonging to those two threads?  (I'm just now starting to
study "mathematical logic", and its relations to combinatorics are
very interesting.)  Is this "cactus" language documented anywhere?

here i was just following a courtesy of copying people
when i reference their works, in this case neil's site:

http://www.research.att.com/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=000170

but then i thought that the seqfantasians might be amused, too.

the bit on higher order propositions, in particular,
those of type h : (B^2 -> B) -> B, i sent because
of the significance that 2^2^2^2 = 65536 took on
for us around that time.  & the ho, ho, ho joke.

"zeroth order logic" (zol) is just another name for
the propositional calculus or the sentential logic
that comes before "first order logic" (fol), aka
first intens/tional logic, quantificational logic,
or predicate calculus, depending on who you talk to.

the line of work that i have been doing derives from
the ideas of c.s. peirce (1839-1914), who developed
a couple of systems of "logical graphs", actually,
two variant interpretations of the same abstract
structures, called "entitative" and "existential"
graphs.  he organized his system into "alpha",
"beta", and "gamma" layers, roughly equivalent
to our propositional, quantificational, and
modal levels of logic today.

on the more contemporary scene, peirce's entitative interpretation
of logical graphs was revived and extended by george spencer brown
in his book 'laws of form', while the existential interpretation
has flourished in the development of "conceptual graphs" by
john f sowa and a community of growing multitudes.

http://members.door.net/arisbe/
http://www.enolagaia.com/GSB.html
http://www.cs.uah.edu/~delugach/CG/
http://www.jfsowa.com/
http://www.jfsowa.com/cg/
http://www.jfsowa.com/peirce/ms514w.htm
http://users.bestweb.net/~sowa/
http://users.bestweb.net/~sowa/peirce/ms514.htm

i have mostly focused on "alpha" (prop calc or zol) --
though the "func conception of quant logic" thread was
a beginning try at saying how the same line of thought
might be extended to 1st, 2nd, & higher order logics --
and i devised a particular graph & string syntax that
is based on a species of cacti, officially described as
the "reflective extension of logical graphs" (ref log),
but more lately just referred to as "cactus language".

it turns out that one can do many interesting things
with prop calc if one has an efficient enough syntax
and a powerful enough interpreter for it, even using
it as a very minimal sort of declarative programming
language, hence, the current thread was directed to
applying "zeroth order theories" (zot's) as brands
of "zeroth order programs" (zop's) to a set of old
constraint satisfaction and knowledge rep examples.

more recent expositions of the cactus language have been directed
toward what some people call "ontology engineering" -- it sounds
so much cooler than "taxonomy" -- and so these are found in the
ieee standard upper ontology working group discussion archives.
```
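The count 2^2^2^2 = 65536 mentioned above, for higher order propositions of type h : (B^2 -> B) -> B, can be checked directly. This is my own sketch, not code from the original thread.

```python
from itertools import product

# Points of B^2: there are 2^2 = 4 of them.
cells = list(product((0, 1), repeat=2))
assert len(cells) == 4

# Propositions f : B^2 -> B: each is a row of 4 truth values,
# so there are 2^(2^2) = 16 of them.
propositions = list(product((0, 1), repeat=len(cells)))
assert len(propositions) == 16

# Higher order propositions h : (B^2 -> B) -> B assign a bit to
# each of the 16 propositions: 2^(2^(2^2)) = 65536 in all.
higher_order = 2 ** len(propositions)
assert higher_order == 65536
```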

## Document History

### Zeroth Order Logic

Web Archive

#### Sep 2000 — Whitten's Starter Ontology

1. http://suo.ieee.org/email/msg01246.html

#### Oct 2000 — All Liar, No Paradox

1. http://suo.ieee.org/email/msg01739.html

#### Nov 2000 — Sowa's Top Level Categories

What Language To Use

TLC In KIF

#### Jul 2001 — Reflective Extension Of Logical Graphs

1. http://suo.ieee.org/email/msg05694.html