User:Jon Awbrey/Assorted Musements

Assorted Musements • Collated 2017

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Peirce List -- Assorted Musements -- 2005 Oct 04

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COR.  Composition Of Relations
COS.  Classification Of Signs
DOI.  Doctrine Of Individuals
ECI.  Extension x Comprehension = Information
HEC.  Hermeneutic Equivalence Classes
HRY.  Hemlock, the Rack, and You 
I&T.  Identity & Teridentity
LMU.  Limited Mark Universes
LOR.  Logic Of Relatives
MSI.  Manifolds of Sensuous Impressions
NST.  Non Sequi Tours
RAR.  Reductions Among Relations
RIG.  Relations In General
TOR.  Theory Of Relations
T^3.  Tone, Token, Type
VIT.  Virally Important Topics

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Main Texts

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

DOI 1870.  http://suo.ieee.org/ontology/msg04332.html
LMU 1883.  http://suo.ieee.org/ontology/msg03204.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COR.  Composition Of Relations

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COR.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I hope that you will be able to recall that cousin of
the Between relation that we took up here once before,
and that you will be able to recognize its characters,
even if I now disguise it under a new name and partly
dissemble them under a new manner of parameterization.

Reading the generic name "AO" to suggest "argument order",
let us define a specific relation AO_213 c R^3 as follows:

<x, a, b> in AO_213

if and only if

a < x < b.

In other words:

<x, a, b> in AO_213

if and only if

a < x and x < b.

Corresponding to the 3-adic relation AO_213 c R^3 = RxRxR
there is a "proposition", a function ao_213 : R^3 -> B,
that I will describe, until a better name comes along,
as the "relation map" that is "dual to" the relation.
It is also known as the "indicator" of that relation.

Consider the boolean analogue or the logical variant of AO, with
real domains of type R now replaced by boolean domains of type B.

The boolean analogue of the ordering "<" is the implication "=>",
so this tells us how to construct the logical analogue of AO_213.

Define the logical "argument order" relation AO_213 c B^3 as follows:

<x, a, b> in AO_213

if and only if

[a => x] and [x => b].

When there is no risk of confusion, we can also write it like this:

<x, a, b> in AO_213

if and only if

a => x => b.

Corresponding to the 3-adic relation AO_213 c B^3 = BxBxB,
there is a "proposition", a function ao_213 : B^3 -> B,
that I will describe, until a better name comes along,
as the "relation map" that is "dual to" the relation.
It is also known as the "indicator" of that relation.
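
Here, as a purely illustrative aside, is a minimal sketch in Python of
the two indicator functions just described, one for the real case and
one for the boolean case.  The function names ao_213_real, ao_213_bool,
and implies are my own labels, not anything fixed in the text above.

def ao_213_real(x, a, b):
    """Indicator of AO_213 over the reals:  1 if a < x < b, else 0."""
    return 1 if a < x < b else 0

def implies(p, q):
    """Boolean analogue of '<':  material implication p => q on B = {0, 1}."""
    return 1 if (not p) or q else 0

def ao_213_bool(x, a, b):
    """Indicator of AO_213 over B^3:  1 if (a => x) and (x => b), else 0."""
    return 1 if implies(a, x) and implies(x, b) else 0

print(ao_213_real(2, 1, 3))   # 1, since 1 < 2 < 3
print(ao_213_bool(1, 0, 1))   # 1, since 0 => 1 and 1 => 1
print(ao_213_bool(0, 1, 1))   # 0, since 1 => 0 fails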

At this point I want to try and get away with a bit
of additional flexibility in the syntax that I use,
reusing some of the same names for what are distinct
but closely related types of mathematical objects.

In particular, I would like to have the license
to speak a bit more loosely about these objects,
to ignore the distinction between "relations" of
the form Q c X_1 x ... x X_k and "relation maps"
of the form q : X_1 x ... x X_k -> B, and perhaps
on sundry informal occasions to use the very same
names for them -- The Horror! -- hoping to let the
context determine the appropriate type of object,
except where it may be necessary to maintain this
distinction in order to avoid risking confusion.

In order to keep track of all of the players -- not to mention all of the refs! --
it may help to re-introduce a diagram that I have used many times before, as a
kind of a play-book or programme, to sort out the burgeoning teams of objects
and the cryptic arrays of signs that we need to follow throughout the course
of this rather extended run-into-overtime game:

o-----------------------------o-----------------------------o
|  Objective Framework (OF)   | Interpretive Framework (IF) |
o-----------------------------o-----------------------------o
|       Formal Objects        |    Formal Signs & Texts     |
o-----------------------------o-----------------------------o
|                             |                             |
|        Propositions         |         Expressions         |
|          (Logical)          |          (Logical)          |
|              o              |              o              |
|              |              |              |              |
|              |              |              |              |
|              o              |              o              |
|             / \             |             / \             |
|            /   \            |            /   \            |
|           o     o           |           o     o           |
|       Sets       Maps       |  Set Names       Map Names  |
| (Geometric)    (Functional) | (Geometric)    (Functional) |
|                             |                             |
o-----------------------------o-----------------------------o
|                             |                             |
|     B^k         B^k -> B    |   "AO_213"      "ao_213"    |
|     R^k         R^k -> B    |   "AO_213"      "ao_213"    |
|     X^k         X^k -> B    |   "Q"           "q"         |
|                             |                             |
o-----------------------------o-----------------------------o

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COR.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

By way of illustrating the application of these
heuristic principles to the case of the Between
relation, we might proceed in the following way.

Given a pair of distinct real numbers, say a =/= b in R, we may
choose names "without loss of generality" (WOLOG) so that a < b.

Then the 3-adic "between" relation is defined as follows:

x is between a and b

if and only if

a < x and x < b.

Now, if we are operating under what are likely to be
the default assumptions, the intended context, or the
standard interpretation of these statements, then the
sentence "x < y" invokes a dyadic relation LT c R x R,
where R = Reals, and this in turn corresponds to the
proposition lt : RxR -> B, where B = {0, 1} as usual.

This allows us to articulate the sentence "x is between a and b"
as invoking a 3-adic relation called "1st between 2nd and 3rd",
also known as the "argument order 2, 1, 3" relation on x, a, b.
We already defined this 3-adic relation as AO_213 c R^3 = RxRxR
noting that it is indicated by the proposition ao_213 : R^3 -> B.

Last but not least, recall that conjunction is just a function
that maps two propositions into a proposition, in other words,
a function of the form 'and' : (X -> B) x (X -> B) -> (X -> B).

Now here's a good question:  What is X?
To be specific, is it R^2 or is it R^3?

We appear to have a little bit of a problem
matching up the divers pieces of the puzzle.

What this requires is a little gimmick that I call a "tacit extension".

In this case, we need a way of taking the function lt : R^2 -> B and
using it to construct two different functions of type : R^3 -> B.

Thus we need a couple of functions on functions, both having this form:

te : (R_1 x R_2 -> B) -> (R_1' x R_2' x R_3' -> B).

Here we must now distinguish particular copies of R
in the cartesian products RxR = R^2 and RxRxR = R^3.

Tacit extension te_21 is defined by:  (te_21 (f))(x, a, b)  =  f(a, x).

Tacit extension te_13 is defined by:  (te_13 (f))(x, a, b)  =  f(x, b).

Applying these tacit extensions to lt we get:

(te_21 (lt))(x, a, b)  =  lt_21 (x, a, b)  =  lt(a, x).

(te_13 (lt))(x, a, b)  =  lt_13 (x, a, b)  =  lt(x, b).

In effect, the tacit extension of the "less than" map,
given three arguments, just compares the proper pair
of them, with a "don't care" condition on the other.

Putting all the pieces together, we get this picture:

o-------------------------------------------------o
|                                                 |
|          lt(a, x)  o       o  lt(x, b)          |
|                    |       |                    |
|                    |       |                    |
|                  te_21   te_13                  |
|                    |       |                    |
|                    |       |                    |
|   lt_21 (x, a, b)  o       o  lt_13 (x, a, b)   |
|                     \     /                     |
|                      \   /                      |
|                       \ /                       |
|                       and                       |
|                        |                        |
|                        |                        |
|                        |                        |
|                        o                        |
|                ao_213 (x, a, b)                 |
|                                                 |
o-------------------------------------------------o

This gives a functional decomposition of the proposition ao_213
making use of two different applications of the proposition lt,
a couple of tacit extensions te_21 and te_13, and the reusable
function 'and' that joins the tacit extensions of lt together.
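
For those who like to see the construction run, here is a minimal
sketch in Python of the same decomposition, with propositions modeled
as functions returning 0 or 1.  The helper names te_21, te_13, and
conjoin are mine, chosen merely to echo the notation above.

def lt(u, v):
    """Indicator of the 'less than' relation:  lt(u, v) = 1 if u < v, else 0."""
    return 1 if u < v else 0

def te_21(f):
    """Tacit extension te_21:  (te_21(f))(x, a, b) = f(a, x)."""
    return lambda x, a, b: f(a, x)

def te_13(f):
    """Tacit extension te_13:  (te_13(f))(x, a, b) = f(x, b)."""
    return lambda x, a, b: f(x, b)

def conjoin(p, q):
    """The 'and' that maps two propositions on R^3 into one proposition on R^3."""
    return lambda x, a, b: min(p(x, a, b), q(x, a, b))

# Rebuild ao_213 from lt, the two tacit extensions, and 'and'.
ao_213 = conjoin(te_21(lt), te_13(lt))

print(ao_213(2, 1, 3))   # 1, since 1 < 2 and 2 < 3
print(ao_213(0, 1, 3))   # 0, since 1 < 0 fails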

What is the use of all of this? -- especially in view of the fact
that we did not lower the maximum arity of the relations involved?

In effect, we started out with a 3-adic relation map  ao_213 : R^3 -> B,
rebuilding it from the 2-adic relation maps of the shape  lt : R^2 -> B,
plus two tacit extensions of the shape  te : (R^2 -> B)  ->  (R^3 -> B),
plus a 3-adic relation of the shape  'and' : (R^3 -> B)^2 -> (R^3 -> B).

Well, the advantage, over the long haul, comes precisely from the fact
that the seemingly more complex pieces that we reached in the analysis
are pieces that are highly reusable in situation after situation, and
so it is worth the overhead, eventually, that it takes to understand
these recurring components as thoroughly as possible.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COR.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I thought that it might be a beneficial exercise for us to
work out an explicit definition of "relational composition",
partly as an object example of how such a thing ought to look.

I will begin at the  beginning, roughly with the way that C.S. Peirce
initially defined and outlined his notion of "relative multiplication",
though I will be doing this from memory -- so be gentle, I will check
with the original papers (circa 1870) later on.  An interesting thing
from my perspective, and what first drew me back to this stage in the
history of logic from the problems that I was investigating years ago,
is the way that Peirce had already, at the very beginning, integrated
the aspects of the logic of relations that we usually problematize as
the "extensional" versus the "intensional".  Some of that integration
is dimly apparent even in these simple cases, but the remainder of it
would be well worth detailed and extended study.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COR.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Graph-Theoretic Picture of Composition

Here are some excerpts from the "Influenza" example:

There is this 3-adic transaction among
three relational domains.  Let us say:
"Transmitters", "Vectors", "Receivers",
and let us symbolize:  C c T x V x R.

In order to prove the proposition, for instance, as in a court of law,
that "John J gave-the-flu-to Mary M", it is necessary but by no means
enough to convince an arbiter that an infectious colony or a virulent
sample of particular micro-organisms of the genus known as "influenza"
was transported from John J (SSN 1 TBA) to Mary M (SSN 2 TBA) on some
well-specified occasion in question.

In other words, the "evidence" for the 2-adic relation that bears
the form and the description F c T x R : "-- gave-the-flu-to --",
is found solely within the 3-adic relation of "communication" C.

Let us assume that this long chain of causal and physical "influences"
can be more conveniently summarized, for our present purposes, in the
form of a 3-adic relation that connects a transmitter t, a "vector" v,
and a receiver r.  Thus a bona fide incident or a genuine instance of
the "communication relation" C c TxVxR will be "minimally adequately",
as they say in epidemiology, charted in a datum of the form <t, v, r>.

What is the character of the relationship between
the 3-adic relation of "communication" C c TxVxR
and the 2-adic relation "-- gave-the-flu-to --"?

This particular relation among relations --
you may expect to catch me using the term
"meta-relation" for this notion, but you
will not if I can help it -- goes broadly
under the name of a "projection", with
its type here being Proj : TxVxR -> TxR.
Our use of it in the present case is an
example of how we pass from caring about
the "detail of the evidence" (DOTE) to
desiring only a brief sum of the fact.

For now, let us stipulate that we have the following
sample of data about the 3-adic relation C c TxVxR :

   {..., <John J, Agent A, Mary M>, ...}.

In other words, we are fixing on a single element:

   <John J, Agent A, Mary M>  in  C  c  TxVxR.

Let us now contemplate the generalization of ordinary functional composition
to 2-adic relations, called, not too surprisingly, "relational composition",
and roughly information-equivalent to Peirce's "relative multiplication".

I will employ the data of our present case to illustrate two different
styles of picture that we can use to help us reason out the operation
of this particular form of relational composition.

First I show one of my favorite genres of pictures for 2-adic relations,
availing itself of the species of graphs known as "bipartite graphs",
or "bigraphs", for short.

Let an instance of the 2-adic relation E c TxV
informally defined by {<t, v> : t exhales v},
be expressed in the form "t exhales v".

Let an instance of the 2-adic relation I c VxR
informally defined by {<v, r> : v infects r},
be expressed in the form "v infects r".

Just for concreteness in the example, let us imagine that:

1.  John J exhales three viral particles numbered 1, 3, 5.

2.  Mary M inhales three viral particles numbered 3, 5, 7,
           each of which infects her with influenza.

The 2-adic relation E that exists in this situation is
imaged by the bigraph on the T and the V columns below.

The 2-adic relation I that exists in this situation is
imaged by the bigraph on the V and the R columns below.

        E     I
     T---->V---->R

     o     1     o
          /
         /
     o  /  2     o
       /
      /
   J o-----3     o
      \     \
       \     \
     o  \  4  \  o
         \     \
          \     \
     o     5-----o M
                /
               /
     o     6  /  o
             /
            /
     o     7     o

Let us now use this picture to illustrate for ourselves,
by way of concrete examples, many of the distinct types
of set-theoretic constructs that would arise in general
when contemplating any similar relational configuration.

First of all, there is in fact a particular 3-adic relation Q
that is determined by the data of these two 2-adic relations.
It cannot be what we are calling the "relational composition"
or the "relative product", of course, since that is defined --
forgive me if I must for this moment be emphatic -- DEFINED
to be yet another 2-adic relation.  Just about every writer
that I have read who has discovered this construction has
appeared to come up with a different name for it, and I
have already forgotten the one that I was using last,
so let me just define it and we will name it later:

What we want is easy enough to see in visible form,
as far as the present case goes, if we look at the
composite sketch already given.  There the mystery
3-adic relation has exactly the 3-tuples <t, v, r>
that are found on the marked paths of this diagram.

That much insight should provide enough of a hint
to find a duly officious set-theoretic definition:

   Q  =  {<t, v, r> : <t, v> in E and <v, r> in I}.

There is yet another, very convenient, way to define this,
the recipe of which construction proceeds by these stages:

1.  For 2-adic relation G c TxV, define GxR,
    named the "extension" of G to TxVxR, as:
    {<t, v, r> in TxVxR : <t, v> in G}.

2.  For 2-adic relation H c VxR, define TxH,
    named the "extension" of H to TxVxR, as:
    {<t, v, r> in TxVxR : <v, r> in H}.

In effect, these extensions just keep the constraint
of the 2-adic relation "in its places" while letting
the other elements roam freely.

Given the ingredients of these two extensions,
at the elemental level enjoying the two types:
TxV -> TxVxR  and  VxR -> TxVxR, respectively,
we can define the 3-adic Q as an intersection:

   Q(G, H)  =  GxR  |^|  TxH

One way to comprehend what this construction means
is to recognize that it is the largest relation on
TxVxR that is congruent with having its projection
on TxV be G and its projection on VxR be H.

Thus, the particular Q in our present example is:

   Q(E, I)  =  ExR  |^|  TxI

This is the relation on TxVxR that, for us, embodies an assumption
about the "evidence" underlying the case, one which restricts itself
to the information given and imposes no extra constraint.

And finally -- though it does amount to something like the "scenic tour",
it will turn out to be useful that we did things in this roundabout way --
we define the relational composition of the 2-adic relations G and H as:

   G o H  =  Proj<T, R> Q(G, H)  =  Proj<T, R> (GxR |^| TxH)

| Reference:
|
| Although it no doubt goes way back, I am used to thinking
| of this formula as "Tarski's Trick", because I first took
| notice of it in a book by Ulam, who made this attribution.
|
| Ulam & Bednarek,
|"On the Theory of Relational Structures
| And Schemata for Parallel Computation",
| Original report dated May 1977, in:
| Ulam, 'Analogies Between Analogies',
| University of California Press, Berkeley, CA, 1990.

Applying this general formula to our immediate situation:

   E o I  =  Proj<T, R> Q(E, I)  =  Proj<T, R> (ExR |^| TxI)

We arrive at this picture of the composition E o I c TxR:

       EoI
     T---->R

     o     o

     o     o

   J o     o
      \
       \
     o  \  o
         \
          \
     o     o M

     o     o

     o     o

In summation, E o I = {<John J, Mary M>}.
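
For the record, here is a minimal sketch in Python of the same
computation, following the formula step by step:  build the extensions
ExR and TxI as sets of triples, intersect them to get Q(E, I), then
project on TxR.  The element labels are ad hoc stand-ins for the data
of the example, and the function names are mine.

T = {"t1", "t2", "J", "t4", "t5", "t6", "t7"}   # transmitters, with J = John J
V = {1, 2, 3, 4, 5, 6, 7}                        # viral particles
R = {"r1", "r2", "r3", "r4", "M", "r6", "r7"}   # receivers, with M = Mary M

E = {("J", 1), ("J", 3), ("J", 5)}   # t exhales v
I = {(3, "M"), (5, "M"), (7, "M")}   # v infects r

def extend_right(G, R):
    """Extension of G c TxV to TxVxR:  keep G's constraint, let r roam freely."""
    return {(t, v, r) for (t, v) in G for r in R}

def extend_left(H, T):
    """Extension of H c VxR to TxVxR:  keep H's constraint, let t roam freely."""
    return {(t, v, r) for t in T for (v, r) in H}

Q = extend_right(E, R) & extend_left(I, T)   # Q(E, I) = ExR |^| TxI

E_o_I = {(t, r) for (t, v, r) in Q}          # Proj<T, R> Q(E, I)

print(E_o_I)   # {('J', 'M')}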

By the way, you may have noticed that I am using here
what strikes me as a more natural order for composing
2-adic relations, but the opposite of what is usually
employed for functions.  In the present ordering, one
can read the appearances of the relational domains in
what seems like a much more straightforward way, just
as they are invoked by the series of relation symbols.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COR.  Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Matrix-Theoretic Picture of Composition

Let us declare a "logical basis" -- and leave it
as an exercise for the reader to improvise a fit
definition of what is, and what ought to be that --
but anyway a collection of elements of this form:

   "Basic Entia"  K  =  T |_| V |_| R

   "Transmissia"  T  =  {t1, t2, t3, t4, t5, t6, t7}

   "Viral Entia"  V  =  {v1, v2, v3, v4, v5, v6, v7}

   "Receptacula"  R  =  {r1, r2, r3, r4, r5, r6, r7}

Just by way of orientation to the way that we speak "way out here",

   t3  =  John.
   r5  =  Mary.

So far, so good, but here we have come to one of those junctures
where personal tastes are noticed to be notoriously divergent in
matters of notation, and so at this point I will simply describe
a few of the most popular options:

1.  One may lump all of these elements together and work
    with a cubic array that has dimensions  21 x 21 x 21,
    taking its projections into square matrices  21 x 21.

2.  One may consider the very like possibility, here only,
    that the T's and the R's are abstractly the same set,
    and reduce the representation in a corresponding way.

3.  One may treat the relational domains T, V, R as three
    distinct sets, start with a 3-adic relation Q c TxVxR
    represented as a cubic array of dimensions  7 x 7 x 7,
    taking its projections into square matrices of  7 x 7.

Option 3 seems easier for us here,
just as a way of conserving space.

The extensions of the 2-adic relations E and I
are the following collections of ordered pairs:

   E  =  {<t3, v1>, <t3, v3>, <t3, v5>}

   I  =  {<v3, r5>, <v5, r5>, <v7, r5>}

Peirce represented 2-adic relations in this form:

   E  =  t3:v1 +, t3:v3 +, t3:v5

   I  =  v3:r5 +, v5:r5 +, v7:r5

It is very often convenient, though by no means obligatory,
to arrange these quasi-algebraic terms in forms like these:

T x V  =

[
|  t1:v1,  t1:v2,  t1:v3,  t1:v4,  t1:v5,  t1:v6,  t1:v7,
|  t2:v1,  t2:v2,  t2:v3,  t2:v4,  t2:v5,  t2:v6,  t2:v7,
|  t3:v1,  t3:v2,  t3:v3,  t3:v4,  t3:v5,  t3:v6,  t3:v7,
|  t4:v1,  t4:v2,  t4:v3,  t4:v4,  t4:v5,  t4:v6,  t4:v7,
|  t5:v1,  t5:v2,  t5:v3,  t5:v4,  t5:v5,  t5:v6,  t5:v7,
|  t6:v1,  t6:v2,  t6:v3,  t6:v4,  t6:v5,  t6:v6,  t6:v7,
|  t7:v1,  t7:v2,  t7:v3,  t7:v4,  t7:v5,  t7:v6,  t7:v7,
]

V x R  =

[
|  v1:r1,  v1:r2,  v1:r3,  v1:r4,  v1:r5,  v1:r6,  v1:r7,
|  v2:r1,  v2:r2,  v2:r3,  v2:r4,  v2:r5,  v2:r6,  v2:r7,
|  v3:r1,  v3:r2,  v3:r3,  v3:r4,  v3:r5,  v3:r6,  v3:r7,
|  v4:r1,  v4:r2,  v4:r3,  v4:r4,  v4:r5,  v4:r6,  v4:r7,
|  v5:r1,  v5:r2,  v5:r3,  v5:r4,  v5:r5,  v5:r6,  v5:r7,
|  v6:r1,  v6:r2,  v6:r3,  v6:r4,  v6:r5,  v6:r6,  v6:r7,
|  v7:r1,  v7:r2,  v7:r3,  v7:r4,  v7:r5,  v7:r6,  v7:r7,
]

Now, taking these generic motifs as scenic -- or, at least, schematic --
backdrops, one can permit the particular characters of one's favorite
2-adic relations to represent themselves and to play out their action
on this stage, by attaching affirming or nullifying "coefficients" to
the appropriate places of the thus-arrayed company of possible actors.

E  =

[
|  0 t1:v1,  0 t1:v2,  0 t1:v3,  0 t1:v4,  0 t1:v5,  0 t1:v6,  0 t1:v7,
|  0 t2:v1,  0 t2:v2,  0 t2:v3,  0 t2:v4,  0 t2:v5,  0 t2:v6,  0 t2:v7,
|  1 t3:v1,  0 t3:v2,  1 t3:v3,  0 t3:v4,  1 t3:v5,  0 t3:v6,  0 t3:v7,
|  0 t4:v1,  0 t4:v2,  0 t4:v3,  0 t4:v4,  0 t4:v5,  0 t4:v6,  0 t4:v7,
|  0 t5:v1,  0 t5:v2,  0 t5:v3,  0 t5:v4,  0 t5:v5,  0 t5:v6,  0 t5:v7,
|  0 t6:v1,  0 t6:v2,  0 t6:v3,  0 t6:v4,  0 t6:v5,  0 t6:v6,  0 t6:v7,
|  0 t7:v1,  0 t7:v2,  0 t7:v3,  0 t7:v4,  0 t7:v5,  0 t7:v6,  0 t7:v7,
]

I  =

[
|  0 v1:r1,  0 v1:r2,  0 v1:r3,  0 v1:r4,  0 v1:r5,  0 v1:r6,  0 v1:r7,
|  0 v2:r1,  0 v2:r2,  0 v2:r3,  0 v2:r4,  0 v2:r5,  0 v2:r6,  0 v2:r7,
|  0 v3:r1,  0 v3:r2,  0 v3:r3,  0 v3:r4,  1 v3:r5,  0 v3:r6,  0 v3:r7,
|  0 v4:r1,  0 v4:r2,  0 v4:r3,  0 v4:r4,  0 v4:r5,  0 v4:r6,  0 v4:r7,
|  0 v5:r1,  0 v5:r2,  0 v5:r3,  0 v5:r4,  1 v5:r5,  0 v5:r6,  0 v5:r7,
|  0 v6:r1,  0 v6:r2,  0 v6:r3,  0 v6:r4,  0 v6:r5,  0 v6:r6,  0 v6:r7,
|  0 v7:r1,  0 v7:r2,  0 v7:r3,  0 v7:r4,  1 v7:r5,  0 v7:r6,  0 v7:r7,
]

And then there are times when it is not so convenient!

At any rate, it is then conceivable to push the level
of abstraction in our so-arrayed representations even
one step further, and so long as we keep in mind what
the now-suppressed row-indices and column-indices are
supposed to signify, logically speaking, in the first
place, then we can push them even deeper into the dim
and tacit background of the overriding interpretation.

E  =

[
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  1,  0,  1,  0,  1,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
]

I  =

[
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  1,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  1,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  1,  0,  0,
]

When all of this is said and done, that is to say,
when all of this is said and done the fitting way,
then one can represent relative multiplication or
relational composition in terms of an appropriate
quasi-algebraic "multiplication" operation on the
rectangular matrices that represent the relations.
The logical operation of the relative product has
to be qualified as "quasi-algebraic" just to help
us keep in mind the fact that it is not precisely
the one that algebraically-minded folks would put
on the same brands of {0, 1}-coefficient matrices.
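
As a minimal sketch in Python, under the assumption that the
quasi-algebraic product means taking 'or' in place of '+' and 'and'
in place of 'x', the matrices displayed above multiply out as follows.
The row and column indices tacitly stand for t1..t7, v1..v7, r1..r7,
with t3 = John and r5 = Mary.

E = [[0] * 7 for _ in range(7)]
I = [[0] * 7 for _ in range(7)]
E[2][0] = E[2][2] = E[2][4] = 1   # row t3:  columns v1, v3, v5
I[2][4] = I[4][4] = I[6][4] = 1   # column r5:  rows v3, v5, v7

def bool_mat_mult(A, B):
    """Entry (i, k) of the product is the 'or' over j of (A[i][j] 'and' B[j][k])."""
    n = len(A)
    return [[max(min(A[i][j], B[j][k]) for j in range(n))
             for k in range(n)]
            for i in range(n)]

EoI = bool_mat_mult(E, I)
print(EoI[2][4])            # 1 :  t3 (John) is composed to r5 (Mary)
print(sum(map(sum, EoI)))   # 1 :  and that is the only pair in E o I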

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COR.  Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Let me start by going back to the way that Peirce
initially represented 2-adic relations in terms of
2-tuples that have an algebra defined on them:

| Peirce represented 2-adic relations in this form:
|
|   E  =  t3:v1 +, t3:v3 +, t3:v5
|
|   I  =  v3:r5 +, v5:r5 +, v7:r5

The composite symbol "+," will have to serve for
the variety of different symbols that Peirce and
his variety of different typographers have taken
to using at different points in time to signify
the logical operation of inclusive disjunction.
Especially in this matrix algebra setting, the
ordinary plus sign "+" is a strictly reserved
keysign for the additive operation of a field.

Given a basis $X$ = {x<1>, ..., x<n>}, where each element
stands in for an entity, an object, a particular, a thing,
or whatever in the universe of discourse, Peirce employed
a "colonial notation" of the form "s<1>: ... :s<k>", with
s<j> in $X$ for j = 1 to k, to represent ordered k-tuples.

Each type of relational operation that one defines over
this initial basis and over this k-tuply extended basis
corresponds to a way of combining f-tuples and g-tuples
to arrive at specified h-tuples for appropriate choices
of positive integers f, g, h.

In the case of the ordinary relational composition of
two 2-adic relations to get a product 2-adic relation,
the quasi-algebraic rule is this:

|   (u:v)(x:y) = (u:y) if v = x.
|   (u:v)(x:y) =   0   otherwise.

For example, consider the problem to compute the
relative product EI = EoI of the pair of 2-adic
relations, E and I, that are given as follows:

|   E  =  t3:v1 +, t3:v3 +, t3:v5
|   I  =  v3:r5 +, v5:r5 +, v7:r5

One simply forms the indicated product:

|   EI = (t3:v1 +, t3:v3 +, t3:v5)(v3:r5 +, v5:r5 +, v7:r5)

and then proceeds to multiply the whole thing out,
using a composition rule of the form given above
to reduce complex terms whenever possible.

Like so:

|   EI  =  (t3:v1 +, t3:v3 +, t3:v5)(v3:r5 +, v5:r5 +, v7:r5)
|
|       =  (t3:v1)(v3:r5) +, (t3:v1)(v5:r5) +, (t3:v1)(v7:r5) +,
|          (t3:v3)(v3:r5) +, (t3:v3)(v5:r5) +, (t3:v3)(v7:r5) +,
|          (t3:v5)(v3:r5) +, (t3:v5)(v5:r5) +, (t3:v5)(v7:r5)
|
|       =    0    +,    0    +,    0    +,
|          t3:r5  +,    0    +,    0    +,
|            0    +,  t3:r5  +,    0
|
|       =  t3:r5
|
|       =  John:Mary

In conclusion, we compute that John gave the flu to Mary.
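
A minimal sketch in Python of the same multiplication, treating each
relative term as a set of pair-terms and applying the rule that
(u:v)(x:y) = (u:y) if v = x, else 0.  Collecting the surviving terms
under the logical sum "+," amounts to taking a set union, since
duplicate terms collapse.

E = {("t3", "v1"), ("t3", "v3"), ("t3", "v5")}
I = {("v3", "r5"), ("v5", "r5"), ("v7", "r5")}

def relative_product(G, H):
    """Multiply out the sums of pair-terms, keeping u:y whenever the inner terms match."""
    return {(u, y) for (u, v) in G for (x, y) in H if v == x}

print(relative_product(E, I))   # {('t3', 'r5')},  that is,  John:Mary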

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COR.  Omitted Material

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Let us then push on, in a retrograde way, returning to the orbit
of those very first relations that got us into the midst of this
quandary in the first place, to wit, the relations in medias res,
the relations of betwixt and between and all of their sundry kin.
But let us this time place the paltry special relations on which
we fixed the first time around back within the setting of a much
broader and a much more systematically examined context, that is,
an extended family of related relations or variations on a theme.

I append some material on "sign relations"
and the "irreducibility of triadics" (IOT).

Here is an old note I've been looking for since we started on this bit about isms,
as I feel like I managed to express in it somewhere my point of view that the key
to integrating variant persepectives is to treat their contrasting values as axes
or dimensions rather than so many points on a line to be selected among, each in
exclusion of all the others.  To express it briefly, it is the difference between
k-tomic decisions among terminal values and k-adic dimensions of extended variation.

But I think that it is safe to say that, whatever else
it might be good for, tomic thinking is of limited use
in trying to understand Peirce's thought.

OK, YAFI (you asked for it).  As it happens, this is precisely what I just used up
one of the better years of my life trying to explain in the SUO discussion group,
and so I have a whole lot of material on this, most of it hardly scathed by any
dint of popular assimilation or external use.

I see a couple of separate questions in what you are asking:

1.  What is the qualitative character of the 3-adic sign relation?  In particular,
    is it better to comprehend it in the form <object, sign, interpretive agent>,
    or is it best to understand it in the form <object, sign, interpretant sign>?

2.  What is reducible to what in what way, and what not?

The answer to the first question is writ
in what we who speak in Peircean tongues
dub the "Parable of the Sop to Cerberus".
Peirce would often start out explaining
his idea of the sign relation, for the
sake of a gentle exposition, in terms
of Object, Sign, and Interpreter, and
then follow with a partial retraction
of the Agent to the Interpretant Sign
that occupies the alleged agent's mind.
Here is the locus classicus for this bit:

There is a critical passage where Peirce explains the relationship
between his popular illustrations and his technical theory of signs.

| It is clearly indispensable to start with an accurate
| and broad analysis of the nature of a Sign.  I define
| a Sign as anything which is so determined by something
| else, called its Object, and so determines an effect
| upon a person, which effect I call its Interpretant,
| that the latter is thereby mediately determined by
| the former.  My insertion of "upon a person" is
| a sop to Cerberus, because I despair of making
| my own broader conception understood.
|
| CSP, 'Selected Writings', page 404.
|
| Charles Sanders Peirce, "Letters to Lady Welby", Chapter 24, pages 380-432,
| in 'Charles S. Peirce: Selected Writings (Values in a Universe of Chance)',
| Edited with Introduction and Notes by Philip P. Wiener, Dover Publications,
| New York, NY, 1966.
|
| http://suo.ieee.org/ontology/msg02683.html

Peirce's truer technical conception can be garnered
from another legendary bit of narrative exposition,
the story of the "French Interpretant's Memory":

Here is a passage from Peirce that is decisive in clearing up
the relationship between the interpreter and the interpretant,
and, not by coincidence, has some bearing on the placement of
concepts as symbols, as their principal aspects are refracted
across the spectrum of sign modalities.

| I think we need to reflect upon the circumstance that every word
| implies some proposition or, what is the same thing, every word,
| concept, symbol has an equivalent term -- or one which has become
| identified with it, -- in short, has an 'interpretant'.
|
| Consider, what a word or symbol is;  it is a sort
| of representation.  Now a representation is something
| which stands for something.  ...  A thing cannot stand for
| something without standing 'to' something 'for' that something.
| Now, what is this that a word stands 'to'?  Is it a person?
|
| We usually say that the word 'homme' stands to a Frenchman for 'man'.
| It would be a little more precise to say that it stands 'to' the
| Frenchman's mind -- to his memory.  It is still more accurate
| to say that it addresses a particular remembrance or image
| in that memory.  And what 'image', what remembrance?
| Plainly, the one which is the mental equivalent of
| the word 'homme' -- in short, its interpretant.
| Whatever a word addresses then or 'stands to',
| is its interpretant or identified symbol.  ...
|
| The interpretant of a term, then, and that which it stands to
| are identical.  Hence, since it is of the very essence of a symbol
| that it should stand 'to' something, every symbol -- every word and
| every 'conception' -- must have an interpretant -- or what is the
| same thing, must have information or implication.  (CE 1, 466-467).
|
| Charles Sanders Peirce, 'Chronological Edition', Volume 1, pages 466-467.

http://suo.ieee.org/email/msg01111.html
http://suo.ieee.org/email/msg01112.html
http://suo.ieee.org/email/msg01113.html

As it happens, this is exactly the sort of conception of semiosis
that I need in my own work for building bridges between the typical
brands of tokens that are commonly treated in abstract semiotics and
the kinds of states in the configuration spaces of dynamic systems
that are the actual carriers of these signals.  Which explains
why I discuss this passage toward the end of the introduction
to my dissertation and make critical use of it throughout.

I am not sure about this, since I do not know for certain what the object of life is.
It would be just as easy to say that the protein is yet another interpretant sign in
a process whose main object is simply to continue itself in the form to which it
would like to become accustomed.  The only way I know to decide would be to check
my favorite definition, but there is always a bit of play in the way that it can
be made to fit any particular concrete process.

Before we get into this I need to make clear that I will be talking about the
composability and ducibility properties of relations in a purely mathematical
sense, and if I say "formal", "logical", or "structural" I will be projecting
what is pretty much a shade off the same tree.  So I will say that a relation
is "decomposable" or "irreducible" much in the same way that I would say that
an integer is "prime" or, better still, that a mathematical group is "simple".
Therefore, whatever we say about the composition, constitution, factorization,
and structure of a relation will be wholly determined by the definitions that
we pose for the notions of "composition" and "reduction" that we have in mind.
Within these bounds it is possible to achieve a limiting degree of certainty
about what is claimed within.

When it comes to the questions of conceptual, epistemological, ontological,
or physical structure I am far less certain.  Each of these, I am guessing,
involves the question of how well a mathematical object captures the being
of some object outside it, and there is usually a lot of "play in the will"
when it comes to that.  I do not expect to know much about the ontology of
anything, in the sense of "what it is", at my rate, not in this brief life,
so I am pretty much fated to discussing the ways that a thing is described,
which does include, in partial kinds of compensation, the ways that things
are circumscribed, encompassed, and surrounded by the gifts of the senses.

As far as episteme goes, I do not know much about it, except that knowledge,
if we ever get it, can only arrive through the imports of inquiry, broadly
conceived, and only be validated through the customs of inquiry, narrowly
conceived, in the stamp of critical, reflective, transmissable inquiry.

If by "conceptual" one means the comprehension of the intensions that are
implied in our concept or our grasp of an object, then I would be able to
say that an object relation can be addressed in extension or in intension,
either way, and that both ways are most likely ultimately necessary for a
practical grasp of the abstract object in question.  This notwithstanding,
I am for the moment working from an extensional point of view on relations,
for the simple reason that much less work has been done on sign relations
from the extensional angle so far.

In this view, a relation is a set of tuples.  Notice that I do not yet say "k-tuples".
This is because it is useful to expand the concept of a relation to include all of
what are more usually called "formal languages", to wit, subsets of A*, where A
is any finite set, called the "alphabet" ("lexicon", "vocabulary") of the
formal language L c A*.  Starting from this level of generality, some of
the first few things that I need to know about any relation are:

1.  Does it have a definite "arity"?
2.  If so, is the arity finite?
3.  If so, what is it?

Let us put to one side the need to worry about aritiless (inaritied?) relations
and focus our discussion on k-adic relations, those that have a finite arity k.

In particular, we are especially interested in sign relations.
A "sign relation" L is a subset of a cartesian product OxSxI,
where the sets O, S, I are the "object domain", "sign domain",
    and "interpretant sign domain", respectively, of L c OxSxI.

It is frequently useful to approach the concept
of a full-fledged sign relation in three phases:

1.  A "sign relation" simpliciter, L c OxSxI, could be just about any
    3-adic relation on the arbitrary domains O, S, I, so long as it
    satisfies one of the adequate definitions of a sign relation.

2.  A "sign process" is a sign relation plus a significant sense of transition.
    This means that there is a definite, non-trivial sense in which a sign
    determines its interpretant signs in time with respect to its objects.
    We often find ourselves writing "<o, s, i>" as "<o, s, s'>" in such cases,
    where the semiotic transition s ~> s' takes place in respect of the object o.
    If this sounds a tad like an "automaton with input" (AWI), it's not accidental.

3.  An "inquiry process" is a sign process that has value-directed transitions.
    This means that there is a property, a quality, or a scalar value that can
    be associated with a sign in relation to its objects, and that the transit
    from a sign to an interpretant in regard to an object occurs in such a way
    that the value is increased in the process.  For example, semiotic actions
    like inquiry and computation are directed in such a way as to increase the
    qualities of alacrity, brevity, or clarity of the signs on which they work.

All in all, sign relations are not limited to purely linguistic types of systems.
They encompass the data of the senses, natural signs, and plastic representation,
just to name some randomly-chosen species of this very widely disseminated genus.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Projective Reduction of Triadic Relations:  The Catch

This scene was initially interjected as comic relief,
and so I will spare you the drollery of its repetition.
The entire pith and substance of it boils down to this
rather trivial observation, but one whose significance,
such as it may be, gets overlooked with serious impunity,
and that is the fact that, even though we have now
found in the sign relations L(A) and L(B) two examples
of 3-adic relations which are reconstructible from and
in this sense reducible to the 2-dimensional data sets
of their 2-adic projections, we entertain an insidious
self-deception if we think that we have given the slip
to 3-adic relations altogether.  I feel sometimes like
I am straining to call the attentions of fish to water
by blubbering and glubbering "HOH, HOH, HOH" until the
sea shall free me, I guess, whenever I try to say this,
but I guess that I might as well just go ahead and try,
one more time, while still I have the O^2, and so here
I go, again, flailing at the logical immersion of this
entire process of "analysis" (= Greek for washing back)
in hope of rendering its nigh unto invisible influence
slightly more opaque to the light of a higher analysis.
All of which is just my attempt to remind you in a way
that you will not so soon forget that the very conduct
of the whole analytic reduction step is an association
among at least three relations, the analysand plus the
two or more analytic components, and there it goes yet
again, this ineluctable triadicity.

To bring the chase up short, the very idea of reduction,
when it means reducing one thing to more than one other
thing, is itself a relation of post-dyadic valence, and
thus parthenogenetically reproducing itself from itself,
evokes the polycloven hydra's hoof of Persean 3-adicity.

Okay, I just thought that you might welcome an intermezzo,
however scherzo-frenetic it might be, but now it's back
to the opera that the phantom of the operators wrote.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Classification Of Signs

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

ML = Martin Lefebvre

ML: I have an exegetical question regarding the "New List" and
    in particular M.G. Murphey's reading of it.  In chapter 15 of
    his book on Peirce ('The Development of Peirce's Philosophy'),
    Murphey makes a big deal out of Peirce moving from predicate
    logic to the logic of relatives to discuss the categories.
    He goes so far as to claim that from 1885 Peirce operated
    "substantial changes in the definitions of [the] categories",
    and that "these changes are sufficiently great so that Peirce
    ought to have adopted new names for them to prevent confusion
    with his earlier papers".  Murphey's analysis of the situation
    rests in good measure on the index.  His claim is that in the
    New List, the index does not refer directly to an individual,
    but rather to a concept.  "The use of the term 'index' to mean a
    sign which refers not to a concept but to an individual directly
    does not appear until 1885 [...]".  Now it is possible that I
    have been misreading the New List for some time (projecting on
    it, as it were, later notions), yet I find Murphey's take hard
    to reconciliate with the idea, which we find in the New List,
    according to which indices are not general signs.  Peirce
    defines indices as representations "whose relation to their
    object consists in a correspondence in fact".  Moreover,
    the absence of generality in likenesses and indices is why
    the rules of logic "have no immediate application" to them
    (according to the 1867 view).  Now isn't it the case that the
    exclusion of indices from the rules of logic stems directly
    from the fact that they do not refer to concepts (unlike what
    Murphey is saying)?  Doesn't correspondence in fact already
    imply haecceity?  Am I missing something here?  What the final
    section of the New List makes clear, I believe, is that Peirce
    is unwilling in 1867 to fully consider the icon and the index
    semeiotically, since he is confining his view to propositions
    and arguments.  And in that sense the logic of relatives,
    by offering a view of the categories not subordinated to
    propositional logic, may be what makes possible the
    famous 1903 classification.

ML: I'll appreciate any help with this.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

ML = Martin Lefebvre

ML: Murphey's analysis of the situation rests in good measure on the index.
    His claim is that in the New List, the index does not refer directly to
    an individual, but rather to a concept.  "The use of the term 'index' to
    mean a sign which refers not to a concept but to an individual directly
    does not appear until 1885 [...]".

This is very off the cuff, but my sense is that a concept for Peirce,
early and late, is always a sign, in particular a symbol, so an index,
though it might have a concept as a connotation or interpretant, only
refers to it in ways that are irrelevant to its function in denoting
its object.  I have been pursuing a similar train of thought for a
while, but I find the critical years to be 1865-1870, and so the
Harvard and Lowell lectures are the most helpful here.  I will
look up the links that I posted last year.

Here are some that I can find right away:

Extension x Comprehension = Information, links 01-78, notes 02-04.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Point masses don't really exist,
| but we are going to have to spend
| a long time talking as if they did.

As a statement about the relationship between physical realities
and mathematical models, it leaves a lot to be desired, but it's
probably about the best that could be expected in 25 words or less
from a college physics professor standing before an auditorium of
first year students.  Looking back, a good deal of my subsequent
intellectual development could be interpreted as interpretants
of those two little words "as if".

Peirce, of course, as I came to discover after
a long time had passed, said it slightly better:

| There are cases where we are quite in the dark, alike concerning the creating
| purpose and concerning the genesis of things[,] but where we find a system of
| classes connected with a system of abstract ideas -- most frequently numbers --
| and that in such a manner as to give us reason to guess that those ideas in
| some way, usually obscure, determine the possibilities of things.  For example,
| chemical compounds, generally -- or at least the more decidedly characterized
| of them, including, it would seem, the so-called elements -- seem to belong
| to types, so that, to take a single example, chlorates KClO3, manganates
| KMnO3, bromates KBrO3, rutheniates KRuO3, iodates KIO3, behave chemically
| in strikingly analogous ways.  That this sort of argument for the existence
| of natural classes -- I mean the argument drawn from types, that is, from
| a connection between the things and a system of formal ideas -- may be much
| stronger and more direct than one might expect to find it, is shown by the
| circumstance that ideas themselves -- and are they not the easiest of all
| things to classify naturally, with assured truth? -- can be classified
| on no other grounds than this, except in a few exceptional cases.  Even
| in these few cases, this method would seem to be the safest.  For example,
| in pure mathematics, almost all the classification reposes on the relations
| of the forms classified to numbers or other multitudes.  Thus, in topical
| geometry, figures are classified according to the whole numbers attached
| to their 'choresis', 'cyclosis', 'periphraxis', 'apeiresis', etc.  As for
| the exceptions, such as the classes of hessians, jacobians, invariants,
| vectors, etc., they all depend upon types, too, although upon types of
| a different kind.  It is plain that it must be so;  and all the natural
| classes of logic will be found to have the same character.
|
| Charles Sanders Peirce, 'Collected Papers', CP 1.223.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

It seems like Tom Gollier and I discussed some of these issues
earlier this year, in connection with the relationship between
indices, induction, inference, and inquiry.  For those of you
who are not all that hell-bent for surfing, I will cull out
some of the more relevant excerpts from Peirce from the
more ephemeral bits of commentary and discussion.

Extension x Comprehension = Information, notes 08-10.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Here are some passages that I think are especially critical with respect
to understanding Peirce's use of words like "index" and "individual", along
with the relationship between these concepts.

Notice in particular the following passage, and its similarity to the
treatment of indices, individuality, proper names, types, tokens, and
so on in the "George Washington" example from another thread.

| But such terms though conceivable in one sense --
| that is intelligible in their conditions --
| are yet impossible.
| You never can narrow down to an individual.
|
| ...
|
| Thus, even the proper name
| of a man is a general term or
| the name of a class, for it names
| a class of sensations and thoughts.
| The true individual term the absolutely
| singular 'this' & 'that' cannot be reached.
| Whatever has comprehension must be general.

In short, how individual instances and ideal instants exist, if they do,
and how we come to know and represent them, are two different questions.
But it is necessary to find ways of "justifying" or "rationalizing" our
talk about these ideal limits, ideal types, and other idealizations.

In mathematics, this rationalization process took a bit of work just to get
as far as we have, and the current set of working solutions are by no means
all that stable.  It is the history that Peirce refers to when he speaks of
"Weierstrassian severity".

Extension x Comprehension = Information, notes 18, 19, 21-25.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

ML = Martin Lefebvre

Martin, I will return to these questions now.
It has been 25 years since I last looked into
Murphey's book, so I will make one scan here
and then return to refresh my memory of that.

ML: I have an exegetical question regarding the New List and in
    particular M.G. Murphey's reading of it. In chapter 15 of his
    book on Peirce (The Development of Peirce's Philosophy), Murphey
    makes a big deal out of Peirce moving from predicate logic to the
    logic of relatives to discuss the categories.

Maybe just a typo, but it is probably better to say "apophantic",
"subject-predicate form", or "syllogistic" than "predicate logic".

ML: He goes so far as to claim that from 1885 Peirce operated
    "substantial changes in the definitions of [the] categories",
    and that "these changes are sufficiently great so that Peirce
    ought to have adopted new names for them to prevent confusion
    with his earlier papers".  Murphey's analysis of the situation
    rests in good measure on the index.  His claim is that in the
    New List, the index does not refer directly to an individual,
    but rather to a concept.

I do not think that this is the way that Peirce generally uses the
word "concept".  At any rate, it is very important to grasp what
Peirce was saying about the "doctrine of individuals", as this
calls into question the status of any ostensible distinction
between individual terms and general terms.

ML: "The use of the term 'index' to mean a sign which refers
    not to a concept but to an individual directly does not
    appear until 1885 [...]".  Now it is possible that I have
    been misreading the New List for some time (projecting on
    it, as it were, later notions), yet I find Murphey's take
    hard to reconcile with the idea, which we find in the
    New List, according to which indices are not general signs.

This is true if you take the word "general" to mean "genuine", that is,
the non-degenerate species of signs that is constituted by symbols, but
not if you take it to mean that indices cannot have plural denotation.

ML: Peirce defines indices as representations "whose relation
    to their object consists in a correspondence in fact".
    Moreover, the absence of generality in likenesses and
    indices is why the rules of logic "have no immediate
    application" to them (according to the 1867 view).

Again, the word "generality" is very tricky here.
I am not saying that Peirce never used it this way,
but it has to be taken as "generic" in the sense of
"non-degenerate".  The criterion of "ruliness" or of
being "law-abiding" is the thing that distinguishes
symbols, types, natural kinds, etc. from indices and
icons, which are "where you find them", as it were.

By the way, this is one of the bad effects of trying to project the
3-dimensional subject of tone/token/type on a 2-dimensional plane of
description.  It leads people to confound the dimension of intensions,
qualities, universals, the mere possession of properties or "tones",
with the extra measure of natural kindness or naturalized lawfulness
that is involved in "representations acquiring a nature, that is to
say an immediate representative power".

| Now, I ask, how is it that anything can be done with a symbol,
| without reflecting upon the conception, much less imagining the
| object that belongs to it?  It is simply because the symbol has
| acquired a nature, which may be described thus, that when it is
| brought before the mind certain principles of its use -- whether
| reflected on or not -- by association immediately regulate the
| action of the mind;  and these may be regarded as laws of the
| symbol itself which it cannot 'as a symbol' transgress.
| (Peirce, CE 1, 173).

| Now we have already analyzed the notion of a 'symbol', and we have found
| that it depends upon the possibility of representations acquiring a nature,
| that is to say an immediate representative power.  (Peirce, CE 1, 280).

ML: Now isn't it the case that the exclusion of indices from the
    rules of logic stems directly from the fact that they do not
    refer to concepts (unlike what Murphey is saying)?  Doesn't
    correspondence in fact already imply haecceity?

A concept is a symbol.  To say that an index refers to a concept is jarring
if you mean "refers" in the sense of "denotes", as if to hoist the index
to a higher-order sign, so I would have to try and interpret "refers" in
the sense of "connotes".  But I suspect that is not what is really going
on here, more likely the use of "concept" to mean a class or intension?

The indicial aspect of a sign is like a blank check,
it's open to whatever its object chooses to fill in.
The law that governs the symbolic function of signs,
how they get duly cashed out for interpretant signs,
only has jurisdiction over the process that follows.

ML: Am I missing something here?  What the final section of the New List
    makes clear, I believe, is that Peirce is unwilling in 1867 to fully 
    consider the icon and the index semeiotically, since he is confining 
    his view to propositions and arguments.  And in that sense the logic 
    of relatives, by offering a view of the categories not subordinated 
    to propositional logic, may be what makes possible the famous 1903 
    classification.

I will look into it further.  But I sense that what is going on here
is a certain ambivalence about the relation between logic and semiotic.
If we identify the two studies, then semiotic like logic is all normative,
and thus considers icons and indices as "degenerates", outside its purview.
But if we make logic the formal or normative species of a more general study
of semiotic, this leaves room for a descriptive semiotic that would not feel
so guilty about its interest in icons and indices.

If icons and indices are outlawed ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Just the beginning of some working notes from Murphey --

| [Chapter 15].  The Revision of the Categories
|
| In Part Two, I have argued that Peirce's work of the 1870's constitutes a
| revision of his earlier system in the light of the new logic of relations.
| But there are several serious objections to this interpretation which
| require consideration.  ...  (p. 296).

| [In 1879, at Johns Hopkins] Peirce returned at once to the
| logic of relations and began the long delayed exploration
| of the territory that he had discovered ten years before.  ...
| The collection of papers written by Peirce and his students,
| 'Studies in Logic', which Johns Hopkins published in 1883
| was certainly the most important single volume on logic
| written in America in the nineteenth century.  (p. 297).
|
| Once the new logic was in order Peirce would probably have turned to
| a revision of the categories in any case.  But the immediate cause of
| his doing so in 1885 was the discovery by O.H. Mitchell and himself of
| the theory of quantification.  [n4.  O.H. Mitchell, "On a New Algebra
| of Logic", 'Studies in Logic', pp. 72-106;  CP 3.351-354;  CP 3.393 ff].
| In its implications for the categories, this discovery is second only
| to that of the logic of relations.  For just as the logic of relations
| suggested to Peirce the correlation between continuity and Thirdness,
| so it was the discovery of quantification which led to the final
| revision of Secondness.  (pp. 297-298).
|
| Murray G. Murphey,
|'The Development of Peirce's Philosophy',
| first published, Harvard University Press, Cambridge, MA, 1961.
| reprinted, Hackett Publishing Company, Indianapolis, IN, 1993.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 8

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 15.  Revision of the Categories (cont.)
|
| It will be recalled that in the "New List", Peirce, like Kant, had based the
| possibility of a list of categories upon the fact that the manifold of sense
| must be brought to the unity of a proposition.  But for Peirce, unlike Kant,
| the significance of the proposition lies simply in the fact that it illustrates
| the sign relation.  It is this tripartite sign relation which is the fundamental
| synthetic relation, and it is only because all synthesis involves this relation
| that any synthesis may be used to derive the categories -- that the propositional
| synthesis may stand for all others.  The categories of the "New List" are the
| concepts of connection required to make possible this sign relation and may be
| viewed as an explication of its nature.  But at the same time, Peirce's concept
| of the sign relation is heavily indebted to the traditional subject-predicate
| theory of the proposition.  Thus, the subject of the proposition denotes
| the object of the sign, the copula expresses being or the possibility of
| determination of the subject, and the predicate makes this determination.
| In 1867 Peirce regarded this interpretation as true for all complete
| sign relations:  so in an argument the premisses designate a state
| of fact which, taken as subject, is determined by the conclusion
| as predicate.  (CP 1.545-559).  (MGM, p. 298).
|
| Murray G. Murphey,
|'The Development of Peirce's Philosophy',
| first published, Harvard University Press, Cambridge, MA, 1961.
| reprinted, Hackett Publishing Company, Indianapolis, IN, 1993.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 9

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 15.  Revision of the Categories (cont.)
|
| The theory of the "New List" had to be abandoned when Peirce's
| logic required him to abandon the subject-predicate theory of the
| proposition, and this occurred in 1870 when he discovered the logic
| of relations.  It is essential to the argument of the "New List" that
| the three kinds of relations should be intermediaries at different
| levels of abstraction between Being and Substance, for otherwise
| neither the necessity nor the completeness of the categories can
| be proven.  But the theory of relations shows that the three kinds
| of relations are equally abstract and that each may serve as a proper
| predicate.  To say that a monadic predicate is joined to its object by
| a triadic relation solves nothing, for we must then ask, what joins the
| triadic relation to its subjects, and so on.  Thus the logic of relations
| undercuts the whole concept of the "New List".
|
| The theory of quantification further undermined the "New List" by raising
| fresh problems concerning the nature of the object.  In a paper of 1885
| (CP 3.359-403), Peirce makes this connection between quantification and
| Secondness quite explicit.  He first analyzes propositional constituents
| into two classes of signs which he calls "tokens" and "indices".  By a
| "token" Peirce means in this paper (but not in some others) a symbol,
| or a sign related to its object only in virtue of a mental association
| or habit (CP 3.360).  An "index" is then described as follows:  "The
| index asserts nothing;  it only says 'There!'  ...  Demonstrative and
| relative pronouns are nearly pure indices, because they denote things
| without describing them ..." (CP 3.361).  Peirce then shows why indices
| are necessary.
|
| | But tokens alone do not state what is the subject of discourse;
| | and this can, in fact not be described in general terms;  it can
| | only be indicated.  The actual world cannot be distinguished from
| | a world of imagination by any description.  Hence the need of pronoun
| | and indices, and the more complicated the subject the greater the need
| | of them.  The introduction of indices into the algebra of logic is the
| | greatest merit of Mr. Mitchell's system.  He writes 'F'_1 to mean that
| | the proposition 'F' is true of every object in the universe, and 'F'_u
| | to mean that the same is true of some object.  This distinction can
| | only be made in some such way as this.  (CP 3.363).
|
| To express the propositions F_1 and F_u Peirce introduced the notations
| !P!_i F_i and !S!_i F_i respectively which became the standard symbolism
| of the Boole-Schröder algebra.  These are equivalent to the modern forms
| (x)Fx and (Ex)Fx respectively.  (MGM, pp. 298-299).
|
| Murray G. Murphey,
|'The Development of Peirce's Philosophy',
| first published, Harvard University Press, Cambridge, MA, 1961.
| reprinted, Hackett Publishing Company, Indianapolis, IN, 1993.

NB.  !P! and !S! are the Greek capitals Pi and Sigma, respectively.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 10

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 15.  Revision of the Categories (cont.)
|
| The theory of the index which is here introduced
| marks a decided break with Peirce's earlier theory.
| It is true of course that the term "index" was used
| as early as 1867 to refer to signs "whose relation to
| their objects consists in a correspondence in fact ..."
| (CP 1.558).  But there is the important difference between
| the index of the "New List" and the index of quantification.
| The index "It" of the "New List" is a concept -- namely, the
| concept of "the present, in general" (CP 1.547) -- and it does not
| refer directly to an individual since Peirce then held individuality
| to be ideal.  As I have argued above, this theory, although historically
| derived from Kant's appendix to the Transcendental Dialectic, is logically
| the result of the fact that the logics which Peirce employed contained no
| concept of the individual.  The use of the term "index" to mean a sign
| which refers not to a concept but to an individual directly does not
| appear until 1885 and its introduction is due to Mitchell's theory.
| It is at this point that the notion of individuality becomes
| important for Peirce.  (MGM, pp. 299-300).
|
| Murray G. Murphey,
|'The Development of Peirce's Philosophy',
| first published, Harvard University Press, Cambridge, MA, 1961.
| reprinted, Hackett Publishing Company, Indianapolis, IN, 1993.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 11

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 15.  Revision of the Categories (cont.)
|
| Peirce attributed his new theory of individuality to new developments in
| logic, but that there is anything in the theory of quantification itself
| which requires such a theory of individuality is doubtful, the more so
| since no other modern writer has found it necessary to follow Peirce's
| lead.  In fact, there is nothing in the logic of quantification itself
| which requires Peirce to abandon his previous theory of the proposition.
| For as we have seen in discussing the "New List", Peirce's early analysis
| of propositions makes use of the Scholastic theory of supposition according
| to which a proposition such as "all S is P" is regarded as affirming "P"
| of whatever arguments x there may be of which "S" is affirmed.  Thus the
| propositional analysis of the theory of supposition is in line with that
| of quantification, although it is less explicit, and the introduction of
| quantifiers and variables could very well be viewed as simply a completion
| and extension of the earlier position.  But in the case of propositions of
| the form "this is red" the adequacy of the earlier theory is less evident.
| Such a proposition is of the form 'fa' where 'a' is an individual and would
| clearly seem to be incompatible with the analysis of the "New List".  So
| far as I can see, there are only two alternatives open to Peirce in this
| case.  He must either adopt the view that "this is red" does require a
| pure individual, or he must interpret "this" as a bound variable which
| serves to link the extension of "red" to the extension of some preceding
| term.  That is, he must regard "this is red" as an incomplete sign which is
| only interpretable when joined to a prior sign of the same sort, and which
| asserts redness of the objects in the extensional domain of that prior sign.
| It is obvious that this latter approach leads to an infinite regress, for
| every atomic sentence would have to be so interpreted and we should never
| reach any first sentence which would tell us what we were talking about.
| Yet this is precisely the alternative taken by Peirce in the "New List"
| and the 1868 papers on cognition, and he had lived happily with this
| 'regressus ad infinitum' for eighteen years.  Why then did he suddenly
| abandon it in 1885?  (MGM, pp. 300-301).
|
| Murray G. Murphey,
|'The Development of Peirce's Philosophy',
| first published, Harvard University Press, Cambridge, MA, 1961.
| reprinted, Hackett Publishing Company, Indianapolis, IN, 1993.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 12

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

There are so many fascinating facets to this subject
that I hardly know where to begin, so I think that I
will just collect a few more bits of source material
before I try to think about it in any more detail.
However, what I found myself thinking about this
morning represented a slightly different way of
looking at the problem than the sorts of things
I would have been worrying over when I first
encountered it, so I will report on that.

Let us consider the attention-grabbing-&-redirecting signs "There!" or "Voila!".
When I look at these signs from my current point of view, I see nothing at all
like pure indices -- they invoke, after all, the lexicons of natural languages,
with all of their common and all of their distinct connotations and etymologies
intact, and there is, in the end, that iconic panache of the exclamation point,
so suggestive of a writer dashing the-pen-&-the-ultimate touches on a work with
that antic quill-pen in hand.

So the most that I could say about these "demonstrations" is that
their indicial aspect, in their ordinary use, is the most salient.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 13

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I will go back and pick up a bit more of what Peirce
wrote "On the Algebra of Logic" (1885, CP 3.359-403).

| I have taken pains to make my distinction of icons, indices,
| and tokens [more frequently called "symbols"] clear, in order to
| enunciate this proposition:  in a perfect system of logical notation
| signs of these several kinds must all be employed.  Without tokens there
| would be no generality in the statements, for they are the only general
| signs;  and generality is essential to reasoning.  Take, for example, the
| circles by which Euler represents the relations of terms.  They well fulfill
| the function of icons, but their want of generality and their incompetence
| to express propositions must have been felt by everybody who has used them.
| Mr. Venn has, therefore, been led to add shading to them;  and this shading
| is a conventional sign of the nature of a token.  In algebra, the letters,
| both quantitative and functional, are of this nature.  But tokens alone do
| not state what is the subject of discourse;  and this can, in fact, not be
| described in general terms;  it can only be indicated.  The actual world
| cannot be distinguished from a world of imagination by any description.
| Hence the need of pronoun and indices, and the more complicated the subject
| the greater the need of them.  The introduction of indices into the algebra
| of logic is the greatest merit of Mr. Mitchell's system.  He writes 'F'_1
| to mean that the proposition 'F' is true of every object in the universe,
| and 'F'_u to mean that the same is true of some object.  This distinction
| can only be made in some such way as this.  Indices are also required to
| show in what manner other signs are connected together.  With these two
| kinds of signs alone any proposition can be expressed;  but it cannot be
| reasoned upon, for reasoning consists in the observation that where certain
| relations subsist certain others are found, and it accordingly requires the
| exhibition of the relations reasoned with in an icon.  It has long been a puzzle
| how it could be that, on the one hand, mathematics is purely deductive in its
| nature, and draws its conclusions apodictically, while on the other hand, it
| presents as rich and apparently unending a series of surprising discoveries
| as any observational science.  Various have been the attempts to solve the
| paradox by breaking down one or the other of these assertions, but without
| success.  The truth, however, appears to be that all deductive reasoning,
| even simple syllogism, involves an element of observation;  namely, deduction
| consists in constructing an icon or diagram the relations of whose parts shall
| present a complete analogy with those of the parts of the object of reasoning,
| of experimenting upon this image in the imagination, and of observing the result
| so as to discover unnoticed and hidden relations among the parts.  (CP 3.363).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 14

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 15.  Revision of the Categories (cont.)
|
| In keeping with his architectonic hypothesis, Peirce attributed his revision
| of the categories to his logic.  But I think there are other and more important
| reasons for this revision.  As we have seen in some detail in Chapter 7, Peirce's
| theory of reality had run into serious difficulties in the late 1870's.  This
| theory of reality is a direct result of the categorical theory of the "New List"
| and the cognitive theory of the 1868 papers.  By denying the existence of first
| impressions of sense, Peirce had completely sundered the real from perception,
| so that direct acquaintance with reality cannot be gained by going to the source
| of our cognitions.  The only alternative left therefore was to locate the real at
| the end of the series of cognitions by defining reality as that which is thought
| in the final opinion to which inquiry will lead.  But Peirce then found himself
| unable to prove that in fact inquiry will ever lead to any final result and
| accordingly unable to prove that there is any reality.  As a result, Peirce's
| position degenerates into an extreme form of subjectivism in which we are
| lost in a phantasmagoric maze of our own concepts.  For one who called
| himself a realist, such a development was intolerable.  (MGM, p. 301).
| 
| Murray G. Murphey,
|'The Development of Peirce's Philosophy',
| first published, Harvard University Press, Cambridge, MA, 1961.
| reprinted, Hackett Publishing Company, Indianapolis, IN, 1993.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 15

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BU = Ben Udell

BU: Are not the combinatory functors so many "this"s, "look here"s,
    "voila"s, "ecce"s in the statement?  The very arrangement of
    symbols in a statement/thought has combinatory significance in
    terms of combinatory rules/habits of statement/thought-formation.
    The "this!" that refers to a physical object outside the statement
    could be taken as an extrovert combinatory functor.

Your note is too long, my eyes are too blurry,
so I think I'll just ramble the evening away.

I am pretty sure that I first looked into Murphey's Peirce
during the second half of my Justin Morrill years (1972-76).
I had already spent two or three years immersed in CP 3-4.
And wouldn't it be hilarious if someone were raking over
our reminiscences the way we are post-mortifying Peirce's?
Ah, Blessed Nepenthe!  Anyway, I could not have seen the
'Chronological Edition' yet, so the revelations of Peirce's
Harvard and Lowell lectures, in their inelastically intact
impact on my brain, were still a decade in the future, and
I can remember that I mostly accepted the story of Peirce's
Protean conversions pretty much as it was told here and there.
Except that I had spent quite a bit of time worrying over the
whole tangled tale of "existential import", or the lack thereof,
in the tortoise's transit from classical to modern logic, which
caused me to work through the "Description of a Notation" (1870)
and its sequels with particular care.  The upshot of the story
is that I can distinctly remember how I first interpreted these
problematic passages.  It did not seem that Peirce was talking
about anything so hot off the presses, since what he mentioned
was only "nominally" novel in comparison to what I had read in
his 1870's papers, but mostly that he was paying a compliment
to his own long-held and previously published ideas in the guise
of his protege Mitchell.

That is about what I remember, and I can see nothing yet that
contradicts that impression, indeed, in retrospect, very much
more that confirms it, from the incidental to the substantial.

Are you referring to Quine's combinatory functors?
If so, it would help to let me know which version
you have in mind.  I am a little more conversant
with the standard Schoenfinkel brands:  S, K, I.
The whole intent of combinators was to eliminate
the variable (the formal index or pronoun) as a
primitive theoretical construct.  I have always
wondered what this would mean philosophically.
It might just mean that the entire indicative
aspect has been relegated to the tabula rasa,
the uncarved block, the unmarked medium that
bears the inscription, or the lack thereof.
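
Just to have something concrete to poke at, here is a minimal sketch of
the standard Schoenfinkel combinators in Python.  The function names and
the toy assertion are mine, purely to illustrate how S and K together do
the work of the variable-binding identity.

    # A minimal sketch of the Schoenfinkel combinators S, K, I in Python.
    # The point at issue:  variable binding can be traded away entirely,
    # since S and K by themselves suffice -- S K K behaves like I.

    I = lambda x: x                               # identity
    K = lambda x: lambda y: x                     # constant-function former
    S = lambda f: lambda g: lambda x: f(x)(g(x))  # substitution-and-application

    # S K K x = K(x)(K(x)) = x, so S K K acts as the identity,
    # yet no variable is mentioned in building it up.
    skk = S(K)(K)

    assert skk(42) == I(42) == 42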

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 16

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Just to mix it up real good, let's develop another
layer of contrast with a theme from Wittgenstein.
Here is a discussion of W's "Picture Theory" from
Jaakko Hintikka's 'On Wittgenstein'.

| 4.  Wittgenstein's Picture Theory
|
| But there is also a major difference between Russell
| and Wittgenstein.  Russell's theory of acquaintance was
| supposed to explain, not only the structure of our knowledge,
| but also the structure of propositions, including the conditions
| on their meaningfulness.  What is for instance required if I am to
| understand a simple (elementary) relational proposition of the form
| R(a, b), for instance, "Ludwig hits Karl", assuming that it is fully
| analyzed?  In order to understand this proposition, I have to be
| acquainted with its simplest ingredients, in this case Ludwig, Karl,
| and the relation of hitting.  But this is not enough.  In order to
| understand R(a, b), I must for instance distinguish it from the
| converse proposition R(b, a), in the example "Karl hits Ludwig".
| Since the two involve the same three objects of acquaintance,
| something else is needed.  This something else is according to
| Russell the logical form of the proposition in question.  [As?]
| All basic ingredients of a proposition, it must also be an object
| of acquaintance.  Likewise, there must be logical forms characteristic
| of various structures of propositions, such as disjunction and conjunction.
| These objects are called "logical objects" and their representation in language
| "logical constants".  Thus, Russell postulated a separate class of objects of
| acquaintance called "logical forms".
|
| Russell explained this culmination of his theory of acquaintance in the book
| 'Theory of Knowledge' which he wrote in 1913.  He made the mistake of showing
| it to Wittgenstein who was highly critical, not to say contemptuous, telling
| Russell that he had himself entertained similar ideas but found them entirely
| wrong.  Russell was so discouraged that he never published the entire 'Theory
| of Knowledge'.  It appeared posthumously in 1984 as volume 7 of Russell's
| 'Collected Papers'.
|
| This pinpoints Wittgenstein's 'Tractatus' precisely in relation to Russell.
| The 'Tractatus' is nothing more and nothing less than Russell's 1913 theory
| 'sans' logical forms as objects of acquaintance.  This relationship was for
| a long time obscured by philosophers' unawareness of Russell's for a long
| time unpublished book.  Wittgenstein himself was nevertheless fully aware
| of the relation:
|
| | My fundamental thought is that "logical constants"
| | do not represent [anything].  ('Tractatus' 4.0312).
|
| | Here it becomes clear that there are no such things
| | as "logical objects" or "logical constants" (in the
| | sense of Frege and Russell).  ('Tractatus', 5.4).
|
| Jaakko Hintikka, 'On Wittgenstein',
| Wadsworth, Belmont, CA, 2000.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 17

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 4.  Wittgenstein's Picture Theory (cont.)
|
| But the rejection of logical objects thrust on Wittgenstein the task
| of explaining structural meaning without them.  If a proposition is
| not held together by the "logical glue" provided by logical forms as
| actually existing objects of acquaintance, what does hold it together?
| Wittgenstein's answer was:  A proposition is held together, not by
| any additional "tie" or "glue" but by the forms of its constituents.
| Objects are connected with each other in a proposition, not like tiles
| cemented together, but like jigsaw-puzzle pieces held in place by their
| form.
|
| | In a proposition objects hang together like links in a chain.
|
| More literally speaking,
|
| | The logical form of the proposition must already be given by
| | the forms of the component parts.  ('Notebooks 1914-16', p. 23).
|
| This is Wittgenstein's famous "picture theory of language".  Its gist is
| readily explained, not by trying to spell out directly how propositions can
| be interpreted as pictures, but how a picture can have a propositional content.
| Suppose I try to convey to you the message "Ludwig hits Karl" by showing you a
| picture of the two 'in flagrante delicto'.  What is it you have to grasp in order
| to get the point?  Many details are undoubtedly unnecessary.  What is needed is,
| first, enough detail to enable you to recognize the antagonists as Ludwig and
| Karl, respectively.  But you do not have to convey this information by a mass
| of detail.  It can be done by attaching the name labels "Ludwig" and "Karl"
| to the two figures in the picture and perhaps eventually replacing these
| figures by their names.  Likewise the only additional thing you need to
| do is to recognize the goings-on in the picture as hitting.  This can
| also be done by placing the word label "hits" between the two figures
| and omitting everything else as irrelevant.  In this way, the telltale
| picture can be transformed into the sentence "Ludwig hits Karl" without
| losing any features of the picture that are needed to convey the intended
| message.  In this sense, the picture and the sentence are completely on a
| par.  You can view the proposition (sentence) as a picture, or the picture
| as a proposition.  (Hintikka, OW, pp. 19-20).
|
| Jaakko Hintikka, 'On Wittgenstein',
| Wadsworth, Belmont, CA, 2000.

Incidental Musement:

http://www.madaboutbeethoven.com/pages/people_and_places/people_family/biog_karl_nephew.htm

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 18

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The Mathematical Equivalent of a Knock-Knock Joke:
|
| Q.  I am having trouble getting this series to converge.
| A.  You have to use Euler Summability.
| Q.  What the devil is Euler Summability?
| A.  Euler had summability to put things over on people!

Since this issue has arisen again -- about the mathematical
connotations that went hand-in-hand with Peirce's quantifiers
Pi and Sigma, the lack thereof that afflicted the scene with the
flipped "A" and "E", and the semiotic devolution that came about
as the living tree of mathematical models in qualitative shape
gradually putrefied and then petrified into syntactic fossils
thereby -- I will revive a comment that I made on this score
once before, and maybe I'll try to explain it a little more.

Adapted from "Zeroth Law Of Semantics", 10 Oct 2002 10:22:18 -0400.

The thing that happened when we passed from Peirce's Pi and Sigma
to whoever's flipped A and E is this:  At first it was a harmless
substitution, but only because the historically hard-won connotations
commonly associated with infinite products and infinite sums got
carried along with the exchange of symbols.  For one thing, there is
the fact that products and sums do not necessarily make any sense at
all unless extremely careful attention is accorded to the space ranged
over and the contingent conditions of convergence to any determinate
value -- "convergence is a privilege not a right".  For another thing,
there is the potential diversity of the results relative to the precise
method of production or summation that one happens to specify -- there
may be several arguably valid options that one can use.  And least but
not last, there is the constant connection between the assertional
reading of logical expressions and the functional interpretation of
mathematical formulas that is standard equipment in building
mathematical models of just about anything.  Over time, this one
little mutation has led many to lose the sense of pragmatic
contingency and hermeneutic relativity affecting the meaningfulness
of most if not all statements about all and some.
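
To make those mathematical connotations concrete, here is a minimal
sketch in Python that reads Peirce's Pi and Sigma over a finite universe
of discourse literally as a product and a (capped) sum of values in
B = {0, 1}.  The universe and the indicator function are toy assumptions
of mine;  over infinite domains the cautions above about convergence and
the method of summation would of course come back into force.

    # Reading !P!_x F(x) as a product and !S!_x F(x) as a boolean sum
    # over a finite universe of discourse X.  (Illustrative data only.)

    from math import prod

    X = range(1, 7)                    # a toy universe of discourse
    F = lambda x: int(x % 2 == 0)      # an indicator function F : X -> B

    universal   = prod(F(x) for x in X)           # !P!_x F(x) = 1 iff F holds of every x
    existential = min(1, sum(F(x) for x in X))    # !S!_x F(x) = 1 iff F holds of some x

    assert universal == 0 and existential == 1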

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 19

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Let me see if I can explain in another way, one that will
incidentally connect up with the Semantic Web discussion,
why I do not think that you have so much as addressed the
question that was at issue in discussing Peirce's theory
of quantification, especially as it related to the index.

I did a Google search on the lexical item "x" yesterday and
it told me that it found 175,000,000 "hits" -- YMMV depending
on web traffic, the time of day, Google's web sweep cycle, and,
of course, the number will no doubt increase in the near future.

Google is not yet old enough to be case or gender sensitive,
and it/he/she is lacking in cognizance of most diacritical
and punctuation marks, for example, collapsing the series
x, 'x', "x", "'x'", ... into one big infinite tautology.

Here is just a sample of what I got on the first page:

It seems that "X" is the New York Stock Exchange monicker for
US STEEL CORP (NYSE:X), so I got a lot of stock-related links
that were too boring and complicated and ephemeral to bother
anybody with.

I got a lot of more entertaining links like these:

http://www.x-men-the-movie.com/
http://www.thexfiles.com/main_flash.html

And then there were tons and tons of links
to the X Window System and Apple's Mac OS X:

http://www.x.org/
http://www.apple.com/macosx/

I did not check to see whether any of these "hits"
implicated Ludwig or Karl, but since I am co-posting
this note to a Google-visible list, I can be pretty
sure that there will be one, at least, pretty soon.

Now, to rephrase in this setting the question that is
really at issue when we discuss the role of the index
in Peirce's logic -- more generally, the indicial facet
and indicative function of language -- a question that
I believe I can fairly say has yet to become so much
as visible to you, to judge by your responses, let me
ask it more concretely but illustratively this way:

What would Google Corp really have to do if it really wanted
to raise Google's "semiotic intelligence quotient" (SIQ) to
the point where it could really "establish co-references"
in a half-way sensible fashion?

This is the type of question that is really at issue here.
To capture the gist of the example, it is a way of asking:
How, exactly, effectively, pragmatically, do all of these
blithe co-references really get 'established' in anything
approaching the intended way?

So far you have cast your remarks wholly interior to the species of
automatic/inveterate conventions-of-context or rules-of-the-game --
hiding by the way a large number of unreflected and uncritiqued
assumptions -- that human beings play along with when they are
"gaming the play" or engaging in the piece of language theatre
that all the world calls the "predicate calculus", or one
rendition of it.

In continuing to hide away in this staged arena and its darkened theatre,
you are ignoring the task of examining its presuppostions, of making the
critical comaprisons between one play or one performance and another.
By insisting on translating everything into the one-eyed syntax and
the one-dimensional vision of first order logic, as you know it,
you are simply failing to see Peirce's logic at all, much less
examine it in anything like its due light.

['presuppostions' (sic) & 'comaprisons' (sic), I think I'll keep 'em]

If the readings from Murphey have shown us nothing else,
they have amply illustrated the kinds of distortions and
downright absurd spectacles that arise from trying to read
Peirce's logic through the goggles and boggles of Frege and
Russell.  As a person who read Dodgson, Frege, Peirce, among
many others, in stereoptic parallel when I was first studying
logic in earnest, I can tell you that it is very good exercise
for the mind's eye to try and read each one's logic through the
goggles of the others, but it is totally short-sighted to throw
away all but one set of logical scopes.

A person who does read Peirce's logic with fresh eyes or
compensated vision will quickly learn the following sorts
of things:  That three-quarters of a century before Quine
got self-conscious enough about logic to start appending
quotations to himself, Peirce had already developed the
theory of sign relations that had obsoleted in advance
most of the things that that tradition was putting out.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 20

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Martin's question, the re-reading of Murphey, and the subsequent
discussions have reminded me of a whole passel of problems that
were among the first that I ever lost a lot of sleep over in logic,
so I will try to stick with this topic so long as the nostalgia
does not become too painful, in a bad sort of way.  Just by way
of trying to keep one foot in the present, I'll restart and run
a parallel thread on a related issue that I had been working on
late last year, newly titled "Critique Of Functional Reason".

As I presently see the Big Picture from here, it all has to do with
the "objective reference of logic as semiosis", and how the iconic,
the indicial, and the symbolic features of logical languages figure
into the functions of language and discourse in describing reality.

What I would like to do first is to explain the "functional interpretation"
of logical languages.  This involves interpreting logical expressions of
various types and orders as so many types and orders of mathematical
functions, typically involving an object domain X and the boolean
domain B = {0, 1} = {false, true}.  The functional interpretation
of syntax involves two complementary "directions":  There is
the "evaluative" direction and the "indicative" direction,
corresponding to the two operations of (1) asking whether
a given description is true or false of a given object,
and (2) asking what objects a given description is
true of, or not.
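
Here is a minimal sketch of the two directions in Python, using an
invented object domain;  the names "evaluate" and "indicate" are my own
labels, chosen only to echo the terms above.

    # Two directions of the functional interpretation of a description F : X -> B.
    # Evaluative:  given an object, is the description true or false of it?
    # Indicative:  given the description, which objects is it true of?

    X = {"Karl", "Ludwig", "Omaha", "Perth"}      # a toy object domain
    F = lambda x: x in {"Omaha", "Perth"}         # a toy description F : X -> B

    def evaluate(F, x):
        """Ask whether the description F is true or false of the object x."""
        return F(x)

    def indicate(F, X):
        """Ask which objects in X the description F is true of."""
        return {x for x in X if F(x)}

    assert evaluate(F, "Omaha")
    assert indicate(F, X) == {"Omaha", "Perth"}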

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 21

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

My immediate concern is to get to some concrete examples,
of a sort that will be simple enough for a beginning but
still embedded and filled out enough to help us keep in
mind the uses of logic in realistic applied situations.
Otherwise, we will be in danger of spinning off into
the anomie of syntactic rootlessness.

| NB.  On this ascii transcription.
|
| !P! = Pi = Product,    !S! = Sigma = Sum
|
| `A` = All = Universal, `E` = Some = Existential
|
| "c" = "contained as a subset in"
|
| I will not always quote a formula if it is clearly being mentioned, not used.

One of the differences between the way that Peirce -- like most practicing
math, stat, compsci folks still do today -- would have looked at a formula
like !S!_x F(x)G(x) and the way that most of us probably learned to regard
a formula like `E`x(Fx&Gx) is that the F and G are thought of as functions
F, G : X -> B from a suitable space X, the "universe of discourse", to the
space B of boolean values, usually written 0 and 1 or 'false' and 'true',
respectively.  The first caution is not to get too excited about the kind
of "value" that is meant by the truth value 'true'.  It is not as if this
little "gold star" were the be-all end-all of logical inquiry as coded
into a logical expression.  It is merely an "indicator" of the objects
in the space X that one is concerned to point out by means of a sign.
Indeed, functions like F, G : X -> B are commonly called by either
of the names "indicator functions" or "characteristic functions"
in math and stat today.  What they indicate or characterize is,
in the case of F, the subset of X such that F(x) = 1.  This
is spoken "F-inverse of 1" and written "(F^(-1))(1)" and
various people call it the "antecedent", the "fiber",
the "level set", or the "pre-image under F of 1
in X".  To make the notation prettier, one can
introduce the "fiber bars" "[| ... |]" and
write [|F|] = (F^(-1))(1) c X.  This is
all just 1-dim notation for what you
are doing when you shade a region
of a venn diagram, nothing more.
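
The same points in a short Python sketch:  indicator functions, their
fibers [|F|] = (F^(-1))(1), and the reading of !S!_x F(x)G(x) over a
finite universe.  The universe and the particular predicates are toy
assumptions of mine, there only to be shaded in.

    # Indicator functions F, G : X -> B, their fibers, and !S!_x F(x)G(x),
    # all over a finite universe of discourse X.  (Toy data for illustration.)

    X = range(10)                       # universe of discourse
    F = lambda x: int(x % 2 == 0)       # "x is even"
    G = lambda x: int(x > 6)            # "x exceeds 6"

    def fiber(H, X):
        """[|H|] = (H^(-1))(1), the subset of X that H indicates."""
        return {x for x in X if H(x) == 1}

    # !S!_x F(x)G(x) = 1 iff some x in X satisfies both F and G,
    # that is, iff the fibers [|F|] and [|G|] intersect.
    some_F_and_G = min(1, sum(F(x) * G(x) for x in X))

    assert fiber(F, X) == {0, 2, 4, 6, 8}
    assert fiber(G, X) == {7, 8, 9}
    assert some_F_and_G == 1            # witness:  x = 8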

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 22

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

On the parallel "Critique Of Functional Reason" thread,
we will be doing some proleguminous reading from Quine,
but just in case you were wondering what sort of axiom
I have in mind for my paralleles, if not my parabolas,
on this line let me jump right into the thick of the
chase and present the concrete example that I'll be
using to flesh out our F's and G's and P's and Q's.

| Chapter 1.  Statements
|
| Section 5.  Statements About Statements (cont.)
|
| With a few trivial exceptions such as material implication, any relation
| between statements will depend on something more than the truth values
| of the statements related.  Such is the case, e.g., with the phonetic
| relation of rhyming.  The same holds for the semantic relation of
| logical implication described above, and for any other relation
| which has (unlike material implication) a serious claim to the
| name of implication.  Such relations are quite consonant with
| a policy of shunning non-truth-functional modes of statement
| composition (cf. Section 1), since a relation of statements
| is not a mode of statement composition.  On this account,
| the policy of admitting none but truth-functional modes of
| statement composition is not so restrictive as might have at
| first appeared;  what could be accomplished by a subjunctive
| conditional or other non-truth-functional mode of statement
| composition can commonly be accomplished just as well by
| talking 'about' the statements in question, thus using
| an implication relation or some other strong relation
| of statements instead of the strong mode of statement
| composition.  Instead of saying:
|
| If Perth were 400 miles from Omaha then Perth would be in America,
|
| one might say:
|
| "Perth is 400 miles from Omaha" implies "Perth is in America",
|
| in some appropriate sense of implication.
|
| Quine, 'Mathematical Logic', page 29.
|
| Willard Van Orman Quine,
|'Mathematical Logic', Revised Edition,
| Harvard University Press, Cambridge, MA,
| 1940, 1951, 1981.

o-----------------------------------------------------------o
| Table 1.  From Omaha To Perth:  Your Mileage May Vary     |
o-------------------o-------------------o-------------------o
| Omaha             | Perth             | Distance In Miles |
o-------------------o-------------------o-------------------o
| Omaha, AL         | Perth, DE         |             839.3 |
| Omaha, AL         | Perth, IN         |             563.2 |
| Omaha, AL         | Perth, KS         |             892.9 |
| Omaha, AL         | Perth, MN         |            1203.8 |
| Omaha, AL         | Perth, MS         |             411.0 |
| Omaha, AL         | Perth, ND         |            1619.6 |
| Omaha, AL         | Perth, NV         |            2356.6 |
| Omaha, AL         | Perth, NY         |            1112.3 |
| Omaha, AL         | Perth, VA         |             515.7 |
| Omaha, AL         | Perth, NB         |            1577.3 |
| Omaha, AL         | Perth, ON         |            1202.4 |
o-------------------o-------------------o-------------------o
| Omaha, AR         | Perth, DE         |            1143.4 |
| Omaha, AR         | Perth, IN         |             457.5 |
| Omaha, AR         | Perth, KS         |             337.5 |
| Omaha, AR         | Perth, MN         |             654.3 |
| Omaha, AR         | Perth, MS         |             439.7 |
| Omaha, AR         | Perth, ND         |            1052.5 |
| Omaha, AR         | Perth, NV         |            1747.6 |
| Omaha, AR         | Perth, NY         |            1266.4 |
| Omaha, AR         | Perth, VA         |            1029.7 |
| Omaha, AR         | Perth, NB         |            1757.8 |
| Omaha, AR         | Perth, ON         |            1250.8 |
o-------------------o-------------------o-------------------o
| Omaha, GA         | Perth, DE         |             892.6 |
| Omaha, GA         | Perth, IN         |             670.1 |
| Omaha, GA         | Perth, KS         |             976.9 |
| Omaha, GA         | Perth, MN         |            1310.7 |
| Omaha, GA         | Perth, MS         |             415.0 |
| Omaha, GA         | Perth, ND         |            1726.5 |
| Omaha, GA         | Perth, NV         |            2463.5 |
| Omaha, GA         | Perth, NY         |            1165.7 |
| Omaha, GA         | Perth, VA         |             569.0 |
| Omaha, GA         | Perth, NB         |            1635.3 |
| Omaha, GA         | Perth, ON         |            1260.4 |
o-------------------o-------------------o-------------------o
| Omaha, IL         | Perth, DE         |             823.6 |
| Omaha, IL         | Perth, IN         |             176.9 |
| Omaha, IL         | Perth, KS         |             622.5 |
| Omaha, IL         | Perth, MN         |             758.4 |
| Omaha, IL         | Perth, MS         |             540.7 |
| Omaha, IL         | Perth, ND         |            1174.2 |
| Omaha, IL         | Perth, NV         |            1904.8 |
| Omaha, IL         | Perth, NY         |             950.2 |
| Omaha, IL         | Perth, VA         |             665.0 |
| Omaha, IL         | Perth, NB         |            1452.6 |
| Omaha, IL         | Perth, ON         |             945.7 |
o-------------------o-------------------o-------------------o
| Omaha, KY         | Perth, DE         |             577.8 |
| Omaha, KY         | Perth, IN         |             389.4 |
| Omaha, KY         | Perth, KS         |             958.1 |
| Omaha, KY         | Perth, MN         |             960.2 |
| Omaha, KY         | Perth, MS         |             744.6 |
| Omaha, KY         | Perth, ND         |            1376.0 |
| Omaha, KY         | Perth, NV         |            2240.5 |
| Omaha, KY         | Perth, NY         |             825.2 |
| Omaha, KY         | Perth, VA         |             315.6 |
| Omaha, KY         | Perth, NB         |            1298.0 |
| Omaha, KY         | Perth, ON         |             891.0 |
o-------------------o-------------------o-------------------o
| Omaha, MO         | Perth, DE         |            1069.2 |
| Omaha, MO         | Perth, IN         |             400.5 |
| Omaha, MO         | Perth, KS         |             403.0 |
| Omaha, MO         | Perth, MN         |             379.5 |
| Omaha, MO         | Perth, MS         |             797.8 |
| Omaha, MO         | Perth, ND         |             844.3 |
| Omaha, MO         | Perth, NV         |            1595.2 |
| Omaha, MO         | Perth, NY         |            1134.0 |
| Omaha, MO         | Perth, VA         |            1012.3 |
| Omaha, MO         | Perth, NB         |            1542.6 |
| Omaha, MO         | Perth, ON         |            1035.7 |
o-------------------o-------------------o-------------------o
| Omaha, NE         | Perth, DE         |            1201.5 |
| Omaha, NE         | Perth, IN         |             598.2 |
| Omaha, NE         | Perth, KS         |             348.9 |
| Omaha, NE         | Perth, MN         |             425.8 |
| Omaha, NE         | Perth, MS         |             893.6 |
| Omaha, NE         | Perth, ND         |             642.0 |
| Omaha, NE         | Perth, NV         |            1380.4 |
| Omaha, NE         | Perth, NY         |            1242.0 |
| Omaha, NE         | Perth, VA         |            1195.0 |
| Omaha, NE         | Perth, NB         |            1672.9 |
| Omaha, NE         | Perth, ON         |            1166.0 |
o-------------------o-------------------o-------------------o
| Omaha, TX         | Perth, DE         |            1317.4 |
| Omaha, TX         | Perth, IN         |             724.2 |
| Omaha, TX         | Perth, KS         |             393.0 |
| Omaha, TX         | Perth, MN         |             925.0 |
| Omaha, TX         | Perth, MS         |             338.9 |
| Omaha, TX         | Perth, ND         |            1323.3 |
| Omaha, TX         | Perth, NV         |            1909.4 |
| Omaha, TX         | Perth, NY         |            1499.3 |
| Omaha, TX         | Perth, VA         |            1055.2 |
| Omaha, TX         | Perth, NB         |            2083.1 |
| Omaha, TX         | Perth, ON         |            1511.6 |
o-------------------o-------------------o-------------------o
| Omaha, VA         | Perth, DE         |             526.8 |
| Omaha, VA         | Perth, IN         |             425.3 |
| Omaha, VA         | Perth, KS         |             994.0 |
| Omaha, VA         | Perth, MN         |             996.1 |
| Omaha, VA         | Perth, MS         |             735.9 |
| Omaha, VA         | Perth, ND         |            1411.8 |
| Omaha, VA         | Perth, NV         |            2276.4 |
| Omaha, VA         | Perth, NY         |             774.2 |
| Omaha, VA         | Perth, VA         |             264.6 |
| Omaha, VA         | Perth, NB         |            1264.1 |
| Omaha, VA         | Perth, ON         |             881.1 |
o-------------------o-------------------o-------------------o

And A Tip'O'Th'Hat To Our Virtually Itinerant Fellow Travelers At:

http://www.mapquest.com/

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 23

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

What are some of the things that we would
like to be able to do in the real world
with our data and our logic, with our
DB's and our KB's?

Well, for instance, one might take a notion
to wonder about the truth of the statement:

If Perth is 400 miles from Omaha then Perth is in America.

First of all, a certain amount of logical and semiotic analysis
would be necessary to decide whether and under what conditions
any such statement can even possess a determinate truth value.

We all know the familiar conversational implicatures and
the polite pragmatic presuppositions that would attend such
a statement if we met with it in the context of a logic text,
as a participant in a classroom discussion, or as a contestant
on a logic exam, but you are falling into forgetfulness -- I said
the "real world", right up front, and never yet the twain have met,
so you can start by forgetting those convenient, all too convenient
assumptions, the rules of engagement in classroom arenas and theatres.
Here's a sample, just in case you've forgotten what it's time to forget:

There's no such thing as a proper name, a sign so magically sympathetic with
its one and only intended object that it can pick it out of a crowded cosmos
and will this one thing with purity of heart as its uniquely sole denotation.
Strings of char like "Omaha" and "Perth" can only do so much, nothing at all
by themselves, and serving as signs that are capable of determining singular
existents just isn't one of them.  And if you're determined to find the rest
of that misplaced determination in the context, countryside, county, environ,
neighborhood, province, state, surround, or vicinity then you are barking up
the wrong bailiwick.  All of which the pragmatic thinker says by saying that
words from "Time" to "Timbuktu" are properly read as "symbols", and if they
mean, then they mean what they mean just because some interpretant says so.

And before we get done doing that, as the requisite analysis of the question
may itself be a matter of experimental trial and persistent error that will
have to be carried on concurrently with our tries at answering it, we will
want to check the coverage of the statement against a real world database,
or many, just to see if our mini-theory has models or counter-models there,
in a representation where models or counter-models really count for something.
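
Just to suggest what such a check might look like in practice, here is a
minimal sketch in Python run against a few rows of Table 1.  The tolerance
placed on "400 miles" and the reading of "in America" as "in the United
States" are stipulations of my own, of exactly the kind that the analysis
above says must be settled before the statement has any determinate truth
value at all.

    # Checking "If Perth is 400 miles from Omaha then Perth is in America"
    # against a few rows of Table 1, under stipulated readings of its terms.

    rows = [                                      # (Omaha, Perth, miles), from Table 1
        ("Omaha, AL", "Perth, MS",  411.0),
        ("Omaha, MO", "Perth, KS",  403.0),
        ("Omaha, TX", "Perth, KS",  393.0),
        ("Omaha, AL", "Perth, NB", 1577.3),
        ("Omaha, IL", "Perth, ON",  945.7),
    ]

    def about_400(miles, tolerance=25.0):         # one stipulation among many possible
        return abs(miles - 400.0) <= tolerance

    def in_america(place):                        # reading "America" as the United States
        return not place.endswith((", NB", ", ON"))

    # A row is a counter-model if the antecedent holds and the consequent fails.
    counter_models = [(o, p) for o, p, d in rows if about_400(d) and not in_america(p)]

    assert counter_models == []                   # no counter-model in this small sample

Change the tolerance or the reading of "America" and the verdict can
change with it, which is just the pragmatic contingency at issue.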

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 24

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

TG = Tom Gollier

JA: There's no such thing as a proper name, a sign so magically sympathetic with
    its one and only intended object that it can pick it out of a crowded cosmos
    and will this one thing with purity of heart as its uniquely sole denotation.
    Strings of char like "Omaha" and "Perth" can only do so much, nothing at all
    by themselves, and serving as signs that are capable of determining singular
    existents just isn't one of them. And if you're determined to find the rest
    of that misplaced determination in the context, countryside, county, environ,
    neighborhood, province, state, surround, or vicinity then you are barking up
    the wrong bailiwick. All of which the pragmatic thinker says by saying that
    words from "Time" to "Timbuktu" are properly read as "symbols", and if they
    mean, then they mean what they mean just because some interpretant says so.

TG(IF?): And what you say seems true enough in terms of terms.
         But suppose we conjoined two of those terms (along with
         what their interpretants say they mean), say Omaha and
         Nebraska.  Wouldn't that pinpoint a location?  And even
         if there were 2 or more Omaha's in Nebraska or several
         parallel Nebraska's, wouldn't the conjunction point to
         determinate locations nevertheless?  Kind of like using
         two or more vectors of our direction-finding equipment to
         pinpoint a location in an otherwise inaccessible terrain?

Do you mean "Turn of the Century Omaha.Ne" -- and by the way, which century? --
or do you perhaps mean "Antebellum Omaha.Ne" -- and by the way, which bellum? --
at some point one simply has to grasp the basic information-theoretic thistle
that five bytes, or a gadshillion bytes, or any finite number of bytes can
only byte off so much of the world, or all conceivable worlds, to chew on,
and it's always a bit more than any sort of ontological atom.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Note 25

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

ML = Martin Lefebvre

ML: I have an exegetical question regarding the "New List" and
    in particular M.G. Murphey's reading of it.  In chapter 15 of
    his book on Peirce ('The Development of Peirce's Philosophy'),
    Murphey makes a big deal out of Peirce moving from predicate
    logic to the logic of relatives to discuss the categories.
    He goes so far as to claim that from 1885 Peirce operated
    "substantial changes in the definitions of [the] categories",
    and that "these changes are sufficiently great so that Peirce
    ought to have adopted new names for them to prevent confusion
    with his earlier papers".  Murphey's analysis of the situation
    rests in good measure on the index.  His claim is that in the
    New List, the index does not refer directly to an individual,
    but rather to a concept.  "The use of the term 'index' to mean a
    sign which refers not to a concept but to an individual directly
    does not appear until 1885 [...]".  Now it is possible that I
    have been misreading the New List for some time (projecting on
    it, as it were, later notions), yet I find Murphey's take hard
    to reconciliate with the idea, which we find in the New List,
    according to which indices are not general signs.  Peirce
    defines indices as representations "whose relation to their
    object consists in a correspondence in fact".  Moreover,
    the absence of generality in likenesses and indices is why
    the rules of logic "have no immediate application" to them
    (according to the 1867 view).  Now isn't it the case that the
    exclusion of indices from the rules of logic stems directly
    from the fact that they do not refer to concepts (unlike what
    Murphey is saying)?  Doesn't correspondence in fact already
    imply haecceity?  Am I missing something here?  What the final
    section of the New List makes clear, I believe, is that Peirce
    is unwilling in 1867 to fully consider the icon and the index
    semeiotically, since he is confining his view to propositions
    and arguments.  And in that sense the logic of relatives,
    by offering a view of the categories not subordinated to
    propositional logic, may be what makes possible the
    famous 1903 classification.

ML: I'll appreciate any help with this.

Some questions have been asked about Peirce's conception of signs,
especially indices, in the light of Murphey's reading of the role
of indices in Peirce's quantification theory.  This brings up the
question of so-called "individual terms", whether they are purely
conventional and discourse-relative as such, or whether there may
be some sense in which they genuinely denote "individual objects",
properly speaking, in a suitable domain of quantification that is
constituted of individuals, properly speaking.  Various questions
about identity and teridentity also raised their Cerberean heads.

Having studied, developed, and applied the work of C.S. Peirce
for some 35 years now, mostly within the sorts of mathematical
contexts within which he himself first began to develop and to
publish them, although with ample respect to the traditions of
philosophy that preceded him, I think that I have some idea of
what C.S. Peirce meant by the various terms and ideas at issue,
and I think that I can speak to these issues with some hope of
clarifying them to anyone who would like to see them clarified.

The problems that certain representatives of certain schools of philosophy
have with grasping Peirce's very clear concepts, much less the basic facts
of the scientific context in which they found their origin and their first
significant bearings, the troubles that they have working up a desire just
to give a careful reading to what is evidently worth learning about within
C.S. Peirce's work -- that is where I see the red herrings in this sea.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Doctrine of Individuals

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| In reference to the doctrine of individuals, two distinctions should be
| borne in mind.  The logical atom, or term not capable of logical division,
| must be one of which every predicate may be universally affirmed or denied.
| For, let 'A' be such a term.  Then, if it is neither true that all 'A' is 'X'
| nor that no 'A' is 'X', it must be true that some 'A' is 'X' and some 'A' is
| not 'X';  and therefore 'A' may be divided into 'A' that is 'X' and 'A' that
| is not 'X', which is contrary to its nature as a logical atom.
|
| Such a term can be realized neither in thought nor in sense.
|
| Not in sense, because our organs of sense are special -- the eye,
| for example, not immediately informing us of taste, so that an image
| on the retina is indeterminate in respect to sweetness and non-sweetness.
| When I see a thing, I do not see that it is not sweet, nor do I see that it
| is sweet;  and therefore what I see is capable of logical division into the
| sweet and the not sweet.  It is customary to assume that visual images are
| absolutely determinate in respect to color, but even this may be doubted.
| I know of no facts which prove that there is never the least vagueness
| in the immediate sensation.
|
| In thought, an absolutely determinate term cannot be realized,
| because, not being given by sense, such a concept would have to
| be formed by synthesis, and there would be no end to the synthesis
| because there is no limit to the number of possible predicates.
|
| A logical atom, then, like a point in space, would involve for
| its precise determination an endless process.  We can only say,
| in a general way, that a term, however determinate, may be made
| more determinate still, but not that it can be made absolutely
| determinate.  Such a term as "the second Philip of Macedon" is
| still capable of logical division -- into Philip drunk and
| Philip sober, for example;  but we call it individual because
| that which is denoted by it is in only one place at one time.
| It is a term not 'absolutely' indivisible, but indivisible as
| long as we neglect differences of time and the differences which
| accompany them.  Such differences we habitually disregard in the
| logical division of substances.  In the division of relations,
| etc., we do not, of course, disregard these differences, but we
| disregard some others.  There is nothing to prevent almost any
| sort of difference from being conventionally neglected in some
| discourse, and if 'I' be a term which in consequence of such
| neglect becomes indivisible in that discourse, we have in
| that discourse,
|
| ['I'] = 1.
|
| This distinction between the absolutely indivisible and that which
| is one in number from a particular point of view is shadowed forth
| in the two words 'individual' ('to atomon') and 'singular' ('to kath
| ekaston');  but as those who have used the word 'individual' have not
| been aware that absolute individuality is merely ideal, it has come to
| be used in a more general sense.  (CP 3.93, CE 2, 389-390).
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

Nota Bene.  On the square bracket notation used above:
Peirce explains this notation at CP 3.65 or CE 2, 366.

| I propose to denote the number of a logical term by
| enclosing the term in square brackets, thus, ['t'].

The "number" of an absolute term, as in the case of 'I',
is defined as the number of individuals that it denotes.
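
For what it is worth, the definition admits a one-line computational
gloss.  A minimal sketch, with a made-up discourse and made-up denotations:

    # The "number" ['t'] of an absolute term t, taken as the count of the
    # individuals it denotes in a given discourse.  (Toy discourse only.)

    discourse  = {"Philip", "Alexander", "Aristotle"}
    denotation = {"I": {"Philip"}, "teacher": {"Aristotle"}, "man": discourse}

    def number_of(term):
        """['t'] = the number of individuals that the term t denotes."""
        return len(denotation[term])

    assert number_of("I") == 1          # ['I'] = 1, as in the passage above
    assert number_of("man") == 3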

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

DOI.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Any genuine appreciation of what Peirce has to say about identity,
indices, names, proper or otherwise, and the putative distinctions
between individual, particular, and general terms will have to deal
with what he wrote in 1870 about the "doctrine of individuals".

Notice that this statement, together with the maxims
that "Whatever has comprehension must be general"
and "Whatever has extension must be composite",
pull the ruga -- and all of the elephants --
out from underneath the nominal thinker's
wishful thinking to find ontological
security in individual names, which
said nominal thinker has confused
with the names of individuals,
to turn a phrase back on same.

"A Simple Desultory Philippic"

(The passage on the doctrine of individuals from the 1870 "Description
of a Notation" (CP 3.93, CE 2, 389-390), together with the note on the
square bracket notation, is quoted in full in the section just above.)

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

DOI.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The old logics distinguish between 'individuum signatum' and 'individuum vagum'.
| "Julius Caesar" is an example of the former;  "a certain man", of the latter.
| The 'individuum vagum', in the days when such conceptions were exactly
| investigated, occasioned great difficulty from its having a certain
| generality, being capable, apparently, of logical division.  If we
| include under the 'individuum vagum' such a term as "any individual
| man", these difficuluties appear in a strong light, for what is true
| of any individual man is true of all men.  Such a term is in one sense
| not an individual term;  for it represents every man.  But it represents
| each man as capable of being denoted by a term which is individual;  and
| so, though it is not itself an individual term, it stands for any one
| of a class of individual terms.  If we call a thought about a thing
| in so far as it is denoted by a term, a 'second intention', we may
| say that such a term as "any individual man" is individual by
| second intention.  The letters which the mathematician uses
| (whether in algebra or in geometry) are such individuals
| by second intention.  Such individuals are one in number,
| for any individual man is one man;  they may also be regarded
| as incapable of logical division, for any individual man, though
| he may either be a Frenchman or not, is yet altogether a Frenchman
| or altogether not, and not some one and some the other.  Thus, all
| the formal logical laws relating to individuals will hold good
| of such individuals by second intention, and at the same
| time a universal proposition may at any moment be
| substituted for a proposition about such an
| individual, for nothing can be predicated
| of such an individual which cannot be
| predicated of the whole class.
|
| C.S. Peirce, 'Collected Papers', CP 3.94
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

DOI.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Individual
|
| (As a technical term of logic, 'individuum' first appears
| in Boethius, in a translation from Victorinus, no doubt
| of 'atomon', a word used by Plato ('Sophistes', 229 D)
| for an indivisible species, and by Aristotle, often in
| the same sense, but occasionally for an individual.
| Of course the physical and mathematical senses of
| the word were earlier.  Aristotle's usual term for
| individuals is 'ta kath ekasta', Latin 'singularia',
| English 'singulars'.)
|
| Used in logic in two closely connected senses.
|
| (1) According to the more formal of these an individual is an
| object (or term) not only actually determinate in respect to
| having or wanting each general character and not both having
| and wanting any, but is necessitated by its mode of being to
| be so determinate.  See Particular (in logic).
|
| C.S. Peirce, 'Collected Papers', CP 3.611
|
|'Dictionary of Philosophy and Psychology',
| J.M. Baldwin (ed.), Macmillan, New York, NY,
| Volume 1, pp. 537-538, 2nd edition 1911.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

DOI.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Individual (1, cont.)
|
| This definition does not prevent two distinct individuals from being
| precisely similar, since they may be distinguished by their hecceities
| (or determinations not of a generalizable nature);  so that Leibniz's
| principle of indiscernibles is not involved in this definition.
|
| Although the principles of contradiction and excluded middle may be regarded
| as together constituting the definition of the relation expressed by "not",
| yet they also imply that whatever exists consists of individuals.  This,
| however, does not seem to be an identical proposition or necessity of
| thought;  for Kant's Law of Specification ('Krit. d. reinen Vernunft',
| 1st ed., 656, 2nd ed., 684;  but it is requisite to read the whole
| section to understand his meaning), which has been widely accepted,
| treats logical quantity as a continuum in Kant's sense, i.e., that
| every part of which is composed of parts.  Though this law is only
| regulative, it is supposed to be demanded by reason, and its wide
| acceptance as so demanded is a strong argument in favour of the
| conceivability of a world without individuals in the sense of
| the definition now considered.
|
| Besides, since it is not in the nature of concepts adequately
| to define individuals, it would seem that a world from which
| they were eliminated would only be the more intelligible.
|
| A new discussion of the matter, on a level with
| modern mathematical thought and with exact logic,
| is a desideratum.  A highly important contribution
| is contained in Schroeder's 'Logik', iii, Vorles. 10.
| What Scotus says ('Quaest. in Met.', VII 9, xiii & xv)
| is worth consideration.
| 
| C.S. Peirce, 'Collected Papers', CP 3.612
|
|'Dictionary of Philosophy and Psychology',
| J.M. Baldwin (ed.), Macmillan, New York, NY,
| Volume 1, pp. 537-538, 2nd edition 1911.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

DOI.  Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Individual (cont.)
|
| (2) Another definition which avoids the above difficulties is that
| an individual is something which reacts.  That is to say, it does
| react against some things, and it is of such a nature that it
| might react, or have reacted, against my will.
|
| This is the stoical definition of a reality;  but since the Stoics were
| individualistic nominalists, this rather favours the satisfactoriness
| of the definition than otherwise.
|
| It may be objected that it is unintelligible;  but in the sense
| in which this is true, it is a merit, since an individual is
| unintelligible in that sense.  It is a brute fact that the
| moon exists, and all explanations suppose the existence
| of that same matter.  That existence is unintelligible
| in the sense in which the definition is so.  That is
| to say, a reaction may be experienced, but it cannot
| be conceived in its character of a reaction;  for
| that element evaporates from every general idea.
|
| According to this definition, that which alone immediately
| presents itself as an individual is a reaction against the will.
| But everything whose identity consists in a continuity of reactions
| will be a single logical individual.  Thus any portion of space, so far
| as it can be regarded as reacting, is for logic a single individual;  its
| spatial extension is no objection.
|
| With this definition there is no difficulty about the truth that whatever
| exists is individual, since existence (not reality) and individuality are
| essentially the same thing;  and whatever fulfills the present definition
| equally fulfills the former definition by virtue of the principles of
| contradiction and excluded middle, regarded as mere definitions of
| the relation expressed by "not".
|
| As for the principle of indiscernibles, if two individual things are
| exactly alike in all other respects, they must, according to this
| definition, differ in their spatial relations, since space is
| nothing but the intuitional presentation of the conditions of
| reaction, or of some of them.  But there will be no logical
| hindrance to two things being exactly alike in all other
| respects;  and if they are never so, that is a physical
| law, not a necessity of logic.  This second definition,
| therefore, seems to be the preferable one.
|
| C.S. Peirce, 'Collected Papers', CP 3.613
|
|'Dictionary of Philosophy and Psychology',
| J.M. Baldwin (ed.), Macmillan, New York, NY,
| Volume 1, pp. 537-538, 2nd edition 1911.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

DOI.  Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Let me highlight the following statement from the thread of HECeity:

| From this it is evident that what is to be called heterogeneous or not is
| a relative matter.  However, in our calculus it is enough for two things
| to have no concepts in common out of certain fixed concepts which are
| designated by us, even though they may have others in common.
|
| Leibniz, "Elements of a Calculus", April 1679.

So, if Leibniz, from the very outset, did not intend his coincidence metric,
his measure of conceptual remove, to be taken in the absolute way that some
seem to think he meant it, what then?  Have those who take it that way once
again mistaken an interpretive heuristic regulation for an ontological law?

I tend to think so.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

DOI.  Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o



o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Hermeneutic Equivalence Classes

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (April, 1679)
|
| 1.  A "term" is the subject or predicate of a categorical proposition.
|     Under the heading of "term", therefore, I include neither the sign of
|     quantity nor the copula;  so when it is said, "The wise man believes"
|     the term will be, not "believes", but "believer", for it is the same
|     as if I had said, "The wise man is a believer".
|
| 2.  By "propositions" I understand here categorical propositions,
|     unless I make special mention to the contrary.  However, the
|     categorical proposition is the basis of the rest, and modal,
|     hypothetical, disjunctive, and all other propositions
|     presuppose it.
|
|     I call "categorical" the proposition "A is B" or "A is not B",
|     i.e. "It is false that A is B", together with a variation in the
|     sign of quantity, so that either it is a universal proposition and
|     is understood of every subject, or it is a particular proposition and
|     is understood of some subject.
|
| Leibniz, 'Logical Papers', p. 17.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pages 17-24,
| Oxford University Press, London, UK, 1966.   (L. Couturat, p. 49).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 3.  Let there be assigned to any term its
|     symbolic number ['numerus characteristicus'],
|     to be used in calculation as the term itself is
|     used in reasoning.  I choose numbers whilst writing;
|     in due course I will adapt other signs both to numbers
|     and to speech itself.  For the moment, however, numbers
|     are of the greatest use, because of their certainty and
|     of the ease with which they can be handled, and because
|     in this way it is evident to the eye that everything is
|     certain and determinate in the case of concepts, as it
|     is in the case of numbers.
|
| 4.  The one rule for discovering suitable symbolic numbers is this:
|     that when the concept of a given term is composed directly of
|     the concepts of two or more other terms, then the symbolic
|     number of the given term should be produced by multiplying
|     together the symbolic numbers of the terms which compose
|     the concept of the given term.
|
|     For example, since man is a rational animal,
|     if the number of animal, 'a', is 2, and of
|     rational, 'r', is 3, then the number of
|     man, 'h', will be the same as 'ar':
|     in this example, 2 x 3, or 6.
|
|     Again, since gold is the heaviest metal,
|     if the number of metal, 'm', is 3, and
|     the number of heaviest, 'p', is 5, then
|     the number of gold, 's', will be the
|     same as 'mp', i.e. in this example
|     3 x 5, or 15.
|
| [*  Leibniz here refers to gold by its alchemical name, 'Sol'.]
|
| Leibniz, 'Logical Papers', pp. 17-18.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).
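
Just to make the rule in article 4 concrete, here is a minimal sketch
in Python.  The figures -- animal = 2, rational = 3, metal = 3, and
heaviest = 5 -- are the ones Leibniz uses above;  the code itself is
purely illustrative and nothing Leibniz himself proposes.

    from functools import reduce
    from operator import mul

    # Symbolic numbers for the primitive concepts, as in Leibniz's examples.
    symbolic = {"animal": 2, "rational": 3, "metal": 3, "heaviest": 5}

    def number_of(*components):
        """Article 4:  the symbolic number of a composite term is the
        product of the symbolic numbers of the terms composing it."""
        return reduce(mul, (symbolic[c] for c in components), 1)

    symbolic["man"]  = number_of("animal", "rational")   # 2 x 3 = 6
    symbolic["gold"] = number_of("metal", "heaviest")    # 3 x 5 = 15

    print(symbolic["man"], symbolic["gold"])              # 6 15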

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 5.  We shall use letters (such as 'a', 'r', 'h', or 'm', 'p', 's' above) when
|     numbers are either not available, or they are at any rate being treated
|     generally and not considered specifically.  This we must do here, when
|     we are establishing the elements of the subject.  The same thing is
|     done in algebra, so that we are not compelled to show in individual
|     cases what we can show once and for all of an indefinite number of
|     instances.  The method of using letters here I shall explain below.
|
| 6.  The rule given in article 4 is sufficient for our calculus  to cover
|     all things in the whole world, as far as we have distinct concepts
|     of them, i.e. as far as we know some of their requisites by which,
|     after we have examined them bit by bit, we can distinguish them
|     from all others;  or, as far as we can assign their definition.
|     For these requisites are simply the terms whose concepts compose
|     the concept which we have of a thing.
|
|     We can distinguish many things from others by their requisites,
|     and if there are any whose requisites are difficult to assign,
|     we will assign to them in the mean time some prime number,
|     and use it to designate other things.  [?].
|
|     In this way we shall be able to discover and prove
|     by our calculus at any rate all the propositions
|     which can be proved without the analysis of what
|     has temporarily been assumed to be prime.  (In
|     the same way, Euclid never uses the definition
|     of a straight line in his proofs, but instead
|     used certain assumptions which he took to be
|     axiomatic.  But when Archimedes wanted to
|     go further, he was compelled to analyse
|     and define the straight line itself --
|     namely, as the least distance between
|     two points.)
|
|     In this way we shall discover, if not all, at any rate innumerable things;
|     both those which have already been proved by others, and those which can
|     ever be proved by others from the definitions, axioms, and experiments
|     which are already known.
|
|     This is our prerogative:  that by means of numbers we can judge immediately
|     whether propositions presented to us are proved, and that what others could
|     hardly do with the greatest mental labour and good fortune, we can provide
|     with the guidance of symbols alone, by a sure and truly analytical method.
|     As a result of this, we shall be able to show within a century what many
|     thousands of years would hardly have granted to mortals otherwise.
|
| Leibniz, 'Logical Papers', p. 18.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 7.  To make evident the use of symbolic numbers in propositions, it is
|     necessary to consider the fact that every true universal affirmative
|     categorical proposition simply shows ['significat'] some connexion between
|     predicate and subject (a 'direct' connexion, which is what is always meant
|     here).  This connexion is, that the predicate is said to be in the subject,
|     or to be contained in the subject;  either absolutely and regarded in itself,
|     or at any rate in some instance, i.e. that the subject is said to contain the
|     predicate in a stated fashion.  This is to say that the concept of the subject,
|     either in itself or with some addition, involves the concept of the predicate,
|     and therefore that subject and predicate are related to each other either as
|     whole and part, or as whole and coincident whole, or as part to whole.
|
|     In the first two cases the proposition is a universal affirmative;
|     so when I say "All gold is metal" I simply mean that in the concept
|     of gold the concept of metal is contained directly, since gold is
|     the heaviest metal.
|
|     Again, when I say, "Every pious man is happy", I mean simply this:
|     that the connexion between the concepts of the pious man and of
|     the happy man is such that anyone who understands perfectly the
|     nature of the pious man will realize that the nature of the
|     happy man is involved in it directly.
|
|     But in all cases, whether the subject or predicate is a part
|     or a whole, a particular affirmative proposition always holds.
|
|     For example, some metal is gold;  for although metal does not by itself
|     contain gold, nevertheless some metal, with some addition or specification
|     (e.g. "that which makes up the greater part of the Hungarian ducat") is of
|     such a nature as to involve the nature of gold.
|
|     There is, however, a difference in the method of containment between the
|     subject of a universal and of a particular proposition.  For the subject
|     of a universal proposition, regarded in itself and taken absolutely, must
|     contain the predicate;  thus the concept of gold, regarded in itself and
|     taken absolutely, involves the concept of metal, for the concept of gold
|     is "the heaviest metal".  But in a particular affirmative proposition,
|     it is enough that the inclusion should hold with some addition.  The
|     concept of metal, regarded absolutely and taken in itself, does not
|     involve the concept of gold;  for it to do so, something must
|     be added.  This "something" is the sign of particularity;
|     for there is some certain metal which contains the
|     concept of gold.
|
|     However, when we say later that a term is contained in a term or
|     a concept in a concept, we shall understand "simply and in itself".
|
| Leibniz, 'Logical Papers', pp. 18-19.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 8.  Negative propositions merely contradict affirmatives, and assert
|     that they are false.  So a particular negative proposition simply
|     denies that an affirmative proposition is universal.  For example,
|     when I say "Some silver is not soluble in common 'aqua fortis'", I
|     simply mean that the universal affirmative proposition "All silver
|     is soluble in common 'aqua fortis'" is false.  For, if we believe
|     certain chemists, there is a contrary instance, which they call
|     "fixed silver" ['Luna fixa'].  A universal negative proposition
|     merely contradicts a particular affirmative.  For example, if I
|     say "No wicked man is happy", I mean that it is false that some
|     wicked man is happy.  So it is evident that negatives can be
|     understood from affirmatives, and conversely, affirmatives
|     from negatives.
|
| 9.  Further, in every categorical proposition there are two terms.
|     Any two terms, in so far as they are said to be in or not to
|     be in, i.e. to be contained or not to be contained, differ
|     in the following ways:  that either one is contained in
|     the other, or neither is.  If the one is contained in
|     the other, then either the one is equal to the other
|     or they differ as whole and part.  If neither is
|     contained in the other, then either they contain
|     something which is common, but not too remote,
|     or they are totally different.  However, we
|     will explain this species by species.
|
| Leibniz, 'Logical Papers', pp. 19-20.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 10.  Two terms which contain each other
|      and are nevertheless equal I call
|      "coincident".
|
|      For example, the concept of a triangle
|      coincides in effect with the concept of
|      a trilateral -- i.e. as much is contained
|      in the one as in the other.
|
|      Sometimes this may not appear at first sight,
|      but if one analyses each of the two one will
|      at last come to the same.
|
| 11.  Two terms which contain each other [one the other?] but do not coincide
|      are commonly called "genus" and "species".  These, in so far as they
|      compose concepts or terms (which is how I regard them here) differ
|      as part and whole, in such a way that the concept of the genus is
|      a part and that of the species is a whole, since it is composed
|      of genus and differentia.
|
|      For example, the concept of gold and the concept of metal
|      differ as part and whole;  for in the concept of gold there
|      is contained the concept of metal and something else -- e.g.
|      the concept of the heaviest among metals.  Consequently, the
|      concept of gold is greater than the concept of metal.
|
| Leibniz, 'Logical Papers', p. 20.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 12.  The Scholastics speak differently;
|      for they consider, not concepts,
|      but instances which are brought
|      under universal concepts.
|
|      So they say that metal is wider than gold,
|      since it contains more species than gold, and
|      if we wish to enumerate the individuals made of
|      gold on the one hand and those made of metal on
|      the other, the latter will be more than the former,
|      which will therefore be contained in the latter as
|      a part in the whole.
|
|      By the use of this observation, and with suitable symbols,
|      we could prove all the rules of logic by a calculus somewhat
|      different from the present one -- that is, simply by a kind of
|      inversion of it.  However, I have preferred to consider universal
|      concepts, i.e. ideas, and their combinations, as they do not depend
|      on the existence of individuals.
|
|      So I say that gold is greater than metal, since more is required for the
|      concept of gold than for that of metal and it is a greater task to produce
|      gold than to produce simply a metal of some kind or other.
|
|      Our language and that of the Scholastics, then, is not contradictory here, but
|      it must be distinguished carefully.  However, it will be evident to anyone who
|      considers the matter that I have not made any linguistic innovation which does
|      not have some reason and some utility.
|
| Leibniz, 'Logical Papers', pp. 20-21.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 8

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 13.  If neither term is contained in the other
|      they are called "disparate";  and then, as
|      I have said, they either have something in
|      common or they differ totally.
|
|      Those terms have something in common which are under
|      the same genus, and which you could call "conspecies";
|      as "man" and "brute" have the concept of animal in common.
|      "Gold" and "silver" have in common the concept of metal,
|      "gold" and "vitriol", that of mineral.
|
|      From this it is evident that two terms have more or less in common
|      as their genus is less or more remote.  For if the genus is very
|      remote, there will be little that the species have in common.
|
|      If the genus is most remote, we say that things are "heterogeneous",
|      i.e. that they differ totally, like body and spirit;  not because
|      they have nothing in common -- for they are both substances, at
|      any rate -- but because this common genus is very remote.
|
|      From this it is evident that what is to be called heterogeneous or not is
|      a relative matter.  However, in our calculus it is enough for two things
|      to have no concepts in common out of certain fixed concepts which are
|      designated by us, even though they may have others in common.
|
| Leibniz, 'Logical Papers', p. 21.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 9

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 14.  What we have just said about terms which, in various ways, contain
|      or do not contain each other, let us now transfer to their symbolic
|      numbers.  This is easy, since we said in article 4 that when a term
|      helps to constitute another, i.e. when the concept of the term is
|      contained in the concept of another, then the symbolic number of
|      the constituent term is a factor of the symbolic number to be
|      assumed as standing for the term to be constituted;  or, what
|      is the same, the symbolic number of the term to be constituted
|      (i.e. which contains another) is divisible by the symbolic number
|      of the constituent term (i.e. which is in the other).
|
|      For example, the concept of animal helps to constitute the concept of man,
|      and so the symbolic number of animal, 'a' (e.g. 2), together with another
|      number 'r' (such as 3), will be a factor of the number 'ar', or 'h' (2 x 3,
|      or 6) -- namely, the symbolic number of man.  It is therefore necessary
|      that the number 'ar' or 'h' (i.e. 6) can be divided by 'a' (i.e. by 2).
|
| Leibniz, 'Logical Papers', p. 21.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 10

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 15.  When two terms are coincident, e.g. "man" and "rational animal", then
|      their numbers, 'h' and 'ar', are in effect coincident (as 2 x 3 and 6).
|      Since, however, the one term contains the other in this way, although
|      reciprocally (for "man" contains "rational animal", and nothing besides;
|      and "rational animal" contains "man", and nothing besides which is not
|      already contained in "man"), it is necessary that the numbers 'h' and
|      'ar' (2 x 3 and 6) should also contain each other.  This is the case,
|      since they are coincident, and the same number is contained in itself.
|
|      Furthermore, it is necessary that the one can be divided by the other,
|      which is also the case;  for if any number is divided by itself, the
|      result is unity.  So what we said in the previous article -- that
|      when one term contains another the symbolic number of the former
|      is divisible by the symbolic number of the latter -- also holds
|      in the case of coincident terms.
|
| Leibniz, 'Logical Papers', pp. 21-22.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 11

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 16.  Hence we can also know by symbolic numbers which term does not
|      contain another;  for we have only to test whether the number
|      of the latter can divide exactly the number of the former.
|
|      For example, if the symbolic number of man is assumed to be 6, and
|      that of ape to be 10, it is evident that neither does the concept
|      of ape contain the concept of man, nor does the converse hold,
|      since 10 cannot be exactly divided by 6, nor 6 by 10.
|
|      If, therefore, it is asked whether the concept of the wise man is
|      contained in the concept of the just man, i.e. if nothing more is
|      required for wisdom than what is already contained in justice, we
|      have only to examine whether the symbolic number of the just man
|      can be exactly divided by the symbolic number of the wise man.
|      If the division cannot be made, it is evident that something
|      else is required for wisdom which is not required in the just
|      man.  (This "something else" is a knowledge of reasons;  for
|      someone can be just by custom or habit, even if he cannot give
|      a reason for the things he does.)  I will state later how this
|      minimum which is still required, or is to be supplied, can also
|      be found by symbolic numbers.
|
| Leibniz, 'Logical Papers', p. 22.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 12

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 17.  From this, therefore, we can know whether some
|      universal affirmative proposition is true.  For
|      in this proposition the concept of the subject,
|      taken absolutely and indefinitely, and in general
|      regarded in itself, always contains the concept of
|      the predicate.
|
|      For example, all gold is metal;  that is, the concept of metal is
|      contained in the general concept of gold regarded in itself, so that
|      whatever is assumed to be gold is by that very fact assumed to be metal.
|      This is because all the requisites of metal (such as being homogeneous
|      to the senses, liquid when fire is applied in a certain degree, and then
|      not wetting things of another genus immersed in it) are contained in the
|      requisites of gold, as we explained at length in article 7 above.  So if
|      we want to know whether all gold is metal (for it can be doubted whether,
|      for example, fulminating gold is still a metal, since it is in the form of
|      a powder and explodes rather than liquefies when fire is applied to it in
|      a certain degree) we shall only investigate whether the definition of metal
|      is in it.  That is, by a very simple procedure (once we have our symbolic
|      numbers) we shall investigate whether the symbolic number of gold can be
|      divided by the symbolic number of metal.
|
| Leibniz, 'Logical Papers', pp. 22-23.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).
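
Articles 14 through 17 turn containment among concepts into divisibility
among symbolic numbers.  A minimal sketch of that test in Python, using
only the figures Leibniz gives in the notes above (man = 6, ape = 10,
gold = 15, metal = 3):

    def contains(subject_number, predicate_number):
        """Articles 14-17:  the subject contains the predicate -- so the
        universal affirmative "All S is P" holds -- just in case the
        symbolic number of the subject is exactly divisible by the
        symbolic number of the predicate."""
        return subject_number % predicate_number == 0

    print(contains(6, 10), contains(10, 6))   # False False : man vs ape (article 16)
    print(contains(15, 3))                    # True : "All gold is metal" (article 17)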

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 13

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 18.  But in the particular affirmative proposition it is not necessary that
|      the predicate should be in the subject regarded in itself and absolutely;
|      i.e. that the concept of the subject should in itself contain the concept
|      of the predicate;  it is enough that the predicate should be contained in
|      some species of the subject, i.e. that 'the concept of some instance or
|      species of the subject should contain the concept of the predicate',
|      even though it is not stated expressly what the species is.
|
|      Consequently, if you say, "Some experienced man is prudent", it is
|      not said that the concept of the prudent man is contained in the
|      concept of the experienced man regarded in itself.  Nor, again,
|      is this denied;  it is enough for our purpose that some species
|      of experienced man has a concept which contains the concept of
|      the prudent man, even though it is not stated expressly just
|      what that species is.  For even if it is not said expressly
|      here that the experienced man is a prudent man who also has
|      natural judgement, it is enough that it is understood that
|      some species of experienced man involves prudence.
|
| Leibniz, 'Logical Papers', p. 23.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 14

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (cont.)
|
| 19.  If the concept of the subject, regarded in itself,
|      contains the concept of the predicate, then the
|      concept of the subject with some addition (i.e.
|      the concept of a species of the subject) will
|      contain the concept of the predicate.  This
|      is enough for us, since we do not deny that
|      the predicate is in the subject itself when
|      we say that it is in a species of it.
|
|      So we can say that some metal is liquid in fire,
|      properly applied, though we could have stated
|      more generally and more usefully that every
|      metal is liquid in fire, properly applied.
|
|      However, a particular assertion has its uses, as when
|      it is sometimes proved more easily than a general one,
|      or when the hearer will accept it more readily than a
|      general proposition, and a particular proposition is
|      sufficient for our purposes.
|
| Leibniz, 'Logical Papers', p. 23.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Note 15

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Leibniz, "Elements of a Calculus" (concl.)
|
| 20.  Since, therefore, all that is required for a particular
|      affirmative proposition is that a species of the subject
|      should contain the predicate, it follows that the subject
|      is related to the predicate either as species to genus, or
|      as a species to something which coincides with itself (i.e.
|      a reciprocal attribute), or as genus to species.  That is,
|      the concept of the subject will be related to the concept
|      of the predicate either as whole to part, or as a whole
|      to a coincident whole, or as a part to the whole (see
|      above, articles 7 and 11).
|
|      It will be related as whole to part when the concept of the predicate, as genus,
|      is in the concept of the subject, as species:  e.g. if "bernicle" is the subject
|      and "bird" the predicate.  It will be related as whole to coincident whole when
|      two equivalents are stated of each other reciprocally, as when "triangle" is
|      the subject and "trilateral" the predicate.  Finally, it will be related as
|      part to whole, as when "metal" is the subject and "gold" the predicate.
|      So we can say "Some bernicle is a bird", "Some triangle is a trilateral",
|      (even though I could also have stated these two propositions universally),
|      and lastly "Some metal is gold".
|
|      In other cases a particular affirmative proposition does not hold.
|      I prove this as follows.  If a species of the subject contains
|      the predicate, it will contain it either as coincident with
|      itself or as a part;  if it contains it as equal to itself,
|      i.e. as coincident, then the predicate is a species of the
|      subject, since it coincides with a species of the subject.
|      But if a species of the subject contains the predicate as
|      a part, the predicate will be a genus of a species of the
|      subject, by article 11;  therefore predicate and subject
|      will be two genera of the same species.  Now, two genera
|      of the same species either coincide or, if they do not,
|      they are necessarily related as genus and species.
|      This is easily shown, since the concept of the
|      genus is formed simply by casting-off [abjectio]
|      from that of the species;  since, therefore, from
|      a common species of two genera, genera will appear
|      on both sides by continued casting-off (that is, they
|      will be left behind as superfluous concepts are cast off),
|      one will appear before the other, and so one will seem to be
|      a whole and the other a part.**
|
|      So we have a paralogism, and with it there falls much that we have said
|      hitherto;  for I see that a particular affirmative proposition holds even
|      when neither term is a genus or species, such as "Some animal is rational",
|      provided only that the terms are compatible.  Hence it is also evident that
|      it is not necessary that the subject can be divided by the predicate or the
|      predicate by the subject, on which we have so far built a great deal.  What
|      we have said, therefore, is more restricted than it should be;  so we shall
|      begin again.
|
| [**  Leibniz has written in the margin:
|
|      |          (  2, 3, 4, 5
|      |          (  sensible body
|      | diamond  <
|      |          (  homogeneous
|      |          (  most durable
|
|      This is perhaps meant to illustrate how different genera can be
|      obtained from the species "diamond" ('adamas') by "casting-off"
|      concepts successively.]
|
| Leibniz, 'Logical Papers', pp. 23-24.
|
| Leibniz, G.W., "Elements of a Calculus" (April, 1679),
| G.H.R. Parkinson (ed.), 'Leibniz:  Logical Papers', pp. 17-24,
| Oxford University Press, London, UK, 1966.   (Couturat, 49-57).
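
The paralogism Leibniz catches in article 20 is easy to reproduce with the
same toy figures:  "Some animal is rational" is true because the concepts
are compatible -- man combines them -- yet neither symbolic number divides
the other, so the divisibility criterion as developed up to this point
wrongly rejects the particular affirmative.  A minimal sketch in Python,
again purely illustrative:

    # Leibniz's figures from the earlier notes:  animal = 2, rational = 3, man = 6.
    animal, rational, man = 2, 3, 6

    def contains(s, p):
        return s % p == 0

    # Neither concept contains the other ...
    print(contains(animal, rational), contains(rational, animal))   # False False

    # ... yet "Some animal is rational" is true, since a common species
    # (man) contains both -- the case the criterion as stated misses.
    print(contains(man, animal) and contains(man, rational))        # True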

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Work Area

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

coincident -- 20, 53-54, 57-58, 122, 131
identity   -- xx
same       -- xxvi, xxxi, lxii, 34-35, 43, 52-53, 122, 131

the difference between authoritarian dogmatic
and exploratory heuristic thinkers.

isms as referring to emphasized aspects of a complex reality
or as heuristic approaches to it.  reduction to one aspect
or one approach.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HRY.  Hemlock, the Rack, and You -- W.T. Sherman

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HRY.  Innumerate Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

people who study groups -- i mean the social, not of necessity the mathematical kind --
will tell you that every group has its espoused ideals and its real ideals, and that
seldom the twain shall mark each other.  so it is hardly surprising that an ideal
like "thou shalt not block the way of inquiry" will be honored more in the breach
than in the observance.  that is not cynicism, it is merely empiricism.  so we
find that what distinguishes people, from secular citizens to holy peirceans,
is not so much whether they will block inquiry, but what excuses they will
give when they do.  i have come to gather a rather large dataset of these
in my time -- the most interesting one that i have encountered lately
is "your inquiry is blocking mine".  it's a near perfect squelch,
as squelches go, but it does raise an interesting question about
the presence of interference phenomena in the electrodynamic
fields of inquiry.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HRY.  Innumerate Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway

HC: I'm sure there is such a phenomenon, Jon.  It enters into the
    philosophical motivation of pluralism at the deepest level.
    William James' metaphor is quite apt here.  If we imagine
    a group of people carrying a log, then no one person could
    do it -- get the log from point A to point B.  So, it has
    to be a group.  Yet, it doesn't follow that if we need to
    have a group of people to carry the log that it has to be
    exactly the same group of people all the way from A to B.

HC: If carrying the log stands in for inquiry in general, then one
    alternative here is that different groups should do the work
    over differing portions of the course from A to B.

HC: Moreover, the same person might be involved in differing groups, over
    differing portions of the course from A to B.  In the background is
    James' distinction between the "all-einheit" of absolute idealism and
    his own pluralism.  See our selection from James, 'A Pluralistic Universe',
    in Stroh and Callaway, 'American Ethics', p. 276f.  This kind of point is
    both moral and methodological.

A very apt quote, and a bit of synchronicity, as I happen to
have a -- somewhat distributed -- dataset of many such quotes
along these same lines from Peirce, Dewey, James, and Mead --
it is clear that all of these pioneers spent a lot of time in
the woods! -- plus one from H.P. Stapp that ties them together
and brings them to bear on contemporary issues.  So I will put
that on my do-list to go look-up.

As an optimist, you naturally focussed on the cooperative aspect of
the underlying problematic.  The yin to that yang is the competitive
face of the game, and though I'm an optimist while I yet live, hard
experience has taught me to be wary of that.  I am not naive -- the
person who says to you "your inquiry is blocking mine" (YIIBM) is
nine times out of ten concerned about something more ego-related
than the purity of his or her ongoing life of inquiry.  So we
must approach the log-jam with caution, and not be too blasé
about jumping into the water, while we yet live to inquire.

Still, your borrowing of this metaphor does
point strait to the crux of the problem,
whether co-operative or co-optative,
and that is precisely the physical
character of signs in practice.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HRY.  Innumerate Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o



o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Identity & Teridentity

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

We had a very extended discussion of this on the
Ontology, Standard & Otherwise Lists last year,
and I gave a detailed exposition of the various
issues as they are understood in mathematics.
I have heard of Burch's & Herzberger's proofs,
but never bothered to look them up, since the
basic facts are considered in graph theory
circles to be nearly trivial observations
that have been established from the time
of Euler, at least.

Most of the controversy in other circles
appears to turn on (1) not understanding
the statement of the question, as it is
generally understood, and as Peirce most
definitely understood it -- this appears
to be the problem with Quine's fallacy,
since what he does prove is irrelevant
and trivial with respect to the matter
in question, (2) failing to define the
terms that one is using in a way that
makes the problem well-posed.

The interesting thing about the "reduction" half of the problem
is simply that one can play games with the accessibility or the
admissibility of the hypostatic intermediary relations.  On the
"irreduction" side, in constrast, saying that you can compose
2-adic relations in such a way as to produce a 3-adic relation
just as embarrassing a gaffe as saying that you can multiply
square matrices in the ordinary way and get a cubic array!
No proof is needed here, because it is simply a matter of
understanding the definition of "compose" or "product"
that is used in the statement of the problem.
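
The point about "compose" in the last paragraph can be made concrete:
relational composition, like the ordinary matrix product, takes two
2-adic relations to a 2-adic relation, never to a 3-adic one.  A minimal
sketch in Python, with arbitrary example relations:

    def compose(P, Q):
        """Ordinary relational composition of 2-adic relations:
        (x, z) is in P o Q iff there is a y with (x, y) in P
        and (y, z) in Q.  The result is again a set of pairs."""
        return {(x, z) for (x, y1) in P for (y2, z) in Q if y1 == y2}

    P = {(1, 2), (2, 3)}
    Q = {(2, 4), (3, 5)}

    R = compose(P, Q)
    print(R)                               # {(1, 4), (2, 5)}
    print(all(len(t) == 2 for t in R))     # True -- still 2-adic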

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway

HC: But if we add an identity to the statement,
    then we get something more definite, as with,
    say, (Ex)(Ey) [(Fx & Fy) & Ixy].  Here "Ixy"
    establishes a cross-reference between the two
    variables, while without this, we only have the
    distinctive cross-reference of each variable to
    its corresponding quantifier.

Counting the tokens of the lexical items "x" and "y" in the above text,
I count 4 "x"'s and 4 "y"'s.  Can you explain to me the relationships
that you see as being established among these eight various tokens?

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway
JA = Jon Awbrey

HC_1:  But if we add an identity to the statement,
       then we get something more definite, as with,
       say, (Ex)(Ey) [(Fx & Fy) & Ixy].  Here "Ixy"
       establishes a cross-reference between the two
       variables, while without this, we only have the
       distinctive cross-reference of each variable to
       its corresponding quantifier.

JA: Counting the tokens of the lexical items "x" and "y" in the above text,
    I count 4 "x"'s and 4 "y"'s.  Can you explain to me the relationships
    that you see as being established among these eight various tokens?

HC_2: The basic cross reference is between a quantified variable,
      such as the "x" of "Fx" in "(Ex) Fx" and the corresponding
      variable of quantification, to be seen in the quantifier
      "(Ex)."   So, for instance, if we say,

      (Ex) (Fx & Gx)

      then the second and third "x"'s make cross reference to the first:
      imagine, if you will that instead of repeating the "x" we drew a line,
      in somewhat Peircean style, from the 2nd and the 3rd positions back to
      the first.  This is to say that there is something x, such that it is
      an F and it is a G.  But, if we cannot be sure of co-reference inside
      a given expression, then we use distinctive variables to keep track of
      the distinctive lines of cross-reference.  This works somewhat like the
      distinctive genders of pronouns in those languages which mark nouns and
      pronouns for gender.

HC_3: Now look at the more complex case, with two quantifiers:

      (Ex) (Ey) (Fx & Gy)

      Here the second "x" makes cross reference back to
      the quantifier containing the same variable, and the
      second y refers back to the variable of quantification
      in the second quantifier:  There is something x and there
      is something y such that x is an F and y is a G.  Given that
      this statement is true, we know that one or more of the values
      of the variables are F's and that one or more of the values of
      the variables are G's, but we don't know whether these values
      are the same or different.  The sentence will be true if they
      are the same, but it does not logically imply that they are.
      It will also be true if they are different.  However, this
      last option is eliminated if we add the identity statement,
      as in the original example,

      (Ex) (Ey) [(Fx & Gy) & Ixy]

HC_4: This tells us no more than the following,

      (Ex) (Fx & Gx) 

HC_5: I think that should answer the question, though I have not detailed the
      answer regarding the last examples.  Of course, the variables continue
      to "range over" the entire domain, whatever that may be, but if the
      statements are true, then this tells us something about how the
      predicates are to be interpreted in relationship to each other,
      and this also tells us something of the details of the domain.

Counting the tokens of the lexical items "x" and "y"
in the above texts, I make the following observations:

1.  In the subtext marked "HC_1", I count 4 "x"'s and 4 "y"'s.

2.  In the subtext marked "HC_2", I count 4 "x"'s. 

3.  In the subtext marked "HC_3", I count 7 "x"'s and 8 "y"'s.

4.  In the subtext marked "HC_4", I count 3 "x"'s.

Can you explain to me the relationships that you
see as being established among these 30 tokens,
some of which are distributed across distinctly
dated and variously located files of type text?
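
Whatever one makes of the token question, the semantic claim in HC_3 and
HC_4 above -- that (Ex)(Ey)[(Fx & Gy) & Ixy], with "I" read as identity,
says no more and no less than (Ex)(Fx & Gx) -- can be checked by brute
force over a finite domain.  A minimal sketch in Python, with an arbitrary
3-element domain:

    from itertools import product

    domain = [0, 1, 2]

    def agree(F, G):
        """Compare the two formulas under one interpretation of F and G."""
        lhs = any(F[x] and G[y] and x == y for x in domain for y in domain)
        rhs = any(F[x] and G[x] for x in domain)
        return lhs == rhs

    # Try every interpretation of the predicates F and G over the domain.
    print(all(agree(dict(zip(domain, f)), dict(zip(domain, g)))
              for f in product([False, True], repeat=len(domain))
              for g in product([False, True], repeat=len(domain))))   # True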

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

The Invisible Hand Of The Markuplace

BM = Bernard Morand
HC = Howard Callaway
JA = Jon Awbrey

There's a lot of stuff here to think about so
we will need to take it real slow and careful.
But just so that it won't be too boring, not
to say "elegant", let me throw out a few rash
and hopefully provocative opinions that come
from my current perspective on these issues.

1.  There are ideas in Peirce's 1870 "Description of a Notation" that make
    the so-called "discovery" of quantifiers look like a relatively minor
    afterthought in comparison.  For instance, a hint of the possibility
    of "differential logic", which so far I have only found previously
    suggested in the correspondence between Boole and DeMorgan, and
    maybe a few even more mysterious remarks of Leibniz, but that
    would be stretching it.

2.  On second thought, that's enough for now.

HC: But if we add an identity to the statement,
    then we get something more definite, as with,
    say, (Ex)(Ey) [(Fx & Fy) & Ixy].  Here "Ixy"
    establishes a cross-reference between the two
    variables, while without this, we only have the
    distinctive cross-reference of each variable to
    its corresponding quantifier.

JA: Counting the tokens of the lexical items "x" and "y" in the above text,
    I count 4 "x"'s and 4 "y"'s. Can you explain to me the relationships
    that you see as being established among these eight various tokens?

HC: The basic cross reference is between a quantified variable,
    such as the "x" of "Fx" in "(Ex) Fx" and the corresponding
    variable of quantification, to be seen in the quantifier
    "(Ex)."   So, for instance, if we say,

    (Ex) (Fx & Gx)
    
    then the second and third "x"'s make cross reference to the first:
    imagine, if you will that instead of repeating the "x" we drew a line,
    in somewhat Peircean style, from the 2nd and the 3rd positions back to
    the first.  This is to say that there is something x, such that it is
    an F and it is a G.  But, if we cannot be sure of co-reference inside
    a given expression, then we use distinctive variables to keep track of
    the distinctive lines of cross-reference. This works somewhat like the
    distinctive genders of pronouns in those languages which mark nouns and
    pronouns for gender.

    Now look at the more complex case, with two quantifiers:
    
    (Ex) (Ey) (Fx & Gy)

    Here the second "x" makes cross reference back to the quantifier
    containing the same variable, and the second y refers back to the
    variable of quantification in the second quantifier:  There is
    something x and there is something y such that x is an F and y
    is a G. Given that this statement is true, we know that one or
    more of the values of the variables are F's and that one or more
    of the values of the variables are G's, but we don't know whether
    these values are the same or different.  The sentence will be true
    if they are the same, but it does not logically imply that they are.
    It will also be true if they are different. However, this last option
    is eliminated if we add the identity statement, as in the original 
    example,

    (Ex) (Ey) [(Fx & Gy) & Ixy]

BM: This seems to be precisely what I had in mind in a previous message
    when I said that the Peirce-Mitchell quantifiers were different from
    those of modern logic.  To be sure:  do you consider, Howard, that the
    formula you are giving above, as augmented with the Ixy, could be a valid
    interpretation of the Sigma and Pi from Peirce?  Or do you think otherwise?

HC: This tells us no more than the following, (Ex)(Fx & Gx).

BM: Yes indeed!  And then x is no more a variable nor a value
    in some domain of the world but an index giving a label to
    F and G in the formula?

We haven't got as far as even being able to discuss any of that yet.
I've said nothing in this particular discussion about "variables",
"quantifiers", or even, heaven help us, "quotation".  Some readers,
okay, Howard and Seth, are simply failing to grasp the problem that
Peirce was aware of when he talked about "indices", "ingredients",
"umbrals" (Sylvester's word for it), and that he was trying to
resolve in some measure by bringing in teridentity relations.

There are syntactic, semantic, heaven help us, pragmatic layers
of issues here, but so far we are still trying to say how it is
that we are supposed to "establish" the bare syntactic grounds.

For instance, by what convention of context do any of the tokens
of the characters "x" and "y" in the following text have anything
at all to do with each other?

HC: But if we add an identity to the statement,
    then we get something more definite, as with,
    say, (Ex)(Ey) [(Fx & Fy) & Ixy].  Here "Ixy"
    establishes a cross-reference between the two
    variables, while without this, we only have the
    distinctive cross-reference of each variable to
    its corresponding quantifier.

You and I both know that some readers are simply failing to understand
what the "average mechanical parser" (AMP) has to quasi-understand in
order to act as if it quasi-grasps merely the syntactic bearing of
all these tokens on each other, for example, the "x" and the "y"
in the "Ixy" of the second sentence on the "x" and the "y" in
the phrase "(Ex)(Ey) [(Fx & Fy) & Ixy]" of the first sentence.

There is just a whole lot of hand-waving going on underneath the
cover of that magical spelling:  "establishes a cross-reference".

Until those readers acquire that minimal level of
syntactic consciousness, the capacity to reflect
on the actual ongoing semiotic process, they'll
simply never get what all the fuss is about,
keep on casting this spell on themselves,
and the wheels of 'their' progress will
continue to grind as slowly as they
always have so far.
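
Before moving on, here is a minimal sketch, in Python with made-up toy data,
of the brute-force bookkeeping that an average mechanical parser would have
to carry out for the formulas above.  Nothing in it is Peirce's or Howard's;
it only shows, in extension, how the Ixy clause is what forces the two
existential witnesses to coincide.

  # Toy domain and made-up monadic predicates, for illustration only.
  domain = {1, 2, 3, 4}
  F = {1, 2}                          # extension of F
  G = {4}                             # extension of G
  I = {(d, d) for d in domain}        # numerical identity, as a set of pairs

  # (Ex)(Ey)[ Fx & Gy ] : the witnesses for x and y are free to differ.
  weak = any(x in F and y in G for x in domain for y in domain)

  # (Ex)(Ey)[ (Fx & Gy) & Ixy ] : the identity clause ties the witnesses together.
  strong = any(x in F and y in G and (x, y) in I
               for x in domain for y in domain)

  # (Ex)[ Fx & Gx ] : one variable, one witness.
  single = any(x in F and x in G for x in domain)

  print(weak, strong, single)         # with this data: True False False
  # Put 2 into G and all three come out True, which is just HC's point that
  # the augmented formula says no more and no less than (Ex)(Fx & Gx).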

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Dogs Are 2-Peds, The Victorian Proof

The hind legs are unmentionable.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Howard, addressing Seth, wrote:

HC: I thought the following two quotations from a prior posting of
    yours especially interesting, so I am going to repeat them here:

SS: 1896 Tridentity decomposable
    ---------------------------------
    Now, identity is essentially a dual relation.
    That is, it requires two subjects and no more.
    If three objects are identical, this fact is
    entirely contained in the fact that the three
    pairs of objects are identical.  CP1.446 (1896).

SS: 1906 Tridentity not decomposable
    -----------------------------------
    ... there is one more reform that needs to be made in the system
    of existential graphs.  Namely the line of identity must be totally
    abolished, or rather must be understood quite differently.  We must
    hereafter understand it to be potentially the graph of teridentity
    by which means there always will be at least one loose end in every
    graph.  In fact, it will not be truly a graph of teridentity but
    a graph of indefinitely multiple identity.  CP4.583 (1906)
    -------------------------------------

A writer who is not merely speaking loosely but is trying to write with
any technical clarity would simply not summarize Peirce's first statement
under the heading that Seth has given it.  All apart from the question of
standard nomenclature, it is a further error to equate under the same head
the two distinct logical situations that Peirce is describing.  Now that
Howard seems compelled to compound and propagate these errors, I will
repeat what I said about them on the other thread:

| The statements about the various identity relations, taking "identity"
| in the sense that it has within the logic of relatives, not in, say,
| meteorology, you are just plain misunderstanding, by dint of removing
| these statements from the distinctive contexts in which each is true.
| There is a definition of "decomposable" that has to be observed here.
| This definition is invoked when one says that no 3-adic relation is
| "composed" of 2-adic relations.  It is not invoked in the statement
| that one fact is "contained" in several other facts, because the
| form of that style of "containment" involves the application of
| several other 3-adic relations, including 3-identity relations.
| In general, the fact that there exist k-identity relations
| I_k c X^k for each k = 2, 3, 4, ..., is one thing.  Which
| of them can be defined in terms of which others in which
| ways -- that is a whole manifold of different questions.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway
JA = Jon Awbrey

I will return to the examples that Howard gave.
To avoid being charged with the fallacy that is
classically known as "proof by title" he would
eventually have to demonstrate how all of the
cross-references that are necessary in logic
can be established through 2-adic relations
alone, but for now I would be content to
see him supply even a single worked-out
example of a non-trivial sort.

So let us look at some of what might be involved in that task.

HC_1:  But if we add an identity to the statement,
       then we get something more definite, as with,
       say, (Ex)(Ey) [(Fx & Fy) & Ixy].  Here "Ixy"
       establishes a cross-reference between the two
       variables, while without this, we only have the
       distinctive cross-reference of each variable to
       its corresponding quantifier.

JA: Counting the tokens of the lexical items "x" and "y" in the above text,
    I count 4 "x"'s and 4 "y"'s.  Can you explain to me the relationships
    that you see as being established among these eight various tokens?

HC_2: The basic cross reference is between a quantified variable, such as the
      "x" of "Fx" in "(Ex) Fx" and the corresponding variable of quantification,
      to be seen in the quantifier "(Ex)."   So, for instance, if we say,

      (Ex) (Fx & Gx)

      then the second and third "x"'s make cross reference to the first:
      imagine, if you will, that instead of repeating the "x" we drew a line,
      in somewhat Peircean style, from the 2nd and the 3rd positions back to
      the first.  This is to say that there is something x, such that it is
      an F and it is a G.  But, if we cannot be sure of co-reference inside
      a given expression, then we use distinctive variables to keep track of
      the distinctive lines of cross-reference.  This works somewhat like the
      distinctive genders of pronouns in those languages which mark nouns and
      pronouns for gender.

At this point Howard has yet to discuss all eight of the tokens
in HC_1 that I asked him about, so I have no way of knowing how
he "establishes a cross-reference" between the first and second
sentences.  So let me pass by the first question for the moment.

Turning to the text HC_2, I would like to ask the following questions:

1.  Supposing that we "drew a line, in somewhat Peircean style,
    from the 2nd and the 3rd positions back to the first", and
    ignoring for now the question of whether such a line would
    bear any family resemblance to the "Peircean style", allow
    me but to ask:  What manner and, more acutely, what degree
    of relation would that line have to signify?

2.  Let me credit Howard here for providing us with a significant
    bit of practical methodology for establishing cross-references
    in problematic cases, namely:

    | But, if we cannot be sure of co-reference inside a given expression,
    | then we use distinctive variables to keep track of the distinctive
    | lines of cross-reference.  This works somewhat like the distinctive
    | genders of pronouns in those languages which mark nouns and pronouns
    | for gender.

    However, this method raises some new questions in its application.
    For example, let us push on into the interior of the quantified
    formula in HC_1, to wit:

    (Fx & Fy) & Ixy

    When I look at that, with the type/token distinction fresh in mind,
    I naturally can't help wondering how one establishes co-references
    between the 2 tokens of "x" and the 2 tokens of "y", respectively,
    I mean, in terms of the logical relations that must be involved.
    So let me try to apply Howard's method.

    Introducing "distinctive variables", we arrive at this form:

    (Fx & Fy) & Iuv

    Well, that solves the type/token problem, since each
    of the types is now represented by a token sui generis,
    but in order to recover the originally intended sense
    of the formula, it is apparently necessary to extend
    it just a little bit, like so:

    (((Fx & Fy) & Iuv) & Ixu) & Iyv

    Well, that's the sort of thing that could go on forever,
    and I think you can see how more complex formulas would
    lead to whole new orders of problems with establishing
    co-references among localized tokens of given types.

    So the question is:  If 2-adic identity relations are
    useless for establishing co-references among tokens,
    what kind of logical resource might do the job?

Co-referential pre-established harmony, maybe?
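
For what it is worth, the regress can be mechanized.  The following rough
sketch in Python is a toy model only, not anything Howard proposed in code:
a formula is represented as a list of atoms, and each pass carries out the
"distinctive variables" step, replacing every repeated token of a variable
with a fresh name plus a 2-adic identity link back to the original.  Each
pass merely manufactures new repeated tokens for the next pass to worry about.

  from itertools import count

  fresh = (f"v{n}" for n in count(1))       # an endless supply of new variable names

  def separate_tokens(atoms):
      """Replace each repeated token of a variable by a fresh name and add
      a 2-adic identity atom that tries to re-establish the co-reference."""
      seen, new_atoms, links = set(), [], []
      for pred, *args in atoms:
          new_args = []
          for a in args:
              if a in seen:                 # a repeated token of the same type
                  b = next(fresh)
                  links.append(("I", a, b))
                  new_args.append(b)
              else:
                  seen.add(a)
                  new_args.append(a)
          new_atoms.append((pred, *new_args))
      return new_atoms + links

  phi = [("F", "x"), ("F", "y"), ("I", "x", "y")]     # (Fx & Fy) & Ixy
  step1 = separate_tokens(phi)
  step2 = separate_tokens(step1)
  print(len(phi), len(step1), len(step2))             # 3, 5, 9 : and so it goes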

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 8

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Relations in the sense here considered are known, more particularly,
| as 'dyadic' relations;  they relate elements in pairs.  The relation of
| giving (y gives z to w) or betweenness (y is between z and w), on the other
| hand, is triadic;  and the relation of paying (x pays y to z for w) is tetradic.
| But the theory of dyadic relations provides a convenient basis for the treatment
| also of such polyadic cases.  A triadic relation among elements y, z, and w might
| be conceived as a dyadic relation borne by y to z;w [the ordered pair (z, w)].
|
| Quine, 'Math Logic', p. 201
|
| W.V. Quine,
| 'Mathematical Logic', Revised Edition,
| Harvard University Press, Cambridge, MA, 1981.

This is one of the 'loci' that is frequently cited in claiming
that triadic relations decompose or reduce to dyadic relations.

What Quine fails to articulate here is that the general relation
of ordered pairs to their components is itself a triadic relation.

In particular, it has this shape:

o---------o---------o---------o
| First   | Second  | Pair    |
o---------o---------o---------o
| a       | a       | (a, a)  |
| a       | b       | (a, b)  |
| a       | c       | (a, c)  |
| ...     | ...     | ...     |
| b       | a       | (b, a)  |
| b       | b       | (b, b)  |
| b       | c       | (b, c)  |
| ...     | ...     | ...     |
o---------o---------o---------o

In graphical form we can draw each pairing-triple like this:

 x   y
  \ /
   o 
   |
 (x,y)

Thus, the ordered triple (u, v, z) of simple elements u, v, z
can be represented in the following two-step fashion:

 u   v
  \ /
   o 
   |
 (u,v)  z
   \   /
    \ /
     o
     |
     |
 ((u,v),z)

The moral of the story is that there is no way
to achieve synthesis without nodes of degree 3.

This has turned out to be a commonplace but a very fundamental fact
in many areas of math and computer science, theoretical and applied.
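
The same moral can be checked in extension.  Here is a small sketch in
Python, with a toy domain of my own choosing, in which the pairing operation
is exhibited as what it is, a set of triples, and the nested-pair
representation of a triple is seen to consume two applications of it,
one per degree-3 node in the diagram above.

  # A toy domain of "simple elements".
  X = ["a", "b", "c"]

  # The pairing operation, taken in extension over X, is a set of triples
  # <first, second, pair> -- its own arity is 3.
  pairing = {(x, y, (x, y)) for x in X for y in X}
  print(all(len(t) == 3 for t in pairing))    # True

  # Representing the ordered triple (u, v, z) as ((u, v), z) takes two
  # applications of pairing -- two nodes of degree 3, as diagrammed above.
  u, v, z = "a", "b", "c"
  first_node  = (u, v)                        # <u, v, (u, v)>
  second_node = (first_node, z)               # <(u, v), z, ((u, v), z)>
  print(second_node)                          # (('a', 'b'), 'c')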

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 9

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Relations in the sense here considered are known, more particularly,
| as 'dyadic' relations;  they relate elements in pairs.  The relation of
| giving (y gives z to w) or betweenness (y is between z and w), on the other
| hand, is triadic;  and the relation of paying (x pays y to z for w) is tetradic.
| But the theory of dyadic relations provides a convenient basis for the treatment
| also of such polyadic cases.  A triadic relation among elements y, z, and w might
| be conceived as a dyadic relation borne by y to z;w [the ordered pair (z, w)].
|
| Quine, 'Math Logic', p. 201
|
| W.V. Quine,
| 'Mathematical Logic', Revised Edition,
| Harvard University Press, Cambridge, MA, 1981.

With Quine's text in view, I can now add to my list of reasons
why the basic facts of 3-adic irreducibility and 3-identity have
continued to remain such a bother outside of mathematics and most
areas of computer science, where they have been considered trivial
observations since the time of Euler in the first case and Peirce
in the second, at the very least.

JA: Most of the controversy in other circles
    appears to turn on (1) not understanding
    the statement of the question, as it is
    generally understood, and as Peirce most
    definitely understood it -- this appears
    to be the problem with Quine's fallacy,
    since what he does prove is irrelevant
    and trivial with respect to the matter
    in question, (2) failing to define the
    terms that one is using in a way that
    makes the problem well-posed.

What Quine knew and when he knew it is not the business of my inquiry here.
I think that it is fair to refer to Quine's statement as "Quine's Fallacy"
because of the uses he and others have put it to, and because, if he knew
better, he simply did not take up the responsibility of making that clear.

Reason (3), that Quine so amply exemplifies at this point, is this:
Not understanding what a relation is.  I know that probably sounds
shocking, so let me explain.  We find a category of thinkers who
are perfectly capable of saying what a relation is, speaking in
extension, as is our concern here, they will quite facilely say:

| A k-adic relation is a set of k-tuples, a subset L c X^k, for
| an inclusive enough domain X and its k^th cartesian power X^k.

So far so good.

But when they come to speak on matters like the "composition",
the "decomposition", the "production", or the "reduction" of
a given relation in relation to a given set of relations, they
constantly fail to draw the correct conclusion about what that
means.  To facilitate the remainder of this discussion, let us
introduce the generic terms "(de-)generation" to range over all
of the above (de-)constructions in the obvious way.  Then, the
immediate consequence that they fail to appreciate is just this:

|  A relation is a set of tuples.
| ----------------------------------------------------------------
|  A generation of a relation is a generation of a set of tuples.

I will take it up from there next time.
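
Pending that next installment, here is one way of cashing out the
consequence, sketched in Python over the boolean domain.  It is not meant
as the promised continuation, just a standard check:  take a 3-adic
relation, literally a set of triples, project it onto its three sets of
pairs, and see whether the set of triples can be regenerated from them.
For the exclusive-or relation it cannot.

  from itertools import product

  B = (0, 1)

  # A 3-adic relation is a set of triples: here, the triples <x, y, z> with z = x xor y.
  L = {(x, y, x ^ y) for x, y in product(B, repeat=2)}

  # Its three 2-adic projections are sets of pairs.
  L12 = {(x, y) for (x, y, z) in L}
  L13 = {(x, z) for (x, y, z) in L}
  L23 = {(y, z) for (x, y, z) in L}

  # The largest set of triples consistent with those sets of pairs.
  join = {(x, y, z) for x, y, z in product(B, repeat=3)
          if (x, y) in L12 and (x, z) in L13 and (y, z) in L23}

  print(len(L), len(join), L == join)      # 4 8 False
  # The set of triples is not regenerated from its sets of pairs,
  # so whatever "generation" means, it has to respect that.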

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BU = Ben Udell

BU: Generally I'm not saying that there is no problem
    in computer programming like what you and Jon have
    been talking about.  I wish you or Jon could state
    the problem in a way that would make it easier for
    non computer-programmers like me to understand what
    it is, even if it means providing some oversimplified
    examples, or writing at the level where the reader of
    a popular science magazine like 'Scientific American'
    can understand.

I believe that the basic issues were written up in 'Popular Science' some time ago.
If I knew the emoticon for "not entirely tongue in cheek" (NETIC), I would insert
it at this point.  I have already recommended to you the second half of Smullyan's
book 'To Mock a Mockingbird' as a light-hearted but very non-shabby introduction to
combinator calc and combinator logic and many related issues.  The main reason that
some bitty.gritty.practical.engineering.untouchable.and.darn.near.unmentionable.caste
like computer programming comes into this garden of philosophical e-lightenment is
a bit like this:  It was only with the 20th Century re-discovery of the reality of
the information dimension and the critical importance of "effective description" in
the theory of computability (recursive function theory) that many people started to
appreciate many of the things that Peirce had been talking about a half-century before.
Most of the people who went through these intellectual revolutions, with a handful of
notable exceptions, had never even heard of Peirce, in spite of using conceptual frames
and systems of notation that Peirce had played the lion's share in shaping, and so they
had to learn all this stuff all over again the hard way.

The key notion in computability theory is that of "effective description".
Like the notion of "operational definition", this is easily recognized as
yet another variation on the theme of the "pragmatic maxim".  When we are
talking about the effective description of the objects that we imagine we
denote by means of our informal concepts, say, like the square root of 2,
and the effective description of the actions that it would take to begin
producing a less obscure approximation to it, it is almost an incidental
motiv that we have built machines which have the capacity to act in ways
that "model" these effective specifications.

Formal intellectual procedures like computations and proofs -- and it is one of
the leitmotivs of recursive function theory that computations, rightly conceived,
have very much the same structures as proofs by mathematical induction -- are
examples of semiosis, that proceed from obscure signs of objects, say, "2+2",
to clear or canonical signs of the same objects, in this case, "4" for 4.
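
As a throwaway illustration of that last point, and nothing more, here is
a toy evaluator in Python that carries a sign of the number 4 through
successive stages, from a relatively obscure sign to the canonical one:

  def step(expr):
      """Perform one leftmost-innermost addition, yielding the next sign in the series."""
      if isinstance(expr, int):
          return expr
      op, a, b = expr
      if isinstance(a, tuple):
          return (op, step(a), b)
      if isinstance(b, tuple):
          return (op, a, step(b))
      return a + b                          # both arguments already canonical

  sign = ("+", ("+", 1, 1), ("+", 1, 1))    # an obscure sign of the number 4
  while isinstance(sign, tuple):
      print(sign)
      sign = step(sign)
  print(sign)                               # 4, a canonical sign of the same object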

This is what Frege was talking about with all that Hesperus/Phosphorus business.
He was a mathematician, and his audience would have understood his illustration
as referring to their common problems about diverse descriptions denoting the
same formal objects -- they all knew that he did not care a Fig. 1 about this
example taken literally.  It is like Bernard's comment that we should not let
the illustration of the theory reduce us to the theory of the illustration.

My best bet at present for making any of this clear is to continue with
the "Critique" and the "New List" threads.  I hope eventually to give
the careful reader a sense of how Peirce would have read the logical
formulas that he wrote, and to put that in a comparative context
with the way that many of us apparently learned was the only
way possible to read them.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 10

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

CL = Cathy Legg
HC = Howard Callaway

CL: Ok, but can you give me an example of something which is
    expressible by, say, the alpha graphs and not propositional
    (ahem!) 'logic', or by the beta graphs and not predicate logic?
    What is the pragmatic difference between the two?  Your discussion
    with Ben re 'variables' vs 'indices' might, I guess, be heading in
    this direction, but can you give me something more specific?

HC: Good question, Cathy.  I'd be interested to see what Bernard may
    have to say in reply to this.  Of course I've been arguing that
    Peirce's indices are pretty much like variables, though I am
    willing to find out differently.  I'm afraid, though, that
    if there are interesting differences in this respect or 
    otherwise, then we are yet to see them very clearly.

Here is what I think that Peirce has already said in pre-ply to this:

http://suo.ieee.org/ontology/msg04336.html

In particular, let me call your attention to the concluding observation:

| With these two kinds of signs alone any proposition can be expressed;
| but it cannot be reasoned upon, for reasoning consists in the observation
| that where certain relations subsist certain others are found, and it
| accordingly requires the exhibition of the relations reasoned within an
| icon.  It has long been a puzzle how it could be that, on the one hand,
| mathematics is purely deductive in its nature, and draws its conclusions
| apodictically, while on the other hand, it presents as rich and apparently
| unending a series of surprising discoveries as any observational science.
| Various have been the attempts to solve the paradox by breaking down one
| or the other of these assertions, but without success.  The truth, however,
| appears to be that all deductive reasoning, even simple syllogism, involves
| an element of observation;  namely, deduction consists in constructing an icon
| or diagram the relations of whose parts shall present a complete analogy with
| those of the parts of the object of reasoning, of experimenting upon this image
| in the imagination, and of observing the result so as to discover unnoticed and
| hidden relations among the parts.  (CP 3.363).
|
| C.S. Peirce, 'Collected Papers', CP 3.363.
| "On the Algebra of Logic" (1885, CP 3.359-403).

It is difficult to overestimate the significance of the
fundamental mathematical insight that Peirce sums up here.
It is the 'pons asinorum', the 'sine qua non', the basic
faculty of vision that is needed to carry on mathematical,
indeed, all quasi-observational reasoning.  It is what many
mathematicians feel, and some have troubled themselves to
say, would make Gödel's Inexhaustibility Proof redundant.

Many of the most frustrating and apparently futile discussions that I have
had in various places over the last three years have turned, or failed to
turn, on this difference between "power of expression" (POE) and
"power of reasoning" (POR).  There are many who consider this post hoc
equipollence of expressibility to be some kind of perfect squelch --
like all those who fail to grasp the pragmatic difference between the
sciences of discovery and the sciences of review -- they always seem
to imagine themselves abiding at the end of inquiry, if not the end
of history, where it does not really matter all that much if you
sum it all up in roman numerals or tally marks, or some other
system of notation that would have rendered the discoveries
impossible in practice in the first place.

PS.  For convenience, I include the full paragraph here:

| I have taken pains to make my distinction of icons, indices,
| and tokens [more frequently called "symbols"] clear, in order to
| enunciate this proposition:  in a perfect system of logical notation
| signs of these several kinds must all be employed.  Without tokens there
| would be no generality in the statements, for they are the only general
| signs;  and generality is essential to reasoning.  Take, for example, the
| circles by which Euler represents the relations of terms.  They well fulfill
| the function of icons, but their want of generality and their incompetence
| to express propositions must have been felt by everybody who has used them.
| Mr. Venn has, therefore, been led to add shading to them;  and this shading
| is a conventional sign of the nature of a token.  In algebra, the letters,
| both quantitative and functional, are of this nature.  But tokens alone do
| not state what is the subject of discourse;  and this can, in fact, not be
| described in general terms;  it can only be indicated.  The actual world
| cannot be distinguished from a world of imagination by any description.
| Hence the need of pronouns and indices, and the more complicated the subject
| the greater the need of them.  The introduction of indices into the algebra
| of logic is the greatest merit of Mr. Mitchell's system.  He writes 'F'_1
| to mean that the proposition 'F' is true of every object in the universe,
| and 'F'_u to mean that the same is true of some object.  This distinction
| can only be made in some such way as this.  Indices are also required to
| show in what manner other signs are connected together.  With these two
| kinds of signs alone any proposition can be expressed;  but it cannot be
| reasoned upon, for reasoning consists in the observation that where certain
| relations subsist certain others are found, and it accordingly requires the
| exhibition of the relations reasoned within an icon.  It has long been a puzzle
| how it could be that, on the one hand, mathematics is purely deductive in its
| nature, and draws its conclusions apodictically, while on the other hand, it
| presents as rich and apparently unending a series of surprising discoveries
| as any observational science.  Various have been the attempts to solve the
| paradox by breaking down one or the other of these assertions, but without
| success.  The truth, however, appears to be that all deductive reasoning,
| even simple syllogism, involves an element of observation;  namely, deduction
| consists in constructing an icon or diagram the relations of whose parts shall
| present a complete analogy with those of the parts of the object of reasoning,
| of experimenting upon this image in the imagination, and of observing the result
| so as to discover unnoticed and hidden relations among the parts.  (CP 3.363).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 11

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The difference between a realist and a personal infallibilist
| is like the relation between a monotheist and a theomaniac.
|
| It is the difference between
| one who thinks that God is one
| and one who thinks that one is God.

J: Finally, I will just point out that your continuing projection of the
   3-fold (tone, token, type) upon the 2-some (particular, universal) is
   causing more than a bit of distortion in the texts of Peirce you read.

S: The subject heading was used to identify a thread, not the subject
   of my recent posts.  Perhaps it would have been better to change
   the subject heading, as I have now done.  The thread evolved into
   a discussion of identity owing to my assertion that "types" are
   criteria of identity for tokens.  I had not engaged in the
   discussion of "tone," nor did I think that the triad,
   tone, token, type, represented an application of
   Peirce's categories.

That's not what I meant.  What I am trying to point out is just what I got from
reading the various quotations that we collected.  In the more complete ones,
Peirce talks about types, tokens, tones altogether, and I think that there
is a reason for this, namely, that a type, in Peirce's sense, involves
something more than a mere property or a universal -- which, I gather,
is more like the pure tone -- it involves a natural kind or a law.
The tokens of the word "the" on a page, or on the web, fall under
a type, not just because they have a particular configuration of
geometric shapes, but because they fall under a law whereby they
function as a particular type of definite article for the readers
thereof.  That is just my first impression of what I read, but it
does seem to fit in what I have read about the "law of the symbol"
in another context.  Read this way, the question of types connects
us back to the Kantian issue of natural kinds, synthetic a priori's,
and all that jazz that we know Peirce was deeply motivated to grasp.

J: If one finds even the simplest question, for instance,
   whether mass is a "property" of a physical "entity",
   one whereof one must be silent, then does it not
   appear that the issue of Leibniz's principle is
   not so much whether it is true, just yet, as
   what in the heceity it means?

S: Your question, "Is mass a property of a physical entity?" was irrelevant to
   the point at issue, and indeed, the very fact that you asked it suggested
   that you either had not read or had not understood my posts nor even some
   of the Peirce quotes that you posted.  The subject was the identification
   of "individuals", as Peirce generally used that term.  An optical image
   may be an "individual" (CP1.458), which you would have known if you had
   read the post to which you were replying.

Perhaps it seems irrelevant to you because you take as settled
a question that I do not.  My memory of last week is beginning
to dim a bit, so let me take a moment to find your first note ...
I think it was this:

SS: Quine objects to properties on the ground that we do not have
    criteria of identity for properties.  For Quine, one would need
    criteria of identity for classes (or properties, if one insists
    on admitting them) but no criteria of identity for individuals;
    individuals ARE criteria of identity for Quine.  To know whether
    class 'a' = class 'b', one looks to the individuals that they
    (or member classes) contain. 

SS: As a realist, I take exactly the opposite view.  I would say
    that one needs criteria of identity for individuals but not
    for properties (or at least not for all properties) since
    properties ARE criteria of identity.  (Of course, Quine --
    like Russell in his nominalistic phase -- has a certain
    amount of trouble in specifying what is an individual;
    time-slices and all that.)

Okay, that reminds me why I keep bringing up the "doctrine of individuals" quote.
Before you can even get to the questions of criteria of sameness for individuals
or criteria of sameness for "dividuals", you have to ask the question:  Gee, can
I really tell the difference between dividuals and individuals?  Peirce's point
is that you can't really, not in any absolute essential invariant necessary way,
but that you merely impose a distinction by convention, whose excuse is that it
has some sort of utility relative to a particular frame of discourse.  Sorts of
predicate calculus that start out by declaring two disjoint alphabets of signs
called "individual constants and variables" and "predicate letters" are simply
assuming that this distinction can be made, and so they will ever after lack
    the capacity to examine this stipulation in a critical and reflective way,
which is, after all, one espoused criterion of what makes a philosophy.

So, to sharpen the pertinence of my question once again:
One of the unclarities of a phrase like "all properties"
is just what counts as a property, and apparently that
is still far too stunning a question to be answered.

SS: Peirce reminds us that the Schoolmen distinguished between "individuals"
    proper and "singulars".  Peirce usually adopted the common idiom, using
    the word "individual" to denote an object possibly enduring in time,
    however briefly, for which the Schoolmen would have used the term
    "singular".  "Individuals proper" are ideal entities which Peirce
    sometimes called "logical atoms".

I did read these quotations as I was copying them out.
He made the classic mistake of trying to compromise
with a degenerate usage.  I consider it a lesson.

J: Why do you call the conflating of identity with similarity a "realist" position?
   For that matter, why not call your "relative identity" by the name "similarity"?
   The use of "relative" in this way, to refer to a universal or an absolute term,
   seems to be just begging for trouble.  Moreover, it introduces a confound with
   all of the other sorts of relativity that might be involved in predication.

S: By a "realist's position, I mean one that assumes that universals --
   properties and relations, forms -- are what they are independently of
   what anyone may think.  They are not reducible to thoughts in someone's
   head, they are not words, and they are not the 'flatus vocis' of Roscellin.

Yes, but it's not an either-or.  To assume that some generals exist is not yet to
think that we have "direct unmediated knowledge of them as they are" (DUKOTATA).
Our knowledge of them is as represented through signs in a semiotic medium,
which is why Peirce defines logic as "formal semiotics".  And, by the way,
that knowledge is always partial, no matter what the objects are.

S: By a "theory of identity," I mean a theory that deals with a cluster
   of philosophic problems, mostly concerned with how we identify the
   denotata of nouns and noun phrases, or how we determine whether
   two noun phrases refer to the same things.  (This is to cast
   the problem in the area of semantics.  There are other, in
   some eyes, more strictly "logical" problems associated with
   the concept of "identity", but most of them crop up in a
   discussion of the semantic aspect of the problem.)

Of course, the Peircean perspective on the relation of logic
to sign relational matters will be very different from that.

S: A nominalist cannot appeal to identifying properties or characters.
   He doesn't believe in their reality.  I have taken Quine as a prime
   example of a nominalist, because he is so explicit about it (though
   he calls himself an "extensionalist" rather than a "nominalist" since
   he admits "classes" as universals.)  He rejects talk of "properties"
   or "relations" as distinct from classes, and he makes the criteria of
   identity for classes the individuals they contain.

S: When Quine speaks of "relations", he means ordered sets (which is
   probably one of the reasons he is able to "reduce" all relations
   to binary relations, i.e., ordered pairs).

If you appreciate what Peirce is saying in the "doctrine of individuals" quote,
you will realize that the nominal thinker cannot get off the hook this easily,
because the ostensible distinction between predicate terms and individual terms
is "interpretive", or relative to the community of interpretation, even if it is
just an army of one.  It is always possible that it will be found out to be an
artefact of the interpreter, not a property of nature.  Being a realist does
not make one infallible.  Believing that invariants exist does not mean that
the ones you think you know are the ones that are.  That would constitute
what we call "infallibilism".

Keeping the nominal aspects of the thinking of different "analytic realists"
straight is a problem for many.  Russell continued to promote his "no class"
theory to the very end, so far as I know, and most other nominalists I have
read are suspicious of sets and classes along with properties and intensions.
Hence, all the rampant mereology these days.

S: Notice that for a nominalist such as Quine, one can identify an individual by
   pointing at it, though there is always some ambiguity -- some inscrutability
   of reference.  Once one identifies individuals, the identification of classes
   is no problem for Quine.  Classes are identical if they contain the same
   individuals.  Traditionally, going back to the Scholastic Realists and
   arguably even to Aristotle, realists say that we identify the referents
   of nouns by certain "defining" or "essential" properties.  Two expressions
   are said to refer to the same entities if and only if they have the same
   defining properties or "essence".  Mere "similarity" requires identity
   only of non-defining properties.

S: But just as Quine has trouble in identifying individuals by pointing,
   so the realist has trouble with the vagueness of essential properties.
   How big is big?  How blue is blue?  Needless to say, hosts of problems,
   both logical and epistemic, are associated with these views, which is
   one of the reasons "identity" is such an interesting subject.

I did not say that it was not an interesting subject.
Geometry is an interesting subject.  But it's a many,
not a one, and I do not expect that there will arise
any sort of unique answer to the questions involved
in the parallel axiom, not in mathematics, and even
many physicists think that the "geometry of nature"
is more than likely far too exotic to make the whole
class of classical models anything more than a very
rough approximation.  Peirce addressed the issue of
identity in a very sophisticated and insightful way,
and that is about the most I expect from any thinker.

SS: And various alternatives to these traditional polar positions have been
    mooted, including family resemblances, prototype theory, etc.  But your
    question was:  why do I not speak of "similarity" rather than "identity".
    My answer is that only a nominalist would ask that question.

Now you are just being silly.

Hmm, but we have been seeing some strange kinds of realism lately.
Perhaps I have to have a Principian proof that there is a reality?
As if a person is only a theist if he/she has a proof that God is?
Neglected Arguments need not apply.

J: Why do you call "numerical identity" the "degenerate" form of "relative identity",
   and why do you call your "relative identity" by the name "identity simpliciter"?

S: I use 'degenerate' in the sense Peirce used it, to refer to an extreme
   or limiting case, one that has unique properties not shared by the cases
   of which it is the limit.  Peirce gives an example:  "Conic sections are
   either the curves usually so called, or they are pairs of straight lines.
   A pair of straight lines is called a 'degenerate' conic."  (CP1.365).

You are misunderstanding the use of the term.  Not all extreme or
limiting cases are "degenerate".  The "uniqueness" of degenerate
cases is their uniquely impoverished position within their genus.
The word "identical" is normally used in ordinary or formal talk
for things that are "very much alike".  If you say that two things
are "relatively identical" simply because they both exist in the
same known universe, people would say that you are using the word
in a "degenerate" way.

S: This brings me to Leibniz's law, which is often expressed in two clauses
   (1) Indiscernibles are identical, (2) Identicals are indiscernible.
   ("Discernibility" here means differing in features;  it is not
   intended to be a psychological or subjective concept.)

I discern a distinction that makes a difference
between "interpretive" and "subjective".  Everyone
understands the importance of context in our informal
discussion and reasoning.  I see no reason to dispense
with it in our attempted formalizations, for all their
normative intent, indeed, their normative intent is a
part of their context, their purpose, their purport.

SS: Only the first clause holds for what I have called "relative identity"
    or "identity simpliciter".  Both clauses hold for numerical identity,
    since entities which are numerically identical share all properties
    (including relations).   Since numerical identity is a limiting case
    with a unique feature, it seems fair to call it a "degenerate" case
    of identity.

No, that is just not how the word is used.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 12

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JA: I wish that you would try every now and then reading what Peirce writes
    without trying to atomize each and every remark, if not the man himself,
    according to your true-false checklist of dichotomies, especially since
    the most casual reader of Peirce would know that he would consider your
    attempt to pit extensions versus intensions (properly "comprehensions")
    to be an utterly false and misleading antagonism.

SS: You baffle me!  I have never attempted to "pit extensions versus intensions".
    I begin to wonder if you have ever read late 20th century philosophy of logic
    which you hold in such contempt.  You could perhaps accuse me of pitting
    "extensionalism" (which is a form of nominalism) against "realism"
    (in the sense defined above), or even of contrasting "extensional"
    and "intensional" 'languages', where an extensional language is
    a language that permits free substitution of expressions having
    the same extension 'salva veritate', whereas an intensional
    language allows free substitution only of words having
    the same intensions. 

Let me try to explain something.  I have very little use for isms of any kind.
I even make an effort to speak of "pragmatic method" or "pragmatic thinking"
in order to avoid the pernicious effects of turning it into an ism.  Now,
if I had my choice in the matter, I would use "ism" to refer to a way of
looking at a subject:  a POV that emphasizes a particular aspect of the
more solid reality, or a methodology that specializes in a particular
set of heuristic strategies for approaching the inexhaustible realm
of phenomena.  From this POV, which you may call "anti-ism-ism" if
you really need a label, denotations and connotations, extensions
and intensions, or whatever it may be, are just facets of facts,
and there is really no need to eliminate some of them in favor
of the others.  But that option is denied me by contemporary
conditions in philosophy.  Twentieth Century philosophy was
this pitched battle of opposing reductionisms that made it
impossible to say something in favor of one way of looking
at things without being interpreted as trying to eliminate
the other.  I did not start the fire.

Peirce had a "language" and a mind that allowed him to speak and to think
with equal facility about each of these various aspects as related to an
integral subject matter.  It does not appear that 20th Century thinkers
are yet able to read him through the reticles of their canons.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 13

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SS: It is interesting that you take such a pollyannaish view of Peirce.
    He was not averse to speaking of "isms", his own as well as those of
    others, and he was certainly willing to attack the "isms" of others
    doggedly.  You say you "did not start the fire".  In fact, Jon, the
    contempt that you are so ready to show for others does start fires.

I can but suggest to you the following possibility.  That there are people
who see these "isms" rather differently that what you appear to express here.
They regard even such apparently "diehard with a vengeance" issues as nominal
versus platonic as being no more than exploratory heuristics.  They do not see
the purpose of inquiry as being to catapult oneself as quickly as possible to
the day of judgment, when all of the platonic elect, say, will go to platonic
heaven, and all of the nominal sinners, say, will follow their sops to hell,
or maybe vice versa, as the case may be, but as another sort of pilgrimage
than that.  They may have come to the honest opinion that what works best,
and what they therefore feel duty bound to keep reporting to others as
working best, will often depend not so much on some kind of reductive
strategy of elimination, which they suspect may be nothing more than
a short-sighted technique for reducing uncertainty in the short run,
but rather depend on finding the integration of intellectual facets
that is adequate to the thing itself.  A careful reading of Peirce,
I believe, shows that he did not disagree with everybody about all
things, but was a bit more reflective and selective than that, and
he even understood what Ockham was really talking about far better
than those who would not call themselves Ockhamists if they could
trouble themselves to read Ockham.  I believe that this is one of
the reasons that you think Peirce changed so radically about this.
The fact is, like most sensible problem-solvers, he always knew
the sense of starting with the simple guesses first, and it is
mainly as the problems get tougher that you learn to give up
on a failed simplicity.  Some people just never learn this,
so we have to endure their unexperienced teachings about
matters that they have never really explored very far.
I hope that this informs you of my present "position".
Another day, perhaps, we will speak of "momentum".

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 14

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

My mention of "position" and "momentum" is
meant to be an allusion to the whole matter
of "particles" and "waves", which has gotta
be one of the first false oppositions that
I ever got catechized into my grey matter.

A false dichotomy is one that is presented
as a dilemma between A and ~A, in the same
sense, of the same thing, at the same time,
when it's not really that kind of relation.

Position versus Momentum, like all of the other
complementary aspects or conjugate variables of
physics or psychology or philosophy or whatever --
where complementary angles are those that unite
in a right angle, and do not summa to opposites --
are not a good basis for a pro-ism/con-ism pair.

Thus, they are more like the adic case than the tomic case:

q     p
 ^   ^
  \ /     instead of    q <---@---> p
   @

The matter of Wave vs Particle does not docket a case of A vs ~A,
but a case of an A and a B that constitute complementary aspects
of whatever the hec it is that is going on immanent/transcendent
to the physical phenomenon in question.

So we have to watch out for the pseudon protons.

The way I see it, <denotation | connotation>, <extension | intension>,
and many others, are very likely complementations and not oppositions.

Sincerely Yours,

Poly-Ana (that's Greek for "many ways up, and back, and again")

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 15

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Re:  Quine's supposed "bisection of the triad".

SS: When Quine speaks of "relations", he means ordered sets (which is
    probably one of the reasons he is able to "reduce" all relations
    to binary relations, i.e., ordered pairs).

I will make another attempt to explain what is going on here.
We had a long wrangle on the Standard/Ontology lists over this,
and it left me too burned out even to look up the links right now,
but if you keep pushing it ... consider that your fair warning.

Quine is just wrong here.  And what he is commonly taken to have said
is even wronger still.  As for Quine, the errors are so glaring that
I can only guess that he must have been operating under the influence
of a strong reductionist toxin, perhaps a mutant strain of behaviorism,
or something equally stupefying.

Up til now, I have mainly focused on what Peirce meant by saying that
triadic relations are irreducible, with special reference to the way
that he pictured the obviousness of it all in the Existential Graphs.
Now, if one grasps the morphism between relations and graphs, then
the basic fact about graphs was already proved by the one who is
commonly recognized as the first graph theorist, namely, Euler.
So the only wiggle room here is in denying the aptness of the
putative morphism h : Relations -> Graphs.  But the facts
are clear enough in the source domain, at any rate.

As far as what Peirce actually claimed, it is a mathematical fact.
Though less familiar, it is literally a more elementary fact than
the facts that 2, 3, 5, 7, 11, 13 are prime numbers, since these
facts would take a bit of proving from a suitable axiomatic basis,
while the fact that the set of 2-adic relations is closed under
ordinary relational composition is simply a matter of definition.
To be ignorant of that definition is a severe 'ignoratio elenchi'.

Down from this scene, it is possible to define other sorts of algebraic
operations on relations or relative terms -- Peirce and his students,
especially Christine Ladd, later Franklin, were especially ubertous
in thinking up new ones -- but all of these involve the use of basic
logical operations like conjunction and disjunction in the mix, and
so they do not bear on the validity of the original question, since
"binary operations are ternary relations", as my very first abstract
algebra book once put it.
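
That textbook remark is easy to spell out in extension.  A minimal sketch
in Python, with a toy carrier set of my choosing:

  # A binary operation on a toy carrier set: addition modulo 3.
  carrier = range(3)

  # Its graph, taken in extension, is a set of triples <x, y, x+y mod 3>:
  # a binary operation is a ternary relation.
  graph = {(x, y, (x + y) % 3) for x in carrier for y in carrier}

  print(sorted(graph))
  print(all(len(t) == 3 for t in graph))    # True: every instance is a triple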

But if we put aside the mere technicality of what Peirce actually said,
you must try to comprehend what a total no-brainer this whole thing is.

The very notion of putting two things together
to produce a third involves a triadic relation!

Ergo, all notions of analysis, composition, reduction, synthesis, whatever,
contain a notion of triadic relations as a part of their very constitution.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 16

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Re:  Quine's supposed "bisection of the triad".

I get the feeling that maybe Lyris is playing the role
of Descartes' Demon's Evil Twin here, so I will recopy
my earlier posting with the quotation from Quine.

This remark, its equivalents in other places, and
the 1953 "Reduction to a Dyadic Predicate" paper,
are the ones that most folks cite on this issue.
I will copy out the 1953 paper later on, when
I figure out how to asciify it, but the first
thing that I mark in his ostentatious 2-adic
predicate F is this big "V" sign -- Pynchon
fans take note! -- for disjunction, or as
Quine would call it, "alternation", right
smack dab in the middle of it.  But later.

| Relations in the sense here considered are known, more particularly,
| as 'dyadic' relations;  they relate elements in pairs.  The relation of
| giving (y gives z to w) or betweenness (y is between z and w), on the other
| hand, is triadic;  and the relation of paying (x pays y to z for w) is tetradic.
| But the theory of dyadic relations provides a convenient basis for the treatment
| also of such polyadic cases.  A triadic relation among elements y, z, and w might
| be conceived as a dyadic relation borne by y to z;w [the ordered pair (z, w)].
|
| Quine, 'Math Logic', p. 201
|
| W.V. Quine,
| 'Mathematical Logic', Revised Edition,
| Harvard University Press, Cambridge, MA, 1981.

With Quine's text in view, I can now add to my list of reasons
why the basic facts of 3-adic irreducibility and 3-identity have
continued to remain such a bother outside of mathematics and most
areas of computer science, where they have been considered trivial
observations since the time of Euler in the first case and Peirce
in the second, at the very least.

JA: Most of the controversy in other circles
    appears to turn on (1) not understanding
    the statement of the question, as it is
    generally understood, and as Peirce most
    definitely understood it -- this appears
    to be the problem with Quine's fallacy,
    since what he does prove is irrelevant
    and trivial with respect to the matter
    in question, (2) failing to define the
    terms that one is using in a way that
    makes the problem well-posed.

What Quine knew and when he knew it is not the business of my inquiry here.
I think that it is fair to refer to Quine's statement as "Quine's Fallacy"
because of the uses he and others have put it to, and because, if he knew
better, he simply did not take up the responsibility of making that clear.

Reason (3), that Quine so amply exemplifies at this point, is this:
Not understanding what a relation is.  I know that probably sounds
shocking, so let me explain.  We find a category of thinkers who
are perfectly capable of saying what a relation is, speaking in
extension, as is our concern here, they will quite facilely say:

| A k-adic relation is a set of k-tuples, a subset L c X^k, for
| an inclusive enough domain X and its k^th cartesian power X^k.

So far so good.

But when they come to speak on matters like the "composition",
the "decomposition", the "production", or the "reduction" of
a given relation in relation to a given set of relations, they
constantly fail to draw the correct conclusion about what that
means.  To facilitate the remainder of this discussion, let us
introduce the generic terms "(de-)generation" to range over all
of the above (de-)constructions in the obvious way.  Then, the
immediate consequence that they fail to appreciate is just this:

|  A relation is a set of tuples.
| ----------------------------------------------------------------
|  A generation of a relation is a generation of a set of tuples.

I will take it up from there next time.

SS: When Quine speaks of "relations", he means ordered sets (which is
    probably one of the reasons he is able to "reduce" all relations
    to binary relations, i.e., ordered pairs).

Once again, taken in extension, which is sufficient, and at any rate
what Peirce was talking about -- since a consistent comprehension is
one that is capable of having an extension, there is really no wffle
room to be had by taking that old intensional dodge -- the definition
is that k-adic relations are SETS of ordered k-ples, not just ordered
k-ples simpliciter, and certainly not "ordered sets".  Therefore, what
it means to de-compose a 3-adic relation into a composition of 2-adic
relations is to de-compose a set into a composition of two other sets.
And what that means is given by a definition that all mathematically
literate people know.  And it happens to be an important fact, about
as important as knowing that 2, 3, 5, 7, 11, 13, and an infinitely
large number of others to be named later, are prime numbers in
ordinary arithmetic, maybe even moreso.
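
The definition in question is short enough to mechanize.  A minimal sketch
in Python, with made-up data:  the ordinary relational composition of two
sets of pairs is again a set of pairs, so composing 2-adic relations, by
that definition, never hands you a set of triples.

  def compose(P, Q):
      """Ordinary relational composition of two 2-adic relations (sets of pairs)."""
      return {(x, z) for (x, y) in P for (w, z) in Q if y == w}

  P = {(1, 2), (2, 3)}                      # made-up 2-adic relations
  Q = {(2, 4), (3, 5)}

  R = compose(P, Q)
  print(R)                                  # {(1, 4), (2, 5)}
  print(all(len(t) == 2 for t in R))        # True: the composite is again 2-adic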

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 17

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JC = John Collier

JC: I think from the line of argument below, which is none
    too clear in itself, that we can see an appeal to the way
    things are represented (graphs) to their actual properties.
    This appears to me to be a pretty fundamental category error.

It is not a category error.  It is a morphism between categories,
h : Relations -> Graphs, mapping relation arities to vertex degrees,
preserving the pertinent properties under compositions on each side.
This is helpful to some people, but it's not a proof.  In any case,
a proof is not required, since the fact at issue is a definition.

JC: Relations and graphs are not the same.
    Graphs are representations.
    Relations are not.

Graphs are a category of mathematical objects,
not to be confused with their representations,
whether you mean "representations" in the
mathematical or the sign-theoretic sense.

Representations in mathematics are just morphisms,
typically from a space to a group of automorphisms,
but that is only indirectly related to this issue.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 18

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JC: It is possible for a relation to be triadic and
    for it to be reducible.  I gave examples in my
    post.  There is nothing especially difficult or
    tricky in what I am saying.  Quite the contrary.
    Between, as I mentioned, is a triadic relation, but
    it can be composed of dyadic relations.  For example,
    if we say that the ham is between the upper and lower
    slices of bread, that is equivalent to saying that it
    is below the upper and above the lower.  More formally
    Between(ham, upper, lower) iff Above(upper, lower) and
    Above(ham, lower) and Above(upper, ham).  Between is
    clearly a triadic relation, and the location of the
    ham in this case involves it, but the relation can
    be analyzed fully into dyadic relations.

John Collier has provided us with a perfect example --
a perfect example of someone who does not know what
a relation is, does not know what the difference
between a relation and one of its instances is,
does not therefore know what either a 3-adic
or a 2-adic relation is, and does not know
what decomposition or reduction is.

No wonder he buys Quine's brand of baloney.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 19

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JC: It is possible for a relation to be triadic and
    for it to be reducible.  I gave examples in my
    post.  There is nothing especially difficult or
    tricky in what I am saying.  Quite the contrary.
    Between, as I mentioned, is a triadic relation, but
    it can be composed of dyadic relations.  For example,
    if we say that the ham is between the upper and lower
    slices of bread, that is equivalent to saying that it
    is below the upper and above the lower.  More formally
    Between(ham, upper, lower) iff Above(upper, lower) and
    Above(ham, lower) and Above(upper, ham).  Between is
    clearly a triadic relation, and the location of the
    ham in this case involves it, but the relation can
    be analyzed fully into dyadic relations.

JA: John Collier has provided us with a perfect example --
    a perfect example of someone who does not know what
    a relation is, does not know what the difference
    between a relation and one of its instances is,
    does not therefore know what either a 3-adic
    or a 2-adic relation is, and does not know
    what decomposition or reduction is.

JA: No wonder he buys Quine's brand of baloney.

John Collier has done us the yeoman's service of exemplifying
almost all of the commonly known fallacies in a single short
paragraph.  I have a bit more time now, so let us run through
these points in detail.

Zeroth, I will allow that there may be something that JC
is talking about under the heading of the word "relation",
but I have yet to see a coherent account of it, and I do
not hold out much hope for its consistency.  I am pretty
sure that it bears no consistent relation to what Boole,
DeMorgan, Peirce, Schroeder, and omnes generationes of
logicians and mathematicians, before and after, meant
by the words "relation" or "relative term".  Since
this crew got there first, even Plato and Aristotle
knew better, I suggest that others choose another
word, so as to avoid confusing the masses.

I analyzed the "betweensy" example in some detail for the
Standard/Ontology Lists last year and I will look up those
links for you later on.  In the meantime, here is a summary,
far succincter than I have ever done before, of the pertinent
material, as the pertinent thinkers and tradition understand it:

Theory Of Relations

01.  http://suo.ieee.org/ontology/msg04377.html
02.  http://suo.ieee.org/ontology/msg04378.html
03.  http://suo.ieee.org/ontology/msg04379.html
04.  http://suo.ieee.org/ontology/msg04380.html
05.  http://suo.ieee.org/ontology/msg04381.html
06.  http://suo.ieee.org/ontology/msg04382.html

Before we start, let us dismiss JC's inane example, since his "analysis"
of it fails if the plane of the sandwich is vertical to the surface of
the nearest-by large planet, if the sandwich is flipped over, or if the
sandwich is floating in space "between" galaxies -- which incidentally
illustrates the fact that the English "between" is polymorphous, since
"among" would not substitute 'saliva verita' (sic joke), and I never
said anything about there being just two galaxies, but never mind
all that now.

A fairer example would be to pull the mathematician's
"without loss of generality" (WOLOG) gambit, and say
that we are discussing, say, real numbers a, x, b,
where we assume a < b, WOLOG, and then revert to
the 3-adic situs where "x is between a and b".
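
Just to make the shape of that example concrete, here is a minimal sketch
in Python -- purely illustrative, assuming nothing beyond the ordinary
order on the reals -- of the betweenness predicate as the indicator of
a 3-adic relation:

def between(x: float, a: float, b: float) -> bool:
    """Indicator of the 3-adic relation "x is between a and b", WOLOG a < b."""
    return a < x < b

# One triple is a relation instance; the relation itself is the whole
# (here infinite) set of triples <x, a, b> satisfying a < x < b.
print(between(1.5, 1.0, 2.0))   # True
print(between(2.5, 1.0, 2.0))   # False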

1.  What a relation is.

So far we are still discussing relations in general,
and have not got as far as discussing sign relations,
which are a special case of 3-adic relations.

Where terms of the characters "abstract", "concrete", "general", "individual",
and so on, are distinguished relative to a particular context of discourse,
where we conventionally neglect the differences that are irrelevant to the
purposes of that discourse, relations are denoted by abstract general terms,
and not by concrete individual terms.

2.  What the difference between a relation and one of its instances is.

This brings up the fallacy that DB professionals know as the
"one-row database" (ORD) fallacy.  Any relation worth its salt
serves up a whole "table" of instances, and not just one like:

<this top slice, this baloney, this bottom slice>.

3.  What decomposition or reduction is.

In this regard, JC commits what has long been known as the fallacy
of the "Alchemist's Dodge" or the "(Philosophers') Stone Soup".
It goes a bit like this:

You can make a hearty soup out of nothing but stones and hot water ...
if you add a pinch of salt ...
if you add a dash of pepper ...
if you add a few potatoes ...
if you add a carrot or two ...
if you add a hock of ham ...
if you add ...

And so it goes.

The moral of this particular telling of the story
is that a reduction is not a proper reduction if
it does not reduce something to something lower.

JC's purported "analysis" goes like this:

| Between(ham, upper, lower)
|
| iff
|
| Above(upper, lower)
|
| and
|
| Above(ham, lower)
|
| and
|
| Above(upper, ham).

Each of the bits of connective tissue that are signified by the
uses of the word "and" invokes a 3-adic relation, in other words,
an operation that passes from two sentences to a conjoint sentence --
or from their truth values to the conjunction of their truth values,
according to your taste -- and so the putative "reduction" is really
the sort of "improper" reduction that reduces something to something
of the same complexity, in this case, more 3-adic relations.

All of these things were commonplace understandings well
before Boole, DeMorgan, Peirce, & Co. got into the act.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 20

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

PB = Peter Brawley

PB: Jon would you please point to where John shows
    he "does not know what the difference between a
    relation and one of its instances is" and "does
    not know what decomposition or reduction is"?

I explained these points more fully in the next post that I sent --
I got a message from Lyris saying that it was rejected, now I see
two copies at the Archive, but I still don't know for sure if it
got distributed or not.  Anyway, here's an expansion of that.

JA: John Collier has done us the yeoman's service of exemplifying
    almost all of the commonly known fallacies in a single short
    paragraph.  I have a bit more time now, so let us run through
    these points in detail.

| Of JC's statements the multitude of fallacies is so terrific
| that I have usually shrunk from the task of enumerating them ...

The more that JC writes on this issue, the more his logical and practical
fallacies pile up, and the more I realize that he has never really taken
an interest in reading what Peirce actually wrote on any of these topics.

JA: Zeroth, I will allow that there may be something that JC
    is talking about under the heading of the word "relation",
    but I have yet to see a coherent account of it, and I do
    not hold out much hope for its consistency.  I am pretty
    sure that it bears no consistent relation to what Boole,
    DeMorgan, Peirce, Schroeder, and omnes generationes of
    logicians and mathematicians, before and after, meant
    by the words "relation" or "relative term".  Since
    this crew got there first, even Plato and Aristotle
    knew better, I suggest that others choose another
    word, so as to avoid confusing the masses.

I am very serious about this.  JC is perfectly free to exposit his idea of
what a relation is, or to exposit his interpretation of what Rosen's idea
of a relation is, and "ethics of terminology" notwithstanding he is free
to use any terms that he fancies to do so.  But I would be remiss in my
responsibilities if I did not continue to point out, what I can say
with a great deal of confidence from having read everything in CP
once or twice or three times, often after a decade or two of
reflection and acquired perspective, that whatever he is
talking about has very little to do, thank goodness,
with what Peirce and that whole army of logicians
and mathematicians are talking about under the
headings of "relations", "relative terms",
"reducibility", "generic vs. degenerate",
and all of those other words in play.

What Peirce was talking about under the headings of relations,
relative terms, relative and non-relative operations on relations,
strong and weak types of reducibility with respect to these operations,
genericity, degeneracy, identity, teridentity, and so on, is very clear
and immediately recognizable to readers with the minimal mathematical
background, was published in the premier mathematical journals of his
day and ours, is subject to definition, formalization, and the whole
discipline of theorem and proof, for those who accept those rigors.

JA: I analyzed the "betweenness" example in some detail for
    the Standard/Ontology Lists sometime last year and I will
    look up those links for you later on.  In the meantime, here
    is a summary, far succincter than I have ever done before, of
    the pertinent material, as the pertinent thinkers and tradition
    understand it:

Theory Of Relations

01.  http://suo.ieee.org/ontology/msg04377.html
02.  http://suo.ieee.org/ontology/msg04378.html
03.  http://suo.ieee.org/ontology/msg04379.html
04.  http://suo.ieee.org/ontology/msg04380.html
05.  http://suo.ieee.org/ontology/msg04381.html
06.  http://suo.ieee.org/ontology/msg04382.html

Betweenness is an example of a 3-adic relation, which makes it,
by the definition of relational composition, irreducible to any
composition of 2-adic relations.  That is the "strong" notion of
reducibility, to put it in a more contemporary idiom.  However,
once that is understood, it is possible to weaken the notion of
reducibility, saying to the wannabe reducer, as it were:  Okay,
now that you have conceded that this 3-adic is irreducible to
2-adics in the primary sense, suppose that I give you for free
to use in your composition some simple 3-adic relations, oh,
say, like 'and' : B x B -> B, or things of that ilk -- could
you then compose me this 3-adic out of nothing but 2-adics
of your choosing along with those few gratuitous 3-adics?

That game is about the "weak" notion of reducibility,
but it's a consolation game, and only begins when the
reducer has acknowledged futility on the first score.

Have to take a time out here ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 21

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I continue from where I left off last time.

I gather that you have some computer science background, so I will
take the liberty of expressing some of my arguments in those terms.

Let us consider the informal notion of "computational work" as
a measure of the presence of an "information-theoretic reality".

For instance, consider JC's statement to the effect that there
is no "third" involved in the formation of the truth-functional
conjunction 'and' : B x B -> B.

I wish!  Sadly, though, truth-functions do not compute themselves,
and even if they did, they would need to do computational work in
order to do the job.  This should clue us in to the informational
reality of the situation, and it tells us what a practical fallacy
JC has e-mitted in this connection.  Like my old algebra book said:
"A binary operation is a ternary relation" -- so any truth-function
of the type f : B x B -> B is also a 3-adic relation F c B x B x B.
Being functional from its first two domains to its third, it is
of course a very special type of 3-adic relation, but it remains
a 3-adic relation for all that.  And we both know that there is
a gap between the indicated operation and the functional result
that doesn't get crossed without doing some computational work.

Let us imagine a generic venn diagram with circle A and circle B.
Then the intersection A |^| B is a third thing, distinct from both.
For another third thing, there is the union A |_| B, also distinct
from A, B, and A |^| B.  Now maybe in our fantasy the intersection
and the union just appear as if by magic, but how, even in fantasy,
do you know which of the 16 possible third things is the right one?
And of course, you know and I know that, no matter how you compute
the intersection or the union or any other truth function, whether
by bit-maps or by object equations, it's just gonna take some work.
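
Here is a minimal sketch in Python -- an illustration only, not anyone's
official formalization -- of the two points just made: a truth-function
f : B x B -> B is also a 3-adic relation, namely its graph of triples,
and conjunction is merely one of the 16 possible "third things":

from itertools import product

B = (0, 1)

# Any truth-function f : B x B -> B is also a 3-adic relation F c B x B x B,
# namely the set of triples <x, y, f(x, y)>.
def graph_of(f):
    return {(x, y, f(x, y)) for x, y in product(B, B)}

AND = graph_of(lambda x, y: x & y)
print(sorted(AND))    # [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]

# There are 2**4 = 16 truth-functions of two boolean variables, hence
# 16 candidate "third things" for a generic venn diagram with two circles.
graphs = set()
for outputs in product(B, repeat=4):
    table = dict(zip(list(product(B, B)), outputs))
    graphs.add(frozenset((x, y, table[(x, y)]) for x, y in product(B, B)))
print(len(graphs))    # 16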

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 22

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

You Can't Tell the Topics from the Fallacies without a Programme:

1.  Topic.  What is a relation?

1.1.  Fallacy.  The "I lied about the set" (ILATS) puzzle.

2.  Topic.  What is the difference between a relation and one of its instances?

2.1.  Fallacy.  The "one row database" (ORD) illusion.

3.  Topic.  What is decomposition or reduction?

3.1.  Fallacy.  The "2-ped dog" (2PD) dogma.
3.2.  Fallacy.  The "alchemist's dodge" (AD).
3.3.  Fallacy.  The "stone soup" (SS) parable.
3.4.  Fallacy.  The "who's on third" (WOT) joke.
      AKA.  "Truth-functions compute themselves".

JA: Before we start, let us dismiss JC's inane example, since his "analysis"
    of it fails if the plane of the sandwich is vertical to the surface of
    the nearest-by large planet, if the sandwich is flipped over, or if the
    sandwich is floating in space "between" galaxies -- which incidentally
    illustrates the fact that the English "between" is polymorphous, since
    "among" would not substitute 'saliva verita' (sic joke), and I never
    said anything about there being just two galaxies, but never mind
    all that now.

The point of my light objection is just this, that JC's example,
in the form that he gave it, does not serve his purpose, since
the 2-adic relation that he uses, "Above(x, y)", requires an
external orientation or reference direction to make sense.
So I supply him with a better example on my own:

A fairer example to JC's case, if I have to supply it myself, would be
to pull the mathematician's "without loss of generality" (WOLOG) gambit,
and stipulate that we are discussing, say, real numbers a, x, b, where
we assume a < b, WOLOG, and then revert to the 3-adic situs specified
by the statement "x is between a and b".

JA: 1.  What a relation is.

JA: So far we are still discussing relations in general,
    and have not got as far as discussing sign relations,
    which are a special case of 3-adic relations.

JA: Where terms of the characters "abstract", "concrete", "general", "individual",
    and so on, are distinguished relative to a particular context of discourse,
    where we conventionally neglect the differences that are irrelevant to the
    purposes of that discourse, relations are denoted by abstract general terms,
    and not by concrete individual terms.

Let me now explain to you the "I lied about the set" (ILATS) fallacy.
I have already discussed this fallacy early on in connection with the
locus classicus from Quine's 'Math Logic'.  Very often you find people
who have learned to 'say' the correct thing, namely, that a relation
is a SET of relation instances, also called "elementary relations"
or "tuples", but when it comes to drawing even the most immediate
consequences of this definition, they constantly slip up, most
likely on account of the unexamined assumption that what you can
say about an instance will automatically "lift up" to a closely
analogous statement about the set, in this case, the relation.
Anyone who reflects on this assumption would find ample reason
to question it.  And indeed, it is a very rare property that
can be "lifted" from elements to sets in such a facile manner.
One of the principal symptoms of this unconscious fantasy is
that the afflicted one will compulsively, repetitively regress
to the level of a single primal instance, the "one row database".
They will do their perfunctory analytic dance on that pinhead,
and then wave their arms about while chanting the incantation:
"As Below, So Above" or perhaps some Greco-Latin equivalent.

For example, "Being uniquely determined by its projections" is
NOT one of those properties that lifts from elements to sets.

To express the point in computer science terms, as in Lisp or any
other moderately "functional" language, complex elements like tuples
are formalized in terms of their "constructors" and their "selectors".
In category theory these are called "injections" and "projections",
respectively.

Now a single point, here, a tuple, is determined by its projections,
but a set of points, here, a relation, is not in general determined
by its projections.  For example a solid ball and a hollow sphere
with the same center and radius have indiscernible projections
on the XY, XZ, YZ planes.
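
Here is a minimal discrete sketch of the same point, in Python.  The two
parity relations below are distinct 3-adic relations on B = {0, 1} whose
three 2-adic projections nevertheless coincide; whether they coincide
with the L_0 and L_1 cited later from the RAR notes is left here as an
assumption, since those definitions are not reproduced in this collection.

from itertools import product

B = (0, 1)

# Two distinct 3-adic relations over B: even-parity and odd-parity triples.
L_even = {t for t in product(B, B, B) if sum(t) % 2 == 0}
L_odd  = {t for t in product(B, B, B) if sum(t) % 2 == 1}

def proj_xy(L): return {(x, y) for (x, y, z) in L}
def proj_xz(L): return {(x, z) for (x, y, z) in L}
def proj_yz(L): return {(y, z) for (x, y, z) in L}

# A single tuple is determined by its projections, but these two SETS of
# tuples are not: they differ as relations, yet every projection agrees.
print(L_even == L_odd)                     # False
print(proj_xy(L_even) == proj_xy(L_odd))   # True
print(proj_xz(L_even) == proj_xz(L_odd))   # True
print(proj_yz(L_even) == proj_yz(L_odd))   # True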

So we get the following breakdown:

a.  All 3-adic relations are irreducible to compositions of 2-adic relations.

b.  If one allows the use of one 3-adic relation, namely, 'and' : B x B -> B,
    this corresponds to a weaker form of reducibility, that may be called the
    "projective reducibility" of relations.  Even so, even with this assist,
    some 3-adic relations are projectively reducible to 2-adic relations --
    like bodies in XYZ-space that can be reconstructed uniquely from their
    shadows on the XY, XZ, YZ planes -- and some are not.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 23

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

PB: John Collier's ham sandwich looks to my (undoubtedly
    insufficiently untutored) eyes like "standard issue"
    relational analysis, (e.g.:

    http://www.hum.auc.dk/cg/Module_I/1034.html

    in an online course that acknowledges Peirce).

PB: Could you give an example of a "non-degenerate" triadic relation?

That website crashes my browser.  Maybe you could give us
a relevant excerpt?  So far I am still just going on what
seemed "obvious" to me when I first, or maybe second read
this stuff some time ago, but so far the distances example
seems to confirm my first impression.  If what Peirce means
by a "non-degenerate" 3-adic really does turn out to be what
I have called a "non-reconstructible" 3-adic, in other words,
a "projectively irreducible" 3-adic, then they always come in
bunches of two or more projectively indiscernible relations at
a time, and so the solid/hollow ball is one example, and there are
many, many others.  I will go look up the discrete example that
I gave on the Standard/Ontology List last year.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 24

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I still can't get into the Conceptual Graphs site you cited,
but I have read Sowa's 1984 to present work fairly closely,
so a minimal excerpt of the example that you have in mind
will probably be enough -- I'm guessing something of the
"blocks world" variety?

Meanwhile, scanning through the Standard Upper Ontology archives
has reminded me of many more of the most common fallacies that
pass for "conventional wisdom" of the non-Peircean variety due
to the decline in logical literacy brought on by the Principian
mould of reasoning -- I do not exaggerate -- Russell expressly
advised students of logic not to read classical sources, which
appears to be one of the reasons that so many of his students
fail to understand even the statements of the classic problems.

So I will attach an updated list of relation-theoretic fallacies and
downright infelicities.  Not all of these are necessarily accounted
to John Collier -- after all, one person can only do so much!

You Can't Tell the Topics from the Fallacies without a Programme:

1.  Topic.  What is a relation?

1.1.  Fallacy.  The "I lied about the set" (ILATS) indirection.

2.  Topic.  What is a relation instance?
            aka:  elementary relation, individual relation, tuple.

2.1.  Fallacy.  The "direct and full access to beings in themselves"
                (DAFATBIT) fallacy.

2.2.  Fallacy.  The "change of variables" (COV) switcheroo.

3.  Topic.  What is the difference between a relation and one of its instances?

3.1.  Fallacy.  The "one row database" (ORD) illusion.

4.  Topic.  What are composition and decomposition, production and reduction?

4.1.  Fallacy.  The "2-ped dog" (2PD) dogma.
4.2.  Fallacy.  The "alchemist's dodge" (AD).
4.3.  Fallacy.  The "stone soup" (SS) parable.
4.4.  Fallacy.  The "who's on third" (WOT) joke.
      AKA.  "Truth-functions compute themselves".

Also in the mean time, here is
a helpful diagnostic hint for
the ILATS malady:

A relation is a set.

An immediate consequence of this familiar jingle
is that the analysis of a 3-adic relation into
a couple of 2-adic relations is the analysis
of a set into a couple of sets.

So what, exactly, in this context, is the analysis of a set into a number of sets?
Well, it's not a question that many of our wannabe analysts often get as far as.
But all we need for present purposes is the following:

Dx.  Diagnostic Criterion.  

A person who does not present the decomposition of a set into sets
is not presenting the decomposition of a relation into relations.

So far, John Collier has not presented the analysis of even a single relation.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 25

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JA: For example a solid ball and a hollow sphere with the
    same center and radius have indiscernible projections
    on the XY, XZ, YZ planes.

PB: Eh?  A hollow sphere projects as an empty circle in all planes
    that go through it, a full sphere projects as a filled circle
    in all planes that go through it.

You are confusing projection with intersection.

JA: So we get the following breakdown:
   
    a.  All 3-adic relations are irreducible
        to compositions of 2-adic relations.

PB: You did not prove this (yet).

I repeat:  It is not a matter of proof.
It is from the pertinent definition of
the operation of relative composition.

Exercise for the reader:

Using ordinary matrix multiplication,
find two square matrices A and B,
which multiplied together yield
an AB that is a cubic array.
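
In the same spirit as that exercise, here is a minimal sketch in Python --
illustrative only -- of relational composition in its ordinary sense:
whatever 2-adic relations go in, what comes out is again a set of pairs,
never of triples.

# Relational composition of two 2-adic relations:
# G o H = { <x, z> : <x, y> in G and <y, z> in H, for some y }.
def compose(G, H):
    return {(x, z) for (x, y1) in G for (y2, z) in H if y1 == y2}

G = {(1, 2), (2, 3)}
H = {(2, 4), (3, 5)}
print(compose(G, H))   # {(1, 4), (2, 5)}

# By construction the result is a set of PAIRS, so no composition of
# 2-adic relations is ever a 3-adic relation -- a matter of definition,
# not of proof.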

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 26

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

PB: A solid sphere projected onto 2 dims yields 1's at all defined points
    within the circle.  An empty sphere projected onto 2 dims yields 1's on
    the circumference and 0's elsewhere.  Integrating over z reconstitutes
    the original objects.  Please indicate where the "confusion" is in this.

That is not the pertinent definition of projection.
Actually, it's not any definition of projection,
but people are free to make up what they like.
You appear to be adding mod 2 as you project,
and that is a whole nuther thing.

Experiment for the reader:  Take two equiradial beachballs out into the sun,
one filled with air, the other with opaque, dense matter.  Observe their shadows
on the ground and report your observations in a respectable scientific journal.
You may, of course, wait until summer if you prefer.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 27

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

ILATS.  Diagnostic Criterion.

| A person who does not present the decomposition of a set with respect to sets
| is not presenting the decomposition of a relation with respect to relations.

PB: Jon, I did not follow this post of yours, at all.
    We have been decomposing 3-ary relations into
    2-ary relations and back again, is all.

Who's we?

You folks have been discussing nothing but single relation instances.
Thus, you have not even got as far (yet) as talking about relations,
which are SETS of relation instances.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 28

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JC: So, if you project a triadic relation onto
    a dyadic relation you lose information.
    Why is that surprising?

PB: Right, the relevant question seems to be,
    whether there is a proof that there exists
    a triadic relation that cannot be losslessly
    reconstituted from its dyadic constituents.

See the discussion of the
Examples L_0 and L_1 that
begins in the RAR note 4.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 29

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

PB: I don't know about your beachballs,
    but my beachballs have completely
    translucent surfaces.

PB: That is, "shadow" is only one kind of projection.

For a relation L c XxYxZ,
the pertinent definitions
of the 2-adic projections
are these:

Proj_XY (L) = L_XY = {<x, y> in XxY : <x, y, z> in L for some z in Z},
Proj_XZ (L) = L_XZ = {<x, z> in XxZ : <x, y, z> in L for some y in Y},
Proj_YZ (L) = L_YZ = {<y, z> in YxZ : <x, y, z> in L for some x in X}.

If one is thinking of a 3-column relational table,
then the 2-adic projections are what one gets by
deleting one column and ignoring redundancies
in the remainder.
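
A minimal sketch in Python of that reading, on a made-up 3-column table
(the data is hypothetical, chosen only so that a duplicate row appears
after a column is deleted):

# Projections read off a 3-column table: delete one column, then drop
# any duplicate rows in what remains (sets do the dropping for us).
L = {("a", "b", "c"), ("a", "b", "d"), ("e", "b", "c")}

Proj_XY = {(x, y) for (x, y, z) in L}   # {("a", "b"), ("e", "b")}
Proj_XZ = {(x, z) for (x, y, z) in L}   # {("a", "c"), ("a", "d"), ("e", "c")}
Proj_YZ = {(y, z) for (x, y, z) in L}   # {("b", "c"), ("b", "d")}

# Deleting the Z column leaves ("a", "b") twice; as a SET it counts once,
# which is the "ignoring redundancies" spoken of above.
print(Proj_XY)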

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 30

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

PB: Your L_0 and L_1 are relations using aggregate operators.
    Of course individuals can't be reconstituted from aggregates.
    Do you have examples of non-aggregate irreducible triadic relations?

L_0 and L_1 are 3-adic relations.

3-adic relations are sets of 3-tuples.

Do you know of any non-set sets?

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 31

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Most of the stuff that I am saying here is extremely elementary,
consisting of theorems and folklore that had trickled down from
Peirce's logic of relations to nuts-&-bolts database practice by
the 1950's.  In the 1970's, when I was working as a statistical
jockey on what was then called "very large databases", it was
not all that unusual to run into folks in AI and DB who knew
about the relation to Peirce's work -- after all, many of
the biggies like McCulloch, Arbib, Burks, Codd, and some
others had paid their homage to Peirce in some of the
most classic works of those fields.  Unfortunately,
it appears that many practitioners today are no
longer as aware of where it all comes from,
nor even all that well grounded in the
theoretical basics.

Incidental Musement

| George Boole (1847, 1854) applied his algebra to propositions, sets, and monadic
| predicates.  The expression p×q, for example, could represent the conjunction of
| two propositions, the intersection of two sets, or the conjunction of two monadic
| predicates.  With his algebra of dyadic relations, Peirce (1870) made the first
| major breakthrough in extending symbolic logic to predicates with two arguments
| (or subjects, as he called them).  With that notation, he could represent
| expressions such as "lovers of women with bright green complexions".
| That version of the relational algebra was developed further by
| Ted Codd (1970, 1971), who earned his PhD under Arthur Burks,
| the editor of volumes 7 and 8 of Peirce's 'Collected Papers'.
| At IBM, Codd promoted relational algebra as the foundation for
| database systems, a version of which was adopted for the query
| language SQL, which is used in all relational database systems
| today.  Like Peirce's version, Codd's relational algebra and the
| SQL language leave the existential quantifier implicit and require
| a double negation to express universal quantification.
|
| John Sowa, "Existential Graphs:  MS 514 by Charles Sanders Peirce"
|
| http://users.bestweb.net/~sowa/peirce/ms514w.htm

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 32

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JA: A person who does not present the decomposition of a set with respect to sets
    is not presenting the decomposition of a relation with respect to relations.

PB: Jon, I did not follow this post of yours, at all.
    We have been decomposing 3-ary relations into
    2-ary relations and back again, is all.

JA: Who's we?

PB: Database designers & engineers.  According to you,
    the database we are communicating through does not
    implement relations ...

That's not what I said.

The examples that you have been discussing are all "one row databases" --
one particular sandwich, one particular trip between cities.  You have
yet to introduce a single example of a non-trivial relation, much less
decompose it into other relations.

JA: You folks have been discussing nothing but single relation instances.
    Thus, you have not even got as far (yet) as talking about relations,
    which are SETS of relation instances.

PB: ... but I decompose and recompose n-ary relations losslessly
    every day, actually most waking hours, so before accepting
    your declaration, I would want an argument.

You do understand that we are talking
about 'unkeyed' relational tables here?
If you are thinking of a k-adic relation
with an extra key, then that is really
a (k+1)-adic relation.
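
A minimal sketch in Python of that distinction, with made-up sandwich data
(hypothetical, of course):

# An unkeyed relational table is just a set of rows: duplicates collapse.
unkeyed = {("ham", "rye"), ("ham", "rye"), ("turkey", "wheat")}
print(len(unkeyed))   # 2 -- the duplicate row counts only once

# Adding a surrogate key makes every row unique again, but the result
# is no longer the same 2-adic relation: it is a set of triples
# <key, filling, bread>, in other words a (k+1)-adic relation for k = 2.
keyed = {(1, "ham", "rye"), (2, "ham", "rye"), (3, "turkey", "wheat")}
print(len(keyed))     # 3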

Exercise.  Show me a lossless reconstruction
of L_0 and L_1 from their 2-adic projections.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 33

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC: Let me repeat, though, the simple idea of the reduction of teridentity.
    If the following can be regarded as a formulation of teridentity,

HC: Ixyz,

HC: Then the idea is that we don't need teridentity,
    in standard logic, since we can always just
    substitute "x=y & y=z" for "Ixyz".

The very first two quotes that Seth cited on this
topic were concerned with two very different things:

One of them used the technical term "composition" and
one of them used the technical term "containment".

There is no contradiction between these two statements,
because they are talking about two different things.

The first is talking about relational composition,
that Peirce also called "relative multiplication"
or "relative product".  In as much as a function
is a special case of a relation, this is a direct
generalization of ordinary functional composition.

Relational composition, with slight modifications
appropriate to the logical use, is analogous to
ordinary matrix multiplication, and Peirce often
represented it this way.  In this sense of the
technical term "composition", it is a matter of
definition, not requiring proof, that 3-adic
relations are "indecomposable" to 2-adics.

If anyone should have any doubts as to what Peirce meant
by "relative product" in his earliest papers, or whether
he was still using the same concept later on in his work,
the fundamental analogy between relational "adicities" and
vertex "degrees" (also called "valencies") that is a basic
feature of Existential Graphs should dispel any such doubt.

A reader of Peirce's 1870 "Notation" who recognizes how much of
his technical language comes straight out of Leibniz would know
that the phrase "contained in", as used in the quotation at issue,
invokes a very different concept, that Peirce is here talking about
containment in the 'de inesse' or intensional sense, rather than the
extensional sense.  Ignoring many niceties, this is more or less what
you are invoking with formulations of the form "F(x, y) & G(y, z)".

I wish that I knew the capital of ampersand, so that I could help
you to read such schemata the way that Peirce would have read them --
and as any computationally and semiotically sensitive reader could
hardly help but to read them -- because the "&", in point of its
practical effects, is the biggest darn sign in that whole scheme.
And there is abundant evidence in what Peirce writes that he was
acutely aware that what the "&" evokes is nothing less than, and
nothing simpler than, a 3-adic relation, interpretable as applying
either to sentences, to propositions, or to their truth values,
according to the application in question.

Given for free this one 3-adic relation, the one signified by "&",
as a part of one's "logical resource base", along with all of the
2-adic relations that one had in store beforehand, one can, then,
of course, construct a number of additional 3-adic relations.
But all of this newfound "generative potency" is tucked away
in that seed of 3-ness that is evoked by the "&".
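
A minimal sketch in Python of that construction, over a small made-up
universe; the point to watch is that the conjunction doing the joining
is itself the gratuitous 3-adic resource.

X = {"i", "j", "k"}

# The 2-adic identity relation on X.
Id = {(x, x) for x in X}

# "Ixyz iff x = y & y = z": teridentity built from 2-adic identity
# plus conjunction.  The extension comes out right ...
I3 = {(x, y, z) for x in X for y in X for z in X
      if (x, y) in Id and (y, z) in Id}
print(I3 == {(x, x, x) for x in X})   # True

# ... but the "and" joining the two membership tests is an operation
# from two truth values to one, that is, a 3-adic relation, so the
# 3-ness has not been eliminated, only relocated.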

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 34

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Actually, Seth, since I recited this out of remembered
impressions formed a quarter of a century ago, and not
myself being a mental power on the order of Peirce,
I would consider it very likely that I need to
check almost anything like that over again,
so let me bring some materials together:

| Leibniz, "Elements of a Calculus" (1679)
|
| 7.  To make evident the use of symbolic numbers in propositions, it is
|     necessary to consider the fact that every true universal affirmative
|     categorical proposition simply shows ['significat'] some connexion
|     between predicate and subject (a 'direct' connexion, which is what
|     is always meant here).  This connexion is, that the predicate is
|     said to be in the subject, or to be contained in the subject;
|
| [Example:]
|
|     so when I say "All gold is metal"
|     I simply mean that in the concept
|     of gold the concept of metal is
|     contained directly, since gold
|     is the heaviest metal.

So "Gold => Metal" is read "The predicate Metal is 'contained in' the subject Gold".

| Now, identity is essentially a dual relation.
| That is, it requires two subjects and no more.
| If three objects are identical, this fact is
| entirely contained in the fact that the three
| pairs of objects are identical.  CP1.446 (1896).

A.  The fact that i, j, k are identical
    is contained in
B.  The fact that i = j, i = k, j = k.

According to my interpretation, a statement of the
form "predicate A is contained in subject B" would
decode as the implication "B => A" or "All B is A".

That still accords with the way that I read the original statement.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 35

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I had this vague sense of familiarity about your second indiscernibility
quote from Peirce, but you had so ripped it out of context that I never
became quite aware of it till now.  As it happens, I wrote my senior
thesis "Complications of the Simplest Mathematics" (1976) on this
locus in Peirce's studies, and I can tell you exactly what he was
talking about here.  He was talking about the "conjugacy classes"
of group elements in the symmetric group on three letters Sym(3)
and in group theory generally.

I constantly get the feeling that numerous people are trying to get
something out of Peirce's work that he did not put into it, because
he had a different sense of what were the truly important questions
than they do -- especially in these places where some people act as
if they are trying to squeeze ontological blood out of logical turnips.

I gave my best maxim at the beginning of this discussion:
Tease apart as much as possible the ontological objects
from the logical signs -- then the apt answers to the
questions of identity will look very different from
one side of the Object/Sign ledger to the other.

On the ontological side, identity tends away from being a 3-adic relation
and even away from being a 2-adic relation towards being a 1-adic relation --
things just are, and there is no real need to put them beside themselves
for ontology's sake.  I am not myself all that terribly interested in those
sorts of questions.

On the logical (formal semiotic) side, the interesting and useful questions
involve sign relations, whereby a sign relates to another sign in respect
of an object, and the sensible identity question is just a special case
of that type, whereby a sign is equivalent to another in identifying
an object.  These are the sorts of things that are handled in math
by means of different sorts of equivalence classes of signs, and
one of the more interesting semiotic phenomena that arises here
is that these equivalence classes interpenetrate, overlap, and
superpose each other, "like raindrops", to pursue the metaphor,
which is probably all that it is.

| Let us now glance at the permutations of three things.
| To say that there are six permutations of three things
| is the same as to say the two sets of three things may
| correspond, one to one, in six ways.  The ways are here
| shown:
|
| o-----------o-----------o-----------o-----------o-----------o-----------o
| |           |           |           |           |           |           |
| |  r  s  t  |  r  s  t  |  r  s  t  |  r  s  t  |  r  s  t  |  r  s  t  |
| |  |  |  |  |  |   \/   |   \__\/   |   \/   |  |   \/__/   |    \|/    |
| |  |  |  |  |  |   /\   |   /\  \   |   /\   |  |   /  /\   |    /|\    |
| |  o  p  q  |  o  p  q  |  o  p  q  |  o  p  q  |  o  p  q  |  o  p  q  |
| |           |           |           |           |           |           |
| o-----------o-----------o-----------o-----------o-----------o-----------o
|
| No one of these has any properties different from those of any other.
| They are like two ideal raindrops, distinct but not different.  Leibniz's
| "principle of indiscernibles" is all nonsense.  No doubt all things differ;
| but there is no logical necessity for it.  (CP 4.311).

Peirce is making a point about operational structure that is closely
connected to the pragmatic maxim.  In themselves, as monadic elements
of a group, all of the group elements are in the first instance alike.
Still, in the second instance, they can be recognized as distinct from
one another "by their effects", that is to say, by the ways that they
act on one another.  This refers to the "regular representations" of
group elements acting on the group itself.  See the following etude:

http://www.altheim.com/cs/difflogic.html
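
And here is a minimal sketch in Python of the point about Sym(3) --
illustrative only, and independent of the etude just cited: taken singly
the group elements are "distinct but not different", yet each one is
completely identified by the way it acts on the group itself under
composition, its regular representation.

from itertools import permutations

# Elements of Sym(3) as permutations of (0, 1, 2).
S3 = list(permutations(range(3)))

def compose(g, h):
    """Composition g o h, acting by (g o h)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(3))

# The (left) regular representation: each element g is encoded by the
# map h |-> g o h.  Distinct elements yield distinct actions, so each
# element is told apart "by its effects" on the others.
actions = {g: tuple(compose(g, h) for h in S3) for g in S3}
print(len(set(actions.values())) == len(S3))   # True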

The rendition of Leibniz's principle of indiscernibility to
which Peirce is alluding in this connection, I am guessing,
is the one that says:  "No two monads can be exactly alike".
And the objection that Peirce is really making is to the
logical necessity for it.  So, my guess is that Peirce
is talking about a symmetry principle, something akin
to an equivalence or an invariance principle, say,
like all reference frames or POV's are alike,
but that does not diminish the diversity
of phenomena or the potential for truth
one little bit.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 36

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

J = Jon Awbrey
S = Seth Sharpless

J: So "Gold => Metal" is read
   "The predicate Metal
   is 'contained in'
   the subject Gold".

S: OK, but from that, one may infer that
   "x is metal" follows of necessity from "x is gold".

I would not infer that, not in the context of these readings, because I do not
know for sure what all of the various writers and readers involved might mean
by "necessity" -- I mean, there sits Leibniz, contemplator, overseer, surveyor
of all possible worlds ... no, let's not go there, just yet -- for the moment,
Leibniz has called this a "true universal affirmative categorical proposition",
and I think that I will just stick with that until I have read some more of
his actual text into the record.

J: A.  The fact that i, j, k are identical
       is contained in
   B.  The fact that i = j, i = k, j = k.

S: So again, one may infer that the proposition that i, j, k are identical
   follows of necessity from i = j, i = k, j = k.  Indeed, and isn't that
   what everybody has been assuming Peirce meant?  Leibniz seems a long
   way around to make this discovery.

No, maybe this is just me, but I would be very
careful about throwing any extra ingredients
like 'anagke' into this soup just yet, or
we might incite 'anarche' and end up
in 'anagkaion'.

All I have at present is that the
simple predicate "I_3 (x, y, z)" is
contained in essence in the compound
subject "I(i, j) & I(j, k) & I(i, k)".

Indeed, I worry some about the "simple"
as that is a technical term in Leibniz,
and the last "I(i, k)" looks redundant,
but then again ...

Maybe this is just me, again, but I consider it a "necessity", practically speaking,
to read Leibniz in order to have a clue what Leibniz meant by "indiscernibility" --
which you invited in, or "composite", "contained in", "individual", "simple",
and so on, all of which terms Peirce is using and wrestling with in all of
their possible Leibnizian senses in his basic technical papers from the
"continental divide" year of 1870 onwards.

This is just the way that I show my respect, indeed, my sympathy,
to any writer, by reading what they write before I criticize it,
instead of using my criticism as an excuse not to read it,
as seems to be the fashion of late.

Lucky for us, Peirce did not derive his knowledge of Leibniz or
his insights about comprehension, extension, or their synthesis
as information from Russell's Leibniz or from Quine's rendering
of these notions.  I take it as a lesson.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 37

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Okay, I do find several uses of "necessary",
in a mathematical sense, in the last notes
on Leibniz (1679) that I sent:

http://suo.ieee.org/ontology/msg04369.html
http://suo.ieee.org/ontology/msg04371.html

So that will help a little to get his sense,
though I still suspect that there will be
many different senses to watch out for.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 38

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| I have also pointed out that in consequence of imperceptible variations no two
| individual things could be perfectly alike, and that they must always differ
| more than numerically.  This puts an end to the blank tablets of the soul,
| a soul without thought, a substance without action, empty space, atoms,
| and even to portions of matter which are not actually divided, and also
| to absolute rest, completely uniform parts of time or place or matter,
| perfect spheres of the second element which take their origin from
| perfect cubes, and hundreds of other fictions which have arisen
| from the incompleteness of philosophers' notions.  They are
| something which the nature of things does not allow of.
| They escape challenge because of our ignorance and our
| neglect of the insensible;  but nothing could make them
| acceptable, short of their being confined to abstractions
| of the mind, with a formal declaration that the mind is not
| denying what it sets aside as irrelevant to some present concern.
| On the other hand if we meant literally that things of which we are
| unaware exist neither in the soul nor in the body, then we would fail
| in philosophy as in politics, because we would be neglecting 'to mikron',
| imperceptible changes.  Whereas abstraction is not an error as long as one
| knows that what one is pretending not to notice, is 'there'.  This is what
| mathematicians are doing when they ask us to consider perfect lines and
| uniform motions and other regular effects, although matter (i.e. the
| jumble of effects of the surrounding infinity) always provides some
| exception.  This is done so as to separate one circumstance from
| another and, as far as we can, to trace effects back to their
| causes and to foresee some of their results;  the more care
| we take not to overlook any circumstance that we can control,
| the more closely practice corresponds to theory.  But only the
| supreme Reason, who overlooks nothing, can distinctly grasp the
| entire infinite and see all the causes and all the results.  All
| we can do with infinities is to know them confusedly and at least
| to know distinctly that they are there.  Otherwise we shall not
| only judge quite wrongly as to the beauty and grandeur of the
| universe, but will be unable to have a sound natural science
| which explains the nature of things in general, still less
| a sound pneumatology, comprising knowledge of God, souls,
| and simple substances in general.
|
| Leibniz, 'New Essays', p. 57.
|
| G.W. Leibniz, 'New Essays on Human Understanding'
| Peter Remnant & Jonathan Bennet (trans. & ed.),
| Cambridge University Press, Cambridge, UK, 1996.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 39

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway
JA = Jon Awbrey
KK = Kenneth Ketner

KK: Howard and Jon:  All this nonreduction of triads stuff and the wrongness 
    of Quine's supposed reduction to a dyadic predicate is laid out with the 
    highest mathematical rigor in Robert Burch's A PEIRCEAN REDUCTION THESIS:
    THE FOUNDATIONS OF TOPOLOGICAL LOGIC.  Arisbebooks has some copies (search
    via google).  This volume is an absolute essential in any literature review
    for persons working this area.

JA: Ergo, all notions of analysis, composition, reduction, synthesis, whatever,
    contain a notion of triadic relations as a part of their very constitution.

HC: I find the related arguments very slippery.  Certainly I am no fan of reductionism,
    but it seems that the presuppositions involved in the proofs offered of the reduction
    or non-reduction of "genuine" triadic relations (Correct me if I am wrong, but I think
    Peirce uses the term "genuine" triadic relations, so that there is some distinction
    between genuinely triadic relations and those which may just appear to be so, and
    are open to some analysis) -- the various presuppositions -- seem not so obvious. 

HC: So, please be aware that I do not have in mind to defend Quine's proof of 
    reduction to non-triadic relations.  But that he gives a proof or apparent 
    proof seems a chief point of interest.  On the other hand, Peirce's contrary 
    arguments to the effect that triadic relations cannot be reduced, seem to
    involve presuppositions connected with his use of teridentity, in contrast
    to the usual versions we see in logic books. 

HC: I have my serious doubts that we actually need the concept of teridentity for 
    logical purposes generally. The cross-reference and possible cross-reference 
    of the variables seems to take the place of teridentity as things are usually 
    formulated.

HC: If you can clarify this matter, then I think you will
    provide a benefit to readers of the list, myself included.

I apparently failed to receive this earlier message from Howard.
I will put Burch's book on my wish list, but the basic facts have
long been beyond question in the necks of the mathematical woods
that I was once accustomed to frequent.  We had long battles over
this on SUO, and even when I got one of the reviewers who panned
Burch's book in print to concede off-list -- they always do it by
saying that it was always already trivial in the first place --
he/she never would fess up in public.  So I have little hope.

There are at least three levels to the irreducibility half of it,
and I have seen all the relevant Peirce quotes already pass under
the bridge several times here with not so much cognizance of what
they say.

At level one, the argument is over before it starts.  The very idea
of reducing 1 thing to 1 thing plus 1 other thing is a triadic idea.

At level two, there is ordinary relational composition, or as Peirce
called it "relative multiplication", and it defines the composition
of two 2-adic relations to be a 2-adic relation, which means that
you can never ever get a 3-adic relation as the composition of
two 2-adic relations.  Peirce just takes this much as given
from the start, as it is already clear in his first papers.

At level three, which a player reaches in an orderly way
only by conceding the issue of irreducibility on the first
two scores, and this is where, as I currently understand it,
the whole business about "genuineness" comes into play, one is
given a single 3-adic relation, say, the one that corresponds to
logical conjunction, in terms of truth values 'and' : B x B -> B,
along with the whole stock of 2-adic relations previously granted.
Then one tries to see what other 3-adics can be generated from these
augmented resources.  The ones that can be so produced are reducible
over this resource base;  the ones that can't be so constructed are
then called "irreducible" in a stronger sense, the genuine 3-adics.
At any rate, this seems to fit the examples that Peirce provides.

I am putting a detailed "work in progress" on these issues here:

http://www.nexist.org/wiki/Doc16596Document
http://www.nexist.org/wiki/Doc16600Document

It can take a couple of minutes to load the pages, though.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 40

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Instead of translation, perhaps we can return to the original.
What Peirce says in the following quote, that Joe posted on
23 Nov 2002 08:36:18 -0600, covers the same facts that I am
indicating under my rubric of level 1 and level 2.  This is
all still treating of relations as sets of tuples, before
we even get as far as mentioning sign relations, per se.

| The criticism which I make on [my] algebra of dyadic relations, with
| which I am by no means in love, though I think it is a pretty thing,
| is that the very triadic relations which it does not recognize, it
| does itself employ.  For every combination of relatives to make a
| new relative is a triadic relation irreducible to dyadic relations.
| Its 'inadequacy' is shown in other ways, but in this way it is in a
| conflict with itself 'if it be regarded', as I never did regard it,
| 'as sufficient for the expression of all relations'.
|
| C.S. Peirce, 'Collected Papers', CP 8.331

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 41

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I see that the Nexist webport is offline at the moment --
it is very experimental and often goes off for updates --
so I will give the following sets of alternative links:

Reductions Among Relations, links 01-16.

Theory Of Relations, links 01-06.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 42

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

thus be it e'er in our backward race ---
our castles to build in a nary place ---
only then, if there's time ...
only then, if there's grace ...
do we e'er so stepwise turn our mind ---
to eke, to imp, the founding incline ---

jon awbrey
9 feb 2003

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Note 43

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| thus be it e'er in our backward race ---
| our castles to build in a nary place ---
| only then, if there's time ...
| only then, if there's grace ...
| do we e'er so stepwise turn our mind ---
| to eke, to imp, the founding incline ---
|
| jon awbrey
| 9 feb 2003

a mongrel like that is the sort of thing
that i find whimpering on my doorstep on
a cold winter morning, so i am as much
at a loss to know where it's been, or
what it's trying to say about timmy.

but here are some hermenautic logs
that I was given to pitch together:

1. "thus be it ever" = obvious allusion to tyrants,
   and a slightly less obvious allusion to oedipus.
   also suggests "cosi fan tutte" = "they all do it".

2. "nary" = "n-ary", 'nuff said.

3. "founding incline" suggests "initial swerve".
   not at all fundamentalist for somebody who's
   moved around a lot, or lived in a basement.

4. punctuation scheme:  the caesurae "---" and ellipses "...".
   an obvious encryption of hexagram 61 "inner truth", though,
   of course, all lines are changing.

5. rhyme scheme:  "race, place, time, grace, mind, incline",
   the alternation of vowels is:  <a, a, i, a, i, i>,
   in other weirds <yang, yang, yin, yang, yin, yin>.
   obvious encryption of hexagram 53 "gradual progress",
   though, of course, all lines are changing.

6. sic semper glory-i-a

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Work 10

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Subject: Re: Identity & Teridentity
From: Gary Richmond <garyrichmond@rcn.com>
Date: Sat, 08 Feb 2003 15:46:05 -0500

GR = Gary Richmond

GR: I came upon this quotation which I don't recall being discussed -- at least
    not fully -- in the identity/teridentity thread.  It seems quite valuable in
    advancing our understanding of that concept.  I would especially draw the
    list's attention to its conclusion, which begins with an example:

| "The concept of man necessarily involves the thought of the possible 
| being of a man; and thus it is precisely the judgment, "There may be a 
| man." Since no perfectly determinate proposition is possible, there is 
| one more reform that needs to be made in the system of existential 
| graphs. Namely, the line of identity must be totally abolished, or 
| rather must be understood quite differently. We must hereafter 
| understand it to be potentially the graph of teridentity by which means 
| there always will virtually be at least one loose end in every graph.
| In fact, it will not be truly a graph of teridentity but a graph of 
| indefinitely multiple identity."

This allows for semiosis (or rather, its diagrammatic analysis) to 
continue indefinitely. Or better, it is the sign of the possibility of 
the indefinite, evolutionary movement of semiosis:  "there always will
virtually be at least one loose end." "a graph of teridentity"  [==]
"a graph of indefinitely multiple identity."

> Peirce: CP 4.583
> An Improvement on the Gamma Graphs
>     583. The System of Existential Graphs recognizes but one mode of 
> combination of ideas, that by which two indefinite propositions 
> define, or rather partially define, each other on the recto and by 
> which two general propositions mutually limit each other upon the 
> verso; or, in a unitary formula, by which two indeterminate 
> propositions mutually determine each other in a measure. I say in a 
> measure, for it is impossible that any sign whether mental or external 
> should be perfectly determinate. If it were possible such sign must 
> remain absolutely unconnected with any other. It would quite obviously 
> be such a sign of its entire universe, as Leibniz and others have 
> described the omniscience of God to be, an intuitive representation 
> amounting to an indecomposable feeling of the whole in all its 
> details, from which those details would not be separable. For no 
> reasoning, and consequently no abstraction, could connect itself with 
> such a sign. This consideration, which is obviously correct, is a 
> strong argument to show that what the system of existential graphs 
> represents to be true of propositions and which must be true of them, 
> since every proposition can be analytically expressed in existential 
> graphs, equally holds good of concepts that are not propositional; and 
> this argument is supported by the evident truth that no sign of a 
> thing or kind of thing -- the ideas of signs to which concepts belong 
> -- can arise except in a proposition; and no logical operation upon a 
> proposition can result in anything but a proposition; so that 
> non-propositional signs can only exist as constituents of 
> propositions. But it is not true, as ordinarily represented, that a 
> proposition can be built up of non-propositional signs. The truth is 
> that concepts are nothing but indefinite problematic judgments. The 
> concept of man necessarily involves the thought of the possible being 
> of a man; and thus it is precisely the judgment, "There may be a man." 
> Since no perfectly determinate proposition is possible, there is one 
> more reform that needs to be made in the system of existential graphs. 
> Namely, the line of identity must be totally abolished, or rather must 
> be understood quite differently. We must hereafter understand it to be 
> potentially the graph of teridentity by which means there always will 
> virtually be at least one loose end in every graph. In fact, it will 
> not be truly a graph of teridentity but a graph of indefinitely 
> multiple identity.

Teridentity within the gamma graph system suggests, at least to me,
the possibility of the growth of meaning.  [The optimal means towards
this evolutionary goal is, of course, pragmaticistic inquiry.]  Here,
especially, I think one ought to be an "authentic peircean".

[For the "pragmaticism in metaphysics" thread, do note that this is
a concept of logic, not a metaphysical principle.]

Gary


Gary Richmond wrote:

> (postscript to my last message) from the first of your quoted passages 
> (the segment I had omitted when posting it myself, incidentally)
>
>>CP 1.345 I think even degenerate triadic
>>relations involve something like thought.
>>
> In short, all triadic relations involve thought, so there is no need 
> to distinguish between them and their representations, right?
>
> GR
>
> Joseph Ransdell wrote:
>
>>Howard:
>>
>>"Genuine" is a term of art in Peirce's philosophy, and is not equivalent to
>>"real" as opposed to "apparent".    I am distributing separately a
>>collection of passages from the CP compiled some time ago.  Bear in mind
>>that they were acquired by a string search, and I have only minimally edited
>>them for legibility.   The first paragraph is the same as the one Gary
>>quoted, but contains a little more, I believe..
>>
>>Joe Ransdell
>>
>>----- Original Message -----
>>From: <HGCALLAWAY@aol.com>
>>To: "Peirce Discussion Forum" <peirce-l@lyris.acs.ttu.edu>
>>Sent: Saturday, November 23, 2002 5:10 AM
>>Subject: [peirce-l] Re: Identity & Teridentity
>
>>>Peirce-l,
>>>
>>>Here follows some comments and analysis of the Peirce quote helpfully
>>>supplied by Gary Richmond. Generally I aim to keep an eye on the thesis of
>>>the non-reducibility of triadic relations.
>>>
>>>Peirce writes:
>>>----quote-------------------
>>>CP 1.345. I will sketch a proof that the idea of meaning is irreducible
>>>to those of quality and reaction. It depends on two main premisses.
>>>The first is that every genuine triadic relation involves meaning,
>>>as meaning is obviously a triadic relation.
>>>----pause----------
>>>
>>>Notice here that Peirce speaks of "every genuine triadic relation,"
>>>implying that what appears triadic may yet not be. Every genuine
>>>triadic relation "involves meaning." Some apparently triadic relations
>>>may, it seems, be properly analyzed into something less than triadic.
>>>Intuitively, I follow Peirce to the extent of thinking that our talk
>>>of meaning involves the reference of signs to objects in light of an
>>>interpretation. But is this intuitive notion of meaning "reducible"
>>>to component or non-triadic relations, say, the relation of sign to
>>>object, the relation of sign to interpretation and the relation of
>>>object to interpretation? That meaning is not reducible to quality
>>>and reaction seems, so far, a perhaps less interesting claim.
>>>
>>>A second premise:
>>>
>>>----Peirce continued-----
>>>The second is that a triadic relation is inexpressible by means of
>>>dyadic relations alone.
>>>---pause----------------------
>>>
>>>The second premise of this argument of Peirce's seems to be the
>>>irreducibility of triadic relations to dyadic relations. So, if I
>>>read this argument correctly, then Peirce is using the assumption of
>>>the non-reducibility of triadic relations to dyadic relations, here
>>>explicitly taken as a premise, in order to argue for a distinct
>>>claim -- "that the idea of meaning is irreducible to those of quality
>>>and reaction." This looks like an argument for the irreducibility of
>>>the category of thirdness based on the assumption of the
>>>irreducibility of triadic relations. So, it seems we are not about
>>>to find an argument for the irreducibility of triadic relations.
>>>
>>>----Peirce continued---------
>>>Considerable reflexion may be required to convince yourself of the
>>>first of these premisses, that every triadic relation involves
>>>meaning. There will be two lines of inquiry. First, all physical
>>>forces appear to subsist between pairs of particles. This was assumed
>>>by Helmholtz in his original paper, On the Conservation of Forces.+1
>>>Take any fact in physics of the triadic kind, by which I mean a fact
>>>which can only be defined by simultaneous reference to three things,
>>>and you will find there is ample evidence that it never was produced
>>>by the action of forces on mere dyadic conditions. Thus, your right
>>>hand is that hand which is toward the east, when you face the north
>>>with your head toward the zenith. Three things, east, west, and up,
>>>are required to define the difference between right and left.
>>>Consequently chemists find that those substances which rotate the
>>>plane of polarization to the right or left can only be produced from
>>>such [similar] active substances. They are all of such complex
>>>constitution that they cannot have existed when the earth was very
>>>hot, and how the first one was produced is a puzzle. It cannot have
>>>been by the action of brute forces. For the second branch of the
>>>inquiry, you must train yourself to the analysis of relations,
>>>beginning with such as are very markedly triadic,
>>>----end quote-------
>>>
>>>In this last passage, a triadic relation is defined as one which
>>>requires reference to three things. That strikes me as somewhat
>>>problematic. Are there facts which can only be defined by reference
>>>to three things? Well, perhaps; and perhaps meaning is one of them.
>>>But in any case, I wonder how we are to take the intuitive use of
>>>the phrase "three things." What is taken as one thing for one purpose
>>>may be taken as more than one for some other purpose, or so it seems.
>>>So, I ask, whether the reference to three things, on which the notion
>>>of triadic facts and relations appears to depend, is something so
>>>firm as to lead us to generally deny that what are taken as three
>>>things for some purpose may be taken as more or less than three for
>>>some other purposes?
>>>Twelve eggs are also 1 dozen. Right? Or is this somehow just common-sense
>>>nominalism, which we want to get beyond in a critical spirit?
>>>
>>>Howard

----------------------------------------------------------------------

Subject: loose ends
From: Jon Awbrey <jawbrey@oakland.edu>
Date: Sat, 08 Feb 2003 15:58:41 -0500
X-Message-Number: 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

was just about to break for today, but the same idea is
already implicit in the "comma operator" of the 1870 LOR.
it's one of the things that makes all of peirce's systems
work so well as "extensible logical languages" (ELL's).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

----------------------------------------------------------------------

Subject: Re: loose ends
From: Gary Richmond <garyrichmond@rcn.com>
Date: Sat, 08 Feb 2003 17:02:19 -0500
X-Message-Number: 7

But note, Jon, that Peirce doesn't say that everything that needed
to be accomplished in the development of his graphical logic had been
accomplished even at the date of the paper from which I quoted
(let alone in 1870).

> CP:  Since no perfectly determinate proposition is possible, there is 
> one more reform that needs to be made in the system of existential 
> graphs. Namely, the line of identity must be totally abolished, or 
> rather must be understood quite differently. We must hereafter 
> understand it to be potentially the graph of teridentity by which 
> means there always will virtually be at least one loose end in every 
> graph. In fact, it will not be truly a graph of teridentity but a 
> graph of indefinitely multiple identity.

Rather, he says, "one more reform . . . needs to be made in the system
of existential graphs," and this explication of the line of teridentity
has been very "suggestive" to me of a possible thrusting forward that
I am beginning to see underlying philosophical rhetoric, which is then
seen as the last phase of methodeutic. For example, in the thread
concerning pragmaticism's place within Peirce's classification of the
sciences, I argued at one point that methodeutic, as pure rhetoric,
thrusts forward into metaphysical inquiry with all the power of the
great analytical tools that logic as semiotic provides it (and of
which Peirce is typically the discoverer or inventor).

But the gamma graphs are incomplete; they only hint at a project,
a future project (perhaps our generation's project, were we to take it
up now that we could really develop the collaboratory web tools that
Doug Engelbart, Ted Nelson, Jack Park, and others have been exploring
for some time now). Of course much more thought needs to be given as
to how a community of inquiry ought to be established within the HCI
context, what tools would be needed, the nature of the investigations
to be undertaken, the place of gamma-graph-like things within that
investigation, and so forth.

I'm especially interested in the future of research, Jon, not its
paths, its past, not necessarily even the history of the evolution of
Peirce's logic as semiotic. No doubt his graph systems are prefigured
by the important work of 1870. I'm certainly glad that others -- that
you, especially -- not suffering my narrowness are interested, because
this is important work with its own "leading principles" and itself
serves as a critique and antidote to that infamous tradition you love
to hate :-)  (My own position is something like: Let us within the
Peirce community move on, whatever "they" might do.)

Still, for the ongoing discussion, what's your point? Truly, I can't tell
what the ELL you are pointing to when you write:

>the "comma operator" [is]one of the things that makes all of peirce's systems
>work so well as "extensible logical languages" (ELL's).
>
If the "comma operator" later becomes the line of teridentity, i.e.,
" a graph of indefinitely multiple identity," then this tends to
interest me more than its origins. Is a kind of e-gamma graph yet 
conceivable?
What would it do? What would it look like? This kind of thing. . .

But, before I incur your wrath,  I must admit a narrowness in my interest
in Peirce's work these last few years. I find that I  tend to study the 
Peirce
of  the turn of the century and later, from about 1898 on. There's so much
Peirce to read and re-read, that I have found it necessary to concentrate
on works after 1900. Yes, a severe limitation, a narrowness of focus, 
perhaps.
Keep up your good work.

Best,

Gary

Peirce: CP 4.583
An Improvement on the Gamma Graphs
    583. The System of Existential Graphs recognizes but one mode of 
combination of ideas, that by which two indefinite propositions define, 
or rather partially define, each other on the recto and by which two 
general propositions mutually limit each other upon the verso; or, in a 
unitary formula, by which two indeterminate propositions mutually 
determine each other in a measure. I say in a measure, for it is 
impossible that any sign whether mental or external should be perfectly 
determinate. If it were possible such sign must remain absolutely 
unconnected with any other. It would quite obviously be such a sign of 
its entire universe, as Leibniz and others have described the 
omniscience of God to be, an intuitive representation amounting to an 
indecomposable feeling of the whole in all its details, from which those 
details would not be separable. For no reasoning, and consequently no 
abstraction, could connect itself with such a sign. This consideration, 
which is obviously correct, is a strong argument to show that what the 
system of existential graphs represents to be true of propositions and 
which must be true of them, since every proposition can be analytically 
expressed in existential graphs, equally holds good of concepts that are 
not propositional; and this argument is supported by the evident truth 
that no sign of a thing or kind of thing -- the ideas of signs to which 
concepts belong -- can arise except in a proposition; and no logical 
operation upon a proposition can result in anything but a proposition; 
so that non-propositional signs can only exist as constituents of 
propositions. But it is not true, as ordinarily represented, that a 
proposition can be built up of non-propositional signs. The truth is 
that concepts are nothing but indefinite problematic judgments. The 
concept of man necessarily involves the thought of the possible being of 
a man; and thus it is precisely the judgment, "There may be a man." 
Since no perfectly determinate proposition is possible, there is one 
more reform that needs to be made in the system of existential graphs. 
Namely, the line of identity must be totally abolished, or rather must 
be understood quite differently. We must hereafter understand it to be 
potentially the graph of teridentity by which means there always will 
virtually be at least one loose end in every graph. In fact, it will not 
be truly a graph of teridentity but a graph of indefinitely multiple 
identity.

Jon Awbrey wrote:

>o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o
>
>gary,
>
>was just about to break for today, but the same idea is
>already implicit in the "comma operator" of the 1870 LOR.
>it's one of the things that makes all of peirce's systems
>work so well as "extensible logical languages" (ELL's).
>
>jon
>
>o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o


o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Work 9

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JA = Jon Awbrey
SS = Seth Sharpless

JA, referring to CP 4.311:

    Peirce is making a point about operational structure that is closely
    related to the pragmatic maxim.  In themselves, as monadic elements
    of a group, all of the group elements are in the first instance alike.
    Still, in the second instance, they can be recognized as distinct from
    one another "by their effects", that is to say, by the way that they
    act on one another.  This refers to the "regular representations" of
    group elements acting on the group itself. See the following etude:

JA: http://www.altheim.com/cs/difflogic.html
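
NB.  A minimal Python sketch of the "regular representation" point above
     may help.  It is not part of the original exchange, and the choice
     of the cyclic group Z3 is purely illustrative:

    # Each element of Z3 = {0, 1, 2} under addition mod 3 is represented
    # by the permutation it induces on the whole group by acting on it.
    # As bare labels the elements look interchangeable; their action
    # rows tell them apart "by their effects".

    elements = [0, 1, 2]

    def action_row(g):
        """Results of g acting on every element of the group."""
        return tuple((g + x) % 3 for x in elements)

    for g in elements:
        print(g, "acts as", action_row(g))

    # The regular representation is faithful: distinct elements induce
    # distinct action rows.
    assert len({action_row(g) for g in elements}) == len(elements)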

JA: The version of Leibniz's principle of indiscernibility to
    which Peirce is alluding in this connection, I am guessing,
    is the one that says:  "No two monads can be exactly alike".
    And the objection that Peirce is really making is to the
    logical necessity for it.  So, my guess is that Peirce
    is talking about a symmetry principle, something akin
    to an equivalence or an invariance principle, say,
    like all reference frames or POV's are alike,
    but that does not diminish the diversity
    of phenomena or the potential for truth
    one little bit.

SS: If I understand you, I think your comment is fair.
    The CP 1.456 quote is the most troublesome. 

   | Two drops of water retain each its identity and opposition to the
   | other no matter in what or in how many respects they are alike.
   | Even could they interpenetrate one another like optical images 
   | (which are also individual), they would nevertheless react,
   | though perhaps not at that moment, and by virtue of that
   | reaction would retain their identities.  (CP 1.456, 1896).

SS: Because this quote seems to say that two entities could be indiscernible
    in ALL respects (even spatial) at time t, and yet, non-identical at time t,
    because they could interact at time t+1. 

SS: Problem:  What could be the possible practical effects of
    swapping image one and image two at time t, when they are
    coincident and completely indistinguishable?  And if there
    are none, then how can we say that the images "retain their
    identities" when they are indistinguishable from each other? 

SS: Further, how can we distinguish practically between the case
    in which one image gives way to two from the case in which
    there are always two images, the two just being coincident
    at one time?

SS: Of course, there is a certain 'intensional' simplicity in assuming
    that there are two entities which retain their identities throughout.
    That constraint would limit the range of "possible worlds", so to speak
    (since we are falling back on Leibniz).  That would put the criteria of
    identity back in the intensional fold (where I think it belongs). 

SS: Thank you for your courteous reply.  I am sorry that I don't have
    time to read your paper, and I will not be able to respond until
    after the holidays. 

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Work 8

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

4.  What decomposition or reduction is.

JA: In this regard, JC commits what has long been known as the fallacy
    of the "Alchemist's Dodge" or the "(Philosophers') Stone Soup".
    It goes a bit like this:

JA: You can make a hearty soup out of nothing but stones and hot water ...
    if you add a pinch of salt ...
    if you add a dash of pepper ...
    if you add a few potatoes ...
    if you add a carrot or two ...
    if you add a hock of ham ...
    if you add ...

JA: And so it goes.

JA: The moral of this particular telling of the story
    is that a reduction is not a proper reduction if
    it does not reduce something to something lower.

JA: JC's purported "analysis" goes like this:

JA, quoting JC:

    | Between(ham, upper, lower)
    |
    | iff
    |
    | Above(upper, lower)
    |
    | and
    |
    | Above(ham, lower)
    |
    | and
    |
    | Above(upper, ham).

JA: Each of the bits of connective tissue that are signified by the
    uses of the word "and" invokes a 3-adic relation, in other words,
    an operation that passes from two sentences to a conjoint sentence --
    or from their truth values to the conjunction of their truth values,
    according to your taste -- and so the putative "reduction" is really
    the sort of "improper" reduction that reduces something to something
    of the same complexity, in this case, more 3-adic relations.
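
NB.  A minimal Python sketch (not in the original post; the numeric
     heights and names below are illustrative only) of the point that
     the "and" doing the work in the purported reduction is itself a
     3-adic affair:

    def above(x, y):
        """Dyadic relation: x is above y (heights as stand-ins)."""
        return x > y

    def between_jc(ham, upper, lower):
        """JC-style 'reduction' of Between to three dyadic Above facts."""
        return above(upper, lower) and above(ham, lower) and above(upper, ham)

    # Viewed extensionally, the conjunction used above is the 3-adic
    # relation AND c B x B x B, pairing two truth values with their
    # conjunction -- the "connective tissue" pointed to in the text.
    AND = {(p, q, p and q) for p in (False, True) for q in (False, True)}

    print(between_jc(ham=1.0, upper=2.0, lower=0.0))  # True
    print(sorted(AND))                                # four triples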

JA: All of these things were commonplace understandings well
    before Boole, DeMorgan, Peirce, & Co. got into the act.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Work 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SS: An aside on idealized individuals as "logical atoms":
    But there is another reason for calling it a "degenerate" case.

SS: That is because for Peirce (1870), no actual existents can conform to (2)!!
    See CP3.93 and CP3.93_n and my post sent 11/18/2002.  Only infinitely
    determinate individuals (he calls them "logical atoms") conform to
    both clauses of Leibniz's Law, and such infinitely determinate
    individuals are at best limiting cases -- ideal entities
    belonging to the realm of the possible.

Here you are falling victim to a typical mistake that people today
frequently make when reading a classical or a scholastic text, and
Peirce in 1870 is still writing very much in that vein.  The average
old school writer will start out -- and "start out" very often means
"once or twice at the start of his/her career" -- being very careful
to distinguish the attributes of things from the features of signs,
and will always write out "general name" and "individual term" and 
such every time.  But a writer soon tires of that, and he/she tends
to assume that nobody in the prospective audience could very likely
mistake which they mean, anyway, and so he/she just goes ahead and
writes "general" and "individual", as the case may be, for either
the term or the thing in question.

In this case, "logical atom" is a "term" which is ...

(But not for that reason unreal; unactualized possibilities are real for Peirce.)
The existents that we usually call "individuals" (and that the Schoolmen called
"singulars") are, of necessity, partially indeterminate with respect to their
possible features.  They are such that if they exist, they have to exist for
more than an instant and undergo change in that interval;  thus, the entity
at instant one could be said to be identical to the entity at instant
two only in some respects, not numerically.

Now, in most of his early logical writings, his theory of relations, etc.,
Peirce was concerned with ideal individuals, logical atoms, for which both
clauses of Leibniz's Law are supposed to hold.  But these are idealized 
entities, not actual existents, only possibles.  Actual existents have
a penumbra of indeterminacy, as a consequence of which they do not
behave like logical atoms, so we look to what is determinate and
constant in them as criteria of identity.
~~~~~~~~~~~End of aside~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 

J: You obviously understand that any statement involving a phrase
   like "all predicates", "all properties", or "every respect" is
   to be regarded with extreme circumspection. Why can you not
   accord to Peirce the right that we all assume for ourselves,
   to wit, of having to look at it from many different angles?
   As I see it, there is a host of ambiguities lurking
   in all these concepts, which cannot be addressed short
   of saying what one means by "all", "every", "predicate",
   "property", and "respect".

S: Yes. It is conceivable that the early and late writings on identity
   which seem conflicted are just a matter of "having to look at it from
   different angles", but it is also conceivable that Peirce changed his mind.
   I don't think it is a disgrace to do so.  I'm still open to both hypotheses,
   but currently favoring the second. 

J: I will continue with the reading from Leibniz, which I began for two reasons:
   one, to introduce some of the terminology that Peirce was taking for granted
   in his writing about such concepts as "composite", "individual", "primitive",
   "simple", and so on; two, in order to give an account of Leibniz's principle
   as Leibniz was given to write about it.

S: Well, Leibniz is always worth reading, but I doubt that reading
   more Leibniz will be much help in understanding Peirce. 

S: The question is whether one needs an intensional language to describe the
   world (an "extensionalist" such as Quine thought not), or whether certain
   components of natural languages, such as proper names, have intensions
   (J. S. Mill, Kripke, and most late 20th century philosophers of logic,
   following Kripke, thought not; Carnap, surprisingly, allowed individual
   variables to have intensions in "Meaning and Necessity").

S: As for the proper use of the words 'comprehension' and 'intension',
   Peirce has a wonderful historical review of such usages, in which
   he points out that the word 'intension' by his time had come to be
   used in place of the Port Royal logicians' 'comprehension' owing to
   the influence of Hamilton and Leibniz (CP 2.393 & 2.393 n). (Peirce
   thought that 'intension' had the disadvantage that it was "liable to
   be confounded with 'intensity'".)

SS: Peirce himself preferred the terms 'breadth' and 'depth'.
    His expression 'essential depth' perhaps comes closest to
    the late 20th century usage of 'intension'.  He defines the
    essential depth of a term as "the really conceivable qualities
    predicated of it in its definition" (CP 2.409). 

SS: Modern logicians relativize extension and intension to languages
    with fixed interpretations.  Peirce assumed a growing language, in
    which the interpretation could change, and relativized breadth and
    depth to state of knowledge.

SS: These are important differences but I do not think they have
    an essential bearing on the problem that I raised concerning
    Peirce's theory of identity.  In his later writings (1902),
    Peirce said of this earlier work:

CSP: But I was too much taken up in considering syllogistic forms and
     the doctrine of logical extension and comprehension, both of which
     I made more fundamental than they really are.  As long as I held that
     opinion, my conceptions of Abduction necessarily confused two different
     kinds of reasoning (CP 2.102). 

SS: This again reminds us that Peirce sometimes
    changed his mind, as he said in 1903:

    ~~~~~~~~~Quote from Peirce, CP1.20~~~~~~~~~~ 
    I have since [1871] very carefully and thoroughly revised my
    philosophical opinions more than half a dozen times, and have
    modified them more or less on most topics.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Work 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Tue, 12 Nov 2002 11:36:20 -0700

SS = Seth Sharpless

Howard,

SS: In connection with the following quote from Peirce:
    ----------------------------------------------------
CP: ... there is one more reform that needs to be made in the system
    of existential graphs. Namely the line of identity must be totally
    abolished, or rather must be understood quite differently.  We must
    hereafter understand it to be potentially the graph of teridentity by
    which means there always will be at least one loose end in every graph.
    In fact, it will not be truly a graph of teridentity but a graph of
    indefinitely multiple identity.  CP 4.583 (1906).

    -------------------------------------
    you wrote:
    -------------------------------------

HC: So part of the reason I found this last quotation you provided as
    something of particular interest, is that Peirce writes of a graph "not
    be[ing] truly a graph of teridentity but a graph of indefinitely
    multiple identity." Though I haven't checked the context of the remark,
    I am wondering if this does not suggest something of the indefinite
    possibilities of cross-reference and identification of cross-reference,
    otherwise achieved by means of quantified variables. Can you say
    anything further on this? Does Peirce take us beyond teridentity? 
    ----end of quotes---------------------------

SS: I had assumed that the indefinitely multiple identity would be a
    consequence of the fact that any higher-order n-adic identity relation
    could be constructed from triadic identity relations, in accordance with
    the more general statement of Peirce's reduction theorem, whereas this
    would not be the case for dyadic identity relations, the
    "indefiniteness" arising because there will always be at least one
    "loose end" hanging if every identity line joining two points is
    construed as potential tridentity. 
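
NB.  The first point -- that higher-arity identity can be built up from
     teridentity by bonding loose ends -- can be checked mechanically on
     a small domain.  The Python sketch below is not from the original
     message, and its domain and names are illustrative only:

    from itertools import product

    DOMAIN = ["red", "green", "blue"]

    def i3(a, b, c):
        """Teridentity: all three arguments are the same."""
        return a == b == c

    def i4_direct(a, b, c, d):
        """4-adic identity, stated directly."""
        return a == b == c == d

    def i4_composed(a, b, c, d):
        """4-adic identity, composed from two teridentities by bonding
        a shared, existentially quantified loose end x."""
        return any(i3(a, b, x) and i3(x, c, d) for x in DOMAIN)

    assert all(i4_direct(*t) == i4_composed(*t)
               for t in product(DOMAIN, repeat=4))
    print("I4 agrees with the composition of two I3's over the domain.")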

SS: Lacking access to Burch's papers at present, I am uncertain how
    the presumed indecomposability of tridentity and the decomposability of
    n-adic (n > 3) identity relations play into the proof of Peirce's
    theorem generally. That is, is this just an assumption for identity
    relations, with the proof of the theorem for n-adic relations (other
    than identity) being dependent on that assumption, or does he have some
    kind of proof of the theorem in application specifically to identity
    relations, upon which he bases the proof of the broader theorem (which
    applies to n-adic relations other than identity)? Cathy Legg has
    something illuminating to say about this in her note of 11/7.
    Your point about the "indefinite possibilities of cross-referencing"
    by way of quantified variables in the predicate calculus sounds
    interesting, but I don't think I understand it. One would think that
    there must be some limitation (of cross-referencing?) in Peirce's system
    not present in the predicate calculus; otherwise, one would think that
    Quine's proof that all relations can be constructed from dyadic ones
    would apply in Peirce's system as well. Is your paper on
    cross-referencing available on the web or otherwise?

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Work 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway

HC: I thought the following two quotations from a prior posting of yours 
    especially interesting, so I am going to repeat them here:

SS: 1896 Tridentity decomposable
    ---------------------------------
    Now, identity is essentially a dual relation.  That is, it requires
    two subjects and no more.  If three objects are identical, this fact
    is entirely contained in the fact that the three pairs of objects are
    identical.  CP1.446 (1896).
    ---------------------------------

SS: In this passage, Peirce appears to be saying that identity should be
    thought of in terms of a predicate expression with two places, like so,
    "___is identical with___" or "___=___".  So if we want to say that
    A, B, and C are identical, then this is properly expressed using
    such a "dual" predicate, say, "A=B & B=C".  Or Peirce may have in
    mind something more complex, such as "A=B & B=C & A=C", as an
    appropriate form of expression, though this perhaps appears to
    us redundant.  I have avoided Peirce's talk of "subjects" and
    "objects", as possibly unclear. (After all, if A=B & B=C, then
    we only have one object, with three names.)

HC: The passage above contrasts, as you note, with the following passage
    which brings up the concept of "teridentity".  This concept is one
    which Peirce makes use of in his system of existential graphs.

HC: You quote Peirce:

SS: 1906 Tridentity not decomposable
    -----------------------------------
    there is one more reform that needs to be made in the system of
    existential graphs. Namely the line of identity must be totally
    abolished, or rather must be understood quite differently. We must
    hereafter understand it to be potentially the graph of teridentity by
    which means there always will be at least one loose end in every graph.
    In fact, it will not be truly a graph of teridentity but a graph of
    indefinitely multiple identity. CP4.583 (1906)
    -------------------------------------

HC: The concept of teridentity involves thinking of identity by means
    of a predicate with three and only three places, which we might
    think of, translating to the predicate calculus, as something
    like this:  "___is identical with___and___".  Instead of writing
    "A=B & B=C", in accord with the concept of teridentity, we are
    to write "A is identical with B & C", or words to that effect.
    It is a bit of a mystery why Peirce should prefer this, and it
    is a difficult point to explain without going into the system
    of existential graphs explicitly.  Of course we know that Peirce
    likes things in threes, but that is not all there is to the story
    of teridentity.  Teridentity seems to be required by his system
    of notation in the existential graphs -- that is how I understand
    the matter, and how I explained it in a review I published a couple
    of years back.  In the system of existential graphs, every predicate
    is to have exactly one, two, or three open places, to be filled, or
    connected to those of other predicates.  I remarked in my review, as
    I recall, that this interconnecting of predicate expressions played
    some of the role of cross-reference via quantified variables in the
    standard notation for the predicate calculus.  The difference is
    that using such cross reference, we allow for the possibility of
    identifying any number of cross references.  (We needn't decide,
    I think, that every predicate is once-and-for-all one-placed,
    two-placed, or N-placed, whatever the value of such decisions,
    in particular cases.) 

HC: So part of the reason I found this last quotation you provided
    as something of particular interest, is that Peirce writes of a
    graph "not be[ing] truly a graph of teridentity but a graph of
    indefinitely multiple identity".  Though I haven't checked the
    context of the remark, I am wondering if this does not suggest
    something of the indefinite possibilities of cross-reference and
    identification of cross-reference, otherwise achieved by means
    of quantified variables.  Can you say anything further on this?
    Does Peirce take us beyond teridentity?

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Work 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway
SS = Seth Sharpless

SS: Your point about the "indefinite possibilities of cross-referencing"
    by way of quantified variables in the predicate calculus sounds
    interesting, but I don't think I understand it.

HC: I just took a look at the passage from Peirce which you quoted,
    and Peirce was certainly aware of the kind of phenomenon I mentioned
    in this connection.  So, briefly, suppose that the following is true:
    (Ex)(Ey)(Fx & Gy).  This tells us that something is an F and something
    is a G, but it is uninformative about, say, whether something is both an
    F and a G.  That is consistent with the statement but not logically implied.
    Both the variables "x" and "y" range over the same domain, we naturally suppose,
    but whether it is any of the same elements of the domain of which "F" and "G" can
    be truly predicated, we do not know.  Or we could put it otherwise and say that the
    denotations of "F" and "G" are only partly determined, in reference to questions
    we might ask.  But if we add an identity to the statement, then we get something
    more definite, as with, say, (Ex)(Ey)[(Fx & Gy) & Ixy].  Here "Ixy" establishes
    a cross-reference between the two variables, while without this, we only have the
    distinctive cross-reference of each variable to its corresponding quantifier.
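
NB.  HC's contrast can be model-checked directly.  The small Python
     sketch below is not part of the original message; the two-element
     model is an illustrative choice in which F and G happen to hold of
     different individuals:

    # (Ex)(Ey)(Fx & Gy)  versus  (Ex)(Ey)[(Fx & Gy) & Ixy]

    domain = {"a", "b"}
    F = {"a"}                      # F is true of a only
    G = {"b"}                      # G is true of b only
    I = {(x, x) for x in domain}   # identity relation on the domain

    without_identity = any(x in F and y in G
                           for x in domain for y in domain)
    with_identity    = any(x in F and y in G and (x, y) in I
                           for x in domain for y in domain)

    print(without_identity)  # True:  something is F and something is G
    print(with_identity)     # False: nothing in this model is both F and G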

SS: One would think that there must be some limitation (of cross-referencing?) 
    in Peirce's system not present in the predicate calculus; otherwise, one 
    would think that Quine's proof that all relations can be constructed from 
    dyadic ones would apply in Peirce's system as well.  Is your paper on 
    cross-referencing available on the web or otherwise?

HC: It is not really a paper on cross-reference which I mentioned.
    I was thinking of a review, of Baltzer, which appeared in the
    'Transactions', Spring 1995, pp. 445-453.  Basically I pose
    some of the related questions. If you have trouble getting 
    hold of it, let me know and I can send you a copy.  Certainly,
    Quine's argument presents some sort of challenge, or appears
    to do so;  and I think you are looking in the right direction,
    thinking about limitations on cross-reference within Peirce's
    system. 

HC: Actually, I put in a good word for Cathy in this very review,
    mentioned above.  So, she would be welcome to come back to the
    topic.  The wheels of progress on this topic have turned only
    slowly, so far as I know.  Thanks for your comments, Seth. 

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Work 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JA: It seems to me that the sources of your confusion are quite clear:

JA: 1.  You consistently ignore what Peirce said in his 1870 remark on the
        "doctrine of undividuals", that attributions of identity are relative to
        a context of discourse, an insight that he did not abandon but developed
        to its finest degree in his 1883 remark "On a Limited Universe of Marks":

        http://suo.ieee.org/ontology/msg03204.html

SS: For the benefit of those who may not be following Jon Awbrey's
    arcane argument, I'm going to try to explain it, and to explain
    why it is mostly nonsense.  For convenience, I give here the quote
    to which Awbrey is referring.  Then I shall comment on its bearing,
    if any, on logical criteria of identity.

CSP: http://suo.ieee.org/ontology/msg03204.html

SS: I comment on this quote, showing that Jon Awbrey's contention that
    it has some deep significance concerning Leibniz's law is nonsense.

SS: This passage is opposed to the kind of extensionalism advocated by Quine.
    Quine's extensional languages are ones in which classes substitute for
    properties and relations, two classes being identical if they have the
    same members.  In an intensional language, admitting properties as well
    as classes, different properties may belong to exactly the same things.
    In an intensional language, "a proposition concerning the relations of
    two groups of marks is not necessarily  equivalent to any proposition
    concerning classes of things".  Extensional languages, such as the
    first-order predicate calculus, or set theory, are adequate for
    mathematics, but it is controversial whether the sentences of
    ordinary language or the sciences in general can be translated
    into such an extensional language sentence for sentence.
   
SS: Peirce evidently thinks (as I do) that such translation is not
    always possible for ordinary "limited universes of discourse".
    However, intensionality in itself neither supports nor defeats
    Leibniz's Law.  In spite of adopting an intensional language,
    one could deny Leibniz's Law, saying, as Peirce did late in
    his career, that two raindrops could have all their properties,
    including position in space, in common (being merged together)
    and yet not be numerically identical raindrops, owing to their
    dynamic interactions with one another.  Or one could affirm
    Leibniz's Law in an intensional language, saying that two
    things are numerically identical if and only if anything
    true of one is true of the other, as Peirce did, early
    in his career.

SS: The second paragraph of the quote bears on the law of "excluded middle,"
    since, as Peirce says, in a universe of discourse in which the universe
    of characters is limited, (x)(Px V ~Px) may not hold. [It may not be
    the case that "the non-possession of any character is regarded as
    implying the possession of another character the negative of the
    first."] This paragraph is relevant to whether we require a sortal
    logic or logic admitting of presuppositions to cope with natural
    languages, but it has no bearing on Leibniz's law. 

SS: To understand the third paragraph, we have to know what Peirce means by
    an "unlimited universe of marks."  Well, here is an example: A universe
    consisting of only two "marks", namely the colors red and blue, and two
    "objects," one red and the other blue.  In this case, "the universe of
    characters is unlimited [because] every aggregate of things short of
    the whole universe of things possesses in common one of the characters
    of the universe of characters".  This is because there are only two
    non-empty aggregates short of the whole universe, namely the aggregate
    consisting of the one red thing and the aggregate consisting of the one
    blue thing.  Now, in such a universe as this, Peirce points out, when
    the class S is included in the class P, every common character of the
    S's must also belong to the P's.  Why?  Well, in this case, if class S
    is included in class P, where P is an "aggregate of things short of the
    whole universe," S and P must be the same class. 
 
SS: But so what? Neither in this odd universe, nor in the more common
    "limited" universes of ordinary discourse, described by an intensional
    language, is there reason to deny Leibniz's Law. If Peirce chose to
    deny Leibniz's law, as he did late in his career, he did so on grounds
    not relevant to this passage, in spite of Jon Awbrey's handwaving.

SS: If I can muster up the patience, I will respond to Jon Awbrey's second
    point (below) in another message. But to make a long story short, it is
    just more self-aggrandizing handwaving, having no bearing on the
    significance of the quotes in question. 

JA: 2. You consistently ignore the distinctions and the meanings of technical
       terms in mathematics, as they were used by Peirce and as they have been
       used for at least 200 years, for instance, the meanings of "composition"
       and "reduction" as they are used in the algebra of relations and in the
       logic of relative terms.

JA: I have pointed out all of this to you before.  It is all pretty clear to
    anybody who does not have a prior theory of what can qualify as a theory
    of identity, or a philosophy, for that matter, that they keep trying to shove
    Peirce's more general theory, and more general philosophy, into. 

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Work 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SS = Seth Sharpless

SS: Thanks to Cathy, Jon, Joe, Howard, and Benjamin for responding to
    my request for help in understanding Peirce's theory of identity.

SS: Let me try to make my quandary clear:  I would have expected Peirce
    to take a position on identity akin to that which I have described as
    a "realist's" position, namely, that identity is always with respect to
    some universal or type.  For example, a is the same as b with respect to
    color, or size, or personhood (the same "person" as), etc.  Then, what has
    been called (since Aristotle) "numerical identity" could be regarded as
    a "degenerate" form of relative identity, in which a is the same as b
    in 'every' respect.  This would make Leibniz's law of indiscernibles
    true for numerical identity but not for identity simpliciter, since
    a could be identical to b in one respect and different in another.
    In the late 20th century, the classic defense of a "realist" theory
    of identity of this kind is that of Peter Geach ("Identity", reprinted
    in 'Logic Matters', Univ Cal. Press, 1972, 238-249).  The opposing
    "nominalistic" position is expounded by Quine and Wiggins in their
    respective criticisms of Geach:  Quine, Phil. Rev, 1964, p. 102 and
    Wiggins, 'Identity and Spatio-Temporal Continuity', Blackwell, 1971.

SS: However, surveying passages on identity or sameness in 'Collected Papers' and
    most of the 'Writings' (I don't have access to Vol 4 of the latter at the
    moment) has failed to support this hypothesis.  Indeed, I find it difficult
    to extract a coherent theory of identity from Peirce's writings, at least
    up until 1903 or thereabouts.  Just to give an example of the problem,
    Cathy and Jon mentioned Peirce's concept of teridentity, which seems
    to be essential to the proof of "Peirce's Theorem" about reducibility
    to triads.  Cathy recommends Burch's papers in Houser's book.  (I have
    looked at these papers in the past, but the book is not available to me
    at the moment, so I'm relying on CP and 'Writings' and dating passages in
    CP is difficult as you know.)  The first reference I can find to teridentity
    in CP is from "A Syllabus of Certain Topics of Logic" (1903).  But here is the
    problem:  As late as 1896, Peirce wrote: 

SS, quoting CSP:

    | Now, identity is essentially a dual relation.
    | That is, it requires two subjects and no more.
    | If three objects are identical, this fact is
    | entirely contained in the fact that the three
    | pairs of objects are identical.  CP1.446 (1896)

SS: So trying to make a coherent story out of Peirce's writings on identity,
    most of which precede his development of the concept of teridentity, even
    though the latter is essential to the proof of "Peirce's theorem" and must
    somehow have played a role in his thinking, if not a conscious one, early
    on, is exceedingly difficult.  There are many interesting comments on
    identity scattered through Peirce's papers, but I have not been able
    to make a coherent story out of them, even of the ones preceding the
    explicit development of the teridentity theory.  I sometimes have the
    impression in his earlier writings on this subject that he is all over
    the map. In any case, I have to say that neither his late theory of
    teridentity nor his earlier treatments of identity as an essentially
    binary relation seem to be compatible with what I have called the
    "realist" theory of identity.  In discussing Peirce's theory of
    identity, one has to remember his devotion to Scotus's haecceities,
    but making things difficult for the interpreter, Peirce has put his
    own spin on haecceities.  This comes out in his observation that
    "Even Duns Scotus is too nominalistic when he says that universals
    are contracted to the mode of individuality in singulars, meaning,
    as he does, by singulars, ordinary existing things" (8.208, 1905). 

SS: Jon and Joe cite the pragmatic maxim, either Peirce's or James's version:
    "A difference that makes no difference is no difference at all", as the
    way to discover Peirce's views on identity.  But one has to remember
    that it took Peirce himself many years and the development of his
    theory of synechism to show that the incommensurability of the
    diagonal conforms to the requirements of the pragmatic maxim,
    that it is a "difference that makes a difference." 

SS: I admit to utter confusion over Peirce's theory of identity,
    and further help would be welcome.

SS: Things are not helped by Peirce's devotion to coining new words.
    Here is a classic bit of Peirceana touching on identity, I think:

SS, quoting CSP:

| CP 3.586. There is but one ambilative suilation. It is the juxtalation,
| or coëxistence. There is but one contrambilative suilation: it is the
| relation of individual identity, called numerical identity by the
| logicians. But the adjective seems needless. There is but one ambilative
| [contra] suilation: it is the relation of individual otherness, or
| negation. There is properly no contrambilative contrasuilation: it would
| be the absurd relation of incompossibility. These four relations are to
| be termed the Four Cardinal Dyadic Relations of Second Intention. It
| will be enough to call them the cardinilations, or cardinal relations.
| 587. Any peneperlation or penereperlation is a juxtambilation; any
| perlation or reperlation is, in addition, a juxtasuilation. Any
| penecontraperlation or penecontrareperlation is an extrambilation: any
| contraperlation or contrareperlation is, in addition, an extrasuilation.
| Every ambilation is a penereperlative penereperlation: every
| contrambilation is a penecontrareperlative penecontraperlation. Every
| suilation is a juxtareperlative juxtaperlation: every contrasuilation is
| an extrareperlative extraperlation.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Work 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SS = Seth Sharpless

SS: Yes, the fact that there can be different dimensions along which
    "identity" can be assessed is brought home forcibly in lexicography,
    where, for example, we have to deal with: phonetic identity and
    typographic identity, identity of reference, identity of meaning,
    homonymy, synonymy, etc.  Philosophic and logical theories of identity
    often do not seem to do justice to this problem of criterial identity
    (i.e., "sameness in respect to"), though it does seem that in most
    commonplace judgements of identity (e.g., same man, same word, same
    river, same meaning, etc.), it is always sameness in respect
    of some criterion or property that is at issue. The hoary problem of
    indiscernibility of identicals vs identity of indiscernibles is with us
    yet in philosophy of logic.  And for this reason, perhaps, Harris has
    a point in focusing on the type-token distinction as a problem area.
    However, it seems to me that this problem is most acute for the
    nominalist, who in judging identity, has always to come back to 
    "same particular".  As a student of Quine, you must have thought
    a good deal about this problem.

SS: Quine objects to properties on the ground that we do not have
    criteria of identity for properties.  For Quine, one would need
    criteria of identity for classes (or properties, if one insists
    on admitting them) but no criteria of identity for individuals;
    individuals ARE criteria of identity for Quine.  To know whether
    class 'a' = class 'b', one looks to the individuals that they
    (or member classes) contain. 

SS: As a realist, I take exactly the opposite view.  I would say
    that one needs criteria of identity for individuals but not
    for properties (or at least not for all properties) since
    properties ARE criteria of identity.  (Of course, Quine --
    like Russell in his nominalistic phase -- has a certain
    amount of trouble in specifying what is an individual;
    time-slices and all that.)

SS: From a logical or metaphysical point of view, this is
    a bewildering and fundamental problem area.  I don't
    feel very confident about Peirce's theory of identity.
    A computer search of 'Collected Papers' has left me
    somewhat confused.  Have you (or any lister) a good
    idea of Peirce's philosophy of identity? Of course,
    I would expect him to take something like what
    I have called the "realist's" position above. 

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Reference Materials

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

You Can't Tell the Topics from the Fallacies without a Programme:

1.  Topic.  What is a relation?

1.1.  Fallacy.  The "I lied about the set" (ILATS) indirection.

2.  Topic.  What is a relation instance?
            aka:  elementary relation, individual relation, tuple.

2.1.  Fallacy.  The "direct and full access to beings in themselves"
                (DAFATBIT) fallacy.

2.2.  Fallacy.  The "change of variables" (COV) switcheroo.

3.  Topic.  What is the difference between a relation and one of its instances?

3.1.  Fallacy.  The "one row database" (ORD) illusion.
3.2.  Fallacy.  The "levitation" trick.

4.  Topic.  What is decomposition or reduction?

4.1.  Fallacy.  The "2-ped dog" (2PD) dogma.
4.2.  Fallacy.  The "alchemist's dodge" (AD).
4.3.  Fallacy.  The "stone soup" (SS).
4.4.  Fallacy.  The "who's on third" (WOT) joke.
      AKA.  "Truth-functions compute themselves".

ILATS.  Diagnostic Criterion.

| A person who does not present the decomposition of a set with respect to sets
| is not presenting the decomposition of a relation with respect to relations.
 
o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Limited Mark Universes

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| On A Limited Universe Of Marks
|
| Boole, De Morgan, and their followers, frequently speak of
| a "limited universe of discourse" in logic.  An unlimited universe
| would comprise the whole realm of the logically possible.  In such
| a universe, every universal proposition, not tautologous, is false;
| every particular proposition, not absurd, is true.  Our discourse
| seldom relates to this universe:  we are either thinking of the
| physically possible, or of the historically existent, or of
| the world of some romance, or of some other limited universe.
|
| But besides its universe of objects, our discourse also refers to
| a universe of characters.  Thus, we might naturally say that virtue
| and an orange have nothing in common.  It is true that the English
| word for each is spelt with six letters, but this is not one of the
| marks of the universe of our discourse.
|
| A universe of things is unlimited in which every combination of characters,
| short of the whole universe of characters, occurs in some object.  In like
| manner, the universe of characters is unlimited in case every aggregate
| of things short of the whole universe of things possesses in common one
| of the characters of the universe of characters.  The conception of
| ordinary syllogistic is so unclear that it would hardly be accurate
| to say that it supposes an unlimited universe of characters;  but
| it comes nearer to that than to any other consistent view.  The
| non-possession of any character is regarded as implying the
| possession of another character the negative of the first.
|
| In our ordinary discourse, on the other hand, not only are both universes limited, but,
| further than that, we have nothing to do with individual objects nor simple marks;
| so that we have simply the two distinct universes of things and marks related to
| one another, in general, in a perfectly indeterminate manner.  The consequence
| is, that a proposition concerning the relations of two groups of marks is not
| necessarily equivalent to any proposition concerning classes of things;  so
| that the distinction between propositions in extension and propositions in
| comprehension is a real one, separating two kinds of facts, whereas in the
| view of ordinary syllogistic the distinction only relates to two modes of
| considering any fact.  To say that every object of the class S is included
| among the class of P's, of course must imply that every common character of
| the P's is a common character of the S's.  But the converse implication is by
| no means necessary, except with an unlimited universe of marks.  The reasonings
| in depth of which I have spoken, suppose, of course, the absence of any general
| regularity about the relations of marks and things.  (CSP, SIL, 182-183).
|
| CSP, SIL, pp. 182-186.  (CP 2.517-531;  CE 4, 450-453).
|
| Charles Sanders Peirce, "On A Limited Universe Of Marks" (1883), in:
| CSP (ed.), 'Studies in Logic, by Members of the Johns Hopkins University',
| Reprinted with an Introduction by Max H. Fisch and a Preface by Achim Eschbach,
|'Foundations of Semiotics, Volume 1', John Benjamins, Amsterdam, NL, 1983.
|
|'Writings of Charles S. Peirce:  A Chronological Edition, Volume 4, 1879-1884',
| Peirce Edition Project, Indiana University Press, Bloomington, IN, 1986.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LMU.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Whenever we read what writers have written, we should try to figure out
what they were trying to say, within their own frame of reference, and
with an eye to their own chief motives, and put off till later, maybe,
the task of staking them out on the dissecting trays of our chosen
isms and anti-isms.  It is just barely conceivable, after all,
that the pre-dissected writer is trying to tell us something
of import about the limitations of our taxonomic labels.

In that spirit, I will return to Peirce's remark
"On A Limited Universe of Marks" and try to give
it an independent reading.

http://suo.ieee.org/ontology/msg03204.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LMU.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

It will be necessary to do a little bit of background reading.
For ease of study, I'll break up long paragraphs as I see fit.

| Still it may be well to consider the matter a little further.
| Imagine, then, a particular case under Boole's calculus, in
| which the letters are no longer terms of first intention,
| but terms of second intention, and that of a special kind.
| Genus, species, difference, property, and accident, are
| the well-known terms of second intention.  These relate
| particularly to the 'comprehension' of first intentions;
| that is, they refer to different sorts of predication.
| Genus and species, however, have at least a secondary
| reference to the 'extension' of first intentions.
|
| Now let the letters, in the particular application of
| Boole's calculus now supposed, be terms of second intention
| which relate exclusively to the extension of first intentions.
| Let the differences of the characters of things and events be
| disregarded, and let the letters signify only the differences
| of classes as wider or narrower.  In other words, the only
| logical comprehension which the letters considered as terms
| will have is the greater or less divisibility of the classes.
|
| Thus, 'n' in another case of Boole's calculus might,
| for example, denote "New England States";  but in the
| case now supposed, all the characters which make these
| States what they are being neglected, it would signify
| only what essentially belongs to a class which has the
| same relations to higher and lower classes which the
| class of New England States has, -- that is,
| a collection of 'six'.
|
| C.S. Peirce, 'Collected Papers', CP 3.43
|
| Charles Sanders Peirce, "Upon the Logic of Mathematics",
|'Proceedings of the American Academy of Arts and Sciences',
| Volume 7, pp. 402-412, September 1867.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LMU.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| In this case, the sign of identity will receive a special meaning.
| For, if 'm' denotes what essentially belongs to a class of the
| rank of "sides of a cube", then 'm =, n' will imply, not that
| every New England state is a side of a cube, and conversely,
| but that whatever essentially belongs to a class of the
| numerical rank of "New England States" essentially belongs
| to a class of the rank of "sides of a cube", and conversely.
|
| 'Identity' of this particular sort may be termed 'equality',
| and be denoted by the sign "=".  Moreover, since the numerical
| rank of a 'logical sum' depends on the identity or diversity
| (in first intention) of the integrant parts, and since the
| numerical rank of a 'logical product' depends on the identity
| or diversity (in first intention) of parts of the factors,
| logical addition and multiplication can have no place in
| this system.
|
| Arithmetical addition and multiplication, however, will not be destroyed.
|
| 'ab = c' will imply that whatever essentially belongs at once to a class
| of the rank of 'a', and to another independent class of the rank of 'b'
| belongs essentially to a class of the rank of 'c', and conversely.
|
| 'a + b = c' implies that whatever belongs essentially to a class
| which is the logical sum of two mutually exclusive classes of
| the ranks of 'a' and 'b' belongs essentially to a class of
| the rank of 'c', and conversely.
|
| It is plain that from these definitions the
| same theorems follow as from those given above.
|
| 'Zero' and 'unity' will, as before, denote the classes which have respectively
| no extension and no comprehension;  only the comprehension here spoken of is,
| of course, that comprehension which alone belongs to letters in the system now
| considered, that is, this or that degree of divisibility;  and therefore 'unity'
| will be what belongs essentially to a class of any rank independently of its
| divisibility.  These two classes alone are common to the two systems, because
| the first intentions of these alone determine, and are determined by, their
| second intentions.
|
| Finally, the laws of the Boolian calculus, in its ordinary form,
| are identical with those of this other so far as the latter apply
| to 'zero' and 'unity', because every class, in its first intention,
| is either without any extension (that is, is nothing), or belongs
| essentially to that rank to which every class belongs, whether
| divisible or not.
|
| These considerations, together with those advanced [in CP 1.556], will,
| I hope, put the relations of logic and arithmetic in a somewhat clearer
| light than heretofore.
| 
| C.S. Peirce, 'Collected Papers', CP 3.44
|
| Charles Sanders Peirce, "Upon the Logic of Mathematics",
|'Proceedings of the American Academy of Arts and Sciences',
| Volume 7, pp. 402-412, September 1867.

NB.  A symbol that the editors transcribe as an equal sign
     with a subtended comma is here transcribed as "=,".
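
NB.  A small numerical Python sketch, added here for illustration only
     (the sets are arbitrary), of the arithmetical addition and
     multiplication that, as the passage says, "will not be destroyed"
     under the rank reading: products of independent classes multiply
     ranks, and sums of mutually exclusive classes add them.

    from itertools import product

    a = {"p", "q", "r"}        # a class of rank 3
    b = {"x", "y"}             # an independent class of rank 2
    c = {"u", "v", "w", "z"}   # a class of rank 4, mutually exclusive with a

    ab = set(product(a, b))    # what belongs at once to classes of ranks 3 and 2
    assert len(ab) == len(a) * len(b)       # ranks multiply: 3 * 2 = 6

    assert a.isdisjoint(c)
    assert len(a | c) == len(a) + len(c)    # ranks add: 3 + 4 = 7

    print("rank(ab) =", len(ab), "and rank(a + c) =", len(a | c))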

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LMU.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

There is a little more background material that I ought to fill in,
but I am going to make an initial attempt to paint the Big Picture,
and try to explain in broad terms why I think that Peirce's remarks
on "Limited Mark Universes" (LMU's) are so significant, and how they
anticipate many ideas that I personally did not encounter until the
mid 1980's, in two different fields, cognitive psychology and the
area that is known as "category theory applied to computation".

I am beginning to get a better understanding of the ways that different
thinkers differ in their thinking processes.  Peirce was what I think of
as an "exploratory heuristic" (EH) and a "3-adic relational" (3R) thinker.
Thinkers of this sort, a category to which I aspire to aggrandize myself
one day, think in very different ways from those that I am coming to
recognize as "absolute dichotomous" (AD) thinkers.

For example, you can forget all that guff about classifying
languages into "extensional" versus "intensional" brands,
and making some 'auto da fe' to one or the other article
of faith.  All the real languages that anybody really
uses in reality have names for things that can be
instances or properties in relation to suitable
other things, not to mention, but they do,
names for these relations themselves.
This is so whether we are talking
logic, math, or normal people.

I think that I will begin from what is closest to me, a problem that
I worked on all through the 1980's, one of the hot topics in AI and
cognitive science at the time, namely, "language acquisition" (LA).
You may remember the analogy that Chomsky pointed out, time and
again, between the problem of "giving a rule to abduction" and
the "poverty of the stimulus" argument for rational grammars,
that is to say, cartesian rationalism and innate grammars.

A large part of the work that I did on this problem reduced me
to working on computational models of formal language learning,
where the formal languages that I could handle from the outset
were very "impoverished" in comparison to natural languages,
but still not entirely trivial, and with many interesting
facets that would repay even a minimalist treatment.

To make a decade-long story as short as I can make it,
here are some of the ideas that gradually worked their
way into my probable density as I errored and trialed:

1.  Although it initially looks like a problem of classical induction,
    that is to say, forming rules from facts and cases, it turned out
    that I couldn't detach this from the more abductive, anticipatory,
    and hypothetical forms of concept formation.

2.  Instead of just the extensions and the intensions of concepts
    (I had not yet clued into comprehensions at that time), there
    was another sort of relation between data and concepts that
    I was forced to consider, on account of what many call the
    "generative property" of any non-trivial language, or what
    is just about the same thing, the circumstance that the
    language learner, by the very nature of the task, does
    not have the whole extension of a language or any of
    its grammatical categories, but at every stage of
    the game has only a finite experiential record
    of the instances that fall under the putative,
    contingent, and ever hypothetical concepts.

This last aspect of the problem gradually led me to appreciate the importance
of the sampling relation, which reminded me, via some vague or vagrant memory,
of Aristotle's "enumerative induction", and so I eventually came to call this
sort of relation between the data and the concept the "enumeration" of the
concept, which, together with extension and intension, makes three.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LMU.  Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Peirce came into this arena with a question about "how science works",
and he took off from a standard sort of Kantian platform that permits
you to get started by just going ahead and accepting the evident fact,
the apparent phenomenon, or the provisional hypothesis that science,
as we do it, but not necessarily as we know it, does work, and then
to move on to the next question, to wit:  What are the conditions
for the possibility of science working?

I walked into this fun-house with a question about language learning, and I
had very little acquaintance with and a whole lot of wrong ideas about Kant.

But there is a natural analogy between the task of scientific knowing
and the task of language acquisition, as Newton clearly recognized in
the guise of his metaphor about science as the decoding of nature's
cryptographic laws.

So I will start out by explaining a very simple sort of language acquisition task.
One of the first obstacles that we run into is this huge gulf between all of the
realistic examples and all of the sorts of examples that we can discuss in the
beginning, in sum, the fact that all of the motive settings are very complex
indeed and all of the rudimentary set-pieces are very simple indeed.

So I will beg you to use your imagination.

Okay, enough preamble.

An "alphabet" (or a "lexicon") is a finite set A.

The "kleene star" A* of the alphabet A
is the set of all finite sequences that
can be formed out of the elements of A.
We call these "strings" or "sequences".
Mark that A* includes the empty string.

A "formal language" L over the alphabet A is an arbitrary subset of A*,
thus L c A*.  Depending on the setting, the strings or sequences of L
are called "L-words", "L-strands", or "L-sentences", in one locution,
or "words of L", "strands of L", or "sentences of L", in another.
Whenever there is only one language under discussion, or when
it is otherwise clear, the obvious abridgements may be used.
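
For those who like to see such definitions run, here is a minimal sketch
in Python, added purely by way of illustration and assuming nothing beyond
the definitions just given.  It enumerates a finite initial segment of A*
and carves out a formal language L as a subset of it by way of a membership
predicate (the particular predicate is an arbitrary toy example).

    from itertools import product

    A = {'a', 'b'}                          # a finite alphabet

    def kleene_star(A, max_len):
        """Enumerate the strings over A of length 0 through max_len,
        a finite initial segment of the Kleene star A*."""
        for n in range(max_len + 1):
            for tup in product(sorted(A), repeat=n):
                yield ''.join(tup)          # n = 0 yields the empty string ''

    def in_L(s):
        """Toy membership predicate: strings with equal counts of a's and b's."""
        return s.count('a') == s.count('b')

    L = [s for s in kleene_star(A, 4) if in_L(s)]
    print(L)
    # ['', 'ab', 'ba', 'aabb', 'abab', 'abba', 'baab', 'baba', 'bbaa']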

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LMU.  Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| By their fruit flies ye shall know them.
|
| ~~ Pragmatic definition of a geneticist.

To be a language, formally speaking, L c A*, is to embody
a distinction between the strings or sequences of A* that
are in L and those that aren't.  There are two exceptions,
or degenerate cases, if one prefers to view them that way,
two languages that draw no distinction in the space of A*.
These are the "empty language" L_0 = {}, so empty that it
fails to contain even so much as the empty string <>, and
the "total language" L_1 = A*.

Some would say that we are only doing syntax at this point,
but others say that the semantic and pragmatic dimensions
cannot be reduced to zero magnitudes, even in this frame.
For example it is possible to see in the present setting --
others would say "read into the present scene" -- a bit,
yes, exactly one bit, of semantic meaning and pragmatic
motive, namely the Peircean arrow from the nonsense of
non-L to the sense of L, except for the non-orientable
cases, of course.
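
The one bit in question can be rendered as the indicator function of L,
the map chi_L : A* -> B that sends a string to 1 or 0 according as it
falls inside or outside of L.  A small sketch along the lines of the
previous one, again purely illustrative:

    def indicator(L):
        """chi_L : A* -> {0, 1}, the indicator of the language L."""
        return lambda s: 1 if s in L else 0

    sample = ['', 'a', 'b', 'ab', 'ba']     # a finite sample of A*

    L_0 = set()                             # the empty language, lacking even ''
    L_1 = set(sample)                       # the total language, cut down to the sample

    print([indicator(L_0)(s) for s in sample])   # [0, 0, 0, 0, 0]
    print([indicator(L_1)(s) for s in sample])   # [1, 1, 1, 1, 1]

    # The two degenerate cases draw no distinction at all within A*.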

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LMU.  Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o



o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LMU.  Work Area

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

atom 3.93

individual 3.92f, 214ff

simple 3.216, 217, 220

singular 3.93, 216, 252, 602, 611

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Logic Of Relatives

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The letters of the alphabet will denote logical signs.
| Now logical terms are of three grand classes.
|
| The first embraces those whose logical form involves only the
| conception of quality, and which therefore represent a thing
| simply as "a ---".  These discriminate objects in the most
| rudimentary way, which does not involve any consciousness
| of discrimination.  They regard an object as it is in
| itself as 'such' ('quale');  for example, as horse,
| tree, or man.  These are 'absolute terms'.
|
| The second class embraces terms whose logical form involves the
| conception of relation, and which require the addition of another
| term to complete the denotation.  These discriminate objects with a
| distinct consciousness of discrimination.  They regard an object as
| over against another, that is as relative;  as father of, lover of,
| or servant of.  These are 'simple relative terms'.
|
| The third class embraces terms whose logical form involves the
| conception of bringing things into relation, and which require
| the addition of more than one term to complete the denotation.
| They discriminate not only with consciousness of discrimination,
| but with consciousness of its origin.  They regard  an object
| as medium or third between two others, that is as conjugative;
| as giver of --- to ---, or buyer of --- for --- from ---.
| These may be termed 'conjugative terms'.
|
| The conjugative term involves the conception of 'third', the relative that of
| second or 'other', the absolute term simply considers 'an' object.  No fourth
| class of terms exists involving the conception of 'fourth', because when that
| of 'third' is introduced, since it involves the conception of bringing objects
| into relation, all higher numbers are given at once, inasmuch as the conception
| of bringing objects into relation is independent of the number of members of the
| relationship.  Whether this 'reason' for the fact that there is no fourth class
| of terms fundamentally different from the third is satisfactory or not, the fact
| itself is made perfectly evident by the study of the logic of relatives.
|
| C.S. Peirce, CP 3.63
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I am going to experiment with an interlacing commentary
on Peirce's 1870 "Logic of Relatives" paper, revisiting
some critical transitions from several different angles
and calling attention to a variety of puzzles, problems,
and potentials that are not so often remarked or tapped.

What strikes me about the initial installment this time around is its
use of a certain pattern of argument that I can recognize as invoking
a "closure principle", and this is a figure of reasoning that Peirce
uses in three other places, his discussion of "continuous relations",
his definition of sign relations, and the pragmatic maxim itself.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Numbers Corresponding to Letters
|
| I propose to use the term "universe" to denote that class of individuals
| 'about' which alone the whole discourse is understood to run.  The universe,
| therefore, in this sense, as in Mr. De Morgan's, is different on different
| occasions.  In this sense, moreover, discourse may run upon something which
| is not a subjective part of the universe;  for instance, upon the qualities
| or collections of the individuals it contains.
|
| I propose to assign to all logical terms, numbers;  to an absolute term,
| the number of individuals it denotes;  to a relative term, the average
| number of things so related to one individual.  Thus in a universe of
| perfect men ('men'), the number of "tooth of" would be 32.  The number
| of a relative with two correlates would be the average number of things
| so related to a pair of individuals;  and so on for relatives of higher
| numbers of correlates.  I propose to denote the number of a logical term
| by enclosing the term in square brackets, thus ['t'].
|
| C.S. Peirce, CP 3.65
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Peirce's remarks at CP 3.65 are so replete with remarkable ideas,
some of them so taken for granted in mathematical discourse that
they usually escape explicit mention, and others so suggestive
of things to come in a future remote from his time of writing,
and yet so smoothly introduced in passing that it's all too
easy to overlook their consequential significance, that I
can do no better here than to highlight these ideas in
other words, whose main advantage is to be a little
more jarring to the mind's sensibilities.

| Numbers Corresponding to Letters
|
| I propose to use the term "universe" to denote that class of individuals
| 'about' which alone the whole discourse is understood to run.  The universe,
| therefore, in this sense, as in Mr. De Morgan's, is different on different
| occasions.  In this sense, moreover, discourse may run upon something which
| is not a subjective part of the universe;  for instance, upon the qualities
| or collections of the individuals it contains.
|
| I propose to assign to all logical terms, numbers;  to an absolute term,
| the number of individuals it denotes;  to a relative term, the average
| number of things so related to one individual.  Thus in a universe of
| perfect men ('men'), the number of "tooth of" would be 32.  The number
| of a relative with two correlates would be the average number of things
| so related to a pair of individuals;  and so on for relatives of higher
| numbers of correlates.  I propose to denote the number of a logical term
| by enclosing the term in square brackets, thus ['t'].
|
| C.S. Peirce, 'Collected Papers', CP 3.65

1.  This mapping of letters to numbers, or logical terms to mathematical quantities,
    is the very core of what "quantification theory" is all about, and definitely
    more to the point than the mere "innovation" of using distinctive symbols
    for the so-called "quantifiers".  We will speak of this more later on.

2.  The mapping of logical terms to numerical measures,
    to express it in current language, would probably be
    recognizable as some kind of "morphism" or "functor"
    from a logical domain to a quantitative co-domain.

3.  Notice that Peirce follows the mathematician's usual practice,
    then and now, of making the status of being an "individual" or
    a "universal" relative to a discourse in progress.  I have come
    to appreciate more and more of late how radically different this
    "patchwork" or "piecewise" approach to things is from the way of
    some philosophers who seem to be content with nothing less than
    many worlds domination, which means that they are never content
    and rarely get started toward the solution of any real problem.
    Just my observation, I hope you understand.

4.  It is worth noting that Peirce takes the "plural denotation"
    of terms for granted, or what's the number of a term for,
    if it could not vary apart from being one or nil?

5.  I also observe that Peirce takes the individual objects of a particular
    universe of discourse in a "generative" way, not a "totalizing" way,
    and thus they afford us with the basis for talking freely about
    collections, constructions, properties, qualities, subsets,
    and "higher types", as the phrase is mint.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The Signs of Inclusion, Equality, Etc.
|
| I shall follow Boole in taking the sign of equality to signify identity.
| Thus, if v denotes the Vice-President of the United States, and p the
| President of the Senate of the United States,
|
| v = p
|
| means that every Vice-President of the United States is President of the
| Senate, and every President of the United States Senate is Vice-President.
| The sign "less than" is to be so taken that
|
| f < m
|
| means that every Frenchman is a man, but there are men besides Frenchmen.
| Drobisch has used this sign in the same sense.  It will follow from these
| significations of '=' and '<' that the sign '-<' (or '=<', "as small as")
| will mean "is".  Thus,
|
| f -< m
|
| means "every Frenchman is a man", without saying whether there are any
| other men or not.  So,
|
| 'm' -< 'l'
|
| will mean that every mother of anything is a lover of the same thing;
| although this interpretation in some degree anticipates a convention to
| be made further on.  These significations of '=' and '<' plainly conform
| to the indispensable conditions.  Upon the transitive character of these
| relations the syllogism depends, for by virtue of it, from
|
| f -< m
|
| and
|
| m -< a,
|
| we can infer that
|
| f -< a;
|
| that is, from every Frenchman being a man and every
| man being an animal, that every Frenchman is an animal.
|
| But not only do the significations of '=' and '<' here adopted fulfill all
| absolute requirements, but they have the supererogatory virtue of being very
| nearly the same as the common significations.  Equality is, in fact, nothing
| but the identity of two numbers;  numbers that are equal are those which are
| predicable of the same collections, just as terms that are identical are those
| which are predicable of the same classes.  So, to write 5 < 7 is to say that 5
| is part of 7, just as to write f < m is to say that Frenchmen are part of men.
| Indeed, if f < m, then the number of Frenchmen is less than the number of men,
| and if v = p, then the number of Vice-Presidents is equal to the number of
| Presidents of the Senate;  so that the numbers may always be substituted
| for the terms themselves, in case no signs of operation occur in the
| equations or inequalities.
|
| C.S. Peirce, CP 3.66
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

The quantifier mapping from terms to their numbers that Peirce signifies
by means of the square bracket notation has one of its principal uses in
providing a basis for the computation of frequencies, probabilities, and
all of the other statistical measures that can be constructed from these,
and thus in affording what may be called a "principle of correspondence"
between probability theory and its limiting case in the forms of logic.
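
To make the bracket operation a little more concrete, here is a toy
computation in Python.  The universe and the terms are invented for the
purpose and carry no significance; only the rule comes from CP 3.65,
namely, that the number of an absolute term is the count of individuals
it denotes, and the number of a relative term is the average number of
things so related to one individual of the universe, at least on one
reading of that passage.

    # A hypothetical universe of four individuals.
    U = {'p', 'q', 'r', 's'}

    # A hypothetical absolute term, given by the set of individuals it denotes.
    scholar = {'p', 'q', 'r'}

    # A hypothetical 2-adic relative term, "teacher of ---",
    # given by its set of (relate, correlate) pairs.
    teacher_of = {('p', 'q'), ('p', 'r'), ('q', 'r'), ('q', 's')}

    def number_of_absolute(term):
        """[t] for an absolute term: how many individuals it denotes."""
        return len(term)

    def number_of_relative(rel, universe):
        """[r] for a relative term: the average number of things
        so related to one individual of the universe."""
        return len(rel) / len(universe)

    print(number_of_absolute(scholar))        # 3
    print(number_of_relative(teacher_of, U))  # 1.0  (4 pairs over 4 individuals)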

This brings us once again to the relativity of contingency and necessity,
as one way of approaching necessity is through the avenue of probability,
describing necessity as a probability of 1, but the whole apparatus of
probability theory only figures in if it is cast against the backdrop
of probability space axioms, the reference class of distributions,
and the sample space that we cannot help but to abdeuce upon the
scene of observations.  Aye, there's the snake eyes.  And with
them we can see that there is always an irreducible quantum
of facticity to all our necessities.  More plainly spoken,
it takes a fairly complex conceptual infrastructure just
to begin speaking of probabilities, and this setting
can only be set up by means of abductive, fallible,
hypothetical, and inherently risky mental acts.

Pragmatic thinking is the logic of abduction, which is just another
way of saying that it addresses the question:  "What may be hoped?"
We have to face the possibility that it may be just as impossible
to speak of "absolute identity" with any hope of making practical
philosophical sense as it is to speak of "absolute simultaneity"
with any hope of making operational physical sense.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The Signs for Addition
|
| The sign of addition is taken by Boole so that
|
| x + y
|
| denotes everything denoted by x, and, 'besides',
| everything denoted by y.
|
| Thus
|
| m + w
|
| denotes all men, and, besides, all women.
|
| This signification for this sign is needed for
| connecting the notation of logic with that of the
| theory of probabilities.  But if there is anything
| which is denoted by both terms of the sum, the latter
| no longer stands for any logical term on account of
| its implying that the objects denoted by one term
| are to be taken 'besides' the objects denoted by
| the other.
|
| For example,
|
| f + u
|
| means all Frenchmen besides all violinists, and,
| therefore, considered as a logical term, implies
| that all French violinists are 'besides themselves'.
|
| For this reason alone, in a paper which is published
| in the Proceedings of the Academy for March 17, 1867,
| I preferred to take as the regular addition of logic
| a non-invertible process, such that
|
| m +, b
|
| stands for all men and black things, without any implication that
| the black things are to be taken besides the men;  and the study of
| the logic of relatives has supplied me with other weighty reasons for
| the same determination.
|
| Since the publication of that paper, I have found that Mr. W. Stanley Jevons, in
| a tract called 'Pure Logic, or the Logic of Quality' [1864], had anticipated me in
| substituting the same operation for Boole's addition, although he rejects Boole's
| operation entirely and writes the new one with a '+' sign while withholding from
| it the name of addition.
|
| It is plain that both the regular non-invertible addition
| and the invertible addition satisfy the absolute conditions.
| But the notation has other recommendations.  The conception
| of 'taking together' involved in these processes is strongly
| analogous to that of summation, the sum of 2 and 5, for example,
| being the number of a collection which consists of a collection of
| two and a collection of five.  Any logical equation or inequality
| in which no operation but addition is involved may be converted
| into a numerical equation or inequality by substituting the
| numbers of the several terms for the terms themselves --
| provided all the terms summed are mutually exclusive.
|
| Addition being taken in this sense,
| 'nothing' is to be denoted by 'zero',
| for then
|
| x +, 0 = x,
|
| whatever is denoted by x;  and this is the definition
| of 'zero'.  This interpretation is given by Boole, and
| is very neat, on account of the resemblance between the
| ordinary conception of 'zero' and that of nothing, and
| because we shall thus have
|
| [0] = 0.
|
| C.S. Peirce, CP 3.67
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 8

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

A wealth of issues arise here that I hope
to take up in depth at a later point, but
for the moment I shall be able to mention
only the barest sample of them in passing.

The two papers that precede this one in CP 3 are Peirce's papers of
March and September 1867 in the 'Proceedings of the American Academy
of Arts and Sciences', titled "On an Improvement in Boole's Calculus
of Logic" and "Upon the Logic of Mathematics", respectively.  Among
other things, these two papers provide us with further clues about
the motivating considerations that brought Peirce to introduce the
"number of a term" function, signified here by square brackets.
I have already quoted from the "Logic of Mathematics" paper in
a related connection.  Here are the links to those excerpts:

http://suo.ieee.org/ontology/msg04350.html
http://suo.ieee.org/ontology/msg04351.html

In setting up a correspondence between "letters" and "numbers",
my sense is that Peirce is "nocking an arrow", or constructing
some kind of structure-preserving map from a logical domain to
a numerical domain, and this interpretation is here reinforced
by the careful attention that he gives to specifying precisely
which aspects of structure are preserved, and under what conditions, plus
his telling recognition of the criterial fact that zeroes are
preserved by the mapping.  But here's the catch, the arrow is
from the qualitative domain to the quantitative domain, which
is just the opposite of what I tend to expect, since I think
of quantitative measures as preserving more information than
qualitative measures.  To curtail the story, it is possible
to sort this all out, but that is a story for another day.

Other than that, I just want to red flag the beginnings
of another one of those "failures to communicate" that
so dogged the disciplines in the 20th Century, namely,
the fact that Peirce seemed to have an inkling about
the problems that would be caused by using the plus
sign for inclusive disjunction, but, as it happens,
his advice was overridden by the usages in various
different communities, rendering the exchange of
information among engineering, mathematical, and
philosophical specialties a minefield in place
of mindfield to this very day.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 9

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The Signs for Multiplication
|
| I shall adopt for the conception of multiplication
| 'the application of a relation', in such a way that,
| for example, 'l'w shall denote whatever is lover of
| a woman.  This notation is the same as that used by
| Mr. De Morgan, although he appears not to have had
| multiplication in his mind.
|
| 's'(m +, w) will, then, denote whatever is
| servant of anything of the class composed
| of men and women taken together.  So that:
|
| 's'(m +, w)  =  's'm +, 's'w.
|
| ('l' +, 's')w will denote whatever is
| lover or servant to a woman, and:
|
| ('l' +, 's')w  =  'l'w +, 's'w.
|
| ('sl')w will denote whatever stands to
| a woman in the relation of servant of
| a lover, and:
|
| ('sl')w  =  's'('l'w).
|
| Thus all the absolute conditions
| of multiplication are satisfied.
|
| The term "identical with ---" is a unity
| for this multiplication.  That is to say,
| if we denote "identical with ---" by !1!
| we have:
|
| 'x'!1!  =  'x',
|
| whatever relative term 'x' may be.
| For what is a lover of something
| identical with anything, is the
| same as a lover of that thing.
|
| C.S. Peirce, CP 3.68
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 10

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Peirce in 1870 is five years down the road from the Peirce of 1865-1866
who lectured extensively on the role of sign relations in the logic of
scientific inquiry, articulating their involvement in the three types
of inference, and inventing the concept of "information" to explain
what it is that signs convey in the process.  By this time, then,
the semiotic or sign relational approach to logic is so implicit
in his way of working that he does not always take the trouble
to point out its distinctive features at each and every turn.
So let's take a moment to draw out a few of these characters.

Sign relations, like any non-trivial brand of 3-adic relations,
can become overwhelming to think about once the cardinality of
the object, sign, and interpretant domains or the complexity
of the relation itself ascends beyond the simplest examples.
Furthermore, most of the strategies that we would normally
use to control the complexity, like neglecting one of the
domains, in effect, projecting the 3-adic sign relation
onto one of its 2-adic faces, or focusing on a single
ordered triple of the form <o, s, i> at a time, can
result in our receiving a distorted impression of
the sign relation's true nature and structure.
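
For a concrete, if wholly invented, illustration of what these projections
look like, here is a small Python sketch.  A sign relation is rendered as
a set of <o, s, i> triples and each 2-adic face is obtained by forgetting
one column.  Nothing here is drawn from Peirce's text; the triples are
hypothetical placeholders.

    # A hypothetical sign relation, a set of (object, sign, interpretant) triples.
    sign_relation = {
        ('o1', 's1', 's2'),
        ('o1', 's2', 's1'),
        ('o2', 's3', 's3'),
    }

    def project(rel, i, j):
        """Project a set of triples onto the 2-adic face given by columns i, j."""
        return {(t[i], t[j]) for t in rel}

    OS = project(sign_relation, 0, 1)   # object-sign pairs
    OI = project(sign_relation, 0, 2)   # object-interpretant pairs
    SI = project(sign_relation, 1, 2)   # sign-interpretant pairs

    print(sorted(OS))   # [('o1', 's1'), ('o1', 's2'), ('o2', 's3')]
    print(sorted(SI))   # [('s1', 's2'), ('s2', 's1'), ('s3', 's3')]

    # The faces are easy to compute, but in general the triples cannot
    # be recovered from them, which is the distortion spoken of above.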

I find that it helps me to draw, or at least to imagine drawing,
diagrams of the following form, where I can keep tabs on what's
an object, what's a sign, and what's an interpretant sign, for
a selected set of sign-relational triples.

Here is how I would picture Peirce's example of equivalent terms:
v = p, where "v" denotes the Vice-President of the United States,
and "p" denotes the President of the Senate of the United States.

o-----------------------------o-----------------------------o
|  Objective Framework (OF)   | Interpretive Framework (IF) |
o-----------------------------o-----------------------------o
|           Objects           |            Signs            |
o-----------------------------o-----------------------------o
|                                                           |
|                                 o "v"                     |
|                                /                          |
|                               /                           |
|                              /                            |
|           o  o  o-----------@                             |
|                              \                            |
|                               \                           |
|                                \                          |
|                                 o "p"                     |
|                                                           |
o-----------------------------o-----------------------------o

Depending on whether we interpret the terms "v" and "p" as applying to
persons who hold these offices at one particular time or as applying to
all those persons who have held these offices over an extended period of
history, their denotations may be either singular or plural, respectively.

As a shortcut technique for indicating general denotations or plural referents,
I will use the "elliptic convention" that represents these by means of figures
like "o o o" or "o ... o", placed at the object ends of sign relational triads.

For a more complex example, here is how I would picture Peirce's example
of an equivalence between terms that comes about by applying one of the
distributive laws, for relative multiplication over absolute summation.

o-----------------------------o-----------------------------o
|  Objective Framework (OF)   | Interpretive Framework (IF) |
o-----------------------------o-----------------------------o
|           Objects           |            Signs            |
o-----------------------------o-----------------------------o
|                                                           |
|                                 o "'s'(m +, w)"           |
|                                /                          |
|                               /                           |
|                              /                            |
|           o ... o-----------@                             |
|                              \                            |
|                               \                           |
|                                \                          |
|                                 o "'s'm +, 's'w"          |
|                                                           |
o-----------------------------o-----------------------------o

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 11

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The Signs for Multiplication (cont.)
|
| A conjugative term like 'giver' naturally requires two correlates,
| one denoting the thing given, the other the recipient of the gift.
|
| We must be able to distinguish, in our notation, the
| giver of A to B from the giver to A of B, and, therefore,
| I suppose the signification of the letter equivalent to such
| a relative to distinguish the correlates as first, second, third,
| etc., so that "giver of --- to ---" and "giver to --- of ---" will
| be expressed by different letters.
|
| Let `g` denote the latter of these conjugative terms.  Then, the correlates
| or multiplicands of this multiplier cannot all stand directly after it, as is
| usual in multiplication, but may be ranged after it in regular order, so that:
|
| `g`xy
|
| will denote a giver to x of y.
|
| But according to the notation,
| x here multiplies y, so that
| if we put for x owner ('o'),
| and for y horse (h),
|
| `g`'o'h
|
| appears to denote the giver of a horse
| to an owner of a horse.  But let the
| individual horses be H, H', H", etc.
|
| Then:
|
| h  =  H +, H' +, H" +, etc.
|
| `g`'o'h  =  `g`'o'(H +, H' +, H" +, etc.)
|
|          =  `g`'o'H +, `g`'o'H' +, `g`'o'H" +, etc.
|
| Now this last member must be interpreted as a giver
| of a horse to the owner of 'that' horse, and this,
| therefore must be the interpretation of `g`'o'h.
|
| This is always very important.
|
| 'A term multiplied by two relatives shows that
|  the same individual is in the two relations.'
|
| If we attempt to express the giver of a horse to
| a lover of a woman, and for that purpose write:
|
| `g`'l'wh,
|
| we have written giver of a woman to a lover of her,
| and if we add brackets, thus,
|
| `g`('l'w)h,
|
| we abandon the associative principle of multiplication.
|
| A little reflection will show that the associative principle must
| in some form or other be abandoned at this point.  But while this
| principle is sometimes falsified, it oftener holds, and a notation
| must be adopted which will show of itself when it holds.  We already
| see that we cannot express multiplication by writing the multiplicand
| directly after the multiplier;  let us then affix subjacent numbers after
| letters to show where their correlates are to be found.  The first number
| shall denote how many factors must be counted from left to right to reach
| the first correlate, the second how many 'more' must be counted to reach
| the second, and so on.
|
| Then, the giver of a horse to a lover of a woman may be written:
|
| `g`_12 'l'_1 w h  =  `g`_11 'l'_2 h w  =  `g`_2(-1) h 'l'_1 w.
|
| Of course a negative number indicates that
| the former correlate follows the latter
| by the corresponding positive number.
|
| A subjacent 'zero' makes the term itself the correlate.
|
| Thus,
|
| 'l'_0
|
| denotes the lover of 'that' lover or the lover of himself, just as
| `g`'o'h denotes that the horse is given to the owner of itself, for
| to make a term doubly a correlate is, by the distributive principle,
| to make each individual doubly a correlate, so that:
|
| 'l'_0  =  L_0 +, L_0' +, L_0" +, etc.
|
| A subjacent sign of infinity may
| indicate that the correlate is
| indeterminate, so that:
|
| 'l'_oo
|
| will denote a lover of something.
| We shall have some confirmation
| of this presently.
|
| If the last subjacent number is a 'one'
| it may be omitted.  Thus we shall have:
|
| 'l'_1  =  'l',
|
| `g`_11  =  `g`_1  =  `g`.
|
| This enables us to retain our former expressions 'l'w, `g`'o'h, etc.
|
| C.S. Peirce, CP 3.69-70
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 12

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Peirce's way of representing sets as sums may seem archaic, but it is
quite often used, and is actually the tool of choice in many branches
of algebra, combinatorics, computing, and statistics to this very day.

Peirce's application to logic is fairly novel, and the degree of his
elaboration of the logic of relative terms is certainly original with
him, but this particular genre of representation, commonly going under
the handle of "generating functions", goes way back, well before anyone
thought to stick a flag in set theory as a separate territory or to try
to fence off our native possessions of it with expressly decreed axioms.
And back in the days when computers were people, before we had the sorts
of "electronic register machines" that we take so much for granted today,
mathematicians were constantly using generating functions as a rough and
ready type of addressable memory to sort, store, and keep track of their
accounts of a wide variety of formal objects of thought.

Let us look at a few simple examples of generating functions,
much as I encountered them during my own first adventures in
the Fair Land Of Combinatoria.

Suppose that we are given a set of three elements,
say, {a, b, c}, and we are asked to find all the
ways of choosing a subset from this collection.

We can represent this problem setup as the
problem of computing the following product:

(1 + a)(1 + b)(1 + c).

The factor (1 + a) represents the option that we have, in choosing
a subset of {a, b, c}, to leave the 'a' out (signified by the "1"),
or else to include it (signified by the "a"), and likewise for the
other elements 'b' and 'c' in their turns.

Probably on account of all those years I flippered away
playing the oldtime pinball machines, I tend to imagine
a product like this being displayed in a vertical array:

(1 + a)
(1 + b)
(1 + c)

I picture this as a playboard with six "bumpers",
the ball chuting down the board in such a career
that it strikes exactly one of the two bumpers
on each and every one of the three levels.

So a trajectory of the ball where it
hits the "a" bumper on the 1st level,
hits the "1" bumper on the 2nd level,
hits the "c" bumper on the 3rd level,
and then exits the board, represents
a single term in the desired product
and corresponds to the subset {a, c}.

Multiplying out (1 + a)(1 + b)(1 + c), one obtains:

1 + a + b + c + ab + ac + bc + abc.

And this informs us that the subsets of choice are:

{}, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, {a, b, c}.
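
The same bookkeeping is easy to mechanize.  Here is a short Python sketch,
offered only as a check on the multiplication, that expands the product
(1 + a)(1 + b)(1 + c) by picking one summand from each factor and reading
off the corresponding subsets.

    from itertools import product

    factors = [['1', 'a'], ['1', 'b'], ['1', 'c']]

    terms, subsets = [], []
    for pick in product(*factors):              # one summand from each factor
        letters = [x for x in pick if x != '1'] # picking '1' means "leave it out"
        terms.append(''.join(letters) or '1')
        subsets.append(letters)

    print(' + '.join(terms))
    # 1 + c + b + bc + a + ac + ab + abc   (the same eight terms, in another order)

    print(subsets)
    # [[], ['c'], ['b'], ['b', 'c'], ['a'], ['a', 'c'], ['a', 'b'], ['a', 'b', 'c']]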

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 13

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The Signs for Multiplication (cont.)
|
| The associative principle does not hold in this counting
| of factors.  Because it does not hold, these subjacent
| numbers are frequently inconvenient in practice, and
| I therefore use also another mode of showing where
| the correlate of a term is to be found.  This is
| by means of the marks of reference, † ‡ || § ¶,
| which are placed subjacent to the relative
| term and before and above the correlate.
| Thus, giver of a horse to a lover of
| a woman may be written:
|
| `g`_†‡ †'l'_|| ||w ‡h.
|
| The asterisk I use exclusively to refer to the last
| correlate of the last relative of the algebraic term.
|
| Now, considering the order of multiplication to be: --
| a term, a correlate of it, a correlate of that correlate,
| etc. -- there is no violation of the associative principle.
| The only violations of it in this mode of notation are that
| in thus passing from relative to correlate, we skip about
| among the factors in an irregular manner, and that we
| cannot substitute in such an expression as `g`'o'h
| a single letter for 'o'h.
|
| I would suggest that such a notation may be found useful in treating other
| cases of non-associative multiplication.  By comparing this with what was
| said above [in CP 3.55] concerning functional multiplication, it appears
| that multiplication by a conjugative term is functional, and that the
| letter denoting such a term is a symbol of operation.  I am therefore
| using two alphabets, the Greek and Kennerly, where only one was
| necessary.  But it is convenient to use both.
|
| C.S. Peirce, CP 3.71-72
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 14

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

NB.  On account of the invidious circumstance that various
listservers balk at Peirce's "marks of reference" -- or is
it only the Microsoft Cryptkeeper's kryptonizing of them? --
I will make the following substitutions in Peirce's text:

@  =  dagger symbol
#  =  double dagger
|| =  parallel sign
$  =  section symbol
%  =  paragraph mark

It is clear from our last excerpt that Peirce is already on the verge
of a graphical syntax for the logic of relatives.  Indeed, it seems
likely that he had already reached this point in his own thinking.

For instance, it seems quite impossible to read his last variation on the
theme of a "giver of a horse to a lover of a woman" without drawing lines
of identity to connect up the corresponding marks of reference, like this:

o---------------------------------------o
|                                       |
|            @        ||                |
|           / \      /  \               |
|          o   o    o    o              |
|      `g`_@#  @'l'_||  ||w  #h         |
|           o                o          |
|            \______________/           |
|                   #                   |
|                                       |
o---------------------------------------o
Giver of a Horse to a Lover of a Woman

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 15

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The Signs for Multiplication (cont.)
|
| Thus far, we have considered the multiplication of relative terms only.
| Since our conception of multiplication is the application of a relation,
| we can only multiply absolute terms by considering them as relatives.
|
| Now the absolute term "man" is really exactly equivalent to
| the relative term "man that is ---", and so with any other.
| I shall write a comma after any absolute term to show that
| it is so regarded as a relative term.
|
| Then:
|
| "man that is black"
|
| will be written
|
| m,b.
|
| But not only may any absolute term be thus regarded as a relative term,
| but any relative term may in the same way be regarded as a relative with
| one correlate more.  It is convenient to take this additional correlate
| as the first one.
|
| Then:
|
| 'l','s'w
|
| will denote a lover of a woman that is a servant of that woman.
|
| The comma here after 'l' should not be considered as altering at
| all the meaning of 'l', but as only a subjacent sign, serving to
| alter the arrangement of the correlates.
|
| In point of fact, since a comma may be added in this way to any
| relative term, it may be added to one of these very relatives
| formed by a comma, and thus by the addition of two commas
| an absolute term becomes a relative of two correlates.
|
| So:
|
| m,,b,r
|
| interpreted like
|
| `g`'o'h
|
| means a man that is a rich individual and
| is a black that is that rich individual.
|
| But this has no other meaning than:
|
| m,b,r
|
| or a man that is a black that is rich.
|
| Thus we see that, after one comma is added, the
| addition of another does not change the meaning
| at all, so that whatever has one comma after it
| must be regarded as having an infinite number.
|
| If, therefore, 'l',,'s'w is not the same as 'l','s'w (as it plainly is not,
| because the latter means a lover and servant of a woman, and the former a
| lover of and servant of and same as a woman), this is simply because the
| writing of the comma alters the arrangement of the correlates.
|
| And if we are to suppose that absolute terms are multipliers
| at all (as mathematical generality demands that we should),
| we must regard every term as being a relative requiring
| an infinite number of correlates to its virtual infinite
| series "that is --- and is --- and is --- etc."
|
| Now a relative formed by a comma of course receives its
| subjacent numbers like any relative, but the question is,
| What are to be the implied subjacent numbers for these
| implied correlates?
|
| Any term may be regarded as having an
| infinite number of factors, those
| at the end being 'ones', thus:
|
| 'l','s'w  =  'l','s'w,!1!,!1!,!1!,!1!,!1!,!1!,!1!, etc.
|
| A subjacent number may therefore be as great as we please.
|
| But all these 'ones' denote the same identical individual denoted
| by w;  what then can be the subjacent numbers to be applied to 's',
| for instance, on account of its infinite "that is"'s?  What numbers
| can separate it from being identical with w?  There are only two.
| The first is 'zero', which plainly neutralizes a comma completely,
| since
|
| 's',_0 w  =  's'w
|
| and the other is infinity;  for as 1^oo is indeterminate
| in ordinary algebra, so it will be shown hereafter to be
| here, so that to remove the correlate by the product of
| an infinite series of 'ones' is to leave it indeterminate.
|
| Accordingly,
|
| m,_oo
|
| should be regarded as expressing 'some' man.
|
| Any term, then, is properly to be regarded as having an infinite
| number of commas, all or some of which are neutralized by zeros.
|
| "Something" may then be expressed by:
|
| !1!_oo.
|
| I shall for brevity frequently express this by an antique figure one (`1`).
|
| "Anything" by:
|
| !1!_0.
|
| I shall often also write a straight 1 for 'anything'.
|
| C.S. Peirce, CP 3.73
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 16

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

To my way of thinking, CP 3.73 is one of the most remarkable passages
in the history of logic.  In this first pass over its deeper contents
I won't be able to accord it much more than a superficial dusting off.

As always, it is probably best to begin with a concrete example.
So let us initiate a discourse, whose universe X may remind us
a little of the cast of characters in Shakespeare's 'Othello'.

X  =  {Bianca, Cassio, Clown, Desdemona, Emilia, Iago, Othello}.

The universe X is "that class of individuals 'about' which alone
the whole discourse is understood to run" but its marking out for
special recognition as a universe of discourse in no way rules out
the possibility that "discourse may run upon something which is not
a subjective part of the universe;  for instance, upon the qualities
or collections of the individuals it contains" (CP 3.65).

In order to provide ourselves with the convenience of abbreviated terms,
while staying a bit closer to Peirce's conventions about capitalization,
let us rename the universe "u", the Clown "Jeste", and then rewrite the
above description of the universe of discourse in the following fashion:

u  =  {B, C, D, E, I, J, O}.

This specification of the universe of discourse could be
summed up in Peirce's notation by the following equation:

1  =  B +, C +, D +, E +, I +, J +, O.

Within this discussion, then, the "individual terms" are
"B", "C", "D", "E", "I", "J", "O", each of which denotes
in a singular fashion the corresponding individual in X.

As "general terms" of this discussion,
we might begin with the following set:

"b"  =  "black"

"m"  =  "man"

"w"  =  "woman"

In Peirce's notation, the denotation of a general term
can be expressed by means of an equation between terms:

b  =  O

m  =  C +, I +, J +, O

w  =  B +, D +, E

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 17

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I will continue with my commentary on CP 3.73, developing
the Othello example as a way of illustrating its concepts.

In the development of the story so far, we have a universe of discourse
that can be characterized by means of the following system of equations:

1  =  B +, C +, D +, E +, I +, J +, O

b  =  O

m  =  C +, I +, J +, O

w  =  B +, D +, E

This much provides a basis for the collection of absolute terms that
I plan to use in this example.  Let us now consider how we might
represent a sufficiently exemplary collection of relative terms.

If we consider the genesis of relative terms, for example, "lover of ---",
"betrayer to --- of ---", or "winner over of --- to --- from ---", we may
regard these fill-in-the-blank forms as being derived by way of a kind of
"rhematic abstraction" from the corresponding instances of absolute terms.

In other words:

1.  The relative term "lover of ---" can be constructed by abstracting
    the absolute term "Emilia" from the absolute term "lover of Emilia".
    Since Iago is a lover of Emilia, the relate-correlate pair denoted
    by "Iago:Emilia" is a summand of the relative term "lover of ---".

2.  The relative term "betrayer to --- of ---" can be constructed
    by abstracting the absolute terms "Othello" and "Desdemona"
    from the absolute term "betrayer to Othello of Desdemona".
    In as much as Iago is a betrayer to Othello of Desdemona,
    the relate-correlate-correlate triple denoted by "I:O:D"
    belongs to the relative term "betrayer to --- of ---".

3.  The relative term "winner over of --- to --- from ---" can be constructed
    by abstracting the absolute terms "Othello", "Iago", and "Cassio" from the
    absolute term "winner over of Othello to Iago from Cassio".  Since Iago is
    a winner over of Othello to Iago from Cassio, the elementary relative term
    "I:O:I:C" belongs to the relative term "winner over of --- to --- from ---".

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 18

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Speaking very strictly, we need to be careful to
distinguish a "relation" from a "relative term".

1.  The relation is an 'object' of thought
    that may be regarded "in extension" as
    a set of ordered tuples that are known
    as its "elementary relations".

2.  The relative term is a 'sign' that denotes certain objects,
    called its "relates", as these are determined in relation
    to certain other objects, called its "correlates".  Under
    most circumstances, one may also regard the relative term
    as denoting the corresponding relation.

Returning to the Othello example, let us take up the
2-adic relatives "lover of ---" and "servant of ---".

Ignoring the many splendored nuances appurtenant to the idea of love,
we may regard the relative term 'l' for "lover of ---" to be given by
the following equation:

'l'  =  B:C +, C:B +, D:O +, E:I +, I:E +, O:D.

If for no better reason than to make the example more interesting,
let us put aside all distinctions of rank and fealty, collapsing
the motley crews of attendant, servant, subordinate, and so on,
under the heading of a single service, denoted by the relative
term 's' for "servant of ---".  The terms of this service are:

's'  =  C:O +, E:D +, I:O +, J:D +, J:O.

The term I:C may also be implied, but, since it is
so hotly arguable, I will leave it out of the toll.

One more thing that we need to be duly wary about:
There are many different conventions in the field
as to the ordering of terms in their applications,
and it happens that different conventions will be
more convenient under different circumstances, so
there does not appear to be much of a chance that
any one of them can be canonized once and for all.

In the current reading, we are applying relative terms
from right to left, and so our conception of relative
multiplication, or relational composition, will need
to be adjusted accordingly.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 19

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

To familiarize ourselves with the forms of calculation
that are available in Peirce's notation, let us compute
a few of the simplest products that we find at hand in
the Othello case.

Here are the absolute terms:

1  =  B +, C +, D +, E +, I +, J +, O

b  =  O

m  =  C +, I +, J +, O

w  =  B +, D +, E

Here are the 2-adic relative terms:

'l'  =  B:C +, C:B +, D:O +, E:I +, I:E +, O:D

's'  =  C:O +, E:D +, I:O +, J:D +, J:O

Here are a few of the simplest products among these terms:

'l'1 = "lover of anybody"

     = (B:C +, C:B +, D:O +, E:I +, I:E +, O:D)(B +, C +, D +, E +, I +, J +, O)

     = B +, C +, D +, E +, I +, O

     = "anybody except J"

'l'b = "lover of a black"

     = (B:C +, C:B +, D:O +, E:I +, I:E +, O:D)O

     = D

'l'm = "lover of a man"

     = (B:C +, C:B +, D:O +, E:I +, I:E +, O:D)(C +, I +, J +, O)

     = B +, D +, E

'l'w = "lover of a woman"

     = (B:C +, C:B +, D:O +, E:I +, I:E +, O:D)(B +, D +, E)

     = C +, I +, O

's'1 = "servant of anybody"

     = (C:O +, E:D +, I:O +, J:D +, J:O)(B +, C +, D +, E +, I +, J +, O)

     = C +, E +, I +, J

's'b = "servant of a black"

     = (C:O +, E:D +, I:O +, J:D +, J:O)O

     = C +, I +, J

's'm = "servant of a man"

     = (C:O +, E:D +, I:O +, J:D +, J:O)(C +, I +, J +, O)

     = C +, I +, J

's'w = "servant of a woman"

     = (C:O +, E:D +, I:O +, J:D +, J:O)(B +, D +, E)

     = E +, J

'ls' = "lover of a servant of ---"

     = (B:C +, C:B +, D:O +, E:I +, I:E +, O:D)(C:O +, E:D +, I:O +, J:D +, J:O)

     = B:O +, E:O +, I:D

'sl' = "servant of a lover of ---"

     = (C:O +, E:D +, I:O +, J:D +, J:O)(B:C +, C:B +, D:O +, E:I +, I:E +, O:D)

     = C:D +, E:O +, I:D +, J:D +, J:O

Among other things, one observes that the
relative terms 'l' and 's' do not commute,
that is to say, 'ls' is not equal to 'sl'.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 20

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Since multiplication by a 2-adic relative term
is a logical analogue of matrix multiplication
in linear algebra, all of the products that we
computed above can be represented in terms of
logical matrices and logical vectors.

Here are the absolute terms again, followed by
their representation as "coefficient tuples",
otherwise thought of as "coordinate vectors".

1  =  B +, C +, D +, E +, I +, J +, O

   =  <1, 1, 1, 1, 1, 1, 1>

b  =  O

   =  <0, 0, 0, 0, 0, 0, 1>

m  =  C +, I +, J +, O

   =  <0, 1, 0, 0, 1, 1, 1>

w  =  B +, D +, E

   =  <1, 0, 1, 1, 0, 0, 0>

Since we are going to be regarding these tuples as "column vectors",
it is convenient to arrange them into a table of the following form:

   | 1 b m w
---o---------
 B | 1 0 0 1
 C | 1 0 1 0
 D | 1 0 0 1
 E | 1 0 0 1
 I | 1 0 1 0
 J | 1 0 1 0
 O | 1 1 1 0

Here are the 2-adic relative terms again, followed by
their representation as coefficient matrices, in this
case bordered by row and column labels to remind us
what the coefficient values are meant to signify.

'l' = B:C +, C:B +, D:O +, E:I +, I:E +, O:D =

'l'| B C D E I J O
---o---------------
 B | 0 1 0 0 0 0 0
 C | 1 0 0 0 0 0 0
 D | 0 0 0 0 0 0 1
 E | 0 0 0 0 1 0 0
 I | 0 0 0 1 0 0 0
 J | 0 0 0 0 0 0 0
 O | 0 0 1 0 0 0 0

's' = C:O +, E:D +, I:O +, J:D +, J:O =

's'| B C D E I J O
---o---------------
 B | 0 0 0 0 0 0 0
 C | 0 0 0 0 0 0 1
 D | 0 0 0 0 0 0 0
 E | 0 0 1 0 0 0 0
 I | 0 0 0 0 0 0 1
 J | 0 0 1 0 0 0 1
 O | 0 0 0 0 0 0 0

Here are the matrix representations of
the products that we calculated before:

'l'1 = "lover of anybody" =

| 0 1 0 0 0 0 0 | | 1 |   | 1 |
| 1 0 0 0 0 0 0 | | 1 |   | 1 |
| 0 0 0 0 0 0 1 | | 1 |   | 1 |
| 0 0 0 0 1 0 0 | | 1 | = | 1 |
| 0 0 0 1 0 0 0 | | 1 |   | 1 |
| 0 0 0 0 0 0 0 | | 1 |   | 0 |
| 0 0 1 0 0 0 0 | | 1 |   | 1 |

'l'b = "lover of a black" =

| 0 1 0 0 0 0 0 | | 0 |   | 0 |
| 1 0 0 0 0 0 0 | | 0 |   | 0 |
| 0 0 0 0 0 0 1 | | 0 |   | 1 |
| 0 0 0 0 1 0 0 | | 0 | = | 0 |
| 0 0 0 1 0 0 0 | | 0 |   | 0 |
| 0 0 0 0 0 0 0 | | 0 |   | 0 |
| 0 0 1 0 0 0 0 | | 1 |   | 0 |

'l'm = "lover of a man" =

| 0 1 0 0 0 0 0 | | 0 |   | 1 |
| 1 0 0 0 0 0 0 | | 1 |   | 0 |
| 0 0 0 0 0 0 1 | | 0 |   | 1 |
| 0 0 0 0 1 0 0 | | 0 | = | 1 |
| 0 0 0 1 0 0 0 | | 1 |   | 0 |
| 0 0 0 0 0 0 0 | | 1 |   | 0 |
| 0 0 1 0 0 0 0 | | 1 |   | 0 |

'l'w = "lover of a woman" =

| 0 1 0 0 0 0 0 | | 1 |   | 0 |
| 1 0 0 0 0 0 0 | | 0 |   | 1 |
| 0 0 0 0 0 0 1 | | 1 |   | 0 |
| 0 0 0 0 1 0 0 | | 1 | = | 0 |
| 0 0 0 1 0 0 0 | | 0 |   | 1 |
| 0 0 0 0 0 0 0 | | 0 |   | 0 |
| 0 0 1 0 0 0 0 | | 0 |   | 1 |

's'1 = "servant of anybody" =

| 0 0 0 0 0 0 0 | | 1 |   | 0 |
| 0 0 0 0 0 0 1 | | 1 |   | 1 |
| 0 0 0 0 0 0 0 | | 1 |   | 0 |
| 0 0 1 0 0 0 0 | | 1 | = | 1 |
| 0 0 0 0 0 0 1 | | 1 |   | 1 |
| 0 0 1 0 0 0 1 | | 1 |   | 1 |
| 0 0 0 0 0 0 0 | | 1 |   | 0 |

's'b = "servant of a black" =

| 0 0 0 0 0 0 0 | | 0 |   | 0 |
| 0 0 0 0 0 0 1 | | 0 |   | 1 |
| 0 0 0 0 0 0 0 | | 0 |   | 0 |
| 0 0 1 0 0 0 0 | | 0 | = | 0 |
| 0 0 0 0 0 0 1 | | 0 |   | 1 |
| 0 0 1 0 0 0 1 | | 0 |   | 1 |
| 0 0 0 0 0 0 0 | | 1 |   | 0 |

's'm = "servant of a man" =

| 0 0 0 0 0 0 0 | | 0 |   | 0 |
| 0 0 0 0 0 0 1 | | 1 |   | 1 |
| 0 0 0 0 0 0 0 | | 0 |   | 0 |
| 0 0 1 0 0 0 0 | | 0 | = | 0 |
| 0 0 0 0 0 0 1 | | 1 |   | 1 |
| 0 0 1 0 0 0 1 | | 1 |   | 1 |
| 0 0 0 0 0 0 0 | | 1 |   | 0 |

's'w = "servant of a woman" =

| 0 0 0 0 0 0 0 | | 1 |   | 0 |
| 0 0 0 0 0 0 1 | | 0 |   | 0 |
| 0 0 0 0 0 0 0 | | 1 |   | 0 |
| 0 0 1 0 0 0 0 | | 1 | = | 1 |
| 0 0 0 0 0 0 1 | | 0 |   | 0 |
| 0 0 1 0 0 0 1 | | 0 |   | 1 |
| 0 0 0 0 0 0 0 | | 0 |   | 0 |

'ls' = "lover of a servant of ---" =

| 0 1 0 0 0 0 0 | | 0 0 0 0 0 0 0 |   | 0 0 0 0 0 0 1 |
| 1 0 0 0 0 0 0 | | 0 0 0 0 0 0 1 |   | 0 0 0 0 0 0 0 |
| 0 0 0 0 0 0 1 | | 0 0 0 0 0 0 0 |   | 0 0 0 0 0 0 0 |
| 0 0 0 0 1 0 0 | | 0 0 1 0 0 0 0 | = | 0 0 0 0 0 0 1 |
| 0 0 0 1 0 0 0 | | 0 0 0 0 0 0 1 |   | 0 0 1 0 0 0 0 |
| 0 0 0 0 0 0 0 | | 0 0 1 0 0 0 1 |   | 0 0 0 0 0 0 0 |
| 0 0 1 0 0 0 0 | | 0 0 0 0 0 0 0 |   | 0 0 0 0 0 0 0 |

'sl' = "servant of a lover of ---" =

| 0 0 0 0 0 0 0 | | 0 1 0 0 0 0 0 |   | 0 0 0 0 0 0 0 |
| 0 0 0 0 0 0 1 | | 1 0 0 0 0 0 0 |   | 0 0 1 0 0 0 0 |
| 0 0 0 0 0 0 0 | | 0 0 0 0 0 0 1 |   | 0 0 0 0 0 0 0 |
| 0 0 1 0 0 0 0 | | 0 0 0 0 1 0 0 | = | 0 0 0 0 0 0 1 |
| 0 0 0 0 0 0 1 | | 0 0 0 1 0 0 0 |   | 0 0 1 0 0 0 0 |
| 0 0 1 0 0 0 1 | | 0 0 0 0 0 0 0 |   | 0 0 1 0 0 0 1 |
| 0 0 0 0 0 0 0 | | 0 0 1 0 0 0 0 |   | 0 0 0 0 0 0 0 |
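
Incidentally, for anyone who cares to check arrays like these by machine,
here is a minimal sketch in Python.  It is no part of Peirce's apparatus,
just an illustrative aid, and the helper names (matrix, vector, mat_vec,
mat_mat) are made up for the occasion.  The products are taken in the
boolean semiring, with logical OR for addition and AND for multiplication.

# Universe of discourse, in the fixed order used for rows and columns.
U = ['B', 'C', 'D', 'E', 'I', 'J', 'O']
idx = {u: i for i, u in enumerate(U)}
n = len(U)

def matrix(pairs):
    """Coefficient matrix of a 2-adic relative given as relate:correlate pairs."""
    M = [[0] * n for _ in range(n)]
    for a, b in pairs:
        M[idx[a]][idx[b]] = 1
    return M

def vector(term):
    """Column vector (indicator) of an absolute term given as a set."""
    return [1 if u in term else 0 for u in U]

def mat_vec(M, v):
    """Boolean matrix-vector product:  (Mv)_i = OR_j (M_ij AND v_j)."""
    return [int(any(M[i][j] and v[j] for j in range(n))) for i in range(n)]

def mat_mat(M, N):
    """Boolean matrix-matrix product:  (MN)_ik = OR_j (M_ij AND N_jk)."""
    return [[int(any(M[i][j] and N[j][k] for j in range(n)))
             for k in range(n)] for i in range(n)]

l = matrix([('B','C'), ('C','B'), ('D','O'), ('E','I'), ('I','E'), ('O','D')])
s = matrix([('C','O'), ('E','D'), ('I','O'), ('J','D'), ('J','O')])
w = vector({'B', 'D', 'E'})

print(mat_vec(l, w))   # 'l'w = lover of a woman -> [0, 1, 0, 0, 1, 0, 1]
print(mat_mat(l, s))   # 'ls' = lover of a servant of ---, the matrix shown above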

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 21

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

The foregoing has hopefully filled in enough background that we
can begin to make sense of the more mysterious parts of CP 3.73.

| Thus far, we have considered the multiplication of relative terms only.
| Since our conception of multiplication is the application of a relation,
| we can only multiply absolute terms by considering them as relatives.
|
| Now the absolute term "man" is really exactly equivalent to
| the relative term "man that is ---", and so with any other.
| I shall write a comma after any absolute term to show that
| it is so regarded as a relative term.
|
| Then:
|
| "man that is black"
|
| will be written
|
| m,b.
|
| C.S. Peirce, CP 3.73

In any system where elements are organized according to types,
there tend to be any number of ways in which elements of one
type are naturally associated with elements of another type.
If the association is anything like a logical equivalence,
but with the first type being "lower" and the second type
being "higher" in some sense, then one frequently speaks
of a "semantic ascent" from the lower to the higher type.

For instance, it is very common in mathematics to associate an element m
of a set M with the constant function f_m : X -> M such that f_m (x) = m
for all x in X, where X is an arbitrary set.  Indeed, the correspondence
is so close that one often uses the same name "m" for the element m in M
and the function m = f_m : X -> M, relying on the context or an explicit
type indication to tell them apart.

For another instance, we have the "tacit extension" of a k-place relation
L c X_1 x ... x X_k to a (k+1)-place relation L' c X_1 x ... x X_k+1 that
we get by letting L' = L x X_k+1, that is, by maintaining the constraints
of L on the first k variables and letting the last variable wander freely.

What we have here, if I understand Peirce correctly, is another such
type of natural extension, sometimes called the "diagonal extension".
This associates a k-adic relative or a k-adic relation, counting the
absolute term and the set whose elements it denotes as the cases for
k = 0, with a series of relatives and relations of higher adicities.

A few examples will suffice to anchor these ideas.

Absolute terms:

m   =  "man"                =  C +, I +, J +, O

n   =  "noble"              =  C +, D +, O

w   =  "woman"              =  B +, D +, E

Diagonal extensions:

m,  =  "man that is ---"    =  C:C +, I:I +, J:J +, O:O

n,  =  "noble that is ---"  =  C:C +, D:D +, O:O

w,  =  "woman that is ---"  =  B:B +, D:D +, E:E

Sample products:

m,n  =  "man that is noble"  

     =  (C:C +, I:I +, J:J +, O:O)(C +, D +, O)

     =  C +, O

n,m  =  "noble that is man"

     =  (C:C +, D:D +, O:O)(C +, I +, J +, O)

     =  C +, O

n,w  =  "noble that is woman"

     =  (C:C +, D:D +, O:O)(B +, D +, E)

     =  D

w,n  =  "woman that is noble"

     =  (B:B +, D:D +, E:E)(C +, D +, O)

     =  D
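
Anyone who wants a mechanical check on these sample products can run the
following sketch in Python.  The helper name apply_relative is made up for
the occasion;  it simply carries out the rule that an elementary relative
X:Y applied to an absolute term yields X exactly when Y falls under that term.

m_comma = [('C','C'), ('I','I'), ('J','J'), ('O','O')]   # m, = "man that is ---"
n_comma = [('C','C'), ('D','D'), ('O','O')]              # n, = "noble that is ---"
n = {'C', 'D', 'O'}                                      # n  = "noble"
w = {'B', 'D', 'E'}                                      # w  = "woman"

def apply_relative(pairs, term):
    """Apply a 2-adic relative, given as X:Y pairs, to an absolute term."""
    return {x for (x, y) in pairs if y in term}

print(sorted(apply_relative(m_comma, n)))   # m,n = "man that is noble"   -> ['C', 'O']
print(sorted(apply_relative(n_comma, w)))   # n,w = "noble that is woman" -> ['D']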

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 22

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The Signs for Multiplication (cont.)
|
| It is obvious that multiplication into
| a multiplicand indicated by a comma is
| commutative <1>, that is,
|
| 's','l'  =  'l','s'.
|
| This multiplication is effectively the same as
| that of Boole in his logical calculus.  Boole's
| unity is my 1, that is, it denotes whatever is.
|
| <1>.  It will often be convenient to speak of the whole operation of
| affixing a comma and then multiplying as a commutative multiplication,
| the sign for which is the comma.  But though this is allowable, we shall
| fall into confusion at once if we ever forget that in point of fact it is
| not a different multiplication, only it is multiplication by a relative
| whose meaning -- or rather whose syntax -- has been slightly altered;
| and that the comma is really the sign of this modification of the
| foregoing term.
|
| C.S. Peirce, CP 3.74
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 23

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Let us backtrack a few years, and consider how Boole explained his
twin conceptions of "selective operations" and "selective symbols".

| Let us then suppose that the universe of our discourse
| is the actual universe, so that words are to be used in
| the full extent of their meaning, and let us consider the
| two mental operations implied by the words "white" and "men".
| The word "men" implies the operation of selecting in thought
| from its subject, the universe, all men;  and the resulting
| conception, 'men', becomes the subject of the next operation.
| The operation implied by the word "white" is that of selecting
| from its subject, "men", all of that class which are white.
| The final resulting conception is that of "white men".
|
| Now it is perfectly apparent that if the operations above described
| had been performed in a converse order, the result would have been the
| same.  Whether we begin by forming the conception of "'men'", and then by
| a second intellectual act limit that conception to "white men", or whether
| we begin by forming the conception of "white objects", and then limit it to
| such of that class as are "men", is perfectly indifferent so far as the result
| is concerned.  It is obvious that the order of the mental processes would be
| equally indifferent if for the words "white" and "men" we substituted any
| other descriptive or appellative terms whatever, provided only that their
| meaning was fixed and absolute.  And thus the indifference of the order
| of two successive acts of the faculty of Conception, the one of which
| furnishes the subject upon which the other is supposed to operate,
| is a general condition of the exercise of that faculty.  It is
| a law of the mind, and it is the real origin of that law of
| the literal symbols of Logic which constitutes its formal
| expression (1) Chap. II, [namely, xy = yx].
|
| It is equally clear that the mental operation above described is of such
| a nature that its effect is not altered by repetition.  Suppose that by
| a definite act of conception the attention has been fixed upon men, and
| that by another exercise of the same faculty we limit it to those of the
| race who are white.  Then any further repetition of the latter mental act,
| by which the attention is limited to white objects, does not in any way
| modify the conception arrived at, viz., that of white men.  This is also
| an example of a general law of the mind, and it has its formal expression
| in the law ((2) Chap. II) of the literal symbols [namely, x^2 = x].
|
| Boole, 'Laws of Thought', pp. 44-45.
|
| George Boole,
|'An Investigation of the Laws of Thought,
| On Which are Founded the Mathematical
| Theories of Logic and Probabilities',
| Reprinted, Dover, New York, NY, 1958.
| Originally published, Macmillan, 1854.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 24

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

In setting up his discussion of selective operations and
their corresponding selective symbols, Boole writes this:

| The operation which we really perform is one of 'selection according to
| a prescribed principle or idea'.  To what faculties of the mind such an
| operation would be referred, according to the received classification of
| its powers, it is not important to inquire, but I suppose that it would be
| considered as dependent upon the two faculties of Conception or Imagination,
| and Attention.  To the one of these faculties might be referred the formation
| of the general conception;  to the other the fixing of the mental regard upon
| those individuals within the prescribed universe of discourse which answer to
| the conception.  If, however, as seems not improbable, the power of Attention
| is nothing more than the power of continuing the exercise of any other faculty
| of the mind, we might properly regard the whole of the mental process above
| described as referrible to the mental faculty of Imagination or Conception,
| the first step of the process being the conception of the Universe itself,
| and each succeeding step limiting in a definite manner the conception
| thus formed.  Adopting this view, I shall describe each such step,
| or any definite combination of such steps, as a 'definite act
| of conception'.
|
| Boole, 'Laws of Thought', p. 43.
|
| George Boole,
|'An Investigation of the Laws of Thought,
| On Which are Founded the Mathematical
| Theories of Logic and Probabilities',
| Reprinted, Dover, New York, NY, 1958.
| Originally published, Macmillan, 1854.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 25

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

In algebra, an "idempotent element" x is one that obeys the
"idempotent law", that is, it satisfies the equation xx = x.
Under most circumstances, it is usual to write this x^2 = x.

If the algebraic system in question falls under the additional laws
that are necessary to carry out the requisite transformations, then
x^2 = x is convertible into x - x^2 = 0, and this into x(1 - x) = 0.

If the algebraic system in question happens to be a boolean algebra,
then the equation x(1 - x) = 0 says that x & ~x is identically false,
in effect, a statement of the classical principle of non-contradiction.

We have already seen how Boole found rationales for the commutative law and
the idempotent law by contemplating the properties of "selective operations".

It is time to bring these threads together, which we can do by considering the
so-called "idempotent representation" of sets.  This will give us one of the
best ways to understand the significance that Boole attached to selective
operations.  It will also link up with the statements that Peirce makes
about his adicity-augmenting comma operation.
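
Before moving on, the point can be checked in a couple of lines of Python,
merely verifying, for the two elements of the boolean domain, the idempotent
law and its restatement as the principle of non-contradiction.

for x in (0, 1):
    assert x * x == x           # the idempotent law  x^2 = x
    assert x * (1 - x) == 0     # x(1 - x) = 0, that is, x & ~x is false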

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 26

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Boole rationalized the properties of what we now dub "boolean multiplication",
roughly equivalent to logical conjunction, in terms of the laws that apply to
selective operations.  Peirce, in his turn, taking a very significant step of
analysis that has seldom been recognized for what it would lead to, much less
followed, does not consider this multiplication to be a fundamental operation,
but derives it as a by-product of relative multiplication by a comma relative.
Thus, Peirce makes logical conjunction a special case of relative composition.

This opens up a very wide field of investigation,
"the operational significance of logical terms",
one might say, but it will be best to advance
bit by bit, and to lean on simple examples.

Back to Venice, and the close-knit party
of absolutes and relatives that we were
entertaining when last we were there.

Here is the list of absolute terms that we were considering before,
to which I have added 1, the universe of "anybody or anything",
just for good measure:

1   =  "anybody"              =  B +, C +, D +, E +, I +, J +, O

m   =  "man"                  =  C +, I +, J +, O

n   =  "noble"                =  C +, D +, O

w   =  "woman"                =  B +, D +, E

Here is the list of "comma inflexions" or "diagonal extensions" of these terms:

1,  =  "anybody that is ---"  =  B:B +, C:C +, D:D +, E:E +, I:I +, J:J +, O:O

m,  =  "man that is ---"      =  C:C +, I:I +, J:J +, O:O

n,  =  "noble that is ---"    =  C:C +, D:D +, O:O

w,  =  "woman that is ---"    =  B:B +, D:D +, E:E

One observes that the diagonal extension of 1
is the same thing as the identity relation !1!.

Inspired by this identification of "1," with "!1!", and because
the affixed commas of the diagonal extensions tend to get lost
in the ordinary commas of punctuation, I will experiment with
using the alternative notations:

m,  =  !m!
n,  =  !n!
w,  =  !w!

Working within our smaller sample of absolute terms,
we have already computed the sorts of products that
apply the diagonal extension of an absolute term to
another absolute term, for instance, these products:

m,n  =  !m!n  =  "man that is noble"    =  C +, O
n,m  =  !n!m  =  "noble that is man"    =  C +, O
n,w  =  !n!w  =  "noble that is woman"  =  D
w,n  =  !w!n  =  "woman that is noble"  =  D

This exercise gave us a bit of practical insight into
why the commutative law holds for logical conjunction.

Further insight into the laws that govern this realm of logic,
and the underlying reasons why they apply, might be gained by
systematically working through the whole variety of different
products that are generated by the operational means in sight,
namely, the products indicated by {1, m, n, w}<,>{1, m, n, w}.
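
For a quick mechanical preview of that survey, here is a sketch in Python,
illustrative only, using just the denotations listed above.  Since a product
x,y comes down to intersecting the denotations of x and y, the table comes
out symmetric, mirroring the commutative law.

terms = {
    '1': {'B', 'C', 'D', 'E', 'I', 'J', 'O'},
    'm': {'C', 'I', 'J', 'O'},
    'n': {'C', 'D', 'O'},
    'w': {'B', 'D', 'E'},
}

for x, X in terms.items():
    for y, Y in terms.items():
        print(f"{x},{y}  =  {', '.join(sorted(X & Y)) or 'null'}")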

But before we try to explore this territory more systematically,
let us equip ourselves with the sorts of graphical and matrical
representations that we discovered to provide us with such able
assists to the intuition in so many of our previous adventures.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 27

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Peirce's comma operation, in its application to an absolute term,
is tantamount to the representation of that term's denotation as
an idempotent transformation, which is commonly represented as a
diagonal matrix.  This is why I call it the "diagonal extension".

An idempotent element x is given by the abstract condition that xx = x,
but we commonly encounter such elements in more concrete circumstances,
acting as operators or transformations on other sets or spaces, and in
that action they will often be represented as matrices of coefficients.

Let's see how all of this looks from the graphical and matrical perspectives.

Absolute terms:

1  =  "anybody"  =  B +, C +, D +, E +, I +, J +, O

m  =  "man"      =  C +, I +, J +, O

n  =  "noble"    =  C +, D +, O

w  =  "woman"    =  B +, D +, E

Previously, we represented absolute terms as column vectors.
The above four terms are given by the columns of this table:

   | 1 m n w
---o---------
 B | 1 0 0 1
 C | 1 1 1 0
 D | 1 0 1 1
 E | 1 0 0 1
 I | 1 1 0 0
 J | 1 1 0 0
 O | 1 1 1 0

One way to represent sets in the bigraph picture
is simply to mark the nodes in some way, like so:

    B   C   D   E   I   J   O
1   +   +   +   +   +   +   +

    B   C   D   E   I   J   O
m   o   +   o   o   +   +   +

    B   C   D   E   I   J   O
n   o   +   +   o   o   o   +

    B   C   D   E   I   J   O
w   +   o   +   +   o   o   o

Diagonal extensions of the absolute terms:

1,  =  "anybody that is ---"  =  B:B +, C:C +, D:D +, E:E +, I:I +, J:J +, O:O

m,  =  "man that is ---"      =  C:C +, I:I +, J:J +, O:O

n,  =  "noble that is ---"    =  C:C +, D:D +, O:O

w,  =  "woman that is ---"    =  B:B +, D:D +, E:E

Naturally enough, the diagonal extensions are represented by diagonal matrices:

!1!| B C D E I J O
---o---------------
 B | 1 0 0 0 0 0 0
 C | 0 1 0 0 0 0 0
 D | 0 0 1 0 0 0 0
 E | 0 0 0 1 0 0 0
 I | 0 0 0 0 1 0 0
 J | 0 0 0 0 0 1 0
 O | 0 0 0 0 0 0 1

!m!| B C D E I J O
---o---------------
 B | 0 0 0 0 0 0 0
 C | 0 1 0 0 0 0 0
 D | 0 0 0 0 0 0 0
 E | 0 0 0 0 0 0 0
 I | 0 0 0 0 1 0 0
 J | 0 0 0 0 0 1 0
 O | 0 0 0 0 0 0 1

!n!| B C D E I J O
---o---------------
 B | 0 0 0 0 0 0 0
 C | 0 1 0 0 0 0 0
 D | 0 0 1 0 0 0 0
 E | 0 0 0 0 0 0 0
 I | 0 0 0 0 0 0 0
 J | 0 0 0 0 0 0 0
 O | 0 0 0 0 0 0 1

!w!| B C D E I J O
---o---------------
 B | 1 0 0 0 0 0 0
 C | 0 0 0 0 0 0 0
 D | 0 0 1 0 0 0 0
 E | 0 0 0 1 0 0 0
 I | 0 0 0 0 0 0 0
 J | 0 0 0 0 0 0 0
 O | 0 0 0 0 0 0 0

Cast into the bigraph picture of 2-adic relations,
the diagonal extension of an absolute term takes on
a very distinctive sort of "straight-laced" character:

    B   C   D   E   I   J   O
u   o   o   o   o   o   o   o
    |   |   |   |   |   |   |
1,  |   |   |   |   |   |   |
    |   |   |   |   |   |   |
u   o   o   o   o   o   o   o
    B   C   D   E   I   J   O

    B   C   D   E   I   J   O
u   o   o   o   o   o   o   o
        |           |   |   |
m,      |           |   |   |
        |           |   |   |
u   o   o   o   o   o   o   o
    B   C   D   E   I   J   O

    B   C   D   E   I   J   O
u   o   o   o   o   o   o   o
        |   |               |
n,      |   |               |
        |   |               |
u   o   o   o   o   o   o   o
    B   C   D   E   I   J   O

    B   C   D   E   I   J   O
u   o   o   o   o   o   o   o
    |       |   |
w,  |       |   |
    |       |   |
u   o   o   o   o   o   o   o
    B   C   D   E   I   J   O
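
For those checking along in code, a short sketch in Python, illustrative
only and with made-up helper names, renders the diagonal extension of an
absolute term as a zero-one diagonal matrix and confirms that such a matrix
is idempotent under boolean matrix multiplication, here for the case of !m!.

U = ['B', 'C', 'D', 'E', 'I', 'J', 'O']
n = len(U)

def diagonal(term):
    """Zero-one diagonal matrix of the indicator vector of an absolute term."""
    return [[1 if (i == j and U[i] in term) else 0 for j in range(n)]
            for i in range(n)]

def bool_mult(M, N):
    """Boolean matrix product."""
    return [[int(any(M[i][j] and N[j][k] for j in range(n)))
             for k in range(n)] for i in range(n)]

M = diagonal({'C', 'I', 'J', 'O'})    # !m! = m, = "man that is ---"
assert bool_mult(M, M) == M           # !m!!m! = !m!, an idempotent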

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 28

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Just to be doggedly persistent about it all, here is what
ought to be a sufficient sample of products involving the
multiplication of a comma relative onto an absolute term,
presented in both graphical and matrical representations.

Example 1.  Anything That Is Anything

1,1  =  1

"anything that is anything"  =  "anything"

B   C   D   E   I   J   O
+   +   +   +   +   +   +  1
|   |   |   |   |   |   |
|   |   |   |   |   |   |  1,
|   |   |   |   |   |   |
o   o   o   o   o   o   o  =

+   +   +   +   +   +   +  1
B   C   D   E   I   J   O

| 1 0 0 0 0 0 0 | | 1 |     | 1 |
| 0 1 0 0 0 0 0 | | 1 |     | 1 |
| 0 0 1 0 0 0 0 | | 1 |     | 1 |
| 0 0 0 1 0 0 0 | | 1 |  =  | 1 |
| 0 0 0 0 1 0 0 | | 1 |     | 1 |
| 0 0 0 0 0 1 0 | | 1 |     | 1 |
| 0 0 0 0 0 0 1 | | 1 |     | 1 |

Example 2.  Anything That Is Man

1,m  =  m

"anything that is man"  =  "man"

B   C   D   E   I   J   O
o   +   o   o   +   +   +  m
|   |   |   |   |   |   |
|   |   |   |   |   |   |  1,
|   |   |   |   |   |   |
o   o   o   o   o   o   o  =

o   +   o   o   +   +   +  m
B   C   D   E   I   J   O

| 1 0 0 0 0 0 0 | | 0 |     | 0 |
| 0 1 0 0 0 0 0 | | 1 |     | 1 |
| 0 0 1 0 0 0 0 | | 0 |     | 0 |
| 0 0 0 1 0 0 0 | | 0 |  =  | 0 |
| 0 0 0 0 1 0 0 | | 1 |     | 1 |
| 0 0 0 0 0 1 0 | | 1 |     | 1 |
| 0 0 0 0 0 0 1 | | 1 |     | 1 |

Example 3.  Man That Is Anything

m,1  =  m

"man that is anything"  =  "man"

B   C   D   E   I   J   O
+   +   +   +   +   +   +  1
    |           |   |   |
    |           |   |   |  m,
    |           |   |   |
o   o   o   o   o   o   o  =

o   +   o   o   +   +   +  m
B   C   D   E   I   J   O

| 0 0 0 0 0 0 0 | | 1 |     | 0 |
| 0 1 0 0 0 0 0 | | 1 |     | 1 |
| 0 0 0 0 0 0 0 | | 1 |     | 0 |
| 0 0 0 0 0 0 0 | | 1 |  =  | 0 |
| 0 0 0 0 1 0 0 | | 1 |     | 1 |
| 0 0 0 0 0 1 0 | | 1 |     | 1 |
| 0 0 0 0 0 0 1 | | 1 |     | 1 |

Example 4.  Man That Is Noble

m,n  =  "man that is noble"

B   C   D   E   I   J   O
o   +   +   o   o   o   +  n
    |           |   |   |
    |           |   |   |  m,
    |           |   |   |
o   o   o   o   o   o   o  =

o   +   o   o   o   o   +  m,n
B   C   D   E   I   J   O

| 0 0 0 0 0 0 0 | | 0 |     | 0 |
| 0 1 0 0 0 0 0 | | 1 |     | 1 |
| 0 0 0 0 0 0 0 | | 1 |     | 0 |
| 0 0 0 0 0 0 0 | | 0 |  =  | 0 |
| 0 0 0 0 1 0 0 | | 0 |     | 0 |
| 0 0 0 0 0 1 0 | | 0 |     | 0 |
| 0 0 0 0 0 0 1 | | 1 |     | 1 |

Example 5.  Noble That Is Man

n,m  =  "noble that is man"

B   C   D   E   I   J   O
o   +   o   o   +   +   +  m
    |   |               |
    |   |               |  n,
    |   |               |
o   o   o   o   o   o   o  =

o   +   o   o   o   o   +  n,m
B   C   D   E   I   J   O

| 0 0 0 0 0 0 0 | | 0 |     | 0 |
| 0 1 0 0 0 0 0 | | 1 |     | 1 |
| 0 0 1 0 0 0 0 | | 0 |     | 0 |
| 0 0 0 0 0 0 0 | | 0 |  =  | 0 |
| 0 0 0 0 0 0 0 | | 1 |     | 0 |
| 0 0 0 0 0 0 0 | | 1 |     | 0 |
| 0 0 0 0 0 0 1 | | 1 |     | 1 |

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 29

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

From this point forward we may think of idempotents, selectives,
and zero-one diagonal matrices as being roughly equivalent notions.
The only reason that I say "roughly" is that we are comparing ideas
at different levels of abstraction when we propose these connections.

We have covered the way that Peirce uses his invention of the
comma modifier to assimilate boolean multiplication, logical
conjunction, or what we may think of as "serial selection"
under his more general account of relative multiplication.

But the comma functor has its application to relative terms
of any arity, not just the zeroth arity of absolute terms,
and so there will be a lot more to explore on this point.
But now I must return to the anchorage of Peirce's text,
and hopefully get a chance to revisit this topic later.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 30

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The Signs for Multiplication (cont.)
|
| The sum 'x' + 'x' generally denotes no logical term.
| But 'x',_oo + 'x',_oo may be considered as denoting
| some two 'x's.
|
| It is natural to write:
|
| 'x' + 'x'  =  !2!.'x'
|
| and
|
| 'x',_oo + 'x',_oo  =  !2!.'x',_oo
|
| where the dot shows that this multiplication is invertible.
|
| We may also use the antique figures so that:
|
| !2!.'x',_oo  =  `2`'x'
|
| just as
|
| !1!_oo  =  `1`.
|
| Then `2` alone will denote some two things.
|
| But this multiplication is not in general commutative,
| and only becomes so when it affects a relative which
| imparts a relation such that a thing only bears it
| to 'one' thing, and one thing 'alone' bears it to
| a thing.
|
| For instance, the lovers of two women are not
| the same as two lovers of women, that is:
|
| 'l'`2`.w
|
| and
|
| `2`.'l'w
|
| are unequal;
|
| but the husbands of two women are the
| same as two husbands of women, that is:
|
| 'h'`2`.w  =  `2`.'h'w
|
| and in general:
|
| 'x',`2`.'y'  =  `2`.'x','y'.
|
| C.S. Peirce, CP 3.75
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 31

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

What Peirce is attempting to do in CP 3.75 is absolutely amazing,
and I personally did not see anything on par with it again until
I began to study the application of mathematical category theory
to computation and logic, back in the mid 1980's.  To completely
evaluate the success of this attempt, we would have to return to
Peirce's earlier paper "Upon the Logic of Mathematics" (1867) to
pick up some of the ideas about arithmetic that he set out there.

Another branch of the investigation would require that we examine
more carefully the entire syntactic mechanics of "subjacent signs"
that Peirce uses to establish linkages among relational domains.
It is important to note that these types of indices constitute
a diacritical, interpretive, syntactic category under which
Peirce also places the comma functor.

The way that I would currently approach both of these branches
of the investigation would be to open up a wider context for
the study of relational compositions, attempting to get at
the essence of what is going on when we relate relations,
possibly complex, to other relations, possibly simple.

But that will take another cup of java ('c'j) ...
or maybe (!2!.'c',_oo)j = `2`'c'j ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 32

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

To say that a relative term "imparts a relation"
is to say that it conveys information about the
space of tuples in a cartesian product, that is,
it determines a particular subset of that space.

When we study the combinations of relative terms, from the most
elementary forms of composition to the most complex patterns of
correlation, we are considering the ways that these constraints,
determinations, and informations, as imparted by relative terms,
can be compounded in the formation of syntax.

Let us go back and look more carefully at just how it happens that
Peirce's jacent terms and subjacent indices manage to impart their
respective measures of information about relations.

I will begin with the two examples illustrated in Figures 1 and 2,
where I have drawn in the corresponding lines of identity between
the subjacent marks of reference #, $, %.

o-------------------------------------------------o
|                                                 |
|                                                 |
|        'l'__#       #'s'__$   $w                |
|             o       o     o   o                 |
|              \     /       \ /                  |
|               \   /         o                   |
|                \ /          $                   |
|                 o                               |
|                 #                               |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 1.  Lover of a Servant of a Woman

o-------------------------------------------------o
|                                                 |
|                                                 |
|        `g`__#__$    #'l'__%   %w   $h           |
|             o  o    o     o   o    o            |
|              \  \  /       \ /    /             |
|               \  \/         o    /              |
|                \ /\         %   /               |
|                 o  ------o------                |
|                 #        $                      |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 2.  Giver of a Horse to a Lover of a Woman

One way to approach the problem of "information fusion"
in Peirce's syntax is to soften the distinction between
jacent terms and subjacent signs, and to treat the types
of constraints that they separately signify more on a par
with each other.

To that purpose, I will set forth a way of thinking about
relational composition that emphasizes the set-theoretic
constraints involved in the construction of a composite.

For example, suppose that we are given the relations L c X x Y, M c Y x Z.
Table 3 and Figure 4 present a couple of ways of picturing the constraints
that are involved in constructing the relational composition L o M c X x Z.

Table 3.  Relational Composition
o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o
|    L    #    X    |    Y    |         |
o---------o---------o---------o---------o
|    M    #         |    Y    |    Z    |
o---------o---------o---------o---------o
|  L o M  #    X    |         |    Z    |
o---------o---------o---------o---------o

The way to read Table 3 is to imagine that you are
playing a game that involves placing tokens on the
squares of a board that is marked in just this way.
The rules are that you have to place a single token
on each marked square in the middle of the board in
such a way that all of the indicated constraints are
satisfied.  That is to say, you have to place a token
whose denomination is a value in the set X on each of
the squares marked "X", and similarly for the squares
marked "Y" and "Z", meanwhile leaving all of the blank
squares empty.  Furthermore, the tokens placed in each
row and column have to obey the relational constraints
that are indicated at the heads of the corresponding
row and column.  Thus, the two tokens from X have to
denominate the very same value from X, and likewise
for Y and Z, while the pairs of tokens on the rows
marked "L" and "M" are required to denote elements
that are in the relations L and M, respectively.
The upshot is that when just this much is done,
that is, when the L, M, and !1! relations are
satisfied, then the row marked "L o M" will
automatically bear the tokens of a pair of
elements in the composite relation L o M.

o-------------------------------------------------o
|                                                 |
|                L     L o M     M                |
|                @       @       @                |
|               / \     / \     / \               |
|              o   o   o   o   o   o              |
|              X   Y   X   Z   Y   Z              |
|              o   o   o   o   o   o              |
|               \   \ /     \ /   /               |
|                \   /       \   /                |
|                 \ / \__ __/ \ /                 |
|                  @     @     @                  |
|                 !1!   !1!   !1!                 |
|                                                 |
o-------------------------------------------------o
Figure 4.  Relational Composition

Figure 4 merely shows a different way of viewing the same situation.
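
The same constraints can be phrased in a few lines of Python, which may help
readers who think more readily in code.  The function name compose is made up
for the occasion, and the toy data are hypothetical, not from the text.

def compose(L, M):
    """(x, z) is in L o M  iff  some y has (x, y) in L and (y, z) in M."""
    return {(x, z) for (x, y1) in L for (y2, z) in M if y1 == y2}

# A toy instance, purely for illustration.
L = {('a', 'p'), ('b', 'q')}
M = {('p', 'u'), ('q', 'v'), ('q', 'w')}
print(compose(L, M))   # -> {('a', 'u'), ('b', 'v'), ('b', 'w')}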

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 33

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I will devote some time to drawing out the relationships
that exist among the different pictures of relations and
relative terms that were shown above, or as redrawn here:

o-------------------------------------------------o
|                                                 |
|                                                 |
|        'l'__$       $'s'__%   %w                |
|             o       o     o   o                 |
|              \     /       \ /                  |
|               \   /         o                   |
|                \ /          %                   |
|                 o                               |
|                 $                               |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 1.  Lover of a Servant of a Woman

o-------------------------------------------------o
|                                                 |
|                                                 |
|        `g`__$__%    $'l'__*   *w   %h           |
|             o  o    o     o   o    o            |
|              \  \  /       \ /    /             |
|               \  \/         o    /              |
|                \ /\         *   /               |
|                 o  ------o------                |
|                 $        %                      |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 2.  Giver of a Horse to a Lover of a Woman

Table 3.  Relational Composition
o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o
|    L    #    X    |    Y    |         |
o---------o---------o---------o---------o
|    S    #         |    Y    |    Z    |
o---------o---------o---------o---------o
|  L o S  #    X    |         |    Z    |
o---------o---------o---------o---------o

o-------------------------------------------------o
|                                                 |
|                L     L o S     S                |
|                @       @       @                |
|               / \     / \     / \               |
|              o   o   o   o   o   o              |
|              X   Y   X   Z   Y   Z              |
|              o   o   o   o   o   o              |
|               \   \ /     \ /   /               |
|                \   /       \   /                |
|                 \ / \__ __/ \ /                 |
|                  @     @     @                  |
|                 !1!   !1!   !1!                 |
|                                                 |
o-------------------------------------------------o
Figure 4.  Relational Composition

Figures 1 and 2 exhibit examples of relative multiplication
in one of Peirce's styles of syntax, to which I subtended
lines of identity to mark the anaphora of the correlates.
These pictures are adapted to showing the anatomy of the
relative terms, while the forms of analysis illustrated
in Table 3 and Figure 4 are designed to highlight the
structures of the objective relations themselves.

There are many ways that Peirce might have gotten from his 1870 Notation
for the Logic of Relatives to his more evolved systems of Logical Graphs.
For my part, I find it interesting to speculate on how the metamorphosis
might have been accomplished by way of transformations that act on these
nascent forms of syntax and that take place not too far from the pale of
its means, that is, as nearly as possible according to the rules and the
permissions of the initial system itself.

In Existential Graphs, a relation is represented by a node
whose degree is the adicity of that relation, and which is
adjacent via lines of identity to the nodes that represent
its correlative relations, including as a special case any
of its terminal individual arguments.

In the 1870 Logic of Relatives, implicit lines of identity are invoked by
the subjacent numbers and marks of reference only when a correlate of some
relation is the relate of some relation.  Thus, the principal relate, which
is not a correlate of any explicit relation, is not singled out in this way.

Remarkably enough, the comma modifier itself provides us with a mechanism
to abstract the logic of relations from the logic of relatives, and thus
to forge a possible link between the syntax of relative terms and the
more graphical depiction of the objective relations themselves.

Figure 5 demonstrates this possibility, posing a transitional case between
the style of syntax in Figure 1 and the picture of composition in Figure 4.

o-----------------------------------------------------------o
|                                                           |
|                           L o S                           |
|                 ____________@____________                 |
|                /                         \                |
|               /      L             S      \               |
|              /       @             @       \              |
|             /       / \           / \       \             |
|            /       /   \         /   \       \            |
|           o       o     o       o     o       o           |
|           X       X     Y       Y     Z       Z           |
|       1,__#       #'l'__$       $'s'__%       %1          |
|           o       o     o       o     o       o           |
|            \     /       \     /       \     /            |
|             \   /         \   /         \   /             |
|              \ /           \ /           \ /              |
|               @             @             @               |
|              !1!           !1!           !1!              |
|                                                           |
o-----------------------------------------------------------o
Figure 5.  Anything that is a Lover of a Servant of Anything

In this composite sketch, the diagonal extension of the universe 1
is invoked up front to anchor an explicit line of identity for the
leading relate of the composition, while the terminal argument "w"
has been generalized to the whole universe 1, in effect, executing
an act of abstraction.  This type of universal bracketing isolates
the composing of the relations L and S to form the composite L o S.
The three relational domains X, Y, Z may be distinguished from one
another, or else rolled up into a single universe of discourse, as
one prefers.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 34

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

From now on I will use the forms of analysis exemplified in the last set of
Figures and Tables as a routine bridge between the logic of relative terms
and the logic of their extended relations.  For future reference, we may
think of Table 3 as illustrating the "solitaire" or "spreadsheet" model
of relational composition, while Figure 4 may be thought of as making
a start toward the "hyper(di)graph" model of generalized compositions.
I will explain the hypergraph model in some detail at a later point.
The transitional form of analysis represented by Figure 5 may be
called the "universal bracketing" of relatives as relations.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 35

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

We have sufficiently covered the application of the comma functor,
or the diagonal extension, to absolute terms, so let us return to
where we were in working our way through CP 3.73, and see whether
we can validate Peirce's statements about the "commifications" of
2-adic relative terms that yield their 3-adic diagonal extensions.

| But not only may any absolute term be thus regarded as
| a relative term, but any relative term may in the same
| way be regarded as a relative with one correlate more.
| It is convenient to take this additional correlate as
| the first one.
|
| Then:
|
| 'l','s'w
|
| will denote a lover of a woman
| that is a servant of that woman.
|
| The comma here after 'l' should not be considered
| as altering at all the meaning of 'l', but as only
| a subjacent sign, serving to alter the arrangement
| of the correlates.
|
| C.S. Peirce, CP 3.73

Just to plant our feet on a more solid stage,
let's apply this idea to the Othello example.

For this performance only, just to make the example more interesting,
let us assume that Jeste (J) is secretly in love with Desdemona (D).

Then we begin with the modified data set:

 w   =  "woman"           =  B +, D +, E

'l'  =  "lover of ---"    =  B:C +, C:B +, D:O +, E:I +, I:E +, J:D +, O:D

's'  =  "servant of ---"  =  C:O +, E:D +, I:O +, J:D +, J:O

And next we derive the following results:

'l',  =  "lover that is --- of ---"

      =  B:B:C +, C:C:B +, D:D:O +, E:E:I +, I:I:E +, J:J:D +, O:O:D

'l','s'w  =  (B:B:C +, C:C:B +, D:D:O +, E:E:I +, I:I:E +, J:J:D +, O:O:D)

          x  (C:O +, E:D +, I:O +, J:D +, J:O)

          x  (B +, D +, E)

Now what are we to make of that?

If we operate in accordance with Peirce's example of `g`'o'h
as the "giver of a horse to an owner of that horse", then we
may assume that the associative law and the distributive law
are by default in force, allowing us to derive this equation:

'l','s'w  =  'l','s'(B +, D +, E)

          =  'l','s'B +, 'l','s'D +, 'l','s'E

Evidently what Peirce means by the associative principle,
as it applies to this type of product, is that a product
of elementary relatives having the form (R:S:T)(S:T)(T)
is equal to R but that no other form of product yields
a non-null result.  Scanning the implied terms of the
triple product tells us that only the following case
is non-null:  J = (J:J:D)(J:D)(D).  It follows that:

'l','s'w  =  "lover and servant of a woman"

          =  "lover that is a servant of a woman"

          =  "lover of a woman that is a servant of that woman"

          =  J

And so what Peirce says makes sense in this case.
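
For what it is worth, the same brute-force scan can be run in a few lines of
Python, using the modified data set above.  The names are ad hoc;  the scan
just looks for trinomials of the form (R:S:T)(S:T)(T).

l_comma = [('B','B','C'), ('C','C','B'), ('D','D','O'), ('E','E','I'),
           ('I','I','E'), ('J','J','D'), ('O','O','D')]           # 'l',
s_pairs = [('C','O'), ('E','D'), ('I','O'), ('J','D'), ('J','O')]  # 's'
w_term  = {'B', 'D', 'E'}                                          # w

result = {r for (r, s1, t1) in l_comma
            for (s2, t2) in s_pairs
            if (s1, t1) == (s2, t2) and t1 in w_term}
print(result)   # -> {'J'}, the lover of a woman that is a servant of that woman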

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 36

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

As Peirce observes, it is not possible to work with
relations in general without eventually abandoning
all of one's algebraic principles, in due time the
associative and maybe even the distributive, just
as we have already left behind the commutative.
It cannot be helped, since we cannot reflect on
a law except from a perspective outside it,
that is to say, at any rate, virtually so.

One way to do this would be from the standpoint of the combinator calculus,
and there are places where Peirce verges on systems that are very similar,
but I am making a deliberate effort to remain here as close as possible
within the syntactoplastic chronism of his 1870 Logic of Relatives.
So let us make use of the smoother transitions that are afforded
by the paradigmatic Figures and Tables that I drew up earlier.

For the next few episodes, then, I will examine the examples
that Peirce gives at the next level of complication in the
multiplication of relative terms, for instance, the three
that I have redrawn below.

o-------------------------------------------------o
|                                                 |
|                                                 |
|         `g`__$__%    $'l'__*   *w   %h          |
|              o  o    o     o   o    o           |
|               \  \  /       \ /    /            |
|                \  \/         @    /             |
|                 \ /\______ ______/              |
|                  @        @                     |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 6.  Giver of a Horse to a Lover of a Woman

o-------------------------------------------------o
|                                                 |
|                                                 |
|         `g`__$__%    $'o'__*   *%h              |
|              o  o    o     o   oo               |
|               \  \  /       \ //                |
|                \  \/         @/                 |
|                 \ /\____ ____/                  |
|                  @      @                       |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 7.  Giver of a Horse to an Owner of It

o-------------------------------------------------o
|                                                 |
|                                                 |
|        'l',__$__%    $'s'__*   *%w              |
|              o  o    o     o   oo               |
|               \  \  /       \ //                |
|                \  \/         @/                 |
|                 \ /\____ ____/                  |
|                  @      @                       |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 8.  Lover that is a Servant of a Woman

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 37

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Here is what I get when I try to analyze Peirce's
"giver of a horse to a lover of a woman" example
along the same lines as the 2-adic compositions.

We may begin with the mark-up shown in Figure 6.

o-------------------------------------------------o
|                                                 |
|                                                 |
|         `g`__$__%    $'l'__*   *w   %h          |
|              o  o    o     o   o    o           |
|               \  \  /       \ /    /            |
|                \  \/         @    /             |
|                 \ /\______ ______/              |
|                  @        @                     |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 6.  Giver of a Horse to a Lover of a Woman

If we analyze this in accord with the "spreadsheet" model
of relational composition, the core of it is a particular
way of composing a 3-adic "giving" relation G c T x U x V
with a 2-adic "loving" relation L c U x W so as to obtain
a specialized sort of 3-adic relation (G o L) c T x W x V.
The applicable constraints on tuples are shown in Table 9.

Table 9.  Composite of Triadic and Dyadic Relations
o---------o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o=========o
|    G    #    T    |    U    |         |    V    |
o---------o---------o---------o---------o---------o
|    L    #         |    U    |    W    |         |
o---------o---------o---------o---------o---------o
|  G o L  #    T    |         |    W    |    V    |
o---------o---------o---------o---------o---------o

The hypergraph picture of the abstract composition is given in Figure 10.

o---------------------------------------------------------------------o
|                                                                     |
|                                G o L                                |
|                       ___________@___________                       |
|                      /                  \    \                      |
|                     /  G              L  \    \                     |
|                    /   @              @   \    \                    |
|                   /   /|\            / \   \    \                   |
|                  /   / | \          /   \   \    \                  |
|                 /   /  |  \        /     \   \    \                 |
|                /   /   |   \      /       \   \    \                |
|               o   o    o    o    o         o   o    o               |
|               T   T    U    V    U         W   W    V               |
|            1,_#   #`g`_$____%    $'l'______*   *1   %1              |
|               o   o    o    o    o         o   o    o               |
|                \ /      \    \  /           \ /    /                |
|                 @        \    \/             @    /                 |
|                !1!        \   /\            !1!  /                  |
|                            \ /  \_______ _______/                   |
|                             @           @                           |
|                            !1!         !1!                          |
|                                                                     |
o---------------------------------------------------------------------o
Figure 10.  Anything that is a Giver of Anything to a Lover of Anything
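
In code, the pattern of Table 9 might be sketched as follows, in Python, with
illustrative names and hypothetical toy data only:  the triadic relation G and
the dyadic relation L are joined on their shared U value, leaving a triadic
composite over T x W x V.

def compose_3_2(G, L):
    """(t, w, v) is in G o L  iff  some u has (t, u, v) in G and (u, w) in L."""
    return {(t, w, v) for (t, u1, v) in G for (u2, w) in L if u1 == u2}

# A toy instance, purely for illustration.
G = {('giver1', 'lover1', 'horse1')}
L = {('lover1', 'woman1')}
print(compose_3_2(G, L))   # -> {('giver1', 'woman1', 'horse1')}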

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 38

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

In taking up the next example of relational composition,
let's exchange the relation 't' = "trainer of ---" for
Peirce's relation 'o' = "owner of ---", simply for the
sake of avoiding conflicts in the symbols that we use.
In this way, Figure 7 is transformed into Figure 11.

o-------------------------------------------------o
|                                                 |
|                                                 |
|         `g`__$__%    $'t'__*   *%h              |
|              o  o    o     o   oo               |
|               \  \  /       \ //                |
|                \  \/         @/                 |
|                 \ /\____ ____/                  |
|                  @      @                       |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 11.  Giver of a Horse to a Trainer of It

Now here's an interesting point, in fact, a critical transition point,
that we see resting in potential but a stone's throw removed from the
chronism, the secular neighborhood, the temporal vicinity of Peirce's
1870 LOR, and it's a vertex that turns on the teridentity relation.

The hypergraph picture of the abstract composition is given in Figure 12.

o---------------------------------------------------------------------o
|                                                                     |
|                                G o T                                |
|                 _________________@_________________                 |
|                /                                   \                |
|               /        G              T             \               |
|              /         @              @              \              |
|             /         /|\            / \              \             |
|            /         / | \          /   \              \            |
|           /         /  |  \        /     \              \           |
|          /         /   |   \      /       \              \          |
|         o         o    o    o    o         o              o         |
|         X         X    Y    Z    Y         Z              Z         |
|      1,_#         #`g`_$____%    $'t'______%              %1        |
|         o         o    o    o    o         o              o         |
|          \       /      \    \  /          |             /          |
|           \     /        \    \/           |            /           |
|            \   /          \   /\           |           /            |
|             \ /            \ /  \__________|__________/             |
|              @              @              @                        |
|             !1!            !1!            !1!                       |
|                                                                     |
o---------------------------------------------------------------------o
Figure 12.  Anything that is a Giver of Anything to a Trainer of It

If we analyze this in accord with the "spreadsheet" model
of relational composition, the core of it is a particular
way of composing a 3-adic "giving" relation G c X x Y x Z
with a 2-adic "training" relation T c Y x Z in such a way
as to determine a certain 2-adic relation (G o T) c X x Z.
Table 13 schematizes the associated constraints on tuples.

Table 13.  Another Brand of Composition
o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o
|    G    #    X    |    Y    |    Z    |
o---------o---------o---------o---------o
|    T    #         |    Y    |    Z    |
o---------o---------o---------o---------o
|  G o T  #    X    |         |    Z    |
o---------o---------o---------o---------o

So we see that the notorious teridentity relation,
which I have left equivocally denoted by the same
symbol as the identity relation !1!, is already
implicit in Peirce's discussion at this point.
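
A sketch of the Table 13 pattern in Python, with illustrative names and
hypothetical toy data, makes the role of teridentity explicit:  a single
value must fill the third place of G and the second place of T at once.

def compose_G_T(G, T):
    """(x, z) is in G o T  iff  some y has (x, y, z) in G and (y, z) in T."""
    return {(x, z) for (x, y, z) in G for (y2, z2) in T if y == y2 and z == z2}

# A toy instance, purely for illustration.
G = {('giver1', 'trainer1', 'horse1'), ('giver2', 'trainer1', 'horse2')}
T = {('trainer1', 'horse1')}
print(compose_G_T(G, T))   # -> {('giver1', 'horse1')}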

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 39

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

The use of the concepts of identity and teridentity is not to identify
a thing in itself with itself, much less twice or thrice over, since
there is no need and thus no utility in that.  I can imagine Peirce
asking, on Kantian principles if not entirely on Kantian premisses,
"Where is the manifold to be unified?"  The manifold that demands
unification does not reside in the object but in the phenomena,
that is, in the appearances that might have been appearances
of different objects but that happen to be constrained by
these identities to being just so many aspects, facets,
parts, roles, or signs of one and the same object.

For example, notice how the various identity concepts actually
functioned in the last example, where they had the opportunity
to show their behavior in something like their natural habitat.

The use of the teridentity concept in the case
of the "giver of a horse to a trainer of it" is
to stipulate that the thing appearing with respect
to its quality under the aspect of an absolute term,
a horse, and the thing appearing with respect to its
recalcitrance in the role of the correlate of a 2-adic
relative, a brute to be trained, and the thing appearing
with respect to its synthesis in the role of a correlate
of a 3-adic relative, a gift, are one and the same thing.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 40

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Figure 8 depicts the last of the three examples involving
the composition of 3-adic relatives with 2-adic relatives:

o-------------------------------------------------o
|                                                 |
|                                                 |
|        'l',__$__%    $'s'__*   *%w              |
|              o  o    o     o   oo               |
|               \  \  /       \ //                |
|                \  \/         @/                 |
|                 \ /\____ ____/                  |
|                  @      @                       |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 8.  Lover that is a Servant of a Woman

The hypergraph picture of the abstract composition is given in Figure 14.

o---------------------------------------------------------------------o
|                                                                     |
|                                L , S                                |
|                __________________@__________________                |
|               /                                     \               |
|              /       L_,              S              \              |
|             /         @               @               \             |
|            /         /|\             / \               \            |
|           /         / | \           /   \               \           |
|          /         /  |  \         /     \               \          |
|         /         /   |   \       /       \               \         |
|        /         /    |    \     /         \               \        |
|       o         o     o     o   o           o               o       |
|       X         X     X     Y   X           Y               Y       |
|    1,_#         #'l',_$_____%   $'s'________%               %1      |
|       o         o     o     o   o           o               o       |
|        \       /       \     \ /            |              /        |
|         \     /         \     \             |             /         |
|          \   /           \   / \            |            /          |
|           \ /             \ /   \___________|___________/           |
|            @               @                @                       |
|           !1!             !1!              !1!                      |
|                                                                     |
o---------------------------------------------------------------------o
Figure 14.  Anything that's a Lover of Anything and that's a Servant of It

This example illustrates the way that Peirce analyzes the logical conjunction,
we might even say the "parallel conjunction", of a couple of 2-adic relatives
in terms of the comma extension and the same style of composition that we saw
in the last example, that is, according to a pattern of anaphora that invokes
the teridentity relation.

If we lay out this analysis of conjunction on the spreadsheet model
of relational composition, the gist of it is the diagonal extension
of a 2-adic "loving" relation L c X x Y to the corresponding 3-adic
"loving and being" relation L_, c X x X x Y, which is then composed
in a specific way with a 2-adic "serving" relation S c X x Y, so as
to determine the 2-adic relation L,S c X x Y.  Table 15 schematizes
the associated constraints on tuples.

Table 15.  Conjunction Via Composition
o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o
|    L,   #    X    |    X    |    Y    |
o---------o---------o---------o---------o
|    S    #         |    X    |    Y    |
o---------o---------o---------o---------o
|  L , S  #    X    |         |    Y    |
o---------o---------o---------o---------o
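
The whole of Table 15 can likewise be rehearsed in Python.  The names below
are ad hoc and the pairs are hypothetical toy data:  diagonally extending L
and composing with S on the shared places returns exactly the pairs common
to L and S, which is the expected reading of the conjunction.

def diagonal_extension(L):
    """L,  =  { (x, x, y) : (x, y) in L }."""
    return {(x, x, y) for (x, y) in L}

def conjoin(L, S):
    """Compose the diagonal extension of L with S on the shared (X, Y) places."""
    return {(x, y) for (x, x2, y) in diagonal_extension(L)
                   for (x3, y3) in S
                   if x2 == x3 and y == y3}

# A toy instance, purely for illustration.
L = {('a', 'p'), ('b', 'p')}
S = {('a', 'p')}
print(conjoin(L, S))              # -> {('a', 'p')}
print(conjoin(L, S) == (L & S))   # the same as the plain intersection -> True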

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 41

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I return to where we were in unpacking the contents of CP 3.73.
Peirce remarks that the comma operator can be iterated at will:

| In point of fact, since a comma may be added in this way to any
| relative term, it may be added to one of these very relatives
| formed by a comma, and thus by the addition of two commas
| an absolute term becomes a relative of two correlates.
|
| So:
|
| m,,b,r
|
| interpreted like
|
| `g`'o'h
|
| means a man that is a rich individual and
| is a black that is that rich individual.
|
| But this has no other meaning than:
|
| m,b,r
|
| or a man that is a black that is rich.
|
| Thus we see that, after one comma is added, the
| addition of another does not change the meaning
| at all, so that whatever has one comma after it
| must be regarded as having an infinite number.
|
| C.S. Peirce, CP 3.73

Again, let us check whether this makes sense
on the stage of our small but dramatic model.

Let's say that Desdemona and Othello are rich,
and, among the persons of the play, only they.

With this premiss we obtain a sample of absolute terms
that is sufficiently ample to work through our example:

1    =   B +, C +, D +, E +, I +, J +, O

b    =   O

m    =   C +, I +, J +, O

r    =   D +, O

One application of the comma operator
yields the following 2-adic relatives:

1,   =   B:B +, C:C +, D:D +, E:E +, I:I +, J:J +, O:O

b,   =   O:O

m,   =   C:C +, I:I +, J:J +, O:O

r,   =   D:D +, O:O

Another application of the comma operator
generates the following 3-adic relatives:

1,,  =   B:B:B +, C:C:C +, D:D:D +, E:E:E +, I:I:I +, J:J:J +, O:O:O

b,,  =   O:O:O

m,,  =   C:C:C +, I:I:I +, J:J:J +, O:O:O

r,,  =   D:D:D +, O:O:O

Assuming the associativity of multiplication among 2-adic relatives,
we may compute the product m,b,r by a brute force method as follows:

m,b,r  =  (C:C +, I:I +, J:J +, O:O)(O:O)(D +, O)

       =  (C:C +, I:I +, J:J +, O:O)(O)

       =  O

This avers that a man that is a black that is rich is Othello,
which is true on the premisses of our universe of discourse.

The stock associations of `g`'o'h lead us to multiply out the
product m,,b,r along the following lines, where the trinomials
of the form (X:Y:Z)(Y:Z)(Z) are the only ones that produce any
non-null result, specifically, of the form (X:Y:Z)(Y:Z)(Z) = X.

m,,b,r  =  (C:C:C +, I:I:I +, J:J:J +, O:O:O)(O:O)(D +, O)

        =  (O:O:O)(O:O)(O)

        =  O

So we have that m,,b,r = m,b,r.
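
Those who prefer to let a machine do the bookkeeping can replay the
computation with a small Python sketch, rendering elementary relatives
as tuples and logical sums as sets.  The application rules coded below
are just the two cases used above, namely, (X:Y)(Y) = X for the first
product and (X:Y:Z)(Y:Z)(Z) = X for the second.

B, C, D, E, I, J, O = "BCDEIJO"                # the persons of the play

r        = {D, O}                              # rich
b_comma  = {(O, O)}                            # b,
m_comma  = {(x, x) for x in {C, I, J, O}}      # m,
m_2comma = {(x, x, x) for x in {C, I, J, O}}   # m,,

def apply2(R, A):
    """(X:Y)(Y) = X :  apply a 2-adic relative to an absolute term."""
    return {x for (x, y) in R if y in A}

def apply3(R, S, A):
    """(X:Y:Z)(Y:Z)(Z) = X :  apply a 3-adic relative to a 2-adic
    relative and an absolute term, as in the m,,b,r computation."""
    return {x for (x, y, z) in R for (s, t) in S if (y, z) == (s, t) and z in A}

print(apply2(m_comma, apply2(b_comma, r)))   # {'O'}  =  m,b,r
print(apply3(m_2comma, b_comma, r))          # {'O'}  =  m,,b,r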

In closing, observe that the teridentity relation has turned up again
in this context, as the second comma-ing of the universal term itself:

1,,  =  B:B:B +, C:C:C +, D:D:D +, E:E:E +, I:I:I +, J:J:J +, O:O:O.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 42

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The conception of multiplication we have adopted is that of
| the application of one relation to another.  So, a quaternion
| being the relation of one vector to another, the multiplication
| of quaternions is the application of one such relation to a second.
|
| Even ordinary numerical multiplication involves the same idea, for
| 2 x 3 is a pair of triplets, and 3 x 2 is a triplet of pairs, where
| "triplet of" and "pair of" are evidently relatives.
|
| If we have an equation of the form:
|
| xy  =  z
|
| and there are just as many x's per y as there are,
| 'per' things, things of the universe, then we have
| also the arithmetical equation:
|
| [x][y]  =  [z].
|
| For instance, if our universe is perfect men, and there
| are as many teeth to a Frenchman (perfect understood)
| as there are to any one of the universe, then:
|
| ['t'][f]  =  ['t'f]
|
| holds arithmetically.
|
| So if men are just as apt to be black as things in general:
|
| [m,][b]  =  [m,b]
|
| where the difference between [m] and [m,] must not be overlooked.
|
| It is to be observed that:
|
| [!1!]  =  `1`.
|
| Boole was the first to show this connection between logic and
| probabilities.  He was restricted, however, to absolute terms.
| I do not remember having seen any extension of probability to
| relatives, except the ordinary theory of 'expectation'.
|
| Our logical multiplication, then, satisfies the essential conditions
| of multiplication, has a unity, has a conception similar to that of
| admitted multiplications, and contains numerical multiplication as
| a case under it.
|
| C.S. Peirce, CP 3.76
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 43

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

We have reached in our reading of Peirce's text a suitable place to pause --
actually, it is more like a place to run as fast as we can along a parallel track --
where I can due quietus make of a few IOU's that I've used to pave my way.

The more pressing debts that come to mind are concerned with the matter
of Peirce's "number of" function, that maps a term t into a number [t],
and with my justification for calling a certain style of illustration
by the name of the "hypergraph" picture of relational composition.
As it happens, there is a thematic relation between these topics,
and so I can make my way forward by addressing them together.

At this point we have two good pictures of how to compute the
relational compositions of arbitrary 2-adic relations, namely,
the bigraph and the matrix representations, each of which has
its differential advantages in different types of situations.

But we do not have a comparable picture of how to compute the
richer variety of relational compositions that involve 3-adic
or any higher adicity relations.  As a matter of fact, we run
into a non-trivial classification problem simply to enumerate
the different types of compositions that arise in these cases.

Therefore, let us inaugurate a systematic study of relational composition,
general enough to explicate the "generative potency" of Peirce's 1870 LOR.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 44

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Let's bring together the various things that Peirce has said
about the "number of" function up to this point in the paper.

NOF 1.

| I propose to assign to all logical terms, numbers;
| to an absolute term, the number of individuals it denotes;
| to a relative term, the average number of things so related
| to one individual.
|
| Thus in a universe of perfect men ('men'),
| the number of "tooth of" would be 32.
|
| The number of a relative with two correlates would be the
| average number of things so related to a pair of individuals;
| and so on for relatives of higher numbers of correlates.
|
| I propose to denote the number of a logical term by
| enclosing the term in square brackets, thus ['t'].
|
| C.S. Peirce, CP 3.65

NOF 2.

| But not only do the significations of '=' and '<' here adopted fulfill all
| absolute requirements, but they have the supererogatory virtue of being very
| nearly the same as the common significations.  Equality is, in fact, nothing
| but the identity of two numbers;  numbers that are equal are those which are
| predicable of the same collections, just as terms that are identical are those
| which are predicable of the same classes.  So, to write 5 < 7 is to say that 5
| is part of 7, just as to write f < m is to say that Frenchmen are part of men.
| Indeed, if f < m, then the number of Frenchmen is less than the number of men,
| and if v = p, then the number of Vice-Presidents is equal to the number of
| Presidents of the Senate;  so that the numbers may always be substituted
| for the terms themselves, in case no signs of operation occur in the
| equations or inequalities.
|
| C.S. Peirce, CP 3.66

NOF 3.

| It is plain that both the regular non-invertible addition
| and the invertible addition satisfy the absolute conditions.
| But the notation has other recommendations.  The conception
| of 'taking together' involved in these processes is strongly
| analogous to that of summation, the sum of 2 and 5, for example,
| being the number of a collection which consists of a collection of
| two and a collection of five.  Any logical equation or inequality
| in which no operation but addition is involved may be converted
| into a numerical equation or inequality by substituting the
| numbers of the several terms for the terms themselves --
| provided all the terms summed are mutually exclusive.
|
| Addition being taken in this sense,
| 'nothing' is to be denoted by 'zero',
| for then:
|
| x +, 0  =  x
|
| whatever is denoted by x;  and this is the definition
| of 'zero'.  This interpretation is given by Boole, and
| is very neat, on account of the resemblance between the
| ordinary conception of 'zero' and that of nothing, and
| because we shall thus have
|
| [0]  =  0.
|
| C.S. Peirce, CP 3.67

NOF 4.

| The conception of multiplication we have adopted is
| that of the application of one relation to another.  ...
|
| Even ordinary numerical multiplication involves the same idea,
| for 2 x 3 is a pair of triplets, and 3 x 2 is a triplet of pairs,
| where "triplet of" and "pair of" are evidently relatives.
|
| If we have an equation of the form:
|
| xy  =  z
|
| and there are just as many x's per y as there are,
| 'per' things, things of the universe, then we have
| also the arithmetical equation:
|
| [x][y]  =  [z].
|
| For instance, if our universe is perfect men, and there
| are as many teeth to a Frenchman (perfect understood)
| as there are to any one of the universe, then:
|
| ['t'][f]  =  ['t'f]
|
| holds arithmetically.
|
| So if men are just as apt to be black as things in general:
|
| [m,][b]  =  [m,b]
|
| where the difference between [m] and [m,] must not be overlooked.
|
| It is to be observed that:
|
| [!1!]  =  `1`.
|
| Boole was the first to show this connection between logic and
| probabilities.  He was restricted, however, to absolute terms.
| I do not remember having seen any extension of probability to
| relatives, except the ordinary theory of 'expectation'.
|
| Our logical multiplication, then, satisfies the essential conditions
| of multiplication, has a unity, has a conception similar to that of
| admitted multiplications, and contains numerical multiplication as
| a case under it.
|
| C.S. Peirce, CP 3.76

Before I can discuss Peirce's "number of" function in greater detail
I will need to deal with an expositional difficulty that I have been
very carefully dancing around all this time, but that will no longer
abide its assigned place under the rug.

Functions have long been understood, from well before Peirce's time to ours,
as special cases of 2-adic relations, so the "number of" function itself is
already to be numbered among the types of 2-adic relatives that we've been
explicitly mentioning and implicitly using all this time.  But Peirce's way
of talking about a 2-adic relative term is to list the "relate" first and
the "correlate" second, a convention that goes over into functional terms
as making the functional value first and the functional antecedent second,
whereas almost anyone brought up in our present time frame has difficulty
thinking of a function any other way than as a set of ordered pairs where
the order in each pair lists the functional argument, or domain element,
first and the functional value, or codomain element, second.

It is possible to work all this out in a very nice way within a very general context
of flexible conventions, but not without introducing an order of anachronisms into
Peirce's presentation that I am presently trying to avoid as much as possible.
Thus, I will need to experiment with various sorts of compromise formations.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 45

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Having spent a fair amount of time in earnest reflection on the issue,
I cannot see a way to continue my interpretation of Peirce's 1870 LOR,
to master the distance between his conventions of presentation and my
present personal perspectives on relations, without introducing a few
interpretive anachronisms and other artifacts in the process, and the
only excuse that I can make for myself is that at least these will be
novel sorts of anachronisms and artifacts in comparison with the ones
that the reeder may alreedy have seen.  A poor excuse, but all I have.
The least that I can do, then, and I'm something of an expert on that,
is to exposit my personal interpretive apparatus on a separate thread,
where it will not distract too much from the intellectual canon, that
is to opine, the "thinking panpipe" that we find in Peirce's 1870 LOR.

Ripped from the pages of my dissertation, then, I will lay out
some samples of background material on "Relations In General",
as spied from a combinatorial point of view, that I hope will
serve in reeding Peirce's text, if we draw on it judiciously.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 46

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

The task before us now is to get very clear about the relationships
among relative terms, relations, and the special cases of relations
that are constituted by equivalence relations, functions, and so on.

I am optimistic that some of the tethering material that I spun
along the "Relations In General" (RIG) thread will help us to track
the equivalential and functional properties of special relations in
a way that will not weigh too heavy on the rather capricious lineal
embedding of syntax in 1-dimensional strings on 2-dimensional pages.
But I cannot see far enough ahead to foresee all the consequences of
trying this tack, and so I cannot help but be a bit experimental.

The first obstacle to get past is the order convention
that Peirce's orientation to relative terms causes him
to use for functions.  By way of making our discussion
concrete, and directing our attentions to an immediate
object example, let us say that we desire to represent
the "number of" function, that Peirce denotes by means
of square brackets, by means of a 2-adic relative term,
say 'v', where 'v'(t) = [t] = the number of the term t.

To set the 2-adic relative term 'v' within a suitable context of interpretation,
let us suppose that 'v' corresponds to a relation V c R x S, where R is the set
of real numbers and S is a suitable syntactic domain, here described as "terms".
Then the 2-adic relation V is evidently a function from S to R.  We might think
to use the plain letter "v" to denote this function, as v : S -> R, but I worry
this may be a chaos waiting to happen.  Also, I think that we should anticipate
the very great likelihood that we cannot always assign numbers to every term in
whatever syntactic domain S that we choose, so it is probably better to account
the 2-adic relation V as a partial function from S to R.  All things considered,
then, let me try out the following impedimentaria of strategies and compromises.

First, I will adapt the functional arrow notation so that it allows us
to detach the functional orientation from the order in which the names
of domains are written on the page.  Second, I will need to change the
notation for "pre-functions", or "partial functions", from one likely
confound to a slightly less likely confound.  This gives the scheme:

q : X -> Y means that q is functional at X.

q : X <- Y means that q is functional at Y.

q : X ~> Y means that q is pre-functional at X.

q : X <~ Y means that q is pre-functional at Y.

For now, I will pretend that v is a function in R of S, v : R <- S,
amounting to the functional alias of the 2-adic relation V c R x S,
and associated with the 2-adic relative term 'v' whose relate lies
in the set R of real numbers and whose correlate lies in the set S
of syntactic terms.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 47

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

It always helps me to draw lots of pictures of stuff,
so let's extract the somewhat overly compressed bits
of the "Relations In General" thread that we'll need
right away for the applications to Peirce's 1870 LOR,
and draw what icons we can within the frame of Ascii.

For the immediate present, we may start with 2-adic relations
and describe the customary species of relations and functions
in terms of their local and numerical incidence properties.

Let P c X x Y be an arbitrary 2-adic relation.
The following properties of P can be defined:

P is "total" at X     iff   P is (>=1)-regular at X.

P is "total" at Y     iff   P is (>=1)-regular at Y.

P is "tubular" at X   iff   P is (=<1)-regular at X.

P is "tubular" at Y   iff   P is (=<1)-regular at Y.

To illustrate these properties, let us fashion
a "generic enough" example of a 2-adic relation,
E c X x Y, where X = Y = {0, 1, ..., 8, 9}, and
where the bigraph picture of E looks like this:

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
     \  |\ /|\   \   \  |   |\
      \ | / | \   \   \ |   | \         E
       \|/ \|  \   \   \|   |  \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

If we scan along the X dimension we see that the "Y incidence degrees"
of the X nodes 0 through 9 are 0, 1, 2, 3, 1, 1, 1, 2, 0, 0, in order.

If we scan along the Y dimension we see that the "X incidence degrees"
of the Y nodes 0 through 9 are 0, 0, 3, 2, 1, 1, 2, 1, 1, 0, in order.

Thus, E is not total at either X or Y,
since there are nodes in both X and Y
having incidence degrees that equal 0.

Also, E is not tubular at either X or Y,
since there exist nodes in both X and Y
having incidence degrees greater than 1.

Clearly, then, E cannot qualify as a pre-function
or a function on either of its relational domains.
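
Here, as a minimal Python sketch (the sample relation P below is made up,
not the E pictured above), is one way to render the totality and tubularity
tests as checks on incidence degrees:

from collections import Counter

def degrees_at_X(P):
    """Y-incidence degree of each x :  how many pairs (x, y) lie in P."""
    return Counter(x for (x, y) in P)

def degrees_at_Y(P):
    """X-incidence degree of each y :  how many pairs (x, y) lie in P."""
    return Counter(y for (x, y) in P)

def is_total_at_X(P, X):
    return all(degrees_at_X(P)[x] >= 1 for x in X)

def is_tubular_at_X(P, X):
    return all(degrees_at_X(P)[x] <= 1 for x in X)

def is_total_at_Y(P, Y):
    return all(degrees_at_Y(P)[y] >= 1 for y in Y)

def is_tubular_at_Y(P, Y):
    return all(degrees_at_Y(P)[y] <= 1 for y in Y)

X = Y = set(range(10))
P = {(1, 2), (2, 2), (2, 3), (3, 3)}                 # hypothetical sample
print(is_total_at_X(P, X), is_tubular_at_X(P, X))    # False False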

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 48

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Let's continue to work our way through the rest of the first
set of definitions, making up appropriate examples as we go.

| Let P c X x Y be an arbitrary 2-adic relation.
| The following properties of P can be defined:
|
| P is "total" at X     iff   P is (>=1)-regular at X.
|
| P is "total" at Y     iff   P is (>=1)-regular at Y.
|
| P is "tubular" at X   iff   P is (=<1)-regular at X.
|
| P is "tubular" at Y   iff   P is (=<1)-regular at Y.

E_1 exemplifies the quality of "totality at X".

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
 \   \  |\ /|\   \   \  |   |\   \  |
  \   \ | / | \   \   \ |   | \   \ |   E_1
   \   \|/ \|  \   \   \|   |  \   \|
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

E_2 exemplifies the quality of "totality at Y".

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
|\   \  |\ /|\   \   \  |   |\   \
| \   \ | / | \   \   \ |   | \   \     E_2
|  \   \|/ \|  \   \   \|   |  \   \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

E_3 exemplifies the quality of "tubularity at X".

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
     \  |  /     \   \  |   |
      \ | /       \   \ |   |           E_3
       \|/         \   \|   |
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

E_4 exemplifies the quality of "tubularity at Y".

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
           /|\   \   \      |\
          / | \   \   \     | \         E_4
         /  |  \   \   \    |  \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

| If P c X x Y is tubular at X, then P is known as a "partial function"
| or a "pre-function" from X to Y, frequently signalized by renaming P
| with an alternative lower case name, say "p", and writing p : X ~> Y.
|
| Just by way of formalizing the definition:
| 
| P is a "pre-function" P : X ~> Y   iff   P is tubular at X.
|
| P is a "pre-function" P : X <~ Y   iff   P is tubular at Y.

So, E_3 is a pre-function e_3 : X ~> Y,
and E_4 is a pre-function e_4 : X <~ Y.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 49

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

We come now to the very special cases of 2-adic relations that are
known as functions.  It will serve a dual purpose on behalf of the
exposition if we take the class of functions as a source of object
examples to clarify the more abstruse concepts in the RIG material.

To begin, let's recall the definition of a local flag:

L_x@j  =  {<x_1, ..., x_j, ..., x_k> in L : x_j = x}.

In the case of a 2-adic relation L c X_1 x X_2 = X x Y,
we can reap the benefits of a radical simplification in
the definitions of the local flags.  Also in this case,
we tend to denote L_u@1 by "L_u@X" and L_v@2 by "L_v@Y".

In the light of these considerations, the local flags of
a 2-adic relation L c X x Y may be formulated as follows:

L_u@X  =  {<x, y> in L : x = u}

       =  the set of all ordered pairs in L incident with u in X.

L_v@Y  =  {<x, y> in L : y = v}

       =  the set of all ordered pairs in L incident with v in Y.

A sufficient illustration is supplied by the earlier example E.

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
     \  |\ /|\   \   \  |   |\
      \ | / | \   \   \ |   | \         E
       \|/ \|  \   \   \|   |  \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

The local flag E_3@X is displayed here:

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
           /|\
          / | \                         E_3@X
         /  |  \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

The local flag E_2@Y is displayed here:

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
     \  |  /
      \ | /                             E_2@Y
       \|/
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9
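
A Python rendering of the local flags is almost a transliteration of
the definitions.  The sample relation below is hypothetical, not the E
pictured above:

def local_flag_at_X(L, u):
    """L_u@X  =  {(x, y) in L : x = u}."""
    return {(x, y) for (x, y) in L if x == u}

def local_flag_at_Y(L, v):
    """L_v@Y  =  {(x, y) in L : y = v}."""
    return {(x, y) for (x, y) in L if y == v}

L = {(3, 2), (3, 3), (3, 4), (5, 2)}     # hypothetical sample relation
print(local_flag_at_X(L, 3))             # {(3, 2), (3, 3), (3, 4)}
print(local_flag_at_Y(L, 2))             # {(3, 2), (5, 2)}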

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 50

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Now let's re-examine the "numerical incidence properties" of relations,
concentrating on the definitions of the assorted regularity conditions.

| For instance, L is said to be "c-regular at j" if and only if
| the cardinality of the local flag L_x@j is c for all x in X_j,
| coded in symbols, if and only if |L_x@j| = c for all x in X_j.
|
| In a similar fashion, one can define the NIP's "<c-regular at j",
| ">c-regular at j", and so on.  For ease of reference, I record a
| few of these definitions here:
|
| L is  c-regular at j      iff   |L_x@j|   = c for all x in X_j.
|
| L is (<c)-regular at j    iff   |L_x@j|   < c for all x in X_j.
|
| L is (>c)-regular at j    iff   |L_x@j|   > c for all x in X_j.
|
| L is (=<c)-regular at j   iff   |L_x@j|  =< c for all x in X_j.
|
| L is (>=c)-regular at j   iff   |L_x@j|  >= c for all x in X_j.

Clearly, if any relation is (=<c)-regular on one
of its domains X_j and also (>=c)-regular on the
same domain, then it must be (=c)-regular on the
affected domain X_j, in effect, c-regular at j.

For example, let G = {r, s, t} and H = {1, ..., 9},
and consider the 2-adic relation F c G x H that is
bigraphed here:

    r           s           t
    o           o           o       G
   /|\         /|\         /|\
  / | \       / | \       / | \     F
 /  |  \     /  |  \     /  |  \
o   o   o   o   o   o   o   o   o   H
1   2   3   4   5   6   7   8   9

We observe that F is 3-regular at G and 1-regular at H.
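
Reading the fan-outs off the bigraph -- r with 1, 2, 3, then s with 4, 5, 6,
then t with 7, 8, 9 -- we can confirm the regularity claims with a small
Python sketch:

from collections import Counter

def is_c_regular_at(P, domain, c, slot):
    """|P_x@slot| = c for every x in the domain, with slot 0 or 1."""
    deg = Counter(t[slot] for t in P)
    return all(deg[x] == c for x in domain)

G = {"r", "s", "t"}
H = set(range(1, 10))
F = {("r", 1), ("r", 2), ("r", 3),
     ("s", 4), ("s", 5), ("s", 6),
     ("t", 7), ("t", 8), ("t", 9)}

print(is_c_regular_at(F, G, 3, 0))   # True :  F is 3-regular at G
print(is_c_regular_at(F, H, 1, 1))   # True :  F is 1-regular at H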

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 51

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Among the vast variety of conceivable regularities affecting 2-adic relations,
we pay special attention to the c-regularity conditions where c is equal to 1.

| Let P c X x Y be an arbitrary 2-adic relation.
| The following properties of P can be defined:
|
| P is "total" at X     iff   P is (>=1)-regular at X.
|
| P is "total" at Y     iff   P is (>=1)-regular at Y.
|
| P is "tubular" at X   iff   P is (=<1)-regular at X.
|
| P is "tubular" at Y   iff   P is (=<1)-regular at Y.

We have already looked at 2-adic relations that
separately exemplify each of these regularities.

Also, we introduced a few bits of additional terminology and
special-purpose notations for working with tubular relations:

| P is a "pre-function" P : X ~> Y   iff   P is tubular at X.
|
| P is a "pre-function" P : X <~ Y   iff   P is tubular at Y.

Thus, we arrive by way of this winding stair at the very special stamps
of 2-adic relations P c X x Y that are "total prefunctions" at X (or Y),
"total and tubular" at X (or Y), or "1-regular" at X (or Y), more often
celebrated as "functions" at X (or Y).

| If P is a pre-function P : X ~> Y that happens to be total at X, then P
| is known as a "function" from X to Y, typically indicated as P : X -> Y.
|
| To say that a relation P c X x Y is totally tubular at X is to say that
| it is 1-regular at X.  Thus, we may formalize the following definitions:
|
| P is a "function" p : X -> Y   iff   P is 1-regular at X.
|
| P is a "function" p : X <- Y   iff   P is 1-regular at Y.

For example, let X = Y = {0, ..., 9} and let F c X x Y be
the 2-adic relation that is depicted in the bigraph below:

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
 \ /       /|\   \      |   |\   \
  \       / | \   \     |   | \   \     F
 / \     /  |  \   \    |   |  \   \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

We observe that F is a function at Y,
and we record this fact in either of
the manners F : X <- Y or F : Y -> X.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 52

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

In the case of a 2-adic relation F c X x Y that has
the qualifications of a function f : X -> Y, there
are a number of further differentia that arise:

| f is "surjective"   iff   f is total at Y.
|
| f is "injective"    iff   f is tubular at Y.
|
| f is "bijective"    iff   f is 1-regular at Y.

For example, or more precisely, contra example,
the function f : X -> Y that is depicted below
is neither total at Y nor tubular at Y, and so
it cannot enjoy any of the properties of being
sur-, or in-, or bi-jective.

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
|    \  |  /     \   \  |   |    \ /
|     \ | /       \   \ |   |     \     f
|      \|/         \   \|   |    / \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

A cheap way of getting a surjective function out of any function
is to reset its codomain to its range.  For example, the range
of the function f above is Y' = {0, 2, 5, 6, 7, 8, 9}.  Thus,
if we form a new function g : X -> Y' that looks just like
f on the domain X but is assigned the codomain Y', then
g is surjective, and is described as mapping "onto" Y'.

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
|    \  |  /     \   \  |   |    \ /
|     \ | /       \   \ |   |     \     g
|      \|/         \   \|   |    / \
o       o           o   o   o   o   o   Y'
0       2           5   6   7   8   9

The function h : Y' -> Y is injective.

0       2           5   6   7   8   9
o       o           o   o   o   o   o   Y'
|       |            \ /    |    \ /
|       |             \     |     \     h
|       |            / \    |    / \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

The function m : X -> Y is bijective.

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
|   |   |    \ /     \ /    |    \ /
|   |   |     \       \     |     \     m
|   |   |    / \     / \    |    / \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9
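
The same incidence-degree style of checking extends to the sur-, in-,
bi-jective trichotomy.  Here is a minimal Python sketch, using a made-up
toy function rather than the f, g, h, m drawn above:

from collections import Counter

def is_surjective(f_pairs, Y):
    """Total at Y :  every y in Y has at least one preimage."""
    deg = Counter(y for (x, y) in f_pairs)
    return all(deg[y] >= 1 for y in Y)

def is_injective(f_pairs, Y):
    """Tubular at Y :  every y in Y has at most one preimage."""
    deg = Counter(y for (x, y) in f_pairs)
    return all(deg[y] <= 1 for y in Y)

def is_bijective(f_pairs, Y):
    return is_surjective(f_pairs, Y) and is_injective(f_pairs, Y)

Y = {0, 1, 2}
f = {(0, 1), (1, 1), (2, 2)}          # hypothetical toy function on {0, 1, 2}
print(is_surjective(f, Y), is_injective(f, Y), is_bijective(f, Y))
# False False False :  0 has no preimage and 1 has two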

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 53

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

The preceding exercises were intended to beef up our
functional literacy skills to the point where we can
read our functional alphabets backwards and forwards
and to ferret out the local functionalities that may
be immanent in relative terms no matter where they
locate themselves within the domains of relations.
I am hopeful that these skills will serve us in
good stead as we work to build a catwalk from
Peirce's platform to contemporary scenes on
the logic of relatives, and back again.

By way of extending a few very tentative plancks,
let us experiment with the following definitions:

1.  A relative term 'p' and the corresponding relation P c X x Y are both
    called "functional on relates" if and only if P is a function at X,
    in symbols, P : X -> Y.

2.  A relative term 'p' and the corresponding relation P c X x Y are both
    called "functional on correlates" if and only if P is function at Y,
    in symbols, P : X <- Y.

When a relation happens to be a function, it may be excusable
to use the same name for it in both applications, writing out
explicit type markers like P : X x Y, P : X -> Y, P : X <- Y,
as the case may be, when and if it serves to clarify matters.

From this current, perhaps transient, perspective, it appears that
our next task is to examine how the known properties of relations
are modified when an aspect of functionality is spied in the mix.

Let us then return to our various ways of looking at relational composition,
and see what changes and what stays the same when the relations in question
happen to be functions of various different kinds at some of their domains.

Here is one generic picture of relational composition,
cast in a style that hews pretty close to the line of
potentials inherent in Peirce's syntax of this period.

o-----------------------------------------------------------o
|                                                           |
|                           P o Q                           |
|                 ____________@____________                 |
|                /                         \                |
|               /      P             Q      \               |
|              /       @             @       \              |
|             /       / \           / \       \             |
|            /       /   \         /   \       \            |
|           o       o     o       o     o       o           |
|           X       X     Y       Y     Z       Z           |
|       1,__#       #'p'__$       $'q'__%       %1          |
|           o       o     o       o     o       o           |
|            \     /       \     /       \     /            |
|             \   /         \   /         \   /             |
|              \ /           \ /           \ /              |
|               @             @             @               |
|              !1!           !1!           !1!              |
|                                                           |
o-----------------------------------------------------------o
Figure 16.  Anything that is a 'p' of a 'q' of Anything

From this we extract the "hypergraph picture" of relational composition:

o-----------------------------------------------------------o
|                                                           |
|                 P         P o Q         Q                 |
|                 @           @           @                 |
|                / \         / \         / \                |
|               /   \       /   \       /   \               |
|              o     o     o     o     o     o              |
|              X     Y     X     Z     Y     Z              |
|              o     o     o     o     o     o              |
|               \     \   /       \   /     /               |
|                \     \ /         \ /     /                |
|                 \     /           \     /                 |
|                  \   / \         / \   /                  |
|                   \ /   \___ ___/   \ /                   |
|                    @        @        @                    |
|                   !1!      !1!      !1!                   |
|                                                           |
o-----------------------------------------------------------o
Figure 17.  Relational Composition P o Q

All of the relevant information of these Figures can be compressed
into the form of a "spreadsheet", or constraint satisfaction table:

Table 18.  Relational Composition P o Q
o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o
|    P    #    X    |    Y    |         |
o---------o---------o---------o---------o
|    Q    #         |    Y    |    Z    |
o---------o---------o---------o---------o
|  P o Q  #    X    |         |    Z    |
o---------o---------o---------o---------o
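
As a computational aside, the spreadsheet pattern of Table 18 translates
directly into a one-line join on ordered pairs.  A Python sketch with
made-up sample relations:

def compose(P, Q):
    """P o Q  =  {(x, z) : (x, y) in P and (y, z) in Q for some middle y},
    which is just the Table 18 constraint pattern read off row by row."""
    return {(x, z) for (x, y) in P for (u, z) in Q if y == u}

P = {(1, "a"), (2, "a"), (2, "b")}    # hypothetical
Q = {("a", "x"), ("b", "y")}          # hypothetical
print(compose(P, Q))                  # {(1, 'x'), (2, 'x'), (2, 'y')}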

So the following presents itself as a reasonable plan of study:
Let's see how much easy mileage we can get in our exploration
of functions by adopting the above templates as a paradigm.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 54

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Since functions are special cases of 2-adic relations, and since the space
of 2-adic relations is closed under relational composition, in other words,
the composition of a couple of 2-adic relations is again a 2-adic relation,
we know that the relational composition of a couple of functions has to be
a 2-adic relation.  If it is also necessarily a function, then we would be
justified in speaking of "functional composition", and also of saying that
the space of functions is closed under this functional form of composition.

Just for novelty's sake, let's try to prove this
for relations that are functional on correlates.

So our task is this:  Given a couple of 2-adic relations,
P c X x Y and Q c Y x Z, that are functional on correlates,
P : X <- Y and Q : Y <- Z, we need to determine whether the
relational composition P o Q c X x Z is also P o Q : X <- Z,
or not.

It always helps to begin by recalling the pertinent definitions.

For a 2-adic relation L c X x Y, we have:

L is a "function" L : X <- Y  iff  L is 1-regular at Y.

As for the definition of relational composition,
it is enough to consider the coefficient of the
composite on an arbitrary ordered pair like i:j.

(P o Q)_ij  =  Sum_k (P_ik Q_kj).

So let us begin.

P : X <- Y, or P being 1-regular at Y, means that there
is exactly one ordered pair i:k in P for each k in Y.

Q : Y <- Z, or Q being 1-regular at Z, means that there
is exactly one ordered pair k:j in Q for each j in Z.

Thus, there is exactly one ordered pair i:j in P o Q
for each j in Z, which means that P o Q is 1-regular
at Z, and so we have the function P o Q : X <- Z.

And we are done.

But proofs after midnight must be checked the next day.
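
And since proofs after midnight really should be checked the next day,
here is a small Python property test -- a sketch over randomly generated
toy relations, nothing more -- that builds correlate-functional P and Q
and confirms that P o Q comes out 1-regular at Z every time:

from random import choice, seed

def compose(P, Q):
    return {(x, z) for (x, y) in P for (u, z) in Q if y == u}

def one_regular_at_2nd(R, dom2):
    """R is 1-regular at its second domain :  exactly one pair per element."""
    return all(sum(1 for (a, b) in R if b == v) == 1 for v in dom2)

seed(0)
X, Y, Z = range(3), range(4), range(5)
for _ in range(100):
    P = {(choice(X), y) for y in Y}       # P : X <- Y by construction
    Q = {(choice(Y), z) for z in Z}       # Q : Y <- Z by construction
    assert one_regular_at_2nd(P, Y) and one_regular_at_2nd(Q, Z)
    assert one_regular_at_2nd(compose(P, Q), Z)
print("P o Q is 1-regular at Z in every trial")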

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 55

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

As we make our way toward the foothills of Peirce's 1870 LOR, there
is one piece of equipment that we dare not leave the plains without --
for there is little hope that "l'or dans les montagnes là" will lie
among our prospects without the ready use of its leverage and lifts --
and that is a facility with the utilities that are variously called
"arrows", "morphisms", "homomorphisms", "structure-preserving maps",
and several other names, in accord with the altitude of abstraction
at which one happens to be working, at the given moment in question.

As a middle but not too beaten track, I will lay out the definition
of a morphism in the forms that we will need right off, in a slight
excess of formality at first, but quickly bringing the bird home to
roost on more familiar perches.

Let's say that we have three functions J, K, L
that have the following types and that satisfy
the equation that follows:

| J : X <- Y
|
| K : X <- X x X
|
| L : Y <- Y x Y
|
| J(L(u, v))  =  K(Ju, Jv)

Our sagittarian leitmotif can be rubricized in the following slogan:

>->   The image of the ligature is the compound of the images.   <-<

Where J is the "image", K is the "compound", and L is the "ligature".

Figure 19 presents us with a picture of the situation in question.

o-----------------------------------------------------------o
|                                                           |
|                       K           L                       |
|                       @           @                       |
|                      /|\         /|\                      |
|                     / | \       / | \                     |
|                    v  |  \     v  |  \                    |
|                   o   o   o   o   o   o                   |
|                   X   X   X   Y   Y   Y                   |
|                   o   o   o   o   o   o                   |
|                    ^   ^   ^ /   /   /                    |
|                     \   \   \   /   /                     |
|                      \   \ / \ /   /                      |
|                       \   \   \   /                       |
|                        \ / \ / \ /                        |
|                         @   @   @                         |
|                         J   J   J                         |
|                                                           |
o-----------------------------------------------------------o
Figure 19.  Structure Preserving Transformation J : K <- L

Here, I have used arrowheads to indicate the relational domains
at which each of the relations J, K, L happens to be functional.

Table 20 gives the constraint matrix version of the same thing.

Table 20.  Arrow:  J(L(u, v)) = K(Ju, Jv)
o---------o---------o---------o---------o
|         #    J    |    J    |    J    |
o=========o=========o=========o=========o
|    K    #    X    |    X    |    X    |
o---------o---------o---------o---------o
|    L    #    Y    |    Y    |    Y    |
o---------o---------o---------o---------o

One way to read this Table is in terms of the informational redundancies
that it schematizes.  In particular, it can be read to say that when one
satisfies the constraint in the L row, along with all of the constraints
in the J columns, then the constraint in the K row is automatically true.
That is one way of understanding the equation:  J(L(u, v)) = K(Ju, Jv).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 56

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

First, a correction.  Ignore for now the
gloss that I gave in regard to Figure 19:

| Here, I have used arrowheads to indicate the relational domains
| at which each of the relations J, K, L happens to be functional.

It is more like the feathers of the arrows that serve to mark the
relational domains at which the relations J, K, L are functional,
but it would take yet another construction to make this precise,
as the feathers are not uniquely appointed but many splintered.

Now, as promised, let's look at a more homely example of a morphism,
say, any one of the mappings J : R -> R (roughly speaking) that are
commonly known as "logarithm functions", where you get to pick your
favorite base.  In this case, K(r, s) = r + s and L(u, v) = u . v,
and the defining formula J(L(u, v)) = K(Ju, Jv) comes out looking
like J(u . v) = J(u) + J(v), writing a dot (.) and a plus sign (+)
for the ordinary 2-ary operations of arithmetical multiplication
and arithmetical summation, respectively.

o-----------------------------------------------------------o
|                                                           |
|                      {+}         {.}                      |
|                       @           @                       |
|                      /|\         /|\                      |
|                     / | \       / | \                     |
|                    v  |  \     v  |  \                    |
|                   o   o   o   o   o   o                   |
|                   X   X   X   Y   Y   Y                   |
|                   o   o   o   o   o   o                   |
|                    ^   ^   ^ /   /   /                    |
|                     \   \   \   /   /                     |
|                      \   \ / \ /   /                      |
|                       \   \   \   /                       |
|                        \ / \ / \ /                        |
|                         @   @   @                         |
|                         J   J   J                         |
|                                                           |
o-----------------------------------------------------------o
Figure 21.  Logarithm Arrow J : {+} <- {.}

Thus, where the "image" J is the logarithm map,
the "compound" K is the numerical sum, and the
the "ligature" L is the numerical product, one
obtains the immemorial mnemonic motto:

| The image of the product is the sum of the images.
|
| J(u . v)  =  J(u) + J(v)
|
| J(L(u, v))  =  K(Ju, Jv)
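
Anyone who wants to see the motto in action can check it numerically.
Here is a minimal Python sketch, using the natural logarithm for the
image J, though any base would serve as well:

import math

def J(x):            # image :  the logarithm map
    return math.log(x)

def K(r, s):         # compound :  numerical sum
    return r + s

def L(u, v):         # ligature :  numerical product
    return u * v

for (u, v) in [(2.0, 3.0), (0.5, 8.0), (7.0, 7.0)]:
    assert math.isclose(J(L(u, v)), K(J(u), J(v)))
print("J(L(u, v)) = K(Ju, Jv) holds on the sample points")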

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 57

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I'm going to elaborate a little further on the subject
of arrows, morphisms, or structure-preserving maps, as
a modest amount of extra work at this point will repay
ample dividends when it comes time to revisit Peirce's
"number of" function on logical terms.

The "structure" that is being preserved by a structure-preserving map
is just the structure that we all know and love as a 3-adic relation.
Very typically, it will be the type of 3-adic relation that defines
the type of 2-ary operation that obeys the rules of a mathematical
structure that is known as a "group", that is, a structure that
satisfies the axioms for closure, associativity, identities,
and inverses.

For example, in the previous case of the logarithm map J, we have the data:

| J : R <- R (properly restricted)
|
| K : R <- R x R, where K(r, s) = r + s
|
| L : R <- R x R, where L(u, v) = u . v

Real number addition and real number multiplication (suitably restricted)
are examples of group operations.  If we write the sign of each operation
in braces as a name for the 3-adic relation that constitutes or defines
the corresponding group, then we have the following set-up:

| J : {+} <- {.}
|
| {+} c R x R x R
|
| {.} c R x R x R

In many cases, one finds that both groups are written with the same
sign of operation, typically ".", "+", "*", or simple concatenation,
but they remain in general distinct whether considered as operations
or as relations, no matter what signs of operation are used.  In such
a setting, our chiasmatic theme may run a bit like these two variants:

| The image of the sum is the sum of the images.
|
| The image of the product is the product of the images.

Figure 22 presents a generic picture for groups G and H.

o-----------------------------------------------------------o
|                                                           |
|                       G           H                       |
|                       @           @                       |
|                      /|\         /|\                      |
|                     / | \       / | \                     |
|                    v  |  \     v  |  \                    |
|                   o   o   o   o   o   o                   |
|                   X   X   X   Y   Y   Y                   |
|                   o   o   o   o   o   o                   |
|                    ^   ^   ^ /   /   /                    |
|                     \   \   \   /   /                     |
|                      \   \ / \ /   /                      |
|                       \   \   \   /                       |
|                        \ / \ / \ /                        |
|                         @   @   @                         |
|                         J   J   J                         |
|                                                           |
o-----------------------------------------------------------o
Figure 22.  Group Homomorphism J : G <- H

In a setting where both groups are written with a plus sign,
perhaps even constituting the very same group, the defining
formula of a morphism, J(L(u, v)) = K(Ju, Jv), takes on the
shape J(u + v) = Ju + Jv, which looks very analogous to the
distributive multiplication of a sum (u + v) by a factor J.
Hence another common name for a morphism:  a "linear" map.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 58

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I think that we have enough material on morphisms now
to go back and cast a more studied eye on what Peirce
is doing with that "number of" function, the one that
we apply to a logical term 't', absolute or relative
of any number of correlates, by writing it in square
brackets, as ['t'].  It is frequently convenient to
have a prefix notation for this function, and since
Peirce reserves 'n' to signify 'not', I will try to
use 'v', personally thinking of it as a Greek 'nu',
which stands for frequency in physics, and which
kind of makes sense if we think of frequency as
it is habitually understood in statistics.  End of mnemonics.

My plan will be nothing less plodding than to work through
all of the principal statements that Peirce has made about
the "number of" function up to our present stopping place
in the paper, namely, those that I collected once before
and placed at this location:

http://suo.ieee.org/ontology/msg04488.html

NOF 1.

| I propose to assign to all logical terms, numbers;
| to an absolute term, the number of individuals it denotes;
| to a relative term, the average number of things so related
| to one individual.
|
| Thus in a universe of perfect men ('men'),
| the number of "tooth of" would be 32.
|
| The number of a relative with two correlates would be the
| average number of things so related to a pair of individuals;
| and so on for relatives of higher numbers of correlates.
|
| I propose to denote the number of a logical term by
| enclosing the term in square brackets, thus ['t'].
|
| C.S. Peirce, CP 3.65

We may formalize the role of the "number of" function by assigning it
a local habitation and a name 'v' : S -> R, where S is a suitable set
of signs, called the "syntactic domain", that is ample enough to hold
all of the terms that we might wish to number in a given discussion,
and where R is the real number domain.

Transcribing Peirce's example, we may let m = "man" and 't' = "tooth of ---".
Then 'v'('t') = ['t'] = ['t'm]/[m], that is to say, in a universe of perfect
human dentition, the number of the relative term "tooth of ---" is equal to
the number of teeth of humans divided by the number of humans, that is, 32.

The 2-adic relative term 't' determines a 2-adic relation T c U x V,
where U and V are two universes of discourse, possibly the same one,
that hold among other things all of the teeth and all of the people
that happen to be under discussion, respectively.

A rough indication of the bigraph for T
might be drawn as follows, where I have
tried to sketch in just the toothy part
of U and the peoply part of V.

t_1     t_32  t_33    t_64  t_65    t_96  ...     ...
 o  ...  o     o  ...  o     o  ...  o     o  ...  o     U
  \  |  /       \  |  /       \  |  /       \  |  /
   \ | /         \ | /         \ | /         \ | /       T
    \|/           \|/           \|/           \|/
     o             o             o             o         V
    m_1           m_2           m_3           ...

Notice that the "number of" function 'v' : S -> R
needs the data that is represented by this entire
bigraph for T in order to compute the value ['t'].

Finally, one observes that this component of T is a function
in the direction T : U -> V, since we are counting only those
teeth that ideally occupy one and only one mouth of a creature.
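
As a cross-check on the arithmetic, here is a Python sketch of the
"number of" computation for the relative term, run on a made-up miniature
universe of three perfectly dentate people:

def number_of_relative(T, V):
    """['t']  =  ['t'm]/[m] :  the number of tooth-person pairs
    divided by the number of people, that is, the average."""
    return len(T) / len(V)

V = {f"m_{i}" for i in range(1, 4)}                       # three people
T = {(f"t_{32*(i - 1) + k}", f"m_{i}")                    # 32 teeth apiece
     for i in range(1, 4) for k in range(1, 33)}
print(number_of_relative(T, V))                           # 32.0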

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 59

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I think that the reader is beginning to get an inkling of the crucial importance of
the "number of" map in Peirce's way of looking at logic, for it's one of the plancks
in the bridge from logic to the theories of probability, statistics, and information,
in which logic forms but a limiting case at one scenic turnout on the expanding vista.
It is, as a matter of necessity and a matter of fact, practically speaking, at any rate,
one way that Peirce forges a link between the "eternal", logical, or rational realm and
the "secular", empirical, or real domain.

With that little bit of encouragement and exhortation,
let us return to the nitty gritty details of the text.

NOF 2.

| But not only do the significations of '=' and '<' here adopted fulfill all
| absolute requirements, but they have the supererogatory virtue of being very
| nearly the same as the common significations.  Equality is, in fact, nothing
| but the identity of two numbers;  numbers that are equal are those which are
| predicable of the same collections, just as terms that are identical are those
| which are predicable of the same classes.  So, to write 5 < 7 is to say that 5
| is part of 7, just as to write f < m is to say that Frenchmen are part of men.
| Indeed, if f < m, then the number of Frenchmen is less than the number of men,
| and if v = p, then the number of Vice-Presidents is equal to the number of
| Presidents of the Senate;  so that the numbers may always be substituted
| for the terms themselves, in case no signs of operation occur in the
| equations or inequalities.
|
| C.S. Peirce, CP 3.66

Peirce is here remarking on the principle that the
measure 'v' on terms "preserves" or "respects" the
prevailing implication, inclusion, or subsumption
relations that impose an ordering on those terms.

In these initiatory passages of the text, Peirce is using a single symbol "<"
to denote the usual linear ordering on numbers, but also what amounts to the
implication ordering on logical terms and the inclusion ordering on classes.
Later, of course, he will introduce distinctive symbols for logical orders.

Now, the links among terms, sets, and numbers can be pursued in all directions,
and Peirce has already indicated in an earlier paper how he would "construct"
the integers from sets, that is, from the aggregate denotations of terms.

We will get back to that at another time.

In the immediate example, we have this sort of statement:

"if f < m, then the number of Frenchmen is less than the number of men"

In symbolic form, this would be written:

f < m  =>  [f] < [m]

Here, the "<" on the left is a logical ordering on syntactic terms
while the "<" on the right is an arithmetic ordering on real numbers.

The type of principle that comes up here is usually discussed
under the question of whether a map between two ordered sets
is "order-preserving" or not.  The general type of question
may be formalized in the following way.

Let X_1 be a set with an ordering denoted by "<_1".
Let X_2 be a set with an ordering denoted by "<_2".

What makes an ordering what it is will commonly be
a set of axioms that defines the properties of the
order relation in question.  Since one frequently
has occasion to view the same set in the light of
several different order relations, one will often
resort to explicit forms like (X, <_1), (X, <_2),
and so on, to invoke a set with a given ordering.

A map F : (X_1, <_1) -> (X_2, <_2) is "order-preserving"
if and only if a statement of a particular form holds
for all x and y in (X_1, <_1), specifically, this:

x <_1 y  =>  Fx <_2 Fy

The action of the "number of" map 'v' : (S, <_1) -> (R, <_2)
has just this character, as exemplified by its application to
the case where x = f = "frenchman" and y = m = "man", like so:

| f < m  =>   [f] < [m]
|
| f < m  =>  'v'f < 'v'm

Here, to be more exacting, we may interpret the "<" on the left
as "proper subsumption", that is, excluding the equality case,
while we read the "<" on the right as the usual "less than".
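
Rendering absolute terms by their extensions as finite sets, with proper
subsumption read as proper inclusion and 'v' read as cardinality, the
order-preserving property is easy to test.  A Python sketch on made-up data:

def subsumed(x, y):
    """Proper subsumption of absolute terms, read extensionally."""
    return x < y                      # proper subset of extensions

def v(x):
    """The 'number of' measure on an absolute term :  [x] = |x|."""
    return len(x)

f = {"Pierre", "Marie"}                       # hypothetical Frenchmen
m = {"Pierre", "Marie", "Otto", "Jane"}       # hypothetical men
assert subsumed(f, m) and v(f) < v(m)
print(v(f), "<", v(m))                        # 2 < 4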

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 60

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

There is a comment that I ought to make on the concept of
a "structure preserving map", including as a special case
the idea of an "order-preserving map".  It seems to be a
peculiarity of mathematical usage in general -- at least,
I don't think it's just me -- that "preserving structure"
always means "preserving 'some', not of necessity 'all',
of the structure in question".  People sometimes express
this by speaking of "structure preservation in measure",
the implication being that any property that is amenable
to being qualified in manner is potentially amenable to
being quantified in degree, perhaps in such a way as to
answer questions like "How structure-preserving is it?".

Let's see how this remark applies to the order-preserving property of
the "number of" mapping 'v' : S -> R.  For any pair of absolute terms
x and y in the syntactic domain S, we have the following implications,
where "-<" denotes the logical subsumption relation on terms and "=<"
is the "less than or equal to" relation on the real number domain R.

x -< y  =>  'v'x =< 'v'y

Equivalently:

x -< y  =>  [x] =< [y]

It is easy to see that nowhere near all of the distinctions that make up
the structure of the ordering on the left hand side will be preserved as
one passes to the right hand side of these implication statements, but
that is not required in order to call the map 'v' "order-preserving",
or what is also known as an "order morphism".

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 61

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Up to this point in the LOR of 1870, Peirce has introduced the
"number of" measure on logical terms and discussed the extent
to which this measure, 'v' : S -> R such that 'v' : s ~> [s],
exhibits a couple of important measure-theoretic principles:

1.  The "number of" map exhibits a certain type of "uniformity property",
    whereby the value of the measure on a uniformly qualified population
    is in fact actualized by each member of the population.

2.  The "number of" map satisfies an "order morphism principle", whereby
    the illative partial ordering of logical terms is reflected up to a
    partial extent by the arithmetical linear ordering of their measures.

Peirce next takes up the action of the "number of" map on the two types of,
loosely speaking, "additive" operations that we normally consider in logic.

NOF 3.

| It is plain that both the regular non-invertible addition and the
| invertible addition satisfy the absolute conditions.  (CP 3.67).

The "regular non-invertible addition" is signified by "+,",
corresponding to what we'd call the inclusive disjunction
of logical terms or the union of their extensions as sets.

The "invertible addition" is signified in algebra by "+",
corresponding to what we'd call the exclusive disjunction
of logical terms or the symmetric difference of their sets,
ignoring many details and nuances that are often important,
of course.

| But the notation has other recommendations.  The conception of 'taking together'
| involved in these processes is strongly analogous to that of summation, the sum
| of 2 and 5, for example, being the number of a collection which consists of a
| collection of two and a collection of five.  (CP 3.67).

A full interpretation of this remark will require us to pick up the precise
technical sense in which Peirce is using the word "collection", and that will
take us back to his logical reconstruction of certain aspects of number theory,
all of which I am putting off to another time, but it is still possible to get
a rough sense of what he's saying relative to the present frame of discussion.

The "number of" map 'v' : S -> R evidently induces
some sort of morphism with respect to logical sums.
If this were straightforwardly true, we could write:

? 'v'(x +, y)  =  'v'x + 'v'y
?
? Equivalently:
?
? [x +, y]  =  [x] + [y]

Of course, things are just not that simple in the case
of inclusive disjunction and set-theoretic unions, so
we'd "probably" invent a word like "sub-additive" to
describe the principle that does hold here, namely:

| 'v'(x +, y)  =<  'v'x + 'v'y
|
| Equivalently:
|
| [x +, y]  =<  [x] + [y]

This is why Peirce trims his discussion of this point with the following hedge:

| Any logical equation or inequality in which no operation but addition
| is involved may be converted into a numerical equation or inequality by
| substituting the numbers of the several terms for the terms themselves --
| provided all the terms summed are mutually exclusive.  (CP 3.67).

Finally, a morphism with respect to addition,
even a contingently qualified one, must do the
right stuff on behalf of the additive identity:

| Addition being taken in this sense,
|'nothing' is to be denoted by 'zero',
| for then:
|
| x +, 0  =  x
|
| whatever is denoted by x;  and this is the definition
| of 'zero'.  This interpretation is given by Boole, and
| is very neat, on account of the resemblance between the
| ordinary conception of 'zero' and that of nothing, and
| because we shall thus have
|
| [0]  =  0.
|
| C.S. Peirce, CP 3.67

With respect to the nullity 0 in S and the number 0 in R, we have:

'v'0  =  [0]  =  0.

In sum, therefore, it also serves that only preserves
a due respect for the function of a vacuum in nature.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 62

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

We arrive at the last, for the time being, of
Peirce's statements about the "number of" map.

NOF 4.

| The conception of multiplication we have adopted is
| that of the application of one relation to another.  ...
|
| Even ordinary numerical multiplication involves the same idea,
| for 2 x 3 is a pair of triplets, and 3 x 2 is a triplet of pairs,
| where "triplet of" and "pair of" are evidently relatives.
|
| If we have an equation of the form:
|
| xy  =  z
|
| and there are just as many x's per y as there are
|'per' things, things of the universe, then we have
| also the arithmetical equation:
|
| [x][y]  =  [z].
|
| C.S. Peirce, CP 3.76

Peirce is here observing what we might dub a "contingent morphism"
or a "skeptraphotic arrow", if you will.  Provided that a certain
condition, to be named and, what is more hopeful, to be clarified
in short order, happens to be satisfied, we would find it holding
that the "number of" map 'v' : S -> R such that 'v's = [s] serves
to preserve the multiplication of relative terms, that is as much
to say, the composition of relations, in the form:  [xy] = [x][y].

So let us try to uncross Peirce's manifestly chiasmatic encryption
of the condition that is called on in support of this preservation.

Proviso for [xy] = [x][y] --

| there are just as many x's per y
| as there are 'per' things<,>
| things of the universe ...

I have placed angle brackets around
a comma that CP shows but CE omits,
not that it helps much either way.
So let us resort to the example:

| For instance, if our universe is perfect men, and there
| are as many teeth to a Frenchman (perfect understood)
| as there are to any one of the universe, then:
|
| ['t'][f]  =  ['t'f]
|
| holds arithmetically.  (CP 3.76).

Now that is something that we can sink our teeth into,
and trace the bigraph representation of the situation.
In order to do this, it will help to recall our first
examination of the "tooth of" relation, and to adjust
the picture that we sketched of it on that occasion.

Transcribing Peirce's example, we may let m = "man" and 't' = "tooth of ---".
Then 'v'('t') = ['t'] = ['t'm]/[m], that is to say, in a universe of perfect
human dentition, the number of the relative term "tooth of ---" is equal to
the number of teeth of humans divided by the number of humans, that is, 32.

The 2-adic relative term 't' determines a 2-adic relation T c U x V,
where U and V are two universes of discourse, possibly the same one,
that hold among other things all of the teeth and all of the people
that happen to be under discussion, respectively.  To make the case
as simple as we can and still cover the point, let's say that there
are just four people in our initial universe of discourse, and that
just two of them are French.  The bigraphic composition below shows
all of the pertinent facts of the case.

T_1     T_32  T_33    T_64  T_65    T_96  T_97    T_128
 o  ...  o     o  ...  o     o  ...  o     o  ...  o      U
  \  |  /       \  |  /       \  |  /       \  |  /
   \ | /         \ | /         \ | /         \ | /       't'
    \|/           \|/           \|/           \|/
     o             o             o             o          V = m = 1
                   |                           |
                   |                           |         'f'
                   |                           |
     o             o             o             o          V = m = 1
     J             K             L             M

Here, the order of relational composition flows up the page.
For convenience, the absolute term f = "frenchman" has been
converted by using the comma functor to give the idempotent
representation 'f' = f, = "frenchman that is ---", and thus
it can be taken as a selective from the universe of mankind.

By way of a legend for the figure, we have the following data:

| m   =  J +, K +, L +, M  =  1
|
| f   =  K +, M
|
|'f'  =  K:K +, M:M
|
|'t'  =  (T_001 +, ... +, T_032):J  +,
|        (T_033 +, ... +, T_064):K  +,
|        (T_065 +, ... +, T_096):L  +,
|        (T_097 +, ... +, T_128):M

Now let's see if we can use this picture
to make sense of the following statement:

| For instance, if our universe is perfect men, and there
| are as many teeth to a Frenchman (perfect understood)
| as there are to any one of the universe, then:
|
| ['t'][f]  =  ['t'f]
|
| holds arithmetically.  (CP 3.76).

In the lingua franca of statistics, Peirce is saying this:
That if the population of Frenchmen is a "fair sample" of
the general population with regard to dentition, then the
morphic equation ['t'f] = ['t'][f], whose transpose gives
['t'] = ['t'f]/[f], is every bite as true as the defining
equation in this circumstance, namely, ['t'] = ['t'm]/[m].
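For those who prefer to count the teeth themselves, here is a minimal
sketch in Python of the toy universe in the figure above -- 4 perfect
men, 32 teeth apiece, 2 of them French -- checking that ['t'][f] = ['t'f]
holds exactly when the Frenchmen are a fair sample with regard to
dentition.  The code is my own contrivance, of course, not Peirce's.

# Toy check of ['t'f] = ['t'][f] under the fair sampling condition.
# Universe: 4 perfect men J, K, L, M, each with 32 teeth; K and M are French.

teeth = {"J": 32, "K": 32, "L": 32, "M": 32}   # teeth per man (toy data)
men = set(teeth)                               # universe of discourse
french = {"K", "M"}                            # extension of f

n_t  = sum(teeth.values()) / len(men)          # ['t']  = average teeth per man = 32
n_f  = len(french)                             # [f]    = 2
n_tf = sum(teeth[x] for x in french)           # ['t'f] = total teeth of Frenchmen = 64

print(n_t * n_f == n_tf)    # True: ['t'][f] = ['t'f], Frenchmen being a fair sample

# Spoil the fair sampling condition and the equation fails:
teeth["K"] = 30
n_t  = sum(teeth.values()) / len(men)
n_tf = sum(teeth[x] for x in french)
print(n_t * n_f == n_tf)    # False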

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 63

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

One more example and one more general observation, and then we will
be all caught up with our homework on Peirce's "number of" function.

| So if men are just as apt to be black as things in general:
|
| [m,][b]  =  [m,b]
|
| where the difference between [m] and [m,] must not be overlooked.
|
| C.S. Peirce, CP 3.76

The protasis, "men are just as apt to be black as things in general",
is elliptic in structure, and presents us with a potential ambiguity.
If we had no further clue to its meaning, it might be read as either:

1.  Men are just as apt to be black as things in general are apt to be black.

2.  Men are just as apt to be black as men are apt to be things in general.

The second interpretation, if grammatical, is pointless to state,
since it equates a proper contingency with an absolute certainty.

So I think it is safe to assume this paraphrase of what Peirce intends:

3.  Men are just as likely to be black as things in general are likely to be black.

Stated in terms of the conditional probability:

4.  P(b|m)  =  P(b)

From the definition of conditional probability:

5.  P(b|m)  =  P(b m)/P(m)

Equivalently:

6.  P(b m)  =  P(b|m)P(m)

Thus we may derive the equivalent statement:

7.  P(b m)  =  P(b|m)P(m)  =  P(b)P(m)

And this, of course, is the definition of independent events, as
applied to the event of being Black and the event of being a Man.

It seems like a likely guess, then, that this is the content of Peirce's
statement about frequencies, [m,b] = [m,][b], in this case normalized to
produce the equivalent statement about probabilities:  P(m b) = P(m)P(b).

Let's see if this checks out.

Let n be the number of things in general, in Peirce's lingo, n = [1].
On the assumption that m and b are associated with independent events,
we get [m,b] = P(m b)n = P(m)P(b)n = P(m)[b] = [m,][b], so we have to
interpret [m,] = "the average number of men per things in general" as
P(m) = the probability of a thing in general being a man.  Seems okay.

Then again, it 'is' almost midnight ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 64

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Let's look at that last example from a different angle.

| So if men are just as apt to be black as things in general:
|
| [m,][b]  =  [m,b]
|
| where the difference between [m] and [m,] must not be overlooked.
|
| C.S. Peirce, CP 3.76

In different lights the formula [m,b] = [m,][b] presents itself
as an "aimed arrow", "fair sample", or "independence" condition.

The example apparently assumes a universe of "things in general",
encompassing among other things the denotations of the absolute
terms m = "man" and b = "black".  That suggests to me that we
might well illustrate this case in relief, by returning to
our earlier staging of 'Othello' and seeing how well that
universe of dramatic discourse observes the premiss that
"men are just as apt to be black as things in general".

Here is the relevant data:

| 1   =  B +, C +, D +, E +, I +, J +, O
|
| b   =  O
|
| m   =  C +, I +, J +, O
|
| 1,  =  B:B +, C:C +, D:D +, E:E +, I:I +, J:J +, O:O
|
| b,  =  O:O
|
| m,  =  C:C +, I:I +, J:J +, O:O

The "fair sampling" or "episkeptral arrow" condition is tantamount to this:
"Men are just as apt to be black as things in general are apt to be black".
In other words, men are a fair sample of things in general with respect to
the factor of being black.

Should this hold, the consequence would be:

[m,b]  =  [m,][b].

When [b] is not zero, we obtain the result:

[m,]  =  [m,b]/[b].

Once again, the absolute term b = "black" is most felicitously depicted
by way of its idempotent representation 'b' = b, = "black that is ---",
and thus it can be taken as a selective from the universe of discourse.

Here is the bigraph for the composition:

m,b   =  "man that is black",

here represented in the equivalent form:

m,b,  =  "man that is black that is ---".

B   C   D   E   I   J   O
o   o   o   o   o   o   o   1
    |           |   |   |
    |           |   |   |   m,
    |           |   |   |
o   o   o   o   o   o   o   1
                        |
                        |   b,
                        |
o   o   o   o   o   o   o   1
B   C   D   E   I   J   O

Thus we observe one of the more factitious facts
that hold in this universe of discourse, namely:

m,b  =  b.

Another way of saying that is:

b  -<  m.

That in itself is enough to puncture any notion
that b and m are statistically independent, but
let us continue to develop the plot a bit more.

Putting all of the general formulas and particular facts together,
we arrive at the following summation of the situation in the Othello case:

If the fair sampling condition holds:

[m,]  =  [m,b]/[b]  =  [b]/[b]  =  `1`.

In fact, however, it is the case that:

[m,]  =  [m,1]/[1]  =  [m]/[1]  =  4/7.

In sum, it is not the case in the Othello example that
"men are just as apt to be black as things in general".

Expressed in terms of probabilities:  P(m) = 4/7 and P(b) = 1/7.

If these were independent we'd have:  P(mb) = 4/49.

On the contrary, P(mb) = P(b) = 1/7.

Another way to see it is as follows:  P(b|m) = 1/4 while P(b) = 1/7.
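For the record, here is a minimal sketch in Python that grinds out
those figures directly from the Othello data listed above.  It is
just my own brute-force check, nothing more.

# Computing the Othello figures from the legend data.
universe = {"B", "C", "D", "E", "I", "J", "O"}   # 1 = B +, C +, D +, E +, I +, J +, O
men      = {"C", "I", "J", "O"}                  # m
black    = {"O"}                                 # b

n = len(universe)
p_m = len(men) / n                          # P(m) = 4/7
p_b = len(black) / n                        # P(b) = 1/7
p_mb = len(men & black) / n                 # P(mb) = 1/7, since m,b = b here
p_b_given_m = len(men & black) / len(men)   # P(b|m) = 1/4

print(p_mb == p_m * p_b)             # False : not independent
print(p_b_given_m, "vs", p_b)        # 0.25 vs 0.14285714...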

Again, all subject to the midnight judgment disclaimer ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 65

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Let me try to sum up as succinctly as possible the lesson
that we ought to take away from Peirce's last "number of"
example, since I know that the account that I have given
of it so far may appear to have wandered rather widely.

| So if men are just as apt to be black as things in general:
|
| [m,][b]  =  [m,b]
|
| where the difference between [m] and [m,] must not be overlooked.
|
| C.S. Peirce, CP 3.76

In different lights the formula [m,b] = [m,][b] presents itself
as an "aimed arrow", "fair sample", or "independence" condition.
I had taken the tack of illustrating this polymorphous theme in
bas relief, that is, via detour through a universe of discourse
where it fails.  Here's a brief reminder of the Othello example:

B   C   D   E   I   J   O
o   o   o   o   o   o   o   1
    |           |   |   |
    |           |   |   |   m,
    |           |   |   |
o   o   o   o   o   o   o   1
                        |
                        |   b,
                        |
o   o   o   o   o   o   o   1
B   C   D   E   I   J   O

The condition, "men are just as apt to be black as things in general",
is expressible in terms of conditional probabilities as P(b|m) = P(b),
which, written out, says that the probability of the event Black given
the event Man is exactly equal to the unconditional probability of the
event Black.

Thus, for example, it is sufficient to observe in the Othello setting
that P(b|m) = 1/4 while P(b) = 1/7 in order to cognize the dependency,
and thereby to tell that the ostensible arrow is anaclinically biased.

This reduction of a conditional probability to an absolute probability,
in the form P(A|Z) = P(A), is a familiar disguise, and yet in practice
it is one of the ways that we most commonly come to recognize the
condition of independence P(AZ) = P(A)P(Z), via the definition of a
conditional probability according to the rule P(A|Z) = P(AZ)/P(Z).
To recall the familiar consequences, the definition of conditional
probability plus the independence condition yields P(A|Z) = P(AZ)/P(Z)
= P(A)P(Z)/P(Z), to wit, P(A|Z) = P(A).

As Hamlet learned, there's a lot to be learned from turning a crank.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 66

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

And so we come to the end of the "number of" examples
that we found on our agenda at this point in the text:

| It is to be observed that:
|
| [!1!]  =  `1`.
|
| Boole was the first to show this connection between logic and
| probabilities.  He was restricted, however, to absolute terms.
| I do not remember having seen any extension of probability to
| relatives, except the ordinary theory of 'expectation'.
|
| Our logical multiplication, then, satisfies the essential conditions
| of multiplication, has a unity, has a conception similar to that of
| admitted multiplications, and contains numerical multiplication as
| a case under it.
|
| C.S. Peirce, CP 3.76

There appears to be a problem with the printing of the text at this point.
Let us first recall the conventions that I am using in this transcription:
`1` for the "antique 1" that Peirce defines as !1!_oo = "something", and
!1! for the "bold 1" that signifies the ordinary 2-identity relation.

CP 3 gives [!1!] = `1`, which I cannot make any sense of.
CE 2 gives [!1!] =  1 , which makes sense on the reading
of "1" as denoting the natural number 1, and not as the
absolute term "1" that denotes the universe of discourse.
On this reading, [!1!] is the average number of things
related by the identity relation !1! to one individual,
and so it makes sense that [!1!] = 1 : N, where "N" is
the set or type of the natural numbers {0, 1, 2, ...}.

With respect to the 2-identity !1! in the syntactic domain S
and the number 1 in the non-negative integers N c R, we have:

'v'!1!  =  [!1!]  =  1.

And so the "number of" mapping 'v' : S -> R has another one
of the properties that would be required of an arrow S -> R.
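As a cross-check on this reading, here is a minimal sketch in Python,
my own and purely illustrative, that computes the number of a 2-adic
relative term over a finite universe as the average number of
correlates per individual, which does give [!1!] = 1 for the 2-identity.

# The "number of" a 2-adic relative term, read as the average number of
# things so related to one individual; the 2-identity !1! comes out as 1.

universe = {"J", "K", "L", "M"}                  # toy universe of discourse
identity = {(x, x) for x in universe}            # the relation !1!

def nof_relative(relation, universe):
    """[r] = average number of things related by r to one individual."""
    return len(relation) / len(universe)

print(nof_relative(identity, universe))    # 1.0, i.e. [!1!] = 1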

The manner in which these arrows and qualified arrows help us
to construct a suspension bridge that unifies logic, semiotics,
statistics, stochastics, and information theory will be one of
the main themes that I aim to elaborate throughout the rest of
this inquiry.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 67

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The Sign of Involution
|
| I shall take involution in such a sense that x^y
| will denote everything which is an x for every
| individual of y.
|
| Thus
|
| 'l'^w
|
| will be a lover of every woman.
|
| Then
|
| ('s'^'l')^w
|
| will denote whatever stands to every woman in
| the relation of servant of every lover of hers;
|
| and
|
| 's'^('l'w)
|
| will denote whatever is a servant of
| everything that is lover of a woman.
|
| So that
|
| ('s'^'l')^w  =  's'^('l'w).
|
| C.S. Peirce, CP 3.77
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 68

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| The Sign of Involution
|
| I shall take involution in such a sense that x^y
| will denote everything which is an x for every
| individual of y.
|
| Thus
|
| 'l'^w
|
| will be a lover of every woman.
|
| C.S. Peirce, CP 3.77

In arithmetic, the "involution" x^y, or the "exponentiation" of x
to the power of y, is the iterated multiplication of the factor x,
repeated as many times as there are ones making up the exponent y.

In analogous fashion, 'l'^w is the iterated multiplication of 'l',
repeated as many times as there are individuals under the term w.

For example, suppose that the universe of discourse has,
among other things, just the three women, W_1, W_2, W_3.
This could be expressed in Peirce's notation by writing:

w  =  W_1 +, W_2 +, W_3.

In this setting, we would have:

'l'^w  =  'l'^(W_1 +, W_2 +, W_3)  =  'l'W_1 , 'l'W_2 , 'l'W_3.

That is, a lover of every woman in the universe of discourse
would be a lover of W_1 and a lover of W_2 and a lover of W_3.
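Here is a minimal sketch in Python of that computation.  The extension
of 'l' and the names Romeo and Paris are made up for illustration;
only the logic is Peirce's:  'l'^w comes out as the intersection of
the sets 'l'W_i, taken over all the individuals W_i under the term w.

# 'l'^w = "lover of every woman": iterate the conjunction 'l'W_i over all
# individuals W_i under the term w, i.e. intersect the sets of lovers.

loves = {("Romeo", "W_1"), ("Romeo", "W_2"), ("Romeo", "W_3"),
         ("Paris", "W_1")}                      # toy extension of 'l'
women = {"W_1", "W_2", "W_3"}                   # toy extension of w

def lovers_of(y, relation=loves):
    """'l'y : the set of lovers of the individual y."""
    return {x for (x, z) in relation if z == y}

# 'l'^w : lovers of every woman = intersection over all women of 'l'W_i
l_to_the_w = set.intersection(*(lovers_of(y) for y in women))
print(l_to_the_w)    # {'Romeo'} : Romeo loves W_1, W_2, and W_3; Paris does not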

So far so good ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Note 69

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o



o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Work Area

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Up to this point in the discussion, we have observed that
the "number of" map 'v' : S -> R such that 'v's = [s] has
the following morphic properties:

0.  [0]  =  0

1.  'v'

2.  x -< y  =>  [x] =< [y]

3.  [x +, y]  =<  [x] + [y]

contingent:

4.  [xy]  =  [x][y]

view relation P c X x Y x Z as related to three functions:

`p_1` c 
`p_3` c X x Y x Pow(Z)


f(x)

f(x+y) = f(x) + f(y)

f(p(x, y))  =  q(f(x), f(y))

P(x, y, z)

(f^-1)(y)

f(z(x, y))  =  z'(f(x), f(y))

Definition.  f(x:y:z)  =  (fx:fy:fz).

f(x:y:z)  =  (fx:fy:

x:y:z in R => fx:fy:fz in fR

R(x, y, z) => (fR)(fx, fy, fz)

(L, x, y, z) => (fL, fx, fy, fz)

(x, y, z, L) => (xf, yf, zf, Lf)

(x, y, z, b) => (xf, yf, zf, bf)


fzxy = z'(fx)(fy)


         F
         o
         |
         o
        / \
       o   o
                      o
                   .  |  .
                .     |     .
             .        |        .
          .           o           .
                   . / \ .
                .   /   \   .
             .     /     \     .
          .       o       o       .
                     . .     .
                    .   .       .
                                   .

                       
   C o        . / \ .        o
     |     .   /   \   .     | CF
     |  .     o     o     .  |
   f o     .     .     .     o fF
    / \ .     .     .       / \ 
   / . \   .               o   o
X o     o Y               XF   YF



<u, v, w> in P -> 

o---------o---------o---------o---------o
|         #    h    |    h    |    f    |
o=========o=========o=========o=========o
|    P    #    X    |    Y    |    Z    |
o---------o---------o---------o---------o
|    Q    #    U    |    V    |    W    |
o---------o---------o---------o---------o


Products of diagonal extensions:

1,1,  =  !1!!1!

      =  "anything that is anything that is ---"

      =  "anything that is ---"

      =  !1!

m,n  =  "man that is noble"  

     =  (C:C +, I:I +, J:J +, O:O)(C +, D +, O)

     =  C +, O

n,m  =  "noble that is man"

     =  (C:C +, D:D +, O:O)(C +, I +, J +, O)

     =  C +, O

n,w  =  "noble that is woman"

     =  (C:C +, D:D +, O:O)(B +, D +, E)

     =  D

w,n  =  "woman that is noble"

     =  (B:B +, D:D +, E:E)(C +, D +, O)

     =  D

Given a set X and a subset M c X, define e_M,
the "idempotent representation" of M over X,
as the 2-adic relation e_M c X x X which is
the identity relation on M.  In other words,
e_M = {<x, x> : x in M}.

Transposing this by steps into Peirce's notation:

e_M  =  {<x, x> : x in M}

     =  {x:x : x in M}

     =  Sum_X |x in M| x:x 
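A minimal sketch in Python of the same constructions, using the
Othello cast from the products computed above; the helper names
e and apply are mine, chosen only for this illustration.

# The idempotent representation e_M = {<x, x> : x in M}, used as a selective:
# composing e_M with a term's extension picks out the part of it that is in M.

X = {"B", "C", "D", "E", "I", "J", "O"}   # universe (the Othello cast)
man   = {"C", "I", "J", "O"}
noble = {"C", "D", "O"}

def e(M):
    """Idempotent representation of M over X: the identity relation on M."""
    return {(x, x) for x in M}

def apply(relation, term):
    """Image of an absolute term under a 2-adic relation."""
    return {x for (x, y) in relation if y in term}

print(apply(e(man), noble))    # {'C', 'O'} : "man that is noble" = C +, O
print(apply(e(noble), man))    # {'C', 'O'} : "noble that is man" = C +, O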

'l'  =  "lover of ---"

's'  =  "servant of ---"

'l',  =  "lover that is --- of ---"

's',  =  "servant that is --- of ---"

| But not only may any absolute term be thus regarded as a relative term, 
| but any relative term may in the same way be regarded as a relative with
| one correlate more.  It is convenient to take this additional correlate
| as the first one.
|
| Then:
|
| 'l','s'w
|
| will denote a lover of a woman that is a servant of that woman.
|
| C.S. Peirce, CP 3.73

o~~~~~~~~~o~~~~+~~~~o~~~~~~~~~o~~~~~~~~~o~~~~+~~~~o~~~~~~~~~o
o-----------------------------o-----------------------------o
|  Objective Framework (OF)   | Interpretive Framework (IF) |
o-----------------------------o-----------------------------o
|           Objects           |            Signs            |
o-----------------------------o-----------------------------o
|                                                           |
|           C  o---------------                             |
|                                                           |
|           F  o---------------                             |
|                                                           |
|           I  o---------------                             |
|                                                           |
|           O  o---------------                             |
|                                                           |
|           B  o---------------                             |
|                                                           |
|           D  o---------------                             |
|                                                           |
|           E  o---------------                             |
|                                 o "m"                     |
|                                /                          |
|                               /                           |
|                              /                            |
|           o  o  o-----------@                             |
|                              \                            |
|                               \                           |
|                                \                          |
|                                 o                         |
|                                                           |
o-----------------------------o-----------------------------o

†‡||§¶
@#||$%

quality, reflection, synecdoche

1.  neglect of
2.  neglect of
3. 

Now, it's not the end of the story, of course, but it's a start.
The significant thing is what is usually the significant thing
in mathematics, at least, that two distinct descriptions refer
to the same things.  Incidentally, Peirce is not really being
as indifferent to the distinctions between signs and things
as this ascii text makes him look, but uses a host of other
type-faces to distinguish the types and the uses of signs.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

GR = Gary Richmond

GR: I wonder if the necessary "elementary triad" spoken of
    below isn't somehow implicated in those discussions
    "invoking a 'closure principle'".

GR, quoting CSP:

    | CP 1.292.  It can further be said in advance, not, indeed,
    | purely a priori but with the degree of apriority that is
    | proper to logic, namely, as a necessary deduction from
    | the fact that there are signs, that there must be an
    | elementary triad.  For were every element of the
    | phaneron a monad or a dyad, without the relative
    | of teridentity (which is, of course, a triad),
    | it is evident that no triad could ever be
    | built up.  Now the relation of every sign
    | to its object and interpretant is plainly
    | a triad.  A triad might be built up of
    | pentads or of any higher perissad
    | elements in many ways.  But it
    | can be proved -- and really
    | with extreme simplicity,
    | though the statement of
    | the general proof is
    | confusing -- that no
    | element can have
    | a higher valency
    | than three.

GR: (Of course this passage also directly relates
    to the recent thread on Identity and Teridentity.)

Yes, generally speaking, I think that there are deep formal principles here
that manifest themselves in these various guises:  the levels of intention
or the orders of reflection, the sign relation, pragmatic conceivability,
the generative sufficiency of 3-adic relations for all practical intents,
and the irreducibility of continuous relations.  I have run into themes
in combinatorics, group theory, and Lie algebras that are tantalizingly
reminiscent of the things that Peirce says here, but it will take me
some time to investigate them far enough to see what's going on.

GR: PS.  I came upon the above passage last night reading through
    the Peirce selections in John J. Stuhr's 'Classical American
    Philosophy:  Essential Readings and Interpretive Essays',
    Oxford University, 1987 (the passage above is found on
    pp 61-62), readily available in paperback in a new
    edition, I believe.

GR: An aside:  These excerpts in Stuhr include versions of a fascinating
    "Intellectual Autobiography", Peirce's summary of his scientific,
    especially, philosophic accomplishments.  I've seen them published
    nowhere else.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BU = Ben Udell
JA = Jon Awbrey

BU: I'm in the process of moving back to NYC and have had little opportunity
    to do more than glance through posts during the past few weeks, but this
    struck me because it sounds something I really would like to know about,
    but I didn't understand it:

JA: Notice that Peirce follows the mathematician's usual practice,
    then and now, of making the status of being an "individual" or
    a "universal" relative to a discourse in progress.  I have come
    to appreciate more and more of late how radically different this
    "patchwork" or "piecewise" approach to things is from the way of
    some philosophers who seem to be content with nothing less than
    many worlds domination, which means that they are never content
    and rarely get started toward the solution of any real problem.
    Just my observation, I hope you understand.

BU: "Many worlds domination", "nothing less than many worlds domination" --
    as opposed to the patchwork or piecewise approach.  What is many worlds
    domination?  When I hear "many worlds" I think of Everett's Many Worlds
    interpretation of quantum mechanics.

Yes, it is a resonance of Edward, Everett, and All the Other Whos in Whoville,
but that whole microcosm is itself but the frumious reverberation of Leibniz's
Maenadolatry.

More sequitur, though, this is an issue that has simmered beneath
the surface of my consciousness for several decades now and only
periodically percolates itself over the hyper-critical thrashold
of expression.  Let me see if I can do a better job of it this time.

The topic is itself a patchwork of infernally recurrent patterns.
Here are a few pieces of it that I can remember arising recently:

| Zeroth Law Of Semantics
|
| Meaning is a privilege not a right.
| Not all pictures depict.
| Not all signs denote.
|
| Never confuse a property of a sign,
| for instance, existence,
| with a sign of a property,
| for instance, existence.
|
| Taking a property of a sign,
| for a sign of a property,
| is the zeroth sign of
| nominal thinking,
| and the first
| mistake.
|
| Also Sprach Zero*

A less catchy way of saying "meaning is a privilege not a right"
would most likely be "meaning is a contingency not a necessity".
But if I reflect on that phrase, it does not quite satisfy me,
since a deeper lying truth is that contingency and necessity,
connections in fact and connections beyond the reach of fact,
depend on a line of distinction that is itself drawn on the
scene of observation from the embodied, material, physical,
non-point massive, non-purely-spectrelative point of view
of an agent or community of interpretation, a discursive
universe, an engauged interpretant, a frame of at least
partial self-reverence, a hermeneutics in progress, or
a participant observer.  In short, this distinction
between the contingent and the necessary is itself
contingent, which means, among other things, that
signs are always indexical at some least quantum.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JR = Joe Ransdell

JR: Would the Kripke conception of the "rigid designator" be an instance
    of the "many worlds domination"?   I was struck by your speaking of
    the "patchwork or piecewise" approach as well in that it seemed to
    me you might be expressing the same general idea that I have usually
    thought of in terms of contextualism instead:  I mean the limits it
    puts upon what you can say a priori if you really take contextualism
    seriously, which is the same as recognizing indexicality as incapable
    of elimination, I think.

Yes, I think this is the same ballpark of topics.
I can't really speak for what Kripke had in mind,
but I have a practical acquaintance with the way
that some people have been trying to put notions
like this to work on the applied ontology scene,
and it strikes me as a lot of nonsense.  I love
a good parallel worlds story as much as anybody,
but it strikes me that many worlds philosophers
have the least imagination of anybody as to what
an alternative universe might really be like and
so I prefer to read more creative writers when it
comes to that.  But serially, folks, I think that
the reason why some people evidently feel the need
for such outlandish schemes -- and the vast majority
of the literature on counterfactual conditionals falls
into the same spaceboat as this -- is simply that they
have failed to absorb, through the fault of Principian
filters, a quality that Peirce's logic is thoroughly
steeped in, namely, the functional interpretation
of logical terms, that is, as signs referring to
patterns of contingencies.  It is why he speaks
more often, and certainly more sensibly and to
greater effect, of "conditional generals" than
of "modal subjunctives".  This is also bound up
with that element of sensibility that got lost in
the transition from Peircean to Fregean quantifiers.
Peirce's apriorities are always hedged with risky bets.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BU = Benjamin Udell

BU: I wish I had more time to ponder the "many-worlds" issue (& that my books
    were not currently disappearing into heavily taped boxes).  I had thought
    of the piecemeal approach's opposite as the attempt to build a kind of
    monolithic picture, e.g., to worry that there is not an infinite number
    of particles in the physical universe for the infinity of integers.  But
    maybe the business with rigid designators & domination of many worlds
    has somehow to do with monolithism.

Yes, that's another way of saying it.  When I look to my own priorities,
my big worry is that logic as a discipline is not fulfilling its promise.
I have worked in too many settings where the qualitative researchers and
the quantitative researchers could barely even talk to one an Other with
any understanding, and this I recognized as a big block to inquiry since
our first notice of salient facts and significant phenomena is usually
in logical, natural language, or qualitative forms, while our eventual
success in resolving anomalies and solving practical problems depends
on our ability to formalize, operationalize, and quantify the issues,
even if only to a very partial degree, as it generally turns out.

When I look to the history of how logic has been deployed in mathematics,
and through those media in science generally, it seems to me that the
Piece Train started to go off track with the 'Principia Mathematica'.
All pokes in the rib aside, however, I tend to regard this event
more as the symptom of a localized cultural phenomenon than as
the root cause of the broader malaise.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

CG = Clark Goble
JA = Jon Awbrey

JA, quoting CSP:

    | For example,
    |
    | f + u
    |
    | means all Frenchmen besides all violinists, and,
    | therefore, considered as a logical term, implies
    | that all French violinists are 'besides themselves'.

CG: Could you clarify your use of "besides"?

CG: I think I am following your thinking in that you
    don't want the logical terms to be considered
    to have any necessary identity between them.
    Is that right?

I use vertical sidebars "|" for long quotations, so this
is me quoting Peirce at CP 3.67 who is explaining in an
idiomatic way Boole's use of the plus sign for a logical
operation that is strictly speaking limited to terms for
mutually exclusive classes.  The operation would normally
be extended to signify the "symmetric difference" operator.
But Peirce is saying that he prefers to use the sign "+,"
for inclusive disjunction, corresponding to the union of
the associated classes.  Peirce calls Boole's operation
"invertible" because it amounts to the sum operation in
a field, whereas the inclusive disjunction or union is
"non-invertible", since knowing that A |_| B = C does
not allow one to say determinately that A = C - B.
I can't recall if Boole uses this 'besides' idiom,
but will check later.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

CG = Clark Goble
JA = Jon Awbrey

JA: I use vertical sidebars "|" for long quotations, so this
    is me quoting Peirce at CP 3.67 who is explaining in an
    idiomatic way Boole's use of the plus sign for a logical
    operation that is strictly speaking limited to terms for
    mutually exclusive classes.

CG: Is that essay related to any of the essays
    in the two volume 'Essential Peirce'?  I'm
    rather interested in how he speaks there.

No, the EP volumes are extremely weak on logical selections.
I see nothing there that deals with the logic of relatives.

JA: But Peirce is saying that he prefers to use the sign "+,"
    for inclusive disjunction, corresponding to the union of
    the associated classes.

CG: The reason I asked was more because it seemed
    somewhat interesting in light of the logic of
    operators in quantum mechanics.  I was curious
    if the use of "beside" might relate to that.
    But from what you say it probably was just me
    reading too much into the quote.  The issue of
    significance was whether the operation entailed
    the necessity of mutual exclusivity or whether
    some relationship between the classes might be 
    possible.  I kind of latched on to Peirce's
    odd statement about "all French violinists
    are 'beside themselves'".

CG: Did Peirce have anything to say about
    what we'd call non-commuting operators?

In general, 2-adic relative terms are non-commutative.
For example, a brother of a mother is not identical to
a mother of a brother.
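In case a toy computation helps, here is a minimal sketch in Python --
the family data is made up -- showing that the relative product of
two 2-adic relations depends on the order of the factors.

# Relative multiplication is composition of relations, and in general it
# does not commute: a brother of a mother is not a mother of a brother.

brother_of = {("Bob", "Ann")}                  # Bob is a brother of Ann (toy data)
mother_of  = {("Ann", "Carl")}                 # Ann is the mother of Carl

def compose(r, s):
    """Relative product r s : x (r s) z iff x r y and y s z for some y."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

print(compose(brother_of, mother_of))   # {('Bob', 'Carl')} : brother of a mother
print(compose(mother_of, brother_of))   # set() : no mother of a brother here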

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

GR = Gary Richmond

GR: I am very much enjoying, which is to say,
    learning from your interlacing commentary
    on Peirce's 1870 "Logic of Relatives" paper.

GR: What an extraordinary paper the 1870 "LOG" is!  Your notes helped
    me appreciate the importance of the unanticipated proposal of P's
    to "assign to all logical terms, numbers".  On the other hand,
    the excerpts suggested to me why Peirce finally framed his
    Logic of Relatives into graphical form.  Still, I think
    that a thorough examination of the 1870 paper might
    serve as propaedeutic (and of course, much more)
    for the study of the alpha and beta graphs.

Yes, there's gold in them thar early logic papers that has been "panned"
but nowhere near mined in depth yet.  The whole quiver of arrows between
terms and numbers harks back to the 'numeri characteristici' of Leibniz,
of course, but Leibniz attended more to the intensional chains of being
while Peirce will here start to "escavate" the extensional hierarchies.

I consider myself rewarded that you see the incipient impulse toward
logical graphs, as one of the most striking things to me about this
paper is to see these precursory seeds already planted here within
it and yet to know how long it will take them to sprout and bloom.

Peirce is obviously struggling to stay within the linotyper's art --
a thing that we, for all our exorbitant hype about markable text,
are still curiously saddled with -- but I do not believe that it
is possible for any mind equipped with a geometrical imagination
to entertain these schemes for connecting up terminological hubs
with their terminological terminals without perforce stretching
imaginary strings between the imaginary gumdrops.

GR: I must say though that the pace at which you've been throwing this at us
    is not to be kept up with by anyone I know "in person or by reputation".
    I took notes on the first 5 or 6 Notes, but can now just barely find
    time to read through your posts.

Oh, I was trying to burrow as fast as I could toward the more untapped veins --
I am guessing that things will probably "descalate" a bit over the next week,
but then, so will our attention spans ...

Speaking of which, I will have to break here, and pick up the rest later ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 8

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

GR = Gary Richmond

GR: In any event, I wish that you'd comment on Note 5 more directly (though
    you do obliquely in your own diagramming of "every [US] Vice-President(s) ...
    [who is] every President(s) of the US Senate".

There are several layers of things to say about that,
and I think that it would be better to illustrate the
issues by way of the examples that Peirce will soon be
getting to, but I will see what I can speak to for now.

GR: But what interested me even more in LOR, Note 5, was the sign < ("less than")
    joined to the sign of identity = to yield P's famous sign -< (or more clearly,
    =<) of inference, which combines the two, so that -< (literally, "as small as")
    means "is".  I must say I both "get" this and don't quite (Peirce's example(s) of
    the frenchman helped a little).  Perhaps your considerably more mathematical mind
    can help clarify this for a non-mathematician such as myself.  (My sense is that
    "as small as" narrows the terms so that "everything that occurs in the conclusion
    is already contained in the premise".)  I hope I'm not being obtuse here.  I'm sure
    it's "all too simple for words".

Then let us draw a picture.

"(F (G))", read "not F without G", means that F (G), that is, F and not G,
is the only region exempted from the occupation of being in this universe:

o-----------------------------------------------------------o
|`X`````````````````````````````````````````````````````````|
|```````````````````````````````````````````````````````````|
|`````````````o-------------o```o-------------o`````````````|
|````````````/               \`/```````````````\````````````|
|```````````/                 o`````````````````\```````````|
|``````````/                 /`\`````````````````\``````````|
|`````````/                 /```\`````````````````\`````````|
|````````/                 /`````\`````````````````\````````|
|```````o                 o```````o`````````````````o```````|
|```````|                 |```````|`````````````````|```````|
|```````|                 |```````|`````````````````|```````|
|```````|        F        |```````|````````G````````|```````|
|```````|                 |```````|`````````````````|```````|
|```````|                 |```````|`````````````````|```````|
|```````o                 o```````o`````````````````o```````|
|````````\                 \`````/`````````````````/````````|
|`````````\                 \```/`````````````````/`````````|
|``````````\                 \`/`````````````````/``````````|
|```````````\                 o`````````````````/```````````|
|````````````\               /`\```````````````/````````````|
|`````````````o-------------o```o-------------o`````````````|
|```````````````````````````````````````````````````````````|
|```````````````````````````````````````````````````````````|
o-----------------------------------------------------------o

Collapsing the vacuous region like soapfilm popping on a wire frame,
we draw the constraint (F (G)) in the following alternative fashion:

o-----------------------------------------------------------o
|`X`````````````````````````````````````````````````````````|
|```````````````````````````````````````````````````````````|
|```````````````````````````````o-------------o`````````````|
|``````````````````````````````/```````````````\````````````|
|`````````````````````````````o`````````````````\```````````|
|````````````````````````````/`\`````````````````\``````````|
|```````````````````````````/```\`````````````````\`````````|
|``````````````````````````/`````\`````````````````\````````|
|`````````````````````````o```````o`````````````````o```````|
|`````````````````````````|```````|`````````````````|```````|
|`````````````````````````|```````|`````````````````|```````|
|`````````````````````````|```F```|````````G````````|```````|
|`````````````````````````|```````|`````````````````|```````|
|`````````````````````````|```````|`````````````````|```````|
|`````````````````````````o```````o`````````````````o```````|
|``````````````````````````\`````/`````````````````/````````|
|```````````````````````````\```/`````````````````/`````````|
|````````````````````````````\`/`````````````````/``````````|
|`````````````````````````````o`````````````````/```````````|
|``````````````````````````````\```````````````/````````````|
|```````````````````````````````o-------------o`````````````|
|```````````````````````````````````````````````````````````|
|```````````````````````````````````````````````````````````|
o-----------------------------------------------------------o

So, "(F (G))", "F => G", "F =< G", "F -< G", "F c G",
under suitable mutations of interpretation, are just
so many ways of saying that the denotation of "F" is
contained within the denotation of "G".

Now, let us look to the "characteristic functions" or "indicator functions"
of the various regions of being.  It is frequently convenient to ab-use the
same letters for them and merely keep a variant interpretation "en thy meme",
but let us be more meticulous here, and reserve the corresponding lower case
letters "f" and "g" to denote the indicator functions of the regions F and G,
respectively.

Taking B = {0, 1} as the boolean domain, we have:

f, g : X -> B

(f^(-1))(1)  =  F

(g^(-1))(1)  =  G

In general, for h : X -> B, an expression like "(h^(-1))(1)"
can be read as "the inverse of h evaluated at 1", in effect,
denoting the set of points in X where h evaluates to "true".
This is called the "fiber of truth" in h, and I have gotten
where I like to abbreviate it as "[|h|]".

Accordingly, we have:

F  =  [|f|]  =  (f^(-1))(1)  c  X

G  =  [|g|]  =  (g^(-1))(1)  c  X

This brings us to the question, what sort
of "functional equation" between f and g
goes with the regional constraint (F (G))?

Just this, that f(x) =< g(x) for all x in X,
where the '=<' relation on the values in B
has the following operational table for
the pairing "row head =< column head".

o---------o---------o---------o
|   =<    #    0    |    1    |
o=========o=========o=========o
|    0    #    1    |    1    |
o---------o---------o---------o
|    1    #    0    |    1    |
o---------o---------o---------o

And this, of course, is the same thing as the truth table
for the conditional connective or the implication relation.
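If it helps to see the whole thing run, here is a minimal sketch in
Python, with a made-up universe and regions F c G, checking that the
regional constraint (F (G)) amounts to the pointwise condition
f(x) =< g(x), and that the fiber of truth [|f|] recovers F.

# Indicator functions and the "fiber of truth": F c G exactly when
# f(x) =< g(x) for every x, reading =< on B = {0, 1} as implication.

X = {"a", "b", "c", "d"}        # a toy universe of discourse
F = {"a", "b"}                  # region F
G = {"a", "b", "c"}             # region G, with F c G

f = lambda x: int(x in F)       # indicator function of F
g = lambda x: int(x in G)       # indicator function of G

def implies(p, q):
    """The =< (implication) ordering on boolean values."""
    return (not p) or q

print(all(implies(f(x), g(x)) for x in X))    # True, since F c G
print({x for x in X if f(x) == 1} == F)       # the fiber of truth [|f|] recovers F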

GR: By the way, in the semiosis implied by the modal gamma graphs,
    could -< (were it used there, which of course it is not) ever
    be taken to mean,"leads to" or "becomes" or "evolves into"?
    I informally use it that way myself, using the ordinary
    arrow for implication.

I am a bit insensitive to the need for modal logic,
since necessity in mathematics always seems to come
down to being a matter of truth for all actual cases,
if under an expanded sense of actuality that makes it
indiscernible from possibility, so I must beg off here.
But there are places where Peirce makes a big deal about
the advisability of drawing the '-<' symbol in one fell
stroke of the pen, kind of like a "lazy gamma" -- an old
texican cattle brand -- and I have seen another place where
he reads "A -< B" as "A, in every way that it can be, is B",
as if this '-<' fork in the road led into a veritable garden
of branching paths.

And out again ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 9

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

GR = Gary Richmond
JA = Jon Awbrey

JA: I am a bit insensitive to the need for modal logic,
    since necessity in mathematics always seems to come
    down to being a matter of truth for all actual cases,
    if under an expanded sense of actuality that makes it
    indiscernible from possibility, so I must beg off here.

GR: I cannot agree with you regarding modal logic.  Personally
    I feel that the gamma part of the EG's is of the greatest
    interest and potential importance, and as Jay Zeman has
    made clear in his dissertation, Peirce certainly thought
    this as well.

You disagree that I am insensitive?  Well, certainly nobody has ever done that before!
No, I phrased it that way to emphasize the circumstance that it hardly ever comes up
as an issue within the limited purview of my experience, and when it does -- as in
topo-logical boundary situations -- it seems to require a sort of analysis that
doesn't comport all that well with the classical modes and natural figures of
speech about it.  Then again, I spent thirty years trying to motorize Alpha,
have only a few good clues how I would go about Beta, and so Gamma doesn't
look like one of those items on my plate.

Speeching Of Which ---
Best Of The Season ...
And Happy Trailing ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 10

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BM = Bernard Morand
JA = Jon Awbrey

BM: Thanks for your very informative talk.  There
    is a point that I did not understand in note 35:

JA: If we operate in accordance with Peirce's example of `g`'o'h
    as the "giver of a horse to an owner of that horse", then we
    may assume that the associative law and the distributive law
    are by default in force, allowing us to derive this equation:

JA: 'l','s'w  =  'l','s'(B +, D +, E)  =  'l','s'B +, 'l','s'D +, 'l','s'E

BM: Maybe it is because of language, or more probably my lack of training in
    logic, but what does it mean to say that "associative law and distributive
    law are by default in force"?

Those were some tricky Peirces,
and I was trying to dodge them
as artful as could be, but now
you have fastly apprehended me!

It may be partly that I left out the initial sections of this paper where Peirce
discusses how he will regard the ordinarily applicable principles in the process
of trying to extend and generalize them (CP 3.45-62), but there may be also an
ambiguity in Peirce's use of the phrase "absolute conditions" (CP 3.62-68).
Does he mean "absolutely necessary", "indispensable", "inviolate", or
does he mean "the conditions applying to the logic of absolute terms",
in which latter case we would expect to alter them sooner or later?

We lose the commutative law, xy = yx, as soon as we extend to 2-adic relations,
but keep the associative law, x(yz) = (xy)z, as the multiplication of 2-adics
is the logical analogue of ordinary matrix multiplication, and Peirce, like
most mathematicians, treats the double distributive law, x(y + z) = xy + xz
and (x + y)z = xz + yz, as something that one must strive to preserve
as far as possible.

Strictly speaking, Peirce is already using a principle that goes beyond
the ordinary associative law, but that is recognizably analogous to it,
for example, in the modified Othello case, where (J:J:D)(J:D)(D) = J.
If it were strictly associative, then we would have the following:

1.  (J:J:D)((J:D)(D))  =  (J:J:D)(J)  =  0?

2.  ((J:J:D)(J:D))(D)  =  (J)(D)  =  0?

In other words, the intended relational linkage would be broken.
However, the type of product that Peirce is taking for granted
in this situation often occurs in mathematics in just this way.
There is another location where he comments more fully on this,
but I have the sense that it was a late retrospective remark,
and I do not recall if it was in CP or in the microfilm MS's
that I read it.

By "default" conditions I am referring more or less to what
Peirce says at the end of CP 3.69, where he uses an argument
based on the distributive principle to rationalize the idea
that 'A term multiplied by two relatives shows that the same
individual is in the two relations'.  This means, for example,
that one can let "`g`'o'h", without subjacent marks or numbers,
be interpreted on the default convention of "overlapping scopes",
where the two correlates of `g` are given by the next two terms
in line, namely, 'o' and h, and the single correlate of 'o' is
given by the very next term in line, namely, h.  Thus, it is
only when this natural scoping cannot convey the intended
sense that we have to use more explicit mark-up devices.

BM: About another point:  do you think that the LOR could be of some help to solve
    the puzzle of the "second way of dividing signs" where CSP concludes that 66
    classes could be made out of the 10 divisions (Letters to lady Welby)?
    (As I see them, the ten divisions involve a mix of relative terms,
    dyadic relations and a triadic one.  In order to make 66 classes
    it is clear that these 10 divisions have to be stated under some
    linear order.  The nature of this order is at the bottom of the
    disagreements on the subject).

This topic requires a longer excuse from me
than I am able to make right now, but maybe
I'll get back to it later today or tomorrow.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 11

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BM = Bernard Morand

BM: About another point:  do you think that the LOR could be of some help
    to solve the puzzle of the "second way of dividing signs" where CSP
    concludes that 66 classes could be made out of the 10 divisions
    (Letters to lady Welby)?  (As I see them, the ten divisions
    involve a mix of relative terms, dyadic relations and
    a triadic one.  In order to make 66 classes it is
    clear that these 10 divisions have to be stated
    under some linear order.  The nature of this
    order is at the bottom of the disagreements
    on the subject).

Yes.  At any rate, I have a pretty clear sense from reading Peirce's work
in the period 1865-1870 that the need to understand the function of signs
in scientific inquiry is one of the main reasons he found himself forced
to develop both the theory of information and the logic of relatives.

Peirce's work of this period is evenly distributed across the extensional
and intensional pans of the balance in a way that is very difficult for us
to follow anymore.  I remember when I started looking into this I thought of
myself as more of an "intensional, synthetic" than an "extensional, analytic"
type of thinker, but that seems like a long time ago, as it soon became clear
that much less work had been done in the Peirce community on the extensional
side of things, while that was the very facet that needed to be polished up
in order to reconnect logic with empirical research and mathematical models.
So I fear that I must be content that other able people are working on the
intensional classification of sign relations.

Still, the way that you pose the question is very enticing,
so maybe it is time for me to start thinking about this
aspect of sign relations again, if you could say more
about it.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 12

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BM = Bernard Morand

BM: The pairing of "intensional, synthetic" against the other, "extensional, analytic",
    is not the one that I would have thought of.  I would have paired synthetic with
    extensional because synthesis consists in adding new facts to an already made
    conception.  On the other side analysis looks to be the determination of
    features while neglecting facts.  But maybe there is something like
    a symmetry effect leading to the same view from two different points.

Oh, it's not too important, as I don't put a lot of faith in such divisions,
and the problem for me is always how to integrate the facets of the object,
or the faculties of the mind -- but there I go being synthetic again!

I was only thinking of a conventional contrast that used to be drawn
between different styles of thinking in mathematics, typically one
points to Descartes, and the extensionality of analytic geometry,
versus Desargues, and the intensionality of synthetic geometry.

It may appear that one has side-stepped the issue of empiricism
that way, but then all that stuff about the synthetic a priori
raises its head, and we have Peirce's insight that mathematics
is observational and even experimental, and so I must trail off
into uncoordinated elliptical thoughts ...

The rest I have to work at a while, and maybe go back to the Welby letters.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 13

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BM = Bernard Morand

BM: I will try to make clear the matter, at least as far as I understand it
    for now.  We can summarize in a table the 10 divisions with their number
    in a first column, their title in current (peircean) language in the second
    and some kind of logical notation in the third.  The sources come mainly from
    the letters to Lady Welby.  While the titles come from CP 8.344, the third column
    comes from my own interpretation.

BM: So we get:

I    - According to the Mode of Apprehension of the Sign itself             - S
II   - According to the Mode of Presentation of the Immediate Object        - Oi
III  - According to the Mode of Being of the Dynamical Object               - Od
IV   - According to the Relation of the Sign to its Dynamical Object        - S-Od
V    - According to the Mode of Presentation of the Immediate Interpretant  - Ii
VI   - According to the Mode of Being of the Dynamical Interpretant         - Id
VII  - According to the relation of the Sign to the Dynamical Interpretant  - S-Id
VIII - According to the Nature of the Normal Interpretant                   - If
IX   - According to the relation of the Sign to the Normal Interpretant     - S-If
X    - According to the Triadic Relation of the Sign to its Dynamical Object
       and to its Normal Interpretant                                       - S-Od-If

For my future study, I will reformat the table in a way that I can muse upon.
I hope the roman numerals have not become canonical, as I cannot abide them.

Table.  Ten Divisions of Signs (Peirce, Morand)
o---o---------------o------------------o------------------o---------------o
|   | According To: | Of:              | To:              |               |
o===o===============o==================o==================o===============o
| 1 | Apprehension  | Sign Itself      |                  | S             |
| 2 | Presentation  | Immediate Object |                  | O_i           |
| 3 | Being         | Dynamical Object |                  | O_d           |
| 4 | Relation      | Sign             | Dynamical Object | S : O_d       |
o---o---------------o------------------o------------------o---------------o
| 5 | Presentation  | Immediate Interp |                  | I_i           |
| 6 | Being         | Dynamical Interp |                  | I_d           |
| 7 | Relation      | Sign             | Dynamical Interp | S : I_d       |
o---o---------------o------------------o------------------o---------------o
| 8 | Nature        | Normal Interp    |                  | I_f           |
| 9 | Relation      | Sign             | Normal Interp    | S : I_f       |
o---o---------------o------------------o------------------o---------------o
| A | Relation      | Sign             | Dynamical Object |               |
|   |               |                  | & Normal Interp  | S : O_d : I_f |
o---o---------------o------------------o------------------o---------------o

Just as I have always feared, this classification mania
appears to be communicable!  But now I must definitely
review the Welby correspondence, as all this stuff was
a blur to my sensibilities the last 10 times I read it.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 14

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BM = Bernard Morand

[Table.  Ten Divisions of Signs (Peirce, Morand)]

BM: Yes this is clearer (in particular in expressing relations with :)

This is what Peirce used to form elementary relatives, for example,
o:s:i = <o, s, i>, and I find it utterly ubertous in a wide variety
of syntactic circumstances.

BM: I suggest making a correction to myself if
    the table is destinate to become canonic.

Hah!  Good one!

BM: I probably made a too quick jump from Normal Interpretant to Final Interpretant.
    As we know, the final interpretant, the ultimate one is not a sign for Peirce
    but a habit.  So for the sake of things to come it would be more careful to
    retain I_n in place of I_f for now.

This accords with my understanding of how the word is used in mathematics.
In my own work it has been necessary to distinguish many different species
of expressions along somewhat similar lines, for example:  arbitrary, basic,
canonical, decidable, normal, periodic, persistent, prototypical, recurrent,
representative, stable, typical, and so on.  So I will make the changes below:

Table.  Ten Divisions of Signs (Peirce, Morand)
o---o---------------o------------------o------------------o---------------o
|   | According To: | Of:              | To:              |               |
o===o===============o==================o==================o===============o
| 1 | Apprehension  | Sign Itself      |                  | S             |
| 2 | Presentation  | Immediate Object |                  | O_i           |
| 3 | Being         | Dynamical Object |                  | O_d           |
| 4 | Relation      | Sign             | Dynamical Object | S : O_d       |
o---o---------------o------------------o------------------o---------------o
| 5 | Presentation  | Immediate Interp |                  | I_i           |
| 6 | Being         | Dynamical Interp |                  | I_d           |
| 7 | Relation      | Sign             | Dynamical Interp | S : I_d       |
o---o---------------o------------------o------------------o---------------o
| 8 | Nature        | Normal Interp    |                  | I_n           |
| 9 | Relation      | Sign             | Normal Interp    | S : I_n       |
o---o---------------o------------------o------------------o---------------o
| A | Tri. Relation | Sign             | Dynamical Object |               |
|   |               |                  | & Normal Interp  | S : O_d : I_n |
o---o---------------o------------------o------------------o---------------o

BM: Peirce gives the following definition (CP 8.343):

BM, quoting CSP:

     | It is likewise requisite to distinguish
     | the 'Immediate Interpretant', i.e. the
     | Interpretant represented or signified in
     | the Sign, from the 'Dynamic Interpretant',
     | or effect actually produced on the mind
     | by the Sign;  and both of these from
     | the 'Normal Interpretant', or effect
     | that would be produced on the mind by
     | the Sign after sufficient development
     | of thought.
     |
     | C.S. Peirce, 'Collected Papers', CP 8.343.

Well, you've really tossed me in the middle of the briar patch now!
I must continue with my reading from the 1870 LOR, but now I have
to add to my do-list the problems of comparing the whole variorum
of letters and drafts of letters to Lady Welby.  I only have the
CP 8 and Wiener versions here, so I will depend on you for ample
excerpts from the Lieb volume.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 15

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I will need to go back and pick up the broader contexts of your quotes.
For ease of study I break Peirce's long paragraphs into smaller pieces.

| It seems to me that one of the first useful steps toward a science
| of 'semeiotic' ([Greek 'semeiootike']), or the cenoscopic science
| of signs, must be the accurate definition, or logical analysis,
| of the concepts of the science.
|
| I define a 'Sign' as anything which on the one hand
| is so determined by an Object and on the other hand
| so determines an idea in a person's mind, that this
| latter determination, which I term the 'Interpretant'
| of the sign, is thereby mediately determined by that
| Object.
|
| A sign, therefore, has a triadic relation to
| its Object and to its Interpretant.  But it is
| necessary to distinguish the 'Immediate Object',
| or the Object as the Sign represents it, from
| the 'Dynamical Object', or really efficient
| but not immediately present Object.
|
| It is likewise requisite to distinguish
| the 'Immediate Interpretant', i.e. the
| Interpretant represented or signified in
| the Sign, from the 'Dynamic Interpretant',
| or effect actually produced on the mind
| by the Sign;  and both of these from
| the 'Normal Interpretant', or effect
| that would be produced on the mind by
| the Sign after sufficient development
| of thought.
|
| On these considerations I base a recognition of ten respects in which Signs
| may be divided.  I do not say that these divisions are enough.  But since
| every one of them turns out to be a trichotomy, it follows that in order
| to decide what classes of signs result from them, I have 3^10, or 59049,
| difficult questions to carefully consider;  and therefore I will not
| undertake to carry my systematical division of signs any further,
| but will leave that for future explorers.
|
| C.S. Peirce, 'Collected Papers', CP 8.343.

You never know when the future explorer will be yourself.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 16

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Burks, the editor of CP 8, attaches this footnote
to CP 8.342-379, "On the Classification of Signs":

| From a partial draft of a letter to Lady Welby, bearing
| the dates of 24, 25, and 28 December 1908, Widener IB3a,
| with an added quotation in 368n23.  ...

There is a passage roughly comparable to CP 8.343 in a letter
to Lady Welby dated 23 December 1908, pages 397-409 in Wiener,
which is incidentally the notorious "sop to Cerberus" letter:

| It is usual and proper to distinguish two Objects of a Sign,
| the Mediate without, and the Immediate within the Sign.  Its
| Interpretant is all that the Sign conveys:  acquaintance with
| its Object must be gained by collateral experience.
|
| The Mediate Object is the Object outside of the Sign;  I call
| it the 'Dynamoid' Object.  The Sign must indicate it by a hint;
| and this hint, or its substance, is the 'Immediate' Object.
|
| Each of these two Objects may be said to be capable of either of
| the three Modalities, though in the case of the Immediate Object,
| this is not quite literally true.
|
| Accordingly, the Dynamoid Object may be a Possible;  when I term
| the Sign an 'Abstractive';  such as the word Beauty;  and it will be
| none the less an Abstractive if I speak of "the Beautiful", since it is
| the ultimate reference, and not the grammatical form, that makes the sign
| an 'Abstractive'.
|
| When the Dynamoid Object is an Occurrence (Existent thing or Actual fact
| of past or future), I term the Sign a 'Concretive';  any one barometer
| is an example;  and so is a written narrative of any series of events.
|
| For a 'Sign' whose Dynamoid Object is a Necessitant, I have at present
| no better designation than a 'Collective', which is not quite so bad a
| name as it sounds to be until one studies the matter:  but for a person,
| like me, who thinks in quite a different system of symbols to words, it
| is so awkward and often puzzling to translate one's thought into words!
|
| If the Immediate Object is a "Possible", that is, if the Dynamoid Object
| is indicated (always more or less vaguely) by means of its Qualities, etc.,
| I call the Sign a 'Descriptive';
|
| if the Immediate is an Occurrence, I call the Sign a 'Designative';
|
| and if the Immediate Object is a Necessitant, I call the Sign a
| 'Copulant';  for in that case the Object has to be so identified
| by the Interpreter that the Sign may represent a necessitation.
| My name is certainly a temporary expedient.
|
| It is evident that a possible can determine nothing but a Possible,
| it is equally so that a Necessitant can be determined by nothing but
| a Necessitant.  Hence it follows from the Definition of a Sign that
| since the Dynamoid Object determines the Immediate Object,
|
|    Which determines the Sign itself,
|    which determines the Destinate Interpretant
|    which determines the Effective Interpretant
|    which determines the Explicit Interpretant
|
| the six trichotomies, instead of determining 729 classes of signs,
| as they would if they were independent, only yield 28 classes;
| and if, as I strongly opine (not to say almost prove), there
| are four other trichotomies of signs of the same order of
| importance, instead of making 59,049 classes, these will
| only come to 66.
|
| The additional 4 trichotomies are undoubtedly, first:
|
|    Icons*,  Symbols,  Indices,
|
|*(or Simulacra, Aristotle's 'homoiomata'), caught from Plato, who I guess took it
| from the Mathematical school of logic, for it earliest appears in the 'Phaedrus'
| which marks the beginning of Plato's being decisively influenced by that school.
| Lutoslowski is right in saying that the 'Phaedrus' is later than the 'Republic'
| but his date 379 B.C. is about eight years too early.
|
| and then 3 referring to the Interpretants.  One of these I am pretty confident
| is into:  'Suggestives', 'Imperatives', 'Indicatives', where the Imperatives
| include the Interrogatives.  Of the other two I 'think' that one must be
| into Signs assuring their Interpretants by:
|
|    Instinct,  Experience,  Form.
|
| The other I suppose to be what, in my 'Monist'
| exposition of Existential Graphs, I called:
|
|    Semes,  Phemes,  Delomes.
|
| CSP, 'Selected Writings', pp. 406-408.
|
|'Charles S. Peirce:  Selected Writings (Values in a Universe of Chance)',
| edited with an introduction and notes by Philip P. Wiener, Dover,
| New York, NY, 1966.  Originally published under the subtitle
| in parentheses above, Doubleday & Company, 1958.

But see CP 4.549-550 for a significant distinction between
the categories (or modalities) and the orders of intention.
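
Incidentally, the arithmetic in that letter checks out on one common reading:
if the chain of determinations is taken to force the three modal values --
say, Possible, Existent, Necessitant -- to fall in weakly increasing order
along the ordered trichotomies, then n trichotomies yield (n+1)(n+2)/2
classes instead of 3^n.  The following scrap of Python is just my own
back-of-the-envelope check of that reading, not anything in Peirce's text:

from itertools import product

def classes(n):
    # Count assignments of 3 ranked modal values to n trichotomies arranged
    # in a chain of determination, where the rank may never decrease as we
    # pass down the chain.  (That "weak ordering" reading is an assumption
    # of mine, not something the letter states outright.)
    return sum(1 for seq in product(range(3), repeat=n)
               if all(seq[i] <= seq[i+1] for i in range(n - 1)))

print(classes(3), classes(6), classes(10))   # 10, 28, 66
print(3**6, 3**10)                           # 729, 59049 if independent

So 6 ordered trichotomies give 28 classes and 10 give 66, against 729 and
59049 for wholly independent trichotomies, which is just what the letter says.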

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 17

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway
JA = Jon Awbrey

JA: In closing, observe that the teridentity relation has turned up again
    in this context, as the second comma-ing of the universal term itself:

    1,, = B:B:B +, C:C:C +, D:D:D +, E:E:E +, I:I:I +, J:J:J +, O:O:O.

HC: I see that you've come around to a mention of teridentity again, Jon.
    Still, if I recall the prior discussions, then no one doubts that we
    can have a system of notation in which teridentity appears (I don't
    actually see it here).

Perhaps we could get at the root of the misunderstanding
if you tell me why you don't actually see the concept of
teridentity being exemplified here.

If it's only a matter of having lost the context of the
present discussion over the break, then you may find the
previous notes archived at the distal ends of the ur-links
that I append below (except for the first nine discussion
notes that got lost in a disk crash at the Arisbe Dev site).

HC: Also, I think we can have a system of notation in which
    teridentity is needed.  Those points seem reasonably clear.

The advantage of a concept is the integration of a species of manifold.
The necessity of a concept is the incapacity to integrate it otherwise.

Of course, no one should be too impressed with a concept that
is only the artifact of a particular system of representation.
So before we accord a concept the status of addressing reality,
and declare it a term of some tenured office in our intellects,
we would want to see some evidence that it helps us to manage
a reality that we cannot see a way to manage any other way.

Granted.

Now how in general do we go about an investiture of this sort?
That is the big question that would serve us well to consider
in the process of the more limited investigation of identity.
Indeed, I do not see how it is possible to answer the small
question if no understanding is reached on the big question.

HC: What remains relatively unclear is why we should need a system of notation
    in which teridentity appears or is needed as against one in which it seems
    not to be needed -- since assertion of identity can be made for any number
    of terms in the standard predicate calculus.

This sort of statement totally non-plusses me.
It seems like a complete non-sequitur or even
a contradiction in terms to me.

The question is about the minimal adequate resource base for
defining, deriving, or generating all of the concepts that we
need for a given but very general type of application that we
conventionally but equivocally refer to as "logic".  You seem
to be saying something like this:  We don't need 3-identity
because we have 4-identity, 5-identity, 6-identity, ..., in
the "standard predicate calculus".  The question is not what
concepts are generated in all the generations that follow the
establishment of the conceptual resource base (axiom system),
but what is the minimal set of concepts that we can use to
generate the needed collection of concepts.  And there the
answer is, in a way that is subject to the usual sorts of
mathematical proof, that 3-identity is the minimum while
2-identity is not big enough to do the job we want to do.
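
To make the point a shade more concrete, here is a minimal sketch in Python --
the set-of-tuples rendering of relations is my own ad hoc device, not Peirce's
notation -- of the positive half of the claim, namely that 3-identity suffices
to generate the higher identities by joining copies of itself at a shared
correlate:

U = ['A', 'B', 'C', 'D']

# 3-identity (teridentity) over U, rendered as the set of triples (x, x, x)
I3 = {(x, x, x) for x in U}

# Join two copies of I3 on a shared place to obtain 4-identity:
# (w, x, y, z) is admitted when (w, x, m) and (m, y, z) are both in I3.
I4 = {(w, x, y, z)
      for (w, x, m1) in I3
      for (m2, y, z) in I3
      if m1 == m2}

print(I4 == {(x, x, x, x) for x in U})   # True

The negative half -- that composing 2-adic relations only ever returns 2-adic
relations, the middle correlate being summed out, so that no stock of 2-adic
identities by itself yields a genuinely 3-adic relation -- is the part that
takes a proof rather than a computation, so the sketch only shows which way
the generation runs.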

Logic Of Relatives 01-41, LOR Discussion Notes 10-17.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 18

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BM = Bernard Morand
JA = Jon Awbrey

JA: but now I have to add to my do-list the problems of comparing the
    whole variorum of letters and drafts of letters to Lady Welby.
    I only have the CP 8 and Wiener versions here, so I will
    depend on you for ample excerpts from the Lieb volume.

BM: I made such a kind of comparison some time ago.  I selected
    the following 3 cases on the criterium of alternate "grounds".
    Hoping it could save some labor.  The first rank expressions
    come from the MS 339 written in Oct. 1904 and I label them
    with an (a).  I think that it is interesting to note that
    they were written four years before the letters to Welby
    and just one or two years after the Syllabus which is the
    usual reference for the classification in 3 trichotomies
    and 10 classes.  The second (b) is our initial table (from
    a draft to Lady Welby, Dec. 1908, CP 8.344) and the third
    (c) comes from a letter sent in Dec. 1908 (CP 8.345-8.376).
    A tabular presentation would be better but I can't do it.
    Comparing (c) against (a) and (b) is informative, I think.

Is this anywhere that it can be linked to from Arisbe?
I've seen many pretty pictures of these things over the
years, but may have to follow my own gnosis for a while.

Pages I have bookmarked just recently,
but not really had the chance to study:

http://www.digitalpeirce.org/hoffmann/p-sighof.htm
http://www.csd.uwo.ca/~merkle/thesis/Introduction.html
http://members.door.net/arisbe/menu/library/aboutcsp/merkle/hci-abstract.htm

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 19

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BM = Bernard Morand
JA = Jon Awbrey

I now have three partially answered messages on the table,
so I will just grab this fragment off the top of the deck.

BM: Peirce gives the following definition (CP 8.343):

BM, quoting CSP:

    | It is likewise requisite to distinguish
    | the 'Immediate Interpretant', i.e. the
    | Interpretant represented or signified in
    | the Sign, from the 'Dynamic Interpretant',
    | or effect actually produced on the mind
    | by the Sign; and both of these from
    | the 'Normal Interpretant', or effect
    | that would be produced on the mind by
    | the Sign after sufficient development
    | of thought.
    |
    | C.S. Peirce, 'Collected Papers', CP 8.343.

JA: Well, you've really tossed me in the middle of the briar patch now!
    I must continue with my reading from the 1870 LOR, ...

BM: Yes indeed!  I am irritated by not having the necessary
    turn of mind to fully grasp it.  But it seems to be a
    prerequisite in order to understand the very meaning
    of the above table.  It could be the same for:

BM, quoting CSP:

    | I define a 'Sign' as anything which on the one hand
    | is so determined by an Object and on the other hand
    | so determines an idea in a person's mind, that this
    | latter determination, which I term the 'Interpretant'
    | of the sign, is thereby mediately determined by that
    | Object.

BM: The so-called "latter determination" would make the 'Interpretant'
    a tri-relative term into a teridentity involving Sign and Object.
    Isn't it?

BM: I thought previously that Peirce's phrasing was just applying the
    principle of transitivity.  From O determines S and S determines I,
    it follows:  O determines I.  But this is not the same as teridentity.
    Do you think so or otherwise?

My answers are "No" and "Otherwise".

Continuing to discourse about definite universes thereof,
the 3-identity term over the universe 1 = {A, B, C, D, ...} --
I only said it was definite, I didn't say it wasn't vague! --
designates, roughly speaking, the 3-adic relation that may
be hinted at by way of the following series:

1,,  =  A:A:A +, B:B:B +, C:C:C +, D:D:D +, ...

I did a study on Peirce's notion of "determination".
As I understand it so far, we need to keep in mind
that it is more fundamental than causation, can be
a form of "partial determination", and is roughly
formal, mathematical, or "information-theoretic",
not of necessity invoking any temporal order.

For example, when we say "The points A and B determine the line AB",
this invokes the concept of a 3-adic relation of determination that
does not identify A, B, AB, is not transitive, as transitivity has
to do with the composition of 2-adic relations and would amount to
the consideration of a degenerate 3-adic relation in this context.

Now, it is possible to have a sign relation q whose sum enlists
an elementary sign relation O:S:I where O = S = I.  For example,
it makes perfect sense to me to say that the whole universe may
be a sign of itself to itself, so the conception is admissible.
But this amounts to a very special case, by no means general.
More generally, we are contemplating sums like the following:

q  =  O1:S1:I1 +, O2:S2:I2 +, O3:S3:I3 +, ...
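
For what it's worth, here is how I would render the last two displays in
Python, purely as a reading aid -- the universe, the sample triples, and
the 'W' standing in for the whole universe are all placeholders of mine:

# An ad hoc universe of elements
U = ['A', 'B', 'C', 'W']          # 'W' playing the part of the whole universe

# The 3-identity term 1,, over U, read as the sum A:A:A +, B:B:B +, ...
teridentity = {(x, x, x) for x in U}

# A small sign relation q, read as a sum of elementary sign relations O:S:I
q = {('A', 'B', 'C'),             # A is an object of sign B for interpretant C
     ('A', 'C', 'B'),
     ('W', 'W', 'W')}             # the universe as a sign of itself to itself

# The case O = S = I is admissible, but it is a very special case
print([t for t in q if t[0] == t[1] == t[2]])    # [('W', 'W', 'W')]

Each elementary relative O:S:I goes over to an ordered triple, and the
logical sum +, goes over to set union, which is all the structure that
the sketch needs.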

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 20

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway
JR = Joe Ransdell

HC: Though I certainly hesitate to think that we are separated
    from the world by a veil of signs, it seems clear, too, on
    Peircean grounds, that no sign can ever capture its object
    completely.

JR: Any case of self-representation is a case of sign-object identity,
    in some sense of "identity".  I have argued in various places that
    this is the key to the doctrine of immediate perception as it occurs
    in Peirce's theory.

To put the phrase back on the lathe:

| We are not separated from the world by a veil of signs --
| we are the veil of signs.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 21

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

AS = Armando Sercovich

AS: We are not separated from the world by a veil of signs nor we are a veil of signs.
    Simply we are signs.

AS, quoting CSP:

    | The *man-sign* acquires information, and comes to mean more than he did before.
    | But so do words.  Does not electricity mean more now than it did in the days
    | of Franklin?  Man makes the word, and the word means nothing which the man
    | has not made it mean, and that only to some man.  But since man can think
    | only by means of words or other external symbols, these might turn round
    | and say:  "You mean nothing which we have not taught you, and then only
    | so far as you address some word as the interpretant of your thought".
    | In fact, therefore, men and words reciprocally educate each other;
    | each increase of a man's information involves, and is involved by,
    | a corresponding increase of a word's information.
    |
    | Without fatiguing the reader by stretching this parallelism too far, it is
    | sufficient to say that there is no element whatever of man's consciousness
    | which has not something corresponding to it in the word;  and the reason is
    | obvious.  It is that the word or sign which man uses *is* the man itself.
    | For, as the fact that every thought is a sign, taken in conjunction with
    | the fact that life is a train of thought, proves that man is a sign;  so,
    | that every thought is an *external* sign proves that man is an external
    | sign.  That is to say, the man and the external sign are identical, in
    | the same sense in which the words 'homo' and 'man' are identical.  Thus
    | my language is the sum total of myself;  for the man is the thought ...
    |
    |'Charles S. Peirce:  Selected Writings (Values in a Universe of Chance)',
    | edited with an introduction and notes by Philip P. Wiener, Dover,
    | New York, NY, 1966. Originally published under the subtitle
    | in parentheses above, Doubleday & Company, 1958.

I read you loud and clear.
Every manifold must have
its catalytic converter.

<Innumerate Continuation:>

TUC = The Usual CISPEC

TUC Alert:

| E.P.A. Says Catalytic Converter Is
| Growing Cause of Global Warming
| By Matthew L. Wald
| Copyright 1998 The New York Times
| May 29, 1998
| -----------------------------------------------------------------------
| WASHINGTON -- The catalytic converter, an invention that has sharply
| reduced smog from cars, has now become a significant and growing cause
| of global warming, according to the Environmental Protection Agency

Much as I would like to speculate ad libitum on these exciting new prospects for the
application of Peirce's chemico-algebraic theory of logic to the theorem-o-dynamics
of auto-semeiosis, I must get back to "business as usual" (BAU) ...

And now a word from our sponsor ...

http://www2.naias.com/

Reporting from Motown ---

Jon Awbrey

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 22

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway

HC: You quote the following passage from a prior posting of mine:

HC: What remains relatively unclear is why we should need a system of notation
    in which teridentity appears or is needed as against one in which it seems
    not to be needed -- since assertion of identity can be made for any number
    of terms in the standard predicate calculus.

HC: You comment as follows:

JA: This sort of statement totally non-plusses me.
    It seems like a complete non-sequitur or even
    a contradiction in terms to me.

JA: The question is about the minimal adequate resource base for
    defining, deriving, or generating all of the concepts that we
    need for a given but very general type of application that we
    conventionally but equivocally refer to as "logic".  You seem
    to be saying something like this:  We don't need 3-identity
    because we have 4-identity, 5-identity, 6-identity, ..., in
    the "standard predicate calculus".  The question is not what
    concepts are generated in all the generations that follow the
    establishment of the conceptual resource base (axiom system),
    but what is the minimal set of concepts that we can use to
    generate the needed collection of concepts.  And there the
    answer is, in a way that is subject to the usual sorts of
    mathematical proof, that 3-identity is the minimum while
    2-identity is not big enough to do the job we want to do.

HC: I have fallen a bit behind on this thread while attending to some other 
    matters, but in this reply, you do seem to me to be coming around to an 
    understanding of the issues involved, as I see them.  You put the matter
    this way, "We don't need 3-identity because we have 4-identity, 5-identity, 
    6-identity, ..., in the 'standard predicate calculus'".  Actually, as I think 
    you must know, there is no such thing as "4-identity", "5-identity", etc., in 
    the standard predicate calculus.  It is more that such concepts are not needed,
    just as teridentity is not needed, since the general apparatus of the predicate
    calculus allows us to express identity among any number of terms without special
    provision beyond "=".

No, that is not the case.  Standard predicate calculus allows the expression
of predicates I_k, for k = 2, 3, 4, ..., such that I_k (x_1, ..., x_k) holds
if and only if all x_j, for j = 1 to k, are identical.  So predicate calculus
contains a k-identity predicate for all such k.  So whether "they're in there"
is not an issue.  The question is whether these or any other predicates can be
constructed or defined in terms of 2-adic relations alone.  And the answer is
no, they cannot.  The vector of the misconception counterwise appears to be
as various a virus as the common cold, and every bit as resistant to cure.
I have taken the trouble to enumerate some of the more prevalent strains,
but most of them appear to go back to the 'Principia Mathematica', and
the variety of nominalism called "syntacticism" -- Ges-und-heit! --
that was spread by it, however unwittedly by some of its carriers.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 23

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

In trying to answer the rest of your last note,
it seems that we cannot go any further without
achieving some concrete clarity as to what is
denominated by "standard predicate calculus",
that is, "first order logic", or whatever.

There is a "canonical" presentation of the subject, as I remember it, anyway,
in the following sample of materials from Chang & Keisler's 'Model Theory'.
(There's a newer edition of the book, but this part of the subject hasn't
really changed all that much in ages.)

Model Theory 01-39

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 24

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway

HC: I might object that "teridentity" seems to come
    to a matter of "a=b & b=c", so that a specific
    predicate of teridentity seems unnecessary.

I am presently concerned with expositing and interpreting
the logical system that Peirce laid out in the LOR of 1870.
It is my considered opinion after thirty years of study that
there are untapped resources remaining in this work that have
yet to make it through the filters of that ilk of syntacticism
that was all the rage in the late great 1900's.  I find there
to be an appreciably different point of view on logic that is
embodied in Peirce's work, and until we have made the minimal
effort to read what he wrote it is just plain futile to keep
on pretending that we have already assimilated it, or that
we are qualified to evaluate its cogency.

The symbol "&" that you employ above denotes a mathematical object that
qualifies as a 3-adic relation.  Independently of my own views, there
is an abundance of statements in evidence that mathematical thinkers
from Peirce to Goedel consider the appreciation of facts like this
to mark the boundary between realism and nominalism in regard to
mathematical objects.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 25

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway
JA = Jon Awbrey

HC: I might object that "teridentity" seems to come
    to a matter of "a=b & b=c", so that a specific
    predicate of teridentity seems unnecessary.

JA: I am presently concerned with expositing and interpreting
    the logical system that Peirce laid out in the LOR of 1870.
    It is my considered opinion after thirty years of study that
    there are untapped resources remaining in this work that have
    yet to make it through the filters of that ilk of syntacticism
    that was all the rage in the late great 1900's.  I find there
    to be an appreciably different point of view on logic that is
    embodied in Peirce's work, and until we have made the minimal
    effort to read what he wrote it is just plain futile to keep
    on pretending that we have already assimilated it, or that
    we are qualified to evaluate its cogency.

JA: The symbol "&" that you employ above denotes a mathematical object that
    qualifies as a 3-adic relation.  Independently of my own views, there
    is an abundance of statements in evidence that mathematical thinkers
    from Peirce to Goedel consider the appreciation of facts like this
    to mark the boundary between realism and nominalism in regard to
    mathematical objects.

HC: I would agree, I think, that "&" may be thought of
    as a function mapping pairs of statements onto the
    conjunction of that pair.

Yes, indeed, in the immortal words of my very first college algebra book:
"A binary operation is a ternary relation".  As it happens, the symbol "&"
is equivocal in its interpretation -- computerese today steals a Freudian
line and dubs it "polymorphous" -- it can be regarded in various contexts
as a 3-adic relation on syntactic elements called "sentences", on logical
elements called "propositions", or on truth values collated in the boolean
domain B = {false, true} = {0, 1}.  But the mappings and relations between
all of these interpretive choices are moderately well understood.  Still,
no matter how many ways you enumerate for looking at a B-bird, the "&" is
always 3-adic.  And that is sufficient to meet your objection, so I think
I will just leave it there until next time.
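
If it helps to see the 3-adic character of "&" written out in full, here is
a minimal sketch in Python -- the names are mine, and nothing hangs on the
choice of 0 and 1 for the boolean values:

B = (0, 1)

# "&" in its functional guise:  a map B x B -> B
def conj(x, y):
    return x & y

# "&" in its relational guise:  a 3-adic relation, that is, a subset of B^3
CONJ = {(x, y, conj(x, y)) for x in B for y in B}

print(sorted(CONJ))
# [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]

That is of course the truth-value reading;  the readings in terms of
sentences or propositions are 3-adic in just the same way, the third
place being occupied by the conjunction formed from the first two.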

On a related note, that I must postpone until later:
We seem to congrue that there is a skewness between
the way that most mathematicians use logic and some
philosophers talk about logic, but I may not be the
one to set it adjoint, much as I am inclined to try.
At the moment I have this long-post-poned exponency
to carry out.  I will simply recommend for your due
consideration Peirce's 1870 Logic Of Relatives, and
leave it at that.  There's a cornucopiousness to it
that's yet to be dreamt of in the philosophy of the
1900's.  I am doing what I can to infotain you with
the Gardens of Mathematical Recreations that I find
within Peirce's work, and that's in direct response
to many, okay, a couple of requests.  Perhaps I can
not hope to attain the degree of horticultural arts
that Gardners before me have exhibited in this work,
but then again, who could?  Everybody's a critic --
but the better ones read first, and criticize later.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 26

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway

HC: But on the other hand, it is not customary to think of "&" as
    a relation among statements or sentences -- as, for instance,
    logical implication is considered a logical relation between
    statements or sentences.

Actually, it is the custom in many quarters to treat all of the
boolean operations, logical connectives, propositional relations,
or whatever you want to call them, as "equal citizens", having each
their "functional" (f : B^k -> B) and their "relational" (L c B^(k+1))
interpretations and applications.  From this vantage, the interpretive
distinction that is commonly regarded as that between "assertion" and
mere "contemplation" is tantamount to a "pragmatic" difference between
computing the values of a function on a given domain of arguments and
computing the inverse of a function vis-a-vis a prospective true value.
This is the logical analogue of the way that our mathematical models
of reality have long been working, unsuspected and undisturbed by
most philosophers of science, I might add.  If only the logical
side of the ledger were to be developed rather more fully than
it is at present, we might wake one of these days to find our
logical accounts of reality, finally, at long last, after an
overweaningly longish adolescence, beginning to come of age.
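
Here is the sort of thing I have in mind, sketched in Python -- the
particular proposition f is an arbitrary example of mine, chosen only
to have something definite to compute with:

from itertools import product

B = (0, 1)

# A proposition in its functional guise, f : B^3 -> B
def f(x, y, z):
    return int((x or y) and not z)

# One direction of use:  compute the value of f on given arguments
print(f(1, 0, 0))                    # 1

# The other direction:  compute the inverse image of the value 1, that is,
# the relation F c B^3 of all argument triples that satisfy f
F = {xyz for xyz in product(B, repeat=3) if f(*xyz)}
print(sorted(F))                     # [(0, 1, 0), (1, 0, 0), (1, 1, 0)]

One and the same object f is being worked in two directions, once as a map
to be evaluated and once as the indicator of a relation to be solved for,
which is the "equal citizenship" I have in mind.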

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 27

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway

HC: For, if I make an assertion A&B, then I am not asserting
    that the statement A stands in a relation to a statement B.
    Instead, I am asserting the conjunction A&B (which logically
    implies both the conjuncts in view of the definition of "&").

Please try to remember where we came in.  This whole play of
animadversions about 3-adicity and 3-identity is set against
the backdrop of a single point, over the issue as to whether
3-adic relations are wholly dispensable or somehow essential
to logic, mathematics, and indeed to argument, communication,
and reasoning in general.  Some folks clamor "Off with their
unnecessary heads!" -- other people, who are forced by their
occupations to pay close attention to the ongoing complexity
of the processes at stake, know that, far from finding 3-ads
in this or that isolated corner of the realm, one can hardly
do anything at all in the ways of logging or mathing without
running smack dab into veritable hosts of them.

I have just shown that "a=b & b=c" involves a 3-adic relation.
Some people would consider this particular 3-adic relation to
be more complex than the 3-identity relation, but that may be
a question of taste.  At any rate, the 3-adic aspect persists.
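
Whether one finds it simpler or more complex, the relation in question can
be tabulated outright.  A minimal check in Python, over a made-up 3-element
universe of my own choosing:

from itertools import product

U = ['A', 'B', 'C']

# The 3-adic relation picked out by the compound condition  a = b  &  b = c
chained = {(a, b, c) for a, b, c in product(U, repeat=3) if a == b and b == c}

# The 3-identity (teridentity) relation over the same universe
teridentity = {(x, x, x) for x in U}

print(chained == teridentity)    # True

On this universe the two descriptions carve out exactly the same set of
triples, so the difference between them is a difference of expression,
not a difference in the relation expressed.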

HC: If "&" counts as a triadic relation, simply because it serves
    to conjoin two statements into a third, then it would seem that
    any binary relation 'R' will count as triadic, simply because
    it places two things into a relation, which is a "third" thing.
    By the same kind of reasoning a triadic relation, as ordinarily
    understood would be really 4-adic.

The rest of your comments are just confused,
and do not use the terms as they are defined.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Discussion Note 28

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o



o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  On Deck

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o



o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Work 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BM: Several discussions could take place there,
    as to the reasons for the number of divisions,
    the reasons of the titles themselves.  Another
    one is my translation from "normal interpretant"
    into "final interpretant" (which one is called
    elsewhere "Eventual Interpretant" or "Destinate
    Interpretant" by CSP).  I let all this aside
    to focus on the following remark:

BM: 6 divisions correspond to individual correlates:

    (S, O_i, O_d, I_i, I_d, I_n),

    3 divisions correspond to dyads:

    (S : O_d, S : I_d, S : I_n),

    and the tenth to a triad:

    (S : O_d : I_n).

    This remark would itself deserve
    a lot of explanation, but once
    more I leave this aside.

BM: Then we have the following very clear statement from Peirce:

   | It follows from the Definition of a Sign
   | that since the Dynamoid Object determines
   | the Immediate Object,
   | which determines the Sign,
   | which determines the Destinate Interpretant
   | which determines the Effective Interpretant
   | which determines the Explicit Interpretant
   |
   | the six trichotomies, instead of determining 729 classes of signs,
   | as they would if they were independent, only yield 28 classes; and
   | if, as I strongly opine (not to say almost prove) there are four other
   | trichotomies of signs of the same order of importance, instead of making
   | 59049 classes, these will only come to 66.
   |
   | CSP, "Letter to Lady Welby", 14 Dec 1908, LW, p. 84.

BM: The separation made by CSP between 6 divisions and four others
    seems to rely upon the suggested difference between individual
    correlates and relations.  We get the idea that the 10 divisions
    are ordered on the whole and will end in 66 classes (by means of
    three ordered modal values on each division:  maybe, canbe, wouldbe).
    Finally we also have the ordering for the divisions relative to the
    correlates, which I write in my notation:

    Od -> Oi -> S -> If -> Id -> Ii.

BM: This order of "determinations" has bothered many people,
    but if we think of it as operative in semiosis, it seems
    to be correct (at least to my eyes).  Thus the question is:
    where, how, and why do the "four other trichotomies" fit into
    this schema to obtain a linear ordering on the whole 10 divisions?
    Maybe the question can be rephrased as:  how do intensional
    relationships fit into an extensional one?  Possibly the
    question could be asked the other way.  R. Marty responds
    that in a certain sense the four trichotomies give nothing
    more than the previous six ones, but I strongly doubt this.

BM: I put the problem in graphical form in an attached file
    because my message editor will probably make some mistakes.
    I make a distinction between arrow types in the drawing because I am
    not sure that the sequence of correlate determinations is of
    the same nature as correlate determinations inside relations.

BM: It looks as if the problem amounts to some kind of projection
    of relations on the horizontal axis made of correlates.

BM: If we consider some kind of equivalence (and this seems necessary to
    obtain a linear ordering), by means of Agent -> Patient reductions on
    relations, then erasing transitive determinations leads to:

    Od -> Oi -> S -> S-Od -> If -> S-If -> S-Od-If -> Id -> S-Id -> Ii

BM: While it is interesting to compare the subsequence
    S-Od -> If -> S-If -> S-Od-If with the pragmatic maxim,
    I have no clear idea of the (in-) validity of such a result.
    But I am convinced that the clarity has to come from the
    Logic Of Relatives.

BM: I will be very grateful if you can make something with all that stuff.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Work 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BM: I also found this passage which may be of some interest
    (CP 4.540, Prolegomena to an Apology of Pragmatism):


| But though an Interpretant is not necessarily a Conclusion, yet a Conclusion
| is necessarily an Interpretant.  So that if an Interpretant is not subject to
| the rules of Conclusions there is nothing monstrous in my thinking it is
| subject to some generalization of such rules.  For any evolution of thought,
| whether it leads to a Conclusion or not, there is a certain normal course,
| which is to be determined by considerations not in the least psychological,
| and which I wish to expound in my next article;  and while I entirely agree,
| in opposition to distinguished logicians, that normality can be no criterion
| for what I call rationalistic reasoning, such as alone is admissible in
| science, yet it is precisely the criterion of instinctive or common-sense
| reasoning, which, within its own field, is much more trustworthy than
| rationalistic reasoning.  In my opinion, it is self-control which makes any
| other than the normal course of thought possible, just as nothing else makes
| any other than the normal course of action possible;  and just as it is
| precisely that that gives room for an ought-to-be of conduct, I mean Morality,
| so it equally gives room for an ought-to-be of thought, which is Right Reason;
| and where there is no self-control, nothing but the normal is possible.  If
| your reflections have led you to a different conclusion from mine, I can still
| hope that when you come to read my next article, in which I shall endeavor to
| show what the forms of thought are, in general and in some detail, you may yet
| find that I have not missed the truth.
|
| C.S. Peirce, 'Collected Papers', CP 4.540.



JA: Just as I have always feared, this classification mania
    appears to be communicable! But now I must definitely
    review the Welby correspondence, as all this stuff was
    a blur to my sensibilities the last 10 times I read it.


BM: I think that I understand your reticence.  I wonder if:

    a.  the fact that the letters to Lady Welby have been published as such
        has not led us to approach the matter in a certain way;

    b.  other sources, eventually unpublished, would give another light on
        the subject, namely a logical one.  I think of MS 339, for example,
        which seems to be part of the Logic Notebook.  I have had access
        to some pages of it, but not to the whole MS.

BM: A last remark.  I don't think that classification is a mania for CSP,
    but I know that you know that!  It is an instrument of thought, and
    I think that in this case it is much more a plan for experimenting
    than the exposition of a conclusion.  Experimenting what?  There is
    a strange statement in a letter to W. James where CSP says that what
    is in question in his "second way of dividing signs" is the logical
    theory of numbers.  I give this from memory.  I do not have the quote
    at hand now, but I will search for it if needed.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Work 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

BM = Bernard Morand
JA = Jon Awbrey

JA: ... but now I have to add to my do-list the problems of comparing
    the whole variorum of letters and drafts of letters to Lady Welby.
    I only have the CP 8 and Wiener versions here, so I will depend
    on you for ample excerpts from the Lieb volume.

BM: I made such a kind of comparison some time ago. I selected the following
    3 cases on the criterium of alternate "grounds". Hoping it could save
    some labor. The first rank expressions come from the MS 339 written in
    Oct. 1904 and I label them with an (a). I think that it is interesting to
    note that they were written four years before the letters to Welby and
    just one or two years after the Syllabus which is the usual reference for
    the classification in 3 trichotomies and 10 classes. The second (b) is
    our initial table (from a draft to Lady Welby, Dec. 1908, CP 8.344) and
    the third (c) comes from a letter sent in Dec. 1908 (CP 8.345-8.376). A
    tabular presentation would be better but I can't do it. Comparing (c)
    against (a) and (b) is informative, I think.


    Division 1

    (a) According to the matter of the Sign
    (b) According to the Mode of Apprehension of the Sign itself
    (c) Signs in respect to their Modes of possible Presentation

    Division 2

    (a) According to the Immediate Object
    (b) According to the Mode of Presentation of the Immediate Object
    (c) Objects, as they may be presented

    Division 3

    (a) According to the Matter of the Dynamic Object
    (b) According to the Mode of Being of the Dynamical Object
    (c) In respect to the Nature of the Dynamical Objects of Signs

    Division 4

    (a) According to the mode of representing object by the Dynamic Object
    (b) According to the Relation of the Sign to its Dynamical Object
    (c) The fourth Trichotomy

    Division 5

    (a) According to the Immediate Interpretant
    (b) According to the Mode of Presentation of the Immediate Interpretant
    (c) As to the nature of the Immediate (or Felt?) Interpretant

    Division 6

    (a) According to the Matter of Dynamic Interpretant
    (b) According to the Mode of Being of the Dynamical Interpretant
    (c) As to the Nature of the Dynamical Interpretant

    Division 7

    (a) According to the Mode of Affecting Dynamic Interpretant
    (b) According to the relation of the Sign to the Dynamical Interpretant
    (c) As to the Manner of Appeal to the Dynamic Interpretant

    Division 8

    (a) According to the Matter of Representative Interpretant
    (b) According to the Nature of the Normal Interpretant
    (c) According to the Purpose of the Eventual Interpretant

    Division 9

    (a) According to the Mode of being represented by Representative Interpretant
    (b) According to the relation of the Sign to the Normal Interpretant
    (c) As to the Nature of the Influence of the Sign

    Division 10

    (a) According to the Mode of being represented to represent object by Sign, Truly
    (b) According to the Triadic Relation of the Sign to its Dynamical Object
        and to its Normal Interpretant
    (c) As to the Nature of the Assurance of the Utterance

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Work 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JA: It may appear that one has side-stepped the issue of empiricism
    that way, but then all that stuff about the synthetic a priori
    raises its head, and we have Peirce's insight that mathematics
    is observational and even experimental, and so I must trail off
    into uncoordinated elliptical thoughts ...

HC: In contrast with this it strikes me that not all meanings of "analytic"
    and "synthetic" have much, if anything, to do with the "analytic and the
    synthetic", say, as in Quine's criticism of the "dualism" of empiricism.
    Surely no one thinks that a plausible analysis must be analytic or that 
    synthetic materials tell us much about epistemology.  So, it is not
    clear that anything connected with analyticity or a priori knowledge
    will plausibly or immediately arise from a discussion of analytical
    geometry.  Prevalent mathematical assumptions or postulates, yes --
    but who says these are a priori?  Can't non-Euclidean geometry also
    be treated in the style of analytic geometry? 

HC: I can imagine that a discussion might be forced in
    that direction, but the connections don't strike me
    as at all obvious or pressing.  Perhaps Jon would just
    like to bring up the notion of the synthetic a priori?
    But why?

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Work 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HC = Howard Callaway

HC: But I see you as closer to my theme or challenge, when you say
    "The question is about the minimal adequate resource base for
    defining, deriving, or generating all of the concepts that we
    need for a given but very general type of application that we
    conventionally but equivocally refer to as 'logic'".

HC: I think it is accepted on all sides of the discussion that there
    is some sort of "equivalence" between the standard predicate logic
    and Peirce's graphs.

There you would be mistaken, except perhaps for the fact that
"some sort of equivalence" is vague to the depths of vacuity.
It most particularly does not mean "all sorts of equivalence"
or even "all important sorts of equivalence".  It is usually
interpreted to mean an extremely abstract type of syntactic
equivalence, and that is undoubtedly one important type of
equivalence that it is worth examining whether two formal
systems have or not.  But it is precisely here that we find
another symptom of syntacticism, namely, the deprecation
of all other important qualities of formal systems, most
pointedly their "analytic", "semantic", and "pragmatic"
qualities, which make all the difference in how well the
system actually serves its users in a real world practice.
You can almost hear the whining and poohing coming from the
syntactic day camp, but those are the hard facts of the case.

HC: But we find this difference in relation to the vocabulary used to express
    identity.  From the point of view of starting with the predicate calculus,
    we don't need "teridentity".  So, this seems to suggest there is something
    of an interesting contrast in Peirce's logic, which brings in this concept.
    The obvious question may be expressed by asking why we need teridentity
    in Peirce's system and how Peirce's system may recommend itself in contrast
    to the standard way with related concepts.  This does seem to call for
    a comparative evaluation of distinctive systems.  That is not an easy task,
    as I think we all understand. But I do think that if it is a goal to have
    Peirce's system better appreciated, then that kind of question must be
    addressed.  If "=" is sufficient in the standard predicate calculus,
    to say whatever we may need to say about the identity of terms, then
    what is the advantage of an alternative system which insists on always 
    expressing identity of triples?

HC: The questions may look quite different, depending on where we start.
    But in any case, I thought I saw some better appreciation of the
    questions in your comments above.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Work 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

It's been that way for about as long as anybody can remember, and
it will remain so, in spite of the spate of history rewriting and
image re-engineering that has become the new rage in self-styled
"analytic" circles.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Work 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

The brands of objection that you continue to make, with no evidence
of reflection on the many explanations that I and others have taken
the time to write out for you, lead me to believe that you are just
not interested in making that effort.  That's okay, life is short,
the arts are long and many, there is always something else to do.

HC: For, if I make an assertion A&B, then I am not asserting
    that the statement A stands in a relation to a statement B.
    Instead, I am asserting the conjunction A&B (which logically
    implies both the conjuncts in view of the definition of "&").
    If "&" counts as a triadic relation, simply because it serves
    to conjoin two statements into a third, then it would seem that
    any binary relation 'R' will count as triadic, simply because
    it places two things into a relation, which is a "third" thing.
    By the same kind of reasoning a triadic relation, as ordinarily
    understood would be really 4-adic.

HC: Now, I think this is the kind of argument you are making, ...

No, it's the kind of argument that you are making.
I am not making that kind of argument, and Peirce
did not make that kind of argument.  Peirce used
his terms subject to definitions that would have
been understandable, and remain understandable,
to those of his readers who understand these
elementary definitions, either through their
prior acquaintance with standard concepts
or through their basic capacity to read
a well-formed, if novel definition.

Peirce made certain observations about the structure of logical concepts
and the structure of their referents.  Those observations are accurate
and important.  He expressed those observations in a form that is clear
to anybody who knows the meanings of the technical terms that he used,
and he is not responsible for the interpretations of those who don't.

HC: ... and it seems to both trivialize the claimed argument
    for teridentity, by trivializing the conception of what
    is to count as a triadic, as contrasted with a binary
    relation, and it also seems to introduce a confusion
    about what is to count as a binary, vs. a triadic
    relation.

Yes, the argument that you are making trivializes
just about everything in sight, but that is the
common and well-known property of any argument
that fails to base itself on a grasp of the
first elements of the subject matter.

HC: If this is mathematical realism, then so much the worse for
    mathematical realism.  I am content to think that we do not
    have a free hand in making up mathematical truth.

No, it's not mathematical realism.  It is your reasoning,
and it exhibits all of the symptoms of syntacticism that
I have already diagnosed.  It's a whole other culture
from what is pandemic in the practice of mathematics,
and it never fails to surprise me that people who
would never call themselves "relativists" in any
other matter of culture suddenly turn into just
that in matters of simple mathematical fact.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Manifolds of Sensuous Impressions

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

MSI.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 11.  On the Hypotheses Which Lie at the Basis of Geometry
|
| On June 10, 1854, Bernhard Riemann delivered his probationary lecture
| as a candidate for an unpaid lectureship at Göttingen.  The title of the
| paper was "Ueber die Hypothesen welche der Geometrie zu Grunde Liegen".
| It is reported that the lecture won enthusiastic praise from Gauss and
| after its publication in 1867 it won similar praise throughout the
| mathematical world.  One of the first to appreciate its importance
| was the English mathematician and friend of the Peirce family,
| William Kingdon Clifford.  A first-rate mathematician himself,
| Clifford recognized the great importance of Riemann's lecture
| and in 1873 he translated it into English.  It is quite probable
| that Clifford's praise of Riemann's work influenced Peirce's judgment
| of it.  Concerning him, Peirce wrote, "Bernhard Riemann is recognized
| by all mathematicians as 'the' highest authority upon the philosophy
| of geometry".  Since Peirce's views on the foundations of geometry
| are based on Riemann's, we must examine the latter's theory in
| some detail.  (MGM, p. 219).
|
| Murray G. Murphey,
|'The Development of Peirce's Philosophy',
| first published, Harvard University Press, Cambridge, MA, 1961.
| reprinted, Hackett Publishing Company, Indianapolis, IN, 1993.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

MSI.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 11.  On the Hypotheses Which Lie at the Basis of Geometry (cont.)
|
| Riemann opened his lecture with the following much-quoted passage.
|
| | It is known that geometry assumes, as things given, both the notion of space and
| | the first principles of constructions in space.  She gives definitions of them
| | which are merely nominal, while the true determinations appear in the form of
| | axioms.  The relation of these assumptions remains consequently in darkness;
| | we neither perceive whether and how far their connection is necessary, nor,
| | 'a priori', whether it is possible.
|
| "From Euclid to Legendre", Riemann declares, nothing
| has been done to remove this obscurity.  He continues:
|
| | The reason of this is doubtless that the general notion of multiply extended
| | magnitudes (in which space-magnitudes are included) remained entirely unworked.
| | I have in the first place, therefore, set myself the task of constructing the
| | notion of a multiply extended magnitude out of general notions of magnitude.
| | It will follow from this that a multiply extended magnitude is capable
| | of different measure-relations, and consequently that space is only a
| | particular case of a triply extended magnitude.  But hence flows as a
| | necessary consequence that the propositions of geometry cannot be derived
| | from general notions of magnitude, but that the properties which distinguish
| | space from other conceivable triply extended magnitudes are only to be deduced
| | from experience.  Thus arises the problem, to discover the simplest matters of
| | fact from which the measure-relations of space may be determined;  a problem
| | which from the nature of the case is not completely determinate, since there
| | may be several systems of matters of fact which suffice to determine the
| | measure-relations of space -- the most important system for our present
| | purpose being that which Euclid has laid down as a foundation.  These
| | matters of fact are -- like all matters of fact -- not necessary, but
| | only of empirical certainty;  they are hypotheses.  We may therefore
| | investigate their probability, which within the limits of observation
| | is of course very great ...
|
| He then proceeded to consider separately the notion of n-ply extended magnitude,
| and of the measure relations possible in such a manifold.  In explicating the
| former Riemann states:
|
| | Magnitude-notions are only possible where there is an antecedent
| | general notion which admits of different specialisations.  According
| | as there exists among these specialisations a continuous path from one
| | to another or not, they form a 'continuous' or 'discrete' manifoldness:
| | the individual specialisations are called in the first case points,
| | in the second case elements, of the manifoldness.
|
| As examples of notions whose specializations
| form a continuous manifoldness Riemann offers
| positions and colors.  He then continues:
|
| | Definite portions of a manifoldness, distinguished by a mark
| | or by a boundary, are called Quanta.  Their comparison with
| | regard to quantity is accomplished in the case of discrete
| | magnitudes by counting, in the case of continuous magnitudes
| | by measuring.  Measure consists in the superposition of the
| | magnitudes to be compared;  it therefore requires a means
| | of using one magnitude as the standard for another.
|
| (MGM, pp. 219-220)
| 
| Murray G. Murphey,
|'The Development of Peirce's Philosophy',
| first published, Harvard University Press, Cambridge, MA, 1961.
| reprinted, Hackett Publishing Company, Indianapolis, IN, 1993.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

MSI.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 11.  On the Hypotheses Which Lie at the Basis of Geometry (cont.)
|
| Riemann then shows how an n-ply extended manifold
| may be constructed and determination of place in it
| reduced to determinations of quantity.  He then faces
| the question of the measure relations possible in such
| a manifold.  Mathematically, this portion of the essay is
| of great significance, but the technical development need
| not concern us here.  The basic idea may be summarized as
| follows.  Riemann notes that measurement requires quantity
| to be independent of place and he accordingly adopts the
| hypothesis that the length of lines is independent of their
| position so that every line is measurable by every other.
| If we define distance as the square root of a quadratic
| function of the coordinates then Riemann shows that for
| the length of a line to be independent of its position,
| the space in which the line lies must have constant
| curvature.  "The common character of these continua
| whose curvature is constant may also be expressed
| thus, that figures may be moved in them without
| stretching ... whence it follows that in
| aggregates with constant curvature
| figures may have any arbitrary
| position given them."
|
| In the final section of the essay Riemann turns to the
| question of the application of his technical apparatus
| to empirical space for the determination of its metric
| properties.  In a space of constant curvature in which
| line length is independent of position, the empirical
| truth of the Euclidean axiom that the sum of the angles
| of a triangle is equal to two right angles is sufficient
| to determine the metric properties of that space.  But such
| empirical determinations run into difficulty in the cases of
| the infinitely great and the infinitely small.  "The questions
| about the infinitely great are for the interpretation of nature
| useless questions", according to Riemann, but the same is not true
| on the side of the infinitely small.  He continues:
|
| | If we suppose that bodies exist independently of position,
| | the curvature is everywhere constant, and it then results
| | from astronomical measurements that it cannot be different
| | from zero;  or at any rate its reciprocal must be an area
| | in comparison with which the range of our telescopes may
| | be neglected.  But if this independence of bodies from
| | position does not exist, we cannot draw conclusions
| | from metric relations of the great, to those of the
| | infinitely small;  in that case the curvature at
| | each point may have an arbitrary value in three
| | directions, provided that the total curvature
| | of every measurable portion of space does not
| | differ sensibly from zero.  ...  Now it seems
| | that the empirical notions on which the metrical
| | determinations of space are founded, the notion of
| | a solid body and a ray of light, cease to be valid
| | for the infinitely small.  We are therefore quite
| | at liberty to suppose that the metric relations
| | of space in the infinitely small do not conform
| | to the hypotheses of geometry;  and we ought in
| | fact to suppose it, if we can thereby obtain a
| | simpler explanation of phenomena.
| |
| | The question of the validity of the hypotheses of
| | geometry in the infinitely small is bound up with
| | the question of the ground of the metric relations
| | of space.  In this last question ... is found the
| | application of the remark made above;  that in a
| | discrete manifoldness, the ground of its metric
| | relations is given in the notion of it, while
| | in a continuous manifoldness, this ground
| | must come from outside.  Either therefore
| | the reality which underlies space must
| | form a discrete manifoldness, or we
| | must seek the ground of its metric
| | relations outside it, in binding
| | forces which act upon it.
|
| But the final answer to this question, Riemann asserts,
| must come from physics rather than from pure mathematics.
|
| MGM, pp. 221-222.
|
| Murray G. Murphey,
|'The Development of Peirce's Philosophy',
| first published, Harvard University Press, Cambridge, MA, 1961.
| reprinted, Hackett Publishing Company, Indianapolis, IN, 1993.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

MSI.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| 11.  On the Hypotheses Which Lie at the Basis of Geometry (cont.)
|


| MGM, pp. 222-223.
|
| Murray G. Murphey,
|'The Development of Peirce's Philosophy',
| first published, Harvard University Press, Cambridge, MA, 1961.
| reprinted, Hackett Publishing Company, Indianapolis, IN, 1993.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Non Sequi Tours

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

NST.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Penumber

|"The N.I.C.E. marks the beginning of a new era -- the 'really' scientific era.
| Up to now, everything has been haphazard.  This is going to put science itself
| on a scientific basis.  There are to be forty interlocking committees sitting
| every day and they've got a wonderful gadget -- I was shown the model last time
| I was in town -- by which the findings of each committee print themselves off
| in their own little compartment on the Analytical Notice-Board every half hour.
| Then, that report slides itself into the right position where it's connected up
| by little arrows with all the relevant parts of the other reports.  A glance at
| the Board shows you the policy of the whole Institute actually taking shape under
| your own eyes.  There'll be a staff of at least twenty experts at the top of the
| building working this Notice-Board in a room rather like the Tube control rooms.
| It's a marvellous gadget.  The different kinds of business all come out in the
| Board in different coloured lights.  It must have cost half a million.  They
| call it a Pragmatometer."
|
| C.S. Lewis, 'That Hideous Strength', 1945

Happy Beethoven's Birthday!

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

NST.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Umber

| All this did not in the least influence his sociological convictions.
| Even if he had been free from Belbury and wholly unambitious, it
| could not have done so, for his education had had the curious
| effect of making things that he read and wrote more real to
| him than things he saw.  Statistics about agricultural
| labourers were the substance;  any real ditcher,
| ploughman, or farmer's boy, was the shadow.
| Though he had never noticed it himself,
| he had a great reluctance, in his work,
| ever to use such words as "man" or "woman".
| He preferred to write about "vocational groups",
| "elements", "classes", and "populations":  for, in
| his own way, he believed as firmly as any mystic in
| the superior reality of the things that are not seen.
|
| C.S. Lewis, 'That Hideous Strength', 1945

Merry Solstice!

http://antwrp.gsfc.nasa.gov/apod/ap021221.html
http://antwrp.gsfc.nasa.gov/apod/ap021222.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

NST.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o



o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Reductions Among Relations

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Subj:  Reductions Among Relations
Date:  Sat, 14 Apr 2001 19:48:55 -0400
From:  Jon Awbrey <jawbrey@oakland.edu>
  To:  Stand Up Ontology <standard-upper-ontology@ieee.org>
  CC:  Matthew West <Matthew.R.West@is.shell.com>

One of the things that makes the general problem of RAR seem
just a little bit ill-defined is that you would have to survey
all of the conceivable ways of "getting new relations from old"
just to say what you mean by "L is reducible to {L_j : j in J}",
in other words, that if you had a set of "simpler" relations L_j,
for indices j in some set J, then this data would somehow fix
the original relation L that you are seeking to analyze,
to determine, to specify, to synthesize, or whatever.

In my experience, however, most people will eventually settle on
either one of two different notions of reducibility as capturing
what they have in mind, namely:

1.  Compositive Reduction
2.  Projective Reduction

As it happens, there is an interesting relationship between these
two notions of reducibility, the implications of which I am still
in the middle of studying, so I will try to treat them in tandem.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Projective Reduction of Relations in General

I will start out with the notion of "projective reduction"
of relations, in part because it is easier and much more
intuitive (in the visual sense), but also because there
are a number of tools that we need for the other brand
of reduction that arise quite naturally as a part of
the projective setting.

Before we get into the operational machinery and the
vocational vocabulary of it all, let me just suggest
that you keep in mind the following style of picture,
which pretty much says it all, reducing to a unity the
"motley of the ten thousand terms" (MOT^4), in the manner
that the aptest genres and the fittest motifs of
representation are generally found to manifest.

Picture a k-adic relation L as a body
that resides in k-dimensional space X.
If the dimensions are X_1, ..., X_k,
then the "extension" of L, an object
that I will, for the whole time that
I am working this "extensional" vein,
regard as tantamount to the relation
itself, is a subset of the cartesian
product space X  =  X_1 x ... x X_k.

If you pick out your favorite family F of domains among these
dimensions, say, X_F = {X_j : j in F}, then the "projection" of
a point x of X on the subspace that gets "generated" along these
dimensions of X_F can be denoted by the notation "Proj_F (x)".

By extension, the projection of any relation L on that subspace
is denoted by "Proj_F (L)", or still more simply, by "Proj_F L".

The question of "projective reduction" for k-adic relations
can be stated with moderate generality in the following way:

| Given a set of k-place relations in the same space X and
| a set of projections from X to the associated subspaces,
| do the projections afford sufficient data to tell the
| different relations apart?
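
For readers who like to check such notions by machine, here is
a minimal sketch in Python of the projection operator described
above.  The function name "proj" and the 0-based indexing of the
relational places are my own conventions, not anything fixed by
the text;  a k-adic relation is modelled simply as a set of
k-tuples.

def proj(L, F):
    """Project the relation L on the places listed in F (0-based)."""
    return {tuple(t[j] for j in F) for t in L}

# For example, projecting a 3-adic relation on its 2nd and 3rd places:
L = {(0, 1, 1), (1, 0, 1)}
assert proj(L, (1, 2)) == {(1, 1), (0, 1)}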

Next time, in order to make this discussion more concrete,
I will focus on some examples of triadic relations.  In fact,
to start within the bounds of no doubt familiar examples by now,
I will begin with the examples of sign relations that I used before.

http://suo.ieee.org/email/msg00729.html
http://suo.ieee.org/email/msg01224.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Projective Reduction of Triadic Relations

We are ready to take up the question of whether
3-adic relations, in general, and in particular
cases, are "determined by", "reducible to", or
"reconstructible from" their 2-adic projections.

Suppose that L contained in XxYxZ is an arbitrary 3-adic relation,
and consider the three 2-adic relations that are gotten by taking
its projections, its "shadows", if you will, on each of the three
planes XY, XZ, YZ.  Using the notation that I introduced before,
and compressing it just a bit or two in passing, one can write
these projections in each of the following ways, depending on
which turns out to be most convenient in a given context:

1.  Proj_{X,Y} (L) = Proj_{1,2} (L) = Proj_XY L = Proj_12 L = L_XY = L_12.

2.  Proj_{X,Z} (L) = Proj_{1,3} (L) = Proj_XZ L = Proj_13 L = L_XZ = L_13.

3.  Proj_{Y,Z} (L) = Proj_{2,3} (L) = Proj_YZ L = Proj_23 L = L_YZ = L_23.

If you picture the relation L as a body in the 3-space XYZ, then
the issue of whether L is "reducible to" or "reconstructible from"
its 2-adic projections is just the question of whether these three
projections, "shadows", or "2-faces" determine the body L uniquely.

Stating the matter the other way around, L is "not reducible to"
or "not reconstructible from" its 2-dim projections if & only if
there are two distinct relations L and L' which have exactly the
same projections on exactly the same planes.

The next series of Tables illustrates the projection operations
by means of their actions on the sign relations L(A) and L(B)
that I introduced earlier on, in the "Sign Relations" thread.
Recall that we had the following set-up:

| L(A) and L(B) are "contained in" or "subsets of" OxSxI:
|
| O  =  {A, B},
|
| S  =  {"A", "B", "i", "u"},
|
| I  =  {"A", "B", "i", "u"}.

| L(A) has the following eight triples
| of the form <o, s, i> in OxSxI:
|
|    <A, "A", "A">
|    <A, "A", "i">
|    <A, "i", "A">
|    <A, "i", "i">
|    <B, "B", "B">
|    <B, "B", "u">
|    <B, "u", "B">
|    <B, "u", "u">

| L(B) has the following eight triples
| of the form <o, s, i> in OxSxI:
|
|    <A, "A", "A">
|    <A, "A", "u">
|    <A, "u", "A">
|    <A, "u", "u">
|    <B, "B", "B">
|    <B, "B", "i">
|    <B, "i", "B">
|    <B, "i", "i">

Taking the 2-adic projections of L(A)
we obtain the following set of data:

| L(A)_OS has these four pairs
| of the form <o, s> in OxS:
|
|    <A, "A">
|    <A, "i">
|    <B, "B">
|    <B, "u">

| L(A)_OI has these four pairs
| of the form <o, i> in OxI:
|
|    <A, "A">
|    <A, "i">
|    <B, "B">
|    <B, "u">

| L(A)_SI has these eight pairs
| of the form <s, i> in SxI:
|
|    <"A", "A">
|    <"A", "i">
|    <"i", "A">
|    <"i", "i">
|    <"B", "B">
|    <"B", "u">
|    <"u", "B">
|    <"u", "u">

Taking the dyadic projections of L(B)
we obtain the following set of data:

| L(B)_OS has these four pairs
| of the form <o, s> in OxS:
|
|    <A, "A">
|    <A, "u">
|    <B, "B">
|    <B, "i">

| L(B)_OI has these four pairs
| of the form <o, i> in OxI:
|
|    <A, "A">
|    <A, "u">
|    <B, "B">
|    <B, "i">

| L(B)_SI has these eight pairs
| of the form <s, i> in SxI:
|
|    <"A", "A">
|    <"A", "u">
|    <"u", "A">
|    <"u", "u">
|    <"B", "B">
|    <"B", "i">
|    <"i", "B">
|    <"i", "i">

A comparison of the corresponding projections for L(A) and L(B)
reveals that the distinction between these two 3-adic relations
is preserved under the operation that takes the full collection
of 2-adic projections into consideration, and this circumstance
allows one to say that this much information, that is, enough to
tell L(A) and L(B) apart, can be derived from their 2-adic faces.

However, in order to say that a 3-adic relation L on OxSxI
is "reducible" or "reconstructible" in the 2-dim projective
sense, it is necessary to show that no distinct L' on OxSxI
exists such that L and L' have the same set of projections,
and this can take a rather more exhaustive or comprehensive
investigation of the space of possible relations on OxSxI.

As it happens, each of the relations L(A) and L(B) turns
out to be uniquely determined by its 2-dim projections.
This can be seen as follows.  Consider any coordinate
position <s, i> in the plane SxI.  If <s, i> is not
in L_SI then there can be no element <o, s, i> in L,
therefore we may restrict our attention to positions
<s, i> in L_SI, knowing that there exist at least
|L_SI| = Cardinality of L_SI = eight elements in L,
and seeking only to determine what objects o exist
such that <o, s, i> is an element in the objective
"fiber" of <s, i>.  In other words, for what o in O
is <o, s, i> in ((Proj_SI)^(-1))(<s, i>)?  Now, the
circumstance that L_OS has exactly one element <o, s>
for each coordinate s in S and that L_OI has exactly
one element <o, i> for each coordinate i in I, plus
the "coincidence" of it being the same o at any one
choice for <s, i>, tells us that L has just the one
element <o, s, i> over each point of SxI.  Together,
this proves that both L(A) and L(B) are reducible in
an informative sense to 3-tuples of 2-adic relations,
that is, they are "projectively 2-adically reducible".

Next time I will give examples of 3-adic relations
that are not reducible to or reconstructible from
their 2-adic projections.
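
Purely as a cross-check, and not as any part of the argument above,
here is a small Python sketch that reconstructs the data of L(A) and
L(B) and tests two things:  that their triples of 2-adic projections
differ, and that the largest relation consistent with L(A)'s three
faces is L(A) itself, which is the crux of the uniqueness argument
just given.  Writing the objects as 'A', 'B' and the signs "A", "B",
"i", "u" as 'a', 'b', 'i', 'u' is my own shorthand.

LA = {('A','a','a'), ('A','a','i'), ('A','i','a'), ('A','i','i'),
      ('B','b','b'), ('B','b','u'), ('B','u','b'), ('B','u','u')}
LB = {('A','a','a'), ('A','a','u'), ('A','u','a'), ('A','u','u'),
      ('B','b','b'), ('B','b','i'), ('B','i','b'), ('B','i','i')}

def proj(L, F):
    return {tuple(t[j] for j in F) for t in L}

faces = ((0, 1), (0, 2), (1, 2))                 # OS, OI, SI planes
assert [proj(LA, F) for F in faces] != [proj(LB, F) for F in faces]

# The largest relation whose faces lie within L(A)'s faces:
big = {(o, s, i)
       for (o, s) in proj(LA, (0, 1))
       for (o2, i) in proj(LA, (0, 2)) if o2 == o
       if (s, i) in proj(LA, (1, 2))}
assert big == LA     # one o over each <s, i>, so nothing smaller covers L_SI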

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Projective Reduction of Triadic Relations (cont.)

There are a number of preliminary matters that
will need to be addressed before I can proceed.

Last time I gave two cases of 3-adic (or triadic) relations
with projective reductions to 2-adic (or dyadic) relations,
by name, "triadics reducible over projections" (TROP's) or
"triadics reconstructible out of projections" (TROOP's).

Still, one needs to be very careful and hedgey about saying,
even in the presence of such cases, that "all is dyadicity".
I will make some attempt to explain why in the next episode,
and then I will take up examples of 3-adics that happen to
be irreducible in this sense, in effect, that are not able
to be recovered uniquely from their 2-adic projection data.
Call them "triadics irreducible over projections" (TRIOP's)
or "projectively irreducible triadics" (PIT's).

In the story of A and B, it appears to be the case
that the triadic relations L(A) and L(B) are
distinguished from each other, and what's more,
distinguished from all of the other relations
in the garden of OSI, for the same O, S, I.

At least, so says I and my purported proof.
I am so suspicious of this result myself that
I will probably not really believe it for a while,
until I have revisited the problem and the "proof"
a few times, to see if I can punch any holes in it.

But let it pass for proven for now,
and let my feeble faith go for now.

For the sake of a more balanced account,
it's time to see if we can dig up any cases
of "projectively irreducible triadics" (PIT's).
Any such PIT relation, should we ever fall into one,
is bound to occasion another, since it is a porismatic
part of the definition that a 3-adic relation L is a PIT
if and only if there exists a distinct 3-adic relation L'
such that the 2-adic faces of L and L' are indiscernible.
In this event, both L and L' fall into the de-genre
of PIT's together.

Well, PIT's are not far to find, once you think to look for them --
indeed, the landscape of "formal or mathematical existence" (FOME)
is both figuratively and literally rife with them!

What follows is the account of a couple,
that I will dub "L_0" and "L_1".

But first, even though the question of projective reduction
has to do with 3-adic relations as a general class, and is
thus independent of their potential use as sign relations,
it behooves us to consider the bearing of these reduction
properties on the topics of interest to us for the sake
of communication and representation via sign relations.

| Nota Bene.  On the Variety and Reading of Subset Notations.
|
| Let any of the locutions, L c XxYxZ, L on XxYxZ, L sub XxYxZ,
| substitute for the peculiar style of "in-line" or "in-passing"
| reference to subsethood that has become idiomatic in mathematics,
| and that would otherwise use the symbol that has been customary
| since the time of Peano to denote "contained in" or "subset of".

Most likely, any triadic relation L on XxYxZ that is imposed on
the arbitrary domains X, Y, Z could find use as a sign relation,
provided that it embodies any constraint at all, in other words,
so long as it forms a proper subset L of the entire space XxYxZ.
But these sorts of uses of triadic relations are not guaranteed
to capture or constitute any natural examples of sign relations.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Projective Reduction of Triadic Relations (cont.)

Projectively Irreducible Triadic Relations, or
Triadic Relations Irreducible Over Projections:

In order to show what a projectively irreducible 3-adic relation
looks like, I now present a pair of 3-adic relations that have the
same 2-adic projections, and thus cannot be distinguished from each
other on the basis of this data alone.  As it happens, these examples
of triadic relations can be discussed independently of sign relational
concerns, but structures of their basic ilk are frequently found arising
in signal-theoretic applications, and they are no doubt keenly associated
with questions of redundant coding and therefore of reliable communication.

Consider the triadic relations L_0 and L_1
that are specified in the following set-up:

| B = {0, 1}, with the "+" signifying addition mod 2,
| analogous to the "exclusive-or" operation in logic.
|
| B^k = {<x_1, ..., x_k> : x_j in B for j = 1 to k}.

In what follows, the space XxYxZ is isomorphic to BxBxB = B^3.
For lack of a good isomorphism symbol, I will often resort to
writing things like "XxYxZ iso BxBxB" or even "XxYxZ ~=~ B^3".

| Relation L_0
|
| L_0 = {<x, y, z> in B^3 : x + y + z = 0}.
|
| L_0 has the following four triples
| of the form <x, y, z> in B^3:
|
|    <0, 0, 0>
|    <0, 1, 1>
|    <1, 0, 1>
|    <1, 1, 0>

| Relation L_1
|
| L_1 = {<x, y, z> in B^3 : x + y + z = 1}.
|
| L_1 has the following four triples
| of the form <x, y, z> in B^3:
|
|    <0, 0, 1>
|    <0, 1, 0>
|    <1, 0, 0>
|    <1, 1, 1>

Those are the relations,
here are the projections:

Taking the dyadic projections of L_0
we obtain the following set of data:

| (L_0)_XY has these four pairs
| of the form <x, y> in X x Y:
|
|    <0, 0>
|    <0, 1>
|    <1, 0>
|    <1, 1>

| (L_0)_XZ has these four pairs
| of the form <x, z> in X x Z:
|
|    <0, 0>
|    <0, 1>
|    <1, 1>
|    <1, 0>

| (L_0)_YZ has these four pairs
| of the form <y, z> in Y x Z:
|
|    <0, 0>
|    <1, 1>
|    <0, 1>
|    <1, 0>

Taking the dyadic projections of L_1
we obtain the following set of data:

| (L_1)_XY has these four pairs
| of the form <x, y> in X x Y:
| 
|    <0, 0>
|    <0, 1>
|    <1, 0>
|    <1, 1>

| (L_1)_XZ has these four pairs
| of the form <x, z> in X x Z:
| 
|    <0, 1>
|    <0, 0>
|    <1, 0>
|    <1, 1>

| (L_1)_YZ has these four pairs
| of the form <y, z> in Y x Z:
| 
|    <0, 1>
|    <1, 0>
|    <0, 0>
|    <1, 1>

Now, for ease of verifying the data, I have written
these sets of pairs in the order that they fell out
on being projected from the given triadic relations.
But, of course, as sets, their order is irrelevant,
and it is simply a matter of a tedious check to
see that both L_0 and L_1 have exactly the same
projections on each of the corresponding planes.

To summarize:
 
The relations L_0, L_1 sub B^3 are defined by the following equations,
with algebraic operations taking place as in the "Galois Field" GF(2),
that is, with 1 + 1 = 0.

1.  The triple <x, y, z> in B^3 belongs to L_0  iff  x + y + z = 0.
    L_0 is the set of even-parity bit-vectors, with  x + y = z.

2.  The triple <x, y, z> in B^3 belongs to L_1  iff  x + y + z = 1.
    L_1 is the set of odd-parity bit-vectors,  with  x + y = z + 1.

The corresponding projections of L_0 and L_1 are identical.
In fact, all six projections, taken at the level of logical
abstraction, constitute precisely the same dyadic relation,
isomorphic to the whole of BxB and expressible by means of
the universal constant proposition 1 : BxB -> B.  In sum:

1.  (L_0)_XY  =  (L_1)_XY  =  1_XY  ~=~  BxB  =  B^2,

2.  (L_0)_XZ  =  (L_1)_XZ  =  1_XZ  ~=~  BxB  =  B^2,

3.  (L_0)_YZ  =  (L_1)_YZ  =  1_YZ  ~=~  BxB  =  B^2.

Therefore, L_0 and L_1 form an indiscernible couplet
of "triadic relations irreducible over projections"
or "projectively irreducible triadic relations".

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Projective Reduction of Triadic Relations (concl.)

We have pursued the "projective analysis" of 3-adic relations,
tracing the pursuit via a ready quantity of concrete examples,
just far enough to arrive at this clearly demonstrable result:

| Some 3-adic relations are and
| some 3-adic relations are not
| uniquely reconstructible from,
| or informatively reducible to,
| their 2-adic projection data.

Onward!

Prospects for a Compositional Analysis of Triadic Relations

Turn we now to a "compositional analysis" of 3-adic relations,
coining a term to prevent confusion, like there's a chance in
the world of that, but still making use of nothing other than
that "standardly uptaken operation" of relational composition,
the one that constitutes the customary generalization of what
just about every formal, logical, mathematical community that
is known to the writer, anyway, dubs "functional composition".

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositional Analysis of Relations

The first order of business under this heading is straightforward enough:
to define what is standardly described as the "composition of relations".
For the time being I limit the discussion to 2-adic and 3-adic relations.

| Remark on the Ancestry, the Application, and
| The Anticipated Broadening of these Concepts.
|
| This is basically the same operation that C.S. Peirce described as
| "relative multiplication", except for the technical distinction that
| he worked primarily with so-called "relative terms", like "lover of",
| "sign of the object ~~~ to", and "warrantor of the fact that ~~~ to",
| rather than with the kinds of extensional and intensional relations
| to which the majority of us are probably more accustomed.
|
| It is with regard to this special notion of "composition", and it alone,
| that I plan to discuss the inverse notion of "decomposition".  I try to
| respect other people's "reserved words" as much as I can, even if I can
| only go so far as to take them at their words and their own definitions
| of them in forming my interpretation of what they are apparently saying.
| Therefore, if I want to speak about other ways of building up relations
| from other relations and different ways of breaking down relations into
| other relations, then I will try to think up other names for these ways,
| or revert to a generic usage of terms like "analysis" and "combination".
|
| When a generalized definition of "relational composition" has been given,
| and its specialization to 2-adic relations is duly noted, then one will
| be able to notice that it is simply an aspect of this definition that
| the composition of two 2-adic relations yields nothing more than yet
| another 2-adic relation.  This will, I hope, in more than one sense
| of the word, bring "closure" to this issue, of what can be reduced
| to compositions of 2-adic relations, to wit, just 2-adic relations.

A notion of relational composition is to be defined that generalizes the
usual notion of functional composition.  The "composition of functions"
is that by which -- composing functions "on the right", as they say --
f : X -> Y and g : Y -> Z yield the "composite function" fg : X -> Z.

Accordingly, the "composition" of dyadic relations is that by which --
composing them here by convention in the same left to right fashion --
P c X x Y and Q c Y x Z yield the "composite relation" PQ c X x Z.
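
For what it may be worth as a concrete anchor, the usual extensional
reading of that composite can be sketched in Python as follows -- the
function name is mine, and the projection-style definition comes a few
notes further on.

def compose(P, Q):
    """PQ = {<x, z> : <x, y> in P and <y, z> in Q for some y}."""
    return {(x, z) for (x, y1) in P for (y2, z) in Q if y1 == y2}

# Example: the graphs of f(x) = x + 1 and g(y) = 2*y on a small domain
# compose to the graph of x |-> 2*(x + 1), just as with functions.
P = {(x, x + 1) for x in range(3)}
Q = {(y, 2 * y) for y in range(4)}
assert compose(P, Q) == {(x, 2 * (x + 1)) for x in range(3)}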

There is a neat way of defining relational composition, one that
not only manifests its relationship to the projection operations
that go with any cartesian product space, but also suggests some
natural directions for generalizing relational composition beyond
the limits of the 2-adic case, and even beyond relations that have
any fixed arity, that is, to the general case of formal languages.
I often call this definition "Tarski's Trick", though it probably
goes back further than that.  This is what I will take up next.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 8

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositional Analysis of Relations (cont.)

There are one or two confusions that demand to
be cleared up before I can proceed any further.

We had been enjoying our long-anticipated breakthrough on the
allegedly "easy case" of projective reduction, having detected
hidden within that story of our old friends and usual suspects
A and B two examples of 3-adic relations, L(A) and L(B), that
are indeed amenable, not only to being distinguished, one from
the other, between the two of them, but also to being uniquely
determined amongst all of their kin by the information that is
contained in their 2-dimensional projections.  So far, so good.
Had I been thinking fast enough, I would have assigned these the
nomen "triadics reducible in projections over dyadics" (TRIPOD's).
Other good names:  "triadics reducible over projections" (TROP's),
or perhaps "triadics reconstructible out of projections" (TROOP's).

Then we fell upon two examples of triadic relations, L_0 and L_1,
that I described as "projectively irreducible triadics" (PIT's),
because they collapse into an indistinct mass of non-descript
flatness on having their dyadic pictures taken.  That acronym
does not always work for me, so I will give them the alias of
"triadics irreducible by projections over dyadics" (TIBPOD's),
or perhaps "triadics irreducible over projections" (TIOP's).

I'm not accustomed to putting much stock in my own proofs
until I can reflect on them for a suitable period of time,
or until some other people have been able to go over them,
but until that day comes I will just have to move forward
with these results as I presently see them.

In reply to my notes on these topics, Matthew West
has contributed the following pair of commentaries:

1.  Regarding L(A) and L(B)

| Whilst I appreciate the academic support for showing
| that any triadic relation can be represented by some
| number of dyadic relations, the real point is to use
| this fact to seek for an improved analysis based on
| more fundamental concepts.  It is not the objective
| to do something mechanical.

2.  Regarding L_0 and L_1

| I don't think you have shown very much except that reducing
| triadic relations to dyadic relations using the mechanical
| process you have defined can loose information.  I am not
| surprised by this.  My experience of doing this with real,
| rather than abstract examples, is that there are often
| extra things to do.

So I need to clarify that what I think that I showed was
that "some" triadic relations are "reducible" in a given
informational sense to the information that is contained
in their dyadic projections, e.g., as L(A) and L(B) were,
but that others are not reducible in this particular way,
e.g., as L_0 and L_1 were not.

Now, aside from this, I think that Matthew is raising
a very important issue here, which I personally grasp
in terms of two different ways of losing information,
namely:

1.  The information that we lose in forming a trial model,
    in effect, in going from the unformalized "real world"
    over to the formal context or the syntactic medium in
    which models are constrained to live out their lives.

2.  The information that we lose in "turning the crank"
    on the model, that is, in drawing inferences from
    the admittedly reductive and "off'n'wrong" model
    in a way that loses even the initial information
    that it captured about the real-world situation.

To do it justice, though, I will need to return
to this issue in a less frazzled frame of mind.

This will complete the revision of this RARified thread from last Autumn.
I will wind it up, as far as this part of it goes, by recapitulating the
development of the "Rise" relation, from a couple of days ago, this time
working through its analysis and its synthesis as fully as I know how at
the present state of my knowledge.  The good of this exercise, of course,
the reason for doing all of this necessary work, is not because the Rise
relation is so terribly interesting in itself, but rather to demonstrate
the utility of the functional framework and its sundry attached tools in
their application to a nigh unto minimal and thus least obstructive case.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 9

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositional Analysis of Relations (cont.)

The good of this whole discussion, the use of it all,
the thing about it that makes it worth going through,
at least, for me, is not just to settle doubts about
the "banal", "common", or figuratively and literally
"trivial" (Latin for locus where "three roads" meet)
type of issue that may have appeared to be its point,
but, as I said in my recent reprise of justification,
to examine and explore "the extent to which it is possible to
construct relations between complex relations and simpler
relations.  The aim here, once we get past questions of
what is reducible in what way and what not in no way,
is to develop concrete and fairly general methods
for analyzing the structures of those relations
that are indeed amenable to a useful analysis --
and here I probably ought to emphasize that
I am talking about the structure of each
relation in itself, at least, to the
extent that it presents itself in
extensional form, and not just
the syntax of this or that
relational expression".

So let me run through this development once more,
this time interlacing its crescendoes with a few
supplemental notes of showcasing or sidelighting,
aimed to render more clearly the aim of the work.

In order to accumulate a stock of ready-mixed concrete instances,
at the same time to supply ourselves with relatively fundamental
materials for building ever more complex and prospectively still
more desirable and elegant structures, maybe, even if it must be
worked out just a little bit gradually, hopefully, incrementally,
and even at times jury-rigged here and there, increasingly still
more useful architectonic forms for our joint contemplation
and our meet edification, let us then set out once more from the
grounds of what we currently have in our command, and advance in
the directions of greater generalities and expanded scopes, with
the understanding that many such journeys are possible, and that
each is bound to open up on open-ended views at its unlidded top.

By way of a lightly diverting overture, let's begin
with an exemplar of a "degenerate triadic relation" --
do you guess that our opera is in an Italian manor? --
a particular version of the "between relation", but
let us make it as simple as we possibly can and not
attempt to analyze even that much of a case in full
or final detail, but leave something for the finale.

Let B = {0, 1}.

Let the relation named "Rise(2)"
such that Rise(2) c B^2 = B x B,
be exactly this set of 2-tuples:

| Rise(2)
| =
| {
| <0, 0>,
| <0, 1>,
| <1, 1>
| }

Let the relation named "Rise(3)"
such that Rise(3) c B^3 = BxBxB,
be exactly this set of 3-tuples:

| Rise(3)
| =
| {
| <0, 0, 0>,
| <0, 0, 1>,
| <0, 1, 1>,
| <1, 1, 1>
| }

Then Rise(3) is a "degenerate 3-adic relation"
because it can be expressed as the conjunction
of a couple of 2-adic relations, specifically:

Rise(3)<x, y, z>  <=>  [Rise(2)<x, y> and Rise(2)<y, z>].

But wait just a minute!  You read me clearly to say already --
and I know that you believed me! -- that no 3-adic relation
can be decomposed into any 2-adic relations, so what in the
heck is going on!?  Well, "decomposed" implies the converse
of "composition", which has to mean "relational composition"
in the present context, and this composition is a different
operation entirely from the "conjunction" that was employed
above, to express Rise(3) as a conjunction of two Rise(2)'s.

That much we have seen and done before, but in the spirit of
that old saw that "what goes up must come down" we recognize
that there must be a supplementary relation in the scheme of
things that is equally worthy of our note, and so let us dub
this diminuendo the "Fall" relation, and set to define it so:

Let the relation named "Fall(2)"
such that Fall(2) c B^2 = B x B,
be exactly this set of 2-tuples:

| Fall(2)
| =
| {
| <0, 0>,
| <1, 0>,
| <1, 1>
| }

Let the relation named "Fall<3>"
such that Fall<3> c B^3 = BxBxB,
be exactly this set of 3-tuples:

| Fall(3)
| =
| {
| <0, 0, 0>,
| <1, 0, 0>,
| <1, 1, 0>,
| <1, 1, 1>
| }

And on these notes ...
I must rest a bit ...
For an interval ...
To be continuo ...
Let it B ...
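
Before the intermission, a small Python sketch, offered on my own
responsibility, confirming that the conjunction of two Rise(2)'s
yields exactly the Rise(3) listed above, and likewise for Fall.

from itertools import product

B = (0, 1)
Rise2 = {(0, 0), (0, 1), (1, 1)}
Fall2 = {(0, 0), (1, 0), (1, 1)}

Rise3 = {(x, y, z) for x, y, z in product(B, repeat=3)
         if (x, y) in Rise2 and (y, z) in Rise2}
Fall3 = {(x, y, z) for x, y, z in product(B, repeat=3)
         if (x, y) in Fall2 and (y, z) in Fall2}

assert Rise3 == {(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)}
assert Fall3 == {(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)}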

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 10

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositional Analysis of Relations (cont.)

Starting from the simplest notions of Rise and Fall,
I may easily have chosen to leave it as an exercise
for the reader to discover suitable generalizations,
say from Rise(k) and Fall(k) for k of order 2 and 3,
to the slightly more general case in which k is any
natural number, that is, finite, integral, positive.
But that is far too easy a calisthenic, and no kind
of a work-out to offer our band of fearless readers,
and so the writer picks up the gage that he himself 
throws down, and for his health runs the easy track!

Let B = {0, 1}.

Let the relation Rise(k) c B^k
be defined in the following way:

| Rise(k)<x_1, ..., x_k>
|
| iff
|
| Rise(2)<x_1, x_2>  and  Rise(k-1)<x_2, ..., x_k>.

Let the relation Fall(k) c B^k
be defined in the following way:

| Fall(k)<x_1, ..., x_k>
|
| iff
|
| Fall(2)<x_1, x_2>  and  Fall(k-1)<x_2, ..., x_k>.

But let me now leave off, for the time being,
from the temptation to go any further in the
direction of increasing k than I ever really
intended to, on beyond 2 or 3 or thereabouts,
for that is not the aim of the present study.
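
For the record, the recursive definitions just given translate into
Python predicates in the obvious way.  Reading Rise(2) over B as
"x_1 is less than or equal to x_2" is my gloss on the tabulated
pairs, not a new stipulation.

def rise(xs):
    """Rise(k)<x_1, ..., x_k>, by the recursion above."""
    if len(xs) == 2:
        return xs[0] <= xs[1]
    return xs[0] <= xs[1] and rise(xs[1:])

def fall(xs):
    """Fall(k)<x_1, ..., x_k>, by the recursion above."""
    if len(xs) == 2:
        return xs[0] >= xs[1]
    return xs[0] >= xs[1] and fall(xs[1:])

assert rise((0, 0, 1, 1)) and not rise((1, 0, 1))
assert fall((1, 1, 0, 0)) and not fall((0, 1, 0))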

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 11

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositional Analysis of Relations (cont.)

In this note I revisit the "Between" relation on reals,
and then I rework it as a discrete and finite analogue
of its transcendental self, as a Between relation on B.
Ultimately, I want to use this construction as working
material to illustrate a method of defining relational
compositions in terms of projections.  So let us begin.

Last time I defined Rise and Fall relations on B^k.
Working polymorphously, as some people like to say,
let us go ahead and define the analogous relations
over the real domain R, not even bothering to make
new names, but merely expecting the reader to find
the aptest sense for a given context of discussion.

Let R be the set of real numbers.

Let the relation named "Rise(2)"
such that Rise(2) c R^2 = R x R,
be defined in the following way:

| Rise(2)<x, y>
|
| iff
|
| [x = y]  or  [x < y]

Let the relation named "Fall(2)"
such that Fall(2) c R^2 = R x R,
be defined in the following way:

| Fall(2)<x, y>
|
| iff
|
| [x > y]  or  [x = y]

There are clearly a number of redundancies
between the definitions of these relations,
but I prefer the symmetry of this approach.

The next pair of definitions will be otiose, too,
if viewed in the light of the comprehensive case
that follows after, but let us go gently for now.

Let the relation named "Rise(3)"
such that Rise(3) c R^3 = RxRxR,
be defined in the following way:

| Rise(3)<x, y, z>
|
| iff
|
| Rise(2)<x, y> and Rise(2)<y, z>

Let the relation named "Fall(3)"
such that Fall(3) c R^3 = RxRxR,
be defined in the following way:

| Fall(3)<x, y, z>
|
| iff
|
| Fall(2)<x, y> and Fall(2)<y, z>

Then Rise(3) and Fall(3) are "degenerate 3-adic relations"
insofar as each of them bears expression as a conjunction
whose conjuncts are expressions of 2-adic relations alone.

Just in order to complete the development
of this thought, let us then finish it so:

Let the relation Rise(k) c R^k
be defined in the following way:

| Rise(k)<x_1, ..., x_k>
|
| iff
|
| Rise(2)<x_1, x_2>  and  Rise(k-1)<x_2, ..., x_k>

Let the relation Fall(k) c R^k
be defined in the following way:

| Fall(k)<x_1, ..., x_k>
|
| iff
|
| Fall(2)<x_1, x_2>  and  Fall(k-1)<x_2, ..., x_k>

If there was a point to writing out this last step,
I think that it may well have been how easy it was
not to write, not literally to "write" at all, but
simply to "cut and paste" the definitions from the
boolean case, and then merely to change the parameter
B into the parameter R at a single place in each.
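
The cut-and-paste economy noted here shows up even more plainly if
the definitions are written as predicates that mention only the order
relation.  A sketch, again my own, in which the single definition
serves for B and for R alike:

def rise(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def fall(xs):
    return all(a >= b for a, b in zip(xs, xs[1:]))

assert rise((0, 0, 1)) and rise((0.0, 0.5, 2.0))
assert fall((1, 0, 0)) and fall((3.1, 2.7, 0.0))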

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 12

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositional Analysis of Relations (cont.)

Before we continue with the analysis of the Between relation, let us
take a moment to make sure that we understand the connections between
two topics that may appear at first to be entirely unrelated, namely:

1.  A certain use of the logical conjunction, denoted by "&", as it appears
    in logical expressions of the form "F(x, y, z) = G(x, y) & H(y, z)",
    and that we use to define a 3-adic relation F by means of this "&"
    and in terms of a couple of 2-adic relations G and H.

2.  The concepts of 2-adic "projection" and "projective determination",
    that are invoked in the "weak" notion of "projective reducibility".

Let us begin by drawing ourselves a picture of what is really going on whenever
we formulate a definition of F c XxYxZ via a conjunction of G c XxY and H c YxZ,
as we may choose to do by means of an expression of the following form:

F(x, y, z)  =  G(x, y)  &  H(y, z).

Visualize the 3-adic relation F c XxYxZ as a body in XYZ-space,
while G is a figure in XY-space and H is a figure in YZ-space:

o-------------------------------------------------o
|                                                 |
|                        o                        |
|                       /|\                       |
|                      / | \                      |
|                     /  |  \                     |
|                    /   |   \                    |
|                   /    |    \                   |
|                  /     |     \                  |
|                 /      |      \                 |
|                o       o       o                |
|                |\     / \     /|                |
|                | \   / F \   / |                |
|                |  \ /  *  \ /  |                |
|                |   \  ***  /   |                |
|                |  / \//*\\/ \  |                |
|                | /  /\/ \/\  \ |                |
|                |/  ///\ /\\\  \|                |
|        o       X  ///  Y  \\\  Z       o        |
|        |\       \///   |   \\\/       /|        |
|        | \      ///    |    \\\      / |        |
|        |  \    ///\    |    /\\\    /  |        |
|        |   \  ///  \   |   /  \\\  /   |        |
|        |    \///    \  |  /    \\\/    |        |
|        |    /\/      \ | /      \/\    |        |
|        |   *//\       \|/       /\\*   |        |
|        X   */  Y       o       Y  \*   Z        |
|         \  *   |               |   *  /         |
|          \ G   |               |   H /          |
|           \    |               |    /           |
|            \   |               |   /            |
|             \  |               |  /             |
|              \ |               | /              |
|               \|               |/               |
|                o               o                |
|                                                 |
o-------------------------------------------------o
Figure 1.  Projections of F onto G and H

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 13

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositional Analysis of Relations (cont.)

The 2-adic "projections" Proj_XY, Proj_XZ, Proj_YZ, for
any 3-adic relation L c XxYxZ, along with the equivalent
forms of application L_XY, L_XZ, L_YZ, respectively, are
defined as follows:

Proj_XY (L)  =  L_XY  =  {<x, y> in XxY : <x, y, z> in L for some z in Z},

Proj_XZ (L)  =  L_XZ  =  {<x, z> in XxZ : <x, y, z> in L for some y in Y},

Proj_YZ (L)  =  L_YZ  =  {<y, z> in YxZ : <x, y, z> in L for some x in X}.

In light of these definitions, Proj_XY is a mapping
from the space !L!_XYZ of 3-adic relations L c XxYxZ
into the space !L!_XY of 2-adic relations M c XxY, and
similarly, mutatis mutandis, for the other projections.

In mathematics, the inverse relation of a projection is
usually called an "extension", but in view of the ample
confusion that we already have in logic over extensions
and intensions and comprehensions and so on, I will try
to guard against the chance of chaos in this context by
always using the adjective form of "tacit extensions".

The "tacit extensions" TE_XY_Z, TE_XZ_Y, TE_YZ_X,
of the 2-adic relations U c XxY, V c XxZ, W c YxZ,
respectively, can be defined in the following way:

TE_XY_Z (U)  =  {<x, y, z> : <x, y> in U},

TE_XZ_Y (V)  =  {<x, y, z> : <x, z> in V},

TE_YZ_X (W)  =  {<x, y, z> : <y, z> in W}.

It will be clear enough to write TE(U), TE(V), TE(W),
respectively, so long as the contexts are understood.

In our present application, we are making use of
the tacit extension of G c XxY to TE(G) c XxYxZ and
the tacit extension of H c YxZ to TE(H) c XxYxZ, only.

Here are the snapshots:

o-------------------------------------------------o
|                                                 |
|                        o                        |
|                       /|\                       |
|                      / | \                      |
|                     /  |  \                     |
|                    /   |   \                    |
|                   /    |    \                   |
|                  /     |     \                  |
|                 /      |   *  \                 |
|                o       o  **   o                |
|                |\     / \***  /|                |
|                | \   /  ***  / |                |
|                |  \ /  ***\ /  |                |
|                |   \  ***  /   |                |
|                |  / \***  / \  |                |
|                | /  ***  /   \ |                |
|                |/  ***\ /     \|                |
|        o       X  /**  Y       Z       o        |
|        |\       \//*   |      /       /|        |
|        | \      ///    |     /       / |        |
|        |  \    ///\    |    /       /  |        |
|        |   \  ///  \   |   /       /   |        |
|        |    \///    \  |  /       /    |        |
|        |    /\/      \ | /       /     |        |
|        |   *//\       \|/       /  *   |        |
|        X   */  Y       o       Y   *   Z        |
|         \  *   |               |   *  /         |
|          \ G   |               |   H /          |
|           \    |               |    /           |
|            \   |               |   /            |
|             \  |               |  /             |
|              \ |               | /              |
|               \|               |/               |
|                o               o                |
|                                                 |
o-------------------------------------------------o
Figure 2.  Tacit Extension of G to X x Y x Z

o-------------------------------------------------o
|                                                 |
|                        o                        |
|                       /|\                       |
|                      / | \                      |
|                     /  |  \                     |
|                    /   |   \                    |
|                   /    |    \                   |
|                  /     |     \                  |
|                 /  *   |      \                 |
|                o   **  o       o                |
|                |\  ***/ \     /|                |
|                | \  ***  \   / |                |
|                |  \ /***  \ /  |                |
|                |   \  ***  /   |                |
|                |  / \  ***/ \  |                |
|                | /   \  ***  \ |                |
|                |/     \ /***  \|                |
|        o       X       Y  **\  Z       o        |
|        |\       \      |   *\\/       /|        |
|        | \       \     |    \\\      / |        |
|        |  \       \    |    /\\\    /  |        |
|        |   \       \   |   /  \\\  /   |        |
|        |    \       \  |  /    \\\/    |        |
|        |     \       \ | /      \/\    |        |
|        |   *  \       \|/       /\\*   |        |
|        X   *   Y       o       Y  \*   Z        |
|         \  *   |               |   *  /         |
|          \ G   |               |   H /          |
|           \    |               |    /           |
|            \   |               |   /            |
|             \  |               |  /             |
|              \ |               | /              |
|               \|               |/               |
|                o               o                |
|                                                 |
o-------------------------------------------------o
Figure 3.  Tacit Extension of H to X x Y x Z

Finally, we can now supply a visual interpretation
that helps us to see the meaning of a formula like:

F(x, y, z)  =  G(x, y)  &  H(y, z).

The conjunction that is indicated by "&" corresponds as usual
to an intersection of two sets, however, in this case it is
the intersection of the tacit extensions TE(G) and TE(H).

o-------------------------------------------------o
|                                                 |
|                        o                        |
|                       /|\                       |
|                      / | \                      |
|                     /  |  \                     |
|                    /   |   \                    |
|                   /    |    \                   |
|                  /     |     \                  |
|                 /      |      \                 |
|                o       o       o                |
|                |\     / \     /|                |
|                | \   / F \   / |                |
|                |  \ /  *  \ /  |                |
|                |   \  ***  /   |                |
|                |  / \//*\\/ \  |                |
|                | /  /\/ \/\  \ |                |
|                |/  ///\ /\\\  \|                |
|        o       X  ///  Y  \\\  Z       o        |
|        |\       \///   |   \\\/       /|        |
|        | \      ///    |    \\\      / |        |
|        |  \    ///\    |    /\\\    /  |        |
|        |   \  ///  \   |   /  \\\  /   |        |
|        |    \///    \  |  /    \\\/    |        |
|        |    /\/      \ | /      \/\    |        |
|        |   *//\       \|/       /\\*   |        |
|        X   */  Y       o       Y  \*   Z        |
|         \  *   |               |   *  /         |
|          \ G   |               |   H /          |
|           \    |               |    /           |
|            \   |               |   /            |
|             \  |               |  /             |
|              \ |               | /              |
|               \|               |/               |
|                o               o                |
|                                                 |
o-------------------------------------------------o
Figure 4.  F as the Intersection of TE(G) and TE(H)
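
Putting the pieces of this note into executable form, here is a
minimal Python sketch -- the small sample domains and the relation
data are illustrative assumptions of mine, not the text's -- showing
that the relation defined by F(x, y, z) = G(x, y) & H(y, z) is the
intersection of the two tacit extensions, and that its 2-adic faces
fall within G and H.

from itertools import product

X = Y = Z = (0, 1)

def proj(L, F):
    return {tuple(t[j] for j in F) for t in L}

def te_xy_z(U):      # TE_XY_Z (U) = {<x, y, z> : <x, y> in U}
    return {(x, y, z) for (x, y) in U for z in Z}

def te_yz_x(W):      # TE_YZ_X (W) = {<x, y, z> : <y, z> in W}
    return {(x, y, z) for x in X for (y, z) in W}

G = {(0, 0), (0, 1), (1, 1)}                     # sample G c X x Y
H = {(0, 0), (1, 0), (1, 1)}                     # sample H c Y x Z

F = {(x, y, z) for x, y, z in product(X, Y, Z)
     if (x, y) in G and (y, z) in H}
assert F == te_xy_z(G) & te_yz_x(H)
assert proj(F, (0, 1)) <= G and proj(F, (1, 2)) <= H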

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 14

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositional Analysis of Relations (cont.)

Let us now look at a way of defining the relational composition of
2-adic relations by using the set-theoretic operational resources
of intersections, projections, and tacit extensions.  To be more
specific, we will define the relational composition of a couple
of 2-adic relations in terms of their separate tacit extensions
to 3-adic relations, followed by the set intersection of these
tacit extensions, and then the projection of this intersection,
tantamount to the maximal 3-adic relation that is consistent
with the 'prima facie' 2-adic relational data, into a third
2-adic relation, the computed composition of the first two.

I usually think of this definition of composition as "Tarski's Trick",
because I learned it from a paper of Ulam that attributes it to Tarski,
but I would not be terribly surprised suddenly to recognize it in Peirce,
DeMorgan, or even Newton, for that matter.

| Ulam, S.M. & Bednarek, A.R.,
|"On the Theory of Relational Structures and Schemata for Parallel Computation",
| in Ulam & Bednarek (eds.), 'ABA', pp. 477-508, report dated 1977.
|
| Ulam, F. & Bednarek, A.R. (eds.),
|'Analogies Between Analogies:
| The Mathematical Reports of S.M. Ulam and his Los Alamos Collaborators',
| University of California Press, Berkeley, 1990.

We begin with a pair of 2-adic relations G, H c X x Y.

o-------------------------------------------------o
|                                                 |
|        o                       o                |
|        |\                      |\               |
|        | \                     | \              |
|        |  \                    |  \             |
|        |   \                   |   \            |
|        |    \                  |    \           |
|        |     \                 |     \          |
|        |   *  \                |   *  \         |
|        X   *   Y               X   *   Y        |
|         \  *   |                \  *   |        |
|          \ G   |                 \ H   |        |
|           \    |                  \    |        |
|            \   |                   \   |        |
|             \  |                    \  |        |
|              \ |                     \ |        |
|               \|                      \|        |
|                o                       o        |
|                                                 |
o-------------------------------------------------o
Figure 5.  Dyadic Relations G, H c X x Y

Mark that H is not exactly the same H that we had before,
because this H is presented in the same plane X x Y as G.
Whether you view isomorphic things to be the same things
or not, you still have to specify the exact isomorphisms
that are needed to transform any given representation of
a thing into a required representation of the same thing.
Let us imagine that we have done this, and say how later:

o-------------------------------------------------o
|                                                 |
|        o                               o        |
|        |\                             /|        |
|        | \                           / |        |
|        |  \                         /  |        |
|        |   \                       /   |        |
|        |    \                     /    |        |
|        |     \                   /     |        |
|        |   *  \                 /  *   |        |
|        X   *   Y               Y   *   Z        |
|         \  *   |               |   *  /         |
|          \ G   |               |   H'/          |
|           \    |               |    /           |
|            \   |               |   /            |
|             \  |               |  /             |
|              \ |               | /              |
|               \|               |/               |
|                o               o                |
|                                                 |
o-------------------------------------------------o
Figure 6.  Dyadic Relations G c X x Y and H' c Y x Z

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 15

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositional Analysis of Relations (cont.)

We continue with the trick already in progress,
whereby Ulam reports Tarski as defining the
relational composition P o Q of two 2-adic
relations P, Q c X x X in this way:

Definition.  P o Q  =  Proj_13 (P x X |^| X x Q).

To get the drift of this definition, one needs
to understand that it's written within a school
of thought that holds that all 2-adic relations
are, "without loss of generality", covered well
enough, "for all practical purposes", under the
aegis of subsets of a suitable cartesian square,
and thus of the form L c X x X.  So, if one has
started out with a 2-adic relation of the shape
L c U x V, one merely lets X = U |_| V, trading
in the initial L for a new L c X x X as need be.

Proj_13 is just the projection of the cartesian
cube X x X x X on the space of shape X x X that
is spanned by the first and the third domains,
but since they now have the same names and the
same contents it is necessary to distinguish
them by numbering their relational places.

Finally, the notation of the cartesian product sign "x"
is abused, or extended, depending on your point of view,
to signify a couple of other "products" with respect to
a 2-adic relation L c X x X and a subset W c X, like so:

Definition.  L x W  =  {<x, y, z> in X^3 : <x, y> in L and z in W}.

Definition.  W x L  =  {<x, y, z> in X^3 : x in W and <y, z> in L}.

Applying these definitions to the case P, Q c X x X, the two 2-adic relations
whose relational composition P o Q c X x X is about to be defined, one finds:

P x X  =  {<x, y, z> in X^3 : <x, y> in P and z in X},

X x Q  =  {<x, y, z> in X^3 : x in X and <y, z> in Q}.

I hope it is clear that these are just
the appropriate special cases of the
tacit extensions already defined.

P x X  =  TE_12_3 (P),

X x Q  =  TE_23_1 (Q).

In sum, or product, then, the expression

Proj_13 (P x X |^| X x Q)

is the same thing as

Proj_13 (TE_12_3 (P) |^| TE_23_1 (Q)),

which is generalized -- though, with respect to
one's school of thought, perhaps inessentially so --
by the form from my school that I give as follows:

Definition.  P o Q  =  Proj_XZ (TE_XY_Z (P) |^| TE_YZ_X (Q)).

The snapshots are in the developer ...
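
If it helps to see the trick in executable form, here is a minimal
sketch in Python, with relations modeled as sets of tuples;  the
function names below merely echo the notation above and belong to
no one's official code.

def te_xy_z(P, Z):
    """Tacit extension of P c X x Y to X x Y x Z."""
    return {(x, y, z) for (x, y) in P for z in Z}

def te_yz_x(Q, X):
    """Tacit extension of Q c Y x Z to X x Y x Z."""
    return {(x, y, z) for x in X for (y, z) in Q}

def proj_xz(F):
    """Projection of a 3-adic relation on its 1st and 3rd places."""
    return {(x, z) for (x, y, z) in F}

def compose(P, Q, X, Z):
    """P o Q  =  Proj_XZ (TE_XY_Z (P) |^| TE_YZ_X (Q))."""
    return proj_xz(te_xy_z(P, Z) & te_yz_x(Q, X))

For the strictly Tarskian form, where P, Q c X x X, one simply takes Z = X.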

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 16

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositional Analysis of Relations (cont.)

Let us now render the picture of our composition example
a little less impressionistic and a little more realistic
in the manner of its representation, and let us accomplish
this through the introduction of coordinates, in other words,
concrete names for the objects that we relate through various
forms of relations, 2-adic and 3-adic in the present instance.

Revising the Example along these lines
would give a Figure like the following:

o-------------------------------------------------o
|                                                 |
|                        o                        |
|                       /|\                       |
|                      / | \                      |
|                     /  |  \                     |
|                    /   |   \                    |
|                   /    |    \                   |
|                  /     |     \                  |
|                 /      |      \                 |
|                o       o       o                |
|                |\     / \     /|                |
|                | \   / F \   / |                |
|                |  \ /  *  \ /  |                |
|                |   \  /*\  /   |                |
|                |  / \//*\\/ \  |                |
|                | /  /\/ \/\  \ |                |
|                |/  ///\ /\\\  \|                |
|        o       X  ///  Y  \\\  Z       o        |
|        |\      7\///   |   \\\/7      /|        |
|        | \      6//    |    \\6      / |        |
|        |  \    //5\    |    /5\\    /  |        |
|        |   \  /// 4\   |   /4 \\\  /   |        |
|        |    \///   3\  |  /3   \\\/    |        |
|        |   G/\/     2\ | /2     \/\H   |        |
|        |   *//\      1\|/1      /\\*   |        |
|        X   *\  Y       o       Y  /*   Z        |
|        7\  *\\ |7             7| //*  /7        |
|         6\ |\\\|6             6|///| /6         |
|          5\| \\@5             5@// |/5          |
|           4@  \@4             4@/  @4           |
|            3\  @3             3@  /3            |
|             2\ |2             2| /2             |
|              1\|1             1|/1              |
|                o               o                |
|                                                 |
o-------------------------------------------------o
Figure 7.  F as the Intersection of TE(G) and TE(H)

By way of the representation that is accorded us by these coordinates,
we have the following data with regard to F c XxYxZ, G c XxY, H c YxZ.

F  =  4:3:4  +  4:4:4  +  4:5:4

G  =  4:3    +  4:4    +  4:5

H  =    3:4  +    4:4  +    5:4

Let us now verify that all of the proposed definitions,
formulas, and other relationships check out against the
concrete data of the composition example.  The ultimate
goal is to develop a clearer picture of what is going on
in the formula that expresses the relational composition
of 2-adic relations in terms of the extremal projection
of the intersection of their tacit 3-adic extensions:

G o H  =  Proj_XZ (TE_XY_Z (G) |^| TE_YZ_X (H)).

Here is the big picture, with all of the pieces:

o-------------------------------------------------o
|                                                 |
|                        o                        |
|                       / \                       |
|                      /   \                      |
|                     /     \                     |
|                    /       \                    |
|                   /         \                   |
|                  /           \                  |
|                 /    G o H    \                 |
|                X       *       Z                |
|                7\     /|\     /7                |
|                 6\   / | \   /6                 |
|                  5\ /  |  \ /5                  |
|                   4@   |   @4                   |
|                    3\  |  /3                    |
|                     2\ | /2                     |
|                      1\|/1                      |
|                        |                        |
|                        |                        |
|                        |                        |
|                       /|\                       |
|                      / | \                      |
|                     /  |  \                     |
|                    /   |   \                    |
|                   /    |    \                   |
|                  /     |     \                  |
|                 /      |      \                 |
|                o       |       o                |
|                |\     /|\     /|                |
|                | \   / F \   / |                |
|                |  \ /  *  \ /  |                |
|                |   \  /*\  /   |                |
|                |  / \//*\\/ \  |                |
|                | /  /\/ \/\  \ |                |
|                |/  ///\ /\\\  \|                |
|        o       X  ///  Y  \\\  Z       o        |
|        |\       \///   |   \\\/       /|        |
|        | \      ///    |    \\\      / |        |
|        |  \    ///\    |    /\\\    /  |        |
|        |   \  ///  \   |   /  \\\  /   |        |
|        |    \///    \  |  /    \\\/    |        |
|        |   G/\/      \ | /      \/\H   |        |
|        |   *//\       \|/       /\\*   |        |
|        X   *\  Y       o       Y  /*   Z        |
|        7\  *\\ |7             7| //*  /7        |
|         6\ |\\\|6             6|///| /6         |
|          5\| \\@5             5@// |/5          |
|           4@  \@4             4@/  @4           |
|            3\  @3             3@  /3            |
|             2\ |2             2| /2             |
|              1\|1             1|/1              |
|                o               o                |
|                                                 |
o-------------------------------------------------o
Figure 8.  G o H  =  Proj_XZ (TE(G) |^| TE(H))

All that remains to do now is to check the following
collection of data and derivations against Figure 8.

F  =  4:3:4  +  4:4:4  +  4:5:4

G  =  4:3    +  4:4    +  4:5

H  =    3:4  +    4:4  +    5:4

G o H  =  (4:3 + 4:4 + 4:5)(3:4 + 4:4 + 5:4)

       =  4:4

TE(G)  =  TE_XY_Z (G)

       =  Sum_z=1...7 (4:3:z + 4:4:z + 4:5:z)

       =  4:3:1 + 4:4:1 + 4:5:1 +
          4:3:2 + 4:4:2 + 4:5:2 +
          4:3:3 + 4:4:3 + 4:5:3 +
          4:3:4 + 4:4:4 + 4:5:4 +
          4:3:5 + 4:4:5 + 4:5:5 +
          4:3:6 + 4:4:6 + 4:5:6 +
          4:3:7 + 4:4:7 + 4:5:7

TE(H)  =  TE_YZ_X (H)

       =  Sum_x=1...7 (x:3:4 + x:4:4 + x:5:4)

       =  1:3:4 + 1:4:4 + 1:5:4 +
          2:3:4 + 2:4:4 + 2:5:4 +
          3:3:4 + 3:4:4 + 3:5:4 +
          4:3:4 + 4:4:4 + 4:5:4 +
          5:3:4 + 5:4:4 + 5:5:4 +
          6:3:4 + 6:4:4 + 6:5:4 +
          7:3:4 + 7:4:4 + 7:5:4

TE(G) |^| TE(H)  =  4:3:4 + 4:4:4 + 4:5:4

G o H  =  Proj_XZ (TE(G) |^| TE(H))

       =  Proj_XZ (4:3:4 + 4:4:4 + 4:5:4)

       =  4:4

By my lights, anyway, it all checks.
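
For those who like to check such things by machine, here is a quick
verification of the same derivation in Python, reading an elementary
relation i:j:k as the tuple (i, j, k);  the variable names are mine.

X = set(range(1, 8))                     # X = {1, ..., 7}
G = {(4, 3), (4, 4), (4, 5)}             # G = 4:3 + 4:4 + 4:5
H = {(3, 4), (4, 4), (5, 4)}             # H = 3:4 + 4:4 + 5:4

TE_G = {(x, y, z) for (x, y) in G for z in X}    # TE_XY_Z (G)
TE_H = {(x, y, z) for x in X for (y, z) in H}    # TE_YZ_X (H)

F = TE_G & TE_H                          # intersection of the extensions
assert F == {(4, 3, 4), (4, 4, 4), (4, 5, 4)}

G_o_H = {(x, z) for (x, y, z) in F}      # Proj_XZ of the intersection
assert G_o_H == {(4, 4)}                 # G o H = 4:4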

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 17

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Relational Composition as Logical Matrix Multiplication

We have it within our reach to pick up another way of representing
2-adic relations, namely, the representation as logical matrices,
and also to grasp the analogy between relational composition and
ordinary matrix multiplication as it appears in linear algebra.

To begin, while we still have the data of a very simple concrete case
in mind, let us reflect on what we did in our last Example in order
to find the composition G o H of the 2-adic relations G and H.

Here is the setup that we had before:

X  =  {1, 2, 3, 4, 5, 6, 7},

G  =  4:3 + 4:4 + 4:5  c  X x X,

H  =  3:4 + 4:4 + 5:4  c  X x X.

Let us recall the rule for finding the relational composition of a pair
of 2-adic relations.  Given the 2-adic relations P c X x Y, Q c Y x Z,
the "relational composition" of P and Q, in that order, is commonly
denoted as "P o Q" or more simply as "PQ" and obtained as follows:

To compute PQ, in general, where P and Q are 2-adic relations,
simply multiply out the two sums in the ordinary distributive
algebraic way, only subject to the following rule for finding
the product of two elementary relations of shapes a:b and c:d.

| (a:b)(c:d)  =  (a:d)  if b = c,
|
| (a:b)(c:d)  =    0    otherwise.

To find the relational composition G o H,
we write it as a quasi-algebraic product:

G o H  =  (4:3 + 4:4 + 4:5)(3:4 + 4:4 + 5:4).

Multiplying this out in accord with the applicable form
of distributive law one obtains the following expansion:

G o H  =  (4:3)(3:4) + (4:3)(4:4) + (4:3)(5:4) +
          (4:4)(3:4) + (4:4)(4:4) + (4:4)(5:4) +
          (4:5)(3:4) + (4:5)(4:4) + (4:5)(5:4)

Applying the rule that determines the product
of elementary relations, we obtain this array:

G o H  =  (4:4) +   0   +   0   +
            0   + (4:4) +   0   +
            0   +   0   + (4:4)

Since the plus sign in this context represents an operation of
logical disjunction or set-theoretic aggregation, all positive
multiplicities count as one, and this gives the ultimate result:

G o H  =  4:4
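
The same multiplication can be carried out mechanically.  Here is a
small sketch in Python of the rule for products of elementary
relations and the distributive expansion, with pairs standing in
for terms of the form a:b;  the function names are mine.

def elementary_product(ab, cd):
    """(a:b)(c:d) = (a:d) if b = c, otherwise the empty relation."""
    (a, b), (c, d) = ab, cd
    return {(a, d)} if b == c else set()

def compose_by_expansion(P, Q):
    """Union of all pairwise products of the elementary relations."""
    result = set()
    for ab in P:
        for cd in Q:
            result |= elementary_product(ab, cd)
    return result

G = {(4, 3), (4, 4), (4, 5)}
H = {(3, 4), (4, 4), (5, 4)}
assert compose_by_expansion(G, H) == {(4, 4)}    # G o H = 4:4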

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 18

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Relational Composition as Logical Matrix Multiplication (cont.)

With an eye to extracting a general formula, let us now examine
what we did in multiplying the 2-adic relations G and H together
to obtain their relational composite G o H.

Given the space X = {1, 2, 3, 4, 5, 6, 7}, whose cardinality |X| is 7,
we naturally observe that there are |X x X| = |X| x |X| = 7 x 7 = 49
elementary relations of the form i:j, where i and j range over the
space X.  Although they might be organized in many different ways,
it is convenient to regard the collection of elementary relations
as being arranged in a lexicographic block of the following form:

1:1  1:2  1:3  1:4  1:5  1:6  1:7
2:1  2:2  2:3  2:4  2:5  2:6  2:7
3:1  3:2  3:3  3:4  3:5  3:6  3:7
4:1  4:2  4:3  4:4  4:5  4:6  4:7
5:1  5:2  5:3  5:4  5:5  5:6  5:7
6:1  6:2  6:3  6:4  6:5  6:6  6:7
7:1  7:2  7:3  7:4  7:5  7:6  7:7

We may think of G and H as being logical sums of the following forms:

G  =  Sum_ij G_ij (i:j)

H  =  Sum_ij H_ij (i:j)

The notation "Sum_ij" indicates a logical sum over the collection of
elementary relations i:j, while the factors G_ij, H_ij are values in
the boolean domain B = {0, 1} that are known as the "coefficients"
of the relations G, H, respectively, with regard to each of the
elementary relations i:j in turn.

In general, for a 2-adic relation L, the coefficient L_ij of
the elementary relation i:j in the relation L will be 0 or 1,
respectively, as i:j is excluded from or included in L.

Given all this, we may write out the expansions of G and H as follows:

G  =  4:3 + 4:4 + 4:5  =

0(1:1) + 0(1:2) + 0(1:3) + 0(1:4) + 0(1:5) + 0(1:6) + 0(1:7) +
0(2:1) + 0(2:2) + 0(2:3) + 0(2:4) + 0(2:5) + 0(2:6) + 0(2:7) +
0(3:1) + 0(3:2) + 0(3:3) + 0(3:4) + 0(3:5) + 0(3:6) + 0(3:7) +
0(4:1) + 0(4:2) + 1(4:3) + 1(4:4) + 1(4:5) + 0(4:6) + 0(4:7) +
0(5:1) + 0(5:2) + 0(5:3) + 0(5:4) + 0(5:5) + 0(5:6) + 0(5:7) +
0(6:1) + 0(6:2) + 0(6:3) + 0(6:4) + 0(6:5) + 0(6:6) + 0(6:7) +
0(7:1) + 0(7:2) + 0(7:3) + 0(7:4) + 0(7:5) + 0(7:6) + 0(7:7)

H  =  3:4 + 4:4 + 5:4  =

0(1:1) + 0(1:2) + 0(1:3) + 0(1:4) + 0(1:5) + 0(1:6) + 0(1:7) +
0(2:1) + 0(2:2) + 0(2:3) + 0(2:4) + 0(2:5) + 0(2:6) + 0(2:7) +
0(3:1) + 0(3:2) + 0(3:3) + 1(3:4) + 0(3:5) + 0(3:6) + 0(3:7) +
0(4:1) + 0(4:2) + 0(4:3) + 1(4:4) + 0(4:5) + 0(4:6) + 0(4:7) +
0(5:1) + 0(5:2) + 0(5:3) + 1(5:4) + 0(5:5) + 0(5:6) + 0(5:7) +
0(6:1) + 0(6:2) + 0(6:3) + 0(6:4) + 0(6:5) + 0(6:6) + 0(6:7) +
0(7:1) + 0(7:2) + 0(7:3) + 0(7:4) + 0(7:5) + 0(7:6) + 0(7:7)

Presenting just the coefficients of G and H on the above plan:

G  =

0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 1 1 1 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0

H  =

0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 1 0 0 0
0 0 0 1 0 0 0
0 0 0 1 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0

These are the logical matrix representations of the 2-adic relations G and H.
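
Should one care to generate such arrays by machine, a couple of lines
of Python suffice, taking the coefficient L_ij to be 1 or 0 according
as the pair (i, j) is in the relation or not;  the code is only a
sketch of my own devising.

X = range(1, 8)
G = {(4, 3), (4, 4), (4, 5)}
H = {(3, 4), (4, 4), (5, 4)}

def logical_matrix(L, X):
    """The coefficient array of L:  1 where i:j is in L, else 0."""
    return [[1 if (i, j) in L else 0 for j in X] for i in X]

for row in logical_matrix(G, X):
    print(*row)        # reproduces the 7 x 7 array shown above for G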

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 19

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Relational Composition as Logical Matrix Multiplication (cont.)

If the 2-adic relations G and H are viewed as logical sums,
then their relational composition G o H can be regarded as
a product of sums, a fact that can be indicated as follows:

G o H  =  (Sum_ij G_ij (i:j))(Sum_ij H_ij (i:j)).

G o H is itself a 2-adic relation over the same space X,
in other words, G o H c X x X, and this means that G o H
must be amenable to being written as a logical sum of the
following form:

G o H  =  Sum_ij (G o H)_ij (i:j).

In this formula, (G o H)_ij is the coefficient of
G o H with respect to the elementary relation i:j.

One of the best ways to reason out what G o H should be is to ask
oneself what its coefficient (G o H)_ij should be for each of the
elementary relations i:j in turn.

So let us pose the question:

(G o H)_ij  =  ?

In order to answer this question, it helps to realize
that the indicated product given above can be written
in the following equivalent form:

G o H  =  (Sum_ik G_ik (i:k))(Sum_kj H_kj (k:j)).

A moment's thought will tell us that (G o H)_ij = 1
if and only if there is an element k in X such that
G_ik = 1 and H_kj = 1.

Consequently, we have the result:

(G o H)_ij  =  Sum_k (G_ik H_kj).

This follows from the properties of boolean arithmetic,
specifically, the fact that the product G_ik H_kj is 1
if and only if both G_ik and H_kj are 1, and from the
fact that Sum_k F_k is 1 if and only if some F_k is 1.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 20

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Relational Composition as Logical Matrix Multiplication (cont.)

All that remains in order to obtain a computational formula for
the relational composite G o H of the 2-adic relations G and H
is to collect the coefficients (G o H)_ij over the appropriate
basis of elementary relations i:j, as i and j range through X.

G o H  =  Sum_ij (G o H)_ij (i:j)  =  Sum_ij (Sum_k (G_ik H_kj)) (i:j).

This is the logical analogue of matrix multiplication in linear algebra,
the difference in the logical setting being that all of the operations
performed on coefficients take place in a system of boolean arithmetic
where summation corresponds to logical disjunction and multiplication
corresponds to logical conjunction.

By way of disentangling this formula, we notice that the form
Sum_k (G_ik H_kj) is what is usually called a "scalar product".
In this case it is the scalar product of the i^th row of G with
the j^th column of H.

To make this statement more concrete, let us go back to
the particular examples of G and H that we came in with:

G  =

0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 1 1 1 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0

H  =

0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 1 0 0 0
0 0 0 1 0 0 0
0 0 0 1 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0

The formula for computing G o H tells us this:

(G o H)_ij

   =   the ij^th entry in the matrix representation for G o H

   =   the entry in the i^th row and the j^th column of G o H

   =   the scalar product of the i^th row of G with the j^th column of H

   =   Sum_k (G_ik H_kj)

As it happens, we are enabled to make exceedingly light work of this example,
since there is only one row of G and one column of H that are not all zeroes.
Taking the scalar product, in a logical way, of the fourth row of G with the
fourth column of H produces the sole non-zero entry for the matrix of G o H.

G o H  =

0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 1 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
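
For the record, here is a sketch in Python of the boolean matrix
product just described, where the sums are disjunctions and the
products are conjunctions;  the function name is mine.

def bool_mat_mult(A, B):
    """(A B)_ij  =  Sum_k (A_ik B_kj) in boolean arithmetic."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)]
            for i in range(n)]

G_mat = [[1 if (i, j) in {(4, 3), (4, 4), (4, 5)} else 0
          for j in range(1, 8)] for i in range(1, 8)]
H_mat = [[1 if (i, j) in {(3, 4), (4, 4), (5, 4)} else 0
          for j in range(1, 8)] for i in range(1, 8)]

GoH_mat = bool_mat_mult(G_mat, H_mat)
assert GoH_mat[3][3] == 1               # the (4, 4) entry, 0-indexed
assert sum(map(sum, GoH_mat)) == 1      # and it is the only 1 in the array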

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 21

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Relational Composition as Logical Matrix Multiplication (cont.)

There is another form of representation for 2-adic relations that
is useful to keep in mind, especially for its ability to render the
logic of many complex formulas almost instantly understandable to the
mind's eye.  This is the representation in terms of "bipartite graphs",
or "bigraphs" for short.

Here is what G and H look like in the bigraph picture:

    1   2   3   4   5   6   7
X   o   o   o   o   o   o   o
               /|\
G             / | \
             /  |  \
X   o   o   o   o   o   o   o
    1   2   3   4   5   6   7

Figure 9.  G = 4:3 + 4:4 + 4:5

    1   2   3   4   5   6   7
X   o   o   o   o   o   o   o
             \  |  /
H             \ | /
               \|/
X   o   o   o   o   o   o   o
    1   2   3   4   5   6   7

Figure 10.  H = 3:4 + 4:4 + 5:4

These graphs may be read to say:
G puts 4 in relation to 3, 4, 5.
H puts 3, 4, 5 in relation to 4.

To form the composite relation G o H, we simply follow the bigraph for G
by the bigraph for H, here arranging the bigraphs in order down the page,
and then we proceed to "squeeze out the middle man", that is, we count any
non-empty set of paths of length two between two nodes as equivalent to
a single directed edge between them in the composite bigraph for G o H.

Here is how it looks in pictures:

    1   2   3   4   5   6   7
X   o   o   o   o   o   o   o
               /|\
G             / | \
             /  |  \
X   o   o   o   o   o   o   o
             \  |  /
H             \ | /
               \|/
X   o   o   o   o   o   o   o
    1   2   3   4   5   6   7

Figure 11.  G Followed By H

    1   2   3   4   5   6   7
X   o   o   o   o   o   o   o
                |
G o H           |
                |
X   o   o   o   o   o   o   o
    1   2   3   4   5   6   7

Figure 12.  G Composed With H

Once again we find that G o H = 4:4.
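
The bigraph picture translates directly into code.  Here is a sketch
in Python that reads each relation as a map from nodes to their sets
of outgoing edges and then squeezes out the middle nodes;  again, the
names are mine and the representation is only one of many possible.

from collections import defaultdict

def adjacency(L):
    """Outgoing edges of the bigraph of L, as a map node -> nodes."""
    adj = defaultdict(set)
    for (i, k) in L:
        adj[i].add(k)
    return adj

def compose_bigraphs(P, Q):
    """Follow P by Q, keeping an edge i -> j for any path i -> k -> j."""
    P_adj, Q_adj = adjacency(P), adjacency(Q)
    return {(i, j)
            for i, middles in P_adj.items()
            for k in middles
            for j in Q_adj.get(k, set())}

G = {(4, 3), (4, 4), (4, 5)}
H = {(3, 4), (4, 4), (5, 4)}
assert compose_bigraphs(G, H) == {(4, 4)}    # agrees with Figure 12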

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 22

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Relational Composition as Logical Matrix Multiplication (cont.)

We have now seen three different representations of 2-adic relations.
If one has a strong preference for letters, or numbers, or pictures,
then one may be tempted to take one or the other as being canonical,
but each of them will be found to have its peculiar advantages and
disadvantages in any given application, and the maximum advantage
is therefore approached by keeping all three of them in mind.

To see the promised utility of the bigraph picture of 2-adic relations,
let us devise a slightly more complex example of a composition problem,
and use it to illustrate the logic of the matrix multiplication formula.

Keeping to the same space X = {1, 2, 3, 4, 5, 6, 7},
define the 2-adic relations M, N c X x X as follows:

M  =  2:1 + 2:2 + 2:3 + 4:3 + 4:4 + 4:5 + 6:5 + 6:6 + 6:7

N  =  1:1 + 2:1 + 3:3 + 4:3    +    4:5 + 5:5 + 6:7 + 7:7

Here are the bigraph pictures:

    1   2   3   4   5   6   7
X   o   o   o   o   o   o   o
       /|\     /|\     /|\
M     / | \   / | \   / | \
     /  |  \ /  |  \ /  |  \
X   o   o   o   o   o   o   o
    1   2   3   4   5   6   7

Figure 13.  Dyadic Relation M

    1   2   3   4   5   6   7
X   o   o   o   o   o   o   o
    |  /    |  / \  |    \  |
N   | /     | /   \ |     \ |
    |/      |/     \|      \|
X   o   o   o   o   o   o   o
    1   2   3   4   5   6   7

Figure 14.  Dyadic Relation N

To form the composite relation M o N, we simply follow the bigraph for M
by the bigraph for N, here arranging the bigraphs in order down the page,
and then we proceed to "edge out the middle person", that is, we count any
non-empty set of paths of length two between two nodes as equivalent to
a single directed edge between them in the composite bigraph for M o N.

Here is how it looks in pictures:

    1   2   3   4   5   6   7
X   o   o   o   o   o   o   o
       /|\     /|\     /|\
M     / | \   / | \   / | \
     /  |  \ /  |  \ /  |  \
X   o   o   o   o   o   o   o
    |  /    |  / \  |    \  |
N   | /     | /   \ |     \ |
    |/      |/     \|      \|
X   o   o   o   o   o   o   o
    1   2   3   4   5   6   7

Figure 15.  M Followed By N

    1   2   3   4   5   6   7
X   o   o   o   o   o   o   o
       / \     / \     / \
MoN   /   \   /   \   /   \
     /     \ /     \ /     \
X   o   o   o   o   o   o   o
    1   2   3   4   5   6   7

Figure 16.  M Composed With N

Let us hark back to that mysterious matrix multiplication formula,
and see how it appears in the light of the bigraph representation.

The coefficient of the composition M o N
between i and j in X is given as follows:

(M o N)_ij  =  Sum_k (M_ik N_kj)

Graphically interpreted, this is a "sum over paths".
Starting at the node i, M_ik being 1 indicates that
there is an edge in the bigraph of M from node i to
node k, and N_kj being 1 indicates that there is an
edge in the bigraph of N from node k to node j.  So
the Sum_k ranges over all possible intermediaries k,
ascending from 0 to 1 just as soon as there happens
to be some path of length two between nodes i and j.
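
The "sum over paths" reading is easy to try out on the present
example.  The following sketch in Python raises the coefficient of
(i, j) to 1 exactly when some intermediary k supplies a path of
length two from i to j;  the names, as usual, are only mine.

X = range(1, 8)
M = {(2, 1), (2, 2), (2, 3), (4, 3), (4, 4), (4, 5), (6, 5), (6, 6), (6, 7)}
N = {(1, 1), (2, 1), (3, 3), (4, 3), (4, 5), (5, 5), (6, 7), (7, 7)}

M_o_N = {(i, j) for i in X for j in X
         if any((i, k) in M and (k, j) in N for k in X)}

print(sorted(M_o_N))
# [(2, 1), (2, 3), (4, 3), (4, 5), (6, 5), (6, 7)]  --  cf. Figure 16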

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note 23

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositional Analysis of Relations (cont.)

There are a number of very instructive observations that we might make
at this point.  One of the most striking is that a composite relation
can be a very simple sort of relation, for all its being compounded
of other relations.  Indeed, in our recent example, G o H is the
elementary relation 4:4, and yet it is evidently composed of the
2-adic relations G and H.  What's more, there is nothing unique
about this decomposition, as many other pairs of factors would
be capable of producing the same result.  What this tells us
is that the complexity of a 2-adic relation is not strongly
related to its properties under relational decomposition.
Thus, if we are seeking a "structure theory" of 2-adic
relations that would identify irreducible primitives
the way that the structure theory of natural numbers
identifies prime numbers as its basis, it will have
to involve other sorts of considerations than just
the relational decomposition of 2-adic relations.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RIG.  Relations In General

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RIG.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

In a realistic computational framework, where incomplete and inconsistent
information is the rule, it is necessary to work with genera of relations
that are increasingly relaxed in their constraining characters but still
preserve a measure of analogy with the fundamental species of relations
that are found to be prevalent in perfect information contexts.

In the present application, the kinds of relations of primary interest are
functions, equivalence relations, and other species of relations that are
defined by their axiomatic properties.  Thus, the information-theoretic
generalizations of these structures lead to partially defined functions
and partially constrained versions of these specially defined classes
of relations.

The purpose of this Subsection is to outline the kinds of generalized functions
and other families of relations that are needed to extend the discussion of the
present example.  In this connection, to frame the problem in concrete terms,
I need to adapt the "equivalence class" notation for two generalizations of
equivalence relations, to be defined below.  But first, a number of broader
issues need to be treated.

Generally speaking, one is free to interpret references
to generalized objects in either one of two fashions:

1.  Distinct indications of partially formed versions of objects.

2.  Partially informed descriptions of distinct types of objects.

I refer to these choices as the "object-theoretic"
and the "sign-theoretic" options, respectively.

The object-theoretic way of reading partial signs assumes that
vague and general references nevertheless have their objective
denotations, but merely to vague and general objects.

The sign-theoretic way of reading partial signs ascribes the
partialities of information to the characters of the signs,
the expressions, and the texts that are doing the denoting.

In most of the cases that arise in casual discussion the choice
between these conventions is purely stylistic.  However, in many
of the more intricate situations that arise in formal discussion,
the object-theoretic choice frequently fails utterly, and whenever
the utmost care is required it will usually be a due attention to
the partialities of signs that saves the day.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RIG.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Local Incidence Properties of Relations

In order to speak of generalized relations I need to outline the dimensions
of variation along which I intend the characters of already familiar orders
of relations to be broadened.  Generally speaking, the taxonomic features
of k-place relations that I wish to liberalize can be read off from their
"local incidence properties" (LIP's).

A "local incidence property" (LIP) of a relation L
is one that depends on the properties of specified
subsets of L that are known as its "local flags".

Suppose that L is a k-place relation L c X_1 x ... x X_k.
Choose a relational domain X_j and one of its elements x.
The notation "L_x@j" denotes a subset of P that we shall
call "the flag of L with x at j", or "the x@j-flag of L".
Given all this, the flag L_x@j c L is defined as follows:

L_x@j  =  {<x_1, ..., x_j, ..., x_k> in L  :  x_j = x}.

Any property C of the local flag L_x@j c L may then be classified as
a "local incidence property" of L with respect to the locus "x at j".

A k-adic relation L c X_1 x ... x X_k is "C-regular at j" if and only if
every flag of L with x at j has the property C, where x is taken to vary
over the "theme" of the fixed domain X_j.  Coded up more symbolically,
L is C-regular at j if and only if C(L_x@j) is true for all x in X_j.
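
A minimal sketch in Python may serve to fix the idea, with a relation
given as a set of tuples, places indexed from 1, and the property C
passed in as an arbitrary predicate on sets;  the function names are
mine.

def flag(L, x, j):
    """L_x@j :  the tuples of L whose j-th place equals x."""
    return {t for t in L if t[j - 1] == x}

def c_regular_at(L, j, X_j, C):
    """True if C(L_x@j) holds for every x in the domain X_j."""
    return all(C(flag(L, x, j)) for x in X_j)

# Example:  "non-empty" as the property C, over a small 2-adic relation.
L = {(1, "a"), (2, "a"), (2, "b")}
assert c_regular_at(L, 1, {1, 2}, lambda F: len(F) >= 1)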

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RIG.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Numerical Incidence Properties of Relations

Of particular interest are the local incidence properties of relations
that can be calculated from the cardinalities of their local flags, and
these are naturally enough called "numerical incidence properties" (NIP's).

For example, L is said to be "c-regular at j" if and only if
the cardinality of the local flag L_x@j is c for all x in X_j,
coded in symbols, if and only if |L_x@j| = c for all x in X_j.

In a similar fashion, one can define the NIP's "<c-regular at j",
">c-regular at j", and so on.  For ease of reference, I record a
few of these definitions here:

L is  c-regular at j   iff   |L_x@j| = c for all x in X_j.

L is <c-regular at j   iff   |L_x@j| < c for all x in X_j.

L is >c-regular at j   iff   |L_x@j| > c for all x in X_j.

The definition of a local flag can be broadened from a point x in X_j
to a subset M c X_j, arriving at the definition of a "regional flag".
Suppose that L c X_1 x ... x X_k, and choose a subset M c X_j.  Then
"L_M@j" denotes a subset of L called "the flag of L with M at j", or
"the M@j-flag of L".  The regional flag L_M@j is defined as follows:

L_M@j  =  {<x_1, ..., x_j, ..., x_k> in L  :  x_j in M}.

Returning to 2-adic relations, it is useful to describe some
familiar classes of objects in terms of their local and their
numerical incidence properties.  Let L c S x T be an arbitrary
2-adic relation.  The following properties of L can be defined:

L is "total" at S     iff   L is (>=1)-regular at S.

L is "total" at T     iff   L is (>=1)-regular at T.

L is "tubular" at S   iff   L is (=<1)-regular at S.

L is "tubular" at T   iff   L is (=<1)-regular at T.

If L c S x T is tubular at S, then L is called a "partial function"
or a "prefunction" from S to T, sometimes indicated by giving L an
alternate name, say, "p", and writing L = p : S ~> T.

Just by way of formalizing the definition:

L = p : S ~> T   iff   L is tubular at S.

If L is a prefunction p : S ~> T that happens to be total at S, then L
is called a "function" from S to T, indicated by writing L = f : S -> T.
To say that a relation L c S x T is totally tubular at S is to say that
it is 1-regular at S.  Thus, we may formalize the following definition:

L = f : S -> T   iff   L is 1-regular at S.

In the case of a function f : S -> T, one
has the following additional definitions:

f is "surjective"   iff   f is total at T.

f is "injective"    iff   f is tubular at T.

f is "bijective"    iff   f is 1-regular at T.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RIG.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o



o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RIG.  Work Area

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

A few more comments on terminology are needed in further preparation.
One of the constant practical demands encountered in this project is
to have available a language and a calculus for relations that can
permit discussion and calculation to range over functions, 2-adic
relations, and k-place relations with a minimum amount of trouble
in making transitions from subject to subject and in drawing the
appropriate generalizations.

Up to this point in discussing sign relations, as exemplified by the ones
that arise in the A and B dialogue, the analysis has concerned itself almost
exclusively with the relationship of the 3-adic sign relations L(A) and L(B)
to the 2-adic relations that are obtained from them by taking their projections
on various relational planes.  In particular, a major focus of interest has been
the extent to which the salient properties of sign relations can be gleaned from
a study of their 2-adic projections.

Two important topics for later discussion will be concerned with:  (1) the sense in which every n-place relation can be decomposed in terms of triadic relations, and (2) the fact that not every triadic relation can be further reduced to conjunctions of dyadic relations.

It is one of the constant technical needs of this project to maintain a flexible language for talking about relations, one that permits discussion to shift from functional to relational emphases and from dyadic relations to n-place relations with a maximum of ease.  It is not possible to do this without violating the favored conventions of one technical linguistic community or another.  I have chosen a strategy of use that respects as many different usages as possible, but in the end it cannot help but to reflect a few personal choices.  To some extent my choices are guided by an interest in developing the information, computation, and decision theoretic aspects of the mathematical language used.  Eventually, this requires one to render every distinction, even that of appearing or not in a particular category, as being relative to an interpretive framework.

While operating in this context, it is necessary to distinguish "domains" in the broad sense from "domains of definition" in the narrow sense.  For n-place relations it is convenient to use the terms "domain" and "quorum" as references to the wider and narrower sets, respectively.

For a k-place relation L c X_1 x ... x X_k, I maintain the following usages:

1.  The notation "Dom_j (L)" denotes the set X_j, known
    as the "domain of L at j" or the "j^th domain of L".

2.  The notation "Quo_j (L)" denotes a subset of X_j that is
    called the "quorum of L at j" or the "j^th quorum of L",
    defined as follows:

Quo_j (L)  =  the largest Q c X_j such that L_Q@j is (>=1)-regular at j,

           =  the largest Q c X_j such that |L_x@j| >= 1 for all x in Q c X_j.

In the special case of a 2-adic relation L c X_1 x X_2 = S x T, including
the case of a partial function p : S ~> T or a total function f : S -> T,
I will stick to the following conventions:

1.  The arbitrarily designated domains X_1 = S and X_2 = T that form the widest sets admitted to the dyadic relation are referred to as the "domain" or "source" and the "codomain" or "target", respectively, of the relation in question.

2.  The terms "quota" and "range" are reserved for those uniquely defined sets whose elements actually appear as the 1st and 2nd members, respectively, of the ordered pairs in that relation.  Thus, for a dyadic relation R c S x T, I let Quo (R) = Quo_1 (R) c S be identified with what is usually called the "domain of definition" of R, and I let Ran (R) = Quo_2 (R) c T be identified with the usual range of R.

A "partial equivalence relation" (PER) on a set X is a relation R c XxX that is an equivalence relation on its domain of definition Quo (R) c X.  In this situation, [x]R is empty for each x in X that is not in Quo (R).  Another way of reaching the same concept is to call a PER a dyadic relation that is symmetric and transitive, but not necessarily reflexive.  Like the "self-identical elements" of old that epitomized the very definition of self-consistent existence in classical logic, the property of being a self-related or self-equivalent element in the purview of a PER on X singles out the members of Quo (R) as those for which a properly meaningful existence can be contemplated.

A "moderate equivalence relation" (MER) on the "modus" M c X is a relation on X whose restriction to M is an equivalence relation on M.  In symbols, R c XxX such that R|M c MxM is an equivalence relation.  Notice that the subset of restriction, or modus M, is a part of the definition, so the same relation R on X could be a MER or not depending on the choice of M.  In spite of how it sounds, a moderate equivalence relation can have more ordered pairs in it than the ordinary sort of equivalence relation on the same set.

In applying the equivalence class notation to a sign relation R, the definitions and examples considered so far only cover the case where the connotative component R_SI is a total equivalence relation on the whole syntactic domain S.  The next job is to adapt this usage to PER's.

If R is a sign relation whose syntactic projection R_SI is a PER on S, then I still write "[s]R" for the "equivalence class of s under R_SI".  But now, [s]R can be empty if s has no interpretant, that is, if s lies outside the "adequately meaningful" subset of the syntactic domain, where synonymy and equivalence of meaning are defined.  Otherwise, if s has an i then it also has an o, by the definition of R_SI.  In this case, there is a triple <o, s, i> in R, and it is permissible to let [o]R = [s]R.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

TOR.  Theory Of Relations

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

TOR.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Let's see if we can build up a working theory of relations,
starting out as simply as we possibly can, forgetting most
of the finer subtleties of Peirce's distinctions, and yet
trying to build a system that will be roughly compatible
with the sorts of concepts that Peirce appeared to have
in mind, as it appears, that is, from reading what he
wrote, and from what we may know about the generic
mathematical background of his day.

If it were me, I would begin with a toy universe
like X = {i, j, k}, where the signs "i", "j", "k"
are taken to denote the distinct objects i, j, k,
respectively.  It's not much, but it's enough for
a start.

Here are some relations that immediately,
if not exactly unmediatedly, come to mind:

The "2-identity relation" I_2 on X is the following set of ordered pairs:

I_2 = {(i, i), (j, j), (k, k)}

I will probably call it "I", not to be confused with me,
and bowing to convention call it the "identity relation".

For ease of expression, I will write relations in one
of the styles that Peirce was accustomed to write them,
in which the identity relation would be written like so:

I = i:i + j:j + k:k

He often called sets by the name of "aggregates" or "logical sums",
and so the plus sign here signifies only the aggregation of these
ordered pairs into a logical sum, or a "set" to us.

In this vein, the 3-identity relation over X would take the form:

I_3 = i:i:i + j:j:j + k:k:k

In general, a term of the form "x:y:z" denotes the
ordered triple that by any other name is (x, y, z).
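
For what it may be worth, the toy universe is easily set down in
Python, reading a term x:y as the ordered pair (x, y) and x:y:z as
the ordered triple (x, y, z);  the variable names are mine.

X = {"i", "j", "k"}

I_2 = {(x, x) for x in X}          # I    =  i:i + j:j + k:k
I_3 = {(x, x, x) for x in X}       # I_3  =  i:i:i + j:j:j + k:k:k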

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

TOR.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

In view of the fact that X = {i, j, k} is a finite universe,
indeed, such a tiny universe, we can easily figure out how
many relations over X there are for any finite arity n
that you might care to name.

n = 1.  A 1-adic relation is a subset of X^1 = X.
        There are exactly 2^3 = 8 subsets of X.
        So there are 8 1-adic relations over X.

n = 2.  A 2-adic relation is a subset of X^2 = X x X.
        There are 3 x 3 = 3^2 = 9 ordered 2-tuples in X^2.
        So there are just 2^9 = 512 2-adic relations over X.

n = 3.  A 3-adic relation is a subset of X^3 = X x X x X.
        There are 3 x 3 x 3 = 3^3 = 27 ordered 3-tuples in X^3.
        So there are just 2^27 = 134217728 3-adic relations over X.

Like the man said:

| Of triadic Being the multitude of forms is so terrific that
| I have usually shrunk from the task of enumerating them ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

TOR.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Thus we see the origin and meaning of the term "numerical identity".
The account that I gave last time, enumerating the first three
generations of relations over the universe X = {i, j, k}, will ever
after serve to remind us of one of the things that can give us the
most confidence that we have comprehended the numerical identities
of any types of entities, to wit, a putative ability to count them.

Let us make a few observations of general bearing that we can
already see exhibited in this early but blossoming universe X,
and also take the occasion to set down a few bits of notation.

Let me introduce a bit of language that comes up here.
Very roughly speaking -- for speaking this way ignores
a point of subtlety concerning the distinction between
extensions and intensions, and another point concerned
with the distinction between the "relative term" and
the "relation" proper -- Peirce called the elements
of a relation, its tuples, by the suggestive name
of "elementary relations".  Let us do likewise.

For example, the elementary 2-adic relations that
serve as a basis for all of the 2-adic relations
over X are just the 3 x 3 = 9 ordered 2-tuples
that I list here:

i:i, i:j, i:k,
j:i, j:j, j:k,
k:i, k:j, k:k.

I hope that you will discover this form to be
a suggestive array and not a block to inquiry.

Now that we have an initial notion of what a relation is,
namely, an aggregate, class, collection, set, logical sum,
by whatever name of a similar sort you may wish to call it,
of ordered tuples, and now that we will forever after never
confuse a relation with one of its elemental tuples -- for,
yes, indeed, even a set that consists of a single element,
a "singleton" so-called, is counted as a different entity
from the element thereof -- we may begin to consider the
types of operations to which these relations are subject.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

TOR.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I introduce a few more bits of
terminology that become useful
at this point:

The "cardinality" |A| of a set A is, roughly speaking,
nothing more or less than the number of elements in A.
In the sorts of finite cases that presently occupy us,
roughly speaking will be ready enough.

The "power set" Pow(A) of a set A is the set of all subsets of A.
Hence, in so far as it concerns the finite case, |Pow(A)| = 2^|A|.

Let's use "bang-bang brackets", excuse my Anglish, taking the form "!...!",
as a font-shifting device, to transcribe Fraktur, Greek, or script letters,
for instance, !L! for script L, !P! for Greek Pi, !S! for Greek Sigma, etc.
If we start running out of letters, I will shift to using "scrip brackets",
taking the form "$...$", for script letters, but I'd really prefer not to.

As a convenience, let us institute the following notations:

1.  !L!_1 = Pow(X^1) = {L : L c X^1} = the set of 1-adic relations on X.

2.  !L!_2 = Pow(X^2) = {L : L c X^2} = the set of 2-adic relations on X.

3.  !L!_3 = Pow(X^3) = {L : L c X^3} = the set of 3-adic relations on X.

As an application, let us practice the use of these conventions by
employing them to dress up the facts that we have already observed:

1.  |!L!_1| = 2^(3^1) = 2^3  =  8.

2.  |!L!_2| = 2^(3^2) = 2^9  =  512.

3.  |!L!_3| = 2^(3^3) = 2^27 =  134217728.
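
A one-line check of these counts, on the same reading |Pow(X^n)| =
2^(|X|^n), runs as follows.

assert [2**(3**n) for n in (1, 2, 3)] == [8, 512, 134217728]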

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

TOR.  Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

In as much as relations are nothing but aggregates, sets, or logical sums
of elementary relations (or ordered tuples), we may do with relations the
whole variety of familiar set-theoretic operations that we are accustomed
to carry out with the category of sets from which relations inherit their
properties as sets, for instance:  complementation, intersection, union,
asymmetric difference, symmetric difference, and all the rest.

As a general rule, especially in the earliest papers, Peirce will analogize
intersections with products, unions with sums, whose sigils are !P! and !S!,
respectively, and he also dubs them the "non-relative composite or product" and
the "non-relative aggregate or sum", respectively, as best I can recall.

By way of acquiring some practical experience with the materials and tools in
this shop, let us devise a concrete example, whose study should be sufficient.

| Example 1.
|
| A = i:j + j:k + k:i
|
| B = i:k + j:i + k:j
|
| L = i:i + i:j + i:k + j:i + j:j + j:k + k:i + k:j + k:k

For the immediate if not exactly the unmitigated future,
we will contemplate only those sorts of operations that
are defined on relations of the same arities or types.
It is possible to get fancier about this, speaking of
formal objects called "relation complexes", but then
we would have to be very careful about what we mean
by expressions like "i + i:j + i:j:k", and how the
sets !L!_1, !L!_2, !L!_3, ... "embed" or "inject"
themselves into a more encompassing family !L!.
I judge that this'd be too distracting at this
stage of the game, so let's not go there, yet.

| Example 1.
|
| 1.  The complement of A in L.
|
|     ~A  =  L - A  =  i:i + i:k + j:i + j:j + k:j + k:k
|
| 2.  The complement of B in L.
|
|     ~B  =  L - B  =  i:i + i:j + j:j + j:k + k:i + k:k
|
| 3.  The intersection or non-relative product of A and B.
|
|     A |^| B  =  {}  =  the empty set, so A and B are "disjoint".
|
| 4.  The union or non-relative sum of A and B.
|
|     A |_| B  =  A + B  =  i:j + j:k + k:i + i:k + j:i + k:j
|
| 5.  Since A and B are disjoint, we have the following facts
|     about their differences and their symmetric difference:
|
|     A - B  =  A
|     B - A  =  B
|     A ± B  =  A + B  =  A |_| B

I am writing this from memories in deep cold-storage.
Modulo the factors of memory and fallibility I think
that this is more or less how it goes but I may need
to go back and retrieve Peirce's actual notations at
some point.
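
Those who prefer their memories refreshed by machine can check the
list above with a few lines of Python, reading each elementary
relation x:y as the pair (x, y);  the variable names are mine.

A = {("i", "j"), ("j", "k"), ("k", "i")}
B = {("i", "k"), ("j", "i"), ("k", "j")}
L = {(x, y) for x in "ijk" for y in "ijk"}

assert L - A == {("i","i"), ("i","k"), ("j","i"), ("j","j"), ("k","j"), ("k","k")}
assert A & B == set()          # A and B are disjoint
assert A | B == A ^ B          # so their union equals their symmetric difference
assert (A - B, B - A) == (A, B)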

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

TOR.  Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

We have now seen enough of the ordinary set-theoretic,
the "non-relative" operations on relations to get the
general idea, but we haven't yet touched the relative
operations on relations.  This is apparently so terra
incognita even for many who speak so intrepidly about
the compositions, the decompositions, the productions,
and the reductions of all kinds of relations that I'm
sure that it would come as a shock to them when first
they step off their enfolding maps onto terra firma.

So let us amble onward with freshly opened eyes, as if
seeing our place under the sun for the very first time.

| Example 1 Revisited.
|
| A = i:j + j:k + k:i
|
| B = i:k + j:i + k:j
|
| L = i:i + i:j + i:k + j:i + j:j + j:k + k:i + k:j + k:k

The operation on 2-adic relations that Peirce knew under the
names of "relative multiplication" or "relative product" can
be defined, I say, 'DEFINED' in the following way, where for
the sake of a beginning account I shall give this first time
an informal but perfectly adequate version of the definition.

To compute PQ, in general, where P and Q are 2-adic relations,
simply multiply out the two sums in the ordinary distributive
algebraic way, only subject to the following rule for finding
the product of two elementary relations of shapes a:b and c:d.

| (a:b)(c:d)  =  (a:d)  if b = c,
|
| (a:b)(c:d)  =    0    otherwise.

Here 0 may be taken as the empty set {}, or anything that serves as
the "additive identity element", meaning that C + 0 = C |_| {} = C,
where C is any set.

For example, we may compute the relative product AB as follows:

| AB  =  (i:j + j:k + k:i)(i:k + j:i + k:j)
|
|     =  (i:j)(i:k) + (i:j)(j:i) + (i:j)(k:j) +
|
|        (j:k)(i:k) + (j:k)(j:i) + (j:k)(k:j) +
|
|        (k:i)(i:k) + (k:i)(j:i) + (k:i)(k:j)
|
|     =      0      +    i:i     +     0      +
|            0      +     0      +    j:j     +
|           k:k     +     0      +     0
|
|     =     i:i     +    j:j     +    k:k
|
|     =                   I

You will notice that the very definition of the
relative product of 2-adic relations determines
that the result is again a 2-adic relation.

Therefore, in particular, no 3-adic relation can
result as a relative product of 2-adic relations.

This is all that one means by saying that 3-adic relations
are "irreducible" or "indecomposable" to 2-adic relations,
and it is a matter of basic definition, requiring no proof.
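
The relative product just computed is simple enough to confirm by
machine, and doing so also makes the closure point visible:  whatever
comes out is again a set of pairs, that is, a 2-adic relation.  The
sketch below is mine, with pairs again standing in for terms a:b.

A = {("i", "j"), ("j", "k"), ("k", "i")}
B = {("i", "k"), ("j", "i"), ("k", "j")}

AB = {(a, d) for (a, b) in A for (c, d) in B if b == c}
assert AB == {("i", "i"), ("j", "j"), ("k", "k")}    # AB = I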

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

TOR.  Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Once the triad bisector has made it far enough across
the pons asinorum of 3-adic irreducibility to see the
utter futility of his or her former ways, being fixated
on the quest to render 3-adic relations simpler than
they really are, we can begin to contemplate a number
of weaker forms of composition and reduction.  These
are "weaker" in the sense that they start out from
the concession to 3-adic irreducibility and then
ask, "what if we ...".

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Tone, Token, Type

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| A Sign may 'itself' have a "possible" Mode of Being,
| e.g., a hexagon inscribed in or circumscribed about
| a conic.  It is a Sign, in that the collinearity of
| the intersections of opposite sides shows the curve
| to be a conic, if the hexagon is inscribed;  but if
| it be circumscribed the co-punctuality of its three
| diameters (joining opposite vertices).  Its Mode of
| Being may be Actuality:  as with any barometer.  Or
| Necessitant:  as the word "the" or any other in the
| dictionary.  For a "possible" Sign I have no better
| designation than a 'Tone',  though I am considering
| replacing this by "Mark".  Can you suggest a really
| good name?  An Actual Sign I call a 'Token';
| a Necessitant Sign, a 'Type'.
|
| Charles S. Peirce, "Letters to Lady Welby", 31 Jan 1909, page 406 in:
|'Charles S. Peirce:  Selected Writings (Values in a Universe of Chance)',
| Edited with an Introduction by Philip P. Wiener, Dover, New York, NY, 1966.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| A common mode of estimating the amount of matter in a MS. or printed book
| is to count the number of words.  There will ordinarily be about twenty
| 'the's on a page, and of course they count as twenty words.  In another
| sense of the word "word", however, there is but one word "the" in the
| English language;  and it is impossible that this word should lie
| visibly on a page or be heard in any voice, for the reason that
| it is not a Single thing or Single event.  It does not exist;
| it only determines things that do exist.  Such a definitely
| significant Form, I propose to term a 'Type'.  A Single event
| which happens once and whose identity is limited to that one
| happening or a Single object or thing which is in some single
| place at any one instant of time, such event or thing being
| significant only as occurring just when and where it does,
| such as this or that word on a single line of a single page
| of a single copy of a book, I will venture to call a 'Token'.
| An indefinite significant character such as a tone of voice
| can neither be called a Type nor a Token.  I propose to call
| such a Sign a 'Tone'.  In order that a Type may be used, it has
| to be embodied in a Token which shall be a sign of the Type, and
| thereby of the object the Type signifies.  I propose to call such
| a Token of a Type an 'Instance' of the Type.  Thus, there may be
| twenty Instances of the type "the" on a page.  (Peirce, CP 4.537).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Questioner:  Well, if you choose so to make Doing the Be-all
| and the End-all of human life, why do you not make meaning to
| consist simply in doing?  Doing has to be done at a certain time
| upon a certain object.  Individual objects and single events cover
| all reality, as everybody knows, and as a practicalist ought to be
| the first to insist.  Yet, your meaning, as you have described it,
| is 'general'.  Thus, it is of the nature of a mere word and not
| a reality.  You say yourself that your meaning of a proposition
| is only the same proposition in another dress.  But a practical
| man's meaning is the very thing he means.  What do you make to
| be the meaning of "George Washington"?
|
| Pragmaticist:  Forcibly put!  A good half dozen of your points must certainly be
| admitted.  It must be admitted, in the first place, that if pragmaticism really
| made Doing to be the Be-all and the End-all of life, that would be its death.
| For to say that we live for the mere sake of action, as action, regardless of
| the thought it carries out, would be to say that there is no such thing as
| rational purport.  Secondly, it must be admitted that every proposition
| professes to be true of a certain real individual object, often the
| environing universe.  Thirdly, it must be admitted that pragmaticism
| fails to furnish any translation or meaning of a proper name, or other
| designation of an individual object.  Fourthly, the pragmatistic meaning
| is undoubtedly general;  and it is equally undisputable that the general
| is of the nature of a word or sign.  Fifthly, it must be admitted that
| individuals alone exist;  and sixthly, it may be admitted that the
| very meaning of a word or significant object ought to be the very
| essence of reality of what it signifies.
|
| But when those admissions have been unreservedly made, you find the pragmaticist
| still constrained most earnestly to deny the force of your objection, you ought
| to infer that there is some consideration that has escaped you.  Putting the
| admissions together, you will perceive that the pragmaticist grants that a
| proper name (although it is not customary to say that it has a 'meaning'),
| has a certain denotative function peculiar, in each case, to that name and
| its equivalents;  and that he grants that every assertion contains such a
| denotative or pointing-out function.  In its peculiar individuality, the
| pragmaticist excludes this from the rational purport of the assertion,
| although 'the like' of it, being common to all assertions, and so,
| being general and not individual, may enter into the pragmaticistic
| purport.  Whatever exists, 'ex-sists', that is really acts upon other
| existents, so obtains a self-identity, and is definitely individual.
|
| As to the general, it will be a help to thought to notice that there
| are two ways of being general.  A statue of a soldier on some village
| monument, in his overcoat and with his musket, is for each of a hundred
| families the image of its uncle, its sacrifice to the Union.  That statue,
| then, though it is itself single, represents any one man of whom a certain
| predicate may be true.  It is 'objectively' general.  The word "soldier",
| whether spoken or written, is general in the same way;  while the name,
| "George Washington", is not so.  But each of these two terms remains
| one and the same noun, whether it be spoken or written, and whenever
| and wherever it be spoken or written.  This noun is not an existent
| thing:  it is a 'type', or 'form', to which objects, both those that
| are externally existent and those which are imagined, may 'conform',
| but which none of them can exactly be.  This is subjective generality.
| The pragmaticistic purport is general in both ways.
|
| Charles Sanders Peirce, 'Collected Papers', CP 5.429.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 4

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| All general, or definable, Words, whether in the sense of
| Types or of Tokens, are certainly Symbols.  That is to say,
| they denote the objects that they do by virtue only of there
| being a habit that associates their signification with them.
| As to Proper Names, there might perhaps be a difference of
| opinion, especially if the Tokens are meant.  But they should
| probably be regarded as Indices, since the actual connection
| (as we listen to talk) of Instances of the same typical words
| with the same Objects, alone causes them to be interpreted as
| denoting those Objects.  Excepting, if necessary, propositions
| in which all the subjects are such signs as these, no proposition
| can be expressed without the use of Indices.  If, for example, a man
| remarks, "Why, it is raining!" it is only by some such 'circumstances'
| as that he is now standing here looking out at a window as he speaks,
| which would serve as an Index (not, however, as a Symbol) that he is
| speaking of this place at this time, whereby we can be assured that
| he cannot be speaking of the weather on the satellite of Procyon,
| fifty centuries ago.
|
| Charles Sanders Peirce, 'Collected Papers', CP 4.544.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 5

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| So then, a natural class being a family whose members are the sole
| offspring and vehicles of one idea, from which they derive their
| peculiar faculty, to classify by abstract definitions is simply
| a sure means of avoiding a natural classification.  I am not
| decrying definitions.  I have a lively sense of their great
| value in science.  I only say that it should not be by means
| of definitions that one should seek to find natural classes.
| When the classes have been found, then it is proper to try to
| define them;  and one may even, with great caution and reserve,
| allow the definitions to lead us to turn back and see whether
| our classes ought not to have their boundaries differently
| drawn.  After all, boundary lines in some cases can only be
| artificial, although the classes are natural, as we saw in
| the case of the 'kets'.  When one can lay one's finger upon
| the purpose to which a class of things owes its origin, then
| indeed abstract definition may formulate that purpose.  But
| when one cannot do that, but one can trace the genesis of a
| class and ascertain how several have been derived by different
| lines of descent from one less specialized form, this is the
| best route toward an understanding of what the natural classes
| are.  This is true even in biology;  it is much more clearly so
| when the objects generated are, like sciences, themselves of the
| nature of ideas.
|
| Charles Sanders Peirce, 'Collected Papers', CP 1.222.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 6

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| There are cases where we are quite in the dark, alike concerning the creating
| purpose and concerning the genesis of things[,] but where we find a system of
| classes connected with a system of abstract ideas -- most frequently numbers --
| and that in such a manner as to give us reason to guess that those ideas in
| some way, usually obscure, determine the possibilities of things.  For example,
| chemical compounds, generally -- or at least the more decidedly characterized
| of them, including, it would seem, the so-called elements -- seem to belong
| to types, so that, to take a single example, chlorates KClO3, manganates
| KMnO3, bromates KBrO3, rutheniates KRuO3, iodates KIO3, behave chemically
| in strikingly analogous ways.  That this sort of argument for the existence
| of natural classes -- I mean the argument drawn from types, that is, from
| a connection between the things and a system of formal ideas -- may be much
| stronger and more direct than one might expect to find it, is shown by the
| circumstance that ideas themselves -- and are they not the easiest of all
| things to classify naturally, with assured truth? -- can be classified
| on no other grounds than this, except in a few exceptional cases.  Even
| in these few cases, this method would seem to be the safest.  For example,
| in pure mathematics, almost all the classification reposes on the relations
| of the forms classified to numbers or other multitudes.  Thus, in topical
| geometry, figures are classified according to the whole numbers attached
| to their 'choresis', 'cyclosis', 'periphraxis', 'apeiresis', etc.  As for
| the exceptions, such as the classes of hessians, jacobians, invariants,
| vectors, etc., they all depend upon types, too, although upon types of
| a different kind.  It is plain that it must be so;  and all the natural
| classes of logic will be found to have the same character.
|
| Charles Sanders Peirce, 'Collected Papers', CP 1.223.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 7

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Any genuine appreciation of what Peirce has to say about identity,
indices, names, proper or otherwise, and the putative distinctions
between individual, particular, and general terms will have to deal
with what he wrote in 1870 about the "doctrine of individuals".

Notice that this statement, together with the maxims
that "Whatever has comprehension must be general"
and "Whatever has extension must be composite",
pull the ruga -- and all of the elephants --
out from underneath the nominal thinker's
wishful thinking to find ontological
security in individual names, which
said nominal thinker has confused
with the names of individuals,
to turn a phrase back on same.

"A Simple Desultory Philippic"

| In reference to the doctrine of individuals, two distinctions should be
| borne in mind.  The logical atom, or term not capable of logical division,
| must be one of which every predicate may be universally affirmed or denied.
| For, let 'A' be such a term.  Then, if it is neither true that all 'A' is 'X'
| nor that no 'A' is 'X', it must be true that some 'A' is 'X' and some 'A' is
| not 'X';  and therefore 'A' may be divided into 'A' that is 'X' and 'A' that
| is not 'X', which is contrary to its nature as a logical atom.
|
| Such a term can be realized neither in thought nor in sense.
|
| Not in sense, because our organs of sense are special -- the eye,
| for example, not immediately informing us of taste, so that an image
| on the retina is indeterminate in respect to sweetness and non-sweetness.
| When I see a thing, I do not see that it is not sweet, nor do I see that it
| is sweet;  and therefore what I see is capable of logical division into the
| sweet and the not sweet.  It is customary to assume that visual images are
| absolutely determinate in respect to color, but even this may be doubted.
| I know of no facts which prove that there is never the least vagueness
| in the immediate sensation.
|
| In thought, an absolutely determinate term cannot be realized,
| because, not being given by sense, such a concept would have to
| be formed by synthesis, and there would be no end to the synthesis
| because there is no limit to the number of possible predicates.
|
| A logical atom, then, like a point in space, would involve for
| its precise determination an endless process.  We can only say,
| in a general way, that a term, however determinate, may be made
| more determinate still, but not that it can be made absolutely
| determinate.  Such a term as "the second Philip of Macedon" is
| still capable of logical division -- into Philip drunk and
| Philip sober, for example;  but we call it individual because
| that which is denoted by it is in only one place at one time.
| It is a term not 'absolutely' indivisible, but indivisible as
| long as we neglect differences of time and the differences which
| accompany them.  Such differences we habitually disregard in the
| logical division of substances.  In the division of relations,
| etc., we do not, of course, disregard these differences, but we
| disregard some others.  There is nothing to prevent almost any
| sort of difference from being conventionally neglected in some
| discourse, and if 'I' be a term which in consequence of such
| neglect becomes indivisible in that discourse, we have in
| that discourse,
|
| ['I'] = 1.
|
| This distinction between the absolutely indivisible and that which
| is one in number from a particular point of view is shadowed forth
| in the two words 'individual' ('to atomon') and 'singular' ('to kath
| ekaston');  but as those who have used the word 'individual' have not
| been aware that absolute individuality is merely ideal, it has come to
| be used in a more general sense.  (CP 3.93, CE 2, 389-390).
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

Nota Bene.  On the square bracket notation used above:
Peirce explains this notation at CP 3.65 or CE 2, 366.

| I propose to denote the number of a logical term by
| enclosing the term in square brackets, thus, ['t'].

The "number" of an absolute term, as in the case of 'I',
is defined as the number of individuals that it denotes.
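
To anchor that notation in something concrete, here is a minimal
sketch in Python -- the terms, the extensions, and the function name
"number_of" are all my own inventions for illustration -- of ['t']
as a count of the individuals a term denotes in a given discourse:

    # Hypothetical extensions of some absolute terms in a tiny discourse.
    denotes = {
        "woman": {"Ann", "Beth", "Cara"},
        "Ann":   {"Ann"},   # a term treated as indivisible in this discourse
    }

    def number_of(term):
        """[t] = the number of individuals that the term t denotes."""
        return len(denotes[term])

    print(number_of("woman"))   # 3
    print(number_of("Ann"))     # 1, the analogue of ['I'] = 1 above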

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 8

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Because the question of Comprehension-Extension-Information that Peirce
investigated in his Harvard and Lowell lectures of 1865-1866 has a lot
of overlap with the subject of Tone-Token-Type, and because the work
that Peirce did on this question is a big part of the thinking that
led up to the New List of 1867, I will list here the links to the
Peirce quotations that I considered earlier this year.

Extension x Comprehension = Information -- Selected Links

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 9

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I probably won't be able to get back to this till later --
but for ease of review, here are the passages I posted:

01.  http://suo.ieee.org/ontology/msg04053.html = Selected Writings, p. 406
02.  http://suo.ieee.org/ontology/msg04325.html = CP 4.537
03.  http://suo.ieee.org/ontology/msg04326.html = CP 5.429
04.  http://suo.ieee.org/ontology/msg04327.html = CP 4.544
05.  http://suo.ieee.org/ontology/msg04328.html = CP 1.222
06.  http://suo.ieee.org/ontology/msg04329.html = CP 1.223

Maybe I will recopy later the ones that Howard and Seth sent.
I am growing very dependent on my non-mental memory, you see.

Speaking of "semio-violence", you are forcing me into
the wilds of quali/sin/legisigns much against my will.

For example:  CP 2.243-246.

JA: I am currently reading "instance" as a relation word,
    but I'll need to check that out further.

PC: Musing interpretatively a bit while on the way for lunch this afternoon,
    it suddenly occurred to me that I was probably wrong in trying to "force"
    a categorisation in terms of degrees of abstraction and individuality onto
    the type-token-instance "triacity".

There is actually some precedent in math for
using "triality" (on analogy with "duality")
for 3-adic relationships of this kind.

PC: Perhaps it might be more useful if we were to try
    thinking of it in terms of the basic sign relation:

    O
    |  \
    |     \
    |        \
    R ------> I

    and then entering token (To) into the diagram 
    in the role of  representamen, type (Ty) as
    (immediate) object, and instance (In) as 
    interpretant, as in:

    Ty
    |  \
    |     \
    |        \
    To------> In

PC: Rationale:

PC: 1. The word on the page (Token) is the only part of
       the triacity directly perceivable by our senses.

PC: 2. The Instance is the realization of a specific (cognitive)
       relation between Type and Token for some interpreter.

PC: 3. The Type is revealed to us (only) by the
       specific "event-driven" mediation of the Token.

PC: The Tone I still have problems seeing how it possibly
    might fit into THIS general schema of things, however.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 10

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I think that the ideas are important here, but the terminology
is probably going to stay hopelessly confused from here on out.
There are different sorts of type/token issues in mathematics,
computer science, Peirce's logic, and lately the "ontology via
formal concept analysis" crowd has invented a whole new way of
using these words, so I will have to pick my preferences and
forge ahead.

What interests me in Peirce's Comprehension/Extension/Information,
Quality/Reaction/Symbolization, Tone/Token/Type analogies is that
he started out with trying to understand how inquiry is possible --
the conditions for the possibility of scientific thinking -- and
developed the theory of signs in a supporting role to that effort.

Though I started out with a healthy dose of "pattern recognition AI",
like most folks were doing 20-30 years ago, the way that this issue
comes up in my current applications is more like this, certainly not
a problem about the identity of objects, but about the various kinds
of "partitions", "quotients", "equivalence relations", or "equivalence
class structures" that can be overlaid on a space of signs.  Remember,
too, that "signs" here can mean "data of the senses".  Accordingly, the
strong theme for me is that these clusterings are always "interpretive"
or "perspective-&-purpose-relative", at least, initially.  For instance,
certain species of sign relations lead to various sorts of equiv. classes
that I call "referential", "semiotic", or "logical" equivalence classes,
REC's, SEC's, LEC's, respectively.  The generic picture looks like this:

|   Object Domain          Syntactic Domain  
|                                           
|                           o-----------o
|       o~~~~~~~~~~~~~~~~~~/| s s s ... |\
|      / \                / o-----------o \
|     /   \              /                 \
|    /     \            o-----------o       \
|   o~~~~~~~\~~~~~~~~~~~| s s s ... |        \
|    \       \          o-----------o         \
|     \       \          \         o-----------o
|      \       o~~~~~~~~~~\~~~~~~~~| s s s ... |
|       \     /            \       o-----------o
|        \   /              \                 /
|         \ /                \ o-----------o /
|          o~~~~~~~~~~~~~~~~~~\| s s s ... |/
|                              o-----------o
|
| Figure 1.  Objects Inducing A Sign Partition

In a sense, one "reconstructs" the structure of the Object domain O
within the equivalence class structure of the Syntactic domain S |_| I.
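
To make the picture a bit more tangible, here is a rough sketch in
Python -- the triples are made up and "referential_classes" is just
my own name for the routine -- of how a sign relation, stored as a
set of (object, sign, interpretant) triples, induces referential
equivalence classes, that is, the cells of signs and interpretants
that denote a common object:

    from collections import defaultdict

    # Hypothetical sign relation L c O x S x I, given as a set of triples.
    L = {
        ("Ann", "Ann", "A"),
        ("Ann", "A",   "Ann"),
        ("Bob", "Bob", "B"),
        ("Bob", "B",   "Bob"),
    }

    def referential_classes(sign_relation):
        """Group the signs and interpretants by the object they denote."""
        cells = defaultdict(set)
        for obj, sign, interpretant in sign_relation:
            cells[obj].add(sign)
            cells[obj].add(interpretant)
        return dict(cells)

    # If each sign denotes just one object, these cells partition S |_| I,
    # which is one way the Object domain gets "reconstructed" on the
    # syntactic side.
    print(referential_classes(L))
    # {'Ann': {'Ann', 'A'}, 'Bob': {'Bob', 'B'}}  (set order may vary)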

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 11

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

These are important issues, and I will return to your note next week
when I have some chance of better focus, but let me say a few things
by way of trying to clear a mental working space.

First, I must respond under a Peircean, if not a Cerberean heading,
as I earnestly believe that any attempt to deal with this issue in
dichotomous terms is doomed to end up two-thirds baked at best.

Second, this is an issue that has occupied me since my
first post-pubescent identity crisis, some time before
I ran into Peirce -- my very first undergrad essay on
Peirce was titled "Distinction and Coincidence" (1972),
in which I compared the various calculi of Peirce with
those of George Spencer Brown.

From my point of view, the critical treatment, the culmination
of five years' intense and groundbreaking work, is the 1870
"Description of a Notation for the Logic of Relatives",
(CP 3.45-149;  CE 2, 359-429).  The critical passage
is what he says about the "doctrine of individuals".
Understanding the implications of that critique
would be a big part of grasping Peirce's whole
subsequent temporal evolution.

Third, it is crucial to recognize Aristotle's dimension,
the one that stretches between the things that are closer
to Nature and the things that are closer to us.  Others have
called the opposing directions "Reality" and "Representation" --
for reading Peirce, "Objects" and "Signs" will do well enough.

So, we have to sort out from moment to moment whether we are talking about
relations -- for example, difference and indifference -- among Objects or
among Signs, and then to say what the relationships among these separate
realms of relations are or ought to be.  All very obvious, of course,
but what comes out of Peirce's way of doing this will be very different
from what comes out of, say, Frege's way of doing this.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 12

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I would like to go back to Peirce's critique of
the "doctrine of individuals" and draw out a few
of its implications for the question of identity.
I will break the text into more digestible chunks.

| In reference to the doctrine of individuals, two distinctions should be
| borne in mind.  The logical atom, or term not capable of logical division,
| must be one of which every predicate may be universally affirmed or denied.
| For, let 'A' be such a term.  Then, if it is neither true that all 'A' is 'X'
| nor that no 'A' is 'X', it must be true that some 'A' is 'X' and some 'A' is
| not 'X';  and therefore 'A' may be divided into 'A' that is 'X' and 'A' that
| is not 'X', which is contrary to its nature as a logical atom.
|
| Such a term can be realized neither in thought nor in sense.

First off, Peirce is talking about terms, which go on the sign side of the
Object/Sign ledger, and he is examining their capacity to serve as logical
indivisibles or logical atoms, that is, their ability to determine genuine
individuals as the objects of their denotation.  What he declares here is
very revolutionary -- we do not hear its like again until the outbreaks
of the Information and Computing Revolutions -- but perhaps it is not
so surprising to hear from a person who has just five years earlier
already discovered the initial elements of a subject that he calls
the "Theory of Information".  In the language that would later be
used quite a bit, he is remarking on the information-theoretic
capacity limitations of signs.

Peirce comes to this point in a perfectly straightforward Kantian way.
In accordance with the "hypothesis of reality", there are the objects
in reality, and then there are the manifolds of sensuous impressions
that represent these objects, and finally the utility of concepts is
in helping us to connect the manifold "data of the senses" (DOTS)
into configurations that correspond to real objects.

Now, Peirce does not admit that the objects in themselves are unknowable,
but allows that they are knowable in the forms of their representations,
and this knowledge, as mediated by a representation, is always partial.

Two of the immediate consequences of this observation are these:

1.  There are no such things as absolutely individual terms, properly speaking.
    If you have things that you find it convenient to call "individual terms"
    in a particular discussion, then one thing that they do not do is denote
    or determine absolute indivisibles.  This is the reason that Peirce,
    when he is being precise, will speak of "particulars" instead.

2.  If you seek the "difference that makes a difference" among individual terms,
    particular terms, and general terms, it will not be an absolute or essential
    difference, but rather an interpretive or discourse-relative difference.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 13

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SS: This is a bit of a nuisance, but I wonder if I could ask you to include,
    so far as you are able, the dates of the passages you will be quoting.
    This is not meant as a criticism, but we know that Peirce changed his
    theory of identity at least once, and I think we shall have better luck
    understanding the evolution of his views if we keep track of the dates.
    The passage you quote in this post came from an 1870  publication.
    (I am not suggesting that he changed his views on the ideality of
    absolute individuality after 1870, but the significance of that
    claim has to be assessed in context.)

No problem at all.  I included a full reference
on my first citation of this passage, which for
ease of reference can be found here:

http://suo.ieee.org/ontology/msg04332.html

When you say "we know that Peirce changed his theory of identity at least once",
please list me as agnostic on that point.  So far, "we" have heard little more
than Murphey's opinion and two opinions on Murphey's law, all of which I used
to buy right up until the 'Chronological Edition' started coming out, when I
discovered, much to my initial shock, that many of the things that I thought
were Peirce's last and best ideas were actually his first ideas out of the
starting blocks.  About the only difference is that he initially wrote
it all out in far more detail than he later apparently tired of repeating.
Of course, there were the usual assortments of minor dead ends and
back tracks, but nothing all that radical, so far as I can yet see.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 14

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I notice the risk of a certain confusion as to what exactly
we are talking about when we talk about a "law of identity",
and I only have time for a short sermon this morning, so ...

The reason that the so-called "law of identity" -- in one of the ways of
laying it down taking the form "A is A", where, of course, its meaning
already depends a little bit, though not all, on what the meaning of
the word "is" is -- is called a law of "logic", a normative science,
instead of a law of "ontology", a descriptive science, is that it
tells us how we ought to use signs if we earnestly desire those
signs to function as they ought to in scientific reasoning,
and thus this law of sign design puts no constraint at all
on the being-in-relation-to-itself of any being, per se,
but only on the beings that would be signs for our sake.

The moral of the sermon being this, that though we use signs to
describe things, that does not make logic a descriptive science,
since not every description is, or even ought to be, logical.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 15

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Identity, Law of.
|
| Given by traditional logicians as "A is A".
| Because of the various possible meanings of
| the 'copula' (q.v.) and the uncertainty as to
| the range of the variable A, this formulation is
| ambiguous.  The traditional law is perhaps best
| identified with the theorem x = x, either of the
| functional calculus of first order with equality,
| or in the theory of types (with equality defined),
| or in the algebra of classes, etc.  It has been, or
| may be, also identified with either of the theorems
| of the propositional calculus, p => p, p = p, or with
| the theorem of the functional calculus of first order,
| F(x) =>_x F(x).
|
| Alonzo Church, in:
| Dagobert D. Runes (ed.),
|'Dictionary of Philosophy',
| Littlefield, Adams, & Co.,
| Totowa, NJ, 1972.

Now, which of these, if any, comes closest
to helping us understand what Peirce meant
by "identity"?

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 16

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

WS = William Thomas Sherman

Explaining a sermon is about like explaining a joke.
I will make one try and then confess my sins in the
appropriate place, returning to the more significant
text in relation to which it was intended as no more
than an incidental sidelight.

WS: You wrote:

JA: "since not every description is, or even ought to be, logical."

WS: Imagine someone reading this statement of yours, by what standard,
    criteria or authority should someone believe what you say?  If for
    example, you might say, experience tells me or us that this is so.
    If experience tells that this is so, how do you derive your "ought",
    and how can experience give us an "ought"?  Indeed, what do you mean
    by "ought?"

Both the "is" and the "ought" occurred in a negative or limitative context:

The "is" part is my empirical summary of descriptions that I have known.
If your experience is terribly different, then I would be surprised, and
most likely try to explain the difference by the hypothesis that we are
using some of the words differently.

The "ought" part is simply my statement that I would like to try
and avoid telling someone that a description ought to be logical
when I do not know the purpose of making it.  If the purpose is
logical, then I would strive to make my advice logical.

I almost added the qualifier "at least, when taken literally" to the
end of the sentence, but I judged that it was most likely redundant.

What I am puzzling over here is simply the fact that we
use terms to describe things, sometimes in a way that the
terms obey "laws of logic" and sometimes not, and this does
not always bear on the goodness of the description unless our
purpose is of a very special sort, namely, logical or oriented
toward a scientific use.  I am merely turning over in my own mind
the issue of Peirce's criterion "as to what 'must be' the characters
of all signs used by a 'scientific' intelligence, that is to say, by an
intelligence capable of learning by experience".  

| Logic, in its general sense, is, as I believe I have shown, only another
| name for 'semiotic' [Greek 'semeiotike'], the quasi-necessary, or formal,
| doctrine of signs.  By describing the doctrine as "quasi-necessary", or
| formal, I mean that we observe the characters of such signs as we know,
| and from such an observation, by a process which I will not object to
| naming Abstraction, we are led to statements, eminently fallible, and
| therefore in one sense by no means necessary, as to what 'must be' the
| characters of all signs used by a "scientific" intelligence, that is to say,
| by an intelligence capable of learning by experience.  As to that process of
| abstraction, it is itself a sort of observation.  The faculty which I call
| abstractive observation is one which ordinary people perfectly recognize,
| but for which the theories of philosophers sometimes hardly leave room.
| It is a familiar experience to every human being to wish for something
| quite beyond his present means, and to follow that wish by the question,
| "Should I wish for that thing just the same, if I had ample means to gratify it?"
| To answer that question, he searches his heart, and in doing so makes what I term
| an abstractive observation.  He makes in his imagination a sort of skeleton diagram,
| or outline sketch, of himself, considers what modifications the hypothetical state
| of things would require to be made in that picture, and then examines it, that is,
| 'observes' what he has imagined, to see whether the same ardent desire is there to
| be discerned.  By such a process, which is at bottom very much like mathematical
| reasoning, we can reach conclusions as to what 'would be' true of signs in all
| cases, so long as the intelligence using them was scientific.  (CP 2.227).
|
| Charles Sanders Peirce, 'Collected Papers', CP 2.227,
| Editor Data:  From An Unidentified Fragment, c. 1897.

WS: Can you give us an example of a description which cannot be made
    consistent with logic?  Or do you mean merely to say that people
    sometimes make descriptions which are illogical but which are,
    nevertheless, accepted as being somehow consistent with reality?

WS: Again, picture someone discovering your statement (above) for
    the first time, and they ask themselves, "is what Jon Awbrey
    saying really true or is it only his opinion?  If really true,
    how will I know it is really true?"

WS: If what you are saying is "really" true, as opposed to an opinion,
    how would you answer these questions so as to command belief?

WS: Asking these questions for a better understanding.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 17

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

The best I remember, you asked a question about Peirce's "theory of identity"
in the context of a discussion about types and tokens, and where we eventually
adduced source materials on the related issues of general names, individuals,
instances, intensions, laws, natural kinds, proper names, universals, etc.

I am aware that the "theory of identity" can mean many things, from the
ontological to the logical, and who knows what else, but I tried to give
you the best information that I have with regard to what is distinctive
about Peirce's thought on this question, as relevant to the context of
issues that were being discussed.

As far as Leibniz's principle goes, aside from the fact that it means
different things when viewed logically vs. ontologically, I know lots
of serious thinkers who just plain treat it as a parameter, like the
parallel axiom, exploring the consequences of A on even days and ~A
on odd days, so I would never count a vacillation here as a serious
revolution in anyone's thinking.

The statements about the various identity relations, taking "identity"
in the sense that it has within the logic of relatives, not in, say,
meteorology, you are just plain misunderstanding, by dint of removing
these statements from the distinctive contexts in which each is true.
There is a definition of "decomposable" that has to be observed here.
This definition is invoked when one says that no 3-adic relation is
"composed" of 2-adic relations.  It is not invoked in the statement
that one fact is "contained" in several other facts, because the
form of that style of "containment" involves the application of
several other 3-adic relations, including 3-identity relations.
In general, the fact that there exist k-identity relations
I_k c X^k for each k = 2, 3, 4, ..., is one thing.  Which
of them can be defined in terms of which others in which
ways -- that is a whole manifold of different questions.
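
For what it is worth, here is a toy rendering of the bare existence
claim in Python -- the domain is made up, the function names are
mine, and nothing here settles any of the definability questions:

    # The k-identity relation I_k c X^k over a finite set X is just the
    # set of constant k-tuples; its characteristic function maps X^k to
    # {True, False}.

    X = {1, 2, 3}                       # hypothetical finite domain

    def identity_relation(X, k):
        """I_k = { (x, x, ..., x) : x in X }."""
        return {(x,) * k for x in X}

    def indicator(relation):
        """The characteristic function of a relation of tuples."""
        return lambda *args: args in relation

    I2, I3 = identity_relation(X, 2), identity_relation(X, 3)
    i3 = indicator(I3)

    print(i3(2, 2, 2))    # True
    print(i3(1, 2, 1))    # False

    # Every pairwise projection of I_3 lands inside I_2, but that alone
    # says nothing about whether I_3 is "composed" of 2-adic relations
    # in the technical sense of composition at issue above.
    assert all((a, b) in I2 for (a, b, _) in I3)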

As far as realism versus nominalism goes --
all mathematicians are pythagorean realists.
What age any one of them decides to come out of
the closet about it is a whole different question.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 18

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Let me recall the weather advisories
that I posted for your consideration
when first we set out on this course,
in search of Peirce's "identity".

| It is crucial to recognize Aristotle's dimension, the one that
| stretches between the things that are closer to Nature and the
| things that are closer to us.  Others have called the opposing
| directions "Reality" and "Representation" -- for reading Peirce,
| "Objects" and "Signs" will do well enough.

| So, we have to sort out from moment to moment whether we are talking
| about relations -- for example, difference and indifference -- among
| Objects or among Signs, and then to say what the relationships among
| these separate realms of relations are or ought to be.

Given the experience of the discussion since then, I think that I can
clarify things a little better at this point by adding the contrast
between "Ontology" and "Logic" to the other pairs of comparisons.
In these terms, the points that I have been trying to make are,
first, that Peirce's theory of identity will vary in its bearing
as we pass from descriptive ontology to normative logic,
but more importantly, it is really the relationship between
the similarities of Objects and the similarities of Signs
that is of fundamental interest here.

But it took me about two hours to write that paragraph,
which is a symptom that it's way past my dormitive hour.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 19

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JA = Jon Awbrey
WS = William Thomas Sherman

JA: It is crucial to recognize Aristotle's dimension, the one that
    stretches between the things that are closer to Nature and the
    things that are closer to us. Others have called the opposing
    directions "Reality" and "Representation" -- for reading Peirce,
    "Objects" and "Signs" will do well enough.

WS: Let us be careful about taking for granted assumptions.
    After all, there are those who would argue that language
    (or signhood) and logic are what is most real, and objects
    only derive their reality in our higher cognitive
    understandings as they are known through language and logic.
    A physical object may be quite real say to our unreflective
    feelings, but unless mind is present to identify and catalog
    the object felt as something "real" we are not even aware of
    the object as such, only the feeling or sensation of it.

I am only indicating a line of orientation for understanding
what Peirce was about here.  Peirce echoed Aristotle's insight
in pointing out that we must begin from what is closer to us,
in this case, the world of sense-signs and thought-signs, the
latter reaches of which we take part in shaping, and that we
must use these givens and tokens to work toward what is closer
to "Nature", "Reality", "the world beyond the cave", whatever
you wish to call it.  In doing this we hammer out concepts and
construct conceptual architectures from the raw data of sense,
and we use all of this as so much instrumentation to arrive at
a better sense of the things that are slightly more permanent.

How we do this, how science works, is the longstanding question that
Peirce takes up.  In the process of trying to answer this question,
he finds it necessary to reflect on how we use signs, better said,
on the "formal" functions of signs in inquiry, along with the way
that signs come to embody and bear information.  In this pursuit,
Peirce had to develop the theory of sign relations, a beginning
theory of information, and this in turn required him to develop
the logic of relative terms, just to deal with the complexities
that arose in the work.

Probably I should continue to point out that the distinction
between Object and Sign is a distinction of relational roles,
not of absolute essences.  There is no dichotomy or dualism
of that sort being set up here.  It is perfectly possible
to have a sign relation L contained in the product OxSxI
where the object domain O, the sign domain S, and the
interpretant domain I are all the same set, indeed,
we tend to like working toward such cases.
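
A quick sketch in Python of that last point, with triples invented
for illustration:  one common domain doing triple duty, with the
object, sign, and interpretant roles carried by position in the
triple rather than by any essence of the elements.

    D = {"a", "b", "c"}                 # one common domain for O, S, and I

    # Hypothetical sign relation; each triple reads (object, sign, interpretant).
    L = {
        ("a", "b", "c"),
        ("a", "c", "b"),
        ("b", "b", "b"),
    }

    # The relation really does live inside D x D x D.
    assert all(o in D and s in D and i in D for (o, s, i) in L)

    # "b" turns up as object, sign, and interpretant in different triples,
    # so its role is a matter of relational position, not of essence.
    roles = ("object", "sign", "interpretant")
    roles_of_b = {roles[pos] for triple in L
                             for pos, x in enumerate(triple) if x == "b"}
    print(roles_of_b)                   # all three roles (set order varies)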

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 20

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Working on the philosophy that any external landmark is better than none,
I will repost this gloss from Church, in spite of the anachronisms that it
is bound to contain with respect to Peirce's thought.  My own sense is that
Peirce started out with something much like the traditional meaning, while
the propositional meanings are quite prevalent throughout his work, from
the beginning up through the logical graphs, but you can only read him
as using the "functional calculus" interpretations if you understand
the distinctive point of view that comes out of their mathematical 
provenance, where one is thinking of "aggregates" and "composites"
as Sums and Products and using Sigma's and Pi's to signify them.

| Identity, Law of.
|
| Given by traditional logicians as "A is A".
| Because of the various possible meanings of
| the 'copula' (q.v.) and the uncertainty as to
| the range of the variable A, this formulation is
| ambiguous.  The traditional law is perhaps best
| identified with the theorem x = x, either of the
| functional calculus of first order with equality,
| or in the theory of types (with equality defined),
| or in the algebra of classes, etc.  It has been, or
| may be, also identified with either of the theorems
| of the propositional calculus, p => p, p = p, or with
| the theorem of the functional calculus of first order,
| F(x) =>_x F(x).
|
| Alonzo Church, in:
| Dagobert D. Runes (ed.),
|'Dictionary of Philosophy',
| Littlefield, Adams, & Co.,
| Totowa, NJ, 1972.

Nota Bene.  It gets muffed a bit in ascii, but Church
is using subscripts on infix connectives to signify
the same thing as prefixing universal quantifiers,
that is, "F(x) =>_x F(x)" = "`A`x (F(x) => F(x))".

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 21

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| I shall follow Boole in taking the sign of equality to signify identity.
| Thus, if v denotes the Vice-President of the United States, and p the
| President of the Senate of the United States,
|
| v = p
|
| means that every Vice-President of the United States is President of the
| Senate, and every President of the United States Senate is Vice-President.
|
| Charles Sanders Peirce, "Description of a Notation ...", CP 3.66 (1870).

Now, it's not the end of the story, of course, but it's a start.
The significant thing is what is usually the significant thing,
in mathematics, at least, that two distinct descriptions refer
to the same things.  Incidentally, Peirce is not really being
as indifferent to the distinctions between signs and things,
mention and use, as this ascii text makes him look, but he
uses a host of other type-faces to distinguish the types
and the uses of signs.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 22

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

| Ismism.  The tendency to make inferences of the following forms:
|
| X is good.
| -------------------
| X is the only good.
|
| X is good for something.
| -------------------------
| X is good for everything.

SS: With respect to "theory of identity" meaning different things in different
    contexts in the Peirce papers, that may be true.  A search for the expression
    "theory of identity" in the Collected Papers yielded no examples.  I note, though,
    that in 1873, Peirce wrote that "Logic may be considered as the science of identity"
    (MS229).  In that paper, he also gave a version of Leibniz's law, as one of three
    fundamental principles in the "science of identity", calling it "the principle
    of the singleness of the same", and holding that this principle is the only
    thing that distinguishes the relation expressed by the logical copula from
    other relations of a similar kind.  Maybe he later changed his mind about
    this?

I have told you how most serious thinkers I know of regard Leibniz's Law.
It's a parameter of a formal system, analogous to the Parallel Axiom in
geometry.  As stated, any reference to "all predicates" leaves a lot to
be desired in terms of what you mean by that, and it only makes sense in
a context where the statement can be regarded as well-posed.  Outside of
such a frame, it simply has no meaning at all.  Peirce's thinking on this
score is very nuanced, and I have repeatedly referred people to his note
"On A Limited Universe Of Marks" for a sample of his best thinking on it:

http://suo.ieee.org/ontology/msg03204.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 23

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

It seems to me that the sources
of your confusion are quite clear:

1.  You consistently ignore what Peirce said in his 1870 remark on
    the "doctrine of undividuals", that attributions of identity are
    relative to a context of discourse, an insight that he did not
    abandon but developed to its finest degree in his 1883 remark
    "On a Limited Universe of Marks":

    http://suo.ieee.org/ontology/msg03204.html

2.  You consistently ignore the distinctions and the meanings
    of technical terms in mathematics, as they were used by 
    Peirce and as they have been used for at least 200 years,
    for instance, the meanings of "composition" and "reduction"
    as they are used in the algebra of relations and in the
    logic of relative terms.

I have pointed out all of this to you before.
It is all pretty clear to anybody who does
not have a prior theory of what can qualify
as a theory of identity, or a philosophy,
for that matter, that they keep trying to
shove Peirce's more general theory,
and more general philosophy, into.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 24

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Our texts for today:

1870.  http://suo.ieee.org/ontology/msg04332.html
1883.  http://suo.ieee.org/ontology/msg03204.html

For ease of reference, I repeat here once again Peirce's remark
on the "doctrine of individuals", breaking it into smaller parts:

| In reference to the doctrine of individuals, two distinctions should be
| borne in mind.  The logical atom, or term not capable of logical division,
| must be one of which every predicate may be universally affirmed or denied.
| For, let 'A' be such a term.  Then, if it is neither true that all 'A' is 'X'
| nor that no 'A' is 'X', it must be true that some 'A' is 'X' and some 'A' is
| not 'X';  and therefore 'A' may be divided into 'A' that is 'X' and 'A' that
| is not 'X', which is contrary to its nature as a logical atom.
|
| Such a term can be realized neither in thought nor in sense.
|
| Not in sense, because our organs of sense are special -- the eye,
| for example, not immediately informing us of taste, so that an image
| on the retina is indeterminate in respect to sweetness and non-sweetness.
| When I see a thing, I do not see that it is not sweet, nor do I see that it
| is sweet;  and therefore what I see is capable of logical division into the
| sweet and the not sweet.  It is customary to assume that visual images are
| absolutely determinate in respect to color, but even this may be doubted.
| I know of no facts which prove that there is never the least vagueness
| in the immediate sensation.
|
| In thought, an absolutely determinate term cannot be realized,
| because, not being given by sense, such a concept would have to
| be formed by synthesis, and there would be no end to the synthesis
| because there is no limit to the number of possible predicates.
|
| A logical atom, then, like a point in space, would involve for
| its precise determination an endless process.  We can only say,
| in a general way, that a term, however determinate, may be made
| more determinate still, but not that it can be made absolutely
| determinate.  Such a term as "the second Philip of Macedon" is
| still capable of logical division -- into Philip drunk and
| Philip sober, for example;  but we call it individual because
| that which is denoted by it is in only one place at one time.
| It is a term not 'absolutely' indivisible, but indivisible as
| long as we neglect differences of time and the differences which
| accompany them.  Such differences we habitually disregard in the
| logical division of substances.  In the division of relations,
| etc., we do not, of course, disregard these differences, but we
| disregard some others.  There is nothing to prevent almost any
| sort of difference from being conventionally neglected in some
| discourse, and if 'I' be a term which in consequence of such
| neglect becomes indivisible in that discourse, we have in
| that discourse,
|
| ['I'] = 1.
|
| This distinction between the absolutely indivisible and that which
| is one in number from a particular point of view is shadowed forth
| in the two words 'individual' ('to atomon') and 'singular' ('to kath
| ekaston');  but as those who have used the word 'individual' have not
| been aware that absolute individuality is merely ideal, it has come to
| be used in a more general sense.  (CP 3.93, CE 2, 389-390).
|
| Charles Sanders Peirce,
|"Description of a Notation for the Logic of Relatives,
| Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic",
|'Memoirs of the American Academy', Volume 9, pages 317-378, 26 January 1870,
|'Collected Papers' (CP 3.45-149), 'Chronological Edition' (CE 2, 359-429).

Nota Bene.  On the square bracket notation used above:
Peirce explains this notation at CP 3.65 or CE 2, 366.

| I propose to denote the number of a logical term by
| enclosing the term in square brackets, thus, ['t'].

The "number" of an absolute term, as in the case of 'I',
is defined as the number of individuals that it denotes.

Let me emphasize the following statements:

1.  The logical atom, or term not capable of logical division, must be
    one of which every predicate may be universally affirmed or denied.
    Such a term can be realized neither in thought nor in sense.

2.  In thought, an absolutely determinate term cannot be realized,
    because, not being given by sense, such a concept would have to
    be formed by synthesis, and there would be no end to the synthesis
    because there is no limit to the number of possible predicates.

3.  A logical atom, then, like a point in space, would involve for
    its precise determination an endless process.  We can only say,
    in a general way, that a term, however determinate, may be made
    more determinate still, but not that it can be made absolutely
    determinate.

4.  It is a term not 'absolutely' indivisible, but indivisible as
    long as we neglect differences of time and the differences which
    accompany them.  Such differences we habitually disregard in the
    logical division of substances.  In the division of relations, etc.,
    we do not, of course, disregard these differences, but we disregard
    some others.

5.  There is nothing to prevent almost any sort of difference from
    being conventionally neglected in some discourse, and if 'I' be
    a term which in consequence of such neglect becomes indivisible
    in that discourse, we have in that discourse [that the number
    of stipulated individuals that 'I' denotes is exactly one].

6.  This distinction between the absolutely indivisible and that which is
    one in number from a particular point of view is shadowed forth in the
    two words 'individual' ('to atomon') and 'singular' ('to kath ekaston');
    but as those who have used the word 'individual' have not been aware that
    absolute individuality is merely ideal, it has come to be used in a more
    general sense.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 25

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I started to read your comments on Peirce's 1883 remark, and I'm
afraid to say that they are wholly off-base almost from the very
outset.  Anyone who accorded even the most casual notice to the
extended discussion that we had earlier this year on Peirce's
"Extension x Comprehension = Information" equation must have
noted that Peirce has a very different idea about the whole
relationship between extensions and intensions than are
dreamt of in your stock (and pillory) dichotomies.

The mere recognition of the question as to which predicates are "admissible"
to a given context of discussion and stage of play strikes what we normally
regard as a very "modern" chord.  And when one says that two objects are
identical iff they have all "admitted" predicates in common, then it has
an obvious bearing on yielding a "relativized indiscernibility principle".
Sung another way, this is just the question of which hypotheses are
admissible, which is the problem of "giving a rule to abduction",
which rule is none other than the "pragmatic maxim", so I think
it is clear why Peirce emphasizes this identity question as yet
another variation on the main motive of pragmatism.  The Upshot.
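
Here is a rough sketch of that relativized reading in Python -- the
universe, the predicates, and the borrowing of Philip from the 1870
passage are all my own liberties, not anything in Peirce's text:

    def identical_relative_to(x, y, admitted, holds):
        """x and y agree on every predicate admitted in this discussion."""
        return all(holds(p, x) == holds(p, y) for p in admitted)

    # Hypothetical facts about a tiny universe.
    facts = {
        ("human", "Philip_drunk"): True,  ("human", "Philip_sober"): True,
        ("sober", "Philip_drunk"): False, ("sober", "Philip_sober"): True,
    }
    holds = lambda p, x: facts[(p, x)]

    # Admit only "human" and the two are identical in that discourse;
    # admit "sober" as well and the logical division reappears.
    print(identical_relative_to("Philip_drunk", "Philip_sober",
                                {"human"}, holds))            # True
    print(identical_relative_to("Philip_drunk", "Philip_sober",
                                {"human", "sober"}, holds))   # False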

I will give it another try tomorrow.  In the meantime you might glance at
a few of the following excerpts that I shared with the Peirce List earlier
this spring in connection with the "Question Regarding Indexicality" and
the "Semiosis & Inference" threads, and that I recall Tom and Fernando
and I discussing quite a bit:

Extension x Comprehension = Information -- Selected Links

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 26

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SS: To put the whole story in a nutshell (which necessarily distorts it a bit),
    recall that in the history of philosophy, there have been two main contenders
    as criteria of identity for existents:  essential properties and spatio-temporal
    continuity.  Peirce's 1870 view clearly falls into the former group, identity of
    existents always being in respect of some property.  Peirce's later view appears
    to fall in the latter group, allowing entities that are completely indiscernible
    at one time, even occupying the same space at that time, to be distinct, because
    they are in spatio-temporal continuity with entities occupying different places
    at another time. 

SS: Recall how this discussion of identity started.  I said that
    I would have expected Peirce to take a "realists" position: 

SS: quoting from SS post of 11/9/02, 10:39 AM:
    
    | namely, that identity is always in respect to some universal
    | or type.  For example, a is the same as b with respect to color,
    | or size, or personhood (the same "person" as), etc.  Then what has
    | been called (since Aristotle) "numerical identity" could be regarded
    | as a "degenerate" form of relative identity, in which a is the same as b
    | in 'every' respect.  This would make Leibniz's law of indiscernibles true
    | for numerical identity but not for identity simpliciter, since a could be
    | identical to b in one respect and different in another.

SS: Only I found, on reading Peirce, seemingly conflicted statements that
    prevented me from confirming this hypothesis and indeed, prevented me
    from forming a clear idea of Peirce's theory of identity.

I will continue with the reading from Leibniz, which I began for two reasons:
one, to introduce some of the terminology that Peirce was taking for granted
in his writing about such concepts as "composite", "individual", "primitive",
"simple", and so on, two, in order to give an account of Leibniz's principle
as Leibniz was given to write about it.

As far as what you have been writing on this thread,
I find it at the present time to be incommensurable
with any of the meanings that I know for words like
"identity", "realist", "relative", "degenerate", etc.

So let me ask the following questions:

Why do you call the conflating of identity with similarity a "realist" position?
For that matter, why not call your "relative identity" by the name "similarity"?
The use of "relative" in this way, to refer to a universal or an absolute term,
seems to be just begging for trouble.  Moreover, it introduces a confound with
all of the other sorts of relativity that might be involved in predication.

Why do you call "numerical identity" the "degenerate" form of "relative identity",
and why do you call your "relative identity" by the name "identity simpliciter"?

You obviously understand that any statement involving a phrase
like "all predicates", "all properties", or "every respect" is
to be regarded with extreme circumspection.  Why can you not
accord to Peirce the right that we all assume for ourselves,
to wit, of having to look at it from many different angles?
As I see it, there is/are a host of ambiguities lurking in
all of these concepts, one that cannot be addressed short
of saying what one means by "all", "every", "predicate",
"property", and "respect".

If one finds even the simplest question, for instance,
whether mass is a "property" of a physical "entity",
one whereof one must be silent, then does it not
appear that the issue of Leibniz's principle is
not so much whether it is true, just yet, but
what in the heceity it means?

Finally, I will just point out that your continuing projection of the
3-fold (tone, token, type) upon the 2-some (particular, universal) is
causing more than a bit of distortion in the texts of Peirce you read.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 27

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JA, citing Peirce's "On a Limited Universe of Marks",
    in 'Studies in Logic' (1883, 1983), pp. 182-186,
    CP 2.517-531;  CE 4, 450-453):

    http://suo.ieee.org/ontology/msg03204.html

SS: This passage is opposed to the kind of extensionalism advocated by Quine.
    Quine's extensional languages are ones in which classes substitute for
    properties and relations, two classes being identical if they have the
    same members.  In an intensional language, admitting properties as well
    as classes, different properties may belong to exactly the same things.
    In an intensional language, "a proposition concerning the relations of
    two groups of marks is not necessarily equivalent to any proposition
    concerning classes of things".  Extensional languages, such as the
    first-order predicate calculus, or set theory, are adequate for
    mathematics, but it is controversial whether the sentences of
    ordinary language or the sciences in general can be translated
    into such an extensional language sentence for sentence.

I wish that you would try every now and then reading what Peirce writes
without trying to atomize each and every remark, if not the man himself,
according to your true-false checklist of dichotomies, especially since
the most casual reader of Peirce would know that he would consider your
attempt to pit extensions versus intensions (properly "comprehensions")
to be an utterly false and misleading antagonism.

Just reaching into the bean bag of all possible quotations:

| The moment, then, that we pass from nothing and the vacuity of being
| to any content or sphere, we come at once to a composite content and
| sphere.  In fact, extension and comprehension -- like space and time --
| are quantities which are not composed of ultimate elements;  but
| every part however small is divisible.
|
| The consequence of this fact is that when we wish to enumerate the
| sphere of a term -- a process termed 'division' -- or when we wish
| to run over the content of a term -- a process called 'definition' --
| since we cannot take the elements of our enumeration singly but must
| take them in groups, there is danger that we shall take some element
| twice over, or that we shall omit some.  Hence the extension and
| comprehension which we know will be somewhat indeterminate.  But
| we must distinguish two kinds of these quantities.  If we were to
| subtilize we might make other distinctions but I shall be content
| with two.  They are the extension and comprehension relatively to
| our actual knowledge, and what these would be were our knowledge
| perfect.
|
| Logicians have hitherto left the doctrine of extension
| and comprehension in a very imperfect state owing to the
| blinding influence of a psychological treatment of the
| matter.  They have, therefore, not made this distinction
| and have reduced the comprehension of a term to what it
| would be if we had no knowledge of fact at all.  I mention
| this because if you should come across the matter I am now
| discussing in any book, you would find the matter left in
| quite a different state.
|
| CSP, CE 1, page 462.
|
| Charles Sanders Peirce,
|"The Logic of Science, or, Induction and Hypothesis",
| Lowell Institute Lectures of 1866, pages 357-504 in:
|
|'Writings of Charles S. Peirce:  A Chronological Edition',
|'Volume 1, 1857-1866', Peirce Edition Project,
| Indiana University Press, Bloomington, IN, 1982.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 28

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

CL = Cathy Legg
SS = Seth Sharpless

CL: And the long 1885 quote is about the indiscernibility of identicals only.

SS: Are you sure?  Notice in the 1885 quote:

    | But this relation of identity has peculiar properties.
    | The first is that if i and j are identical, whatever
    | is true of i is true of j.  The other property is that
    | if everything which is true of i is true of j, then
    | i and j are identical.

SS: Isn't the "other property" the identity of indiscernibles?  (Well, almost.
    He obviously slipped, intending the protasis to be "if everything which is
    true of i is true of j 'and' everything which is true of j is true of i,
    then i and j are identical".)

If "i" and "j" are individual terms, then they are determinate
on all of the properties that are available in the discussion.
If j has all of the properties of i, and if i is determinate
on all of the available properties, then, as an atom, j = i.

This assumes, as taken for granted in this context, that the "universe of marks"
is closed under negation, that is, if A is a property then ~A is a property, and,
of course, if A is false of x then ~A is true of x.  Hence, the peculiarity.
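
Since the argument turns on a small piece of combinatorics, a minimal
sketch in Python may help.  The marks A, B, C below are hypothetical
placeholders of my own choosing, not anything in the texts above;  the
sketch only checks that in a mark universe closed under negation, any
individual term having every property of another coincides with it.

    from itertools import product

    # Hypothetical atomic marks;  the property universe is {A, ~A, B, ~B, C, ~C},
    # hence closed under negation.
    MARKS = ["A", "B", "C"]

    # An individual term is determinate on every mark:  a total True/False assignment.
    individuals = [dict(zip(MARKS, bits))
                   for bits in product([False, True], repeat=len(MARKS))]

    def properties(x):
        # Every property true of x:  (m, True) stands for m, (m, False) for ~m.
        return {(m, x[m]) for m in MARKS}

    # If everything true of i is true of j, then j = i -- the "peculiar property".
    for i in individuals:
        for j in individuals:
            if properties(i) <= properties(j):
                assert i == j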

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Note 29

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o



o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Work 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SS = Seth Sharpless

SS: Yes, the fact that there can be different dimensions along
    which "identity" can be assessed is brought home forcibly
    in lexicography, where, for example, we have to deal with:
    phonetic identity and typographic identity, identity of
    reference, identity of meaning, homonymy, synonymy, etc.
    Philosophic and logical theories of identity often do not
    seem to do justice to this problem of criterial identity
    (i.e., "sameness in respect to"), though it does seem that
    in most commonplace judgements of identity (e.g., same man,
    same word, same river, same meaning, etc.), it
    is always sameness in respect of some criterion or property
    that is at issue.  The hoary problem of indiscernibility of
    identicals vs identity of indiscernibles is with us yet in
    philosophy of logic.  And for this reason, perhaps, Harris
    has a point in focusing on the type-token distinction as
    a problem area.  However, it seems to me that this problem
    is most acute for the nominalist, who in judging identity,
    has always to come back to "same particular".  As a student
    of Quine, you must have thought a good deal about this problem.
    Quine objects to properties on the ground that we do not have
    criteria of identity for properties.  For Quine, one would need
    criteria of identity for classes (or properties, if one insists
    on admitting them) but no criteria of identity for individuals;
    individuals ARE criteria of identity for Quine.  To know whether
    class a = class b, one looks to the individuals that they (or
    member classes) contain. 

SS: As a realist, I take exactly the opposite view.  I would say
    that one needs criteria of identity for individuals but not
    for properties (or at least not for all properties) since
    properties ARE criteria of identity.  (Of course, Quine --
    like Russell in his nominalistic phase -- has a certain
    amount of trouble in specifying what is an individual;
    time-slices and all that.)

SS: From a logical or metaphysical point of view, this is
    a bewildering and fundamental problem area.  I don't
    feel very confident about Peirce's theory of identity.
    A computer search of 'Collected Papers' has left me
    somewhat confused.  Have you (or any lister) a good
    idea of Peirce's philosophy of identity? Of course,
    I would expect him to take something like what
    I have called the "realist's" position above. 

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Work 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

SS = Seth Sharpless

SS: Thanks to Cathy, Jon, Joe, Howard, and Benjamin for responding to
    my request for help in understanding Peirce's theory of identity.

SS: Let me try to make my quandary clear:  I would have expected Peirce
    to take a position on identity akin to that which I have described as
    a "realist's" position, namely, that identity is always with respect to
    some universal or type.  For example, a is the same as b with respect to
    color, or size, or personhood (the same "person" as), etc.  Then, what has
    been called (since Aristotle) "numerical identity" could be regarded as
    a "degenerate" form of relative identity, in which a is the same as b
    in 'every' respect.  This would make Leibniz's law of indiscernibles
    true for numerical identity but not for identity simpliciter, since
    a could be identical to b in one respect and different in another.
    In the late 20th century, the classic defense of a "realist" theory
    of identity of this kind is that of Peter Geach ("Identity", reprinted
    in 'Logic Matters', Univ Cal. Press, 1972, 238-249).  The opposing
    "nominalistic" position is expounded by Quine and Wiggins in their
    respective criticisms of Geach:  Quine, Phil. Rev, 1964, p. 102 and
    Wiggins, 'Identity and Spatio-Temporal Continuity', Blackwell, 1971.

SS: However, surveying passages on identity or sameness in 'Collected Papers' and
    most of the 'Writings' (I don't have access to Vol 4 of the latter at the
    moment) has failed to support this hypothesis.  Indeed, I find it difficult
    to extract a coherent theory of identity from Peirce's writings, at least
    up until 1903 or thereabouts.  Just to give an example of the problem,
    Cathy and Jon mentioned Peirce's concept of teridentity, which seems
    to be essential to the proof of "Peirce's Theorem" about reducibility
    to triads.  Cathy recommends Burch's papers in Houser's book.  (I have
    looked at these papers in the past, but the book is not available to me
    at the moment, so I'm relying on CP and 'Writings' and dating passages in
    CP is difficult as you know.)  The first reference I can find to teridentity
    in CP is from "A Syllabus of Certain Topics of Logic" (1903).  But here is the
    problem:  As late as 1896, Peirce wrote: 

SS, quoting CSP:

    | Now, identity is essentially a dual relation.
    | That is, it requires two subjects and no more.
    | If three objects are identical, this fact is
    | entirely contained in the fact that the three
    | pairs of objects are identical.  CP1.446 (1896)

SS: So trying to make a coherent story out of Peirce's writings on identity,
    most of which precede his development of the concept of teridentity, even
    though the latter is essential to the proof of "Peirce's theorem" and must
    somehow have played a role in his thinking, if not a conscious one, early
    on, is exceedingly difficult.  There are many interesting comments on
    identity scattered through Peirce's papers, but I have not been able
    to make a coherent story out of them, even of the ones preceding the
    explicit development of the teridentity theory.  I sometimes have the
    impression in his earlier writings on this subject that he is all over
    the map. In any case, I have to say that neither his late theory of
    teridentity nor his earlier treatments of identity as an essentially
    binary relation seem to be compatible with what I have called the
    "realist" theory of identity.  In discussing Peirce's theory of
    identity, one has to remember his devotion to Scotus's haecceities,
    but making things difficult for the interpreter, Peirce has put his
    own spin on haecceities.  This comes out in his observation that
    "Even Duns Scotus is too nominalistic when he says that universals
    are contracted to the mode of individuality in singulars, meaning,
    as he does, by singulars, ordinary existing things" (8.208, 1905). 

SS: Jon and Joe cite the pragmatic maxim, either Peirce's or James's version:
    "A difference that makes no difference is no difference at all", as the
    way to discover Peirce's views on identity.  But one has to remember
    that it took Peirce himself many years and the development of his
    theory of synechism to show that the incommensurability of the
    diagonal conforms to the requirements of the pragmatic maxim,
    that it is a "difference that makes a difference." 

SS: I admit to utter confusion over Peirce's theory of identity,
    and further help would be welcome.

SS: Things are not helped by Peirce's devotion to coining new words.
    Here is a classic bit of Peirceana touching, I think, on identity:

SS, quoting CSP:

| There is but one ambilative suilation.  It is the juxtalation, or coëxistence.
| There is but one contrambilative suilation:  it is the relation of individual
| identity, called numerical identity by the logicians.  But the adjective seems
| needless.  There is but one ambilative [contra] suilation:  it is the relation
| of individual otherness, or negation.  There is properly no contrambilative
| contrasuilation:  it would be the absurd relation of incompossibility.
| These four relations are to be termed the Four Cardinal Dyadic Relations
| of Second Intention.  It will be enough to call them the cardinilations,
| or cardinal relations.  (CP 3.586).
|
| Any peneperlation or penereperlation is a juxtambilation;
| any perlation or reperlation is, in addition, a juxtasuilation.
| Any penecontraperlation or penecontrareperlation is an extrambilation:
| any contraperlation or contrareperlation is, in addition, an extrasuilation.
| Every ambilation is a penereperlative penereperlation:  every contrambilation
| is a penecontrareperlative penecontraperlation.  Every suilation is
| a juxtareperlative juxtaperlation: every contrasuilation is
| an extrareperlative extraperlation.  (CP 3.587).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Work 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

JA: It seems to me that the sources of your confusion are quite clear:

JA: 1.  You consistently ignore what Peirce said in his 1870 remark on the
        "doctrine of undividuals", that attributions of identity are relative to
        a context of discourse, an insight that he did not abandon but developed
        to its finest degree in his 1883 remark "On a Limited Universe of Marks":

        http://suo.ieee.org/ontology/msg03204.html

SS: For the benefit of those who may not be following Jon Awbrey's
    arcane argument, I'm going to try to explain it, and to explain
    why it is mostly nonsense.  For convenience, I give here the
    quote to which Awbrey is referring.  Then I shall comment
    on its bearing, if any, on logical criteria of identity.

| On A Limited Universe Of Marks
|
| Boole, De Morgan, and their followers, frequently speak of
| a "limited universe of discourse" in logic.  An unlimited universe
| would comprise the whole realm of the logically possible.  In such
| a universe, every universal proposition, not tautologous, is false;
| every particular proposition, not absurd, is true.  Our discourse
| seldom relates to this universe:  we are either thinking of the
| physically possible, or of the historically existent, or of
| the world of some romance, or of some other limited universe.
|
| But besides its universe of objects, our discourse also refers to
| a universe of characters.  Thus, we might naturally say that virtue
| and an orange have nothing in common.  It is true that the English
| word for each is spelt with six letters, but this is not one of the
| marks of the universe of our discourse.
|
| A universe of things is unlimited in which every combination of characters,
| short of the whole universe of characters, occurs in some object.  In like
| manner, the universe of characters is unlimited in case every aggregate
| of things short of the whole universe of things possesses in common one
| of the characters of the universe of characters.  The conception of
| ordinary syllogistic is so unclear that it would hardly be accurate
| to say that it supposes an unlimited universe of characters;  but
| it comes nearer to that than to any other consistent view.  The
| non-possession of any character is regarded as implying the
| possession of another character the negative of the first.
|
| In our ordinary discourse, on the other hand, not only are both universes limited, but,
| further than that, we have nothing to do with individual objects nor simple marks;
| so that we have simply the two distinct universes of things and marks related to
| one another, in general, in a perfectly indeterminate manner.  The consequence
| is, that a proposition concerning the relations of two groups of marks is not
| necessarily equivalent to any proposition concerning classes of things;  so
| that the distinction between propositions in extension and propositions in
| comprehension is a real one, separating two kinds of facts, whereas in the
| view of ordinary syllogistic the distinction only relates to two modes of
| considering any fact.  To say that every object of the class S is included
| among the class of P's, of course must imply that every common character of
| the P's is a common character of the S's.  But the converse implication is by
| no means necessary, except with an unlimited universe of marks.  The reasonings
| in depth of which I have spoken, suppose, of course, the absence of any general
| regularity about the relations of marks and things.  (CSP, SIL, 182-183).
|
| CSP, SIL, pp. 182-186.  (CP 2.517-531;  CE 4, 450-453).
|
| Charles Sanders Peirce, "On A Limited Universe Of Marks" (1883), in:
| CSP (ed.), 'Studies in Logic, by Members of the Johns Hopkins University',
| Reprinted with an Introduction by Max H. Fisch and a Preface by Achim Eschbach,
|'Foundations of Semiotics, Volume 1', John Benjamins, Amsterdam, NL, 1983.
|
|'Writings of Charles S. Peirce:  A Chronological Edition, Volume 4, 1879-1884',
| Peirce Edition Project, Indiana University Press, Bloomington, IN, 1986.

SS: I comment on this quote, showing that Jon Awbrey's contention that
    it has some deep significance concerning Leibniz's law is nonsense.

SS: This passage is opposed to the kind of extensionalism advocated by Quine.
    Quine's extensional languages are ones in which classes substitute for
    properties and relations, two classes being identical if they have the
    same members.  In an intensional language, admitting properties as well
    as classes, different properties may belong to exactly the same things.
    In an intensional language, "a proposition concerning the relations of
    two groups of marks is not necessarily equivalent to any proposition
    concerning classes of things".  Extensional languages, such as the
    first-order predicate calculus, or set theory, are adequate for
    mathematics, but it is controversial whether the sentences of
    ordinary language or the sciences in general can be translated
    into such an extensional language sentence for sentence.
   
SS: Peirce evidently thinks (as I do) that such translation is not
    always possible for ordinary "limited universes of discourse".
    However, intensionality in itself neither supports nor defeats
    Leibniz's Law.  In spite of adopting an intensional language,
    one could deny Leibniz's Law, saying, as Peirce did late in
    his career, that two raindrops could have all their properties,
    including position in space, in common (being merged together)
    and yet not be numerically identical raindrops, owing to their
    dynamic interactions with one another.  Or one could affirm
    Leibniz's Law in an intensional language, saying that two
    things are numerically identical if and only if anything
    true of one is true of the other, as Peirce did, early
    in his career.

SS: The second paragraph of the quote bears on the law of "excluded middle,"
    since, as Peirce says, in a universe of discourse in which the universe
    of characters is limited, (x)(Px V ~Px) may not hold. [It may not be
    the case that "the non-possession of any character is regarded as
    implying the possession of another character the negative of the
    first."] This paragraph is relevant to whether we require a sortal
    logic or logic admitting of presuppositions to cope with natural
    languages, but it has no bearing on Leibniz's law. 
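
For what it is worth, the failure in question can be exhibited in a toy
model.  The objects and characters below are my own hypothetical choices,
not anything in the quoted passage;  the sketch merely asks whether the
limited universe of characters contains any character that behaves as
the negative of a given one.

    # Hypothetical objects and the characters they possess;  the universe of
    # characters is limited and not closed under negation.
    things = {"orange": {"round"}, "virtue": set()}
    chars  = {"round"}

    def negative_of(P):
        # A character Q in the universe possessed by exactly the things
        # lacking P, if the universe happens to contain one.
        for Q in chars:
            if all((P in cs) != (Q in cs) for cs in things.values()):
                return Q
        return None

    print(negative_of("round"))   # None:  no mark in this universe serves as ~round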

SS: To understand the third paragraph, we have to know what Peirce means by
    an "unlimited universe of marks."  Well, here is an example: A universe
    consisting of only two "marks", namely the colors red and blue, and two
    "objects," one red and the other blue.  In this case, "the universe of
    characters is unlimited [because] every aggregate of things short of
    the whole universe of things possesses in common one of the characters
    of the universe of characters".  This is because there are only two
    non-empty aggregates short of the whole universe, namely the aggregate
    consisting of the one red thing and the aggregate consisting of the one
    blue thing.  Now, in such a universe as this, Peirce points out, when
    the class S is included in the class P, every common character of the
    S's must also belong to the P's.  Why?  Well, in this case, if class S
    is included in class P, where P is an "aggregate of things short of the
    whole universe," S and P must be the same class. 
 
SS: But so what? Neither in this odd universe, nor in the more common
    "limited" universes of ordinary discourse, described by an intensional
    language, is there reason to deny Leibniz's Law. If Peirce chose to
    deny Leibniz's law, as he did late in his career, he did so on grounds
    not relevant to this passage, in spite of Jon Awbrey's handwaving.

SS: If I can muster up the patience, I will respond to Jon Awbrey's second
    point (below) in another message. But to make a long story short, it is
    just more self-aggrandizing handwaving, having no bearing on the
    significance of the quotes in question. 

JA: 2. You consistently ignore the distinctions and the meanings of technical
       terms in mathematics, as they were used by Peirce and as they have been
       used for at least 200 years, for instance, the meanings of "composition"
       and "reduction" as they are used in the algebra of relations and in the
       logic of relative terms.

JA: I have pointed out all of this to you before. It is all pretty clear to
    anybody who does not have a prior theory of what can qualify as a theory
    of identity, or a philosophy, for that matter, that they keep trying to
    shove Peirce's more general theory, and more general philosophy, into.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Subject: Re: Tone, Token, Type
From: "Seth Sharpless" <seth.sharpless@colorado.edu>
Date: Mon, 18 Nov 2002 13:35:39 -0700
X-Message-Number: 7

~~~~~~~~~Quote from Peirce,  CP1.20, 1903~~~~~~~~~~ 
I have since [1871] very carefully and thoroughly revised my
philosophical opinions more than half a dozen times, and have
modified them more or less on most topics.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~   
  
Jon's publication project has finally yielded some passages directly
relevant to the problem I raised concerning Peirce's theory of identity.
I apologize for the length of this post, but the length is necessary to
exhibit the apparent conflict in Peirce's early and late theories of
identity.   

SS: To put the whole story in a nutshell (which necessarily distorts it a bit),
    recall that in the history of philosophy, there have been two main contenders
    as criteria of identity for existents:  essential properties and spatio-temporal
    continuity.  Peirce's 1870 view clearly falls into the former group, identity of
    existents always being in respect of some property.  Peirce's later view appears
    to fall in the latter group, allowing entities that are completely indiscernible
    at one time, even occupying the same space at that time, to be distinct, because
    they are in spatio-temporal continuity with entities occupying different places
    at another time. 

SS: Recall how this discussion of identity started.  I said that
    I would have expected Peirce to take a "realist's" position:

SS: quoting from SS post of 11/9/02, 10:39 AM:
    
    | ... namely, that identity is always in respect to some universal
    | or type.  For example, a is the same as b with respect to color,
    | or size, or personhood (the same "person" as), etc.  Then what has
    | been called (since Aristotle) "numerical identity" could be regarded
    | as a "degenerate" form of relative identity, in which a is the same
    | as b in 'every' respect.  This would make Leibniz's law of indiscernibles
    | true for numerical identity but not for identity simpliciter, since a
    | could be identical to b in one respect and different in another.

SS: Only I found, on reading Peirce, seemingly conflicted statements that
    prevented me from confirming this hypothesis and indeed, prevented me
    from forming a clear idea of Peirce's theory of identity.

The passages quoted by Jon that are relevant to my problem are: 

CP 3.93, 1870 (Jon's 11/16/02, 5:30 PM: "Tone, Token, Type")

CP 3.613, 1911 (Jon's 11/17/02, 11:36PM: "Doctrine of Individuals") 

Jon omitted a helpful footnote to the 1870 passage: 

| The absolute individual can not only not be realized in sense or thought,
| but cannot exist, properly speaking.  For whatever lasts for any time,
| however short, is capable of logical division, because in that time
| it will undergo some change in its relations. But what does not
| exist for any time, however short, does not exist at all.
| All, therefore, that we perceive or think, or that exists,
| is general.  So far there is truth in the doctrine of
| scholastic realism.  But all that exists is infinitely
| determinate, and the infinitely determinate is the
| absolutely individual.  This seems paradoxical,
| but the contradiction is easily resolved.  That
| which exists is the object of a true conception.
| This conception may be made more determinate than
| any assignable conception;  and therefore it is
| never so determinate that it is capable of no
| further determination.  (CP 3.93 n.)

The 1870 and 1911 passages seem conflicted.  In 1870, Peirce held that
any logical "individual" would be "infinitely determinate".  By this,
he meant infinitely specifiable, in the sense that however many
distinguishing properties we lay down, we should find that
the identity of the individual remains somewhat vague,
further specification always being possible.  

This inescapable vagueness, according to Peirce, is not due simply to limited
cognitive abilities on our part -- that is, it is not that existing entities
are actually determinate in all their properties, only we cannot apprehend
them in their infinite complexity.  It is not this;  the indeterminacy of
character is a necessary feature of the world as its ontological potential
is realized.  For Peirce, no genuinely determinate individual actually
exists at any given time.  A wholly determinate individual is only
a possibility.  If it existed, he says, it would have to last for
more than an instant, and it would have to undergo some change
from instant to instant; thus, the entity at instant one could
be said to be identical to the entity at instant two only
in some respects, not absolutely.  Absolutely determinate
individuals belong only to the realm of the possible.  

Normally, when we speak of "individuals", we are not speaking of
infinitely determinate individuals, but only what the scholastics
called "singulars", that is, things which undergo change but which
are identifiable because they remain the same in some respect. 

Accordingly, so far as the 1870 passage quoted goes, I was right
in anticipating that Peirce would take a "realist's" view of
identity in which ordinary existents must be identified with
respect to some universal, so that though identical in one
respect, they remain discernible in other respects.

The only addition is that Peirce restricts what I called the "degenerate"
form of identity -- identity in all respects -- to the realm of the ideal
or possible (though it has to be remembered that for Peirce, the merely
possible is also real, just not existent).  Actually existing entities,
which endure from instant to instant, cannot exhibit numerical identity,
identity in all respects, because there are always respects in which
they are vague or indeterminate.   

Fine! That was 1870.  Now, let us skip to Peirce's later period,
starting with a quote dated 1911, which must be nearly Peirce's
last word on the matter.   

~~~~~Excerpts from CP3.613 (1911)~~~~~~~~~~~~~~
Another definition which avoids the above difficulties is that an
individual is something which reacts. That is to say, it does react
against some things, and is of such a nature that it might react, or
have reacted, against my will...     
     According to this definition, that which alone immediately presents
itself as an individual is a reaction against the will. But everything
whose identity consists in a continuity of reactions will be a single
logical individual. Thus any portion of space, so far as it can be
regarded as reacting, is for logic a single individual; its spatial
extension is no objection. With this definition there is no difficulty
about the truth that whatever exists is individual, since existence (not
reality) and individuality are essentially the same thing; and whatever
fulfills the present definition equally fulfills the former definition
by virtue of the principles of contradiction and excluded middle,
regarded as mere definitions of the relation expressed by "not".
As for the principle of indiscernibles, if two individual things
are exactly alike in all other respects, they must, according to
this definition, differ in their spatial relations, since space
is nothing but the intuitional presentation of the conditions
of reaction, or of some of them.  But there will be no logical
hindrance to two things being exactly alike in all other respects;
and if they are never so, that is a physical law, not a necessity
of logic. This second definition, therefore, seems to be the
preferable one.
~~~~End of Peirce quote~~~~~~~~~~~~   

The new theory was apparently developed by 1896,
as the following quote from that year illustrates.

~~~~~~~Peirce quote from CP1.458, 1896~~~~~~~~~~~ 
'Hic et nunc' is the phrase perpetually in the mouth of Duns Scotus,
who first elucidated individual existence.  It is a forcible phrase
if understood as Duns did understand it, not as describing individual
existence, but as suggesting it by an example of the attributes found
in this world to accompany it.  Two drops of water retain each its
identity and opposition to the other no matter in what or in how
many respects they are alike.  Even could they interpenetrate one
another like optical images (which are also individual), they would
nevertheless react, though perhaps not at that moment, and by virtue
of that reaction would retain their identities.  The point to be
remarked is that the qualities of the individual thing, however
permanent they may be, neither help nor hinder its identical
existence.  However permanent and peculiar those qualities
may be, they are but accidents;  that is to say, they are
not involved in the mode of being of the thing;  for the
mode of being of the individual thing is existence;
and existence lies in opposition merely.
~~~~~~~~~~End of Peirce quote~~~~~~~~~~~ 

Notice that two drops of water completely merged and indiscernible
at time 't' nevertheless retain their distinct identities at time 't'
because they are continuous with spatially distinct drops interacting
with one another at a different time.   The 1896 quote seems even more
radical than the 1911 quote, in that it was not evident from the latter
that two entities, indiscernible in all respects, 'even with respect to
spatial position', at one time, could nevertheless be distinct, owing to
being in spatio-temporal continuity with entities which are spatially
distinct at another time. 

A problem with this theory of identity is in understanding how spatiotemporal
continuity is to be shown.  If the two drops, a and b, are completely merged
and indiscernible in all qualities at time t, and apart and interacting at
time t+1, how can one know which of the interacting drops at time t+1 is a
and which is b? 

But that is an aside.  The exegetical problem is that of either
rendering the seemingly different theories of identity consistent,
or of coming to terms with the idea that Peirce might have changed
his mind on this, as on other things:  

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Work Area

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

CP 3.398 (1885)
CP 1.456 (1896)
CP 1.458 (1896)
CP 4.311 (1897)

authoritarian dogmatic vs. exploratory hypothetical

personal identity vs. atomic believer

| A_1 vs. ~A_1
|
| ...
|
| A_k vs. ~A_k

"reality is real" vs. "no it isn't"

"hermeneutic equivalence class" (HEC)

atomic philosopher, determinate on every pro-ism vs. con-ism

The difference between a realist and a personal infallibilist
is like the relation between a monotheist and a theomaniac.

It is the difference between
one who thinks that God is one
and one who thinks that one is God.

Let me put my last remarks -- about the false opposition between
any two such positions as might be dedicated solely to extensions
or to intensions, admitting neither a tertium quid nor the chance
of solid integration between these two aspects of a sign relation --
in the context of a contemporary problem that has been discussed
in many circles, namely, that of "language acquisition" (LA).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Work Area

B = {0, 1} = {false, true} = {unindicated, indicated}

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COR.  Composition Of Relations

01.  http://suo.ieee.org/ontology/msg04429.html
02.  http://suo.ieee.org/ontology/msg04430.html
03.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

COS.  Classification Of Signs

01.  http://suo.ieee.org/ontology/msg04336.html
02.  http://suo.ieee.org/ontology/msg04337.html
03.  http://suo.ieee.org/ontology/msg04338.html
04.  http://suo.ieee.org/ontology/msg04339.html
05.  http://suo.ieee.org/ontology/msg04342.html
06.  http://suo.ieee.org/ontology/msg04343.html
07.  http://suo.ieee.org/ontology/msg04344.html
08.  http://suo.ieee.org/ontology/msg04345.html
09.  http://suo.ieee.org/ontology/msg04376.html
10.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

DOI.  Doctrine of Individuals

01.  http://suo.ieee.org/ontology/msg04332.html
02.  http://suo.ieee.org/ontology/msg04348.html
03.  http://suo.ieee.org/ontology/msg04352.html
04.  http://suo.ieee.org/ontology/msg04353.html
05.  http://suo.ieee.org/ontology/msg04354.html
06.  http://suo.ieee.org/ontology/msg04363.html
07.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

ECI.  Extension x Comprehension = Information

02.  http://suo.ieee.org/ontology/msg03747.html
03.  http://suo.ieee.org/ontology/msg03749.html
04.  http://suo.ieee.org/ontology/msg03752.html
08.  http://suo.ieee.org/ontology/msg03756.html
09.  http://suo.ieee.org/ontology/msg03757.html
10.  http://suo.ieee.org/ontology/msg03758.html
18.  http://suo.ieee.org/ontology/msg03769.html
19.  http://suo.ieee.org/ontology/msg03770.html
21.  http://suo.ieee.org/ontology/msg03772.html
22.  http://suo.ieee.org/ontology/msg03773.html
23.  http://suo.ieee.org/ontology/msg03774.html
24.  http://suo.ieee.org/ontology/msg03775.html
25.  http://suo.ieee.org/ontology/msg03777.html
27.  http://suo.ieee.org/ontology/msg03779.html
28.  http://suo.ieee.org/ontology/msg03780.html
30.  http://suo.ieee.org/ontology/msg03782.html
31.  http://suo.ieee.org/ontology/msg03784.html
32.  http://suo.ieee.org/ontology/msg03786.html
34.  http://suo.ieee.org/ontology/msg03791.html
35.  http://suo.ieee.org/ontology/msg03792.html
36.  http://suo.ieee.org/ontology/msg03793.html
37.  http://suo.ieee.org/ontology/msg03794.html
38.  http://suo.ieee.org/ontology/msg03796.html
39.  http://suo.ieee.org/ontology/msg03797.html
40.  http://suo.ieee.org/ontology/msg03798.html
41.  http://suo.ieee.org/ontology/msg03800.html
42.  http://suo.ieee.org/ontology/msg03801.html
43.  http://suo.ieee.org/ontology/msg03802.html
44.  http://suo.ieee.org/ontology/msg03803.html
45.  http://suo.ieee.org/ontology/msg03804.html
46.  http://suo.ieee.org/ontology/msg03805.html
47.  http://suo.ieee.org/ontology/msg03806.html
48.  http://suo.ieee.org/ontology/msg03807.html
68.  http://suo.ieee.org/ontology/msg03838.html
70.  http://suo.ieee.org/ontology/msg03847.html
91.  http://suo.ieee.org/ontology/msg04230.html
95.  http://suo.ieee.org/ontology/msg04243.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HEC.  Hermeneutic Equivalence Classes

01.  http://suo.ieee.org/ontology/msg04355.html
02.  http://suo.ieee.org/ontology/msg04356.html
03.  http://suo.ieee.org/ontology/msg04357.html
04.  http://suo.ieee.org/ontology/msg04358.html
05.  http://suo.ieee.org/ontology/msg04359.html
06.  http://suo.ieee.org/ontology/msg04360.html
07.  http://suo.ieee.org/ontology/msg04361.html
08.  http://suo.ieee.org/ontology/msg04362.html
09.  http://suo.ieee.org/ontology/msg04369.html
10.  http://suo.ieee.org/ontology/msg04371.html
11.  http://suo.ieee.org/ontology/msg04402.html
12.  http://suo.ieee.org/ontology/msg04403.html
13.  http://suo.ieee.org/ontology/msg04404.html
14.  http://suo.ieee.org/ontology/msg04407.html
15.  http://suo.ieee.org/ontology/msg04410.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I&T.  Identity & Teridentity

01.  http://suo.ieee.org/ontology/msg04340.html
02.  http://suo.ieee.org/ontology/msg04341.html
03.  http://suo.ieee.org/ontology/msg04346.html
04.  http://suo.ieee.org/ontology/msg04374.html
05.  http://suo.ieee.org/ontology/msg04375.html
06.  http://suo.ieee.org/ontology/msg04394.html
07.  http://suo.ieee.org/ontology/msg04397.html
08.  http://suo.ieee.org/ontology/msg04398.html
09.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I^3.  Inquiry Into Irreducibility

01.  http://suo.ieee.org/ontology/msg03163.html
02.  http://suo.ieee.org/ontology/msg03164.html
03.  http://suo.ieee.org/ontology/msg03165.html
04.  http://suo.ieee.org/ontology/msg03166.html
05.  http://suo.ieee.org/ontology/msg03167.html
06.  http://suo.ieee.org/ontology/msg03168.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LMU.  Limited Mark Universes

01.  http://suo.ieee.org/ontology/msg04349.html
02.  http://suo.ieee.org/ontology/msg04350.html
03.  http://suo.ieee.org/ontology/msg04351.html
04.  http://suo.ieee.org/ontology/msg04364.html
05.  http://suo.ieee.org/ontology/msg04365.html
06.  http://suo.ieee.org/ontology/msg04368.html
07.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

LOR.  Logic Of Relatives

01.  http://suo.ieee.org/ontology/msg04416.html
02.  http://suo.ieee.org/ontology/msg04417.html
03.  http://suo.ieee.org/ontology/msg04418.html
04.  http://suo.ieee.org/ontology/msg04419.html
05.  http://suo.ieee.org/ontology/msg04421.html
06.  http://suo.ieee.org/ontology/msg04422.html
07.  http://suo.ieee.org/ontology/msg04423.html
08.  http://suo.ieee.org/ontology/msg04424.html
09.  http://suo.ieee.org/ontology/msg04425.html
10.  http://suo.ieee.org/ontology/msg04426.html
11.  http://suo.ieee.org/ontology/msg04427.html
12.  http://suo.ieee.org/ontology/msg04431.html
13.  http://suo.ieee.org/ontology/msg04432.html
14.  http://suo.ieee.org/ontology/msg04435.html
15.  http://suo.ieee.org/ontology/msg04436.html
16.  http://suo.ieee.org/ontology/msg04437.html
17.  http://suo.ieee.org/ontology/msg04438.html
18.  http://suo.ieee.org/ontology/msg04439.html
19.  http://suo.ieee.org/ontology/msg04440.html
20.  http://suo.ieee.org/ontology/msg04441.html
21.  http://suo.ieee.org/ontology/msg04442.html
22.  http://suo.ieee.org/ontology/msg04443.html
23.  http://suo.ieee.org/ontology/msg04444.html
24.  http://suo.ieee.org/ontology/msg04445.html
25.  http://suo.ieee.org/ontology/msg04446.html
26.  http://suo.ieee.org/ontology/msg04447.html
27.  http://suo.ieee.org/ontology/msg04448.html
28.  http://suo.ieee.org/ontology/msg04449.html
29.  http://suo.ieee.org/ontology/msg04450.html
30.  http://suo.ieee.org/ontology/msg04451.html
31.  http://suo.ieee.org/ontology/msg04452.html
32.  http://suo.ieee.org/ontology/msg04453.html
33.  http://suo.ieee.org/ontology/msg04454.html
34.  http://suo.ieee.org/ontology/msg04456.html
35.  http://suo.ieee.org/ontology/msg04457.html
36.  http://suo.ieee.org/ontology/msg04458.html
37.  http://suo.ieee.org/ontology/msg04459.html
38.  http://suo.ieee.org/ontology/msg04462.html
39.  http://suo.ieee.org/ontology/msg04464.html
40.  http://suo.ieee.org/ontology/msg04473.html
41.  http://suo.ieee.org/ontology/msg04478.html
42.  http://suo.ieee.org/ontology/msg04484.html
43.  http://suo.ieee.org/ontology/msg04487.html
44.  http://suo.ieee.org/ontology/msg04488.html
45.  http://suo.ieee.org/ontology/msg04492.html
46.  http://suo.ieee.org/ontology/msg04497.html
47.  http://suo.ieee.org/ontology/msg04498.html
48.  http://suo.ieee.org/ontology/msg04499.html
49.  http://suo.ieee.org/ontology/msg04500.html
50.  http://suo.ieee.org/ontology/msg04501.html
51.  http://suo.ieee.org/ontology/msg04502.html
52.  http://suo.ieee.org/ontology/msg04503.html
53.  http://suo.ieee.org/ontology/msg04504.html
54.  http://suo.ieee.org/ontology/msg04506.html
55.  http://suo.ieee.org/ontology/msg04508.html
56.  http://suo.ieee.org/ontology/msg04509.html
57.  http://suo.ieee.org/ontology/msg04510.html
58.  http://suo.ieee.org/ontology/msg04511.html
59.  http://suo.ieee.org/ontology/msg04512.html
60.  http://suo.ieee.org/ontology/msg04513.html
61.  http://suo.ieee.org/ontology/msg04516.html
62.  http://suo.ieee.org/ontology/msg04517.html
63.  http://suo.ieee.org/ontology/msg04518.html
64.  http://suo.ieee.org/ontology/msg04521.html
65.  http://suo.ieee.org/ontology/msg04539.html
66.  http://suo.ieee.org/ontology/msg04541.html
67.  http://suo.ieee.org/ontology/msg04542.html
68.  http://suo.ieee.org/ontology/msg04543.html
69.

LOR.  Logic of Relatives -- Discussion Notes

10.  http://suo.ieee.org/ontology/msg04460.html
11.  http://suo.ieee.org/ontology/msg04461.html
12.  http://suo.ieee.org/ontology/msg04471.html
13.  http://suo.ieee.org/ontology/msg04472.html
14.  http://suo.ieee.org/ontology/msg04475.html
15.  http://suo.ieee.org/ontology/msg04476.html
16.  http://suo.ieee.org/ontology/msg04477.html
17.  http://suo.ieee.org/ontology/msg04479.html
18.  http://suo.ieee.org/ontology/msg04480.html
19.  http://suo.ieee.org/ontology/msg04481.html
20.  http://suo.ieee.org/ontology/msg04482.html
21.  http://suo.ieee.org/ontology/msg04483.html
22.  http://suo.ieee.org/ontology/msg04485.html
23.  http://suo.ieee.org/ontology/msg04486.html
24.  http://suo.ieee.org/ontology/msg04493.html
25.  http://suo.ieee.org/ontology/msg04494.html
26.  http://suo.ieee.org/ontology/msg04495.html
27.  http://suo.ieee.org/ontology/msg04496.html
28.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

MSI.  Manifolds of Sensuous Impressions

01.  http://suo.ieee.org/ontology/msg04370.html
02.  http://suo.ieee.org/ontology/msg04372.html
03.  http://suo.ieee.org/ontology/msg04373.html
04.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

NST.  Non Sequi Tours

01.  http://suo.ieee.org/ontology/msg04420.html
02.  http://suo.ieee.org/ontology/msg04434.html
03.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Reductions Among Relations

Old Version

01.  http://suo.ieee.org/ontology/msg01727.html
02.  http://suo.ieee.org/ontology/msg01738.html
03.  http://suo.ieee.org/ontology/msg01747.html
04.  http://suo.ieee.org/ontology/msg01766.html
05.  http://suo.ieee.org/ontology/msg01818.html
06.  http://suo.ieee.org/ontology/msg01821.html
07.  http://suo.ieee.org/ontology/msg02167.html
08.  http://suo.ieee.org/ontology/msg02475.html

New Version

01.  http://suo.ieee.org/ontology/msg04383.html
02.  http://suo.ieee.org/ontology/msg04384.html
03.  http://suo.ieee.org/ontology/msg04385.html
04.  http://suo.ieee.org/ontology/msg04386.html
05.  http://suo.ieee.org/ontology/msg04387.html
06.  http://suo.ieee.org/ontology/msg04388.html
07.  http://suo.ieee.org/ontology/msg04389.html
08.  http://suo.ieee.org/ontology/msg04390.html
09.  http://suo.ieee.org/ontology/msg04391.html
10.  http://suo.ieee.org/ontology/msg04392.html
11.  http://suo.ieee.org/ontology/msg04393.html
12.  http://suo.ieee.org/ontology/msg04395.html
13.  http://suo.ieee.org/ontology/msg04396.html
14.  http://suo.ieee.org/ontology/msg04399.html
15.  http://suo.ieee.org/ontology/msg04400.html
16.  http://suo.ieee.org/ontology/msg04401.html
17.  http://suo.ieee.org/ontology/msg04405.html
18.  http://suo.ieee.org/ontology/msg04406.html
19.  http://suo.ieee.org/ontology/msg04408.html
20.  http://suo.ieee.org/ontology/msg04411.html
21.  http://suo.ieee.org/ontology/msg04412.html
22.  http://suo.ieee.org/ontology/msg04413.html
23.  http://suo.ieee.org/ontology/msg04414.html
24.  http://suo.ieee.org/ontology/msg04415.html
25.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RIG.  Relations In General

01.  http://suo.ieee.org/ontology/msg04489.html
02.  http://suo.ieee.org/ontology/msg04490.html
03.  http://suo.ieee.org/ontology/msg04491.html
04.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

TOR.  Theory Of Relations

01.  http://suo.ieee.org/ontology/msg04377.html
02.  http://suo.ieee.org/ontology/msg04378.html
03.  http://suo.ieee.org/ontology/msg04379.html
04.  http://suo.ieee.org/ontology/msg04380.html
05.  http://suo.ieee.org/ontology/msg04381.html
06.  http://suo.ieee.org/ontology/msg04382.html
07.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

T^3.  Tone, Token, Type

01.  http://suo.ieee.org/ontology/msg04053.html
02.  http://suo.ieee.org/ontology/msg04325.html
03.  http://suo.ieee.org/ontology/msg04326.html
04.  http://suo.ieee.org/ontology/msg04327.html
05.  http://suo.ieee.org/ontology/msg04328.html
06.  http://suo.ieee.org/ontology/msg04329.html
07.  http://suo.ieee.org/ontology/msg04330.html
08.  http://suo.ieee.org/ontology/msg04331.html
09.  http://suo.ieee.org/ontology/msg04334.html
10.  http://suo.ieee.org/ontology/msg04347.html
11.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

VIT.  Detached Ideas On Virally Important Topics

01.  http://suo.ieee.org/ontology/msg01716.html
02.  http://suo.ieee.org/ontology/msg01722.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o