User:Jon Awbrey/Inquiry Into Irreducibility

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

ISW -- Inquiry Into Irreducibility 2017

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 1

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

HP = Howard Pattee
JA = Jon Awbrey

JA: I append some material on "sign relations"
    and the "irreducibility of triadics" (IOT).

JA: Here is an old note I've been looking for since we started on this bit about isms,
    as I feel like I managed to express in it somewhere my point of view that the key
    to integrating variant perspectives is to treat their contrasting values as axes
    or dimensions rather than so many points on a line to be selected among, each in
    exclusion of all the others.  To express it briefly, it is the difference between
    k-tomic decisions among terminal values and k-adic dimensions of extended variation.

JA: But I think that it is safe to say, for whatever else
    it might be good for, tomic thinking is of limited use in
    trying to understand Peirce's thought.

HP: The way I understood Peirce's -adic thinking depended on
    irreducibility.  This would distinguish them from, say, the
    three binary relations that make up the sides of a triangle,
    or a linear operator on three (or n) elements.  I also assumed
    that this was a conceptual irreducibility or even an ontological
    irreducibility.  Using normal language (since I can't follow Peirce's
    many variations), I would call "sign/interpreter/referent" such an
    irreducible triadic relation, since it is easy to see that no single
    member or pair of the three makes any sense without all three.

HP: Am I too far off base here?  I am not at all sure I understand what else
    Peirce includes in "irreducible".  Could you find some examples or quotes
    that would explain his concept of irreducible?

JA: OK, YAFI (you asked for it).  As it happens, this is precisely what I just used up
    one of the better years of my life trying to explain in the SUO discussion group,
    and so I have a whole lot of material on this, most of it hardly scathed by any
    dint of popular assimilation or external use.

JA: I see a couple of separate questions in what you are asking:

JA: 1.  What is the qualitative character of the 3-adic sign relation?  In particular,
        is it better to comprehend it in the form <object, sign, interpretive agent>,
        or is it best to understand it in the form <object, sign, interpretant sign>?

JA: 2.  What is reducible to what in what way, and what not?

JA: The answer to the first question is writ
    in what we who speak in Peircean tongues
    dub the "Parable of the Sop to Cerberus".
    Peirce would often start out explaining
    his idea of the sign relation, for the
    sake of a gentle exposition, in terms
    of Object, Sign, and Interpreter, and
    then follow with a partial retraction
    of the Agent to the Interpretant Sign
    that occupies the alleged agent's mind.
    Here is the locus classicus for this bit:

JA: There is a critical passage where Peirce explains the relationship
    between his popular illustrations and his technical theory of signs.

CSP: | It is clearly indispensable to start with an accurate
     | and broad analysis of the nature of a Sign.  I define
     | a Sign as anything which is so determined by something
     | else, called its Object, and so determines an effect
     | upon a person, which effect I call its Interpretant,
     | that the latter is thereby mediately determined by
     | the former.  My insertion of "upon a person" is
     | a sop to Cerberus, because I despair of making
     | my own broader conception understood.
     |
     | CSP, 'Selected Writings', page 404.
     |
     | Charles Sanders Peirce, "Letters to Lady Welby", Chapter 24, pages 380-432,
     | in 'Charles S. Peirce: Selected Writings (Values in a Universe of Chance)',
     | Edited with Introduction and Notes by Philip P. Wiener, Dover Publications,
     | New York, NY, 1966.

JA: http://suo.ieee.org/ontology/msg02683.html

JA: Peirce's truer technical conception can be garnered
    from another legendary bit of narrative exposition,
    the story of the "French Interpretant's Memory":

JA: Here is a passage from Peirce that is decisive in clearing up
    the relationship between the interpreter and the interpretant,
    and, not by coincidence, has some bearing on the placement of
    concepts as symbols, as their principal aspects are refracted
    across the spectrum of sign modalities.

CSP: | I think we need to reflect upon the circumstance that every word
     | implies some proposition or, what is the same thing, every word,
     | concept, symbol has an equivalent term -- or one which has become
     | identified with it, -- in short, has an 'interpretant'.
     |
     | Consider, what a word or symbol is;  it is a sort
     | of representation.  Now a representation is something
     | which stands for something.  ...  A thing cannot stand for
     | something without standing 'to' something 'for' that something.
     | Now, what is this that a word stands 'to'?  Is it a person?
     |
     | We usually say that the word 'homme' stands to a Frenchman for 'man'.
     | It would be a little more precise to say that it stands 'to' the
     | Frenchman's mind -- to his memory.  It is still more accurate
     | to say that it addresses a particular remembrance or image
     | in that memory.  And what 'image', what remembrance?
     | Plainly, the one which is the mental equivalent of
     | the word 'homme' -- in short, its interpretant.
     | Whatever a word addresses then or 'stands to',
     | is its interpretant or identified symbol.  ...
     |
     | The interpretant of a term, then, and that which it stands to
     | are identical.  Hence, since it is of the very essence of a symbol
     | that it should stand 'to' something, every symbol -- every word and
     | every 'conception' -- must have an interpretant -- or what is the
     | same thing, must have information or implication.  (CE 1, 466-467).
     |
     | Charles Sanders Peirce, 'Chronological Edition', Volume 1, pages 466-467.

JA: http://suo.ieee.org/email/msg01111.html
JA: http://suo.ieee.org/email/msg01112.html
JA: http://suo.ieee.org/email/msg01113.html

JA: As it happens, this is exactly the sort of conception of semiosis
    that I need in my own work for building bridges between the typical
    brands of tokens that are commonly treated in abstract semiotics and
    the kinds of states in the configuration spaces of dynamic systems
    that are the actual carriers of these signals.  Which explains
    why I discuss this passage toward the end of the introduction
    to my dissertation and make critical use of it throughout.

HP: I would say the triad "DNA (sign) / code or cell (interpreter) / protein (referent)"
    is the primeval case.  This apparently ontological irreducibility is one reason
    the origin of life is so mysterious, but that is another problem.

JA: I am not sure about this, since I do not know for certain what the object of life is.
    It would be just as easy to say that the protein is yet another interpretant sign in
    a process whose main object is simply to continue itself in the form to which it
    would like to become accustomed.  The only way I know to decide would be to check
    my favorite definition, but there is always a bit of play in the way that it can
    be made to fit any particular concrete process.

HP: There are many other types of more or less epistemological irreducible triads or n-adics,
    popularly known as non-linear systems.  The classical physics case is the three-body problem
    (three masses accelerated by Newton's 2nd law and attracting each other by Newton's law of
    gravitation).  By "more or less epistemological" I just mean that it is unsolvable by any
    closed exact integration, but we can still compute approximate orbits by numerical methods.
    Still, it is easy to see the irreducibility is built into the laws.  However, to a physicist,
    calling this a sign/interpreter/referent relation would be entirely gratuitous ("What can be
    done with fewer assumptions is done in vain with more." -- Ockham). 

HP: What intrigues me as a hierarchy theorist is that the irreducible "sign/interpreter/referent"
    triad at the cognitive level requires an interpreting brain that is some kind of irreducible
    n-adic network (where n >>3).  The brain is initially constructed from cells organized largely
    by the genes.  At that lower level, the "DNA/code/protein irreducibility" works only because
    "coding" itself requires an irreducible triad: "messengerRNA/ribosomes/polypeptides."  At a
    lower level still, all this depends on enzymes which are defined by, and only function as,
    an irreducible triad:  "substrate/enzyme/product".  Furthermore, the function of the enzyme
    depends on its folding into the right shape which is an irreducible n-body problem.  So, it's
    irreducible n-adics all the way down.

JA: Before we get into this I need to make clear that I will be talking about the
    composability and ducibility properties of relations in a purely mathematical
    sense, and if I say "formal", "logical", or "structural" I will be projecting
    what is pretty much a shade off the same tree.  So I will say that a relation
    is "decomposable" or "irreducible" much in the same way that I would say that
    an integer is "prime" or, better still, that a mathematical group is "simple".
    Therefore, whatever we say about the composition, constitution, factorization,
    and structure of a relation will be wholly determined by the definitions that
    we pose for the notions of "composition" and "reduction" that we have in mind.
    Within these bounds it is possible to achieve a limiting degree of certainty
    about what is claimed within.

JA: When it comes to the questions of conceptual, epistemological, ontological,
    or physical structure I am far less certain.  Each of these, I am guessing,
    involves the question of how well a mathematical object captures the being
    of some object outside it, and there is usually a lot of "play in the will"
    when it comes to that.  I do not expect to know much about the ontology of
    anything, in the sense of "what it is", at my rate, not in this brief life,
    so I am pretty much fated to discussing the ways that a thing is described,
    which does include, in partial kinds of compensation, the ways that things
    are circumscribed, encompassed, and surrounded by the gifts of the senses.

JA: As far as episteme goes, I do not know much about it, except that knowledge,
    if we ever get it, can only arrive through the imports of inquiry, broadly
    conceived, and only be validated through the customs of inquiry, narrowly
    conceived, in the stamp of critical, reflective, transmissible inquiry.

JA: If by "conceptual" one means the comprehension of the intensions that are
    implied in our concept or our grasp of an object, then I would be able to
    say that an object relation can be addressed in extension or in intension,
    either way, and that both ways are most likely ultimately necessary for a
    practical grasp of the abstract object in question.  This notwithstanding,
    I am for the moment working from an extensional point of view on relations,
    for the simple reason that much less work has been done on sign relations
    from the extensional angle so far.

JA: In this view, a relation is a set of tuples.  Notice that I do not yet say "k-tuples".
    This is because it is useful to expand the concept of a relation to include all of
    what are more usually called "formal languages", to wit, subsets of A*, where A
    is any finite set, called the "alphabet" ("lexicon", "vocabulary") of the
    formal language L c A*.  Starting from this level of generality, some of
    the first few things that I need to know about any relation are:

    1.  Does it have a definite "arity"?
    2.  If so, is the arity finite?
    3.  If so, what is it?

JA: Let us put to one side the need to worry about aritiless (inaritied?) relations
    and focus our discussion on k-adic relations, those that have a finite arity k.

JA: In particular, we are especially interested in sign relations.
    A "sign relation" L is a subset of a cartesian product OxSxI,
    where the sets O, S, I are the "object domain", "sign domain",
    and "interpretant sign domain", respectively, of L c OxSxI.

JA: It is frequently useful to approach the concept
    of a full-fledged sign relation in three phases:

JA: 1.  A "sign relation" simpliciter, L c OxSxI, could be just about any
        3-adic relation on the arbitrary domains O, S, I, so long as it
        satisfies one of the adequate definitions of a sign relation.

JA: 2.  A "sign process" is a sign relation plus a significant sense of transition.
        This means that there is a definite, non-trivial sense in which a sign
        determines its interpretant signs in time with respect to its objects.
        We often find ourselves writing "<o, s, i>" as "<o, s, s'>" in such cases,
        where the semiotic transition s ~> s' takes place in respect of the object o.
        If this sounds a tad like an "automaton with input" (AWI), it's not accidental.

JA: 3.  An "inquiry process" is a sign process that has value-directed transitions.
        This means that there is a property, a quality, or a scalar value that can
        be associated with a sign in relation to its objects, and that the transit
        from a sign to an interpretant in regard to an object occurs in such a way
        that the value is increased in the process.  For example, semiotic actions
        like inquiry and computation are directed in such a way as to increase the
        qualities of alacrity, brevity, or clarity of the signs on which they work.

JA: All in all, sign relations are not limited to purely linguistic types of systems.
    They encompass the data of the senses, natural signs, and plastic representation,
    just to name some randomly-chosen species of this very widely disseminated genus.

JA: I feel a coffee craving coming on,
    and so I will interpret that
    as the signal for a break.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 2

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Howard,

Here is an expurgated redaction of the sutra -- more like a mantra --
on "Reductions Among Relations" (RAR) with which I bemused the SUO
Warping Group last Fall and again this Spring.  I reprised all the
main issues in the last two notes of the series, and so it will be
enough to recapitulate merely the upshot couplet here.  Here's One:

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Subj:  Reductions Among Relations
Date:  Sat, 14 Apr 2001 19:48:55 -0400
From:  Jon Awbrey <jawbrey@oakland.edu>
  To:  Stand Up Ontology <standard-upper-ontology@ieee.org>
  CC:  Matthew West <Matthew.R.West@is.shell.com>

One of the things that makes the general problem of RAR seem
just a little bit ill-defined is that you would have to survey
all of the conceivable ways of "getting new relations from old"
just to say what you mean by "L is reducible to {L<j> : j in J}",
in other words, that if you had a set of "simpler" relations L<j>,
for indices j in some set J, that this data would somehow fix
the original relation L that you are seeking to analyze,
to determine, to specify, to synthesize, or whatever.

In my experience, however, most people will eventually settle on
either one of two different notions of reducibility as capturing
what they have in mind, namely:

1.  Compositive Reduction
2.  Projective Reduction

As it happens, there is an interesting relationship between these
two notions of reducibility, the implications of which I am still
in the middle of studying, so I will try to treat them in tandem.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

I will start out with the notion of "projective reduction"
of relations, in part because it is easier and much more
intuitive (in the visual sense), but also because there
are a number of tools that we need for the other brand
of reduction that arise quite naturally as a part of
the projective setting.

Before we get into the operational machinery and the
vocational vocabulary of it all, let me just suggest
that you keep in mind the following sort of picture,
which pretty much says it all, in that reducing the
"motley of the ten thousand terms" (MOTTTT) sort of
style that the aptest genres and the fittest motifs
of representations are generally found to exhibit.

Picture a k-adic relation L as a body
that resides in k-dimensional space X.
If the dimensions are X<1>, ..., X<k>,
then the "extension" of L, an object
that I will, for the whole time that
I am working this "extensional" vein,
regard as tantamount to the relation
itself, is a subset of the cartesian
product space X = X<1> x ... x X<k>.

If you pick out your favorite family F of domains
among these dimensions, say, X<F> = {X<j> : j in F},
then the "projection" of a point x in X on the subspace
that gets "generated" along these dimensions of X<F> can
be denoted by the notation "Proj<F><x>".

By extension, the projection of any relation L on that subspace
is denoted by "Proj<F><L>", or still more simply, by "Proj<F>L".
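
A concrete rendering may help here.  The following sketch in Python
(an illustration only; the function name and the 0-based indexing are
not the text's conventions) computes Proj<F> for a relation given as
a set of tuples and any chosen family F of coordinates.

  def proj(L, F):
      """Proj<F>L:  project each tuple of L onto the coordinates in F
      (0-based indices here, where the text counts dimensions from 1)."""
      return {tuple(x[j] for j in F) for x in L}

  # Example:  a 3-adic relation on {0, 1}^3, projected on its first two coordinates.
  L = {(0, 0, 0), (0, 1, 1), (1, 0, 1)}
  print(proj(L, (0, 1)))                # {(0, 0), (0, 1), (1, 0)}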

The question of "projective reduction" for k-adic relations
can be stated with moderate generality in the following way:

| Given a set of k-place relations in the same space X and
| a set of projections from X to the associated subspaces,
| do the projections afford sufficient data to tell the
| different relations apart?

Next time, in order to make this discussion more concrete,
I will focus on some examples of triadic relations.  In fact,
to start within the bounds of no doubt familiar examples by now,
I will begin with the examples of sign relations that I used before.

http://suo.ieee.org/email/msg00729.html
http://suo.ieee.org/email/msg01224.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Projective Reduction of Triadic Relations

We are ready to take up the question of whether
3-adic relations, in general, and in particular
cases, are "determined by", "reducible to", or
"reconstructible from" their 2-adic projections.

Suppose that L contained in XxYxZ is an arbitrary 3-adic relation,
and consider the three 2-adic relations that are gotten by taking
its projections, its "shadows", if you will, on each of the three
planes XY, XZ, YZ.  Using the notation that I introduced before,
and compressing it just a bit or two in passing, one can write
these projections in each of the following ways, depending on
which turns out to be most convenient in a given context:

1.  Proj<{X,Y}><L> = Proj<{1,2}><L> = Proj<XY>L = Proj<12>L = L<XY> = L<12>.
2.  Proj<{X,Z}><L> = Proj<{1,3}><L> = Proj<XZ>L = Proj<13>L = L<XZ> = L<13>.
3.  Proj<{Y,Z}><L> = Proj<{2,3}><L> = Proj<YZ>L = Proj<23>L = L<YZ> = L<23>.

If you picture the relation L as a body in the 3-space XYZ, then
the issue of whether L is "reducible to" or "reconstructible from"
its 2-adic projections is just the question of whether these three
projections, "shadows", or "2-faces" determine the body L uniquely.

Stating the matter the other way around, L is "not reducible to"
or "not reconstructible from" its 2-dim projections if & only if
there are two distinct relations L and L' which have exactly the
same projections on exactly the same planes.

The next series of Tables illustrates the projection operations
by means of their actions on the sign relations L(A) and L(B)
that I introduced earlier on, in the "Sign Relations" thread.
Recall that we had the following set-up:

| L(A) and L(B) are "contained in" or "subsets of" OxSxI:
|
| O  =  {A, B},
|
| S  =  {"A", "B", "i", "u"},
|
| I  =  {"A", "B", "i", "u"}.

| L(A) has the following eight triples
| of the form <o, s, i> in OxSxI:
|
|    <A, "A", "A">
|    <A, "A", "i">
|    <A, "i", "A">
|    <A, "i", "i">
|    <B, "B", "B">
|    <B, "B", "u">
|    <B, "u", "B">
|    <B, "u", "u">

| L(B) has the following eight triples
| of the form <o, s, i> in OxSxI:
|
|    <A, "A", "A">
|    <A, "A", "u">
|    <A, "u", "A">
|    <A, "u", "u">
|    <B, "B", "B">
|    <B, "B", "i">
|    <B, "i", "B">
|    <B, "i", "i">

Taking the 2-adic projections of L(A)
we obtain the following set of data:

| L(A)<OS> has these four pairs
| of the form <o, s> in OxS:
| 
|    <A, "A">
|    <A, "i">
|    <B, "B">
|    <B, "u">

| L(A)<OI> has these four pairs
| of the form <o, i> in OxI:
|
|    <A, "A">
|    <A, "i">
|    <B, "B">
|    <B, "u">

| L(A)<SI> has these eight pairs
| of the form <s, i> in SxI:
|
|    <"A", "A">
|    <"A", "i">
|    <"i", "A">
|    <"i", "i">
|    <"B", "B">
|    <"B", "u">
|    <"u", "B">
|    <"u", "u">

Taking the dyadic projections of L(B)
we obtain the following set of data:

| L(B)<OS> has these four pairs
| of the form <o, s> in OxS:
| 
|    <A, "A">
|    <A, "u">
|    <B, "B">
|    <B, "i">

| L(B)<OI> has these four pairs
| of the form <o, i> in OxI:
|
|    <A, "A">
|    <A, "u">
|    <B, "B">
|    <B, "i">

| L(B)<SI> has these eight pairs
| of the form <s, i> in SxI:
|
|    <"A", "A">
|    <"A", "u">
|    <"u", "A">
|    <"u", "u">
|    <"B", "B">
|    <"B", "i">
|    <"i", "B">
|    <"i", "i">
 
A comparison of the corresponding projections for L(A) and L(B)
reveals that the distinction between these two 3-adic relations
is preserved under the operation that takes the full collection
of 2-adic projections into consideration, and this circumstance
allows one to say that this much information, that is, enough to
tell L(A) and L(B) apart, can be derived from their 2-adic faces.
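
The comparison is easy to mechanize.  Here is a short check in Python
(an illustration only; the quote marks around sign tokens are dropped
in the transcription) which computes the three dyadic projections of
L(A) and L(B) and confirms that they differ on every plane, which is
more than enough to tell the two relations apart.

  LA = {('A','A','A'), ('A','A','i'), ('A','i','A'), ('A','i','i'),
        ('B','B','B'), ('B','B','u'), ('B','u','B'), ('B','u','u')}

  LB = {('A','A','A'), ('A','A','u'), ('A','u','A'), ('A','u','u'),
        ('B','B','B'), ('B','B','i'), ('B','i','B'), ('B','i','i')}

  def proj(L, F):
      """Proj<F>L for a relation given as a set of tuples."""
      return {tuple(t[j] for j in F) for t in L}

  for name, F in (("OS", (0, 1)), ("OI", (0, 2)), ("SI", (1, 2))):
      print(name, proj(LA, F) == proj(LB, F))

  # Prints False three times:  L(A) and L(B) differ on each dyadic face,
  # so the full collection of projections certainly tells them apart.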

However, in order to say that a 3-adic relation L on OxSxI
is "reducible" or "reconstructible" in the 2-dim projective
sense, it is necessary to show that no distinct L' on OxSxI
exists such that L and L' have the same set of projections,
and this can take a rather more exhaustive or comprehensive
investigation of the space of possible relations on OxSxI.

As it happens, each of the relations L(A) and L(B) turns
out to be uniquely determined by its 2-dim projections.
This can be seen as follows.  Consider any coordinate
position <s, i> in the plane SxI.  If <s, i> is not
in L<SI> then there can be no element <o, s, i> in L,
therefore we may restrict our attention to positions
<s, i> in L<SI>, knowing that there exist at least
|L<SI>| = Cardinality of L<SI> = 8 elements in L,
and seeking only to determine what objects o exist
such that <o, s, i> is an element in the objective
"fiber" of <s, i>.  In other words, for what o in O
is <o, s, i> in ((Proj<SI>)^(-1))(<s, i>)?  Now, the
circumstance that L<OS> has exactly one element <o, s>
for each coordinate s in S and that L<OI> has exactly
one element <o, i> for each coordinate i in I, plus
the "coincidence" of it being the same o at any one
choice for <s, i>, tells us that L has just the one
element <o, s, i> over each point of L<SI>.  Together,
this proves that both L(A) and L(B) are reducible in
an informative sense to 3-tuples of 2-adic relations,
that is, they are "projectively 2-adically reducible".
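
The uniqueness claim can also be put to a brute-force test.  The sketch
below (Python again, purely illustrative) relies on the fact that any
relation with the given projections must lie inside the "join" of those
projections, so it is enough to search the subsets of that join.  For
L(A) the search turns up L(A) alone, and substituting L(B) for L(A)
gives the same verdict.

  from itertools import chain, combinations

  LA = {('A','A','A'), ('A','A','i'), ('A','i','A'), ('A','i','i'),
        ('B','B','B'), ('B','B','u'), ('B','u','B'), ('B','u','u')}

  def proj(L, F):
      return {tuple(t[j] for j in F) for t in L}

  def faces(L):
      """The triple of dyadic projections (OS, OI, SI) of a triadic relation."""
      return tuple(frozenset(proj(L, F)) for F in ((0, 1), (0, 2), (1, 2)))

  OS, OI, SI = proj(LA, (0, 1)), proj(LA, (0, 2)), proj(LA, (1, 2))

  # The join:  all triples consistent with the three projections.  Any relation
  # having these projections is necessarily a subset of this join.
  join = {(o, s, i) for (o, s) in OS for (p, i) in OI if p == o if (s, i) in SI}

  candidates = chain.from_iterable(
      combinations(sorted(join), r) for r in range(len(join) + 1))
  matches = [set(c) for c in candidates if faces(set(c)) == faces(LA)]

  print(len(matches) == 1 and matches[0] == LA)    # True:  L(A) is the only one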

Next time I will give examples of 3-adic relations
that are not reducible to or reconstructible from
their 2-adic projections.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Last time I gave two cases of 3-adic (triadic) relations
with projective reductions to 2-adic (dyadic) relations,
by name, "triadics reducible over projections" (TROP's),
"triadics reconstructible out of projections" (TROOP's).

Still, one needs to be very careful and hedgey about saying,
even in the presence of such cases, that "all is dyadicity".
I will make some attempt to explain why in the next episode,
and then I will take up examples of 3-adics that happen to
be irreducible in this sense, in effect, that are not able
to be recovered uniquely from their 2-adic projection data.
Call them "triadics irreducible over projections" (TRIOP's).

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Projective Reduction of Triadic Relations:  The Catch

This scene was initially interjected as a comic relief,
and so I will spare you the drollery of its repetition.
The entire pith and substance of it boils down to this
rather trivial observation, but one whose significance,
such as it may be, gets overlooked with serious impunity,
and that is the fact that, even though we have now
found in the sign relations L(A) and L(B) two examples
of 3-adic relations which are reconstructible from and
in this sense reducible to the 2-dimensional data sets
of their 2-adic projections, we entertain an insidious
self-deception if we think that we have given the slip
to 3-adic relations altogether.  I feel sometimes like
I am straining to call the attentions of fish to water
by blubbering and glubbering "HOH, HOH, HOH" until the
sea shall free me, I guess, whenever I try to say this,
but I guess that I might as well just go ahead and try,
one more time, while still I have the O^2, and so here
I go, again, flailing at the logical immersion of this
entire process of "analysis" (= Greek for washing back)
in hope of rendering its nigh unto invisible influence
slightly more opaque to the light of a higher analysis.
All of which is just my attempt to remind you in a way
that you will not so soon forget that the very conduct
of the whole analytic reduction step is an association
among at least three relations, the analysand plus the
two or more analytic components, and there it goes yet
again, this ineluctable triadicity.

To bring the chase up short, the very idea of reduction,
when it means reducing one thing to more than one other
thing, is itself a relation of post-dyadic valence, and
thus parthenogenetically reproducing itself from itself,
evokes the polycloven hydra's hoof of Persean 3-adicity.

Okay, I just thought that you might welcome an intermezzo,
however scherzo-frenetic it might be, but now it's back
to the opera that the phantom of the operators wrote.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Projective Reduction of Triadic Relations

In the story of A and B, it appears to be the case
that the triadic relations L(A) and L(B) are
distinguished from each other, and what's more,
distinguished from all of the other relations
in the garden of OSI, for the same O, S, I.

At least, so says I and my purported proof.
I am so suspicious of this result myself that
I will probably not really believe it for a while,
until I have revisited the problem and the "proof"
a few times, to see if I can punch any holes in it.

But let it pass for proven for now,
and let my feeble faith go for now.

For the sake of a more balanced account,
it's time to see if we can dig up any cases
of "projectively irreducible triadics" (PIT's).
Any such PIT relation, should we ever fall into one,
is bound to occasion another, since it is a porismatic
part of the definition that a triadic relation L is a PIT
if and only if there exists a distinct triadic relation L'
such that the dyadic faces of L and L' are indiscernible.
In this event, both L and L' fall into the de-genre
of PIT's together.

| Note In Passing:
|
| PIT's are the same thing as TRIOP's, but I left in
| this earlier usage for personal historical reasons,
| and just in case I someday decide to go back to it.

Well, PIT's are not far to find, once you think to look for them --
indeed, the landscape of "formal or mathematical existence" (FOME)
is both figuratively and literally rife with them!

What follows is the account of a couple,
that I will dub "L^0" and "L^1".

But first, even though the question of projective reduction
has to do with 3-adic relations as a general class, and is
thus independent of their potential use as sign relations,
it behooves us to consider the bearing of these reduction
properties on the topics of interest to us for the sake
of communication and representation via sign relations.

| Note On Notation:
|
| Let any of the locutions, L c XxYxZ, L on XxYxZ, L sub XxYxZ,
| substitute for the peculiar style of "in-line" or "in-passing"
| reference to subsethood that has become idiomatic in mathematics,
| and that would otherwise use the symbol that has been customary
| since the time of Peano to denote "contained in" or "subset of".

Most likely, any triadic relation L on XxYxZ that is imposed on
the arbitrary domains X, Y, Z could find use as a sign relation,
provided that it embodies any constraint at all, in other words,
so long as it forms a proper subset L of the entire space XxYxZ.
But these sorts of uses of triadic relations are not guaranteed
to capture or constitute any natural examples of sign relations.

In order to show what a projectively irreducible 3-adic relation
looks like, I now present a pair of 3-adic relations that have the
same 2-adic projections, and thus cannot be distinguished from each
other on the basis of this data alone.  As it happens, these examples
of triadic relations can be discussed independently of sign relational
concerns, but structures of their basic ilk are frequently found arising
in signal-theoretic applications, and they are no doubt keenly associated
with questions of redundant coding and therefore of reliable communication.

Projectively Irreducible Triadic Relations, or
Triadic Relations Irreducible Over Projections:

Consider the triadic relations L^0 and L^1
that are specified in the following set-up:

| B = {0, 1}, with the "+" signifying addition mod 2,
| analogous to the "exclusive-or" operation in logic.
|
| B^k = {<x<1>, ..., x<k>> : x<j> in B for j = 1 to k}.

In what follows, the space XxYxZ is isomorphic to BxBxB = B^3.
For lack of a good isomorphism symbol, I will often resort to
writing things like "XxYxZ iso BxBxB" or even "XxYxZ = B^3".

| Relation L^0
|
| L^0 = {<x, y, z> in B^3 : x + y + z = 0}.
|
| L^0 has the following four triples
| of the form <x, y, z> in B^3:
|
|    <0, 0, 0>
|    <0, 1, 1>
|    <1, 0, 1>
|    <1, 1, 0>

| Relation L^1
|
| L^1 = {<x, y, z> in B^3 : x + y + z = 1}.
|
| L^1 has the following four triples
| of the form <x, y, z> in B^3:
|
|    <0, 0, 1>
|    <0, 1, 0>
|    <1, 0, 0>
|    <1, 1, 1>

Those are the relations,
here are the projections:

Taking the dyadic projections of L^0
we obtain the following set of data:

| (L^0)<XY> has these four pairs
| of the form <x, y> in XxY:
| 
|    <0, 0>
|    <0, 1>
|    <1, 0>
|    <1, 1>

| (L^0)<XZ> has these four pairs
| of the form <x, z> in XxZ:
| 
|    <0, 0>
|    <0, 1>
|    <1, 1>
|    <1, 0>

| (L^0)<YZ> has these four pairs
| of the form <y, z> in YxZ:
| 
|    <0, 0>
|    <1, 1>
|    <0, 1>
|    <1, 0>

Taking the dyadic projections of L^1
we obtain the following set of data:

| (L^1)<XY> has these four pairs
| of the form <x, y> in XxY:
| 
|    <0, 0>
|    <0, 1>
|    <1, 0>
|    <1, 1>

| (L^1)<XZ> has these four pairs
| of the form <x, z> in XxZ:
| 
|    <0, 1>
|    <0, 0>
|    <1, 0>
|    <1, 1>

| (L^1)<YZ> has these four pairs
| of the form <y, z> in YxZ:
| 
|    <0, 1>
|    <1, 0>
|    <0, 0>
|    <1, 1>

Now, for ease of verifying the data, I have written
these sets of pairs in the order that they fell out
on being projected from the given triadic relations.
But, of course, as sets, their order is irrelevant,
and it is simply a matter of a tedious check to
see that both L^0 and L^1 have exactly the same
projections on each of the corresponding planes.

To summarize:
 
The relations L^0, L^1 sub B^3 are defined by the following equations,
with algebraic operations taking place as in the "Galois Field" GF(2),
that is, with 1 + 1 = 0.

1.  The triple <x, y, z> in B^3 belongs to L^0  iff  x + y + z = 0.
    L^0 is the set of even-parity bit-vectors, with  x + y = z.

2.  The triple <x, y, z> in B^3 belongs to L^1  iff  x + y + z = 1.
    L^1 is the set of odd-parity bit-vectors,  with  x + y = z + 1.

The corresponding projections of L^0 and L^1 are identical.
In fact, all six projections, taken at the level of logical
abstraction, constitute precisely the same dyadic relation,
isomorphic to the whole of BxB and expressible by means of
the universal constant proposition 1 : BxB -> B.  In sum:

1.  (L^0)<XY>  =  (L^1)<XY>  =  1<XY>  =  BxB  =  B^2,
2.  (L^0)<XZ>  =  (L^1)<XZ>  =  1<XZ>  =  BxB  =  B^2,
3.  (L^0)<YZ>  =  (L^1)<YZ>  =  1<YZ>  =  BxB  =  B^2.

Therefore, L^0 and L^1 constitute examples of
"projectively irreducible triadic relations",
"triadic relations irreducible on projections".

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

We have pursued the "projective analysis" of 3-adic relations,
tracing the pursuit via a ready quantity of concrete examples,
just far enough to arrive at this clearly demonstrable result:

| Some 3-adic relations are and
| some 3-adic relations are not
| uniquely reconstructible from,
| or informatively reducible to,
| their 2-adic projection data.

We now take up the "compositive analysis" of 3-adic relations,
coining a term to prevent confusion, like there's a chance in
the world of that, but still making use of nothing other than
that "standardly uptaken operation" of relational composition,
the one that constitutes the customary generalization of what
just about every formal, logical, mathematical community that
is known to the writer, anyway, dubs "functional composition".

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Compositive Reduction of Relations

The first order of business under this heading is straightforward enough:
to define what is standardly described as the "composition of relations".

| Notes on the Ancestry, the Application, and
| The Anticipated Broadening of these Concepts:
|
| This is basically the same operation that C.S. Peirce described as
| "relative multiplication", except for the technical distinction that
| he worked primarily with so-called "relative terms", like "lover of",
| "sign of the object ~~~ to", and "warrantor of the fact that ~~~ to",
| rather than with the kinds of extensional and intensional relations
| to which the majority of us are probably more accustomed.
|
| It is with regard to this special notion of "composition", and it alone,
| that I plan to discuss the inverse notion of "decomposition".  I try to
| respect other people's "reserved words" as much as I can, even if I can
| only go so far as to take them at their words and their own definitions
| of them in forming my interpretation of what they are apparently saying.
| Therefore, if I want to speak about other ways of building up relations
| from other relations and different ways of breaking down relations into
| other relations, then I will try to think up other names for these ways,
| or revert to a generic usage of terms like "analysis" and "combination".
|
| When a generalized definition of "relational composition" has been given,
| and its specialization to 2-adic relations is duly noted, then one will
| be able to notice that it is simply an aspect of this definition that
| the composition of two 2-adic relations yields nothing more than yet
| another 2-adic relation.  This will, I hope, in more than one sense
| of the word, bring "closure" to this issue, of what can be reduced
| to compositions of 2-adic relations, to wit, just 2-adic relations.

A notion of relational composition is to be defined that generalizes the
usual notion of functional composition.  The "composition of functions"
is that by which -- composing functions "on the right", as they say --
f : X -> Y and g : Y -> Z yield the "composite function" fg : X -> Z.

Accordingly, the "composition" of dyadic relations is that by which --
composing them here by convention in the same left to right fashion --
P sub XxY and Q sub YxZ yield the "composite relation" PQ sub XxZ.
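
A minimal sketch of this operation, in Python and in the same left to
right order, treating each dyadic relation extensionally as a set of
pairs (the examples are purely illustrative):

  def compose(P, Q):
      """The composite PQ of dyadic relations P c XxY and Q c YxZ:
      x is related to z whenever some y links them through P and then Q."""
      return {(x, z) for (x, y1) in P for (y2, z) in Q if y1 == y2}

  # When P and Q are the graphs of functions f and g, PQ is the graph of fg.
  P = {(1, 'a'), (2, 'b')}
  Q = {('a', 'A'), ('b', 'B')}
  print(compose(P, Q))          # {(1, 'A'), (2, 'B')}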

There is a neat way of defining relational composition, one that
not only manifests its relationship to the projection operations
that go with any cartesian product space, but also suggests some
natural directions for generalizing relational composition beyond
the limits of the 2-adic case, and even beyond relations that have
any fixed arity, that is, to the general case of formal languages.
I often call this definition Tarski's Trick, though it probably
goes back further than that.  This is what I will take up next.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

There are one or two confusions that demand to
be cleared up before I can proceed any further.

We had been enjoying our long-anticipated breakthrough on the
allegedly "easy case" of projective reduction, having detected
hidden within that story of our old friends and usual suspects
A and B two examples of 3-adic relations, L(A) and L(B), that
are indeed amenable, not only to being distinguished, one from
the other, between the two of them, but also to being uniquely
determined amongst all of their kin by the information that is
contained in their 2-dimensional projections.  So far, so good.
Had I been thinking fast enough, I would have assigned these the
nomen "triadics reducible in projections over dyadics" (TRIPOD's).
Other good names:  "triadics reducible over projections" (TROP's),
or perhaps "triadics reconstructible out of projections" (TROOP's).

Then we found two examples of triadic relations, L^0 and L^1,
that I described as "projectively irreducible triadics" (PIT's),
because they collapse into an indistinct mass of non-descript
flatness on having their dyadic pictures taken.  That acronym
does not always work for me, so I will give them the alias of
"triadics irreducible by projections over dyadics" (TIBPOD's),
or perhaps "triadics irreducible over projections" (TIOP's).

| The Author Addresses The Exasperants Of His Foibles:
|
| Please do not concern yourselves too much, or be too irritated by
| my extravagant struggles to hash out memorable hash codes for the
| produce of the formal farm and the offerings of the logical scullery.
| I will eventually sort it all out and settle on some few that fit.

I am not accustomed to putting much stock in my own proofs
until I can reflect on them for a suitable period of time
or until a few other people have been able to go over them,
but until that happens I will just have to go with these
results as I presently see them.

In reply to my notes on these topics, Matthew West
has contributed the following pair of commentaries:

1.  Regarding L(A) and L(B)

| Whilst I appreciate the academic support for showing
| that any triadic relation can be represented by some
| number of dyadic relations, the real point is to use
| this fact to seek for an improved analysis based on
| more fundamental concepts.  It is not the objective
| to do something mechanical.

2.  Regarding L^0 and L^1

| I don't think you have shown very much except that reducing
| triadic relations to dyadic relations using the mechanical
| process you have defined can lose information.  I am not
| surprised by this.  My experience of doing this with real,
| rather than abstract examples, is that there are often
| extra things to do.

So I need to clarify that what I think that I showed was
that "some" triadic relations are "reducible" in a given
informational sense to the information that is contained
in their dyadic projections, e.g., as L(A) and L(B) were,
but that others are not reducible in this particular way,
e.g., as L^0 and L^1 were not.

Now, aside from this, I think that Matthew is raising
a very important issue here, which I personally grasp
in terms of two different ways of losing information,
namely:

1.  The information that we lose in forming a trial model,
    in effect, in going from the unformalized "real world"
    over to the formal context or the syntactic medium in
    which models are constrained to live out their lives.

2.  The information that we lose in "turning the crank"
    on the model, that is, in drawing inferences from
    the admittedly reductive and "off'n'wrong" model
    in a way that loses even the initial information
    that it captured about the real-world situation.

To do it justice, though, I will need to return
to this issue in a less frazzled frame of mind.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

This will complete the revision of this RARified thread from last Autumn.
I will wind it up, as far as this part of it goes, by recapitulating the
development of the "Rise" relation, from a couple of days ago, this time
working through its analysis and its synthesis as fully as I know how at
the present state of my knowledge.  The good of this exercise, of course,
the reason for doing all of this necessary work, is not because the Rise
relation is so terribly interesting in itself, but rather to demonstrate
the utility of the functional framework and its sundry attached tools in
their application to a nigh unto minimal and thus least obstructive case.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

The good of this whole discussion, the use of it all,
the thing about it that makes it worth going through,
at least, for me, is not just to settle doubts about
the "banal", "common", or figuratively and literally
"trivial" (Latin for locus where "three roads" meet)
type of issue that may have appeared to be its point,
but, as I said in my recent reprise of justification,
to examine and explore "the extent to which it is possible to
construct relations between complex relations and simpler
relations.  The aim here, once we get past questions of
what is reducible in what way and what not in no way,
is to develop concrete and fairly general methods
for analyzing the structures of those relations
that are indeed amenable to a useful analysis --
and here I probably ought to emphasize that
I am talking about the structure of each
relation in itself, at least, to the
extent that it presents itself in
extensional form, and not just
the syntax of this or that
relational expression."

So let me run through this development once more,
this time interlacing its crescendoes with a few
supplemental notes of showcasing or sidelighting,
aimed to render more clearly the aim of the work.

In order to accumulate a stock of ready-mixed concrete instances,
at the same time to supply ourselves with relatively fundamental
materials for building ever more complex and prospectively still
more desirable and elegant structures, maybe, even if it must be
worked out just a little bit gradually, hopefully, incrementally,
and even at times jury-rigged here and there, increasingly still
more useful architectonic forms for our joint contemplation
and our meet edification, let us then set out once more from the
grounds of what we currently have in our command, and advance in
the directions of greater generalities and expanded scopes, with
the understanding that many such journeys are possible, and that
each is bound to open up on open-ended views at its unlidded top.

By way of a lightly diverting overture, let's begin
with an exemplar of a "degenerate triadic relation" --
do you guess that our opera is in an Italian manor? --
a particular version of the "between relation", but
let us make it as simple as we possibly can and not
attempt to analyze even that much of a case in full
or final detail, but leave something for the finale.

Let B = {0, 1}.

Let the relation named "Rise<2>"
such that Rise<2> c B^2 = B x B,
be exactly this set of 2-tuples:

| Rise<2>  =  {<0, 0>,
|              <0, 1>,
|              <1, 1>}

Let the relation named "Rise<3>"
such that Rise<3> c B^3 = BxBxB,
be exactly this set of 3-tuples:

| Rise<3>  =  {<0, 0, 0>,
|              <0, 0, 1>,
|              <0, 1, 1>,
|              <1, 1, 1>}

Then Rise<3> is a "degenerate 3-adic relation"
because it can be expressed as the conjunction
of a couple of 2-adic relations, specifically:

Rise<3><x, y, z>  <=>  [Rise<2><x, y> and Rise<2><y, z>].
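
The equivalence is easy to confirm by enumeration, for instance with
the following check in Python (an illustration, nothing more):

  from itertools import product

  B = (0, 1)
  Rise2 = {(0, 0), (0, 1), (1, 1)}
  Rise3 = {(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)}

  conjunction = {(x, y, z) for x, y, z in product(B, repeat=3)
                 if (x, y) in Rise2 and (y, z) in Rise2}

  print(conjunction == Rise3)   # True:  Rise<3> is a conjunction of Rise<2>'s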

But wait just a minute!  You read me clearly to say already --
and I know that you believed me! -- that no 3-adic relation
can be decomposed into any 2-adic relations, so what in the
heck is going on!?  Well, "decomposed" implies the converse
of "composition", which has to mean "relational composition"
in the present context, and this composition is a different
operation entirely from the "conjunction" that was employed
above, to express Rise<3> as a conjunction of two Rise<2>'s.

That much we have seen and done before, but in the spirit of
that old saw that "what goes up must come down" we recognize
that there must be a supplementary relation in the scheme of
things that is equally worthy of our note, and so let us dub
this diminuendo the "Fall" relation, and set to define it so:

Let the relation named "Fall<2>"
such that Fall<2> c B^2 = B x B,
be exactly this set of 2-tuples:

| Fall<2>  =  {<0, 0>,
|              <1, 0>,
|              <1, 1>}

Let the relation named "Fall<3>"
such that Fall<3> c B^3 = BxBxB,
be exactly this set of 3-tuples:

| Fall<3>  =  {<0, 0, 0>,
|              <1, 0, 0>,
|              <1, 1, 0>,
|              <1, 1, 1>}

And on those notes I must rest ...
To be continued ...

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Starting from the simplest notions of Rise and Fall,
I may easily have chosen to leave it as an exercise
for the reader to discover suitable generalizations,
say from Rise<k> and Fall<k> for k of order 2 and 3,
to the slightly more general case in which k is any
natural number, that is, finite, integral, positive.
But that is far too easy a calisthenic, and no kind
of a work-out to offer our band of fearless readers,
and so the writer picks up the gage that he himself 
throws down, and for his health runs the easy track!

Let B = {0, 1}.

Let the relation Rise<k> c B^k
be defined in the following way:

| Rise<k><x<1>, ..., x<k>>
|
| iff
|
| Rise<2><x<1>, x<2>>  and  Rise<k-1><x<2>, ..., x<k>>

Let the relation Fall<k> c B^k
be defined in the following way:

| Fall<k><x<1>, ..., x<k>>
|
| iff
|
| Fall<2><x<1>, x<2>>  and  Fall<k-1><x<2>, ..., x<k>>
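
Rendered as predicates on k-tuples of bits, the recursions read as
follows (an illustrative sketch in Python, bottoming out at the k = 2
case given above):

  RISE2 = {(0, 0), (0, 1), (1, 1)}
  FALL2 = {(0, 0), (1, 0), (1, 1)}

  def rise(t):
      """Rise<k> for a k-tuple t of bits, k >= 2, per the recursion above."""
      return t in RISE2 if len(t) == 2 else t[:2] in RISE2 and rise(t[1:])

  def fall(t):
      """Fall<k> for a k-tuple t of bits, k >= 2, per the recursion above."""
      return t in FALL2 if len(t) == 2 else t[:2] in FALL2 and fall(t[1:])

  print(rise((0, 0, 1, 1)), rise((0, 1, 0)))   # True False
  print(fall((1, 1, 0, 0)), fall((0, 0, 1)))   # True False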

But let me now leave off, for the time being,
from the temptation to go any further in the
direction of increasing k than I ever really
intended to, on beyond 2 or 3 or thereabouts,
for that is not the aim of the present study.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

In this note I revisit the "Between" relation on reals,
and then I rework it as a discrete and finite analogue
of its transcendental self, as a Between relation on B.
Ultimately, I want to use this construction as working
material to illustrate a method of defining relational
compositions in terms of projections.  So let us begin.

Last time I defined Rise and Fall relations on B^k.
Working polymorphously, as some people like to say,
let us go ahead and define the analogous relations
over the real domain R, not even bothering to make
new names, but merely expecting the reader to find
the aptest sense for a given context of discussion.

Let R be the set of real numbers.

Let the relation named "Rise<2>"
such that Rise<2> c R^2 = R x R,
be defined in the following way:

| Rise<2><x, y>
|
| iff
|
| [x = y]  or  [x < y]

Let the relation named "Fall<2>"
such that Fall<2> c R^2 = R x R,
be defined in the following way:

| Fall<2><x, y>
|
| iff
|
| [x > y]  or  [x = y]

There are clearly a number of redundancies
between the definitions of these relations,
but I prefer the symmetry of this approach.

The next pair of definitions will be otiose, too,
if viewed in the light of the comprehensive case
that follows after, but let us go gently for now.

Let the relation named "Rise<3>"
such that Rise<3> c R^3 = RxRxR,
be defined in the following way:

| Rise<3><x, y, z>
|
| iff
|
| Rise<2><x, y> and Rise<2><y, z>

Let the relation named "Fall<3>"
such that Fall<3> c R^3 = RxRxR,
be defined in the following way:

| Fall<3><x, y, z>
|
| iff
|
| Fall<2><x, y> and Fall<2><y, z>

Then Rise<3> and Fall<3> are "degenerate 3-adic relations"
insofar as each of them bears expression as a conjunction
whose conjuncts are expressions of 2-adic relations alone.

Just in order to complete the development
of this thought, let us then finish it so:

Let the relation Rise<k> c R^k
be defined in the following way:

| Rise<k><x<1>, ..., x<k>>
|
| iff
|
| Rise<2><x<1>, x<2>>  and  Rise<k-1><x<2>, ..., x<k>>

Let the relation Fall<k> c R^k
be defined in the following way:

| Fall<k><x<1>, ..., x<k>>
|
| iff
|
| Fall<2><x<1>, x<2>>  and  Fall<k-1><x<2>, ..., x<k>>

If there was a point to writing out this last step,
I think that it may well have been how easy it was
not to write, not literally to "write" at all, but
simply to "cut and paste" the definitions from the
boolean case, and then but to change the parameter
B into the parameter R at a mere one place in each.

Let us then push on, in a retrograde way, returning to the orbit
of those very first relations that got us into the midst of this
quandary in the first place, to wit, the relations in medias res,
the relations of betwixt and between and all of their sundry kin.
But let us this time place the paltry special relations on which
we fixed the first time around back within the setting of a much
broader and a much more systematically examined context, that is,
an extended family of related relations or variations on a theme.

I hope that you will be able to recall that cousin of
the Between relation that we took up here once before,
and that you will be able to recognize its characters,
even if I now disguise it under a new name and partly
dissemble them under a new manner of parameterization.
Where you might take the name "IO" to mean "in order",
it is the relation defined on three real numbers thus:

Let the relation named "IO<213>"
such that IO<213> c R^3 = RxRxR,
be defined in the following way:

| IO<213><x, a, b>  iff  [a < x < b],
|
| equivalently,
|
| IO<213><x, a, b>  iff  [a < x] and [x < b].

Corresponding to the 3-adic relation IO<213> c R^3 = RxRxR,
there is a "proposition", a function io<213> : R^3 -> B,
that I will describe, until a better name comes along,
as the "relation map" that is "dual to" the relation.
It is also known as the "indicator" of that relation.

Consider the boolean analogue or the logical variant of IO,
with real domains : R now replaced by boolean domains : B.

The boolean analogue of the ordering "<" is the implication "=>",
so the logical variant of the relation IO<213> is given this way:

Let the relation named "IO<213>"
such that IO<213> c B^3 = BxBxB,
be defined in the following way:

| IO<213><x, a, b>
|
| iff
|
| [a => x] and [x => b]

When it does not risk any confusion,
one can express this also like this:

| IO<213><x, a, b>
|
| iff
|
| a => x => b

Corresponding to the 3-adic relation IO<213> c B^3 = BxBxB,
there is a "proposition", a function io<213> : B^3 -> B,
that I will describe, until a better name comes along,
as the "relation map" that is "dual to" the relation.
It is also known as the "indicator" of that relation.
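
A quick rendering of this boolean IO<213> and its indicator, in Python
(the helper names are illustrative, not notation from the text):

  from itertools import product

  B = (0, 1)

  def implies(p, q):
      """Material implication on bits:  p => q."""
      return (not p) or bool(q)

  def io_213(x, a, b):
      """The indicator io<213> : B^3 -> B, equal to 1 iff a => x and x => b."""
      return int(implies(a, x) and implies(x, b))

  IO_213 = {(x, a, b) for x, a, b in product(B, repeat=3) if io_213(x, a, b)}
  print(sorted(IO_213))   # [(0, 0, 0), (0, 0, 1), (1, 0, 1), (1, 1, 1)]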

At this point I want to try and get away with a bit
of additional flexibility in the syntax that I use,
reusing some of the same names for what are distinct
but closely related types of mathematical objects.

In particular, I would like to have the license
to speak a bit more loosely about these objects,
to ignore the distinction between "relations" of
the form Q c X<1> x ... x X<k> and "relation maps"
of the form q : X<1> x ... x X<k> -> B, and even
on sundry informal occasions to use the very same
names for them -- The Horror! -- hoping to let the
context determine the appropriate type of object,
except where it may be necessary to maintain this
distinction in order to avoid risking confusion.

In order to keep track of all of the players -- not to mention all of the refs! --
it may help to re-introduce a diagram that I have used many times before, as a
kind of a play-book or programme, to sort out the burgeoning teams of objects
and the cryptic arrays of signs that we need to follow throughout the course
of this rather extended-into-overtime game:

--------------------------------///--------------------------------------
    Objective Framework (OF)            Interpretive Framework (IF)
--------------------------------///--------------------------------------
 Formal or Mathematical Objects     Formal or Mathematical Signs & Texts
--------------------------------///--------------------------------------

          Propositions                          Expressions
           (Logical)                          (Propositional)
               o                                     o
               |                                     |
               |                                     |
               o                                     o
              / \                                   / \
             /   \                                 /   \
            o     o                               o     o
        Sets       Maps                  Set Names       Map Names
  (Geometric)     (Functional)          (Geometric)     (Functional)

--------------------------------///--------------------------------------
   B^k             B^k -> B              "IO<213>"       "io<213>"
   R^k             R^k -> B              "IO<213>"       "io<213>"
   X^k             X^k -> B              "Q"             "q"
--------------------------------///--------------------------------------

Matthew West wrote to Adam Pease:

| I think there are two key issues here.
| 
| 1.  When you use a larger relation as a primitive,
|     you will have more types of relation.  Whereas if
|     you analyse them down to more fundamental relations,
|     you may have more relations, but there will be fewer
|     types of relation.  This is usually an advantage.
|     Indeed we have found that there is reuse at the
|     individual relation level too, which means an
|     effective reduction in repetition (with the
|     opportunity for inconsistency that gives).
|
| 2.  When you analyse to more fundamental concepts
|     you give yourself the opportunity to be able to make
|     inferences that arise from the more fundamental level.
|
| In my view it is OK to have more complex relations provided
| they are rigorously defined in terms of underlying concepts.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

What Matthew says here makes a lot of sense to me.

By way of illustrating the application of these heuristic principles
in the case of Adam Pease's Between relation, we might consider the
following analysis:

Given a =/= b, I will pull the usual trick and say,
"without loss of generality" (WOLOG), assume a < b.

Then:

| x is between a and b
|
| if and only if
|
| a < x and x < b

Now, if we are operating under what are likely to be
the default assumptions, the intended context, or the
standard interpretation of these statements, then the
sentence "x < y" invokes a dyadic relation LT c R x R,
where R = Reals, and this in turn corresponds to the
proposition lt : RxR -> B, where B = {0, 1} as usual.

This allows us to articulate the sentence "x is between a and b"
as invoking a certain 3-adic relation "1st between 2nd and 3rd",
or let us call it "IO<213>" for short, such that IO<213> c R^3,
and this in turn corresponds to the proposition  io<213> : R^3 -> B.

Last but not least, recall that conjunction is just a function
that maps two propositions into a proposition, in other words,
a function of the form 'and' : (X -> B) x (X -> B) -> (X -> B).

Now here's a good question:  What is X?
To be specific, is it R^2 or is it R^3?

We appear to have a little bit of a problem
matching up the divers pieces of the puzzle.

What this requires is a little gimmick that I call a "tacit extension".
In this case, we need a way of taking the function lt : R^2 -> B and
using it to construct two different functions of type : R^3 -> B.

Thus we need a couple of functions on functions, both having this form:

   te : (R<1> x R<2> -> B) -> (R<1'> x R<2'> x R<3'> -> B),

Here we must now distinguish particular copies of R
in the cartesian products RxR = R^2 and RxRxR = R^3.

| Tacit extension 21 is defined by:  (te<21>(f))(x, a, b) = f(a, x).
|
| Tacit extension 13 is defined by:  (te<13>(f))(x, a, b) = f(x, b).

Applying these tacit extensions to lt we get:

| (te<21>(lt))(x, a, b)  =  lt<21>(x, a, b) = lt(a, x)
|
| (te<13>(lt))(x, a, b)  =  lt<13>(x, a, b) = lt(x, b)

In effect, the tacit extension of the "less than" map,
given three arguments, just compares the proper pair
of them, with a "don't care" condition on the other.

Putting all of these pieces together, we have this picture:

          lt(a, x)  .       .  lt(x, b)
                    |       |
                    |       |
            te<21>  o       o  te<13>
                    |       |
                    |       |
   lt<21>(x, a, b)  .       .  lt<13>(x, a, b)
                     \     /
                      \   /
                       \ /
                      [and]
                        |
                        |
                        |
                        .
                 io<213>(x, a, b)

This gives a functional decomposition of the proposition io<213>
making use of two different applications of the proposition lt,
a couple of tacit extensions te<21> and te<13>, and the reusable
function 'and' that connects the tacit extensions of lt together.
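
For whatever it is worth as a check, here is the same decomposition
assembled in the little Python sketch begun above, with and_ standing
in for the connective 'and' (renamed only to dodge Python's reserved
word):

   # The connective 'and' : (X -> B) x (X -> B) -> (X -> B), with X = R^3,
   # and the decomposition io<213> = 'and'(te<21>(lt), te<13>(lt)).

   def lt(x, y):
       return x < y

   def te_21(f):
       return lambda x, a, b: f(a, x)

   def te_13(f):
       return lambda x, a, b: f(x, b)

   def and_(p, q):
       # Pointwise conjunction of two propositions on R^3.
       return lambda x, a, b: p(x, a, b) and q(x, a, b)

   io_213 = and_(te_21(lt), te_13(lt))

   # Agrees with the direct reading "a < x and x < b".
   for (x, a, b) in [(5, 1, 9), (0, 1, 9), (1, 1, 9)]:
       assert io_213(x, a, b) == (a < x and x < b)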

What is the use of all of this? -- especially in view of the fact
that we did not reduce the maximum arity of the relations involved?

In sum, we started with the 3-adic relation map  io<213> : R^3 -> B
and analyzed it into two applications of the 2-adic relation map
lt : R^2 -> B,  plus two tacit extensions of the form
te : (R^2 -> B)  ->  (R^3 -> B),  and a connective of the shape
'and' : (R^3 -> B)^2 -> (R^3 -> B),  itself 3-adic in character.

Well, the advantage, over the long haul, comes precisely from the fact
that the seemingly more complex pieces that we reached in the analysis
are pieces that are highly reusable in situation after situation, and
so it is worth the overhead, eventually, that it takes to understand
these recurring components as thoroughly as possible.

That brings an end to these Legends of the Fall,
And so I wish a gentle night to you one and all.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Note 3

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Howard,

Here is another installment on "Reductions Among Relations":

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

RAR.  Note

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Subj:  Reductions Among Relations
Date:  Mon, 14 May 2001 15:30:25 -0400

I thought that it might be a beneficial exercise for us to
work out an explicit definition of "relational composition",
partly as an object example of how such a thing ought to look.

I will begin at the beginning, roughly with the way that C.S. Peirce
initially defined and outlined his notion of "relative multiplication",
though I will be doing this from memory -- so be gentle, I will check
with the original papers (circa 1870) later on.  An interesting thing
from my perspective, and what first drew me back to this stage in the
history of logic from the problems that I was investigating years ago,
is the way that Peirce had already, at the very beginning, integrated
the aspects of the logic of relations that we usually problematize as
the "extensional" versus the "intensional".  Some of that integration
is dimly apparent even in these simple cases, but the remainder of it
would be well worth detailed and extended study.

Here are some excerpts from the "Influenza" example:

Part 1.  Graph-Theoretical Picture of Composition

There is this 3-adic transaction among
three relational domains.  Let us say:
"Transmitters", "Vectors", "Receivers",
and let us symbolize:  C c T x V x R.

In order to prove the proposition, for instance, as in a court of law,
that "John J gave-the-flu-to Mary M", it is necessary but by no means
enough to convince an arbiter that an infectious colony or a virulent
sample of particular micro-organisms of the genus known as "influenza"
was transported from John J (SSN 1 TBA) to Mary M (SSN 2 TBA) on some
well-specified occasion in question.

In other words, the "evidence" for the 2-adic relation that bears
the form and the description F c T x R : "-- gave-the-flu-to --",
is found solely within the 3-adic relation of "communication" C.

Let us assume that this long chain of causal and physical "influences"
can be more conveniently summarized, for our present purposes, in the
form of a 3-adic relation that connects a transmitter t, a "vector" v,
and a receiver r.  Thus a bona fide incident or a genuine instance of
the "communication relation" C c TxVxR will be charted, in a manner
"minimally adequate", as they say in epidemiology, in a datum of the
form <t, v, r>.

What is the character of the relationship between
the 3-adic relation of "communication" C c TxVxR
and the 2-adic relation "-- gave-the-flu-to --"?

This particular relation among relations --
you might expect me to reach here for the
term "meta-relation", but I will avoid it
if I can help it -- is commonly called a
"projection", in this case having the type
Proj : TxVxR -> TxR.  Our use of it in the
present case is an example of how we pass
from caring about the "detail of the
evidence" (DOTE) to desiring only a brief
summary of the fact.

For now, let us stipulate that we have the following
sample of data about the 3-adic relation C c TxVxR :

   {..., <John J, Agent A, Mary M>, ...}.

In other words, we are fixing on a single element:

   <John J, Agent A, Mary M>  in  C  c  TxVxR.
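
As a minimal sketch of the projection just described, assuming the
relation is given as a finite set of triples (the names below are
just strings standing in for the individuals of the example):

   # Proj : TxVxR -> TxR, applied to a finite sample of C c TxVxR.

   C = {("John J", "Agent A", "Mary M")}

   def proj_TR(relation):
       # Project a set of <t, v, r> triples onto its <t, r> pairs.
       return {(t, r) for (t, v, r) in relation}

   assert proj_TR(C) == {("John J", "Mary M")}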

Let us now contemplate the generalization of ordinary functional composition
to 2-adic relations, called, not too surprisingly, "relational composition",
and roughly information-equivalent to Peirce's "relative multiplication".

I will employ the data of our present case to illustrate two different
styles of picture that we can use to help us reason out the operation
of this particular form of relational composition.

First I show one of my favorite genres of pictures for 2-adic relations,
availing itself of the species of graphs known as "bipartite graphs",
or "bigraphs", for short.

Let an instance of the 2-adic relation E c TxV
informally defined by {<t, v> : t exhales v},
be expressed in the form "t exhales v".

Let an instance of the 2-adic relation I c VxR
informally defined by {<v, r> : v infects r},
be expressed in the form "v infects r".

Just for concreteness in the example, let us imagine that:

1.  John J exhales three viral particles numbered 1, 3, 5.

2.  Mary M inhales three viral particles numbered 3, 5, 7,
           each of which infects her with influenza.

The 2-adic relation E that exists in this situation is
imaged by the bigraph on the T and the V columns below.

The 2-adic relation I that exists in this situation is
imaged by the bigraph on the V and the R columns below.

        E     I
     T---->V---->R

     o     1     o
          /
         /
     o  /  2     o
       /
      /
   J o-----3     o
      \     \
       \     \
     o  \  4  \  o
         \     \
          \     \
     o     5-----o M
                /
               /
     o     6  /  o
             /
            /
     o     7     o

Let us now use this picture to illustrate for ourselves,
by way of concrete examples, many of the distinct types
of set-theoretic constructs that would arise in general
when contemplating any similar relational configuration.

First of all, there is in fact a particular 3-adic relation Q
that is determined by the data of these two 2-adic relations.
It cannot be what we are calling the "relational composition"
or the "relative product", of course, since that is defined --
forgive me if I must for this moment be emphatic -- DEFINED
to be yet another 2-adic relation.  Just about every writer
that I have read who has discovered this construction has
appeared to come up with a different name for it, and I
have already forgotten the one that I was using last,
so let me just define it and we will name it later:

What we want is easy enough to see in visible form,
as far as the present case goes, if we look at the
composite sketch already given.  There the mystery
3-adic relation has exactly the 3-tuples <t, v, r>
that are found on the marked paths of this diagram.

That much insight should provide enough of a hint
to find a duly officious set-theoretic definition:

   Q  =  {<t, v, r> : <t, v> in E and <v, r> in I}.

There is yet another, very convenient, way to define this,
the recipe of which construction proceeds by these stages:

1.  For 2-adic relation G c TxV, define GxR,
    named the "extension" of G to TxVxR, as:
    {<t, v, r> in TxVxR : <t, v> in G}.

2.  For 2-adic relation H c VxR, define TxH,
    named the "extension" of H to TxVxR, as:
    {<t, v, r> in TxVxR : <v, r> in H}.

In effect, these extensions just keep the constraint
of the 2-adic relation "in its places" while letting
the other elements roam freely.

Given the ingredients of these two extensions,
at the elemental level enjoying the two types:
TxV -> TxVxR  and  VxR -> TxVxR, respectively,
we can define the 3-adic Q as an intersection:

   Q(G, H)  =  GxR  |^|  TxH

One way to comprehend what this construction means
is to recognize that it is the largest relation on
TxVxR that is congruent with having its projection
on TxV be G and its projection on VxR be H.

Thus, the particular Q in our present example is:

   Q(E, I)  =  ExR  |^|  TxI

This is the relation on TxVxR that, for us, embodies an assumption
about the "evidence" underlying the case:  it restricts itself to
the information given, imposing no extra constraint.
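
Continuing the Python sketch, and assuming finite domains passed in
explicitly, the two extensions and their intersection can be written
out as set comprehensions; the tiny universe at the end is purely for
illustration:

   # The extensions GxR and TxH, and Q(G, H) = GxR |^| TxH,
   # for 2-adic relations G c TxV and H c VxR over finite domains.

   def extend_right(G, R):
       # GxR = {<t, v, r> in TxVxR : <t, v> in G}.
       return {(t, v, r) for (t, v) in G for r in R}

   def extend_left(T, H):
       # TxH = {<t, v, r> in TxVxR : <v, r> in H}.
       return {(t, v, r) for t in T for (v, r) in H}

   def Q(G, H, T, R):
       # The largest relation on TxVxR consistent with projections G and H.
       return extend_right(G, R) & extend_left(T, H)

   # A toy instance, just to exercise the definitions.
   T = {"t3"};  R = {"r5"}
   G = {("t3", "v3")};  H = {("v3", "r5")}
   assert Q(G, H, T, R) == {("t3", "v3", "r5")}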

And finally -- though it does amount to something like the "scenic tour",
it will turn out to be useful that we did things in this roundabout way --
we define the relational composition of the 2-adic relations G and H as:

   G o H  =  Proj<T, R> Q(G, H)  =  Proj<T, R> (GxR |^| TxH)

| Reference:
|
| Although it no doubt goes way back, I am used to thinking
| of this formula as "Tarski's Trick", because I first took
| notice of it in a book by Ulam, who made this attribution.
|
| Ulam & Bednarek,
|"On the Theory of Relational Structures
| And Schemata for Parallel Computation",
| Original report dated May 1977, in:
| Ulam, 'Analogies Between Analogies',
| University of California Press, Berkeley, CA, 1990.

Applying this general formula to our immediate situation:

   E o I  =  Proj<T, R> Q(E, I)  =  Proj<T, R> (ExR |^| TxI)

We arrive at this picture of the composition E o I c TxR:

       EoI
     T---->R

     o     o

     o     o

   J o     o
      \
       \
     o  \  o
         \
          \
     o     o M

     o     o

     o     o

In summation, E o I = {<John J, Mary M>}.
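
The whole "scenic tour" can be checked mechanically as well.  Here is
a hedged Python sketch using the data of the example, with the viral
particles as integers and the people named outright:

   # E o I  =  Proj<T, R> (ExR |^| TxI), checked on the influenza data.

   T = {"t1", "t2", "John J", "t4", "t5", "t6", "t7"}
   V = {1, 2, 3, 4, 5, 6, 7}
   R = {"r1", "r2", "r3", "r4", "Mary M", "r6", "r7"}

   E = {("John J", 1), ("John J", 3), ("John J", 5)}     # t exhales v
   I = {(3, "Mary M"), (5, "Mary M"), (7, "Mary M")}     # v infects r

   ExR = {(t, v, r) for (t, v) in E for r in R}
   TxI = {(t, v, r) for t in T for (v, r) in I}

   E_o_I = {(t, r) for (t, v, r) in (ExR & TxI)}

   assert E_o_I == {("John J", "Mary M")}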

By the way, you may have noticed that I am using here
what strikes me as a more natural order for composing
2-adic relations, but the opposite of what is usually
employed for functions.  In the present ordering, one
can read the appearances of the relational domains in
what seems like a much more straightforward way, just
as they are invoked by the series of relation symbols.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Part 2.  Matrix-Theoretical Picture of Composition

Let us declare a "logical basis" -- leaving it
as an exercise for the reader to improvise a fit
definition of what that is, and what it ought to
be -- at any rate, a collection of elements of
this form:

   "Basic Entia"  K  =  T |_| V |_| R

   "Transmissia"  T  =  {t1, t2, t3, t4, t5, t6, t7}

   "Viral Entia"  V  =  {v1, v2, v3, v4, v5, v6, v7}

   "Receptacula"  R  =  {r1, r2, r3, r4, r5, r6, r7}

Just by way of orientation to the way that we speak "way out here",

   t3  =  John.
   r5  =  Mary.

So far, so good, but here we have come to one of those junctures
where personal tastes are noticed to be notoriously divergent in
matters of notation, and so at this point I will simply describe
a few of the most popular options:

1.  One may lump all of these elements together and work
    with a cubic array that has dimensions  21 x 21 x 21,
    taking its projections into square matrices  21 x 21.

2.  One may consider the very likely possibility, here only,
    that the T's and the R's are abstractly the same set,
    and reduce the representation in a corresponding way.

3.  One may treat the relational domains T, V, R as three
    distinct sets, start with a 3-adic relation Q c TxVxR
    represented as a cubic array of dimensions  7 x 7 x 7,
    taking its projections into square matrices of  7 x 7.

Option 3 seems easier for us here,
just as a way of conserving space.

The extensions of the 2-adic relations E and I
are the following collections of ordered pairs:

   E  =  {<t3, v1>, <t3, v3>, <t3, v5>}

   I  =  {<v3, r5>, <v5, r5>, <v7, r5>}

Peirce represented 2-adic relations in this form:

   E  =  t3:v1 +, t3:v3 +, t3:v5

   I  =  v3:r5 +, v5:r5 +, v7:r5

It is very often convenient, though by no means obligatory,
to arrange these quasi-algebraic terms in forms like these:

T x V  =

[
|  t1:v1,  t1:v2,  t1:v3,  t1:v4,  t1:v5,  t1:v6,  t1:v7,
|  t2:v1,  t2:v2,  t2:v3,  t2:v4,  t2:v5,  t2:v6,  t2:v7,
|  t3:v1,  t3:v2,  t3:v3,  t3:v4,  t3:v5,  t3:v6,  t3:v7,
|  t4:v1,  t4:v2,  t4:v3,  t4:v4,  t4:v5,  t4:v6,  t4:v7,
|  t5:v1,  t5:v2,  t5:v3,  t5:v4,  t5:v5,  t5:v6,  t5:v7,
|  t6:v1,  t6:v2,  t6:v3,  t6:v4,  t6:v5,  t6:v6,  t6:v7,
|  t7:v1,  t7:v2,  t7:v3,  t7:v4,  t7:v5,  t7:v6,  t7:v7,
]

V x R  =

[
|  v1:r1,  v1:r2,  v1:r3,  v1:r4,  v1:r5,  v1:r6,  v1:r7,
|  v2:r1,  v2:r2,  v2:r3,  v2:r4,  v2:r5,  v2:r6,  v2:r7,
|  v3:r1,  v3:r2,  v3:r3,  v3:r4,  v3:r5,  v3:r6,  v3:r7,
|  v4:r1,  v4:r2,  v4:r3,  v4:r4,  v4:r5,  v4:r6,  v4:r7,
|  v5:r1,  v5:r2,  v5:r3,  v5:r4,  v5:r5,  v5:r6,  v5:r7,
|  v6:r1,  v6:r2,  v6:r3,  v6:r4,  v6:r5,  v6:r6,  v6:r7,
|  v7:r1,  v7:r2,  v7:r3,  v7:r4,  v7:r5,  v7:r6,  v7:r7,
]

Now, taking these generic motifs as scenic -- or, at least, schematic --
backdrops, one can permit the particular characters of one's favorite
2-adic relations to represent themselves and to play out their action
on this stage, by attaching affirming or nullifying "coefficients" to
the appropriate places of the thus-arrayed company of possible actors.

E  =

[
|  0 t1:v1,  0 t1:v2,  0 t1:v3,  0 t1:v4,  0 t1:v5,  0 t1:v6,  0 t1:v7,
|  0 t2:v1,  0 t2:v2,  0 t2:v3,  0 t2:v4,  0 t2:v5,  0 t2:v6,  0 t2:v7,
|  1 t3:v1,  0 t3:v2,  1 t3:v3,  0 t3:v4,  1 t3:v5,  0 t3:v6,  0 t3:v7,
|  0 t4:v1,  0 t4:v2,  0 t4:v3,  0 t4:v4,  0 t4:v5,  0 t4:v6,  0 t4:v7,
|  0 t5:v1,  0 t5:v2,  0 t5:v3,  0 t5:v4,  0 t5:v5,  0 t5:v6,  0 t5:v7,
|  0 t6:v1,  0 t6:v2,  0 t6:v3,  0 t6:v4,  0 t6:v5,  0 t6:v6,  0 t6:v7,
|  0 t7:v1,  0 t7:v2,  0 t7:v3,  0 t7:v4,  0 t7:v5,  0 t7:v6,  0 t7:v7,
]

I  =

[
|  0 v1:r1,  0 v1:r2,  0 v1:r3,  0 v1:r4,  0 v1:r5,  0 v1:r6,  0 v1:r7,
|  0 v2:r1,  0 v2:r2,  0 v2:r3,  0 v2:r4,  0 v2:r5,  0 v2:r6,  0 v2:r7,
|  0 v3:r1,  0 v3:r2,  0 v3:r3,  0 v3:r4,  1 v3:r5,  0 v3:r6,  0 v3:r7,
|  0 v4:r1,  0 v4:r2,  0 v4:r3,  0 v4:r4,  0 v4:r5,  0 v4:r6,  0 v4:r7,
|  0 v5:r1,  0 v5:r2,  0 v5:r3,  0 v5:r4,  1 v5:r5,  0 v5:r6,  0 v5:r7,
|  0 v6:r1,  0 v6:r2,  0 v6:r3,  0 v6:r4,  0 v6:r5,  0 v6:r6,  0 v6:r7,
|  0 v7:r1,  0 v7:r2,  0 v7:r3,  0 v7:r4,  1 v7:r5,  0 v7:r6,  0 v7:r7,
]

And then there are times when it is not so convenient!

At any rate, it is then conceivable to push the level
of abstraction in our so-arrayed representations even
one step further, and so long as we keep in mind what
the now-suppressed row-indices and column-indices are
supposed to signify, logically speaking, in the first
place, then we can push them even deeper into the dim
and tacit background of the overriding interpretation.

E  =

[
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  1,  0,  1,  0,  1,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
]

I  =

[
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  1,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  1,  0,  0,
|  0,  0,  0,  0,  0,  0,  0,
|  0,  0,  0,  0,  1,  0,  0,
]

When all of this is said and done, that is to say,
when all of this is said and done the fitting way,
then one can represent relative multiplication or
relational composition in terms of an appropriate
quasi-algebraic "multiplication" operation on the
rectangular matrices that represent the relations.
The logical operation of the relative product has
to be qualified as "quasi-algebraic" just to help
us keep in mind the fact that it is not precisely
the one that algebraically-minded folks would put
on the same brands of {0, 1}-coefficient matrices.
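
For the record, a minimal Python sketch of that quasi-algebraic
"multiplication", with inclusive-or doing the work of addition and
conjunction doing the work of multiplication on the 7 x 7 coefficient
arrays above:

   # Relational composition as {0, 1}-matrix "multiplication",
   # with 'or' in place of plus and 'and' in place of times.

   def bool_mat_mult(A, B):
       n, m, p = len(A), len(B), len(B[0])
       return [[int(any(A[i][k] and B[k][j] for k in range(m)))
                for j in range(p)] for i in range(n)]

   E = [[0] * 7 for _ in range(7)]
   E[2][0] = E[2][2] = E[2][4] = 1     # row t3:  columns v1, v3, v5

   I = [[0] * 7 for _ in range(7)]
   I[2][4] = I[4][4] = I[6][4] = 1     # rows v3, v5, v7:  column r5

   EI = bool_mat_mult(E, I)
   assert EI[2][4] == 1                # t3:r5, that is, John:Mary
   assert sum(map(sum, EI)) == 1       # and no other entry is set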

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Onward ...

Let me start by going back to the way that Peirce
initially represented 2-adic relations in terms of
2-tuples that have an algebra defined on them:

| Peirce represented 2-adic relations in this form:
|
|   E  =  t3:v1 +, t3:v3 +, t3:v5
|
|   I  =  v3:r5 +, v5:r5 +, v7:r5

The composite symbol "+," will have to serve for
the variety of different symbols that Peirce and
his variety of different typographers have taken
to using at different points in time to signify
the logical operation of inclusive disjunction.
Especially in this matrix algebra setting, the
ordinary plus sign "+" is a strictly reserved
keysign for the additive operation of a field.

Given a basis $X$ = {x<1>, ..., x<n>}, where each element
stands in for an entity, an object, a particular, a thing,
or whatever in the universe of discourse, Peirce employed
a "colonial notation" of the form "s<1>: ... :s<k>", with
s<j> in $X$ for j = 1 to k, to represent ordered k-tuples.

Each type of relational operation that one defines over
this initial basis and over this k-tuply extended basis
corresponds to a way of combining f-tuples and g-tuples
to arrive at specified h-tuples for appropriate choices
of positive integers f, g, h.

In the case of the ordinary relational composition of
two 2-adic relations to get a product 2-adic relation,
the quasi-algebraic rule is this:

|   (u:v)(x:y) = (u:y) if v = x.
|   (u:v)(x:y) =   0   otherwise.

For example, consider the problem to compute the
relative product EI = EoI of the pair of 2-adic
relations, E and I, that are given as follows:

|   E  =  t3:v1 +, t3:v3 +, t3:v5
|   I  =  v3:r5 +, v5:r5 +, v7:r5

One simply forms the indicated product:

|   EI = (t3:v1 +, t3:v3 +, t3:v5)(v3:r5 +, v5:r5 +, v7:r5)

and then proceeds to multiply the whole thing out,
using a composition rule of the form given above
to reduce complex terms whenever possible.

Like so:

|   EI  =  (t3:v1 +, t3:v3 +, t3:v5)(v3:r5 +, v5:r5 +, v7:r5)
|
|       =  (t3:v1)(v3:r5) +, (t3:v1)(v5:r5) +, (t3:v1)(v7:r5) +,
|          (t3:v3)(v3:r5) +, (t3:v3)(v5:r5) +, (t3:v3)(v7:r5) +,
|          (t3:v5)(v3:r5) +, (t3:v5)(v5:r5) +, (t3:v5)(v7:r5)
|
|       =    0    +,    0    +,    0    +,
|          t3:r5  +,    0    +,    0    +,
|            0    +,  t3:r5  +,    0
|
|       =  t3:r5
|
|       =  John:Mary

In conclusion, we compute that John gave the flu to Mary.
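
And once more in the Python sketch, this time following the
quasi-algebraic rule on elementary relatives directly, with each
aggregate "+," of terms read as a plain set of pairs:

   # (u:v)(x:y) = (u:y) if v = x, and 0 otherwise,
   # summed out over two aggregates of elementary relatives.

   def relative_product(E, I):
       # Multiply the two sums term by term, keeping the surviving products.
       return {(u, y) for (u, v) in E for (x, y) in I if v == x}

   E = {("t3", "v1"), ("t3", "v3"), ("t3", "v5")}
   I = {("v3", "r5"), ("v5", "r5"), ("v7", "r5")}

   assert relative_product(E, I) == {("t3", "r5")}     # John:Mary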

Okay, that is an illustration of yet another way to compute
the relative product, and it is still not the kind of exact
definition that I had in mind when I started this note, but
I will need to rest a while and get back to that task later.
I now retreat to a quiet nook to read some poetry to myself.

I leave you to your own devices.

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Inquiry Into Irreducibility

http://suo.ieee.org/ontology/msg03163.html
http://suo.ieee.org/ontology/msg03164.html
http://suo.ieee.org/ontology/msg03165.html
http://suo.ieee.org/ontology/msg03166.html
http://suo.ieee.org/ontology/msg03167.html
http://suo.ieee.org/ontology/msg03168.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Detached Ideas On Virally Important Topics

http://suo.ieee.org/ontology/msg01716.html
http://suo.ieee.org/ontology/msg01722.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o

Reductions Among Relations

http://suo.ieee.org/ontology/msg01727.html
http://suo.ieee.org/ontology/msg01738.html
http://suo.ieee.org/ontology/msg01747.html
http://suo.ieee.org/ontology/msg01766.html
http://suo.ieee.org/ontology/msg01818.html
http://suo.ieee.org/ontology/msg01821.html
http://suo.ieee.org/ontology/msg02167.html
http://suo.ieee.org/ontology/msg02475.html

o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o~~~~~~~~~o