
Communication without information

This is a turn in an ongoing conversation with Alan Ruttenberg; exploratory.

A system consists of a collection of parts that interact with one another over some period of time. A jazz concert is a system in which musicians interact with instruments and an audience. A scientific study is a system in which technicians interact with laboratory apparatus to manipulate and perform measurements on subjects. A computer is a system in which electronic components interact via changes in electric potentials, and so on.

Because parts act on other parts, they are sometimes called “agents” (“agent” and “act” having the same etymological root).

Examples of interactions include two rods linked by a joint, so that they are held together at a point but can pivot; a door held closed against a frame by a magnet; a lever used to apply a brake. Interactions induce connections or correlations between parts: when A is connected to B, A’s state correlates with B’s state. A’s state is affected by what has happened in A’s past, so after an interaction with B, B’s future states are affected by A’s past as well.

Certain kinds of interactions are ordinarily called out as communication interactions or communication events. (Interactions that are not communication interactions, such as those mentioned above, are sometimes called “physical” although of course communication is a physical phenomenon as well.) In a communication interaction a message (the word “token” in semiotics is related) is chosen or created by part A, placed or transmitted on a channel, and received or sensed over the channel by part B. Communication interactions are just like any other kind in that they establish correlations between the states of the interacting parts.

Part A is built so that it generates message M1 in some states, message M2 in some other states, and so on. Part B is built so that when it receives M1, its state alters in one way, and if it gets M2, its state alters in another way, etc. This mechanistic account allows us to apply this pattern of communicating parts to the simplest of agents, such as components of an electronic circuit, procedures in a computer program, or molecules in a cell.
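To make the mechanistic account concrete, here is a minimal sketch in Python (the class names, state names, and message names are invented for illustration, not part of the account above): part A maps its own state to a message, part B maps a received message to a change in its own state, and the correlation between the two arises from nothing more than these two lookup tables.

```python
# Minimal sketch of the mechanistic pattern: A's state determines the message
# it generates; B's state changes according to the message it receives.
# All names here are illustrative.

class PartA:
    # which message A generates in which state
    MESSAGE_FOR_STATE = {"state_1": "M1", "state_2": "M2"}

    def __init__(self, state):
        self.state = state

    def emit(self):
        return self.MESSAGE_FOR_STATE[self.state]


class PartB:
    # how B's state alters on receipt of each message
    NEXT_STATE_FOR_MESSAGE = {"M1": "altered_one_way", "M2": "altered_another_way"}

    def __init__(self):
        self.state = "idle"

    def receive(self, message):
        self.state = self.NEXT_STATE_FOR_MESSAGE[message]


a, b = PartA("state_1"), PartB()
b.receive(a.emit())
print(a.state, b.state)  # state_1 altered_one_way -- B's state now correlates with A's
```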

Contrived example

The system consists of Alice and Bob, who are separated by a partition so that they can hear but not see one another; a pair of lights, visible only to Alice; and a pair of buttons, visible only to Bob. When Alice sees the red light go on, she says “sheep”, and when she sees the green light go on, she says “goat”. (Why? Perhaps she is paid to.) When Bob hears “sheep” he presses the black button, and when he hears “goat” he presses the white button.

Or more pedantically: Alice’s state changes from an idle state to “I’ve seen the red light”, then to “I need to say ‘sheep’”, then to saying sheep, then back to idle. Bob goes through a similar series of state transitions. The states of the lights, of Alice, of Bob, and of the buttons become correlated with one another, e.g. the red light going on correlates with the black button being pressed a little bit later.

The overall effect of the system is that the red light going on leads to the black button being pressed, and the green light going on leads to the white button being pressed. I think most people would say that this effect is achieved in part through communication between Alice and Bob (the shouting/hearing events).
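The same point can be put in a few lines of code. The sketch below (Python; the rule tables and function name are mine, not part of the post) wires Alice’s and Bob’s rules together through a shared channel, the spoken word. Neither table mentions the other, and nothing labelled “information” is handed from one to the other, yet the red light reliably ends with the black button being pressed.

```python
# Hedged sketch of the contrived example: two rule tables joined by a channel.
ALICE_RULES = {"red": "sheep", "green": "goat"}    # light Alice sees -> word she says
BOB_RULES   = {"sheep": "black", "goat": "white"}  # word Bob hears  -> button he presses

def run_system(light):
    word = ALICE_RULES[light]   # Alice: idle -> saw the light -> says the word -> idle
    button = BOB_RULES[word]    # Bob:   idle -> heard the word -> presses the button -> idle
    return button

assert run_system("red") == "black"    # red light leads to the black button
assert run_system("green") == "white"  # green light leads to the white button
```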

Observe: Alice does not need to “know” why she is doing what she does; she doesn’t need to know the “meaning” of the lights (other than that she is supposed to say certain words when they go on); in particular she doesn’t need to know what Bob does on hearing what she says. Similarly, Bob needn’t know why Alice says what she says. If you were to ask Alice what “sheep” means (or implies), she might say it means that the red light has gone on; but if you were to ask Bob, he might say that it means he is supposed to press the black button. To us, as outside observers who know all about what is going on, “sheep” means both of these, but to the agents in the system, the word has different meanings, and that’s OK.

So where is “information transfer” in this account?

In the traditional account of communication, not only are the states of A and B correlated, but B comes to have some property that formerly A had – that property is transferred from A to B. The property is usually described as “knowing P” or “having the information that P” where P is some piece of information (sometimes called a “proposition”), and we say that B has “learned” P from A.

(In the Information Artifact Ontology – ignore this paragraph if you’re not part of this debate – this is described differently, but the effect is exactly the same: the property is that of having a part that concretizes some information content entity P; we start with A possessing this property; the message M somehow comes to concretize the same ICE P; and successful communication consists of B acquiring the property (of having a part that concretizes P).)

Since the message M has a special role in such an interaction, it seems closely connected with P, so we say that P is the “information” carried by M (or that M concretizes P) and that M “encodes” P.

We have seen above that property transfer of this sort is not a necessary element of an account of communication, nor is it necessary for systems that depend on communication-like processes to function correctly. It may be a useful pattern in certain kinds of theories (e.g. MTRANS in Schank’s script theory), and it may accurately describe many kinds of interactions between rational agents, but it is not essential.

The property-transfer idea raises the ontological problem of figuring out what kind of thing or substance “information” is [see “Use and its place in meaning” by Quine]. Because “information” seems non-physical it is mysterious and problematic from an empirical point of view. Skinner calls ideas like this “explanatory fictions”. In designing or checking the functioning of a system, one is faced with the question: How do we tell whether one of our agents has the information it’s supposed to have? If the agent is a person, perhaps they can be quizzed, or be subjected to brain imaging; a computer might have its memory inspected. Such tests are difficult – they break abstraction barriers, i.e. they require intimate knowledge of the internal structure and functioning of the agents. Such knowledge may not be available, e.g. we may have a computer sealed inside a laboratory instrument with no ability to inspect its internals.

And such tests of the “information state” of agents don’t really get to the point, which is whether the system will work, or does work, as expected. A person might respond correctly in a quiz, and a computer might hold data structures that might be described as “having information”, yet in neither case do we have assurance that the agent will act correctly on the information they supposedly have. Ultimately we don’t really care what messages agents use to communicate, or what they “know”. We care whether the system has particular properties, such as the ability to help us do science.

(Alan now wants me to explain a few things like reference (or denotation) and miscommunication, which I think I’ll have to do separately.)

Updated: “linked” changed to “connected”

With a nod to Peter Hurd, and Quine of course
