
How can you tell whether a robot is referring?

In brief: I still don’t know.

I keep saying I’m going to work up to an examination of reference and objects. I’m not ready for this yet, but I wanted to put down a few thoughts.

Recall by way of motivation that on first encountering the so-called ‘semantic web’ and its dogma of ‘identification’ I felt that it didn’t belong in an engineering context without further explanation. When I expressed my discomfort to one of the principals, he challenged me to fix it.

I’ve claimed that propositions and the semantics (or pragmatics) of complete messages can be put on a foundation solid enough for scientific and engineering analysis. The question, then, is whether we can do something similar for reference. I put forth two explanations of propositional meaning: one in terms of state spaces, and the other in terms of engineering specifications.

The state space explanation says that a proposition is a bipartition of the state space of a system (or world) into a block of states in which the proposition is true, and a block in which the proposition is false. State spaces are familiar in all kinds of systems analysis and are well within the comfort zone of engineers and mathematicians. Propositions can be related to one another, and whether an agent generates a message is itself a proposition. So this seems tidy.
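
To make the picture concrete, here is a minimal sketch in Python. The toy thermostat world and every name in it are my own invention, purely for illustration:

```python
# Toy world states: (temperature in degrees C, heater_on).
STATE_SPACE = {(t, h) for t in range(15, 31) for h in (False, True)}

def proposition(pred):
    """A proposition as a bipartition of the state space: the block of
    states in which it is true, and the complementary block in which
    it is false."""
    true_block = frozenset(s for s in STATE_SPACE if pred(s))
    return true_block, frozenset(STATE_SPACE) - true_block

def entails(p, q):
    """Propositions relate set-theoretically: P entails Q exactly when
    P's true block is contained in Q's true block."""
    return p[0] <= q[0]

cold = proposition(lambda s: s[0] < 18)       # 'the room is cold'
below_25 = proposition(lambda s: s[0] < 25)   # 'the room is below 25'
assert entails(cold, below_25)

# If the state also recorded whether a given message was generated,
# 'agent A generated message M' would itself be a proposition in
# exactly the same sense.
```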

The engineering-specification explanation says that a proposition is something that can be tested. We can say that a message means a proposition if, roughly speaking, the message is generated when, and only when, the proposition is true. This kind of condition is fine as a specification if we can determine when the proposition holds, and many propositions are amenable to such a test – and the ones that aren’t are ones we probably don’t or shouldn’t care about.
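
As a sketch of what such a test might look like (the trial set and the two probe functions are hypothetical placeholders, not a real test harness):

```python
def message_means(trials, generated_m, p_holds):
    """Check the biconditional spec over observed trials: agent A
    generated message M when, and only when, proposition P held."""
    return all(generated_m(t) == p_holds(t) for t in trials)
```

Any observed trial in which the two sides disagree falsifies the claim that M means P; that is all the experimental determination needs to amount to.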

So if someone claims that by message M, agent A means proposition P, we have ways to test whether it does – we don’t have to take such a claim on faith, and we don’t need to introspect to get the answer. It just does or doesn’t, and this can be determined experimentally.

The problem with reference (or the meaning of a noun phrase; I’m not going to bother yet with Frege’s sense/reference distinction) is how to tell a correspondingly comfortable story about claims of the form: by generating a message that has message part Z, agent A is referring to X. Suppose there were a dispute over the claim that by Z, A refers to X. How would it be settled? Not by repeating the claim, and not by introspection or projection, I hope.

This is an especially severe question when A is an engineered artifact (what I’ve been calling a ‘robot’) that is doing the putative referring (i.e. is sending the message that has the putatively referring part Z). How can you tell whether a robot is referring?

I take as given that robots *can* refer, since I believe that humans as language-speaking agents are different only in degree, not kind, from robots. There is no secret sauce that only humans have that lets them refer.

My homework: make another assault on On the Origin of Objects by Brian Cantwell Smith.

Pointless logics

I happened on a passage in Wikipedia calling out a connection between description logic (DL) and propositional dynamic logic (PDL). Both of these formal systems are pointless; they don’t have reference in the usual form. This is certainly appealing for someone trying to eliminate and rebuild reference. For any proposition there is an implicit subject, an ‘individual’ in the DL case and a ‘world state’ in the PDL case. One can ‘talk about’ a new subject by saying what operator to apply to get from the current subject to the new one. You don’t refer to a subject; you give a path to access a new subject from an old one.
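
Here is a crude illustration of that access-by-path idea. This is my own toy, not actual DL or PDL syntax; the roles and the record structure are hypothetical:

```python
def follow(subject, *roles):
    """Reach a new subject by applying a path of role operators to the
    current (implicit) subject, rather than by naming the target."""
    for role in roles:
        subject = role(subject)
    return subject

# Hypothetical roles over a toy record structure.
mother_of = lambda x: x["mother"]
employer_of = lambda x: x["employer"]

current = {"mother": {"employer": "Acme"}}
# 'the employer of the mother of (the current subject)'
assert follow(current, mother_of, employer_of) == "Acme"
```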

Objects as spindles

Here is something I keep picturing. Sentences that share a common subject (phrase) mean propositions that are all about the same thing. (Modulo homonyms, that is, but please let me ignore those.) So we might say, as a way to eliminate referents from the account, that an object is just what a particular collection of related propositions is about. Think of the object as corresponding to a spindle, with all the propositions about it impaled on that same spindle. The purpose of objects might be to organize propositions.
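
A toy rendering of the picture, with every name hypothetical:

```python
from collections import defaultdict

spindles = defaultdict(set)   # spindle -> propositions impaled on it

def impale(spindle, proposition):
    """File a proposition on a spindle. The spindle does no referring
    work of its own; it only collects propositions sharing a subject."""
    spindles[spindle].add(proposition)

impale("spindle-42", "is red")
impale("spindle-42", "weighs 3 kg")
# On this picture, the 'object' just is what the collection
# spindles["spindle-42"] is jointly about.
```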

Inferentialism

I like inferentialism, and was interested to hear the inferentialist take on reference. Jeremy Wanderer’s book on Brandom talks about ‘the challenge of subsentential structure’ – that nails the problem. But it then goes on to repeat Quine’s idea (in *Use and its place in meaning*) that substitution of one phrase for another, or coreference, is the best one can do by way of explanation. I find this very unsatisfying. If in a given language there were only one phrase that could refer to X, then we would have no account at all of the meaning of that phrase, which is absurd.

  1. 2015-03-14 at 21:09

    “So if someone claims that by message M, agent A means proposition P”
    Consider not ascribing meaning to messages outside of agents.
    By message M, agent A means that agent A believes P and intends all recipients B to believe P too.

    This model accounts for trust, lying, ambiguity of references, being misinformed.

    P might include a reference Z that A intends to refer to X. B reads M and decides to trust A enough to start believing P as well, but B’s understanding of Z is different; A thinks Z means X but B thinks Z means something different. If the difference matters, then B was wrong to have so completely trusted A without translation.

    • 2015-04-19 at 01:43

      I’m not sure why you think I’m ascribing meaning to messages outside of agents. I was careful to establish early on in this series of posts that it’s better not to do so, and I think I’ve been careful. If I’ve omitted the agent in places that’s because in context the applicable agent is clear.

      I don’t think belief is necessarily linked to communication, even when it’s propositional communication. If A generates M if and only if P, then A generating M means that P, regardless of what A believes. We might conclude from A generating M that A believes P, but that is not part of the meaning of M; and it could easily be wrong, as when A experiences a failure. And it is odd to ascribe belief to a robot generating a message M. We don’t generally think of robots as being mentally enriched or ‘intentional’ enough to be capable of belief. They just do what they do without indulging in propositional attitudes.

      Communication and trust aren’t uniquely linked. If there is a failure in an engineered system, that is just a flaw in the system or its design, or operation outside its operating zone. That’s true whether it’s a system of communicating agents or some other kind of system. If a system has multiple parts made by different engineers, and the interests of the engineers are not aligned, that’s when we will see a difference between a flaw (e.g. misinterpretation) and sabotage (e.g. lying, misinformation, betrayal), and that is where trust comes in. Trust is a judgment as to whether another party will act in your interest or not, in some particular situation. But trust figures into all kinds of system interactions, not just communication, so it should be treated orthogonally. Agents’ communication behavior is like the interaction of a bolt and a nut; trust is about whether generation and interpretation will meet a spec (or unwritten social norm), or whether a bolt will hold.

      My point in the post is that it’s really unclear how to scientifically assay what B’s understanding of a message-part Z is. What experiments do you do on B to find out what that understanding is? You can see what B does with any number of messages that contain Z, but that only tells you the meanings of B’s generating or interpreting those messages, not anything about what it does with messages containing Z that you didn’t assay.
