CS 337 -- Intro to Semantic Information Processing -- L. Birnbaum






To repeat our conclusions from the last lecture:


    Understanding language comprises understanding the situations

    (states, actions, objects, etc.) it describes, in conjunction with

    the goals of the speaker or writer in speaking or writing as he did.

    In other words, understanding means being able to explain why and

    how what happened happened and why it was reported as it was.


    Such understanding requires plausible inferences based on the

    text, on context, and on background world knowledge.


    In the case of language understanding, these inferences

    disambiguate and reveal implicit content.


    We need a KNOWLEDGE REPRESENTATION system capable of representing

    both the MEANING of texts and the BACKGROUND KNOWLEDGE used in

    understanding those texts.


    Our ability to distinguish among the distinct meanings of

    ambiguous words and utterances shows that the underlying

    representations of those distinct meanings are different.

    That is, the representation itself must be relatively unambiguous.


We now approach the question of criteria for knowledge

representations.  There are two sorts:


    CONTENT: A representation must be able to express the ideas

    (facts, opinions, etc.) that are needed to perform the tasks at hand.


    FUNCTION: What are those tasks, and how do they affect the form

    that the representation should take?


    In slogan form:  STRUCTURE = FUNCTION + CONTENT


    (Of course, the primary functional requirement is the need to

    express the appropriate content.)


It is hard to say anything concrete about how to go about constructing

a representation system that can express the content you need.  You

must look at a lot of examples, and try to represent a lot of facts.

When you can't find a way to represent something you need, you change

the representation.


    It seems obvious to us that if one wants to describe a domain, one

    goes about it by looking for the concepts in that domain that serve

    to organize the most information.  For each domain then, there

    should be some set of [such] primitives.  (Schank & Carbonell)


    The symptom of having got [the representation] wrong is that it

    seems hard to say anything very useful about the concepts one has

    proposed.  It is easier, fortunately, to recognize when one [has got

    it right]: assertions suggest themselves faster than one can write

    them down.  (Hayes)


AI has more to say about functional concerns, however.  In order to

understand the relevance of such concerns to representation, we must

first distinguish three (related) levels at which representations can

be discussed:


    Representations themselves (R): Elements of this level of analysis

    are the representations of facts and utterances (or rather, specific

    interpretations of utterances) themselves -- the kinds of structures

    a computer program would compute during understanding and inference.


    Representation system, or vocabulary (V): Elements of this level

    of analysis are the vocabulary out of which representations at the

    R level are built, and the inference rules by which those

    vocabulary items are related to each other.


    Representation theory (T): The elements of this level of analysis

    are claims and hypotheses about the other two levels and their

    relationship.  As pointed out above, these claims and hypotheses

    should, as much as possible, be motivated by the considerations of

    FUNCTION and CONTENT.  In fact, a large part of the theory is

    simply delineating these considerations.


The best possible theory, i.e., set of claims in T, would completely

determine every decision about V and R.  No such T exists for any

reasonable-sized domain, so parts of every V, and hence every

representation, are somewhat arbitrary.  Nevertheless, there is a

great deal that we can indeed say.


Functional considerations form the basis of Schank's conceptual

dependency theory:


    A good representation must facilitate the plausible inference and

    memory processing needed to understand.  (This is a claim about R;

    the claim itself is part of T.)


    Thus, a good representation system must make it possible to build

    such representations.  (This is a claim about V; again, the claim

    itself is part of T.)


Content considerations:  What is the domain of facts and utterances that

we wish to represent?  The answers form part of T.


    We are concerned primarily with the problem of representing

    everyday discourse.  (Claim about R.)


    Thus we need a vocabulary that allows us to build representations of

    utterances about such things.  (Claim about V.)


    This is a tall order; we must be satisfied with partial success.




    What content must be represented to adequately state facts about a

    domain is a T level question.  From the point of view of the R and

    V levels, the answer is a given.


Turning to how functional considerations apply: These considerations

immediately rule out words and sentences of natural language

themselves as an acceptable meaning representation, because of

ambiguity and ellipsis.


    Example: "Mary gave John a million dollars."


            implies that John possesses a million dollars.


           "Mary gave John a kiss."


            doesn't imply that John possesses anything.


    It is difficult to prevent such confusions, however, if the

    appropriate inference rules must be posed in terms of the words

    themselves rather than in terms of their meanings.
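This point can be made concrete with a minimal sketch (the Python structures and the toy parser are illustrative, though the ATRANS primitive is Schank's): an inference rule keyed to a representational primitive fires only where the meaning warrants it, where a rule keyed to the word "gave" would overgenerate.

```python
# Toy illustration: inference rules keyed to representation
# primitives rather than to surface words.

def represent(sentence):
    """Toy 'parser': maps two fixed sentences onto structured
    meanings.  A real understander would build these compositionally."""
    if sentence == "Mary gave John a million dollars.":
        # Transfer of possession: Schank's ATRANS primitive.
        return {"primitive": "ATRANS", "actor": "Mary",
                "object": "a million dollars", "to": "John"}
    if sentence == "Mary gave John a kiss.":
        # A physical action; no transfer of possession occurs.
        return {"primitive": "PHYSCONTACT", "actor": "Mary", "to": "John"}
    raise ValueError("unknown sentence")

def infer_possession(rep):
    """Rule keyed to the ATRANS primitive, not to the word 'gave'."""
    if rep["primitive"] == "ATRANS":
        return f"{rep['to']} possesses {rep['object']}"
    return None

print(infer_possession(represent("Mary gave John a million dollars.")))
# -> John possesses a million dollars
print(infer_possession(represent("Mary gave John a kiss.")))
# -> None (the rule correctly does not fire)
```

Because the rule mentions only the primitive, the kiss reading never triggers the possession inference, with no special exception needed.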



So we can conclude the following (claims about R and V):


1a R: Unambiguous representation of utterances, i.e., one that

    reflects differences in meaning, facilitates plausible inference.


1b V (following from 1a): Unambiguous representational vocabulary

    facilitates construction of unambiguous representations.


Another problem with natural language as a meaning representation is

that the same meaning can be expressed in many ways that seem

superficially quite different.


    A good knowledge representation must, to the greatest extent

    possible, represent similar (or related) meanings in similar (or

    related) ways.


This facilitates memory search, for example in question answering:


    Shakespeare wrote "Hamlet."


    Who was the author of "Hamlet"?


It also facilitates inference: to the extent that similarities in

meaning are reflected in similarities in representation, the plausible

inferences they share can be captured by shared rules.  Example:


    John sold Bill a bicycle.


    Bill bought a bicycle from John.
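A sketch of the canonical-form idea (the structure and slot names are illustrative, not Schank's actual notation): both phrasings map onto one underlying transfer, so any inference rule or memory query written against that form automatically covers both.

```python
# Toy illustration of canonical-form representation: "sold" and
# "bought from" are surface variants of one underlying event.

def represent(sentence):
    """Toy mapping of two paraphrases onto one canonical structure."""
    if sentence == "John sold Bill a bicycle.":
        seller, buyer, obj = "John", "Bill", "bicycle"
    elif sentence == "Bill bought a bicycle from John.":
        seller, buyer, obj = "John", "Bill", "bicycle"
    else:
        raise ValueError("unknown sentence")
    # One canonical form: a transfer of the object one way and of
    # money the other way.
    return {"event": "TRANSFER", "object": obj,
            "from": seller, "to": buyer,
            "counter_transfer": {"object": "money",
                                 "from": buyer, "to": seller}}

r1 = represent("John sold Bill a bicycle.")
r2 = represent("Bill bought a bicycle from John.")
assert r1 == r2   # the paraphrases share a single representation
```

The same move handles the Shakespeare example: "wrote" and "was the author of" would both map to a single authorship structure, so the question matches the stored fact directly.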


Finally, it is also important for sharing conceptual information

across languages.


    Otherwise, when learning French for example, new inference rules

    would have to be learned to associate French sentences with their

    inferences, rather than relying on knowledge acquired in learning

    one's first language.

So we can conclude:


2a R: Similarities in meaning between utterances should be reflected

    in similar representations, to facilitate memory search and

    sharing of inference rules.


The need to restrict the representational vocabulary springs from

these requirements.


    Related meanings must, to the greatest extent possible, be

    reflected in the sharing of representational elements.


    Unnecessary representational elements lead to unnecessary

    complexity of memory and inferential processing.


    On the other hand, if the vocabulary is too restricted, one may

    not be able to express the necessary concepts, or only be able to

    do so in a cumbersome way (a nebulous way of saying that other

    functional considerations may impose a "conciseness" requirement).


These considerations lead us to:


2b V (following from 2a): Restricting the representational vocabulary

    as much as possible forces similar meanings to be

    represented in terms of shared vocabulary items.


To this we can add:


3a R: The representation of an utterance should make explicit those

    inferences that were needed in order to understand it.  (These

    vary depending on context and purpose.)


3b V (following from 3a): Not much to say here, except that

    representational vocabulary must be able to express the inferences

    that are made in understanding.


To sum up so far: the representation of an utterance must be

unambiguous -- or at least, far less ambiguous than the utterance

itself -- it must make explicit the implicit content that was

inferred, and it must reflect semantic relatedness to other utterances

(e.g., paraphrases).


How else can representations facilitate inference?  Most directly, by

pointing out what inferences are necessary or possible.


    What is already known about some instance of a concept being

    represented, and what remains to be known, should (as much as

    possible) be obvious by inspection.


    Representations should supply EXPECTATIONS.


We can get at this another way by recognizing that representations

cannot just be unstructured bundles of semantic attributes.


    "John killed the bear" does not mean the same thing as "The bear

    killed John." Because representations must be unambiguous, they

    must reflect such distinctions.


    Representations must have STRUCTURE.  They must display the

    relations among the concepts that make them up.  In slogan form:

    "Semantics requires syntax."


These structural relations give rise to expectations.  A

representation must indicate what relations are necessary or possible,

and which have already been established and which have not.
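Both points above can be sketched together (the KILL concept and its slot names are illustrative, not a claim about Schank's actual vocabulary): role-labeled structure keeps "John killed the bear" distinct from "The bear killed John", and the slots left open are precisely the representation's expectations.

```python
# Toy illustration of a structured, expectation-bearing concept.

KILL_SLOTS = ("actor", "victim", "instrument", "time", "place")

def make_kill(**fillers):
    """Build a KILL concept; any slot not supplied stays open (None)."""
    return {"concept": "KILL",
            **{slot: fillers.get(slot) for slot in KILL_SLOTS}}

def expectations(rep):
    """The open slots are exactly the expectations the
    representation supplies to further processing."""
    return [slot for slot in KILL_SLOTS if rep[slot] is None]

a = make_kill(actor="John", victim="bear")
b = make_kill(actor="bear", victim="John")

assert a != b          # same attributes, different structure:
                       # the role labels keep the readings apart
print(expectations(a))
# -> ['instrument', 'time', 'place']
```

An unstructured bundle of attributes {John, bear, killing} could not distinguish the two sentences; the role labels carry exactly the information the slogan "semantics requires syntax" demands.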


To sum up:


4a R: The representation of a concept should make clear what is known

    about it and what remains to be known, i.e., representations should

    supply expectations.


4b V (following from 4a and 1a): Vocabulary items must have structure,

    which is used to specify the inference rules that determine how

    they can be related to each other.


Let's look briefly at physical objects.  What attributes do they have,

and how are they related?


    Volume, density, mass, function, ownership, etc.


    mass = volume x density


    People who own an object with some function probably need to make

    use of that function.
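These relations among attributes can be sketched as follows (attribute names and numeric values are illustrative): one definitional rule tying attributes together, and one plausible inference licensed by ownership plus function.

```python
# Toy illustration of attribute relations on physical objects.

def mass(obj):
    """Definitional relation among attributes: mass = volume * density."""
    return obj["volume"] * obj["density"]

def plausible_inferences(obj):
    """Plausible (defeasible) rule: owning an object with a function
    suggests the owner needs that function."""
    facts = []
    if obj.get("owner") and obj.get("function"):
        facts.append(f"{obj['owner']} probably needs to {obj['function']}")
    return facts

# Illustrative numbers only (units: m^3 and kg/m^3).
bicycle = {"volume": 0.5, "density": 30.0,
           "owner": "Bill", "function": "travel"}

print(mass(bicycle))                  # -> 15.0
print(plausible_inferences(bicycle))  # -> ['Bill probably needs to travel']
```

Note the difference in status: the mass relation is definitional and always holds, while the ownership inference is merely plausible and can be defeated by further information.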


Let's now turn to claims that stem from considerations of content (the

claims themselves are T level):


5a R: Because people talk about actions, we must be able to represent

    utterances about actions.


    At the V level, we know that this entails a vocabulary capable of

    constructing such representations.  But, do we need vocabulary

    items that represent actions?  Or can we get by with representing

    only states and state changes?


    Consider McDermott's example of a man who runs around a track

    three times.  At the end, the state of the world is not changed,

    except that the man is more tired -- but this could be a result of

    doing sit-ups the entire time, or just running in place.  To

    distinguish these, we must represent the action of running itself.

    Or consider another example: How to distinguish movement itself

    from the mere change of location that results from it?
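McDermott's point can be sketched directly (the Python structures are illustrative): representing only the resulting state collapses distinct activities, while keeping the action in the representation holds them apart.

```python
# Toy illustration: states alone cannot distinguish activities
# that leave the world in the same condition.

def resulting_state(actor):
    """Running three laps, running in place, and doing sit-ups all
    produce the same state change."""
    return {actor: "more tired"}

def represent_episode(actor, action):
    """Keeping the action itself in the representation distinguishes
    episodes that end in identical states."""
    return {"actor": actor, "action": action,
            "result": resulting_state(actor)}

laps   = represent_episode("man", "run around track three times")
situps = represent_episode("man", "do sit-ups")

assert laps["result"] == situps["result"]  # states can't tell them apart
assert laps != situps                      # an action vocabulary can
```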



5b V (follows from 5a and 1a): The representation system must include

    a vocabulary of actions.


More informally (still at T level), conceptual dependency is most

concerned with vocabulary that allows us to represent:


      physical actions (e.g., eating)

      mental actions (e.g., concluding)

      temporal and causal relations among actions and states


    It is somewhat concerned with:


      physical states (e.g., health)

      mental states (e.g., happiness)


    It has very little to say about physical or mental objects.


Reading assignment:


Charniak & McDermott, ch. 1, pp. 8-28


Schank, CIP, ch. 3, pp. 22-82




Write an ELIZA (see Winston & Horn, ch. 17); the purpose of this

assignment is to get up to speed in LISP, and to learn about