Thursday, 6 December 2012

Quote #7: Amy Perfors

One implication of this is that Fodor's point is true but trivial. If one understands "concept" to mean "something that can be represented by the brain", then all of our concepts are innate – they exist in the latent hypothesis space of possible things the brain can represent (a space implicitly defined by the structure of the brain). In the more interesting sense, where "having a concept" means that the concept is available at the cognitive level – it is capable of being manipulated by the cognitive system – then it need not be innate (i.e., having always been available at that level).

Perfors, A. (2012). Bayesian models of cognition: What's built in after all? Philosophy Compass 7(2), p. 132.

Tuesday, 20 November 2012

Arab Strap - The Shy Retirer

Quote #6: Edward Tsang (2008)

It is also worth noting that computation itself involves a cost. Knowledge acquisition (e.g. to find out the travelling costs between two cities) could also involve costs. A rational agent should not only minimize travelling cost. It should attempt to minimize the travelling cost plus the cost of computation and knowledge acquisition. (p. 63-64)

Tsang, E. P. K. (2008). Computational intelligence determines effective rationality. International Journal of Automation and Computing 5(1), 63–66.
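Tsang's point is easy to make concrete. Here is a toy sketch of my own (not Tsang's; the random cities, the STEP_COST price per evaluated candidate, and all function names are hypothetical) in which an exhaustive route search finds the shortest tour but loses once computation is charged for:

```python
# Toy illustration: travel cost alone vs. travel cost + computation cost.
import itertools
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(8)]
STEP_COST = 1e-4  # hypothetical price of one evaluated candidate/comparison

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(cities[order[i]], cities[order[i + 1]])
               for i in range(len(order) - 1))

def exhaustive():
    # evaluates all 8! orderings: minimal travel cost, maximal computation
    steps, best = 0, float("inf")
    for perm in itertools.permutations(range(len(cities))):
        steps += 1
        best = min(best, tour_length(perm))
    return best, steps

def greedy():
    # nearest-neighbour heuristic: suboptimal travel, hardly any computation
    steps, travel, current = 0, 0.0, 0
    unvisited = set(range(1, len(cities)))
    while unvisited:
        steps += len(unvisited)  # comparisons made by min()
        nxt = min(unvisited, key=lambda c: dist(cities[current], cities[c]))
        travel += dist(cities[current], cities[nxt])
        unvisited.remove(nxt)
        current = nxt
    return travel, steps

for name, fn in [("exhaustive", exhaustive), ("greedy", greedy)]:
    travel, steps = fn()
    print(f"{name:10s} travel={travel:.3f} "
          f"computation={steps * STEP_COST:.3f} "
          f"total={travel + steps * STEP_COST:.3f}")
```

With these (arbitrary) numbers the exhaustive searcher wins on travel cost but loses on the total, which is exactly the effective-rationality point.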

Quote #5: Roger Brown (1957)

Brown says so many brilliant things in Words and Things. By the way, I still need to work out the connection he alludes to somewhere in the beginning between 1950s psycholinguistics, early generative grammar, and the emergence of information theory.
Read the chapter on linguistic determinism the other night. Some quotes:

"Murdock [G.P, (1949). Social Structure] has studied kinship terminology in 250 societies; he notes the the English word "aunt" applies to four distinct biological relationships. We don't have separate words for these while some other languages do. The absences of words is not the same as the absence of names. Murdock calls the four relationships "father's sister," "mother's sister," "father's brother's wife," and "mother's brother's wife". In all our examples of denotational discrepancy, it is not correct to say that one language has names for distinctions which another language cannot or does not name. It is always possible to name the categories in both languages so long as the nonlinguistic experiences are familiar. Since members of both linguistic communities are able to make differential response at the same points, we must conclude that both are able to see the differences in question. This seems to leave us with the conclusion that the world views of the two linguistic communities do not differ in this regard." (p. 235)

"Doob [(1950). Goebbel's principles of Propaganda, Public Opinion Quarterly 14:419-452] has suggested that Zipf's Law bears on Whorf's thesis. Suppose we generalize the finding beyond Zipf's formulation and propose that the length of a verbal expression (codability) provides an index of its frequency in speech, and that this, in turn, is an index of the frequency with which the relevant judgements of difference and equivalence are made. [...] I will go further and propose that a perceptual category that is frequently utilized is more available than one less frequently utilized. [...] It is proposed, really, that categories with shorter names (higher codability) are nearer the top of the cognitive deck -- more likely to be used in ordinary perception, more available for expectancies and inventions" (p. 235-236)

After reading Ferreira & Patson (2007)

If a constraint such as bounded rationality or limited memory can be used as an explanatory factor for some behavior, you kill two birds with one stone: on the one hand, you make a weaker assumption about the potential of the object of modelling; on the other hand, you explain more of its behavior. An important agenda for the cognitive sciences is to find the boundaries of cognition and use them to explain behavior.

Monday, 19 November 2012

Ferreira & Patson (2007): The 'Good Enough' Approach to Language Comprehension

Ferreira and Patson (FP) give an overview of their "Good Enough" view of language processing, which holds that "... the language comprehension system creates syntactic and semantic representations that are merely ‘good enough’ (GE) given the task that the comprehender needs to perform. GE representations contrast with ones that are detailed, complete, and accurate with respect to the input." This is a very appealing idea, especially considering FP's argument against 'unbounded rationality'. A system that arrives quickly at the correct interpretation most of the time using low-level heuristics might have an evolutionary advantage over a system that arrives at the correct interpretation all of the time, but does so very slowly.

If we accept that time pressure on the processing side might lead to a local-heuristics system, there are profound consequences for the cooperative speaker. Assume that the cooperative speaker knows the hearer is like himself in that, depending on the task, she will use certain heuristics to infer the most likely speaker intention. If this is the case, the speaker will adjust his verbalization in such a way that the heuristics can lead the hearer to infer the speaker's intention. This can be done using the usual suspects: minimize code length while maximizing the disambiguation potential (MDL, basically), and use as much shared code as possible (convention). At the same time, the multiplicity of construals of a situation leading to the same, simple code allows the speaker to manoeuvre the message for his own benefit.
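To make the trade-off explicit, here is a toy formalization of my own (not FP's; the intentions, candidate utterances, and AMBIGUITY_WEIGHT are all made up): the speaker scores each candidate by its code length plus a penalty for every competing intention the hearer's heuristics would leave open, and picks the minimum.

```python
# Speaker's choice: minimize code length + residual ambiguity.

# utterance -> the set of intentions a heuristic hearer could map it to
CANDIDATES = {
    "salt?": {"ask-salt"},
    "could you pass me the salt, please?": {"ask-salt"},
    "pass that": {"ask-salt", "ask-pepper"},
    "mm, needs something": {"ask-salt", "ask-pepper", "comment-food"},
}
AMBIGUITY_WEIGHT = 5.0  # hypothetical cost per competing intention left open

def score(utterance, target):
    code_length = len(utterance)  # crude proxy for MDL code length
    residual = len(CANDIDATES[utterance] - {target})
    return code_length + AMBIGUITY_WEIGHT * residual

print(min(CANDIDATES, key=lambda u: score(u, "ask-salt")))
# -> "salt?": the shortest code that still singles out the intention
```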

(Aside: doesn't the speaker's perspective help explain language's dislike of pleonastic verbalization? The pleonastic speaker unduly burdens the hearer's memory.)

What would this mean for language acquisition? If the developing child understands himself to be a communicative agent, he will (at a certain age) also understand others to have the same properties. Therefore, given an utterance U = (w1...wn) uttered under a (hypothesized) set of intentions I, the child will expect
1) the elements of the intention that signify it (cf. Verhagen 2009) to be the minimal set of smaller meaning constructs that maximally distinguishes the intention i from all other intentions i' ∈ I (see the sketch below);
2) this set of verbalizable semantic elements to at the same time maximize the intention's potential to be expressed: that is, the elements will be the most conventional and entrenched ones.
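A speculative sketch of expectation 1) (my formalization, not anything from Verhagen 2009; the intentions and their semantic elements are invented): brute-force the smallest subset of a target intention's elements that no competing intention contains in full.

```python
# Find the minimal set of semantic elements that maximally
# distinguishes a target intention from all competitors.
from itertools import combinations

INTENTIONS = {  # hypothetical intentions as sets of semantic elements
    "i1": {"TRANSFER", "SALT", "REQUEST"},
    "i2": {"TRANSFER", "PEPPER", "REQUEST"},
    "i3": {"SALT", "COMMENT"},
}

def minimal_distinguishing_set(target, intentions):
    elements = sorted(intentions[target])
    for size in range(1, len(elements) + 1):
        for subset in combinations(elements, size):
            s = set(subset)
            # s distinguishes the target if no competitor contains all of it
            if all(not s <= other for name, other in intentions.items()
                   if name != target):
                return s
    return set(elements)

print(minimal_distinguishing_set("i1", INTENTIONS))
# -> {'REQUEST', 'SALT'}: the smallest signature unique to i1
```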

(connections with Chafe's 1970 linearization & deletion ideas, Chafe's observed asymmetry between production and perception, metonymy-as-grammar in Verhagen 2009)

Friday, 16 November 2012

Quote #4: Chafe (1970)

"It is not necessary, either, to assume that the speaker operates always and only in terms of underlying forms and processes. As a matter of fact, he undoubtedly memorizes directly the phonetic structure of specific words and sentences a good deal of the time. In so doing, it is significant to note, he is actually achieving one kind of economy while bypassing another. He is not taking advantage of the generalizations which underlying forms and processes afford, but he is making things simpler in individual instances by ignoring that whole abstract apparatus in favor of a more direct jump from meaning to sound. It is only in terms of the language as a whole, not in terms of individual, frequently used items, that the device of underlying forms and processes represent a greater economy. I suspect, in fact, that a speaker of a language achieves some kind of balance, in a way that is not at present understood, between direct phonetic symbolization and symbolization mediated through processes of the sort described above [Chafe's post-semantic processes; BB]. I suspect also that different speakers may achieve different balances between these two opposite kinds of economy. It may even be that such differences between speakers constitute one of the principal causes of further language change itself, so that the interaction between language change and the complexities of symbolization perpetuates itself through a momentum of its own." (Chafe, Meaning and the structure of language, pp. 37-38)
More good ideas so far: semantics as a deep structure that is linearized in order to be coupled to a sentence - that is, further incorporating syntax into semantics.

Thursday, 15 November 2012

Quote #3: Dennett

Thanks to Max van Duijn: "Big discoveries are soon going to come from liberal arts faculties where clever students sit alone with their laptops--not from the giant labs where they are too busy working out the details" (Dennett at the Dennett colloquium, or thereabouts)

Saturday, 3 November 2012

Connan Mockasin

Forever Dolphin Love & Remember The Time, live at Metropolis Festival 2012

Friday, 2 November 2012

Quote #2: Wallace Chafe (1970)

[...] the complexities of the universe, linguistic or otherwise, are so vast that one cannot help but be awed and humbled by them, and that arrogance in a linguist betrays at least a lack of perspective on the problems which confront him. (Wallace Chafe, in Meaning and the structure of language. Chicago, 1970. p. 2)

Quote #1: Harald Baayen

(Harald Baayen, talking today @ UofT about the preponderance of linear models in linguistics and a neat, categorical, modular world view:) "A simple world with a simple statistical technique" (continuing on why he believes mental operations and representations are more complex, dynamic, and non-modular:) "Under such a world view, you don't expect things to be linear all the time"