Friday, October 29, 2010

Comments on “Why the Mind is not a Computer” by Raymond Tallis

1. I strongly agree with the aim of this book, which is to combat, by reaching the truth, the destructive effects of neurophilosophy and the Identity Theory (IT) on human freedom, dignity and behaviour. Tallis starts with an excellent deconstruction of the neurobabble that dominates so much present-day discussion in the area of brain and consciousness (vide Frege, and Ryle, and later Dennett, Chalmers, Blakemore, Carter, etc.). His principal diagnosis is that these “science-cringers” (what a marvelous term!) believe that people are machines. They also attribute to the brain properties and activities that properly belong to the person whose brain it is. In particular he claims that the brain does not compute—people compute. Computers do not compute either—they are merely accessories to the people who program and use them. He also rejects the Identity Theory and says that certain brain activities are necessary but not necessary and sufficient conditions for “ordinary consciousness and behaviour” (p. 29). He puts it thus: “To see people as machines—genetically determined or programmable—is no light matter…Neurophilosophy is simply wrong about human beings and their place in—and outside of—nature.”
He equates, provisionally, neurophilosophy with scientism and quotes Tzvetan Todorov, who linked scientism with the development of Nazi and Stalinist ideology. Tallis qualifies this by saying that we should perhaps not take scientism in a healthy society too seriously—“or perhaps we should”. He suggests that only in a society sick for other reasons does scientism lead to wickedness. On this point, I suggest, one needs to read John Cornwell’s book “Hitler’s Scientists”, which will soon incline us to leave out any ‘perhaps’ in our estimation of the potential evils of scientism, given the frailty of human beings.

79 comments:

  1. 2. Viktor Frankl, Professor of Neurology and Psychiatry in the University of Vienna, was one of the leading psychotherapists of his day. I first met him at the Alpbach Symposium in 1968. He founded a system of psychotherapy called logotherapy based on the search for meaning in life. His survival of Auschwitz gives him particular authority to speak on these matters. In his book The Doctor and the Soul (Frankl, 1973b) he says that the social reductionism of modern psychology—that man is nothing more than a mind-machine, a pawn of drives and reactions, the mere product of instinct, heredity and environment—leads to nihilism and corruption. He says that the gas chambers of Auschwitz were
    “…the ultimate consequence of the theory that man is nothing but the product of heredity and environment—or, as the Nazis liked to say ‘of Blood and Soil’… I am absolutely convinced that the gas chambers of Auschwitz…were ultimately prepared not in some ministry or other in Berlin, but rather at the desks and in the lecture halls of nihilistic scientists and philosophers”.

  2. 3. However, I disagree with some of the details of Tallis’s argument.
    Let me start with a key sentence:
    “The truth is, we cannot hold that the brain is a computational machine and that it carries out calculations, most of which are unconscious and that these calculations are the basis (and the essence of) consciousness.” (p. 42)
    This sentence raises three independent questions.
    —Does the brain carry out calculations (computations, etc.)?
    —If so are “most of” these unconscious?
    —And if so do these calculations form “the basis (and the essence of) consciousness.”?

    I do not give the first question the importance Tallis does. I regard the third question as crucial. I suggest that the important point is not whether we say (A) the brain computes, or whether we say (B) the brain is the seat of vastly complex neurochemical reactions. The important question is what relation all this brain activity (whatever we call it) has to (C) the events that we experience in consciousness. If we say that either, or both, A and/or B are identical to C, then we are committed to agreeing that people are, after all, machines. If we deny this, and can develop a robust theory supported by facts to show that consciousness is ontologically independent of the brain, then events in a person’s brain become causally related to, and not identical with, all the events in that person’s consciousness—as detailed in my previous postings about the theory of material (spatial) dualism. This would stop zombieism dead in its tracks.

  3. 4. I also certainly agree we cannot say things about the mental capacities of the brain that involve conscious events. We cannot say, for example, that the brain thinks, or sees, hears, perceives, or feels pain, or hates, or loves, or has hallucinations or delusions—all of which include existential elements of phenomenal consciousness. The truth is that I think, I see, I feel, I hate, I feel pain, etc. This format, I feel, is more basic than saying that a person does all these things. My classification as a person, I suggest, comes afterwards. Of course I would not deny that I am a person. But when I voluntarily lift my hand I do not say to myself “A person is doing this”—I say “I am doing this”.
    My brain merely contributes some of the wholly unconscious mechanism that allows me to do these things. But I see no reason why we cannot say that parts of the brain are involved in wholly unconscious activities using words that can also be used to describe what people do. The corpus striatum, for example, is involved in the wholly unconscious activity of planning movements. We can also say, because there is clinical evidence to prove it, that the visual system can evaluate the salience of a visual stimulus. The visual system can estimate the probability that the input from the optic nerve corresponds to the existence of the object that it purports to. If this system in the brain finds this probability too low, there is now evidence that it can override the retinal input and provide (by causal relations between the optic cortex and the visual field) another more probable phenomenal object in the visual field in consciousness instead (Kovacs et al. 1996).

  4. 5. So I would answer the three questions put by Tallis thus:
    —Yes, in certain instances, but this may not be important.
    —All of them are unconscious.
    —No, none of them form “the basis and essence of consciousness”. Instead some brain events are causally related (in both directions) to the events that make up a person’s consciousness.

    I suggest that the way to defeat attempts by neurophilosophers and Identity Theorists to show that human beings are zombies, is to establish the ontological independence and integrity of phenomenal consciousness that includes its subjective Self.

  5. I am hoping that Ray Tallis will himself reply to John's quite relevant posting and his cogent questions, but I am certainly glad that John sees the value of Ray's little book and the overall validity of his line of argumentation.

    Sharing as I do John's humanistic sympathies (as does Ray), I shall only comment (for now) that one should perhaps distinguish between "importance" (or perhaps better: "explanatory value") and *logic* in judging theories of brain function, and especially as relates to consciousness.

    To attribute thinking, let alone 'planning,' 'evaluating salience,' or 'estimating probability,' to parts of the brain is just what Tallis expressly seeks to deny is taking place, but on *logical* grounds alone because, as I have stated previously, it entails fallacious reasoning and, surely, our goal should be to reason correctly. What, then, is the fallacy in this case?

    There is a short and concise Wikipedia entry for the "fallacy of division," which occurs when one reasons that something true of a thing must also be true of all or some of its parts:
    http://en.wikipedia.org/wiki/Fallacy_of_division
    As it happens, the Wikipedia entry uses the brain "thinking" as an example to illustrate the fallacy (and what would constitute examples of what T.K. Österreich called "pseudo-localization").

    The main point is that it is the *reasoning itself* that is fallacious, not the particular example of brain and thinking: it is people who think, not brains, whether it is I who am thinking or you who are thinking. We have good reason to believe that animals also think, if not in the way or to the degree of intelligence with which we do.

  6. Funny to say, but even the Wikipedia entry for the "fallacy of division" itself commits the fallacy by ascribing thinking to the brain rather than to people! (I didn't catch that.)

  7. 1. An example of what might be called unconscious 'planning' in the brain is the activity that goes on in the corpus striatum and striato-cortical networks before a voluntary movement is made, say reaching out to pick up an apple. This co-ordinated unconscious activity takes place in order to get the muscles in the right positions so that the voluntary movement can be made most smoothly and efficiently. However, if the word 'planned' leads to undesirable metaphysical overtones, it seems to me that it would be quite in order to replace it with "co-ordinated unconscious activity" or "CUC". Would that satisfy Ray?
    As for 'estimating probability', if one does not allow that, it becomes necessary to come up with something else to explain the Kovacs experiment. However, I do not think that the logical problem of the "fallacy of division" ("...something true of a thing must also be true of all or some of its parts") arises in this case. In the Kovacs experiment the 'whole' (the 'person', the 'organism', the 'body') is not making any probability judgement. The entire process is unconscious. Only the part, i.e. the visual cortex, is making the probability judgement. Consciousness is involved only to the extent that qualia are changed. The only judgement here is the recognition judgement between a scene of mixed monkeys and leaves, and scenes of only monkeys and only leaves. No probability judgement is involved at this level at all.

  8. 2. As for 'estimating salience', I think we are dealing here with a two-level process—not a part-whole process.
    Estimating salience is attributing importance (or potential importance) to an incoming stimulus pattern (e.g. "predator!!!"). At the conscious level, say, I am at a party in a room with a lot of people milling about doing nothing in particular. Then I suddenly notice the person of my lawyer entering the room. I fix my eyes upon him. He turns and walks towards me. I focus closer. Then I notice he has a sinister-looking document in his hand—I focus closer still. His presence and behaviour have developed increasing amounts of salience.
    Contrast this with the neurological process that builds this salience up. My brain receives a continual and massive sensory inflow—most of which is not important. At the party I let my attention wander. Then a potentially salient stimulus appears—my lawyer. The impulses from my retina are distributed throughout the visual cortex and stimulate two separate populations of particular and distributed neurons that fire whenever the figure of my lawyer enters their receptive field. Result—I recognise my lawyer. But I do not recognise my lawyer by examining the qualia in my phenomenal visual field in consciousness and saying to myself "Aha! Here's Jones: I wonder what the trouble is?" If I were afflicted with the disease known as apperceptive agnosia I would have all the same qualia, but I would be unable to recognise my lawyer. Conversely, in blindsight, I would have no qualia but could still recognise my lawyer by extra-V1 vision. (That may be somewhat of an exaggeration!—but it works in the case of simpler objects.) As Edmond Wright has concluded, "Qualia are phenomenological but not epistemic factors in vision." So I see my lawyer using one set of brain mechanisms and recognise him by using a different set of brain mechanisms. So, I suggest, recognising the salience of a stimulus cannot be attributed to a simple response of the 'person' or of the whole organism. It is rather a compound response involving two separate parts of the organism acting in collaboration.

    Unfortunately the language of intentionality, which implies conscious activity, is now so deeply ingrained in (cognitive) (neuro)science and used routinely in relation to non-conscious and/or physiological processes, that John can hardly be blamed for employing it, any more than the rest of us can be, since we all probably lapse into it for that reason.

    But we really need a more conceptually neutral technical language that is perhaps closer by analogy to "machine language" in computer science, and which has no relationship to natural language whatsoever except to be called "language." Machine language runs the hardware to instantiate computer programs that are written in yet another "language" (a "programming" language) which, itself, is devised to carry out a computation or analysis that a person could also, in principle, perform--and, most importantly, the performance of which is the very reason for the computer: computers have been designed by people to carry out what people want them to do.
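
    To make that layering concrete, here is a toy illustration of my own (in Python; the bytecode shown is of course the CPython virtual machine's instruction set, only a stand-in for literal machine language):

    ```python
    import dis

    # A one-line "program" written in a high-level programming language...
    def add(x, y):
        return x + y

    # ...and the lower-level instruction stream it is compiled to. The person supplies
    # the purpose (adding two numbers); the instruction stream merely drives the machinery.
    dis.dis(add)
    ```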

    If anything, artificial intelligence, so conceived, is better called "automated intelligence" because it is automating our intelligence or intelligent behavior, much like a pocket calculator performs calculations that it has been programmed to perform (or even an old-fashioned adding machine or cash register).

    It is not that the use of intentional language leads to metaphysical complications; rather, it is that such language is illogical and involves unsound arguments, as the example of the fallacy of division illustrates. Again, it is people who plan, or estimate probabilities (with or without pocket calculators), not parts of their brains; otherwise one falls prey to the same logical defect that assails phrenology and faculty psychology.

    Unfortunately neuroscience seems blissfully unaware of this difficulty for the most part!

  10. I would counter that it is still the person, as a whole, that recognizes the lawyer. "Recognizing" is a cognitive behavior, in a sense involving the whole body of the person, because if one's circulation stopped, no recognition would take place.

    In his chapter "Functions of the Brain," I believe that William James already in 1911 best explained the logical defect of the fallacy of division, though without referring to it by name, speaking instead of "homunculi," an analogy that was subsequently picked up by the generations of psychologists and neuroscientists that followed but who, alas, by and large, did not recognize that it (the homunculus problem) is primarily a logical and not an empirical problem:
    http://books.google.com/books?id=lbtE-xb5U-oC&pg=PA29&lpg=PA29&dq=%22william+james%22+homunculi&source=bl&ots=uSTV3bV8y5&sig=MFIoVcIwt0kYxW2sxKyZE5SxFp8&hl=en&ei=IRHPTLXrK4-4sAPZ-NnjDg&sa=X&oi=book_result&ct=result&resnum=4&ved=0CCEQ6AEwAw#v=onepage&q&f=false

  12. It would be nice to have some comments on my contribution (i.e. CUC) to providing a CNTL (conceptually neutral technical language) for the word 'planned'.
    Re "Information": we certainly need a CNTL for that. Ray may provide one (p.68) where he suggests the use of "potential information" for information outside a conscious organism. Could we say that nerve impulses and neuronal electro-chemical operations carry 'potential information'? This is only converted into information proper when this brain activity (NCCs) has caused the necessary correlated activity in the ontologically independent consciousness of that individual.
    Also could one postulate that the entire physical universe is merely a device for Selves to communicate with each other? The physical universe thus could be said to carry information but not create it?
    So ingrained are these bad habits that the neuroscience community will need a megaton jolt before they give them up. Another universal bad habit—using 'visual field' to cover both the stimulus field (outside the organism) and the visual field proper (inside the organism).

  13. A counter-counter to Bill's counter which was:

    "I would counter that it is still the person, as a whole, that recognizes the lawyer. "Recognizing" is a cognitive behavior, in a sense involving the whole body of the person, because if one's circulation stopped, no recognition would take place".

    I agree that the person as a whole, not just his/her brain, not just his/her body, recognises the lawyer. For this, activity in a particular part of the brain (that which is damaged in associative agnosia) is a necessary, but not a necessary and sufficient, condition. The person has to be conscious too (in the medical sense). The key point, however, is that a 'person' is not composed only of his/her "whole body": people have conscious (phenomenal) minds too. Recognition is a function of the whole person—body and mind (ignoring NDEs for the moment). Furthermore, I think that 'recognising' is more than a behaviour—it is an experience also.

  14. Granted, recognition involves whole people, not just whole bodies, and also is experienced--often at first as a "feeling of knowing" (a so-called FOK experience).

    I should think an obvious model for a CNTL is provided by other organs and organ systems in the body, and how science talks about them and their electrical activity. Rama was pointing out to me last week that the stomach has its own nervous system, for example. How is its action described?

    It is not at all obvious that (visual) consciousness and brain activity do more than parallel each other, as in psychophysical parallelism of some form or other. In some ways we have not progressed very far from the neural correlates of vision reviewed by Wm. James, in which he talks of "sensorial blindness" vs. "psychic (mental) blindness" of cortical origin, the former being total blindness, the latter being what today we call associative agnosia. That was in *1911!*

  15. Glad we agree about whole people Bill!

    I am here concerned with the sheer scale of the problem of introducing logical and metaphysical order to neuroscience (“metaphysics” here means being transparent about one’s own, often unconscious, assumptions in the E.A. Burtt sense).
    To illustrate this I have gone through one important paper by two neuroscientists at Queen Square, whom I know well (pals of Terry too). I have based a neurological paper of my own on the data contained in this paper (from quote 8 below).

    Here is a quiz, based on what I found about their metaphysics, presented as an exercise in the use of ‘information’, ‘processing’, ‘representation’, ‘signal’ and ‘perception’. Which of these quoted usages do you consider to run counter to the Tallis criteria, and how should these examples be altered to conform with those criteria?

  16. [From Yu & Dayan 2002 “Acetylcholine in cortical inference” Neural Networks 15, 719-730.]

    1. In this paper, we present a theory of cortical cholinergic function in perceptual inference based on combining the physiological evidence that ACh can differentially modulate synaptic transmission to control states of cortical dynamics, with theoretical ideas about the information carried by the ACh signal. Crudely speaking, perception involves inferring the most appropriate representation for sensory inputs. This inference is influenced by both top-down inputs, providing contextual information, and bottom-up inputs from sensory processing. We propose that ACh reports on the uncertainty associated with top-down information, and has the effect of modulating the relative strengths of these two input sources. Many cognitive functions affected by ACh levels can be recast in the conceptual framework of representational inference.

    2. We have suggested that one role for ACh in cortical processing is to report contextual uncertainty in order to control the balance between stimulus-bound, bottom-up processing, and context-bound, top-down processing.

    3. The models have in common the idea that neuromodulation should alter network state or dynamics based on information associated with top-down processing. However, the nature of the information that controls the neuromodulatory signal, and the effect of neuromodulation on cortical inference are quite different in the two models.

    4. We propose that ACh levels reflect the uncertainty associated with top-down information, and have the effect of modulating the interaction between top-down and bottom-up processing in determining the appropriate neural representations for inputs.

    5. This inference is influenced by both top-down inputs, providing contextual information, and bottom-up inputs from sensory processing.

    6. For simplicity, we only consider one of the most basic forms of top-down contextual information, namely that coming from the recent past. That is, we consider a series of sensory inputs whose internal representations are individually ambiguous.

    7. …uses temporal contextual information, consisting of existing knowledge built up from past observations,

    8. At the network level, ACh seems selectively to promote the flow of information in the feed-forward pathway over that in the top-down feedback pathway. Existent data suggest that ACh selectively enhances thalamo-cortical synapses via nicotinic receptors (Gil, Conners, & Amitai, 1997) and strongly suppresses intracortical synaptic transmission in the visual cortex through pre-synaptic muscarinic receptors (Kimura et al., 1999).
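
    For what it is worth, stripped of the contested vocabulary, the proposal quoted above amounts to something like the following weighting scheme (a minimal sketch of my own in Python, not Yu and Dayan's actual model; the variable names are illustrative assumptions):

    ```python
    # Toy gloss of "ACh reports top-down uncertainty and modulates the balance between
    # top-down and bottom-up processing". Nothing here is conscious, perceives, or infers;
    # it is arithmetic performed on two numbers.

    def combined_estimate(top_down, bottom_up, ach_level):
        """ach_level in [0, 1]: higher contextual uncertainty -> rely more on bottom-up input."""
        return (1.0 - ach_level) * top_down + ach_level * bottom_up

    top_down_expectation = 2.0   # value suggested by recent context
    bottom_up_measurement = 5.0  # value suggested by the current sensory input

    for ach in (0.1, 0.5, 0.9):
        estimate = combined_estimate(top_down_expectation, bottom_up_measurement, ach)
        print(f"ACh = {ach:.1f} -> combined estimate {estimate:.2f}")
    ```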

  19. Comment. These are just a few of the instances in this paper of what Ray calls the ‘technical’ use of these terms.
    In view of the formidable extent of this usage, I suggest that a simpler solution than my previous remedy be tried. This would be to use italics, e.g. ‘information’, whenever the term was being used in its technical sense—combined with self-discipline to avoid the slithering between the technical and non-technical usages Ray refers to. Not that neuroscientists are likely to pay any attention to this advice, but, if they do, only the simplest form is likely to be taken up.

  20. All these excerpts exemplify the unchecked "neurobabble" and its terms that Ray defines in his lexicon, and that I would say results in an Emperor's New Clothes situation. The brain has become like a hugely complex Rorschach ink blot, a "projective test," in which neuroscientists can project their biocybernetic fancies. I cannot imagine them discussing the action of the GI tract in a similar manner (but I may be wrong).

    For this to be more than reading tea leaves (or pigeon entrails), there would have to be some criteria for *proving* the theory of ACh "function" they offer, which, among other things, seems to redefine perception itself in the process ("crudely speaking, perception involves inferring the most appropriate representation for sensory inputs"--that is a theory of perception, not just an interpretation of neurophysiological processes).

    I don't think saying as a caveat that neurobabble involves a "technical" use of the words in question will do, because it is not just their use that is a problem, but the logic behind their use--as Ray has explained in his book. Whether technical or not, it is the logic that is flawed as much as the choice of words.

    This shows why liberal arts education traditionally started with philosophy and the classics: to teach young college students how to think clearly, rationally, and logically, using the great tradition of Western philosophy as an example for that purpose.

  21. I thought that I might interject a few thoughts into the debate here. It strikes me that the debate over whether the brain is or is not a computer is rather like the debate over whether the eye is or is not a camera; a lot depends on how broad or narrow a definition is used, and very robust senses can be used in both cases which do not invoke the existence of designers and are not specific concerning materials used, etc. There is even a fairly innocuous use of intentionality (which Dennett, and this is one of the rare cases where I agree with him, calls the "intentional stance"), which just involves the fact that it is possible to describe a system as purposive without attributing anything like an Aristotelian entelechy to it. Thus, I think that it would help the debate to define "computer" and then say in what respects the brain either is or is not one.

  22. Of course, the pros and cons of whether the mind (or brain) is a computer have already been reviewed and debated in Ray Tallis's book, Bob. He responds to Dennett's idea of the "intentional stance" by showing that Dennett makes a straw man of intentionality, denying its objective existence in order to support such a "stance" (something that therefore seems inherently self-defeating to his position):
    http://books.google.com/books?id=xxCQ72zMBKsC&pg=PA49&dq=%22intentional+stance%22+intitle:why+intitle:the+intitle:mind+intitle:is+intitle:not+intitle:a+intitle:computer&hl=en&ei=SYLRTJOoCpCusAPPwNDWCw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCgQ6AEwAA#v=onepage&q=%22intentional%20stance%22%20intitle%3Awhy%20intitle%3Athe%20intitle%3Amind%20intitle%3Ais%20intitle%3Anot%20intitle%3Aa%20intitle%3Acomputer&f=false

    So it might be useful to respond to Ray's objections, especially since he has already covered this ground in his book, and because he is reading this blog.

    I would deny categorically that a "robust" case can be made that the mind/brain is a computer, for the various reasons I have already given, as well as those in Ray's book; and a weak sense of the mind/brain being a computer is trivial, because it is based only on an analogy which (conveniently) ignores evolutionary biology, and how a computer would evolve in the natural world.

    Probably first and foremost is the fact that computers are man-made devices (machines), not made by nature, so ascribing the function of "computation" to a living organism seems a bit fatuous (and anthropomorphic) on the face of it, yet it is routinely done without justification.

    Since the idea that the mind/brain was a computer did not come into play until computers did, doesn't it seem a little too much of a coincidence that one keeps "upgrading" the technological metaphors used to describe the mind/brain from telegraph, to telephone, to TV, and now to a computer, and that--voila--we have finally hit upon the technology that most adequately characterizes the brain, having now invented computers, which are *intentionally* designed by computer manufacturers to mimic our thought processes?

    Running the risk of repeating myself, a computer is something used by people. We do say that we "use" our minds, that we "think" with our brains, but how literally one can take those statements, and how far, is what Ray has sought to elucidate in his book. I find it perhaps significant that most of us do not say that we "think with our computer," though a surprising number of youngsters falsely believe that their computers do think!

    If the mind/brain is a computer in any sense, why do we not say that we think with the computer between our ears? Somehow that linguistic stretch has not been made in ordinary language, even steeped as we are today in computerspeak.

  23. Further to the above I heartily recommend reading what Israeli neuroscientist Yadin Dudai writes about the "Model/Metaphor Fallacy" in his chapter "The Neurosciences: The Danger that we will think that we have understood it all" in "The new brain sciences: perils and prospects" (CUP, 2004): http://books.google.com/books?hl=en&lr=&id=qVotYD655wkC&oi=fnd&pg=PA167&dq=metaphor+dudai&ots=6LIJ9teeES&sig=4S87t_HJYKXSSd2T_M90-2KBb3s#v=onepage&q=metaphor%20dudai&f=false

    Dudai notes, "Though very helpful, metaphors, like working models, should be treated as no more than stepping stones to better understanding. The investigator must not fall into the trap of believing that the metaphor has sufficient explanatory power to account for reality, so confusing the two. Further, even when the metaphor is understood as not have [sic] sufficient explanatory power, some attributes may leak from the metaphorical domain to the real world target domain, and lure investigators into false parallels."

  24. I have a bit of a quibble, Bill, with the claim that computers need be man-made, since one can consider such situations as robots constructing computers, and computer-aided design, but perhaps your claim is that man has to be ultimately involved here somewhere. If nature evolves something that is functionally isomorphic with a man-made computer, not calling it also a computer sounds to me a bit arbitrary, but it is a matter of definitions. I will admit though that I don't think that any of this at all helps in understanding how the brain generates consciousness.

  25. Exactly, Bob. The same situation exists in comparing any biological system with something man-made since, as you note, even robots are man-made. It is not even clear that Nature "makes" things in the same way we do.

    Calling the brain a computer, as I summarized in a previous historical note here, did not result from a process of definition, but from analogy, moving first from talk of "human information processing" without specific reference to the brain, to computation, and then to the brain actually being a computer.

    So it is a matter of taking an analogy literally, when it is not clear that it applies except in a superficial way--especially since it is people who designed computers to mimic what we do (computations), not the other way around. For one thing we know that myelin is not a very good insulator, and if neurons are wires, they leak quite a bit--thus not a very good electrical system by comparison with my Canon laptop.

    Should one assume that all the other technological devices to which the brain has been likened are also "functionally isomorphic" with it? I have yet to see a thorough conceptual analysis that specifically addresses the requirements of brain function being functionally isomorphic with computer function and, as I posted recently, there is now a growing experimental literature that is arguing against the whole analogy in order to make sense of experimental results, in which the analogy no longer proves useful to interpret them.

    Why should a computer or something like one be necessary for consciousness at all? Of course Marr and his associates conceive of simple vision as being a computational process. John has already cited Edmond Wright (above) as concluding that "Qualia are phenomenological but not epistemic factors in vision." This accords rather well with my contention that VS is not particularly "smart," given how much use it is to persons suffering from associative agnosia, which John and I have described above.

  26. I have already referred here, I believe, to Howard Bursen's brilliant monograph, "Dismantling the Memory Machine: A Philosophical Investigation of Machine Theories of Memory" (1978), which caused those in A.I. who reviewed the book to have conniptions, failing to realize as they did that, in showing any trace theory of memory to be implausible because it either (1) requires a homunculus or (2) works by magic, Bursen was ultimately referring to biological memory, not A.I. Nor did they seem to understand that A.I. is only *automating* what we already do, or finding ways to do that, but may or may not shed any light on how we actually do it (thus, again, Crick's quip in response to one such A.I. proposal: "we want to know how Nature does it!").

  27. The idea that the brain "is" a computer goes back farther than I thought. Just Googling that phrase produced a monograph by early cyberneticist, Frank Honywill George, "The Brain as a Computer," first published in 1962. I must confess that I do not recall the book or Prof. George, not that that means anything per se, except I do not remember encountering his name or his book when I studied cognitive psychology in the 1970s. George was apparently Professor of Cybernetics at Brunel University in the U.K.

    He writes toward the end of the book that it is his hope "that most readers will find the arguments sufficiently clear and cogent to convince them that there is a perfectly good sense in which we should want to say that the 'brain is a computer.'" Notice that he writes "sense" and "say," which already implies something not literal, instead of a definite, positive, declarative statement: "The brain is a computer."

    Of course, what he does not say is that as a cyberneticist he has an axe to grind in making this claim, namely, to promote the idea (the belief) that what is called A.I. is the same as human intelligence--a claim that has never been proven.

  28. Need I add that a field that has defined intelligence in behavioral (or operational) terms, as A.I. has, should be satisfied with criteria of identity considerably less rigorous than those of Leibniz? Even the axiomatic "Turing test" to determine if a machine exhibits intelligence (read: intelligent behavior) has never been satisfactorily passed by a machine. So in this regard A.I. does not even comply with its own philosophical assumptions. Yet the brain is supposed to be a computer?

  29. 1. It seems appropriate at this point to ask how, in practice, we can eliminate, or at least diminish, neurobabble in the real world. Unfortunately, the great majority of neuroscientists use neurobabble all the time, feel perfectly at home in it, and see no reason to give it up—read any paper in the literature! They do not regard it as any kind of babble, and, if you try to persuade them that it is, your only reward will be a very frosty look—as I got, and more, from Francis Crick when I reviewed his "The Astonishing Hypothesis" in the journal Brain. I pointed out that he had made a mistake in saying that the 'stimulus field' was identical to the ‘visual field’—I then found myself at the receiving end of a ten-minute Crickian dressing-down!

  30. 2. Many Identity theorists tend to suffer from what we might call a “Vatican Complex”. They know that they are right, are impervious to any argument, and regard all people that disagree with them much as Pope Urban VIII regarded Galileo.
    So I suggest that we need to find a method that might work in practice.
    Two strategies suggest themselves:
    A. The first recognizes with Bob that there are degrees of neurobabble that can be dealt with differently. Obviously, a statement like “The brain thinks.” (which means “Only the brain thinks.”) reduces humans to zombies. This untrue and highly noxious statement is compatible only with the very unsatisfactory Identity Theory. But is the statement “During human mental processes the brain transmits ‘information’” (interpreted as “The brain is one of the parts of the human organism that transmits information”) logically compatible with material dualism, which implies, inter alia, that phenomenal consciousness, located outside the brain (or, with Simon, that consciousness is associated with holograms existing outside the brain), also deals with information? Can the brain be said to transmit information to a dualistic consciousness, with the latter then presenting this information, in the form of what goes on in the visual field, to the subjective Self? That is of course a vast over-simplification needing detailed scrutiny (e.g. we have to make adjustments to allow for phenomena like agnosia and blindsight). But it forms the basis for debate, as dualistic theories have not so far been discussed in detail.

  31. 3. B. The second strategy is to ask neuroscientists to abandon all uses of ‘information’, except what is rigorously correct in our estimation. It is, however, clear that they are never going to do this. Furthermore, it would be very difficult for them to do so in practice. For example, it is hard to see how Yu and Dayan could rephrase the following statement into a format that Ray would accept without completely restructuring their entire thinking—
    "We propose that ACh levels reflect the uncertainty associated with topdown
information, and have the effect of modulating the interaction between top-down and bottom-up processing in determining the
appropriate neural representations for inputs.”
    We can take it that they, and others in the same camp, are not going to restructure any of their thinking, let alone all of it.
    If we adopt the second strategy, is there not also a danger that they will then also not listen to us when we ask them to give up saying things like “Brains think”? But, if we ask them to give up saying “Brains think” (and thus help protect human freedom and dignity), and refrain from criticizing them when they use ‘information’ as a well-defined technical term, is it faintly possible that we might get somewhere?—but even that I very much doubt. As I said before, I predict that no more than a handful of neuroscientists will pay any attention to any advice from well-meaning philosophers—particularly as they have themselves the support of several very distinguished philosophers who, traitors to truth as I suggest they are, support their case.

  32. 4. Perhaps our present group could devise a series of technical terms and ideas that are based on true logic and that neuroscientists could themselves happily adopt?
    Ray and Bill argue that the trouble is caused by neuroscientists who make a number of logical mistakes in their arguments—like attributing to a part properties that belong only to the whole, for one. I suggest, however, that the rot goes deeper. I feel that the real problem is that Eliminative Materialists and Identity Theorists are practising Scientism, not science. In science one approaches solving a problem (e.g. how brain activity is related to activity in phenomenal consciousness) by the time-honored process of setting up an hypothesis and then testing it. You do not assume the nature of your discovery before you have made it. In Scientism one reverses this process. You first set up a Dogma, which is not to be questioned (e.g. the mind and brain are identical), and then skew the whole of your enquiry around that. This mind-set then pollutes the whole of neuroscientific activity.
    I suggest that the best way to defeat the Identity Theorists is to set up a rival hypothesis (e.g. some form of material (spatial) dualism) and test that by experiment. A novel way of doing this has been suggested by Jean-Pierre.

  33. 1. A new explanation for the Kovacs experiment.

    Kovacs et al. (1996) carried out a binocular rivalry experiment in which they took two photographs, one of a group of monkeys and the other of jungle leaves. They cut these up into pieces like a jig-saw puzzle and made two pastiches out of the bits. In each there was a piece showing monkeys where the other had a piece showing leaves. When these two ‘complementary’ pastiches were shown binocularly one would have expected that the subjects would have seen the two pastiches in retinal rivalry. Instead “some” subjects saw a complete monkey picture alternating in retinal rivalry with a complete leaf picture.
    The experimenters accounted for their result by suggesting that
    “It clearly indicates that binocular rivalry can be driven by pattern coherency, not only by eye of origin. The reported phenomena show that the brain has many different ways to assemble new “realities” from competing pieces of concurrent external and internal events.”
    This can be interpreted as meaning that the brain finds the mixed pastiche photograph too improbable and replaces it with the conscious perception of the two more probable photographs. Crick (1994) enlarged on this by saying “What you see is not what is really there; it is what your brain believes is there.”

  34. Kovacs 2.
    However, another interpretation of the results of this experiment is possible. The Kovacs/Crick explanation requires extensive neural machinery that continually calculates the probability of a stimulus complex and replaces improbable combinations. A simpler explanation may be as follows. The retinal input is distributed to the occipital cortex in such a way that L eye neurons alternate with R eye neurons in layer IV in a regular alternating array. Thus, in the case of the two complementary pastiches, the visual cortex will be occupied by two topographically distributed complementary patterns of stimulation. If one thinks of two superimposed parallel pieces of paper, each with one of the complementary patterns, then wherever one shows a part of a monkey, the other will show a piece of the leaves. Translated to the cortex, this entails that every single eye neuronal column related to the monkey pattern will be in close proximity, and physically connected by collaterals, to the other eye column next door that is related to a piece of the leaf pattern. In this event all that has to happen, to produce the ‘all monkey’ and ‘all leaves’ perceptions, is to invoke a reshuffling of neuronal activities effected by Freemanian non-linear dynamics of the interlocked dynamic populations of neurons. This requires no complex additional machinery.
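
    A toy sketch (mine, in Python, purely illustrative, not a model of cortical dynamics) of why no probability machinery is needed: because the two pastiches are complementary, regrouping the interdigitated columns by pattern label, rather than by eye of origin, already yields the two coherent percepts.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Sixteen cortical locations, each receiving one patch per eye; the pastiches are
    # complementary, so wherever the left eye gets "monkey" the right eye gets "leaves".
    n = 16
    left_eye = rng.choice(["monkey", "leaves"], n)
    right_eye = np.where(left_eye == "monkey", "leaves", "monkey")

    # Rivalry organised by eye of origin alternates between two *mixed* percepts:
    print("left-eye percept :", list(left_eye))
    print("right-eye percept:", list(right_eye))

    # Rivalry organised by local pattern coherence: at each location, dominance simply
    # goes to whichever eye's input matches the currently dominant label. No probability
    # of the scene is computed anywhere.
    def coherent_percept(dominant_label):
        return np.where(left_eye == dominant_label, left_eye, right_eye)

    print("coherent percept :", list(coherent_percept("monkey")))   # all 'monkey'
    print("coherent percept :", list(coherent_percept("leaves")))   # all 'leaves'
    ```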

  35. Kovacs 3.
    Two main pillars for Crick’s theory (about seeing only what the brain vets as probable) are the interpretation of the ‘not seeing your own eyes moving in a mirror’ experiment and the interpretation of the Kovacs experiment. Bill has already cast doubt on the first, with his observation about the limits of foveal vision, and I am now casting doubt on the second. A third is contained in deductions made from the detailed anatomy of the cholinergic innervation of the cortex. These also may be wrong. (Watch this space.)

  36. My insight today is that neuroscience has been hopelessly polluted, as John says, and that the basis of neurobabble is really cyberbabble, i.e., cybernetic theory, in which no essential distinction is made between man-made and natural (biological) systems, doubtless because systems theory, which is the foundation of cybernetics, comes from a very abstract source: mathematics. How convenient for its adherents!

    In principle, though, one can probably disentangle the interpretation from the data, simply by following the lead of Paul Churchland: one simply eliminates all the cybertalk from neuroscience. What remains may just seem like uninterpreted histology (cell biology), but it would be nice to know what that is first without it being run through the conceptual scheme of cybernetics (control systems). We should be guided by what the data tell us without the layers of cybernetic gloss that are routinely added, but seldom criticized (at least very thoroughly).

    At bottom, more than anything all that neuroscience is reporting are various kinds of correlations between behavior, experience, and brain activity. It seems to me that finessing the description of the correlations without projecting into them ideas about information transmission, information processing, etc., is what may show the most promise.

    The problem with the leaf/monkey interpretation is that it is not clear that the visual cortex is doing anything like that in spite of the neuroanatomical connections John (correctly) describes. To repeat what I quoted from Hubel & Wiesel (1979): "It follows that this [the visual cortex] cannot by any stretch of the imagination be the place where actual perception is enshrined. Whatever these cortical areas are doing, it must be some kind of local analysis of the sensory world. One can only assume that as the information on vision or touch or sound is relayed from one cortical area to the next the map becomes progressively more blurred and the information more abstract." This would clearly be the opposite of any sort of image synthesis, thus, as H & W write: "It is important to realize that this part of the cortex [the visual areas] is operating only locally, on bits of the form; how the entire form is analyzed or handled by the brain--how this information is worked on and synthesized at later stages, if indeed it is--is still not known." This view is just not consistent with the brain "transmitting information" that would then be realized as something like an image, whether elsewhere in the brain, or in another space-time system as John has envisioned--the problem is the same regardless of where VS may be.

    It was Sickles' contention that the neuroelectric impulse in the CNS contains no information, and is therefore not transmitting any. My own hunch is that something much more bizarre is going on than we can presently even imagine, perhaps more akin to what Simon is envisioning, or what follows from John Wheeler's "Observer-Participant" model of the universe, in which the observer and universe are locked together as observer-participant, as it were "looking at each other."

    Could it be that VS is a completely separate entity that "looks at" the percipient's body from outside it? Again, Schilder talked of dissociations that occur between the observing self and observed self...

  37. There is, it seems, an entire website devoted to substance dualism: http://www.newdualism.org/
    Evidently last year there was a whole special section of an issue of the journal "Cortex" (Elsevier) devoted to OBEs, prefaced by a most interesting editorial:
    http://www.newdualism.org/nde-papers/Bruggera/Bruggera-Cortex_2009-45-137-140-1.pdf

  38. Having now slept on the matter, I think one of the most useful terms going is "neural correlate," because it stresses the *correlational* nature of the observation, rather than committing to a causal relationship (which, as I have noted already, is illogical if it is believed that consciousness is "caused" both *by* the brain and *in* the brain). But the question is how to define "correlate"? Surely a case could be made that the eyes and optic chiasm are also neural correlates of vision.

    Thus the problem is how to qualify/quantify the notion of "co-relate." In statistical correlation it is a matter of degree (in effect, the percentage of shared variance): the higher the percentage, the "higher" the correlation, and it is then often related to predictive measures of probability.
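
    As a reminder of how modest that statistical notion is, here is a minimal illustration (in Python, with made-up numbers; nothing about causation or identity follows from it):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical trial-by-trial data: firing rate in some area and a behavioural report.
    firing_rate = rng.normal(20.0, 5.0, 200)
    report = 0.3 * firing_rate + rng.normal(0.0, 2.0, 200)   # partly related, partly noise

    r = np.corrcoef(firing_rate, report)[0, 1]
    print(f"Pearson r = {r:.2f}; shared variance = {r**2:.0%}")
    # A high r makes the area a good statistical correlate (and predictor) of the behaviour;
    # by itself it licenses no claim of causation, let alone identity.
    ```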

    In his last work, published as "On Certainty," Wittgenstein was at pains to show how often what are thought to be empirical propositions are actually logical propositions, and the trick is to recognize the difference. Following G.E. Moore, Wittgenstein felt that a statement such as "There are physical objects" (or its denial, as some philosophers have perversely maintained) is nonsensical as an empirical proposition, but refers instead to language, concepts, and logic. With an empirical proposition its negation makes sense, and it depends for its verification on the state of things in the world; with a logical proposition, its negation is invariably unintelligible. That's an important difference!

    One question we therefore need to answer is what sort of proposition is implied by "neural correlate"--is it empirical or logical?

  39. My favorite Wittgenstein quote in this context is this: "'I know that I am a human being.' In order to see how unclear the sense of this proposition is, consider its negation. At most it might be taken to mean 'I know I have the organs of a human.' (E.g. a brain, which, after all, no one has ever yet seen.) But what about such a proposition as 'I know I have a brain'? Can I doubt it? Grounds for *doubt* are lacking! Everything speaks in its favour, nothing against it. Nevertheless it is imaginable that my skull should turn out empty when it was operated on." ("On Certainty," p. 4).

  40. I was thinking about the ubiquitous terms "code" and "encoding" in cog. neurosci., and John's desire that the notion of "information" could be salvaged in the context of brain function. "Code" and "information" are corollary terms in IT-speak, and it seems to me that Ray has already rather nicely addressed this (in a passage I recalled after thinking about Rama telling me that the stomach has its own nervous system): "Clearly you cannot process something you haven't got: a stomach isn't a dinner-processing machine unless it gets a dinner from somewhere. If the impulses in the nervous system convey information rather than making it themselves (as we are conventionally told), where does the information come from?" In the previous paragraph, Ray notes, "For, although the nervous system seems quite good at transducing various forms of energy into its own dialect of energy (propagated electrical changes) it doesn't seem to do anything corresponding to *the transformation of energy into information.*" (p. 59)

    More generally, what seems to be at issue are errors in inductive reasoning, especially of the "faulty causal" variety, in which one typically reasons that, because two things are closely linked in space or time, one causes the other. This is commonly called the "chicken and egg fallacy" or "cart before the horse fallacy," but is classically known as "Hysteron Proteron," or the "wrong direction" fallacy, which assumes that one event caused a second when, in fact, the second event caused the first.

    There is also a parallel to the directionality of the analogy that we have been discussing here, namely, that brains are said to be like computers, when it is computers that may be like brains (i.e., brains came first, not computers). This generally seems to forget that the natural world came first, then the man-made one.

  42. Bill seems to have left something interesting in midair, so to speak, with his remark above— "...a weak sense of the mind/brain being a computer is trivial, because it is based only on an analogy which (conveniently) ignores evolutionary biology, and how a computer would evolve in the natural world." Has anyone considered in any detailed specific manner (not just vague speculations) how a computer could evolve during the course of millions of years of evolution out of a few neurons in a primitive organism? I will do some digging on this subject.

  43. While John is researching how Nature might evolve a computer out of proto-neurons, perhaps someone else might investigate how it might have also evolved TV, race cars, and nylons (as someone once quipped).

  44. I think that there is a real difference between "evolving" an artifact external to an organism, such as a race car, and evolving part of an organism, such as the brain. This raises the question though as to why the brain is a computer, in an important sense, and not a race car. I tried to find a good dictionary definition of a computer, and about the best I found was one in my American Heritage dictionary, as being "a device that computes; especially an electronic machine that performs high-speed mathematical or logical calculations or that assembles, stores, correlates, or otherwise processes and prints information derived from coded data in accordance with a predetermined program." Clearly several elements of this definition, including being a "device" and a "machine", are anthropomorphic in character, and thus the question arises as to whether a good, more generalized (presumably purely functional) definition can be given. I don't have a good one (if anyone in the group has a better artificial intelligence background than I do, maybe they can help), but presumably it would have to mention something about the ability to perform various transformations on input so as to create output. Ruling in and out the right sorts of things concerning the nature of the input, output, and transformations is clearly not easy. I should again emphasize that I don't think that any of us is sympathetic with functional analyses of conscious experiences though.

  45. I am glad that you concur that there is a difference between Nature evolving something and mankind evolving something, Bob.

    Of course, Ray Tallis and I (and doubtless others) have already denied that the brain is a computer, have argued that there is no proof that it is one (and cast doubt on how one would determine that it was one), and have noted that somehow this unproven claim has slipped under the radar and been propagated as if it were scientific fact rather than, as Ray says, a kind of myth.

    For that matter, I first discovered the term "neuro-mythology" in the book "Critical Studies in Neurology" (1948) by the British neurologist, Sir Francis Walshe: "[T]he hypothesis that consciousness is to be 'located' in the hypothalamus reveals a crude animistic conception of this function that might almost have derived from the animistic mythology of some savage tribe" (p. xii) and, accordingly, he refers to "chapters in neurological literature that might justly be styled 'neuro-mythology'" (p. xiv). For example, he suggested that some of the details appearing in cytoarchitectonic "maps" such as the visual areas may 'constitute contributions to neuromythology rather than to neurology'!
    So the computer analogy is the latest in a venerable mythological tradition, going back at least to Aristotle, who suggested that the brain was a "cooling" mechanism.

    I really do believe it can be demonstrated that the contemporary "myth" that the brain computes and is a computer can be traced to the fact that computers have been designed to carry out computations that we ordinarily do. It is important to get clear first what is meant by someone computing, which Ray explains in his book by analyzing what is meant by the verb "calculate" (q.v.). Otherwise one is faced with the same question one might pose in looking at a humanoid robot, as in the science fiction film "Forbidden Planet" (1956). Why does it resemble a human and carry out tasks humans do? Because it was designed that way. The big difference is that we did not design ourselves.

  46. Perhaps the larger question is: "Does Nature perform computations?" One would really have to examine carefully the underlying assumptions of biological cybernetics, but my suspicion is that among them is an unfounded assumption (or claim) that Nature can perform computations, probably based on a simplifying analogy that does not contextualize how computation came about in the course of evolution, but starts by comparing people to the activity of machines--thus getting the proverbial cart before the horse.

  47. It is too bad that David Marr is no longer alive to join in this conversation, but he died back in 1980 at the very young age of 35. I was rereading some of the things that he says in his book Vision, and he does discuss a computational process as involving input and output representations and an algorithm for accomplishing transformations on the input representations. He also claims that the algorithm may be serial or parallel and can be embodied physically in many different ways, including with electronic devices, biologically, and even with tinkertoys. Is there something specific that you disagree with here, Bill?

  48. Yes, Bob, alas poor Marr died way too young and what a pity that he is not here with us to participate, but I don't think he is the innovator by any means of the conceptual scheme you outline.

    *Conceptualizing* the various physiological processes involved in vision in this functional manner (which is based not on Nature, but on man-made machine construction) is, need I point out, at least one remove from the physiological processes themselves. But the $64,000 question remains: Is that what the visual "system" is really doing? How does science *prove* that it is doing any of those things, rather than that this is just a nice model that "makes sense" in simulations?

    IMO, it is not a question of whether one can conceptualize them that way, or make an image processing device that can work that way, but how much the conceptualization is justified by the end product for us--or at least the end product of interest to us here: VS. Thus far it doesn't seem to work in the final analysis, and what I have tried to show is that part of the problem may indeed just be at the conceptual level, which, until fairly recently, has been uncritically accepted by many vision scientists. But we really have no evidence that the eyes are providing "input," nor that the brain is creating "representations," nor that "algorithms" are involved, nor that there is something akin to "serial/parallel processing" going on in the CNS, even though logically all those things make sense *in a model.*

    Both Ray Tallis and I have explained what we think is wrong with that sort of model already.
    As Ray explained, it is one thing for us to design a machine to process images, because we are able to know what the input and desirable output will be and design the system accordingly. How is Nature supposed to do that? Surely selective evolutionary pressure is not enough of an explanation.

    ReplyDelete
  49. Marr definitely was interested in the biology of his model, since among other things he rejects several possible models of stereopsis, including a cooperative one, as not being biologically realistic, and he gets into the details of such things as cells in the retina performing a Laplacian operation as part of his primal sketch. I don't feel competent to judge the particular merits of this, but I did want to point out that he was not just concerned with an artificial-intelligence project of finding some possible way to conduct visual processing; he was also concerned with at least attempting to be biologically realistic.
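    For readers who want to see what that "Laplacian operation" looks like in practice, here is a rough sketch of the centre-surround ("Mexican hat") operator that Marr and Hildreth approximated as a difference of two Gaussians, with zero-crossings of the result marking candidate edges for the primal sketch. The toy image and the two sigma values below are illustrative assumptions of mine, not figures taken from Vision:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # A toy "image": dark on the left, bright on the right (illustrative values only).
    image = np.zeros((8, 8))
    image[:, 4:] = 1.0

    # Centre-surround response, approximated as a difference of Gaussians --
    # the approximation Marr and Hildreth used for the Laplacian of a Gaussian.
    center = gaussian_filter(image, sigma=1.0)
    surround = gaussian_filter(image, sigma=1.6)
    dog = center - surround

    # Zero-crossings of the response mark candidate edges.
    sign_change = np.abs(np.diff(np.sign(dog), axis=1)) > 0
    print(sign_change.astype(int))

    Whether anything like this is what retinal ganglion cells are "really doing" is, of course, exactly the question Bill keeps pressing.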

    ReplyDelete
  50. Of course the whole thrust of biological cybernetics (and thus of the journal of that name, which has now existed for 50 years) is that biological "systems" can be treated just like man-conceived and man-made ones. That's a major assumption in and of itself!

    The fundamental problem (and objection) is that the layer of conceptualization superimposed on the interpretation of neurophysiological data has rendered many (most?) findings theory-laden or, as I have stated previously, "over-determined by theory" (Quine); i.e., they are "seen" through the "lens" of cybernetic information processing, even though it has never been proven that interpreting neurophysiological workings in that way is even valid on logical grounds.

    As I may have suggested already, we should follow the lead of the late maverick psychologist Sigmund Koch, and perform what he called an "epistemopathectomy" on all the neuro-hype, which has tended to give an inflated impression that neuroscience knows more than it actually does about sensory "function" (one of our readers has suggested that "neuro-baloney" might be a more apt term of censure). Again, Howard Bursen's "Dismantling the Memory Machine" might serve as a guide to follow in performing the procedure, especially should Paul MacGeoch (M.D.) want to assist.

    ReplyDelete
  51. Bill asks
    "As Ray explained, it is one thing for us to design a machine to process images, because we are able to know what the input and desirable output will be and design the system accordingly. How is Nature supposed to do that? Surely selective evolutionary pressure is not enough of an explanation."
    I thought a basic law in evolutionary theory is that if an organism develops anything at all (provided it is biologically possible) that promotes its competitive advantage—say, any system that improves its interaction with the environment—then Nature will see to it that that organism thrives. Is not the question, then, whether processing images results in a competitive advantage over organisms that do not process images?

    ReplyDelete
  52. The downside of most evolutionary explanations is that it is very easy to produce ones that make sense of almost anything. Survival "advantage" is such a general notion that one could justify ex post facto almost any attribute that has survived and evolved.

    If we reserve the word "processing" for that which occurs in people who do not suffer from associative agnosia, clearly there is an adaptive advantage, given the trouble the latter experience relative to their visual world. But that is different from explaining VS itself, which to many neurophilosophy types seems like a sort of "extra," an anomaly, as if theory in neuroscience didn't really need it (Paul Churchland especially).

    ReplyDelete
  53. One point that I do want to make concerning some of the functional language, including some of Marr's, which you object to, Bill, is that perhaps it can be seen as a use of Dennett's intentional stance. If this can also be cashed out (i.e., redescribed) in terms of what Dennett calls the "physical stance," then I don't see it as being problematic (it is just a useful shorthand), but of course the trick is to do so in detail.

    ReplyDelete
  54. I would say that Ray Tallis has already shown that Dennett's "intentional stance" is not valid, because, as Ray explains, it omits the fact that it really requires a whole person to work, depending as it does upon intentionality (if a weakened form of it).

    So I don't think that just reformulating it in terms of Dennett's "physical stance" overcomes that basic objection; re-description alone does not remove the underlying logical glitch. Once again, this is a logical problem, not an empirical one.

    ReplyDelete
  55. A while back I invited Bill Adams to contribute to the blog, having read an excellent (and relevant) blog entry on perspective that he posted on his own blog http://stray-ideas.blogspot.com/2008_06_01_archive.html
    in which he advances the idea that seeing perspective as depth is the result of perceptual learning and does not seem to exist in every culture (I may have alluded to this in previous comments of my own).

    It seems Bill has been reading our blog and has given permission to post a relevant comment that he emailed me about this thread:

    "I agree with you that 'information processing' is an inapt metaphor for mentality. I went into cognitive psychology specifically to find out if the mind is like a computer. I did a dissertation on 'human information processing' (Sperling paradigm, if you know that). I even left academia and worked twenty years in the IT industry before returning to the fold. It took me almost a lifetime to to understand exactly why the mind is not a computational device."

    My hope is that he will share some of his insights about this here . . . .

    ReplyDelete
  56. While reviewing responses to Roger Penrose's 1989 book "The Emperor's New Mind," I found an interesting paper entitled "Penrose's Philosophical Error," by the mathematical physicist Larry Landau. It was first published in a volume entitled "Concepts for Neural Networks" (Springer, 1997). One thing that strikes me about Landau's analysis is that the word "philosophical" appears only once in the paper, which is rather surprising considering his title, and then only in a quotation from a review by the philosopher of mind Hilary Putnam of Penrose's next book, "Shadows of the Mind":

    "In 1961 John Lucas - an Oxford philosopher well known for espousing controversial views - published a paper in which he purported to show that a famous theorem of mathematical logic known as Gödel's Second Incompleteness Theorem implies that human intelligence cannot be simulated by a computer. Roger Penrose is perhaps the only well-known present-day thinker to be convinced by Lucas's argument.... The right moral to draw from Lucas's argument, according to Penrose, is that noncomputational processes do somehow go on in the brain, and we need a new kind of physics to account for them.

    "'Shadows of the mind' will be hailed as a `controversial' book . . . yet this reviewer regards its appearance as a sad episode in our current intellectual life. Roger Penrose ... is convinced by - and has produced this book as well as the earlier 'The emperor's new mind' to defend - an argument that all experts in mathematical logic have long rejected as fallacious. The fact that the experts all reject Lucas's infamous argument counts for nothing in Penrose's eyes. He mistakenly believes that he has a philosophical disagreement with the logical community, when in fact this is a straightforward case of a mathematical fallacy."

    Of course, by dismissing Lucas's interpretation of Gödel's Second Incompleteness Theorem, Putnam makes a straw man of the larger question, since no one has ever proven that the mind (or brain) computes or is a computer, nor even shown how that basic premise might be argued, let alone confirmed.
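    (For readers who want the theorem itself on the table, stated roughly and from memory: Gödel's second incompleteness theorem says that any consistent, effectively axiomatized formal system T strong enough to encode basic arithmetic cannot prove its own consistency, i.e., T does not prove Con(T). The Lucas-Penrose claim is that we can "see" truths of this kind that no such system can prove, and hence that our minds cannot be such systems; whether that inference goes through is precisely what the logicians dispute.)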

    None of the arguments presented by Ray Tallis seems to figure in Landau's paper, nor in Putnam's thinking, at least as far as I can ascertain (especially after Putnam rejected functionalism and his own "functional isomorphism," to which Bob has previously referred).

    For those who are "married" to the idea of brain/mind qua computer, hope springs eternal! I believe it was Paul Weiss who said that discredited ideas or theories often die only with their creator--and sometimes not even then.

    ReplyDelete
  57. Here is an interesting paper linking DNA methylation and epigenesis to memory formation. This takes us away from the current fixation on modulating synaptic strengths to wider and more interesting aspects of brain function—

    DNA methylation and memory formation
    Jeremy J Day & J David Sweatt

    Nature Neuroscience 13, 1319–1323 (2010) doi:10.1038/nn.2666

    Memory formation and storage require long-lasting changes in memory-related neuronal circuits. Recent evidence indicates that DNA methylation may serve as a contributing mechanism in memory formation and storage. These emerging findings suggest a role for an epigenetic mechanism in learning and long-term memory maintenance and raise apparent conundrums and questions. For example, it is unclear how DNA methylation might be reversed during the formation of a memory, how changes in DNA methylation alter neuronal function to promote memory formation, and how DNA methylation patterns differ between neuronal structures to enable both consolidation and storage of memories. Here we evaluate the existing evidence supporting a role for DNA methylation in memory, discuss how DNA methylation may affect genetic and neuronal function to contribute to behavior, propose several future directions for the emerging subfield of neuroepigenetics, and begin to address some of the broader implications of this work.

    ReplyDelete
  58. The current orthodox opinion in neuroscience, visual science and cognitive science is that seeing is a function of the eye and brain, and of them alone. In other words, certain brain events are both necessary and sufficient for seeing to occur. This applies to dreams and hallucinations as well as to ordinary seeing. Introspectionist psychologists start from a different base. They examine the contents of phenomenal space and recognize an observer ("O", "I", Self, Ego) that experiences a number of sensory fields, images and thoughts. Here also the orthodox opinion is that all this activity, including the Self, is identical with certain brain events (Pollen, 2008, "Fundamental Requirements for Primary Visual Perception", Cerebral Cortex, 18:1991-1998)—although no details of how this is actually done are supplied. All dualist theories are firmly rejected.
    The data on visual perception during NDEs presented by Jean-Pierre not only have important implications for cosmology, but they also throw doubt on the whole of this story. Jean-Pierre has found that the most elaborate forms of seeing occur during NDEs when no brain electrical activity can be detected. Moreover, the format of this seeing has remarkable properties. Prominent among these is evidence that what is being seen under these conditions includes a direct view of the physical world from a 5th-dimensional perspective. If this is the case then we have to say that seeing is a fundamental property, or function, of the Self, and not of the eye-brain. This is suggested by dreams and psychedelic experiences, but in these cases the reductionist always has the escape route of claiming that the whole self-seen complex is a brain state. In the case of Jean-Pierre's observations it does not seem likely that a brain with a flat EEG could support the highly detailed observations and analyses made by his subjects in the NDE state.

    ReplyDelete
  59. Therefore, we can conclude that there is now evidence to support the claim that seeing is a fundamental property of the Self, not the brain. Normal everyday perception takes place when the Self is confronted with its visual field covered with moving patches of color. The visual field is part of phenomenal consciousness (or the consciousness module). The function of the eye-brain-VF complex is merely to collect (eye) and organize (brain) the causal precursors of sensations, and then present, in the VF, the actual sensations (moving patches of color) that are seen (experienced) by the Self. The same reasoning applies, mutatis mutandis, to somatic and other sensations.
    This formulation also solves the old 'homunculus problem'. This idea was first thought of by Descartes, developed by Gilbert Ryle and Francis Crick, and has filtered down to many since. The argument goes that we cannot say that there are any pictures in the brain (or mind, or VF) because, if we do, we have to ask what is looking at them and, as Crick put it, trying hard to follow what is going on. A little green man, or homunculus, with an eye and a brain, is the answer. Inside his head there is another little green man who, etc., etc. Jerry Fodor said that this was a bad argument on the ground that, just because seeing an object needs an image, it does not follow that seeing an image requires another image. We can now add to the rebuttal. Firstly, we can locate the picture, not in the brain but in the VF. Then we can say that what is observing it is the Self, not a little green man. He was only introduced under the mistaken impression that seeing requires an eye and a brain. Since Jean-Pierre has produced evidence to show that seeing does not require an eye and a brain, the little green man argument collapses. In OBEs and NDEs (and there only) it seems that naïve realism has suddenly become true (!), and the physical world (or parts of it) is seen directly without any mediation by eye or brain.

    ReplyDelete
  60. 1. I would start by questioning the application of the word "introspection" in this context, because it is not at all what was meant by that term by the introspectionists themselves, including Wundt, who applied it to *mental* processes, not perceiving. That is not what is being described by John above, which concerns noticing aspects of perception. Introspection, as it was originally understood, included introspecting on the self, for example where it appeared to be localized with respect to the percipient's body, etc., but the self was not necessarily part of its general framework. I suspect that introspection as it was originally construed (especially by Titchener, Wundt's disciple) involves a form of naive realism.

    The claim that in an NDE one is seeing from a fifth dimension would require careful scrutiny because, as I have stated previously, I believe what are being reported are different topological conditions, not additional dimensions (at least in the strict sense of the word).

    Inasmuch as Dr. Jourdan is already listed as a contributor to the blog, it would be nice if he could clarify some of these points for us.

    ReplyDelete
  61. 2. Of course we have already discussed the homunculus problem quite a bit previously, and I suggested that situations in which it is invoked are a "red flag" indicating that the fallacy of division is involved in a given conceptualization. Is there disagreement on that point?

    I have already tried to explain why events in the visual cortex, even though causally linked, seem unlikely "precursors" to visual consciousness and would indeed seem to be serving another purpose entirely (e.g., ultimately running the muscles), but since my objections have not received comment, I cannot say more.

    In any case, the homunculus problem does not come by way of Descartes or Ryle, but from phrenology and faculty psychology as critiqued by William James in his "Principles of Psychology," in which he appears to be the first to use the word homunculus explicitly in this connection. (Years ago I did a great deal of historical research to ascertain the origin of the "homunculus problem" in the psychology literature and wrote an extensive paper about what I found; it was rejected by "American Psychologist" and I never bothered to submit it elsewhere.)

    Whether in the brain or somewhere else, simply asserting the existence of something like an "image" doesn't get us very much further than what we ordinarily experience, thus bringing us back to first base, entailing a kind of circular reasoning. That's the main point of the problem of an infinite regress, not the homunculus itself or the localization of visual space per se, as John stresses.

    ReplyDelete
  62. Here is the opinion of one neurobiologist, who confirmed a hypothesis advanced by Crick in 1982 (i.e., that dendritic spines "twitch"):

    "The central nervous system functions primarily to convert patterns of activity in sensory receptors into patterns of muscle activity that constitute appropriate behavior." Andrew Matus, "Actin-Based Plasticity in Dendritic Spines," Science (27 Oct. 2000, p. 754)

    The author says nothing about this process creating perceptual space!

    ReplyDelete
  63. "Seeing" is presumably primarily an activity of people, not eyes or brains (again, the fallacy of division sends up a red flag!) To say that seeing is a property of the self just restates the problem anew.

    ReplyDelete
  64. Introspection does not have to be all of one kind, or to have only one focus of enquiry. There is the difficult Wundtian type, where the Self is the focus: this does not seem to be around much these days. Then there is what Francis Galton, Richard Gregory, Rama and many others did and do, where sensations, images and other 'inner' entities and events provide the focus—a field of research that is flourishing. Of course, anyone who does not like to call this 'introspection', or 'introspectionist psychology', is free to call it by another name. I do not think 'psychophysics' will do, for that usually entails studying how perceptions of physical stimuli change under different circumstances. What we are looking for is a word that describes what scientists do when they examine their own sensations, images, etc., as Francis Galton did when he discovered number forms.

    The problem with blanket statements like "seeing is the activity of people" is that they tend to blur the distinctions between the different kinds of seeing that people do—as when they are seeing their tax returns, seeing daggers floating in the air, seeing during a dream and the sort of seeing that goes on in a NDE. It is certainly true that these 'seeings' are all done by people, but that, by itself, is not a very interesting statement. We have to ask ourselves "For what purpose is this statement made?" I suggest that the statement "seeing is done by people" really stresses (correctly) that, in no way, is seeing done by brains. That is where we need to focus to combat a tide that is running very much against us. I have just read all the issues of "Cortex" for the last 10 years—very depressing! To assert that the Self is involved as the subjective element in people who are seeing seems to me, not a 'part-whole' error, but a simple fact: to deny that the Self has this role seems to me to represent a step on the slippery slope that leads to Zombieism.

    The comment by Matus that Bill quotes merely expresses the crude behaviorism so prevalent in neuroscience today: so I do not see why Bill is surprised that Matus does not discuss the creation of visual space.

    ReplyDelete
  65. The reason I believe introspection, as it is being used today, is a misnomer is that the term was never used in the past for talk of sensations and what we *perceive.*

    The "intro-" part of the word suggests that it refers to inspecting the workings of one's mind, which indeed is what the term was intended to denote. It really has no place in the study of sensory perception per se, in which psychophysics is still the main paradigm, i.e., correlating sensory magnitudes with physical ones of stimuli (and now, of course, with correlated brain events as well--one could call that modern psychophysics, as Rama does).

    Except for the interoceptive senses, psychophysics does not explicitly talk of sensations as being something "inner," in contrast to the "inner" workings inspected through introspection.

    I completely endorse the sentiments expressed by John above, and I am glad he concurs that we must take great pains to be certain what kind of "seeing" we mean, i.e., which semantic sense, as there are several of them in the OED. Whether philosophy has studied more than one sense of the word (say, in the context of linguistic philosophy as much as in the philosophy of perception), I really don't know offhand, but I recall a reference in either one of John's old papers or one by Lord Brain that touches on this very issue (was it a book on the philosophy of biology, John?).

    That most so-called "neuroscience" has grown out of two distinct sources, (1) clinical (and experimental) neurology and (2) experimental "behavioral neurobiology" with animals, probably explains why there is no coherent conceptual framework to deal with sensations, and why they are often seen as a kind of anomaly in the neurophilosopher's scheme of things.

    Need I remind readers that one of the main progenitors of behavioral neurobiology was Pavlov--an arch behaviorist!

    Still, if neuroscience's picture of sensory function is complete without sensations, we need to understand how that came to be. I suspect it is because of its behaviorist background, which, as I would today argue, represents a kind of "third-person naive realism": the scientist is the observer who observes the animal's responses, because he cannot communicate with the animal verbally!

    Historical analysis is crucial to this aim because retracing the historical development of the field is, as I have said before, rather like checking the apparatus or experimental set-up used in an experiment to see why it is not working properly.

    So this is why I say we need to pay particularly close attention to the "chain of inferences" involved here, and the language used in them.

    ReplyDelete
  66. Unless we can find a simple way of distinguishing the self from the person, I'm not sure how useful the self, as a sort of abstraction, is to our discussion here, which, after all, concerns the properties of visual space!

    ReplyDelete
  67. Yes, indeed, how do we do that?

    Entangled with the concept of Self is the concept of person, which everyone has been using as if there were no problem about what it means. There are various meanings of 'person' listed on Wikipedia. But the topic I will focus on here may be put in the form of questions.
    1. Is a person no more than a properly functioning living human body? If a Superscientist were to manufacture from biochemical components (as has recently been done for a bacterium) a perfectly functioning, loving, intelligent, loyal, etc. human body, would that body be a person? Or suppose the body was made out of metal, as in the case of R2D2: so long as the metal 'creature' was intelligent, witty, loyal, ready to sacrifice itself for you, etc., would R2D2 count as a person? Likewise, is HAL, the computer that broke loose, a person? Or is it necessary to have a soul to be a person? Whoever wrote the bit in Wikipedia suggests there are degrees of being a person.
    2. Is Ludwig—the famous brain-in-a-vat—a person? If not, at what point in Professor Smythson's surgical removal of the crushed remains of Ludwig's physical body, and replacement of these parts by super-prosthetics, does Ludwig cease to be a person?

    Possible answers:
    1. All human beings are persons and need to be treated with consideration and respect.
    Obviously no good: Hitler and Stalin were human beings.
    2. All human beings, who obey the rules of the culture in which they live, are persons.
    No good either: the Nazis obeyed the rules of the culture in which they lived.
    3. All human beings who obey the eternal, non-relative rules of ethics, are persons.
    Objection! Who made these eternal rules of ethics?

    And so on.

    ReplyDelete
  68. Ethical concerns aside (all of them, of course, valid and important), I don't think we need to debate what a person is for our purposes here, nor resort to bizarre thought experiments to explore what is thinkable. Yes, perhaps it is possible that elsewhere in the universe there are the equivalent of people composed of other substances, but that doesn't help us very much. Rather, the most useful verity is that the person is a kind of whole, not something localized or localizable in the head or brain. So a more holistic view may be more applicable, with the proviso that we can include in it something like a point of view, which may depend upon whether we have two eyes, one eye, or no eyes. Does a blind person still have a point of view? Where is his observing ego if his visual cortex is non-functional?

    ReplyDelete
  69. I do not think that Stanley Kubrick and Jonathan Harrison were indulging in "bizarre" thought experiments when they introduced us to HAL and Ludwig; they were presenting important issues in a literary form—part of a long and worthy line starting with Gulliver's Travels. Moreover, did not Einstein start off with thought experiments about what it might be like to travel at the speed of light?
    A retinally blind person certainly has a point of view—she sees just as we see in the dark. It can even be argued that the Ego of a cortically blind person has a visual function of a sort, or rather that the person realizes that she no longer has a visual function, in that the visual field is replaced by nothing. But the loss is noticed and, in acute cases (retinal image stabilization experiments), is terrifying. If all visual function were lost, would one not expect perhaps an anosognosic response?

    ReplyDelete
  70. I said "bizarre" thought experiments, as I have no problem with thought experiments in general (they were an idea promoted by Mach, which is where Einstein learned the concept). In this instance, the "brain in a vat" experiment goes back to Curt Siodmak's 1942 sci-fi novel "Donovan's Brain," which was made into a movie twice, first in 1944, and then in 1953 http://www.youtube.com/watch?v=KnlJ1Lrr3IU
    The idea of humanoid automata is ancient and has held an enduring fascination for people for ages. But how do either brains-in-vats or automata serve to advance our inquiry, since (1) there are no brains in vats that are alive (to the best of my knowledge) and (2) we have no reason to believe that automata are conscious?

    More interesting--and perhaps useful--would be to know what sort of "point of view" congenitally blind people have, but only those who have experienced no light sensations whatsoever, because there is a problem in the optic tract. Isn't a point of view without a view an oxymoron?

    ReplyDelete
  71. "But why not eat?"

    "Because the faculties become refined when you starve them. Why, surely, as a doctor, my dear Watson, you must admit that what your digestion gains in the way of blood supply is so much lost to the brain. I am a brain, Watson. The rest of me is a mere appendix. Therefore, it is the brain I must consider." (Watson and Holmes, in "The Adventure of the Mazarin Stone").

    ReplyDelete
  72. Wittgenstein also learned of thought experiments from reading Mach's "Knowledge and Error."

    Speaking of Wittgenstein, it might be worth quoting some comments from him (c. 1933-34) that are quite relevant to the topic of Ray's Lexicon:

    "'Is it possible for a machine to think?' (whether the action of this machine can be described and predicted by the laws of physics or, possibly, only by laws of a different kind applying to the behaviour of organisms). And the trouble which is expressed in this question is not really that we don't yet know a machine which could do the job. The question is not analogous to that which someone might have asked a hundred years ago: 'Can a machine liquefy a gas?' The trouble is rather that the sentence, 'A machine thinks (perceives, wishes)': seems somehow nonsensical. Is is as though we had asked 'Has the number 3 a colour?' ('What colour could it be, as it obviously has none of the colours known to us?') For in one aspect of the matter, personal experience, far from being the *product* of physical, chemical, physiological processes, seems to be the very *basis* of all that we say with any sense about such processes." ("The Blue Book," p. 47f, Harper edition).

    ReplyDelete
    Re vision in congenitally blind people: see the fascinating paper by Ring, K. & Cooper, S., "Near-Death and Out-of-Body Experiences in the Blind," Journal of Near-Death Studies, Winter 1997, 101-147.

    Neat quote from Sherlock! He was almost a brain-in-a-vat, but not quite, as evidenced by this quote:

    "What is the meaning of it all, Watson?" said Holmes, solemnly, as he laid down the paper. "What object is served by this circle of misery and violence and fear? It must tend to some end, or else our universe is ruled by chance which is unthinkable. But to what end? There is the great outstanding perennial problem to which human reason is as far from an answer as ever." (The Cardboard Box).

    Powerful stuff from Wittgenstein (sadly missing from Hawking's new book, on which I have just posted a comment).

    ReplyDelete
  74. Yes, Kenneth Ring's student, whose research that paper reflects, presented the rather astounding finding that these people reported being able to see for the first time during their NDE. That's pretty amazing! And it suggests that cortical blindness may involve something like neuronal inhibition rather than a failure of excitation, as one would suppose for a non-functioning (or destroyed) brain area.

    Holmes was quite a philosopher, if an early proponent of the naive idea that everything can be reduced to logic. And that's where Wittgenstein comes into play, because his remarks about a machine thinking are an example of his distinction between empirical and logical propositions: Empirical propositions depend upon the state of the world to decide, whereas the negation or converse of a logical proposition may often be just *nonsense.*

    Alas, poor Stephen Hawking has let us down I fear, as John's new posting reveals. He has jumped on the proverbial neurobabble bandwagon. What a pity!

    ReplyDelete
  79. It is very gratifying to know that, now eight years later, there are readers who find our deliberations of interest. These are by no means dead issues but very much of the moment, because there is so much woolly thinking that stems in part from computer-industry hype and technophilia. Unfortunately it only creates and perpetuates conceptual confusion rather than clarifying anything. We are not machines. We make machines to do work. That's called technology (it used to be just plain "engineering"). Fancy that Caltech--the California Institute of Technology--was originally called the "Throop Polytechnic Institute." It was attended by a famous Hollywood movie director, Frank Capra.

    ReplyDelete
