Ethics and computational theory of mind


Let us start by noting that, as Mark Coeckelbergh explains [1], a good number of philosophers, often in the analytic tradition, hold a vision of the human being that supports those artificial intelligence researchers who believe that the brain and the mind are, and really work like, their computational equivalents: this is the so-called computational theory of mind. And although in the 1980s the cognitive sciences began to take neurobiology more seriously, the computationalist paradigm has remained intact to this day [2, 3].

Eliminative materialism

Coeckelbergh cites Paul Churchland and Daniel Dennett as examples. Churchland holds that science, and in particular evolutionary biology, neuroscience and artificial intelligence, can fully explain human consciousness [4]. What he calls eliminative materialism denies the real existence of immaterial thoughts and experiences, which are nothing more than mental states: consciousness is an epiphenomenon of brain function, and the very concept of consciousness will end up being “eliminated” once neuroscience progresses far enough. For his part, Dennett likewise denies the existence of anything beyond what happens inside the body: he believes that we ourselves “are something like a robot” [5].

In the same vein, Patricia Churchland, wife of Paul Churchland, expresses herself in response to one of twenty big questions about the future of humanity. Asked whether neuroscience will change the penal code, she answers [6]:

In all likelihood, the brain is a causal machine, in the sense that it goes from state to state as a function of antecedent conditions.

Such certainty in affirming that in all probability the brain is a causal machine is certainly daring, since it is a statement that cannot be proved. In reality, the only thing this statement reflects is its own materialistic and mechanistic assumption: the only thing that exists in the universe, the only thing we can know, are causal phenomena, that is, those that have an antecedent and a consequent, related to one another somehow by physical-mechanical laws. Therefore the brain is a causal machine, because it cannot be anything else.

(By the way, this theory is the consecration of the ad hominem argument, since whatever anyone says, including the computational theory of mind itself, is explained by each person’s past cognitive history: “you say this because you are such and such, because you have had such experiences, because you subscribe to this or that school of thought”. Appealing to logical arguments to prove or refute any theory would then be nothing but a lamentable fiction.)

Without consequences for the penal code?

However, according to her, this should have no consequence whatsoever for the penal code (“the implications of this for criminal law are absolutely nil”). The thesis Patricia Churchland defends in The Moral Brain: What Neuroscience Tells Us About Morality is that self-control can be achieved in animals, and particularly in mammals, through reward and punishment [7]. Therefore, the supposed fact that the brain is a causal machine does not serve as an excuse to justify the inevitability of criminal behavior.

Now, what Churchland does not explain in her answer is why, or how, a causal machine, and a society made up of causal machines, could, and above all ought to, prevent criminal acts. That is to say, if all of us, including judges and legislators, are causal machines, with future states determined by past states, isn’t the decision to modify the penal code, or not to, also determined? What is the point, then, of arguing for or against? (This is nothing more and nothing less than the neuro-lawyer fallacy I wrote about earlier.)

Eliminative materialism and the computational theory of mind seek to reduce ethics to evolutionarily acquired brain functions, thanks to the imprint that reward and punishment leave on the brain. This, by the way, is characteristic of one of the techniques of artificial intelligence, reinforcement learning: accumulate the rewards, accumulate the punishments, and from there algorithmically “decide” the future behavior of the system.
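To make that description concrete, here is a minimal sketch, not taken from the article, of how accumulated rewards and punishments come to “decide” a system’s behavior; it is a tiny two-action example of tabular value learning, and all names and parameter values are illustrative assumptions:

```python
import random

def train(rewards, episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Two actions; `rewards[a]` is the (noisy) reward or punishment
    for taking action `a`. The running estimates in `q` are the
    'imprint' the accumulated rewards and punishments leave behind."""
    rng = random.Random(seed)
    q = [0.0, 0.0]  # accumulated value estimate per action
    for _ in range(episodes):
        # Mostly exploit the imprint so far; occasionally explore.
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=q.__getitem__)
        r = rewards[a] + rng.gauss(0, 0.1)  # noisy reward/punishment
        q[a] += alpha * (r - q[a])          # nudge estimate toward outcome
    return q

# Action 0 is rewarded, action 1 is punished; after training,
# the system's "decision" simply follows the larger estimate.
q = train(rewards=[+1.0, -1.0])
```

After training, choosing `max(range(2), key=q.__getitem__)` selects the rewarded action: the behavior is fully determined by the history of rewards and punishments, which is exactly the point the text is making.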

But all this is nothing but the denial of ethics, because the only thing left in this theory to govern human behavior is the psychological pressure to act in a certain way, to adapt to social custom: the mores (the Latin word from which “moral” derives). Socially unacceptable behavior, such as the use of foul language, will cause social rejection, and this will leave its mark on the brain, in a purely mechanical process of “learning” (or rather “conditioning”). In a different social context it will not be coarse language that is penalized, but excessively refined language. In other words, the only thing that actually counts in acting one way or another is social rejection or acceptance. The true ethical perspective, which is to seek the good as good, not because a reward or a punishment will follow, has been completely lost.

Causes and reasons

Certainly, human behavior can be understood from the mechanical-causal perspective, and therefore as the result of a certain evolution, that is, as the result of some initial conditions modified in some way by the accumulation of rewards and punishments. But ethics is not limited to studying the causes of behavior and self-control; it aspires to know its reasons. In the position that affirms that “in all probability the brain is a causal machine” there seems to be no room for reasons, but only for causes. (Here operates the distinction between cause as physical-mechanical explanation and reason as logical justification [8].) Therefore, there is no place for ethics, for rational, reasoned, free behavior. There is only room for the causal, empirical study of human behavior: that is, there is only room for sociology (or, as I like to call it, costumbrology).

This self-limitation of reason to what can be known empirically is not only harmful to ethics. It is also profoundly destructive of any understanding of technology, because an artifact, say a mousetrap, a gas boiler or an electronic calculator, cannot be understood from an empirical point of view alone [9]. Of course it is necessary to explain the functioning of the mousetrap from the causal perspective: “when the rodent arrives, it bites the bait, and that releases the spring, which…”. But if I explain a mousetrap only causally, I understand everything about it except that it is a mousetrap, because I have not explained its raison d’être, its purpose.

So I beg the reader to excuse me if I do not hold in great esteem those who uphold eliminative materialism and the computational theory of mind. It is not just that they leave ethics off the map (although they claim to “explain” it); it is that they leave me without a job as an engineer.

******

Paco Mariscal wrote a courageous review of The Moral Brain on his blog Newton’s Barrel. I discovered it a few years ago, and it can be said that my comment back then on his blog is the germ of the article I am publishing now.

This article was sent to us by Gonzalo Génova, professor at the Carlos III University of Madrid: “Apart from my computer science classes, I also teach humanities courses where I cover topics in the philosophy of technology and critical thinking.”

You can read all my articles for Naukas at this link. Besides using the Naukas social networks, if you want to comment and discuss in more depth you can visit my blog, Of Machines and Intentions (reflections on technology, science and society), where this entry will be available in a couple of days.

Scientific references, notes and more information:

[1] Coeckelbergh, M. (2021). Ética de la inteligencia artificial. Madrid: Cátedra. (AI Ethics. Cambridge, Massachusetts: MIT Press, 2020.)

[2] Arana, J. (2015). La conciencia inexplicada. Madrid: Biblioteca Nueva.

[3] Barret, N., Güell, F., Murillo, J.I. (2015). “The limits of the computational understanding of the brain”. Cuenta y Razón 34:71-76.

[4] Churchland, P.S., Koch, C., Sejnowski, T.J. (1990). “What Is Computational Neuroscience?” In E.L. Schwartz (ed.), Computational Neuroscience, pp. 46-55. Cambridge, MA: MIT Press.

[5] Dennett, D.C. (1997). “Consciousness in Human and Robot Minds”. In M. Ito et al. (eds.), Cognition, Computation and Consciousness, pp. 17-29. New York: Oxford University Press.

[6] Churchland, P. (2016). “20 Big Questions about the Future of Humanity”. Scientific American 315(3):85-86 (doi:10.1038/scientificamerican0916-28). https://www.scientificamerican.com/article/20-big-questions-about-the-future-of-humanity/

[7] Churchland, P. (2012). The Moral Brain: What Neuroscience Tells Us About Morality. Barcelona: Paidós. (Braintrust: What Neuroscience Tells Us About Morality. Princeton: Princeton University Press, 2011.)

[8] García Norro, J.J. (2012). “Is intelligence natural?” In M. Oriol (ed.), Inteligencia y filosofía, pp. 151-169. Madrid: Marova.

[9] Génova, G., Quintanilla Navarro, I. (2018). “Discovering the principle of finality in computational machines”. Foundations of Science 23(4):779-794.
