ChatGPT, value and knowledge

I invited my colleague and co-author of our latest book, Guglielmo Carchedi, to write this post (Michael Roberts).

Guglielmo Carchedi

In a comment on Michael Roberts’ article on artificial intelligence (AI) and the new large language models (LLMs), author and commentator Jack Rasmus posed some pertinent questions, which I felt compelled to answer.

Jack said: “Does Marx’s analysis of machinery, and his view that machinery is frozen labor value transferred to the commodity as it depreciates, fully apply to AI software-based machines with an increasing ability to self-sustain and update their own code without human intervention? That is, do they not depreciate?”

My answer to Jack’s legitimate question presupposes the development of a Marxist epistemology (a theory of knowledge), an area of inquiry that has remained relatively unexplored and underdeveloped.

In my opinion, one of the main features of a Marxist approach is the distinction between “objective production” (the production of objective things) and “mental production” (the production of knowledge). Most importantly, knowledge must be seen as material: not as something intangible, nor as a mere reflection of material reality. This allows us to distinguish between two types of means of production (MP), objective and mental, both of which are material. Marx concentrated mainly, but not exclusively, on the former. However, in his works there are many clues about how we should understand knowledge.

A machine is an objective MP; the knowledge embodied in it (or disembodied from it) is a mental MP. AI (including ChatGPT) should therefore be seen as a mental MP. In my opinion, given that knowledge is material, mental MPs are just as material as objective MPs. Therefore, mental MPs have value and produce surplus value if they are the result of human mental labor performed for capital. AI thus involves human labor; it is simply mental labor.

Just like objective MPs, mental MPs increase productivity and eliminate human labor. Their value can be measured in working hours. The productivity of a mental MP can be measured, for example, by the number of times ChatGPT is sold, downloaded or applied to mental labor processes. Just like an objective MP, its value increases as improvements (more knowledge) are added to it (by human labor) and decreases due to wear and tear. Mental MPs (AI) therefore not only depreciate, but depreciate at a very fast rate. This depreciation is due to technological competition (obsolescence), not to physical wear. And, just like objective MPs, their productivity affects the redistribution of surplus value. As newer ChatGPT models replace older ones, the older models lose value relative to the newer, more productive ones, owing to the productivity differences and their effects on the redistribution of surplus value (Marx’s price theory).
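A toy arithmetic sketch in Python may make this concrete (all numbers are invented purely for illustration): if the value of a mental MP is measured in labor hours, the value transferred per copy falls as the same piece of knowledge is sold or applied more widely.

```python
# Toy arithmetic with invented numbers: the labor hours embodied in a
# mental MP are spread over every copy sold or applied, so the value
# transferred per copy shrinks as distribution widens.
development_hours = 100_000   # hypothetical labor embodied in the software
copies_applied = 1_000_000    # hypothetical sales, downloads or uses
value_per_copy = development_hours / copies_applied
print(f"{value_per_copy:.4f} labor hours transferred per copy")  # 0.1000
```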

Jack asks: “Is that capacity based on human labor or not? If not, what does a ‘no’ mean for Marx’s key concept of the organic composition of capital and, in turn, for his frequently stated support (MR’s and mine – GC) for the hypothesis of the tendential fall in the rate of profit?”

My answer above was that this “capacity” is, in fact, not merely based on human (mental) labor; it is human labor. From this perspective, there is no problem for Marx’s concept of the organic composition of capital (C)[1]. Since AI, and therefore ChatGPT, is a new form of knowledge, of mental MP, the numerator of C is the sum of the value of the objective MPs plus the mental MPs. The denominator is the sum of the variable capital spent in both sectors. The rate of profit is therefore the surplus value generated in both sectors divided by (a) the sum of the MPs in both sectors plus (b) the variable capital spent, also in both sectors. Thus, the law of the tendential fall of the rate of profit is not altered by mental MPs.
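In symbols, a minimal sketch (the subscripts $o$ and $m$, marking the objective and mental sectors, are my own labels added for clarity; $c$, $v$ and $s$ denote constant capital, variable capital and surplus value, as in the footnote):

$$C = \frac{c_o + c_m}{v_o + v_m}, \qquad r = \frac{s_o + s_m}{(c_o + c_m) + (v_o + v_m)}$$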

To better understand the points above, we need to unravel and develop Marx’s implicit theory of knowledge. That is what the following paragraphs do, albeit in an extremely succinct version.

Consider first classical computers. They transform knowledge on the basis of formal logic (Boolean logic, or Boolean algebra), which excludes the possibility that the same statement is true and false at the same time. Formal logic, and therefore computers, exclude contradictions. If they could perceive them, these would be logical errors. The same applies to quantum computers.
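A minimal illustration in Python (my own example, not from the text): in Boolean logic a statement and its negation can never both be true, which is precisely the exclusion of contradiction described above.

```python
# The law of non-contradiction in Boolean logic: "A and not A" is
# always False, whatever truth value A takes.
for a in (True, False):
    assert not (a and not a)
print("Formal logic admits no true contradictions.")
```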

In other words, formal logic explains predetermined mental labor processes (where the outcome of the process is known in advance and is therefore not contradictory to the knowledge that enters that labor process), but it excludes open mental labor processes (in which the result emerges as something new, not yet known). An open process draws on a potential and formless stock of knowledge, which has a contradictory nature owing to the contradictory nature of the elements sedimented in it. Unlike formal logic, open logic [or dialectical logic – tr.] is based on contradictions, including the contradiction between the potential and the realized aspects of knowledge. This is the source of the contradictions between aspects of reality, including elements of knowledge.

Going back to the previous example: for open mental labor processes, A=A and also A≠A. There is no contradiction here. A=A because A, as a realized entity, is equal to itself by definition; but A≠A also holds, because the realized A can be contradictory to the potential A.

This also applies to artificial intelligence (AI). Like computers, AI works on the basis of formal logic. For example, when asked whether A=A and also whether, at the same time, A≠A can hold, ChatGPT answers in the negative. Because it works on the basis of formal logic, AI lacks the pool of potential knowledge from which to extract further knowledge. It cannot conceive of contradictions because it cannot conceive of the potential. These contradictions are the humus of creative thought, that is, of the generation of new, still unknown knowledge. AI can only recombine, select and duplicate already existing forms of knowledge. In tasks such as vision, image recognition, reasoning, reading comprehension and gaming, AI systems can perform much better than humans. But they cannot generate new knowledge.

Consider facial recognition, a technique that compares an individual’s photograph against a database of known faces to find a match. The database consists of a number of known faces. To find a match, the system selects a face that is already realized, that is, already known. There is no generation of new knowledge (new faces). Facial recognition can find a match much faster than a human. This makes human labor more productive. But selection is not creation. Selection is a predetermined mental process; creation is an open mental process.
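A sketch of this point in Python (the embedding size, data and helper name are my own invention): matching simply selects the nearest already-known face, so nothing outside the database can ever be the output.

```python
import numpy as np

def find_match(probe: np.ndarray, database: np.ndarray) -> int:
    """Return the index of the known face embedding closest to the probe."""
    distances = np.linalg.norm(database - probe, axis=1)
    return int(np.argmin(distances))  # selection from what already exists

known_faces = np.random.rand(1000, 128)  # 1,000 known faces as 128-dim embeddings
probe = np.random.rand(128)              # the photograph to be matched
print(find_match(probe, known_faces))    # index of an already-known face
```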

Consider another example. ChatGPT seems to emulate human creative writing. In fact, it does not. It draws its knowledge from abundant text data (the objects of mental production). The texts are divided into smaller parts (phrases, words or syllables), the so-called tokens. When ChatGPT writes a passage, it does not choose the next token according to the logic of the argument (as humans do). Instead, it chooses the most likely token. The written output is a string of tokens assembled on the basis of the statistically most likely combination. This is a selection and recombination of already realized elements of knowledge, not the creation of new knowledge.
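A toy Python sketch of that claim (the probability table is invented purely for illustration): greedy decoding picks the statistically most likely continuation, not the one demanded by the logic of an argument.

```python
# Invented next-token probabilities for some partially written fragment.
next_token_probs = {"the": 0.40, "a": 0.30, "art": 0.15, "object": 0.10, "urinal": 0.05}
chosen = max(next_token_probs, key=next_token_probs.get)
print(chosen)  # -> "the": the most probable token, chosen statistically
```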

As Chomsky et al. (2023) state: “AI takes vast amounts of data, looks for patterns in it, and becomes increasingly proficient at generating statistically probable results, such as apparently human language and thought… [ChatGPT] simply summarizes the standard arguments of the literature.”

It may happen that ChatGPT produces a text that humans have never thought of. But that would still be a summary and restatement of already known data. No creative writing could emerge from this, because new realized knowledge can only emerge from the contradictions inherent in potential knowledge.

Morozov (2023) provides a relevant example: Marcel Duchamp’s 1917 artwork Fountain. Before Duchamp’s work, a urinal was just a urinal; with a change of perspective, Duchamp turned it into a work of art. When asked what Duchamp’s bottle rack, snow shovel and urinal had in common, ChatGPT correctly answered that they were all everyday objects that Duchamp transformed into art. But when asked what present-day objects Duchamp might transform into art, it suggested smartphones, electronic skateboards and face masks. There is no hint of any genuine “intelligence” here. This is a well-run but predictable statistical machine.

Marx provides the appropriate theoretical framework for the understanding of knowledge. Human beings, in addition to being unique concrete individuals, are also bearers of social relations, as abstract individuals. As abstract individuals, “humans” is a general designation that obliterates the differences between individuals, all of whom have different interests and worldviews. Even if machines (computers) could think, they could not think like class-determined human beings, with different and class-determined conceptions of what is true and false, right and wrong. Believing that computers are capable of thinking like human beings is not only wrong; it is also a pro-capital ideology, because it is blind to the class content of the knowledge stored in the labor force, and therefore to the contradictions inherent in the generation of knowledge.

Source: Michael Roberts, ChatGPT, Value and knowledge

Translation: Natalia Estrada.


[1] The organic composition of capital (C) is the result of dividing constant capital (machines and other means of production) by variable capital (wages): C = c/v.
