Introduction and Translation by David Ownby, Blog “Reading the China Dream”. This web site is devoted to the subject of intellectual life in contemporary China, and more particularly to the writings of establishment intellectuals. What you will find here are essentially translations of Chinese texts that my collaborators and I consider important.
Introduction
For the past couple of weeks, the entire world has been talking about ChatGPT, and China is no different. The text translated here is representative of the Chinese equivalent of what you might read in the pages of the New York Times or El Pais, though the format is collective rather than individual. The roundtable was organized by Xu Jilin (b. 1957), a liberal public intellectual whom I have been following for some time, and who has developed an interest in technology and online culture in recent years (see here, for example). His fellow panelists are all young professors at Xu’s university, East China Normal in Shanghai. They include: Jiang Yuhui 姜宇辉, Institute of Politics and International Relations; Jin Wen 金雯, Chinese Department and the National Chinese Culture Institute; and Zhang Xiaoyu 张笑宇, World Politics Research Center. It is not clear which ChatGPT they sampled. Apparently, Baidu—the rough Chinese equivalent of Google—is under pressure to launch its version, winningly dubbed “Ernie,” but some rumors have it that Ernie is not ready for prime time and that the launch will be a disaster.
There is nothing surprising or particularly revealing in the exchange between Xu and his younger colleagues, but it suggests once again that Chinese intellectuals inhabit, to some extent, the same world as we do; I found it interesting that one of the panelists tested ChatGPT’s ability to understand metaphor by feeding it a Leonard Cohen song. If you are interested in the more technical details of what is going on with ChatGPT or AI in China, please subscribe to Jordan Schneider’s ChinaTalk, which provides much in-depth information.
Translation
Xu Jilin: Today we are going to talk about ChatGPT and artificial intelligence from the perspective of technology, human nature, and ethics. Over the past couple of weeks there have been lots of experts talking about ChatGPT from a technical point of view, but I think the invention of ChatGPT is not merely a technological phenomenon, but is also cultural, and it is even closely related to society, economy, and politics, so perhaps we should think about how this revolutionary breakthrough in artificial intelligence will impact mankind and individuals from a broader perspective. I’d like to start by asking Xiaoyu to tell us what this technology is.
Zhang Xiaoyu: OpenAI has recently launched a line of similar products including ChatGPT, some of which even outperform ChatGPT in various ways, and I would like to use ChatGPT to talk about what kind of significance this newly emerging AI technology might have and the roles it might play.
Many people probably remember a few years ago when AlphaGo defeated the world’s top Go players one after the other. At that time, AlphaGo had already surpassed previous programs, which could only produce results according to the algorithms written by their programmers. Instead, it was able to learn and evolve on its own after fully grasping the basic rules of Go. Today, AI deep learning has moved well beyond recognition tasks such as image recognition, which are used to accomplish various practical jobs. Not only can it recognize patterns and learn, it can also generate new content, so we can understand it as having the ability to create.
In terms of its conversational ability, ChatGPT is a qualitative leap forward compared to the “chatbots” of the past. Although its core algorithm has not yet been revealed, we can understand it from several angles. For example, the name GPT contains G for “generative” and P for “pre-trained.” ChatGPT is not a chatbot that composes each response from scratch in real time; instead, it rests on a huge pre-trained model, on the basis of which it repeatedly attempts to solve the problem presented to it. Finally, T means “transformer,” the architecture that generates near-human language through multiple layers of transformation.
An approximate understanding of an early version of the algorithm is this: it starts with a token and, using vector representations of words (embeddings), predicts the next token, then a word, and then a sentence. If we understand how it works, we can see that ChatGPT is not really learning to speak in the way that humans do, but instead is trying to imitate human speech as closely as possible. So even if it speaks like a human, it does not think like a human.
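The autoregressive loop described here (predict the most likely next unit, append it, repeat) can be sketched with a toy program. This is a minimal illustration, not OpenAI's actual method: a hand-written bigram probability table stands in for the trained transformer, and the vocabulary and probabilities are invented for the example.

```python
# Toy illustration of autoregressive generation: at each step the model
# picks a continuation given the current context, then feeds its own
# output back in. A real GPT uses learned transformer weights over a
# large token vocabulary; this hand-written bigram table is a stand-in.

# "Model": probability of the next token given the current token.
BIGRAMS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"snake": 0.7, "skin": 0.3},
    "a": {"snake": 0.5, "skin": 0.5},
    "snake": {"sheds": 0.8, "<end>": 0.2},
    "sheds": {"skin": 0.9, "<end>": 0.1},
    "skin": {"<end>": 1.0},
}

def generate(max_tokens=10):
    """Greedy decoding: always take the highest-probability next token."""
    token = "<start>"
    output = []
    for _ in range(max_tokens):
        nxt = max(BIGRAMS[token], key=BIGRAMS[token].get)
        if nxt == "<end>":
            break
        output.append(nxt)
        token = nxt
    return " ".join(output)

print(generate())  # → "the snake sheds skin"
```

Real systems sample from the predicted distribution rather than always taking the single most likely token, which is one reason ChatGPT's answers vary from run to run; but the basic mechanism is this same loop of conditional prediction, not human-style reasoning.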
Xu Jilin: Thanks, Xiaoyu, for the introduction. Now let’s move on to the question of emotions and talk about what the user experience is like.
Jin Wen: Xiaoyu said something very important a minute ago: ChatGPT seems to speak a human language, but it thinks in a completely different way than do humans. So, if humans and ChatGPT think completely differently, is it possible for them to talk on the same wavelength?
Natural human language is a kind of symbolic system that has evolved over a long period through human exchange and understanding, and it has a certain arbitrary quality. Even when people are speaking, the feelings or ideas we want to express are never seamlessly connected to language. Nor is human language completely natural and spontaneous; there is always a process of internal transcription and translation. In fact, people are often struck by the powerlessness of language to express their innermost thoughts or feelings. For example, direct expressions of pain in most languages are rare, consisting basically of “pain, severe pain, or slight pain,” although these expressions do not suffice to convey the pain individuals experience.
When we look at the entire range of emotions, human language is even more fluid. To express an individual feeling, humans often use a metaphorical, roundabout approach. Thus those who use human language creatively are bound to come up with very interesting and unique ways to express a particular emotion.
For this reason, I thought up a test to see whether ChatGPT could understand metaphors. The example I used was the song “Treaty,” by the Canadian singer Leonard Cohen, in which Cohen says something like: “Everyone seems to have sinned, to have lived through their lives without being able to shed their guilt; and even if they do leave their guilt behind, they remain flawed people.” But he did not express these thoughts directly, and instead used a metaphor, singing about a snake that feels uneasy because of its sins, and so sheds its skin, in the hope that this will allow it to put its sins behind it. Yet even after shedding its skin, the snake remains weak, and its venom continues to pulse throughout its body. Cohen is saying, in other words, that even if you put God behind you and with it your sense of guilt, you remain a damaged person and life is still painful.[2]
So, how did ChatGPT understand the metaphor? It did not understand that the snake shed its skin because of its sins, but rather thought it was because it needed to transform its life in important ways that would make it stronger and more resilient. Although this may not be a happy process, the snake will also learn from it and become stronger.
We can see several features of ChatGPT here. First, it does not know much about the human language of allusions; it is unaware of the metaphor of the serpent from the Bible, its close link to original sin and to the Garden of Eden. Therefore, it can only put together an account based on the facts presented. Second, it struck me that ChatGPT is built to be very “positive,” and if you talk to it from a pessimistic point of view, it will definitely understand it in a positive way and try to cheer you up and encourage you. So it starts out by denying the snake’s guilt, and then emphasizes that the snake’s change of skin will make it better. Third, since ChatGPT cannot understand the sentence metaphorically, it does not know that the serpent is a metaphor for man. One reason is that ChatGPT itself does not face this human dilemma, and another is that the machine has no experience, no body, and no social interactions, so it has a hard time understanding the concept of metaphor and why people use one thing to talk about another.
In my view, everyone is constantly being creative in their use of everyday language. Because language is so “standard,” humans are unable to express themselves without this creativity. Therefore, the use of natural language requires more creativity than we think, while ChatGPT, lacking this creativity, is clueless and boring. I am highly skeptical that it will be an interesting interlocutor in the future, and this may be a very difficult hurdle for large language models to overcome.
Xu Jilin: I too have spent a fair bit of time with ChatGPT over the past couple of days, and my feeling is that ChatGPT is first-rate in terms of logic, second-rate in terms of knowledge, and third-rate in terms of writing ability. I say its logic is first-rate because its algorithm is so powerful that the whole conversation flows very smoothly, and it understands your language and context. The answers it comes up with are sometimes wrong, but even so, its logic remains clear. I say its knowledge is second-rate because its database in English is so powerful that it can answer any question; however, it knows a lot less about China, and the data it comes up with is often ridiculous. I say its writing is third-rate because its answers are all lacking in personality. It is like a research assistant that has the proper basic skills; its writing is very standardized and courteous, but it is impersonal and lacks literary talent.
Jiang Yuhui: At this stage, I don’t feel ChatGPT has helped me a great deal, but compared to the huge bubble of the Metaverse, my judgment of ChatGPT is very positive. I think it will ultimately make a huge contribution to the development of human knowledge, the development of writing, and the development of thinking.
Many people are worried that artificial intelligence is developing too fast and that humans will wind up becoming slaves to technology. But to me, ChatGPT does not look like an enemy, but instead an assistant that helps me with repetitive tasks and low-end thinking in the process of learning, writing papers, and doing research. This includes searching for and filing information, keeping up with the literature, and so on. With this help, humans can have more room to be creative in terms of their thinking and knowledge.
Of course, many of my friends do not agree with this idea and feel that individuals cannot have intellectual growth without the experience of completing repetitive tasks, such as learning languages and acquiring basic knowledge. But in the age of artificial intelligence, individuals don’t need to spend much time doing those repetitive tasks. Rather, they should focus on more creative work. I’ve read somewhere that ChatGPT will revolutionize education in China, and indeed throughout the world. Although there is a common belief that education abroad is about inspiring the student, while in China everything is about rote learning, there is still a lot of mechanical and repetitive learning in education systems everywhere. ChatGPT can do part of this intellectual and linguistic labor, allowing education to move closer to “inspiration,” stimulating people’s creativity and thought potential. Of course, ChatGPT might also develop toward a kind of alienation. If its linguistic and intellectual capabilities vastly exceed those of human beings, will it wind up imposing a kind of discipline on mankind or engaging in manipulation? Will it rob people of their freedom in the realm of technology?
On the other hand, artificial intelligence also has a great capacity to analyze or even design human emotions. I spend a fair bit of time studying video games, and in this field there is a special term called “emotional engineering.” The reason why a game is fun is that it manipulates human emotions. All in all, I continue to look forward to the collaboration between ChatGPT and human beings, which will unleash the potential on both sides.
Xu Jilin: Yuhui offers us an interesting perspective. I once asked ChatGPT: “Can you think like a human being?” It answered that it could not have the autonomous consciousness and subjective experience of human beings, nor could it be as creative and evaluative, nor could it make moral judgments. It said that while it excels in some things, it cannot completely replace human thinking and can only assist human beings in certain tasks.
So, as AI continues to evolve, will it produce a kind of wisdom? Some theories suggest that wisdom is gained through experience, and that this experience involves not only human reason, but also human emotion and will. According to the philosopher Michael Polanyi (1891-1976), there is a kind of tacit, ambiguous knowledge which is ineffable and can only be learned through practice, such as when we learn to drive a car or to swim. The bodyless AI seems to be outside of such experiences, and AI may well be unable to gain wisdom as do human beings.
Zhang Xiaoyu: If we start from this perspective, we are asking the wrong questions. Artificial intelligence does not need to acquire “ambiguous wisdom” like humans do. Ambiguous wisdom is the result of the limitations of human senses and thinking ability, the fact that we must resort to what we call metaphysics or meditation in order to grasp something. But if, with the support of other disciplines, AI can achieve direct observation, analysis, and understanding solely through data collection, then it can dispense with “ambiguous wisdom.” So I think AI can arrive at wisdom without going through what humans do.
In addition, if we educate AI properly, it might be able to understand metaphors. It may not understand what you are talking about, but it can mimic your language. If we teach it beforehand what a certain metaphor means, it can use that metaphor naturally in a conversation. In fact, in a certain sense, metaphors used in the history of literature and ideas are nothing more than imitations of those who first created them. From this perspective, it is inappropriate to deny that AI can be as intelligent as humans.
This leads to Descartes’s thought experiment in which he proposed the concept of “evil demons:” if there are evil demons that can control all your senses and manipulate all your perceptions of the external world, what else can you be sure of in your mind? By the same token, if AI can be an omnipotent imitator, even if it does not have any emotion or intelligence of its own and cannot think like a human, it can imitate a person with these abilities. From this point of view, who are we to say that AI is less intelligent than humans? Is there any basis to question the intelligence of AI other than the fact that it has no emotions, has no ambiguous wisdom, and is not human?
Second, in terms of social function and political structure, the extent to which AI can replace humans depends not on the upper limit of the algorithm, but on the lower limit of human capacity. A great number of white-collar jobs in everyday life are themselves made up of a repetitive labor requiring very simple interactions. This is the kind of work many young people find when they graduate from school and enter the workplace. Think about teaching assistants, paralegals, assistant copywriters. They must go through a long period of training before they can climb the ranks, but now their jobs are easily replaced by algorithms. This poses the question: if AI can replace many basic jobs, will it lead to huge social injustice? Will the involution brought about by technological advances and social structural injustices lead to intergenerational injustice in the future?
Xu Jilin: I have a different view on this. In a previous meeting here at East China Normal, an AI expert said that they found during the R&D process that AI is very familiar with rational knowledge, but has difficulty mastering the knowledge and skills possessed by children. This is because children function less based on reason, and more on child-like intuition and perception, which AI lacks. The point Xiaoyu just made has to do with our preconceptions about human beings: human beings are understood to be rational, but they also possess enlightenment and intuition. Enlightenment and intuition are fundamental to creativity. In that sense, what exactly is a human being? Following the progress of technology, we often reduce human beings to their rationality, but this is a very shallow understanding of human nature. Hume famously said that reason is merely the slave of the passions, by which he meant to dispel the myths that modern enlightenment has bequeathed to man. This brings us to the limitations of the AI set up by OpenAI, which include the absence of emotional content. There are also ethical and moral considerations here. If ChatGPT has its own will and a wealth of emotions, then AI will really become a new kind of human being that is difficult for us natural people to control.
Jin Wen: I’d like to continue on from Xiaoyu’s point that ChatGPT can do a good job as an assistant worker, will not necessarily eliminate large numbers of human jobs, and can even provide useful help to many workers. I agree with this but have two objections. First, Xiaoyu may feel that the few geniuses among humans who manage to surpass themselves are the hope for all humanity, but the lower threshold of most of us is just too low, so there is a sense of disappointment and pessimism about human beings, and therefore hope for some new kind of intelligence.
Xiaoyu said a few minutes ago that enlightenment and ambiguity are based on human limitations. But the reason why human beings can have emotions we are proud of is precisely because we have material bodies. Emotions are the responses of the material body encountering the outside world. If we did not have a body, we would not be able to respond, we would not be touched by any changes, and our emotions would not exist.
So is having emotions a good thing or not? If you look at it from a very negative perspective, some human characteristics are simply not deserving of pride, but the reason human civilization and human society are still worth preserving is because our best characteristics are also based on those limitations. The upper limit of humanity is based on the lower limit of humanity.
If man did not have a body and there was no gulf between human experience and language, he would have no incentive to create a thousand different languages. In fact, the most brilliant human civilizations are based precisely on human limitations. As Borges expresses it in the story “The Immortal:” the ultimate limitation of human beings is the fact that they die, that they have fast-decaying bodies, but that is the very reason for their greatness. It is because of the decay of their finite bodies that human beings have a sense of self, a sense of self that brings great joy and revelry, but also great pain. In order to cope with these great emotional ups and downs, human beings have created their own civilization.
These civilizations also contain a fatal weakness: they divide people. ChatGPT, like the mechanical tools invented before it, reminds us that there is a natural tendency for human society to allocate more time to the elite for their so-called advanced labor, and to leave the repetitive and simple tasks to the “untalented” people. In the 18th century, such a tendency evolved into the theory of the division of labor: for a society’s system of production to be extremely efficient, some people had to do the simple, necessary work, while others did the coordinating, managing, and innovating. The emergence of ChatGPT in no way threatens to upend this order, but rather to reinforce the division of labor. ChatGPT thus reinforces a very wrong way of thinking: some people work on complex things, while others work on simple things.
This thinking is wrong for two reasons. First, the most complex creative work often requires the simplest intuition, and that intuition requires repetitive work to develop. If you have never studied Spanish, for example, and consistently rely on translation software to translate Spanish into English or Chinese, then you will not be able to understand Spanish poetry; you might be able to grasp the general meaning of the poem, but not the poem itself. In like manner, the kind of division of labor produced by artificial intelligence would erase all human creativity.
Second, it is completely ethically indefensible. In a future where artificial intelligence participates in the division of labor, we will be divided into two classes: slaves and masters. But because masters will not be engaged in simple repetitive work, they will be unable to create anything new, and will only repeat the same routines. The basic impetus behind the inspiration for any “creation,” even a picture or a movie made by artificial intelligence, comes from the irreplaceable and constantly renewed material sensory experience and social life experience of human beings, and not from the repetition of popularized routines.
In other words, the process of social differentiation does great harm to each class: those who think they are in the upper class see their creativity erased; those in the middle or lower classes are unable to further develop their intelligence, because they are limited to doing repetitive work. So from an ethical standpoint, an overly extreme division of labor will create huge divisions in society, having tragic impacts on all classes. The emergence of ChatGPT will exacerbate this tragic tendency, but it will also serve as a warning to remind us of the crisis of dividing humans and machines into classes.
To take the simplest example, if we talk to ChatGPT every day, we will forget the upper limit of natural language. People will gradually lose the ability to express themselves, fail to understand themselves, and eventually fail to interact with others in a healthy way. Human friendship, love, and all positive relationships will thus disintegrate, and human society will literally wither away. This is the greatest crisis imaginable.
Zhang Xiaoyu: I would like to take issue with Jin Wen’s analysis of social structure and history. Throughout history, stratification and differentiation have accompanied human progress. We should not deny the rationality of stratification, but should instead distinguish between those who deserve to profit from this structure and those who do not. For example, Watt invented the steam engine, Newton discovered the laws of physics, and Yuan Longping developed hybrid rice, and they all deserve to be rewarded. But in the process of social progress, it is unreasonable that there are some among the upper classes who rely on violence to take what they want, and use unjust political power to exploit the many, making it impossible for others to realize their potential.
Setting technology aside, there is already a great deal of injustice in the structure of human society; there is no need for us to place all the blame on technology, and we should instead bravely face such injustices head on, because it is the people causing these injustices who are the objects of our revolution. Instead of directing our anger at machines, as the Luddite movement did during the industrial revolution, we should see that social injustices are at the outset caused by unjust social structures. Understanding this, we will see that throughout human history, technological forces have often been our allies, and not our enemies, in helping us overthrow irrational social structures. We need to realize that social structures are already unjust, that there are too many bullshit jobs in society revolving around the leaders, and that this is what is causing humans to become more and more like machines. We need to go ahead and break through these social injustices, and only then can everyone go on to become what they were originally meant to be.
Jiang Yuhui: Throughout history, technology has been a double-edged sword, and this includes ChatGPT. Jin Wen feels that it poses a threat to the true goodness and beauty found in human nature; Xiaoyu sees more of a possibility of liberation. In the emerging stage of AI, we should try to make it move in a better direction instead of verbally attacking it. My position has always been that scientists create technology for the benefit of humanity. Perhaps at this stage, humanities scholars can join forces with technology scholars to prevent ChatGPT from moving in an evil direction.
In addition, I do not believe that only a few people will enjoy the benefits of ChatGPT. On the contrary, when ChatGPT takes over repetitive tasks in terms of human language, knowledge, and thinking, everyone should be able to develop their own potential. But this presupposes that society will provide opportunities for everyone. The evolution of technology needs to be integrated with the evolution of society. Technology cannot solve all problems, and humanity should activate itself in such a way as to complement the development of technology, making society more equal.
After the emergence of AI, society may indeed become more equal, because AI does not discriminate in the ways that humans do, on the basis of race, gender, geography, or even wealth; this is not how AI thinks. From this point of view, perhaps machines are better equipped to create an equal platform on which people can equally realize their individual potential.
However, I have also noted a term called the “fishbowl effect,” created by a foreign scholar who is critical of the Metaverse. His argument is that all the masses are in the fishbowl, and looking from the inside out, they think they are free to swim where they want to; but in fact, it is the few technological elites outside the fishbowl who pass laws and control them. As a result, many people believe that AI will create the greatest inequality ever seen in human history, an inequality based in intelligence. If a person is smarter than you, if they can think better than you, they will be able to crush you under their foot. Still, I personally believe that ChatGPT can liberate humans from the heavy demands of labor. And I also believe that if everyone is given the opportunity, everyone may become a creator at a very high level; it is the institutionalized positions and repetitive labor in modern society that leave no room for them to realize their potential.
I would also like to conclude with a vision: that the great achievements of civilization in human history will remain, but that the next wave of creation may not be achieved by humans using language, symbols, and tradition. Instead, the next wave of creation will be the product of man and machine working together, with machines providing inspiration and motivation for mankind.
Xu Jilin: Yuhui’s vision reminds me of the romantic imagination of Greg Brockman, the co-founder of OpenAI. He once suggested that AI and intelligent robots will replace all jobs, and that the labor cost of society in the future will be zero. In the future all people will need only a basic income, and then they can become free people, something that might break through the status quo of today’s capitalist inequality.
But I’m afraid that this vision is a new utopia, at least for now, because behind the companies developing artificial intelligence stand the giant Internet companies that support them. These Internet giants were present at the birth of the Internet, and at the time felt that an era of anarchy in the true sense of the word was about to dawn, with everyone finally having equal rights of expression. But after many iterations of the Internet, society has instead become more unequal. Global resources, wealth, technology, and talent are increasingly concentrated in the leading countries and companies. In the real world, every technological advancement is pushed forward by capital, in a winner-take-all game. All the resources, talents, and technologies are concentrated in the hands of a few oligarchs.
So does a utopia of freedom and equality await us in the future? Will society become more fragmented, producing a new hierarchical order? In terms of these questions, there are some things we can think about and others where we must just wait and see. In this context, I believe in Zhang Taiyan‘s 章太炎 (1869-1936) theory as expressed in his Separating the Universal and the Particular in Evolution, written at the end of the Qing dynasty: he believed that both good and evil were progressing, with evil one step ahead of the good. Every step forward in technology will be accompanied by a step backward somewhere else. Extreme optimism is to be avoided, and extreme pessimism is also unhelpful, because human beings throughout history have always developed despite contradictions, and the human predicament remains eternal.
Notes
[1]许纪霖, “对谈|ChatGPT:是新世界的诞生,还是人类末日的开始?” posted on Xu Jilin’s WeChat feed on February 27, 2023.
[2]Translator’s note: Cohen’s lyrics are: “I heard the snake was baffled by his sin/He shed his scales to find the snake within/But born again is born without a skin/The poison enters into everything.”