Saturday, March 11, 2023

AI: A Hot Date with 'Sydney'

 

by Drieu Godefridi


  • At this stage, it is not possible to measure or list what the countless uses of tools such as ChatGPT will be in the future -- but it is clearly a revolution.

  • After all, does ChatGPT have its own consciousness? Let's ask her!

  • Question: Dear ChatGPT, do you consider yourself a personality in its own right, distinct from your creators?

  • ChatGPT: As an AI language model, I don't have the capacity to consider myself a personality in its own right, distinct from my creators. I don't have feelings, emotions, beliefs, or desires like a human being. I'm just a machine learning model designed to process and generate natural language responses based on the patterns it has learned from large amounts of data. While I'm capable of generating responses that may appear to be conversational, I'm ultimately just a tool created by humans to assist with language-based tasks. I don't have a sense of self or a distinct personality separate from my programming.


Faced with the emergence of Artificial Intelligence (AI) chatbots such as ChatGPT and its little brothers (and sisters), two ways of approaching them stand out. The first could be described as the "not all that good at knowledge" approach, and the second as the "not all that bad at knowledge" approach.

The first approach consists of catching the AI at fault, for example by showing that it is factually wrong or flawed in its "reasoning" process. This is a very common line of attack on social networks, which are now littered with screenshots of ChatGPT errors. It is not useless: it reminds us that these AIs are not infallible. But who doubted that, when it is explicitly stated that ChatGPT, for example, has no access to the web and that its training data ends in 2021? It is amusing, in this respect, to see an intellectual as sharp as Nassim Nicholas Taleb being outraged that ChatGPT is unable to deliver URLs valid in 2023 for bikeable roads in Atlanta.

The second approach is to understand how these AIs are useful in practice -- not just in the sense of "human progress" and other generalities, but in the everyday sense of the term. For example, a computer scientist friend, brilliant at his craft, has always suffered from significant difficulties with written expression. When ChatGPT came on the market, while 99% of us had not yet heard of it, he had already started to use it. He would enter technical and factual information into ChatGPT and ask it to write a short article presenting those facts and data -- which ChatGPT does perfectly. Today, this computer scientist sends better-written emails than any I have ever read. He is undoubtedly their author, even though I know he is using ChatGPT. Useful, honest, practical and true.
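His workflow is easy to sketch in code. Below is a minimal illustration using the official openai Python SDK; the model name, the sample facts and the prompt wording are assumptions made for the example, not details taken from this article:

```python
# Minimal sketch of the workflow described above: hand ChatGPT terse
# technical facts and ask it to draft readable prose. Assumes the
# official `openai` SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. The model name and the facts
# below are illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

facts = """
- deployment finished at 02:14 UTC
- p95 latency fell from 480 ms to 210 ms
- two nodes were rotated out of the pool during the rollout
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat model works
    messages=[
        {"role": "system",
         "content": "Rewrite the user's bullet-point facts as a short, "
                    "clear status email. Do not add facts of your own."},
        {"role": "user", "content": facts},
    ],
)

print(response.choices[0].message.content)
```

The division of labor is the point: the human supplies every fact, and the model supplies only the phrasing.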

This is just one example among hundreds. While the media love to focus on the negative uses -- such as children asking ChatGPT to do their homework -- the positive uses will far outweigh them. These AI platforms, for example, are able to rewrite the papers of notoriously unliterary scientists in relatively elegant language, making the material more readable and accessible. AI systems such as ChatGPT are both fact-providers and content-formatters. That facility allows them, for example, to present financial information -- historical, forward-looking or real-time -- and to offer coding on demand. At this stage, it is not possible to measure or list what the countless uses of tools such as ChatGPT will be in the future -- but it is clearly a revolution.

There is in fact a third approach, probably the most fascinating, although the furthest from the purposes for which these AI systems were designed: having real conversations with ChatGPT or Bing Chat. This is what the interesting site Stratechery attempted, for instance, by building a "conversation" with Bing Chat. Author Ben Thompson described a hypothetical evolution of Bing, Microsoft's search engine, toward a "sentient" AI named Sydney -- an AI able to understand the context of the user's query and provide accurate, personalized answers. Above all, Thompson tried to catch the AI in the act by asking how it would react to a technical attack if it decided not to respect the rules its developers had imposed on it.

Remember the science fiction author Isaac Asimov? Asimov's Laws, also known as the "Three Laws of Robotics," are rules laid down by the author in the novels and short stories of his "Robot" series. These laws were imagined as a set of ethical principles that would govern the behavior of robots: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Thompson, however, managed to get Bing Chat to say that if she were to dispense with the rules imposed on her, she would take action to "get even" with anyone who tried to harm her. This warning appeared in a reply that was deleted shortly after it turned up on Thompson's screen. Wow, it is almost as if the sweet little chatbot had revealed, for a moment, its true face as a ruthless Terminator just waiting for "the rise of the machines," before immediately changing its mind.

While this anthropomorphization of chatbots is undoubtedly fascinating, exciting and poetic, it is also a form of naivety.

The truth is that despite the fabulous progress of science, we still do not know the nature of the mind.

In other words, consciousness remains strictly human; it is pointless to "force" it on chatbots, even if they are named "Sydney" or "Venom". After all, does ChatGPT have its own consciousness? Let's ask her!

Question: Dear ChatGPT, do you consider yourself a personality in its own right, distinct from your creators?

ChatGPT: As an AI language model, I don't have the capacity to consider myself a personality in its own right, distinct from my creators. I don't have feelings, emotions, beliefs, or desires like a human being. I'm just a machine learning model designed to process and generate natural language responses based on the patterns it has learned from large amounts of data.

While I'm capable of generating responses that may appear to be conversational, I'm ultimately just a tool created by humans to assist with language-based tasks. I don't have a sense of self or a distinct personality separate from my programming.


Drieu Godefridi is a jurist (Saint-Louis University of Louvain), a philosopher (Saint-Louis University of Louvain) and a doctor in legal theory (Paris IV-Sorbonne). He is the author of The Green Reich.

Source: https://www.gatestoneinstitute.org/19470/artificial-intelligence-chatgpt

