Tuesday, November 28, 2023

Forget Prompt Engineering - Part 3: Suspension of disbelief

(This is based on my previous texts such as https://laibyrinth.blogspot.com/2023/11/forget-prompt-engineering-there-are.html - and you likely will not get what I'm talking about without reading them).

There was plenty of feedback on my texts about (or rather: against) "Prompt Engineering"; I got a lot of support, but a lot of criticism as well.
It turns out there was one main point of objection, connected to a central question about what ChatGPT is, and how to best work with it.

What did people "object" to?
I got a lot of replies that went along the lines of "Uhm, it seems you are 'talking' with ChatGPT as if it were *actually* intelligent - but it's not."
Or "Your ideas / methods / suggestions *would* work, if ChatGPT were intelligent, and could respond in an intelligent way. But it's not."

These people claimed that ChatGPT is just "a stochastic parrot", a machine, a program, that makes it appear as if it *were* intelligent by some clever tricks and tomfoolery.
They then claimed that, because of this, "my" idea of simply talking with ChatGPT in an intelligent way in order to get a task done could not work. Instead, one would need a grasp of the underlying program, the code hidden behind this "illusion of intelligence" that ChatGPT creates, and would have to access this "hidden layer". Which, they claim, is where prompt engineering comes in, and why it is useful.

And indeed, ever since ChatGPT started its "rise to fame", there have been quite a few heated debates about the actual nature of ChatGPT and its output.
Some people on the fringe came up with quite wild theories, such as that ChatGPT is almost as, just as, or even more intelligent than a human being, that it is sentient, sapient, or even a gift sent by aliens to earth, and other "fantastic" stuff like that. People on the other side of the spectrum repeat the above-mentioned claims - ChatGPT "fakes" intelligence, it's a trick, an illusion, a parrot.

"No Hay Banda. There is no band. It's all a recording".

But the surprising answer is that when working with ChatGPT, all of this is entirely irrelevant. It does not matter. You do not need to ponder these things if you want to collaborate with ChatGPT on a project. Thus the idea that "one cannot use ChatGPT by having a conversation with it, but has to resort to prompt engineering instead, because its intelligence is purely 'simulated'", is not correct.

Now, let's look at why it does not matter.
This is admittedly a bit hard to explain using "mere words" - so I'll resort to analogy, metaphor, and example.

Imagine someone is playing an "interactive fiction" computer game. In the game, he plays the character of a secret agent.
A friend asks him whether he likes the game.
He replies: "It's all great and fun. But I got stuck at the part with the security guard. I tried everything, but the guard would not let me pass, so I can't get into the building".
The friend then says: "Well, you need to bribe the guard, then he will let you through."

Now our example person replies: "Woah, man! Slow down for a moment. This is just a computer game. All bits and bytes. This would mean that the computer would understand what a security guard is, what a bribe is, that someone might be prone to a bribe, and so on! Are you implying the computer running the game is intelligent? That the computer can understand all this?".

Well, the computer is not intelligent (in this case). But you would still get ahead in the game by "bribing the guard". The computer does not need to "truly" understand our world, or human behavior, to simulate this game and this part of the game's story.

The (hypothetical) game was programmed to simulate the human world, the fictional life of secret agents, and the things that come with it. And, in order to play such games, it's best to just *go with it* and imagine that things work similarly to how one would expect them to in our very real world.

I know the above example is a bit abstract, and might even seem a bit unrelated to the topic of AI.
It's just a 'thought experiment', intended to help you, when working with AI, to create a temporary 'suspension of disbelief', as it is called in media theory.
I could give many more examples like that, but that would make this text overly lengthy.
So, let's get back to the debate about ChatGPT:

ChatGPT is designed to be an AI and a chatbot. It is designed to understand intelligent conversation, to act on it, and to provide intelligent output in response.
And it does this very well.

The "inner workings" of ChatGPT are not very important to the everyday user. If it works - it works!

So if you are able to collaborate well with ChatGPT by talking with it in a meaningful way, then *go with it*.
It doesn't matter whether the "creators" used one or more tricks to let ChatGPT feign intelligence.
As long as it gets the job done, right? And as long as the job gets done in an intelligent way.
Thus you have no need for "prompt engineering" if you want to work with ChatGPT.

*Just talk with ChatGPT like you would talk with a human being.*
Just like you would with a human collaborator.

You would not "prompt" a (human) secretary. You would not "prompt" a (human) visual artist.
So please do not prompt a ChatBot!
