Exploring Realistic Use and Metaphors for Understanding ChatGPT
If you work with AI, you are probably inundated with ChatGPT news and hot takes. ChatGPT’s release has generated enormous hype, including comparisons to the invention of the steam engine and fire—or even God. However, these comparisons (apart from being sensationalized) are based on limited, unsystematic use and minimal domain knowledge. I am a conversational AI researcher pursuing a Ph.D. in HCI, and I decided to use ChatGPT in a realistic, work-related context for a month to understand this phenomenon, which has made some very serious people convulse with delirium. Here are the ground rules:
- I would not go out of my way to use ChatGPT. I would only use it if needed for work.
- I would document all my conversations with ChatGPT using extensive journaling.
I began this autoethnography on the 6th of March, and I am now in week 3. At this stage, I have a more concrete understanding of what to expect from ChatGPT and, more interestingly, of what ChatGPT is really like. I will use the incredible power of metaphors to present some ways of thinking about ChatGPT that might help you conceptualize your interactions with it. Spoiler: do not think of ChatGPT as God.
ChatGPT as Robin (and not Alfred)
ChatGPT is not going to be the end of expertise. However, it can help you feel like more of an expert. One of my most common use cases was asking ChatGPT something I already knew the answer to but wanted to confirm. Having that added layer of confirmation made me feel much more secure in what I was writing. A good example of this was checking whether my complicated search query would yield the expected results in a library database. Although I knew it would, it was good to get it confirmed. Another example was asking ChatGPT if my review of a research paper sounded rude (it did). Using ChatGPT in these low-stakes scenarios worked perfectly: you already know the answer but would not mind a second opinion. A good sidekick.
Conversely, avoid using ChatGPT when you lack expertise in the topic you seek information on. ChatGPT might not know the answer, but it will sound pretty convincing, and automation bias will take over. If the answer matters, you must double-check the credibility of the information it gives you. I found this extra fact-checking step to be quite time-consuming. This is also why it is a bit premature to say ChatGPT will replace Google search. In many cases, a conversation is counter-productive, and search is one of those cases—especially with ChatGPT. If you have to confirm ChatGPT’s results on Google, you might as well use Google and save yourself some time.
ChatGPT as Keyser Söze
I have alluded to this before, but ChatGPT is a master bullshitter. Like Keyser Söze from “The Usual Suspects,” it is adept at generating stories from little context or a few hints. The key difference is that Keyser Söze was limited to objects in the interrogation room to weave his masterful thread, while ChatGPT has no such limits. It was trained on a huge swath of the internet and uses it well to tell some pretty interesting stories. However, a reliable narrator it is not. Here’s a fun experiment for you. Think of something you know and understand well—something you could lecture on without any preparation. Now ask ChatGPT something related to it. Its answer might cure you of your imposter syndrome.
ChatGPT’s lack of accountability is also a big problem. If you call it out on a lie, it will quickly resort to “I am an AI language model… I am not perfect… trained on a dataset…” This shift from posing as an omniscient being to a mere language model can feel quite absurd (and hilarious). It also renders all of its incessant apologies moot. Is it actually sorry? Or is it just trying to avoid an argument?
ChatGPT as Ace Ventura
If your cat keeps coming home with a fun party hat and you are wondering where it came from, call someone like Ace Ventura. But if your cat does not come home at all, call someone like Sherlock Holmes. Similarly, if you are fretting about writing an introductory email to a researcher you admire, use ChatGPT. But if you need help writing a literature review for a research article, call your advisor. This is to say that if something is important to you, you do not want to use ChatGPT. I am not saying ChatGPT is completely useless; I find it very useful in scenarios where the outcome is not important and minimal effort is good enough.
I used ChatGPT in HCI-related research projects, and I understand that the results might differ for someone in another field. However, I do think that framing your interactions through the metaphors of Robin, Keyser Söze, and Ace Ventura will set realistic expectations and help you delegate appropriate tasks to it.