Our episode on ChatGPT and Seniors continues to intrigue the old codgers. Please bear with us as we wrap our heads around ChatGPT and what it means to our listeners in their golden years. In this post, Gary goes to the mountain (his ChatGPT account) and returns with some answers to his burning questions.
In our podcast titled ChatGPT and Seniors, we discussed our interactions with the online chatbot called ChatGPT and offered our insights on what it might mean to senior citizens. At one point, we provided a very rudimentary explanation of how ChatGPT works, but I knew it came up short for many curious seniors who might be interested in a bit more detail.
I started writing my description of ChatGPT in terms that most folks in their golden years might understand. However, before I got too far, I had the clever idea that it might be fun to ask ChatGPT to explain itself. What follows is my dialogue with ChatGPT using GPT-4, its “new and improved” language model. One interesting little software bug: ChatGPT often stops mid-sentence. No error message, no explanation. It simply runs out of words and stops generating text.
I signed in to my ChatGPT account and proceeded to put this chatbot through its paces. My questions are in bold type and ChatGPT’s replies are italicized and in quotes. Any commentary I provide is in regular text.
How does ChatGPT work?
“ChatGPT, or Chat-based Generative Pre-trained Transformer, is an advanced language model developed by OpenAI. It is based on the Transformer architecture and powered by deep learning techniques. The model works through a series of steps: pre-training, fine-tuning, and inference.
- Pre-training: During this phase, the model is exposed to a large dataset containing parts of the Internet (websites, articles, books, etc.). It learns to predict the next word in a sentence, capturing the structure of the language and general knowledge about the world. This pre-training phase helps the model develop a strong understanding of grammar, syntax, and context.
- Fine-tuning: After pre-training, the model is fine-tuned on a more specific dataset that is carefully generated with human reviewers. Reviewers follow guidelines provided by OpenAI to review and rate possible model outputs for a range of example inputs. The model then generalizes from this feedback to respond to a wide array of user inputs. This iterative process between the model and human reviewers helps improve the model’s performance and ensures it aligns with human values.
- Inference: When a user inputs a prompt, ChatGPT processes the text and generates a response based on the knowledge it acquired during pre-training and fine-tuning. It uses a method called “token-based decoding,” which predicts one word (or token) at a time, considering both the input and the tokens generated so far. Decoding strategies like beam search, nucleus sampling, or temperature-based sampling are employed to balance the trade-off between diversity and coherence in the generated responses.
In summary, ChatGPT works by leveraging the knowledge gained from pre-training on a […ChatGPT just stopped typing out words here]”
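For readers who, like me, still dabble in a bit of code, the “one word (or token) at a time” idea can be sketched in a few lines of Python. Everything here is a toy stand-in of my own (the tiny word list, the scores, and the helper name `sample_next_word` are invented for illustration, not anything from OpenAI), but it shows roughly how temperature-based sampling turns a model’s scores into a next-word choice.

```python
import math
import random

# A made-up table of "next word" scores, standing in for a real model's
# output. A real model scores every token in its vocabulary given the
# words so far; here we hard-code a few plausible continuations.
NEXT_WORD_SCORES = {
    "the cat sat on the": {"mat": 3.0, "sofa": 2.0, "roof": 1.0},
}

def sample_next_word(prompt, temperature=1.0, rng=random):
    """Pick the next word using temperature-based sampling.

    Lower temperature sharpens the probabilities (more predictable text);
    higher temperature flattens them (more varied, riskier text).
    """
    scores = NEXT_WORD_SCORES[prompt]
    words = list(scores.keys())
    # Softmax with temperature: turn raw scores into probabilities.
    weights = [math.exp(s / temperature) for s in scores.values()]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Draw one word at random according to those probabilities.
    return rng.choices(words, weights=probs, k=1)[0]

print(sample_next_word("the cat sat on the", temperature=0.5))
```

A low temperature makes the likeliest word (“mat”) win almost every time; a high temperature lets the long shots (“roof”) show up more often. Real systems do this over a vocabulary of tens of thousands of tokens, with scores produced by billions of learned parameters, repeating the draw until the response is complete.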
OK, this is basically “geekspeak”. Not over the top, but written in a simplified version of the lingua franca of the information technology industry. I can generally follow it, but I spent several decades in the computer industry and still dabble in writing a bit of software now and again. Even with that experience helping me decode the answer, I wanted one that folks who are not computer scientists could easily understand. So I asked for clarification.
Can you simplify this answer?
“ChatGPT is an AI language model developed by OpenAI. It learns how language works and gains knowledge by studying lots of text from the internet. After learning, it’s fine-tuned with the help of human reviewers to make it even better. When you ask it a question, it uses what it has learned to generate a response for you.”
Oy vey! Now ChatGPT dumbs it down so much that it doesn’t explain anything. This latter answer would work as a quick elevator pitch or sound bite. The exchange confirmed to me that, clever as ChatGPT sounds, it is constrained by the data it is trained on as well as by the quality and intent of the human-directed “fine-tuning”. This is important to understand. ChatGPT is not, contrary to media hyperbole, a sentient computer monster like HAL in 2001: A Space Odyssey. At least not today…
Moving on, I asked for more insight into how ChatGPT could seem so smart.
How did ChatGPT gain its intelligence?
“ChatGPT gained its intelligence through a two-step process: pre-training and fine-tuning.
- Pre-training: ChatGPT studied vast amounts of text from the internet, learning language patterns, grammar, and facts about the world. By predicting the next word in a sentence, it gradually improved its understanding of language and context.
- Fine-tuning: After the initial learning, ChatGPT was refined using more specific datasets. Human reviewers followed guidelines to review and rate possible responses generated by the model. ChatGPT then learned from their feedback to provide better and more relevant answers.
By combining these two steps, ChatGPT gained the intelligence to understand and generate human-like responses to a wide range of questions and prompts.”
As another reminder that ChatGPT has limited contextual awareness, it repeated a portion of its first answer without noting that it had already answered the question. Perhaps it was just being polite. Me? I would have started my response in a condescending tone of voice, reminding the listener that they were an ignorant fool: “Gary, as I just told you a few moments ago, …”. Giving credit where it is due, ChatGPT’s answer to this question was decent.
Finally, I asked our chatbot how it compared its intelligence to that of humans.
How does ChatGPT intelligence compare with human intelligence?
“ChatGPT’s intelligence and human intelligence are different in several ways:
- Domain of knowledge: ChatGPT is primarily focused on language understanding and generation, whereas human intelligence spans various domains, including problem-solving, creativity, social skills, and emotional understanding.
- Learning capacity: ChatGPT learns from a fixed dataset, meaning its knowledge is limited to the data it was trained on (up to September 2021). Human intelligence constantly evolves as we learn new information and adapt to new situations.
- Context and common sense: While ChatGPT can understand context to some extent, it may struggle with ambiguous or unclear inputs. Humans are generally better at using context and common sense to infer meaning in conversations.
- Emotional intelligence: ChatGPT does not have emotions or personal experiences, so it lacks genuine empathy and emotional understanding. Human intelligence encompasses emotional intelligence, which enables us to understand, express, and manage emotions.
- Creativity: ChatGPT can generate creative outputs based on patterns it has seen in its training data, but its creativity is limited compared to human creativity, which involves intuition, imagination […]”
Once again, ChatGPT just ran out of words and stopped cold without so much as a by-your-leave. Still, it was a fitting close to the dialogue, since ChatGPT admits it has no emotional intelligence. A human might have quietly sputtered out a simple apology for abruptly terminating the conversation. Not ChatGPT. It’s just a computer that, by the way, provided a decent answer to the ChatGPT-vs-human comparison question.
If you managed to wade through my dialogue with a computer and get here, a few closing thoughts:
- ChatGPT is a human invention and remains under the control of its masters (the scientists and engineers at the company OpenAI). As ChatGPT explained at the outset, it is fully dependent on both the datasets it gets fed (selected by its trainers) as well as the human-directed “fine-tuning” it receives.
- You can be certain that one of ChatGPT’s first datasets was an extensive explanation of how ChatGPT works (pre-training), accompanied by careful fine-tuning from its human controllers.
- The apparent “magic” of ChatGPT is how it generates output that sounds so human. That was a core innovation by its designers. Quite clever and very well executed, but not magic.
For ChatGPT and the others of its ilk to come, the risk of malevolent capabilities will be determined by the datasets on which they are pre-trained and the human fine-tuning they receive. As with almost every technology created by humans, their potential for good or evil depends on us.
What should seniors do about these AI-based chatbots? Be aware that they will certainly become a routine part of your online interactions. Over time, replacing the countless, frustratingly stupid automated chat systems with dialogue that might actually prove helpful will mostly be a good thing. I tend to share the optimism about the future of AI that Bill Gates expressed in a recent blog post. As with all new technology, we will move beyond our first impression of “magic” as we grow to understand how AI works. Effectively trained and managed AI assistants, and even autonomous systems, will become as embedded in our lives as computers and the internet are today. The genie is out of the bottle.
Finally, for those of you who have a New York Times subscription and are interested in learning more about ChatGPT, check out the newsletter by tech columnist Kevin Roose. One of Roose’s well-publicized interactions with an AI chatbot (Microsoft’s Bing, in that case) ended with the chatbot suggesting Roose leave his wife for it.
P.S. Don’t forget to listen to the old codgers on your favorite podcast app or the Camp Codger podcast website.