What do you think of ChatGPT?
One of the common conversations I’ve had over the last two months involves ChatGPT and the impact it has on our lives, and potentially our classrooms.
Have no doubt, things are definitely different. I’m really excited and curious about ChatGPT. But, I’m more interested in the technology behind it.
In this post, I’ll get into the weeds a bit and unpack the technology behind ChatGPT, GPT-3, and the company that is providing it. It’s important to understand the technology and advances that we’re witnessing, without being hyperbolic. It’s also important to pay attention to the companies and human beings behind these advances. All of these will change. Technology will iterate. Tools, companies, and platforms will pop up and ultimately sunset. This post is designed as a starting point to make sense of what is happening right now.
A neural network is a network of algorithms (or computer-based recipes) and simple processing nodes that work together to make sense of information. Each node weighs the inputs it receives and compares the result against a threshold. These values, the weights and thresholds, are known as parameters; the settings used to control the learning process itself (such as the learning rate) are called hyperparameters. Neural networks are a form of machine learning, and networks with many layers are what we call deep learning. To keep it simple, these are forms of artificial intelligence (AI) in which programs are designed to learn on their own.
Think about what happens when you make a decision while driving. Your brain is weighing multiple routes that will lead to the destination, scoring each option based on your prior knowledge, traffic patterns, time of day, weather, etc. The processing nodes and algorithms in a neural network work in a similar way, but they can weigh far more information than our human brains.
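To make the weights and thresholds concrete, here is a minimal sketch of a single processing node in Python. All of the feature values, weights, and the threshold below are invented for illustration; a real network learns these values rather than having them set by hand.

```python
# A single processing node: weighted inputs compared against a threshold.
def node_fires(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return total > threshold

# Decide "take the highway?" from three signals, each scored 0-1:
# light_traffic, good_weather, off_peak_time (invented values).
inputs = [0.9, 0.8, 0.3]
weights = [0.6, 0.2, 0.2]   # traffic matters most to this driver
print(node_fires(inputs, weights, threshold=0.5))  # → True
```

Stack thousands of these nodes in layers, and you have a neural network.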
As an example, say you want to create a program that tells the difference between cars and trucks. Using a data set, you indicate that cars have four wheels, but trucks may have four or more. Cars and trucks both have engines, but truck engines are often larger. Your data may also suggest that cars are designed to carry people, but not cargo, while trucks carry cargo and sometimes people as well. The program can then review vehicles and determine whether they're cars or trucks…or neither. Over time, you can feed it more and more data and have it learn from its mistakes, improving the accuracy of its guesses. Gradually, the program learns which features make up a car and which make up a truck. This is machine learning.
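The learn-from-mistakes loop described above can be sketched as a tiny perceptron, one of the oldest machine learning algorithms. The vehicle features and labels below are made up for illustration; a real system would learn from thousands of labeled examples.

```python
# A tiny perceptron that learns to separate cars (0) from trucks (1).
# Features: [wheel count, engine size in liters, carries cargo (0/1)].
# All example vehicles below are invented for illustration.
training_data = [
    ([4, 2.0, 0], 0),  # sedan
    ([4, 1.6, 0], 0),  # hatchback
    ([4, 5.7, 1], 1),  # pickup truck
    ([6, 8.0, 1], 1),  # box truck
]

weights = [0.0, 0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

# Each pass, adjust the weights whenever the program guesses wrong --
# this is the "learning from its mistakes" step.
for epoch in range(10):
    for features, label in training_data:
        error = label - predict(features)
        if error != 0:
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error

print(predict([4, 1.8, 0]))  # an unseen small car → 0
print(predict([6, 7.0, 1]))  # an unseen large truck → 1
```

After a few passes the weights settle, and the program can classify vehicles it has never seen, which is the whole point of training.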
The third-generation Generative Pre-trained Transformer (GPT-3) is a neural-network machine learning model, trained on data available on the Internet, that can be used to produce human-like text. Developed by OpenAI, it requires only a small amount of input text to generate large volumes of relevant and sophisticated machine-generated text. Crunchbase describes OpenAI as an AI research and deployment company that aims to ensure that artificial general intelligence benefits all of humanity.
GPT-3 can create anything with a text structure. It has been used to create articles, poetry, stories, news reports and dialogue using a small amount of input text. Developers are now looking for opportunities to create websites, write code, build images, create memes, translate programming languages, etc.
Please Note: OpenAI is one company in this business of machine learning, deep learning, and neural networks. There are numerous competitors in this space, including DeepMind (acquired by Alphabet [Google]) and many other public and private entities. If you can think back to pre-COVID times, Zoom was a little-known, little-used video conferencing platform (IMHO). It is now a fundamental component of human infrastructure and pervades all aspects of our lives. When you interact with these tools, pay attention to who is behind them, what they are doing, and why they are doing it.
GPT-3 & ChatGPT
GPT-3 and ChatGPT are both machine learning models developed by OpenAI. ChatGPT is a smaller language model that has been fine-tuned for one specific task: conversational response generation. Put simply, it has been built, and continues to learn, to serve as a chatbot or virtual assistant that provides natural, human-like responses to user inquiries.
GPT-3, by contrast, is much larger. It has many more parameters than ChatGPT and is able to generate more complex and diverse responses and outputs, but this also requires more computing resources. ChatGPT is more lightweight and can run on a wider range of devices and platforms.
Connecting with GPT-3 & ChatGPT
In future posts, I’ll unpack some of the barriers and accelerators these advances in AI and machine learning create for our classrooms and lives. I believe the popularity of ChatGPT has thrust the idea of machine learning and AI into a global spotlight. The truth is that we’ve been using machine learning algorithms for some time now.
When you search for something online, the search engine is folding in what it knows about you to give you the results it thinks you want to see. This happens with your social media apps, online video portals, search engine results, online marketplaces, dating apps, etc. The goal has always been to keep you using the tool, app, or platform as long as possible. They want your attention.
We’ve also been using AI and machine learning tools in other capacities. When you talk to your smart speaker in your home, type a search query into your browser, or use a writing/grammar app (e.g., Grammarly), you’re tapping into the power of machine learning. An example of this is Google Autocomplete.
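Autocomplete is a good illustration of how simple such a system can start out. Here is a toy sketch, assuming the system merely ranks past queries by how often they were searched; real autocomplete systems are far more sophisticated (personalization, trending topics, language models). The search history below is invented.

```python
from collections import Counter

# A toy autocomplete: rank past queries matching the typed prefix
# by how often they were searched. The history below is invented.
search_history = [
    "how to draw", "how to cook rice", "how to cook rice",
    "how to draw", "how to draw", "houseplants",
]
query_counts = Counter(search_history)

def autocomplete(prefix, k=3):
    matches = [q for q in query_counts if q.startswith(prefix)]
    # Most frequent first; break ties alphabetically.
    matches.sort(key=lambda q: (-query_counts[q], q))
    return matches[:k]

print(autocomplete("how"))  # → ['how to draw', 'how to cook rice']
```

The more you search, the better the guesses get, which is the same feedback loop, at a much larger scale, that powers the tools discussed above.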
Machine learning and AI virtual assistants have been working behind the scenes for years, shaping human thought and behavior. I believe one of the major benefits of ChatGPT, and more importantly the work on GPT-3, is that the power of machine learning is put into the hands of everyday users. Instead of unseen AI bots slipping in and out of our daily workflow, we can now interact directly with the source.
As I stated at the start of this post, I’ll have more posts coming soon about the challenges and opportunities these tools involve, and more importantly some of the questions they bring up. Newer and more powerful tools, platforms, and services will come out, and those will extend the discussion. This is just the start of the discussion.