AI is making rapid progress across many aspects of our lives. A recent research paper by Michal Kosinski, a computational psychologist at Stanford University, suggests that the latest version of ChatGPT (GPT-3.5, if you’ve read my other post) has passed tests of Theory of Mind, a fundamental part of human cognition that allows us to comprehend other people’s beliefs, desires, and intentions.
Theory of Mind (ToM) refers to the ability to understand that other people have their own thoughts, feelings, and perspectives that may differ from your own. In other words, it’s the capacity to attribute mental states to others and to understand that they see the world differently than you do.
This post examines the implications of ToM in AI, its limitations, and real-world examples of AI systems utilizing this concept.
The Emergence of Theory of Mind in AI
Researchers are striving to create machines with cognitive abilities similar to those of humans. These machines must be able to comprehend the intentions of others and anticipate their behavior through the use of advanced machine-learning algorithms.
Computers can predict the behavior of other agents through probabilistic models representing their beliefs and desires. These models can be updated as new data is obtained. Alternatively, neural networks can detect patterns and make predictions based on large datasets of social interactions.
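As a toy illustration of the probabilistic approach described above (a minimal sketch of my own, not the method from any of the systems discussed here; the agent, its goals, and all the probabilities are invented for the example), an observer can hold a belief distribution over another agent’s hidden goal and update it with Bayes’ rule each time a new action is observed:

```python
# Toy Bayesian "theory of mind": an observer infers another agent's
# hidden goal from its observed actions. All numbers are illustrative.

def normalize(dist):
    """Rescale a distribution so its probabilities sum to 1."""
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

# Prior belief over the agent's goal (hypothetical goals).
belief = {"wants_coffee": 0.5, "wants_tea": 0.5}

# Likelihood of each observable action given each goal.
likelihood = {
    "wants_coffee": {"walks_to_cafe": 0.8, "walks_to_kitchen": 0.2},
    "wants_tea":    {"walks_to_cafe": 0.3, "walks_to_kitchen": 0.7},
}

def update(belief, action):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    return normalize({g: p * likelihood[g][action] for g, p in belief.items()})

# Seeing the agent walk toward the cafe shifts belief toward "wants_coffee".
belief = update(belief, "walks_to_cafe")
```

Repeating the update as more actions arrive is exactly the “updated as new data is obtained” step: the observer’s model of the other agent’s desires sharpens with evidence.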
In this research, Kosinski tested ChatGPT with standard psychological tests that are typically used on humans. He also tested earlier, simpler large language models (LLMs), comparing versions released before and after 2022: the latter could solve 70% of ToM tests, roughly equivalent to a 7-year-old’s level, whereas the most recent version, tested in November, had a 93% success rate, similar to a 9-year-old’s level.
Applications of Theory of Mind in AI
There are several examples of AI systems that use the ToM concept. Kismet was introduced in 2000 and could recognize emotions and replicate them through facial features such as eyes, eyebrows, lips, and ears.
Sophia, a humanoid robot released in 2016, paired a human-like appearance with the ability to “see” emotions and respond appropriately.
Another example is ToMnet, a Theory of Mind-powered AI system created by Neil Rabinowitz and the team at DeepMind in London. ToMnet observes other AI systems, learns their characteristics and functions, and predicts their behavior.
Advances in AI and considerations of ToM present challenges in identifying what problems to solve and how to measure success. They also raise questions about the role and place of AI in our lives. Until the emergence of the latest AI systems last year, machines had never been credited with ToM capacities. This has altered our understanding of AI, and perhaps of cognition, learning, and consciousness.
Much of the current discussion around AI in education focuses on ChatGPT and the possibility of learners cheating on assignments. As more machine learning models are evaluated and critiqued in terms of ToM, we’ll need to think more about potential applications of AI and what role such systems might play. It is also essential to consider the risks associated with developing machines that have ToM capabilities: steps need to be taken in education, governance, and outreach to ensure they are used ethically and responsibly.
It is also important to understand the limitations of this concept in AI and to work towards addressing them. Overall, ToM in AI opens up new possibilities for understanding and interacting with the world around us.
If you’d like to stay on top of areas like this, you should be reading my weekly newsletter. You can follow here or on Substack.