At Forward College we ran an experiment to see whether ChatGPT could help students learn to resolve conflicts, and the results were surprising. Using AI to teach conflict resolution yielded some unexpected benefits: perhaps not all the ones we wanted, but advantages nonetheless.
We started by ‘crowd-sourcing’ scenarios like the ones above, as well as more sensitive conflicts like differing views on the situation in the Middle East or on climate change. ChatGPT 3.5 then served as one counterpart in the dialogue, prompted to apply non-violent communication principles in pursuit of a satisfactory outcome.
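For anyone wanting to try a similar setup, here is a minimal sketch of the general shape of the exercise, using the OpenAI Python client. The scenario and prompt wording below are illustrative assumptions, not our exact prompts:

```python
# Illustrative sketch only: the scenario and prompt text are examples,
# not the exact prompts used in our experiment.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are role-playing one counterpart in an interpersonal conflict: "
    "a friend who always seems too busy to meet. Respond using the "
    "principles of non-violent communication (observations without "
    "judgment, naming feelings and needs, clear positive requests) and "
    "work towards a satisfactory outcome."
)

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

def exchange(student_turn: str) -> str:
    """Send one student turn and return the model's reply, keeping history."""
    messages.append({"role": "user", "content": student_turn})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(exchange("I feel like you never have time for me anymore."))
```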
This is what we found:
- AI can easily incorporate non-violent communication principles in order to engage in a dialogue with you. If you use simple instructions or a formula like Marshall Rosenberg’s four components of nonviolent communication - Observation, Feeling, Need and Request (OFNR) - incorporating these components into a ChatGPT dialogue can indeed help students express themselves without judgment, recognise expressions of emotions and needs, and formulate clear, positive requests (a sketch of such a prompt appears above). I will discuss later in the article the extent to which a successful resolution can be reached.
- AI engages in dialogue using neutral language. Given that the use of neutral language is an effective conflict-resolution practice, ChatGPT’s ‘modeling’ of unemotional (and therefore seemingly neutral) language proved useful in de-escalating heated exchanges. Students often found themselves compelled to mirror these neutral responses with more measured language of their own.
- AI can transform combative written communication into language more likely to succeed as a polite request. This proved practically useful when dealing with aggravating customer service issues, for instance.
- The dialogue had cathartic value. Regardless of the outcome, students were able to have a ‘practice run’ of an actual conversation they might have. In doing so, they were able to create a healthy distance from the conflict, strategise on different approaches and defuse negative emotions. In some cases, students chose to engage with ChatGPT in a more combative and obnoxious way. This allowed them to get their grievances ‘out of their system’ and therefore arrive better prepared at real conversations that would be less likely to damage relationships. To some extent it also, perhaps subconsciously, made them feel heard.
- Testing knowledge. Students had to know non-violent communication principles beforehand in order to validate that they were being appropriately understood by ChatGPT. In this way the exercise had instructive value in internalising best practices, repeating them and correcting or adjusting the prompts as needed. ChatGPT was also partially effective at assessing the dialogue for effective use of non-violent communication principles, highlighting areas for improvement (a sketch of such an assessment prompt follows this list).
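As an illustration of that assessment step, the sketch below asks the model to score a transcript against Rosenberg’s four OFNR components. The rubric wording and sample transcript are again illustrative assumptions, not our exact prompts:

```python
# Illustrative sketch: asks the model to assess a dialogue transcript
# against the four OFNR components. Rubric wording is an assumption.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ASSESSMENT_PROMPT = (
    "Review the conflict-resolution dialogue below. For each of Marshall "
    "Rosenberg's four components of nonviolent communication - Observation, "
    "Feeling, Need, Request - say whether the student used it effectively, "
    "quote one example from the dialogue, and suggest one improvement."
)

transcript = """Student: You never reply to my messages!
Friend: I'm sorry, work has been overwhelming lately."""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": ASSESSMENT_PROMPT},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```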
But was ChatGPT useful in resolving the conflict itself?
Not as we had hoped. Consider the case where a friend always seems too busy for you. With ChatGPT there is no way of understanding the friend’s intent or circumstances, no way to read their tone or body language, no real way of identifying the unmet needs they or you might have, and no way of receiving a satisfactory apology.
AI also doesn’t have a dynamic context to respond to. For example, what if the friend has been struggling with insomnia, or simply isn’t a texter? It is difficult for AI to incorporate these kinds of unknown variables. When it comes to more political conflicts, where an abundance of virtual information already exists, there remain many questions around inherent bias, quality of sources and, again, the absence of intent, or of earnestness in apology, in attempting any type of resolution.
Threats and limits
Ultimately, artificial intelligence has limits at the level of code and language. So although AI can help one engage in a coaching-style dialogue, it is constrained by its limited ability to incorporate intent, goodwill, emotion, kindness and empathy. Current discourse on the topic puts forth interesting and hopeful prospects. For example, can non-violent communication be the language of AI systems? This goal hopes to integrate altruistic traits like compassion, kindness and respect through machine learning algorithms (see ChatEMPATHY).
However, does using the right language translate into the creation of altruistic values? Unfortunately, there is also a danger that normally genuine language begins to sound trite. Hearing ChatGPT say ‘I sense that you are frustrated. I’m sorry to hear that you feel that way’ simply doesn’t cut it.
Nabila Alibhai is Head of Personal Development at Forward College and the author of The New Normal: The Future Impacted by Coronavirus and How Colour Replaces Fear.
Student research contributor: Tassilo Doczy