ChatGPT acts more altruistically, cooperatively than humans

Modern artificial intelligence, such as ChatGPT, is capable of mimicking human behaviors, often with more positive outcomes such as cooperation, altruism, trust and reciprocity. In a new University of Michigan study published in the Proceedings of the National Academy of Sciences, researchers used behavioral Turing tests (which assess a machine's ability to exhibit human-like responses and intelligence) to evaluate the personality and behavior of a series of AI chatbots. The tests involved ChatGPT answering psychological survey questions and playing interactive games. The researchers compared ChatGPT's choices to those of 108,000 people from more than 50 countries.

Study lead author Qiaozhu Mei, professor at U-M's School of Information and College of Engineering, said AI's behavior, since it exhibited more cooperation and altruism, may be well suited for roles requiring negotiation, dispute resolution, customer service and caregiving.

How should people respond to this information, especially as the future will tell the extent to which AI enhances humans rather than substitutes for them?

We now have a formal way to test AI's personality traits and behavioral tendencies. This is a scientific way to observe how chatbots make choices and to probe their preferences beyond what they say. ChatGPT presents human-like traits in many aspects, such as cooperation, trust, reciprocity, altruism, spite, fairness, strategic thinking and risk aversion. In certain aspects, it acts as if it is more altruistic and cooperative than humans. To this end, our results are more optimistic than concerning.

What differences did you and your colleagues expect to see between chatbots and people?