What psychologists need to know about the evolution of generative AI

Psychologists are exploring how this new technology can simplify or amplify their efforts—and leading the charge to bring behavioral insights into the creation and deployment of generative AI tools
January 15, 2024

Generative artificial intelligence (AI), which can rapidly produce original text, images, audio, and more, is here to stay. Psychologists are exploring how the new technology can simplify or amplify their efforts—and are increasingly leading efforts to bring behavioral insights into the creation and deployment of generative AI tools.

Within the field, ChatGPT and other AI models are changing the way psychologists teach, conduct research, and diagnose and treat patients. While concern about the ongoing development of generative AI is legitimate, especially given the moratorium proposed by technology developers themselves last year, psychologists should accept AI as a reality and work with rather than against it, said Jessica Jackson, PhD, a licensed psychologist and clinical strategy manager for a mental health startup.

While there are plenty of reasons to be cautious about therapeutic algorithms, for example, they also offer an opportunity to dramatically expand access to mental health care, psychologists say.

“If people can’t afford therapy, we can’t stop them from logging on to a computer and talking to a chatbot,” said Jackson, who is chair of APA’s Mental Health Technology Advisory Committee. “We can’t control this, so how can we form strategic partnerships that help us embrace and optimize it?”

Psychologists bring a behavioral perspective to the development and rollout of new technologies, producing research insights about how people view AI’s competence, credibility, morality, and more.

“Most of the work in the human-technology interaction field is very heavy on technology and very thin on humans,” said Kai Chi (Sam) Yam, PhD, a professor of psychology and head of the Department of Management and Organization at the National University of Singapore. “Psychologists are now working to fill the gap on the human side of that equation.”

Ultimately, the field can offer a nuanced exploration of new technologies that helps developers, users, and regulators grasp their inherent complexity.

“There are complex trade-offs in AI’s potential. It won’t be all good or all bad,” said Adam Miner, PsyD, a clinical assistant professor of psychiatry and behavioral sciences at Stanford University who studies AI in health care. “Fortunately, psychologists are accustomed to weighing complex trade-offs in their research and clinical work, so we are ready to meet this challenge.”

AI in higher education

One way to quantify the impact of AI on a profession—both how it can help and whom it threatens to replace—is to break down work into a set of tasks and skills, said Johannes Eichstaedt, PhD, a computational psychologist and an assistant professor at Stanford University. For researchers, many of those tasks can increasingly be automated.

“The truth, whether we want to admit it or not, is that a lot of what we do in the scientific process is quite formulaic,” Eichstaedt said.

Specialized generative AI tools, such as Genei, can help with literature search, summarization, and academic writing. ChatGPT can generate items for scales, detect themes in qualitative textual data, and write Python and R code for statistical analyses.
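
As a concrete illustration of the scale-item use case, here is a minimal Python sketch that asks a chat model to draft candidate Likert items. It assumes the openai package (v1 or later) with an API key set in the OPENAI_API_KEY environment variable; the model name and prompt wording are illustrative placeholders, not recommendations from the researchers quoted here.

```python
# Minimal sketch: prompting a chat model to draft Likert-scale items.
# Assumes the openai package (v1+) and OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft 5 Likert-scale items (1 = strongly disagree, 5 = strongly agree) "
    "measuring intrinsic motivation to learn statistics. "
    "Avoid double-barreled or leading wording."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)

# Drafted items still require expert review and psychometric validation.
print(response.choices[0].message.content)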

“The key is to figure out how to make GPT function either like a research assistant or like a participant,” said Kurt Gray, PhD, a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill who studies how people make sense of emerging technology.

Gray’s research shows that the latter—using GPT to replace participants in certain types of experiments—is not only possible, but it could also save experimenters valuable time and resources (Trends in Cognitive Sciences, Vol. 27, No. 7, 2023). For example, he and his colleagues tested GPT on 500 moral judgment scenarios and found that its answers correlated nearly perfectly (.95) with human answers. They suggest that AI could be used in pilot studies to refine measures or as an extra layer of evidence to augment results from humans.
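
The comparison at the heart of that result is a simple correlation between model and human judgments. The sketch below shows the shape of that analysis in Python; the rating arrays are hypothetical placeholders, not data from the study.

```python
# Illustrative sketch of comparing model ratings with human ratings, in the
# spirit of Gray and colleagues' moral-judgment benchmark. The numbers are
# hypothetical placeholders, not the study's data.
from scipy.stats import pearsonr

# Mean human ratings and model ratings for the same scenarios (1-7 scale).
human_ratings = [6.1, 2.3, 4.8, 5.5, 1.9, 3.4]
model_ratings = [5.9, 2.6, 4.5, 5.8, 2.1, 3.2]

r, p = pearsonr(human_ratings, model_ratings)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```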

“We’re not arguing that you never need to ask people anymore, but what are the times when it might make more sense to ask AI?” Gray said.

For teaching psychology, generative AI can simplify a range of time-consuming tasks, from drafting slides, outlines, and exam questions to mentoring trainees on therapeutic techniques. But the technology is also fundamentally changing the learning environment, leaving many educators worried about how to detect cheating and ensure that students are actually learning.

“That’s a legitimate concern, but we’re living in a world where these tools are at our fingertips, so what we teach will ultimately have to change,” Eichstaedt said.

For example, students may be able to focus more on higher-level cognitive functions—such as finding a punchy example or deciding how to frame an argument—instead of worrying about the order of sentences in a paragraph, he said.

Because of generative AI’s ubiquity, most higher education institutions are trying to embrace it rather than ban it. But research in progress suggests that approach may have unintended consequences. Yam and his colleagues randomly assigned college students to write an essay about a topic they recently learned about or to use ChatGPT to write the essay. Those who used ChatGPT later reported less interest in the topic and less intrinsic motivation to study it further.

“Once college students used ChatGPT to write an essay for a given topic, they found that topic to be less exciting,” Yam said. “We find that quite worrying, because allowing the use of ChatGPT may actually be undermining the intrinsic motivation to learn.”

Chatbots in therapy

In the clinical sphere, psychologists are also proceeding with caution. Generative AI clearly holds potential for automating administrative tasks such as documentation and note-taking. Tools like ChatGPT can also help trainees practice delivering psychological interventions to a simulated patient. And natural language processing tools can provide insights to help licensed clinicians up their game, said Miner.

“Clinicians probably don’t want AI telling them what to do, but it might help us find gaps in our training or areas for improvement,” he said.

For example, a clinician might respond differently to a patient who mentions self-harm in the last 5 minutes of a session than they would to a patient who mentions self-harm during the first 5 minutes. Based on large data sets of therapist-client interactions, what are positive examples of how to respond (npj Mental Health Research, Vol. 1, 2022)?
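
A very simplified way to picture that kind of timing analysis is sketched below in plain Python: flag where in a session a phrase of clinical concern appears. The keyword list, transcript format, and time thresholds are invented for illustration; real tools would rely on validated language models and clinical oversight, not keyword matching.

```python
# Minimal sketch: flagging when a clinically significant phrase appears in a
# timestamped session transcript. Keywords, transcript format, and thresholds
# are illustrative; real systems would use validated NLP classifiers.
SESSION_MINUTES = 50
WINDOW = 5  # first/last 5 minutes of the session

KEYWORDS = {"self-harm", "hurt myself"}  # placeholder terms

def flag_timing(transcript):
    """transcript: list of (minute, speaker, utterance) tuples."""
    flags = []
    for minute, speaker, text in transcript:
        if any(k in text.lower() for k in KEYWORDS):
            if minute < WINDOW:
                flags.append((minute, "early disclosure"))
            elif minute >= SESSION_MINUTES - WINDOW:
                flags.append((minute, "late disclosure"))
            else:
                flags.append((minute, "mid-session disclosure"))
    return flags

transcript = [
    (3, "client", "Lately I've had thoughts of self-harm."),
    (48, "client", "Before we stop: I wanted to hurt myself last night."),
]
print(flag_timing(transcript))  # [(3, 'early disclosure'), (48, 'late disclosure')]
```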

The biggest open question is whether and how generative AI can help address the shortage of mental health service providers (see “What’s ahead for clinical practice?”), both in the United States and worldwide. Can chatbots safely and effectively deliver therapy?

“The prevalence of mental health issues in America is reaching a critical point,” said Scott Wallace, PhD, a clinical psychologist and director of clinical innovation at Remble, a mental health technology company that offers chatbot support and other tools. “With the devastating crisis we’re facing, we can’t afford to write off promising innovations before fully exploring how they might expand access and improve care.”

Chatbots can provide support on day-to-day challenges, such as conflict with a spouse, trouble sleeping, and stress related to work or school. Using AI to deliver research-backed advice for those concerns, Wallace said, can help ease the service burden on human therapists.

Mental health chatbots may be useful for more serious concerns, but harm is also possible. Companies have delivered AI therapy without informed consent and perpetuated bias with their algorithms (Obermeyer, Z., et al., Science, Vol. 366, No. 6464, 2019).

“If you leave psychologists out of the development process, it’s going to be harmful,” Jackson said. “Currently, there’s no true bridge between technology and psychology, so the people who are building these tools aren’t always aware of the ethical issues at play.”

One relatively safe way to begin using AI tools for therapy is to maintain clinical supervision of patients. For example, patients can use an app to practice cognitive behavioral therapy or dialectical behavior therapy skills between sessions.

“By thoughtfully integrating AI to augment professionals, but not replace them, we can build on human strengths while benefiting from data-based insights,” Wallace said.

See more about the nuance of current AI use in “Monetizing mental health is harder than it looks.”

Agents of replacement

As generative AI continues to permeate society, knowledge of how it impacts individuals, relationships, and societies is suddenly in high demand.

“Soon, we’ll have AIs that are super intelligent—in the technical sense, they’re better at most things than most people,” Eichstaedt said. “They will tutor our children, but they were never designed to be encouraging, understanding, or wise.”

Theoretically, those qualities can be fine-tuned, but that requires significant attention to the social, developmental, and identity contexts in which they will be applied, he said.

In religious settings, Yam and psychologist Joshua Jackson, PhD, of the University of Chicago, have shown that people find robots to be as competent as, but less credible than, humans (Journal of Experimental Psychology: General, Vol. 152, No. 12, 2023). Yam, Gray, and their colleagues have also found that in the workplace, the fear of being replaced by AI is associated with burnout and incivility (Journal of Applied Psychology, Vol. 108, No. 5, 2023). Nearly 4 in 10 U.S. workers worry that AI will eventually replace all or most of their job duties, according to APA’s 2023 Work in America survey.

“The big issue around AI is that these are agents of replacement,” Gray said. “We design artificial agents to replace human labor, but when we are confronted with that, we’re not always sure about how to act.”

What’s on the horizon? Eichstaedt points to new multimodal generative AI tools, such as Google’s Gemini, that can seamlessly shift between images, text, and other types of data. To keep up with the rapid pace of development and generate research insights that can be quickly applied, Yam said field research—as well as better incentives for cross-disciplinary collaboration—will be key. Engineers often disseminate findings through conference proceedings, for example, while psychologists aim to publish journal articles.

We may even see new subfields geared toward understanding the reasons, rationales, and capacities of AI systems, said Peter Hancock, PhD, DSc, a professor of psychology at the University of Central Florida who has studied human-machine interaction in a variety of contexts. “Machine psychology” could apply the methods, simulations, and analyses long used to scrutinize the human mind to glean insights about how generative AI processes information.

Despite the risks, some psychologists are proceeding with cautious optimism about AI’s potential to improve mental health care and much else.

“Are there open challenges? Absolutely. But sticking to the status quo isn’t working,” Wallace said. “I believe together we can shape AI into a driving force for much needed progress and healing. But we have to be willing to take those first steps.”
