“There is a lot of hype around Artificial Intelligence, or AI.”
I wrote that line four years ago for a post on CCP’s website titled “Artificial Intelligence and its Promising Application in Global Health”.
Talk about an understatement.
Today, conversations about AI dominate the media and the workplace. The promise and peril of AI, detailed every day in countless articles, podcasts, and television news segments, are hard to ignore.
With the launch of ChatGPT in late 2022 and the explosion of generative AI tools such as Bard and Midjourney, people in every industry, including those of us working in social and behavior change and knowledge management in global health, are trying to make sense of what is hype and what this technology will really deliver. Will it become a new set of tools in the public health tool chest?
I recently delivered a talk on digital health to the 2023 participants of CCP’s Leadership in Strategic Communication Workshop, and we had a robust conversation about AI, ethics, equity, and automation. At this point in generative AI’s “hype cycle,” it is critical for global health practitioners to understand what generative AI is and what it is not; who will benefit and who will be left behind; and how it will change the way we do our work.
Here are some broad takeaways to help align expectations about the current state of AI’s promise, and to separate the hype from the reality as it exists today.
AI relies on data.
AI systems sound futuristic, but they help us every day. As I write this, Microsoft Office is offering to finish my sentences; this is machine learning working in a language I speak, so it is easy for me to decide whether to accept its suggestions. We as global health practitioners work in contexts where English is not the primary language, where connectivity is challenging, and where gender disparities influence access to technology. Efficiencies gained through AI are only relevant and useful if the data of the population being served are represented.
A robust prompt produces a more detailed result.
There are many examples of generative AI composing songs, writing poems, and creating images, but can it produce high-quality health communication content? At the leadership workshop, participant Christina Cherel, MPH, of the Bill & Melinda Gates Institute for Population and Reproductive Health, and her team wanted AI to help them write a song aimed at improving male engagement in family planning in Lualaba, DRC.
After developing insights and key constraints in the first part of the planning process, the team fed that information into ChatGPT. Here is what it came up with:
(Verse 1)
Yeah, let’s talk about life, let’s talk about plans,
See, a wise man always understands.
It’s not about control, it’s not about power,
It’s about making the right call in the right hour.

We talkin’ ’bout family, yeah we talkin’ ’bout life,
A future with children, maybe a wife.
But it’s all about timing, it’s all about choice,
So let’s get educated and use our voice.

(Chorus)
Family planning, ain’t nothing wrong,
It’s about being prepared, being strong.
Let’s change the game, redefine the man,
Show the world we can have a plan.

(Verse 2)
It’s not just a woman’s job, that ain’t right,
Family planning’s for everyone, day and night.
We can share the responsibility, share the care.
When it comes to our future, it’s only fair.

So let’s learn the methods, understand the reasons,
Make informed choices, no matter the seasons.
Prevent unexpected, plan the expected,
In this game of life, let’s be well directed.
It was clever. And, more importantly, it showcased the promise of generative AI: saving the team time that they could better spend thinking about the intervention strategy and outcomes.
Is this the best way to approach content generation in SBC programming? Generative AI may provide the starting point; however, human input is still needed to quality-check the output and bring expertise about the local context. It is naïve, though, to think these tools won’t factor into future content development.
Will future pretesting be done with AI, as long as it happens alongside community interaction?
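For readers curious about what a “robust prompt” looks like in practice, here is a minimal sketch of how a team might feed planning insights and creative constraints to a chat model programmatically. It assumes the OpenAI Python client; the model name, insight text, and constraint list are illustrative placeholders, not CCP’s or the workshop team’s actual workflow.

```python
# A minimal, illustrative sketch (not CCP's actual workflow): audience
# insights and creative constraints are spelled out in the prompt rather
# than left for the model to guess. Assumes the OpenAI Python client
# (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical insights and constraints from the planning process.
insights = (
    "Audience: men in Lualaba, DRC. "
    "Barrier: family planning is often seen as a woman's responsibility. "
    "Motivator: being a prepared, responsible partner and provider."
)
constraints = [
    "Format: song lyrics with two verses and a chorus",
    "Tone: affirming and aspirational, not lecturing",
    "Core message: family planning is a shared responsibility",
]

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a health communication copywriter."},
        {
            "role": "user",
            "content": (
                f"Using these audience insights: {insights}\n"
                "Draft a song that respects these constraints:\n- "
                + "\n- ".join(constraints)
            ),
        },
    ],
)

draft = response.choices[0].message.content
print(draft)  # a starting point only; humans still review for accuracy and context
```

The point is not the specific library but the structure: the more explicitly the audience insights and creative constraints are spelled out, the more detailed and relevant the draft, and the draft still goes to a human reviewer before it goes anywhere near a community.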
Historical global health biases exist in AI.
“AI learns by absorbing what exists online,” says Arsenii Alenichev, PhD, a research colleague at Oxford who, together with Koen Peeters Grietens, PhD, of ITM Antwerp, developed an exhibition of global health imagery generated by AI (Midjourney v5.1, released in May 2023) and showcased it as part of this year’s Oxford Global Health and Bioethics International Conference.
Alenichev and Grietens found that the colonial imagery and disease associations that have long circulated online are now being reproduced by AI image tools. Prompts related to giving and receiving medical care, vaccines, and food and water produced images that fell into stereotypes and tropes. Even when prompted to depict recipients of care as white children, in 350 rendered images the recipients of care were depicted as people of color, and the AI occasionally showed white doctors even when the prompts explicitly asked for Black African doctors.
What can be done? The first step is to be aware of bias in the data; the second is to act on it. If you use an AI image generator and see this type of bias, report it through the platform; these systems evolve with user input and feedback. Remember, AI relies on data. Striving for, and demanding, the highest quality data is what we can do now.
Curious how AI would write this article? Here’s one written by ChatGPT on “the implications of large language models and ChatGPT on the field of global health and development” that you might enjoy.
Marla Shaivitz is CCP’s director of digital strategy.