Special Issue | AI in Health Communication

Guest Editors: Nadine Bol (Tilburg University), Julia van Weert (University of Amsterdam), & Angelika Augustine (Bielefeld University)


Artificial Intelligence (AI) is transforming health communication, offering unprecedented opportunities for disease prevention, health promotion, public health, and healthcare. AI refers to the capability of algorithms, integrated into systems and tools, to learn from data and perform automated tasks without explicit human programming (Hancock et al., 2020). From AI-powered virtual agents supporting people with mental health challenges to large language models (LLMs) generating personalized health content, AI is reshaping how health information is created, shared, and interpreted. Examples of AI applications in health communication include AI-driven conversational agents providing social support to improve mental health and well-being (Van Wezel et al., 2020), clinical applications of Generative AI and LLMs to summarize consultations in real time (Haniff et al., 2025), and AI-generated messages for health awareness (Lim & Schmälzle, 2023).

At the same time, AI raises critical questions that must be addressed to fully realize its potential while clarifying its limitations in health communication. Key concerns regarding the use of AI for health communication include legal-ethical issues and the accuracy and reliability of information (van Kolfschooten, 2022), risks of exacerbating health inequalities due to biased training data (Stypińska, 2023; Obermeyer et al., 2019) or limited digital literacy (Bierbooms et al., 2025), difficulties in establishing appropriate levels of trust in AI (Lockey et al., 2021; Siau & Wang, 2018), and misalignment in language style arising from LLMs’ agreeable, affirmative, and excessively positive tone (Wang et al., 2025). Even ‘simpler’ administrative clinical tasks, such as summarizing clinical notes, can have unpredictable effects on clinician decision-making and warrant careful consideration before AI is implemented in clinical practice (Goodman et al., 2024).

Despite these challenges, the opportunities offered by AI in health communication indicate that it is here to stay. This makes it essential to study how individuals, healthcare professionals, and organizations engage with AI, and to examine how its integration in clinical practice and daily life shapes health communication processes and ultimately affects health outcomes. Further research is needed to explore which AI applications could serve as a substitute for, or supplement to, human sources of health information.

The special issue therefore calls for papers studying AI in health communication in terms of its causes (e.g., why do people use Generative AI for health-related advice?), content (e.g., what are the communication dynamics in human-AI interactions about health?), and consequences (e.g., what are the effects of AI-powered clinical support tools on doctor-patient communication?). We invite submissions that address the role of AI in health communication across diverse contexts (e.g., disease prevention, health promotion, healthcare) and for a wide variety of audiences (e.g., patients, healthcare providers). We are also open to including articles with a methodological focus on the role of AI in health communication research (e.g., content analysis has changed rapidly, and machine-learning techniques allow us to analyse much larger and more diverse data sets than a decade ago). Contributions from various academic fields—including public health, psychology, medical sciences, sociology, social robotics, and related disciplines—are welcome, provided they are based on a health communication perspective.

The special issue is open to, but not limited to, studies that address the following topics:

  • Health communication using AI-based applications and/or tools;
  • Seeking AI-generated health information;
  • Risks and benefits of using AI-based health applications for doctor-patient communication and/or shared decision making;
  • The role of digital or AI literacy;
  • User-centred design of AI-based health (communication) applications;
  • Determinants of effective usage of AI-based health communication;
  • The effects of using AI in health communication on health-related outcomes;
  • Challenges and ethical considerations of using AI in health communication;
  • AI and health equity, including risks and benefits of using AI-based health communication for less well-represented people;
  • AI in health communication about sensitive health topics;
  • Implementation of AI-based health (communication) applications;
  • Methodological innovations in researching AI and health communication (e.g., advances in computational methods, ML-assisted content analysis, new approaches to studying human-AI interactions).

Submission format

We welcome submissions that fit any of the EJHC formats: original research papers, theoretical papers, methodological papers, review articles, and brief research reports. For further information on the article types, please see http://www.ejhc.org/about/submissions.

Manuscripts should be prepared in accordance with the EJHC author guidelines and be submitted via the journal website.

Deadline for submission is 1 April 2026.

Review Process

All articles will undergo a rigorous peer review process. Once the editorial management team has assessed a paper as appropriate with regard to form, content, and quality, it will be reviewed by at least two reviewers in a double-blind process, in which reviewers’ identities are not disclosed to authors and authors’ identities are not disclosed to reviewers. To keep publication timelines short, EJHC releases articles online on a rolling basis, expected to start in August 2026.

European Journal of Health Communication

The European Journal of Health Communication (EJHC) is a peer-reviewed open access journal for high-quality health communication research with relevance to Europe or specific European countries. The journal aims to reflect the international character of health communication research, given the cultural, political, economic, and academic diversity in Europe.

Contact Guest Editors and Links

Nadine Bol, Tilburg University: Nadine.Bol@tilburguniversity.edu
Julia van Weert, University of Amsterdam: J.C.M.vanWeert@uva.nl
Angelika Augustine, Bielefeld University: Angelika.Augustine@uni-bielefeld.de

Journal website: www.ejhc.org
Journal e-mail address: contact@ejhc.org

References

Bierbooms, J., van Egmond, M., Hermans, A.-M., & De Looper, M. (2025). Inequality matters: The role of economic, social, cultural, and person capital in explaining inequalities in the accessibility and usability of digital health technologies. European Journal of Health Communication, 6(2), 17–32. https://doi.org/10.47368/ejhc.2025.202

Goodman, K. E., Paul, H. Y., & Morgan, D. J. (2024). AI-generated clinical summaries require more than accuracy. JAMA, 331(8), 637–638. https://doi.org/10.1001/jama.2024.0555

Haniff, Q., Meng, Z., Pongkemmanun, T., Sia, Z. C., Newport, H., Ooi, Y., & Jani, B. D. (2025). Use of artificial intelligence to transcribe and summarise general practice consultations. Journal of Medical Artificial Intelligence, 8, 43. https://doi.org/10.21037/jmai-24-257

Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89–100. https://doi.org/10.1093/jcmc/zmz022

Lim, S., & Schmälzle, R. (2023). Artificial intelligence for health message generation: An empirical study using a large language model (LLM) and prompt engineering. Frontiers in Communication, 8, 1129082. https://doi.org/10.3389/fcomm.2023.1129082

Lockey, S., Gillespie, N., Holm, D., & Someh, I. A. (2021). A review of trust in artificial intelligence: Challenges, vulnerabilities and future directions. In Proceedings of the 54th Hawaii International Conference on System Sciences (pp. 5463–5791). University of Hawaii at Mānoa. https://hdl.handle.net/10125/71334

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47–53. https://ink.library.smu.edu.sg/sis_research/9371

Stypińska, J. (2023). AI ageism: A critical roadmap for studying age discrimination and exclusion in digitalized societies. AI & Society, 38, 665–677. https://doi.org/10.1007/s00146-022-01553-5

Van Kolfschooten, H. (2022). EU regulation of artificial intelligence: Challenges for patients’ rights. Common Market Law Review, 59(1), 81–112. https://doi.org/10.54648/cola2022005

Van Wezel, M. M., Croes, E. A., & Antheunis, M. L. (2020, November). “I’m here for you”: Can social chatbots truly support their users? A literature review. In International Workshop on Chatbot Research and Design (pp. 96–113). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-68288-0_7

Wang, Y., Wang, Y., Xiao, Y., Escamilla, L., Augustine, B., Crace, K., Zhou, G., & Zhang, Y. (2025). Evaluating an LLM-powered chatbot for cognitive restructuring: Insights from mental health professionals. arXiv preprint arXiv:2501.15599. https://arxiv.org/abs/2501.15599