Ethical Considerations in Automating Medical Writing Workflows

Where It All Began 

For decades, automation and other traditional forms of artificial intelligence (AI) have surfaced within the medical writing scene as helpful tools, assisting medical writing teams with regulatory submissions, clinical trial documentation, and literature reviews. In recent years, the push to further expedite manual processes has been bolstered by the introduction of generative AI and machine learning (ML), which take automation a step further by assisting with the generation of summary content, translation into plain language, and other “cognitive” tasks that, though tedious and time-consuming, typically fall to medical writers. Successful use of AI and ML for these purposes in drug development has lightened the load on medical writing teams, improving accuracy and efficiency without demanding additional time or resources. Further benefits are promised by the automation of routine tasks, such as data extraction and document formatting, which can yield substantial productivity gains, shorter turnaround times, and fewer errors. However, the advent of AI has also brought many ethical concerns from medical writers and other stakeholders.

Key Concerns 

Several key ethical concerns emerge when introducing AI into medical writing workflows. A few of the most prominent are outlined below.

  1. Job Displacement. With AI taking over administrative tasks like document formatting, data analysis, or regulatory compliance checks, medical writers may feel that they no longer hold value in their organizations. When highly educated and trained individuals are displaced from their roles, it raises the question of whether organizations should offer them alternative roles or responsibilities that not only recognize the value of their years of knowledge and experience but also leverage those qualities through upskilling, enabling them to work alongside AI.
  2. Data Privacy and Security. As AI is given access to sensitive medical data like personal health information (PHI), clinical trial data, or proprietary research, medical communicators may feel uneasy about whether that data is handled responsibly and in accordance with GDPR, HIPAA, or other data privacy laws. Strong data protections matter not only for safeguarding patients but also for earning the trust of the medical writing community, which in turn supports the ongoing acceptance and use of such tools (a minimal de-identification sketch follows this list).
  3. Bias. Because AI tools are often trained on biased datasets, or built on algorithms that encode bias, they can produce biased outputs; medical writers may therefore doubt that AI-assisted medical writing can remain accurate and fair.
  4. Copyright. As AI tools continue to author new content, organizations like the Copyright Clearance Center (CCC) have issued recent statements on AI authorship and on the rights implicated when existing content is incorporated into AI systems. Many medical writers are unaware of these developments, but because copyright is a key and evolving concern, they may need to weigh how their fair-use obligations can be upheld even as those standards shift.
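
On the data privacy point above, a common first safeguard is de-identifying text before it ever reaches an AI tool. The following Python sketch is illustrative only, not a validated de-identification pipeline: the patterns, the `redact_phi` helper, and the sample note are hypothetical, and real HIPAA- or GDPR-compliant workflows cover far more identifiers with validated tooling and formal governance.

```python
import re

# Illustrative-only patterns; real de-identification must cover all
# HIPAA identifiers, handle context, and rely on validated tools.
PHI_PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def redact_phi(text: str) -> str:
    """Replace each pattern match with a labeled placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Patient (MRN: 483921) called 555-123-4567 on 04/12/2023."
print(redact_phi(note))
# -> Patient ([MRN REDACTED]) called [PHONE REDACTED] on [DATE REDACTED].
```

Even a simple gate like this, run before any content is shared with an external model, reduces the chance that PHI leaves a controlled environment, though it is no substitute for a formal privacy review.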

Helpless or Just Hesitant? 

Any good medical communicator recognizes the key tenets that make our job as important as it is: fairness, transparency, accountability, accuracy, compliance, and the list goes on. Beyond these, patient privacy and safety are the ultimate cornerstones of any medical writing endeavor, so they must also be given due consideration as AI tools are trained and retrained for any healthcare or healthcare-adjacent purpose. What is less often recognized is that these same tenets are at the forefront of responsibly developed AI tools, where they play a critical role in automated decision-making. The common concern in the modern equation is “Where do I fit in as a medical writer?” and the importance of that question cannot be overstated.

The Bottom Line 

AI will never replace medical communicators, we know, but it is becoming an essential supplement to the work they do. There is little reason for medical writers to keep agonizing over meticulous tasks like formatting, data entry, or structured content authoring, or over laborious ones like translating technical jargon into plain language, when generative AI can handle these tasks swiftly (a brief sketch follows below). To ensure high ethical standards, we need to develop unbiased AI systems trained on broad, representative datasets; to ensure strong data protection, we must invest in robust privacy and security safeguards for any information AI handles. On the flip side, medical communicators, the humans in this partnership, need to upskill and invest in training to work well alongside AI. Their roles will range from strategic messaging to submission strategy and from interpreting nuanced clinical data to making complex ethical decisions. Further human input will be required to promote transparency in AI decision-making and to refine and approve outputs. This hybrid model of collaboration not only preserves the necessary role of the medical communicator but also lets AI do what it is meant to do: handle repetitive, data-heavy tasks efficiently.
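
As one concrete example of the plain-language task described above, here is a minimal sketch of asking a large language model to rewrite clinical text, using the OpenAI Python client. The model name, prompt wording, and sample sentence are assumptions for illustration; any comparable API would work, and in practice the output would be one draft for a human reviewer, never a final deliverable.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

technical_text = (
    "Subjects receiving the investigational product exhibited a "
    "statistically significant reduction in HbA1c versus placebo."
)

# Illustrative prompt only; a real workflow would pin a vetted prompt,
# log the model version, and route every draft to a human reviewer.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system",
         "content": "Rewrite clinical text in plain language at an "
                    "8th-grade reading level. Do not add new claims."},
        {"role": "user", "content": technical_text},
    ],
)

print(response.choices[0].message.content)  # reviewed before any use
```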

Looking Ahead 

AI is going to be an essential part of the medical communicator’s workday in the near future. Ethical concerns around what AI does and how it does it must be given appropriate time and attention, but the ultimate responsibility for upholding these values will remain with the medical writer in the years to come. That responsibility applies prospectively rather than retrospectively: medical communicators and other subject matter experts should contribute input early in the development of AI tools, not after the fact. Additionally, as the medical communicator takes on more oversight-based tasks and fewer authoring-based tasks in clinical documentation, day-to-day activities like drafting documents or formatting submissions may give way to checks for strategy universality, key messaging, and accuracy and consistency (one such check is sketched below). What will remain the same are the core tenets mentioned earlier, ethical principles like fairness and transparency, as well as the soft skills that unite teams and stakeholders and lay the foundation for all that we do as medical writers. Innovators and regulators must continue to shape the future of the field in a way that respects AI’s strengths while allowing medical writers to apply complex ethics to automated processes and outputs. Part of this work will necessarily include the ethical development, design, and deployment of these technologies, making the incorporation of AI sustainable not only for medical writing teams but also for the patients benefiting from their work.
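
To make the oversight idea concrete, one reviewer-style check might verify that every number quoted in an AI-drafted summary actually appears in the source results. This is a deliberately simple sketch under assumed inputs; the sample strings are hypothetical, and real quality control would also check units, rounding, and context.

```python
import re

def numbers_in(text: str) -> set[str]:
    """Extract numeric tokens (integers and decimals) from text."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))

# Hypothetical inputs: an AI-drafted sentence and the source results text.
draft = "Mean HbA1c fell by 1.2 points in 248 of 310 treated patients."
source = "n=310 treated; responders n=248; mean HbA1c change -1.2."

unsupported = numbers_in(draft) - numbers_in(source)
if unsupported:
    print(f"Flag for human review; unsupported values: {sorted(unsupported)}")
else:
    print("All quoted values appear in the source.")
```

Checks like this do not replace a medical writer’s judgment; they simply surface the places where that judgment is most needed.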

For more information, check out our related blogs on writing with integrity or on extending GxP principles in AI-enabled healthcare.

Jessica Soto