Signals Blog

OpenAI’s recent release of GPT-4 has made headlines in the last week. It is the latest version of the large language model that powers the company’s popular artificial intelligence (AI) chatbot, ChatGPT. Passing the U.S. bar exam is one of its notable accomplishments. This version can also analyze images (for example, it can assess a photo containing different types of food and generate a recipe using them all), and OpenAI reports that it is 40 per cent more accurate than previous versions of the tool.

While some folks may secretly wonder if AI chatbots are going to replace them in the workforce, there’s no need to worry. Although the World Economic Forum has predicted that machines and algorithms will increase their contribution to specific job tasks by an average of 57 per cent by 2025, roles requiring “human traits,” like experience, values and judgment, will remain in demand, resulting in a net increase in jobs in the future.

As a frequent blogger about science communications here on Signals, I wondered how ChatGPT could help scientists and engineers improve their science communications (“scicomm”) skills and benefit their day-to-day scicomm work.

As I dove into researching the topic, I was interested to see that ChatGPT’s ability to create scientific papers has already been put to the test. In December 2022, ChatGPT was asked to generate research abstracts based on its review of authentic research abstracts published in high-impact journals. While the human reviewers of the AI-generated abstracts said it was “surprisingly difficult” to tell the difference between them and the real thing, the ChatGPT versions were “vaguer and had a formulaic feel to the writing.” It was clear from this exercise that the human touch has a significant impact in developing effective communications.

For many communications-oriented tasks, learning to harness AI tools will make day-to-day work more streamlined and productive. This includes supporting somewhat routine tasks like writing article outlines or headlines, analyzing data, or creating social media posts. Assisting with this work can free up time to focus on the uniquely human areas that ChatGPT can’t support, like refining the content it produces, checking for accuracy, and adding a unique voice with minor edits.

That said, science communicators might be wondering if ChatGPT’s benefits are so considerable that it’s worth adding as a tool in the science communications toolbox. To help me understand its value, I went right to the source – ChatGPT – and asked how it can help.

Curious about the potential benefits, I spent a few minutes writing a clear and specific prompt asking how scientists and engineers can use ChatGPT as a tool to improve their science communications. Here’s what it told me.

***

Scientists and engineers can use ChatGPT in several ways to improve their science communications:

  1. Generating clear and concise explanations: ChatGPT can be used to generate clear and concise explanations of complex scientific concepts. By inputting a scientific question or concept, ChatGPT can generate a clear and simple answer that can be used in presentations, papers, or other communication materials.
  2. Improving readability: Scientists and engineers can use ChatGPT to improve the readability of their scientific papers or reports. By inputting a scientific document, ChatGPT can provide suggestions for sentence structure, word choice, and other factors that can make the document easier to read and understand.
  3. Providing alternative perspectives: ChatGPT can be used to provide alternative perspectives on scientific concepts. Scientists and engineers can input their own understanding of a concept and ask ChatGPT to generate alternative explanations, which can be useful in understanding different points of view and communicating with a wider audience.
  4. Developing engaging outreach materials: ChatGPT can be used to develop engaging outreach materials, such as social media posts or interactive educational tools. By inputting scientific concepts and questions, ChatGPT can generate responses that are easy to understand and can be used to create engaging and informative materials for a wider audience.

Overall, ChatGPT can be a valuable tool for scientists and engineers looking to improve their science communications skills for the reasons outlined above.

***

Alright then. I would say that ChatGPT has made a pretty good case for why science communicators should use it to enhance their efforts.
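For readers who would rather send a prompt like mine programmatically instead of through the ChatGPT web interface, OpenAI offers an official Python library. The sketch below is a hypothetical illustration, not the workflow used for this post: the model name, prompt wording and function names are my own assumptions.

```python
# Hypothetical sketch only: the post's prompt was entered in ChatGPT's web
# interface, but a similar prompt can be sent through OpenAI's official
# Python library (pip install openai). Model name, prompt wording and
# function names are assumptions for illustration.

def build_scicomm_prompt(audience: str = "a general audience") -> str:
    """Assemble a clear, specific prompt: state the role, the task,
    and the desired output format."""
    return (
        "You are a science communications coach. "
        "List the main ways scientists and engineers can use ChatGPT "
        f"to improve their science communications for {audience}. "
        "Answer as a short numbered list."
    )

def ask_chatgpt(prompt: str) -> str:
    """Send the prompt to the Chat Completions API and return the reply."""
    from openai import OpenAI  # imported here so the prompt builder runs without the library
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_chatgpt(build_scicomm_prompt()))
```

The prompt builder follows the same "clear and specific" principle mentioned above: it names a role, a task and an output format, which tends to produce more focused answers than a bare question.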

Nevertheless, ChatGPT comes with concerns that science communicators must be mindful of. These issues include:

  • Limited knowledge: ChatGPT has limited knowledge of the world and events after 2021. In the regenerative medicine world, this could mean that an output from ChatGPT does not reflect the most timely and accurate published research, regulations or ethical considerations on a topic. Be sure to carefully read and verify the ideas, recommendations and text that ChatGPT produces to ensure accuracy and relevancy based on your communications objectives.
  • Hallucinations: Even with the improvements in the most recent version, ChatGPT may still generate incorrect information, which industry experts refer to as “hallucinations.” Again, be sure to verify the facts that ChatGPT generates.
  • Inherent bias: ChatGPT may produce harmful instructions or biased content. Review its outputs carefully so that this kind of wording doesn’t make its way into your science communications.
  • Privacy: Data entered in the prompt field are owned by the company that makes the tool: OpenAI. Do not enter confidential or sensitive information about your research, your organization or yourself.
  • Authorship ethics: Scientific publications, such as Nature, have banned large language models like ChatGPT from authoring articles to ensure that a human is accountable for everything they publish. As a science communicator, you may be wondering about the ethics of passing off text generated by ChatGPT as your own. If your gut is telling you it’s not a good idea, you’re likely right. No matter how you use ChatGPT to support your work, at the end of the day you are responsible for your finished products. Therefore, spending the time and effort needed to refine, flesh out and verify ChatGPT’s outputs is necessary to reduce the risk of harm to your reputation and that of your organization.

What are your thoughts on using ChatGPT to support your science communications? Let me know if and how you’ll use ChatGPT for scicomm in the comments below or reach me on Twitter @lainebcomms.

If you haven’t yet tried ChatGPT, be sure to check out the video at the end of this post to help get you started.


Laine Bodnar

Laine Bodnar is the Senior Manager Communications at CCRM. Laine has a thorough understanding of health communications, having previously worked in public relations agencies with clients in the pharmaceuticals, biologics, consumer health, medical device and nutrition industries. At CCRM, she is generating awareness of the ever-advancing regenerative medicine and cell and gene therapy fields as she supports the company with strategic communications. Laine completed an Honours Bachelor of Arts degree in Communication Studies at Wilfrid Laurier University in Waterloo, Canada, and a Postgraduate Certificate in Public Relations at Humber College in Toronto, Canada. Follow Laine on Twitter @LaineBComms.