We stand at the nexus of an information glut and an action deficit — particularly when it comes to the defining crisis of our time: climate change. Despite an abundance of data, a disconnect persists. According to the Yale Program on Climate Change Communication, 72% of Americans say climate change stokes their anxieties, yet only 33% actually bring it up in conversation, even with those closest to them. Why this incongruity between apprehension and dialogue? It’s not just the message that counts, but how it’s delivered.
Just this year, the Intergovernmental Panel on Climate Change (IPCC) published its latest report — a document describing the state of scientific, technical, and socio-economic knowledge on climate change, its impacts, and future risks. Crafted by hundreds of experts and ratified by nearly 200 nations, the full report is sobering, unflinching, and, at 115 pages, overwhelming. Yet journalists may get as little as a 24-hour embargo window to distill this information into a comprehensive yet digestible read.
To tackle this communication challenge directly, we ran a specialized test case using ChatGPT, which offers journalists facing tight deadlines the ability to rapidly analyze expansive data sets. For this exercise, we input a large volume of text extracted from the latest IPCC report. Then we prompted the chatbot to “review and analyze the following data, then concisely give us the main points.” We repeated this process to give the AI a broader contextual view of the entire report. To assess the effectiveness of the AI tool in data interpretation, we compared its output with the initial stories on the IPCC report published by national news sources including The Washington Post, New York Times, Los Angeles Times, and Wall Street Journal. Our objective was to gauge the analytical accuracy and narrative coherence that AI tools could potentially bring to journalistic coverage.
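The repeated-prompting process described above can be sketched as a simple chunking loop: the report text is split into pieces small enough to fit in one prompt, and the same instruction is prepended to each piece. This is an illustrative sketch, not our exact workflow; the chunk size, function names, and the omitted model call are assumptions.

```python
# Illustrative sketch of chunked summarization prompts (assumed chunk size;
# the actual call to the chat model is omitted here).

def chunk_text(text, max_words=1500):
    """Split a long report into roughly equal word-count chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# The instruction we used, repeated verbatim for each chunk of the report.
PROMPT = ("Review and analyze the following data, "
          "then concisely give us the main points.\n\n{chunk}")

def build_prompts(report_text, max_words=1500):
    """One prompt per chunk, so repeated passes cover the whole report."""
    return [PROMPT.format(chunk=c)
            for c in chunk_text(report_text, max_words)]
```

Each prompt in the resulting list would then be sent to the model in turn, with the per-chunk summaries collected and reviewed against the published coverage.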