As if think tanks didn’t already have enough challenges – operating in increasingly difficult environments, facing growing pessimism about political landscapes, and dealing with ongoing financial constraints – now artificial intelligence (AI) adds a fresh layer of unpredictability!

To explore how think tank communicators are handling these uncertainties, WonkComms conducted a survey capturing current attitudes, strategies, and practices within the sector, with a particular focus on how AI is used in day-to-day work.

The survey ran from February to April 2024 and gathered responses from 111 individuals across 21 countries, with support from organizations including Cast from Clay, On Think Tanks, Parsons TKO, R Street Institute, Smart Thinking, Soapbox, Sociopublico, and the Wuppertal Institute.

Our findings reveal that think tank communicators are already using AI. However, to avoid reputational and other risks, think tanks must address two immediate AI-related issues: organizational support and training.

Overview of AI in Think Tanks

The survey results show a varied picture of AI adoption: significant AI use accompanied by high productivity expectations, diverse concerns about AI’s impact on individuals and society, and a demand for better support from think tank employers.

These results align with similar studies in other fields, reassuring think tank communicators that they are not alone in these experiences.

However, two key challenges stand out, both concerning how think tanks can help their communications teams use AI responsibly and efficiently:

1. In a credibility-centered sector, widespread AI usage with insufficient training poses risks.

Addressing this will help think tanks understand and meet staff needs, enhancing productivity without compromising organizational reputation.

2. A lack of attention to organizational policies and limited collaboration within the sector signals a missed opportunity.

Think tanks need to explore how they can create supportive AI environments by establishing safe spaces for experimentation, setting clear principles, and fostering collaborative cultures.

Six Key Findings

1. Many Think Tank Professionals Are Already Using AI

Of our respondents, 90% reported using AI at work, primarily for tasks like writing, editing, and transcribing.

Around 75% are in the early stages of AI use, employing it occasionally for specific tasks. However, only 22% use AI regularly or extensively in their roles.

Communications teams use AI for tasks including social media (43%), editing (41%), brainstorming (36%), background research (31%), and content creation (25%).

Moreover, 26% reported that research teams in their organizations use AI, as do HR and support functions (20%).

2. AI Training is Needed and Desired

Overall, 95% of respondents believe that AI could boost their productivity, though they seek support for maximizing efficiency (43%) and show interest in training and online learning (58% and 55%, respectively) to develop their skills.

This aligns with findings from other sectors, where reskilling and upskilling have become major themes.

Current data indicates that leaders in many sectors anticipate 40% of their workforce will need retraining within the next five years due to AI, while 60% of employees themselves believe retraining will be necessary.

3. Concerns Over AI Usage

Many respondents are cautious about AI, with concerns including diminished human interaction (48%), privacy risks (46%), the accelerating pace of change (42%), and the perpetuation of biases (41%).

With AI usage this high, it is important to consider the implications of using AI for editing, content creation, and research.

A notable 25% of respondents reported using AI without disclosing it, which, if discovered, could harm the credibility of their organization.

4. Low Concern Over Job Impact

Despite the demand for training, only 20% feared that AI might replace their roles, a lower share than studies in other sectors have found.

5. Many Think Tanks Are Unprepared for AI

In the WonkComms survey, 70% felt that their organization was either unprepared or only somewhat prepared for AI.

While policies are essential for safe AI use, restrictive measures are unlikely to be effective given how accessible AI tools are, and unsanctioned use can introduce legal and security risks.

6. A Desire to Connect and Share AI Insights

Many respondents expressed interest in learning from other organizations (55%), engaging with peers about AI (43%), and consulting with experts (40%).

As one outcome of the survey, DGAP used the findings to draft a list of secure AI tools and to establish a code of conduct for their use.

Presenting the Survey Findings with AI Avatars

The survey findings were showcased at the On Think Tanks conference in Barcelona in May 2024, using AI-generated avatars to represent common survey responses:

  1. The survey data was uploaded to ChatGPT to analyze the responses and generate scripts for two personas representing key traits (a rough sketch of this step appears after the list).
  2. Scripts were created for avatars representing respondents from the United Kingdom and the United States, the most represented countries.
  3. Bias amplification was tested, producing a second version of the scripts with exaggerated traits.
  4. Video avatars were created using Synthesia.io.
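For readers curious what step 1 might look like in practice, below is a minimal Python sketch of prompting a chat model to draft a persona script from aggregated survey responses. The model name, prompt wording, and file name (survey_responses.csv) are illustrative assumptions rather than the actual WonkComms pipeline, and the Synthesia step is left as a manual hand-off.

```python
# Minimal sketch: generate a persona script from survey data with the
# OpenAI Python SDK. All names here (file, model, persona details) are
# illustrative assumptions, not the survey team's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the aggregated survey responses (hypothetical file name).
with open("survey_responses.csv", encoding="utf-8") as f:
    survey_data = f.read()

prompt = (
    "Based on the survey responses below, write a short first-person "
    "video script for 'Emma', a UK-based think tank communicator at the "
    "start of her AI journey who wants organizational support.\n\n"
    + survey_data
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

script = response.choices[0].message.content
print(script)
# The finished script would then be fed to a video tool such as
# Synthesia.io to render the avatar (step 4).
```

The bias-amplification check in step 3 could be run the same way: ask the model for a second version of each script with the persona’s traits deliberately exaggerated, then compare the two outputs by hand.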

Two personas, Alex and Emma, were created to share these insights:

Emma

Emma represents respondents at the start of their AI journey, hopeful yet in need of organizational support for proper AI use.

Alex

Alex reflects cautious respondents concerned about AI’s potential impact on human interaction, privacy, and ethical considerations, and who wish to connect with peers on these issues.