AI Trust Divide Shapes the Future of Communications
The Leadership in the AI Era poll by CommPRO reveals a communications industry moving rapidly into an AI-enabled future while still searching for guardrails that feel credible, consistent, and aligned with public expectations. Framed around Communications Week, the results paint a picture of a profession that sees AI not as an abstract frontier but as a daily companion reshaping workflows, creative processes, and risk calculations.
The poll was designed around three foundational questions that diagnose how communicators are navigating AI’s influence on strategy, operations, and trust. Together, they capture a profession in the middle of redefining its relationship to intelligence tools that now shape the speed, scale, and credibility of modern communication.
Question 1: Biggest Challenge Facing Communicators in 2025
Respondents selected from themes such as maintaining human authenticity, measuring communication effectiveness, managing misinformation, and navigating political and social polarization. While incomplete formatting in the raw poll file prevents a precise tally of the percentages, the responses cluster strongly around one central issue: communicators worry most about integrating AI without losing the human layer that makes messages believable, empathetic, and contextually grounded.
Authenticity remains an industry currency. Communicators are enthusiastic about AI’s efficiency, yet wary of undermining audience trust if messages feel synthetic. This signals a need for clear creative oversight frameworks, human tone-checks, and brand voice governance so that AI accelerates work without eroding its credibility.
Question 2: Organization's Current Use of AI in Communications
The poll asked participants to describe how deeply AI is embedded in their workflows, offering options such as active experimentation, broad functional deployment, selective use, or uncertainty about existing practices. While formatting inconsistencies in the raw data prevented a percentage breakdown, the pattern is clear: respondents are spread across the adoption curve, from early experimenters to fully integrated teams, with a meaningful group unsure how AI is being used internally.
AI maturity varies widely across the profession. Some teams already automate research, summarization, and content development, while others are still mapping where AI appears in their tool stack. The insight is urgent: communicators cannot govern what they cannot see. Organizations need a shared AI inventory and clear usage guidelines so leaders can align expectations, maintain oversight, and reduce blind-spot risk.
Question 3: Disclosure of AI Use
This question delivered the most decisive signal in the poll. Twenty-five percent of respondents favor full transparency, saying acknowledgment strengthens audience trust. Another 37.5 percent believe disclosure should depend on context, reflecting a flexible approach that adjusts to the visibility and purpose of the content. A further 25 percent remain unsure because industry standards are still emerging, and 12.5 percent say disclosure is unnecessary since results matter more than the method.
Together, these responses reveal a profession searching for clarity as AI becomes a routine part of communication work. Communicators want practical, credible guidance that protects trust and still allows teams to innovate. The split shows why organizations need to set their own standards now, rather than wait for external rules to define how authenticity and AI coexist.
What this all means
Communicators and marketers face a pivotal opportunity. AI is now part of the message supply chain, and audiences increasingly want to know when automation shapes what they read or view. Transparent operating principles, practical disclosure frameworks, and explicit oversight processes can reduce reputational risk while reinforcing the human value at the center of persuasive communication. The poll signals a profession ready to modernize its playbook, but also one that needs deliberate leadership to define responsible AI use before external forces impose standards on it.