Lots of Questions About AI, Not Many Definitive Answers, Yet

Seth Arenstein, CommPRO CoCreations

To say we live in a dynamic, digital world was an understatement even before OpenAI’s generative AI chatbot, ChatGPT, was unleashed widely last Nov. 30. Now there are more questions than answers.

Warming Up

A scenario: you’re busy at work. You receive an email from a friend who needs a favor. He knows you’re an Amazon Prime member (he’s not and is mildly averse to technology) and asks if you can purchase something for him and have it delivered. He’ll send you a check right away. Fine. Pressed for time, you do the favor with several mouse clicks. 

Moments later, your friend emails again. His account was hacked, he writes. “Ignore any messages that come from my email”; they’re from a hacker.

You’re worried. Was the Amazon Prime request a fake? You call the friend repeatedly, but there’s no answer. In a sweat, you cancel your credit card, halting the purchase. Maybe.

Yes, you should’ve called the friend initially, but his request seemed legit and you were at work. (An ancillary lesson: don’t read personal email on the clock. Come on.) 

In truth, no matter how violated you feel, it’s likely you’re not special. This hacker probably used AI to fool you and many, many others. 

AI and Crisis Response

Another scenario: It’s a Friday evening. A well-known carrier’s flight takes off from London, bound for Addis Ababa Bole International Airport. Unfortunately, hours in, an armed, anti-government group hijacks the flight, which carries more than 300 passengers plus crew. The hijackers contact air traffic control (ATC), relaying their demands.

ATC alerts the carrier. However, it’s 3 a.m. at the airline’s home base and nobody from the C-suite is available. Someone reaches you, a mid-level communicator. You’re sleeping, but you awaken and take the call. The flight manifest indicates most passengers are Ethiopian, but there’s no senior airline official who speaks Oromo, Amharic, Somali, Arabic or any of the roughly 90 languages used in Ethiopia.

At 4 a.m. your time, you issue a pre-approved holding statement. But you do something else, too.

You alert an AI team member (it’s always awake), providing a script for a short video that it will post as soon as possible. The video will run on the company site only until a senior airline executive is located and a video with a real person can be posted.

Bridging the gap until then, the generative AI-created video shows a virtual spokesperson disclosing what little the airline knows. Essentially the virtual spokesperson reads your holding statement. 

A Perfect (Virtual) Spokesperson?

Fortunately, your AI colleague creates videos of the virtual spokesperson in several Ethiopian tongues. Based on research, the AI has decided on the best age, gender and ethnicity for the virtual spokesperson; in this case, several different spokespeople are created. The videos clearly state that the spokesperson is computer generated.

A decent use of generative AI, right? 

Crisis communicators were wary of this use case, mostly for trust reasons. Choosing a virtual spokesperson’s age and gender “feels manipulative, intentionally manipulative,” says Deb Hileman, president/CEO, Institute for Crisis Management. She prefers issuing a written statement “and using technology to translate it” into Ethiopian tongues. “The most important aspect of crisis communication is human empathy,” she adds. “Find a human” to make the statement.

Similarly, Jon Goldberg, chief reputation architect, Reputation Architects Inc., worries about using an AI-created spokesperson instead of a human, even as a stopgap measure. At this moment, when generative AI is new, he’s concerned that stakeholders might suggest “an organization is using AI to distance itself” from the crisis, or “that it cares so little it’s delegating communication about a serious situation to a robot.”

Of course, considering the ineptness of some executives’ crisis communication skills and how inhuman certain CEOs seem, deploying an AI-generated communicator in a crisis might be an improvement. 

Building Trust? Marketing and AI

Another early adopter of AI and generative AI is marketing, of course. 

Here’s a common scenario. You’re at a retailer’s site researching products you might purchase. You click past the banner explaining that your entry into the site implies consent.

While searching, you only browse, leaving neither your name nor your email address. And, wary of joining yet another email list, you decline to sign up for anything that would entitle you to receive more information from the retailer in exchange for 10% off your first purchase. In sum, you’ve done the digital equivalent of window shopping at a brick-and-mortar store.

A few hours later, an email arrives. It’s from the company whose site you visited. “We noticed you were looking at items on our site. You have good taste. Do you need more time to decide?” Or it says, “We’ll discount items 10% if you purchase them today.” Or the ubiquitous “Do you want to join our mailing list and get exclusive deals?”  

What’s happened is that the site has captured your IP address and matched it quickly to an email address (it must be done quickly, since an IP address will have hundreds of owners during its lifetime). Beyond that, the retailer knows little about you. You’re safe. (By this point, you should be shaking your head no.)

Relationships Created

Enter Krateo, a startup that promises it can provide retailers with additional data about you and other anonymous visitors. Krateo monitors the retailer’s site and, since you gave implied consent, it captures your IP address quickly; its proprietary AI technology then scans myriad databases to which you supplied information legally. Perhaps you bought a car and provided the dealer with your name, address, date of birth, gender, etc.

In some cases, Krateo can capture email addresses only, as it did with about half of the roughly 5,000 anonymous visitors to a site that was shown to your writer as an example of the company’s capabilities. However, even with just the email addresses of anonymous visitors, Krateo can provide the retailer with useful data.

For example, Krateo compiled a list of anonymous visitors’ emails, but limited it to those who made multiple trips to the site. The thinking is that these visitors represent more serious potential customers, says Clay Sharman, Krateo’s founder and chief creative dude (yes, that’s his formal title).

Name, Age and Address

Where Krateo can make things much more interesting is when its AI models work on the other 2,500 visitors, who, remember, left no information at the site. In addition to matching their IP addresses with emails, Krateo’s AI locates their names, addresses, ages, income ranges and phone numbers, Sharman says. In other words, marketing gold. From there, Krateo can provide retailers and other clients with myriad insights.

For example, a retailer was targeting its marketing toward younger people. Instead, Krateo showed the company that most of the web surfers it could track on the site were 45 and older. Another insight this retailer gained was that a large segment of site visitors came from large companies, including C-suite members. “This caught [the retailer] off guard,” Sharman says, noting the company is reconsidering some of its marketing strategies based on these insights.

Moreover, with the additional information Krateo captures, it elevates the kind of relationship a marketer can build with an anonymous visitor, Sharman says. Instead of being purely transactional, the retailer knows enough about you to suggest useful content, for instance. Indeed, one of Sharman’s goals is that Krateo will help companies regain trust with customers. 

Sharman and Krateo venture catalyst and advisor Archana Mehta note the tool gains power the more data it’s fed. Even better, however, is that Sharman and Mehta are a refreshing contrast to the prevalent image of less ethical AI exploiters.

For example, they’re fanatical about how data is collected. They’ll work only with organizations that gather data ethically and promise not to sell it to third parties. As Sharman says, “Nobody should get an email from a company and say, ‘Hmm. I don’t remember doing anything with this brand.’”

Perhaps that attitude is a reason to breathe a bit easier at this moment, when the U.S. is only beginning to think about guardrails for AI.

Seth Arenstein

Seth Arenstein is a freelance writer for CommPRO and former editor of PRNEWS and Crisis Insider.

https://www.linkedin.com/in/seth-arenstein-2a11536/