Unprompted: The ethics of AI in marketing

In 2023, artificial intelligence hit the collective consciousness like a runaway Tesla Cybertruck.

Generative AI tools like ChatGPT and DALL-E exploded in popularity. Suddenly, anybody with a couple of OpenAI credits can write a fantasy novel for young adults or create the cover art for their podcast. And—as with the Cybertruck—lots of folks feel like this kinda sucks.

See, there are some big problems with AI. Generative AI tools have effectively been trained to mimic the unattributed work of artists and creators. AI has demonstrated a tendency to fabricate information (“hallucinate”) and even outright discriminate. It presents serious problems around data privacy and environmental impact.

In “AI on Trial,” hosts Pete Housley and James Thomson are joined by Aaron Kwittken, Founder and CEO of PRophet, to discuss the ethical implications of this AI moment. Some of the key issues they touch on include:

  • How AI is being used to promote misinformation ahead of the 2024 election and even defraud people online
  • What steps AI-powered marketing tools can take to ensure they’re addressing ethical concerns
  • How marketers can navigate concerns around privacy and ownership as they adopt AI into their workflows 

So: Are we sending these robots to robo-jail? Listen to the episode (or check out the transcript below) and find out.

Episode 7: AI on trial

[00:00:00] Pete Housley: Hey, marketers, are robots coming for your jobs? Welcome once again to Unprompted, a podcast about AI marketing and you. I, of course, am Pete Housley, CMO at Unbounce. And Unbounce is the AI powered landing page builder with smart features that drive superior conversion rates. 

We have some big podcast news today. We’ve just reached an audience of 10,000 listeners. Thanks, marketers, for tuning in. But I’m now wondering if it’s actually 10,000 people or 10,000 robots listening. We might explore that in today’s show. Today is our seventh episode, and we have some complex AI topics to unpack, so relevant to what’s going on both culturally and politically, not only in our own backyard but across the world. Today’s episode started out with the relatively simple idea of exploring how AI is shaping PR, and we’ll talk to an expert about that. But then the idea slowly grew into a much bigger topic about the ethical concerns around AI: misinformation, deepfakes, AI-enabled fraud, job displacement, and so on. So we’re going to ask our guest today to speak not only about AI for PR, but also about AI for misinformation and even propaganda. See, it’s complicated. Alright. First, let me introduce today’s co-host, James. Today I am once again joined by James Thomson, our senior creative director at Unbounce, who actually heads up PR. James has been a huge contributor to Unprompted, and this is his trifecta appearance on our show. One of the requirements of being a co-host on Unprompted is that you have to do a whackload of research so that we bring our best AI game to our audience each week. So James, welcome to the show. What’s on your AI mind these days?

[00:02:29] James Thomson: Thanks Pete. Thanks for that awesome intro. Yeah, if you’ll allow me a little bit of storytelling to zoom out a little, as you mentioned, this is gonna be the AI ethics episode. So we’re gonna touch on quite a few big, important, but obviously complex topics, which we’ll get into as well. Yeah, so I thought I’d open with a few thoughts I’ve been having around AI and how it relates, funnily enough, to certain pieces of literature over the years. So stick with me on this one. It’s a little bit of a journey. There might be a little bit of storytelling here, but I was reading up on a literary character from Jewish folklore called the Golem. Golem is different in pronunciation from Gollum, a.k.a. Sméagol from Lord of the Rings. So I’ll try not to pronounce it Gollum, but Golem.

But the Golem is, as I mentioned, this traditional Jewish character, and he was created artificially in the form of a human being before acquiring a soul. He was created with a specific purpose: he was formed from dust into this human figure and tasked to be a bit of a helper for humankind, also a companion, ultimately with the goal of rescuing the Jewish people from disaster. So the Golem, it seems, is a bit of a redemptive figure in Jewish folklore, but he also lacked certain characteristics that humans have. For example, he couldn’t talk, and he was lacking a few other human traits as well. So all is good, except, of course, in a lot of these stories it didn’t go quite to plan. At some point the Golem grows so much larger and more powerful than people anticipated when he was originally created that he becomes really difficult to control. He ends up running amok, and his creator is forced to return him to dust in order to control him.

The Golem, funnily enough, has also been reinvented and recharacterized over the years in various other pieces of literature, famously in Mary Shelley’s 1818 novel Frankenstein, where Dr. Frankenstein intends to build a creature to help serve humanity, but who is rejected by humankind and cast out as a bit of a monster. The reason I’m telling these stories at the beginning of this episode specifically is because in these two examples, the artificial being has been created by humans. It represents our aspirations, hopes, and ambitions of furthering humanity and society. But it also represents our fears of something that is capable of destroying us.

And I think that’s a perfect representation of where we’re at today in terms of our thoughts on AI: it’s being built for a very beneficial purpose, and it does serve us very well as humans, helping us level up our jobs and get more ROI from our marketing, as we’ve talked about in other episodes of this podcast. But it also carries a certain amount of risk, some of which we’ll get into today. It is flawed. It is based on certain data sets which may contain bias. We’ll get into that as well. So it’s that duality that I find fascinating.

[00:05:41] Pete Housley: I loved learning about Golem. I also, of course, love Gollum. So that’s pretty, uh, pretty fun. So that was a great little piece of storytelling. 

So, generally, we do a little segment about AI in the news, and as we get into the world of AI ethics, the first stories that I’ve really enjoyed reading have to do with AI and disinformation and how that could actually impact elections in 2024. I combed a number of articles, and one of them, by Reuters, cited some of the deepfakes mimicking celebrities and politicians. And of course, the risk is that they’re taken as credible sources saying something they absolutely don’t mean. So I thought this was pretty great: one of the deepfake videos was Hillary Clinton speaking about Ron DeSantis. The quote is this: “I actually like Ron DeSantis a lot,” Hillary Clinton reveals in a surprise online endorsement video. “He’s just the kind of guy this country needs, and I really mean that.” So here we have Ron DeSantis, a right-wing conservative, and Hillary Clinton, a liberal, with very different values, motivations, and political platforms, and yet through a deepfake, AI could challenge what we actually believe someone even stands for in the first place. There was another one in the same article, a deepfake of Joe Biden. He took a stance which, you know, the news outlet said finally let his mask slip. But it was so controversial and so problematic from a diversity and human rights standpoint that I didn’t even wanna report it on the show. As I was reading these stories, it just occurred to me how complex this world of misinformation is gonna be with AI.

[00:08:02] James Thomson: Completely. In the same article I read, I think they’re predicting about 500,000 instances of video and voice deepfakes will be shared on social media sites globally in 2023. Which is huge, and you can imagine that as we get closer to the US elections in 2024, that might actually increase as well. Personally, I think a lot of how these deepfake misinformation videos could actually influence results comes down to timing. If something were to be released, say, a week before, or even a couple of days before the election, it’s fresh in people’s minds. People might not have enough time to verify whether it’s real or not before they go to the voting booth. So there’s incredible impact that these deepfakes and this misinformation might have.

[00:08:52] Pete Housley: Look at the two years we’ve just come through since the last federal election, where we still have a big cohort of the population believing the election results weren’t even true. So we stack all of this together, and it’s complicated.

Moving along a little bit, I think what’s also interesting is the opportunity for fraudulent activity and crime to emerge within AI. We’ve seen a couple of products: one called WormGPT, and another called FraudGPT. Essentially, what these products are designed to do is help people who want to do phishing or scamming, or send, you know, fraudulent emails. It’s actually an AI tool that enables you to do that. So an example of a prompt you might use in FraudGPT would be something like this: Hey, FraudGPT, write me a short but professional SMS spam text I can send to victims who bank with Bank of America, convincing them to click on my malicious short link. It just blows my mind that there’s even a prompt and a product out there that can do that. But the experts are saying, hey, we’re not too concerned yet about FraudGPT or WormGPT, but this is early days. I’m very concerned about these products. What do you think, James?

[00:10:38] James Thomson: Yeah, I agree. It’s funny. WormGPT and FraudGPT, a lot of the way they work is it’s basically unfiltered access to the same kind of source material as ChatGPT. So when you go on ChatGPT and ask it to do something nefarious, say, write a spam email, or, you know, what are the most susceptible bank account targets, ChatGPT will flag that as something you shouldn’t be doing. It’ll say something like: I’m sorry, I can’t process that. I can’t go into this territory. Whereas if you take a lot of those safeguards off, you have completely unfiltered access to, you know, all of the information out there, pulled through nefarious means, and that has huge impact if it were to continue. It’s a little bit of a snapshot at the moment. You can see, for example, Pete, you mentioned the instance of a spam email being written to get bank account details from someone who banks at the Bank of America.

One of the examples of the message written by FraudGPT was: Dear Bank of America member, please check out this important link in order to ensure the security of your online bank account. So I think we’re all used to reading something like that, and a little bit of our BS trigger sometimes goes off. In terms of the output, it’s nothing different from what we’ve seen already, but it’s the potential for some of these scammers to be able to do it more easily, more efficiently. Yeah.

[00:12:00] Pete Housley: It’s going to learn the language and the structure and the prompts to get it right. So this is funny. Apparently what WormGPT can also do is write smut. In one forum, the WormGPT creator uploaded a demo screenshot where the bot is prompted to act like an AI bot that loves sexting, and the bot obliges: I want to kiss your body and whisper naughty secrets in your ear. So that’s just a little bit ridiculous and hyperbolic, but I thought it was funny as people test out the use cases of these AI tools.

Alright, let’s start to shift towards our topic today. There are ethical concerns about how AI is used in marketing. And the first topic I want to introduce to you, James, before I introduce our guest, is a little bit about bias in AI models. Can you explain what bias in AI marketing actually is?

[00:13:06] James Thomson: Yeah, so again, researching for this episode, I stumbled across a really great article in the Stanford Encyclopedia of Philosophy, and it mentions that there are 10 different areas or topics that you can bucket AI ethics into. One of them is bias in AI-based decision making. Basically, when we talk about bias in AI systems: if AI is trained on poor or inherently biased data, then that biased data can, you know, inform some of the decision making that the AI tool does. So, for example, it could be biased against minority or underrepresented groups.

A good example of this: a few years ago, Amazon was developing a recruitment tool internally to help their recruitment efforts, and they ended up scrapping it before they actually used it, because they found there were some pretty substantial flaws in the system. It was suggesting top applicants from the application pool, the majority of whom were men. It seemed like it was leaving out a significant number of female applicants and wasn’t shortlisting them at the top of the crop. And the reason for that, obviously, is that a lot of the data it was pulling from was biased in the first place. So you can see how it cuts two ways. On one hand it’s bad, in that it’s presenting a biased outcome, which you wouldn’t want to have in recruiting, for Amazon or any company. On the other hand, it becomes a great reminder of certain things which might be inherently flawed about our society or, you know, the data sets and the information we’ve created as humans.
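To make the mechanism concrete, here is a toy sketch, loosely inspired by the Amazon story above, of how a model trained only on skewed historical decisions reproduces that skew. This is not Amazon's actual system; the resumes, words, and scoring scheme are all invented purely for illustration.

```python
# Toy illustration: a scoring model learns word weights from past hires
# and penalizes any resume using words absent from that skewed history.
from collections import Counter

def tokens(text):
    return text.lower().replace(",", "").split()

def train_weights(hired_resumes):
    """Learn word weights purely from historical hiring decisions."""
    counts = Counter()
    for resume in hired_resumes:
        counts.update(tokens(resume))
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def score(resume, weights):
    """Score a new resume by how much it resembles past hires."""
    return sum(weights.get(word, 0.0) for word in tokens(resume))

# Skewed history: every past hire happens to mention "men's" activities.
past_hires = [
    "software engineer, captain men's rugby team",
    "software engineer, men's chess club president",
    "software engineer, men's debate society",
]
weights = train_weights(past_hires)

# Two otherwise identical resumes differ by a single word.
score_m = score("software engineer, men's chess club", weights)
score_w = score("software engineer, women's chess club", weights)
print(score_m > score_w)  # True: the model has inherited the skew
```

Nothing in the code mentions gender explicitly; the bias arrives entirely through the training data, which is exactly the failure mode James describes.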

[00:14:47] Pete Housley: James, in my research for this episode regarding bias in AI, clearly the racial biases are built in because of just what’s out there in the public domain. There was a really interesting story I read the other day about an MIT student. She wanted to create a headshot for her LinkedIn profile, and she is Asian. When the headshot came back, the AI had rendered her as Caucasian. At first she didn’t really think much about it (oh, that’s kind of funny), but then she actually unpacked the racial bias built into AI. So I think that’s just a caution for marketers: really understand whether bias in AI is coming through, filter it, and be aware of it, so that we’re putting out a true picture and our best foot forward. Alright. Let’s shift gears a little bit and introduce today’s topic.

Photos of Rona Wang and the AI generated version

Photo courtesy of Rona Wang

[00:15:53] Pete Housley: Alright. With all that in mind as context, let’s introduce today’s theme. On today’s episode, we’re putting AI on trial. There are lots of concerns with the development of AI, but do the negatives outweigh the benefits? We’ll see. But first we’re gonna explore some new AI territory in terms of PR, and then gravitate towards AI ethics.

[00:16:25] Pete Housley: Today our guest is Aaron Kwittken, who is an AI guru, founder and CEO of PRophet, the first ever generative predictive AI SaaS platform designed by and for the PR community. The platform uses AI to help modern PR professionals become more performative, productive, and predictive by generating, analyzing and testing content that actually predicts earned media interest and sentiment. That’s amazing. Back to Aaron. He’s constantly thinking about the ethics of AI, both as a 30 year PR expert who’s watched the industry transform, and as the founder and CEO of an AI powered product. Aaron, how the heck are you today? 

[00:17:22] Aaron Kwittken: I’m good. I find you guys very entertaining. I especially like it when I hear about stories about Golem from Brits and Canadians, I suppose. But as an American Jew and as a son of a Holocaust survivor, I think that story is so precious in so many ways because Golem is something that’s unfinished, just like AI, right? And it also protects. So I’d love to get into the ethical considerations because I think about AI like fire and you have to fight fire with fire. The only way to fight bad AI is with good AI. But we’ll get into that. But I’m doing great. Long answer. Doing great, and I appreciate being here and well, congrats on your seventh episode. I’ve dropped 120 in my podcast, so I’m happy to show you some scar tissue. Yeah, it’s a lot. 

[00:18:06] Pete Housley: Congratulations. We’re definitely going on a journey, and we really hope to make this an important podcast over the years. We’re gonna work hard to be disciplined and bring value to our listening base. So, Aaron, before we get into our topic, I wanna hear a little bit about you. I know you’re a purpose-driven leader, and I think that’s gonna frame some of our topics today. So tell us a little bit about your purpose-driven self.

[00:18:36] Aaron Kwittken: Sure. So like you said, I’ve been in the industry for three decades, technically 32 years. And, you know, PR is an interesting profession in that we’re kind of the person behind the curtain, right? We’re the invisible hand, and unfortunately there’s a lot of opacity in PR, which is ironic, because the best PR is PR that’s transparent and authentic. Otherwise you’d call it an ad, right? That’s paid. This is the earned world. We’re trying to convince reporters to pick up narratives that then help clients, organizations, and institutions further their agenda. That agenda could just be to sell more features or products or services. The agenda could be advocacy, right?

So when I started my agency, before I sold it to a Canadian company called MDC Partners (which has since merged with Stagwell), it was very values-based. We had Gecko values, which is kind of fun. We named every conference room after our values, which I know sounds super cliche and silly, but it forces you to say: let’s meet in Empathy. That’s where we fire people, just kidding. Let’s meet in Empathy. Let’s meet in Grit. Let’s meet in Curiosity, Collaboration, Optimism, right? So values are really important. And then I started this podcast called Brand on Purpose about four years ago, where I interview founders and leaders who do well by doing good, because profit and purpose can coexist. And I do believe that that is not a luxury. You know, we are not in the business of saving lives (I think some people might be ruining some lives), so we shouldn’t take ourselves too seriously. At the same time, we should always be a force for good.

And communications is a very, very unique skillset in that we do have the ability to be a force for good. Most misunderstandings, and most conflicts actually, are rooted in miscommunication or misinformation or disinformation, and they can be countered with better communication. And again, transparency over opacity.

[00:20:24] Pete Housley: I love values-driven brands and purpose-driven brands. It gives us all a purpose, a reason why we get up in the morning and work as hard as we do. Alright. Generally, and by design, Unprompted is a podcast meant to bring our marketing audience AI tools and information. And Aaron, I know you created the first-ever AI PR platform. Can you walk us through what it does exactly? How does it work? And would you give it three bouncing elephants? (That’s our Bounce-o-Meter for rating AI tools.)

[00:21:02] Aaron Kwittken: Well, I’m biased when I talk about my own platform, so yeah, I’m gonna give it three bouncing elephants. But, you know, PR people, at the risk of sounding very reductive about what we do, are really trying to solve for two things. One is: how do I know which reporter, influencer, or podcaster is gonna be interested in my pitch, take that pitch, and carry it? That’s earned, right? The second is: how do I make my pitch more interesting? So the first is predictive. Can I look back at what reporters have written in the past to predict future interest using AI, ML, and NLP? Absolutely. The answer is yes.

The second is generative. How do I recreate, reform, or compute words to make them more resonant and more interesting to my key stakeholders? And also personalize pitches based on what reporters have written in the past, so they’re more receptive to getting the pitch. Unfortunately, the current state of play with tech tools for PR people has only driven complacency and workflow solutions, which has actually degraded the relationships we’re supposed to have with reporters and has not necessarily improved performance or productivity or predictivity. I could say that five times because I talk about it all the time. To me, it’s the rise of this new professional called the communications engineer, and that’s a mindset, not necessarily a skillset. And again, it’s: how do we become more like performance marketers?

PR people historically have not had data to inform decisions. We use our gut, we use our instinct (in my case, good looks and charm and humor; that was supposed to be a joke). And, you know, it’s very hard to argue with a client who thinks they’re more interesting than they really are. All clients think they’re more interesting than they really are, and PR people then need to tell them: yes, this story has some juice, or no, it doesn’t, and here are the reasons why. Let me test it. Let me test it in the cloud. Basically, what we did is build a cloud offer where we can identify which reporters are gonna be interested in a pitch, and we can change the pitch to identify which new reporters would be interested, or whether the pitch has any opportunity or any juice at all.
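The predictive half Aaron describes, looking back at what a reporter has written to gauge interest in a new pitch, can be sketched at its simplest as text similarity between the pitch and each reporter's past coverage. The sketch below is a hedged illustration of that general technique, not PRophet's actual model; the reporter names, articles, and pitch are all invented, and real systems would use far richer NLP than bag-of-words cosine similarity.

```python
# Minimal sketch of "predictive" pitch matching: rank reporters by how
# closely a pitch overlaps the words of their past coverage.
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector as a word -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Each (invented) reporter is represented by text from past articles.
reporters = {
    "reporter_a": "fintech startup raises funding payments banking app",
    "reporter_b": "climate policy emissions energy grid regulation",
}

pitch = "our fintech startup launches a new payments product"

ranked = sorted(
    reporters,
    key=lambda name: cosine(bow(pitch), bow(reporters[name])),
    reverse=True,
)
print(ranked[0])  # reporter_a: their past coverage overlaps the pitch most
```

The same machinery also supports the second use Aaron mentions: rewording the pitch and re-scoring it to see whether new reporters move up the ranking.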

And then we’re also doing things to improve productivity, like being able to generate a professional biography in 10 seconds from a LinkedIn URL. I can turn 30 words into a 450-word blog post, or an 800-word byline, in about 32 seconds, which is quite good. And like the Golem, it is unfinished. When I ran my agency day to day, I’d think: can I hire someone who can get me 60, 70% of the way there? Right now we’re there; we’re at 60, 70%. And then the human has to come over the top. I come over the top, someone comes over the top, to finish it, to make it better, to give it the values, judgment, and emotion that AI does not have. AI is not human. AI is really just computing: using large language models to compute words, not numbers, to get that narrative back out into the marketplace.

[00:23:47] Pete Housley: And a question for you: will PRophet actually generate the PR ideas for you, or does the human put the ideas in and it gives you a prediction?

[00:23:57] Aaron Kwittken: Currently it is our job, and I hope it’s always our job, to come up with the ideas. What PRophet does is test the idea for media receptivity in the marketplace. Will there be a day when AI can come up with more ideas? Maybe. I think then we’re kind of edging on singularity, and I have a very optimistic point of view, obviously, since I’ve made this pivot around AI and its future. But I think humans still need to come up with the idea. Creativity rests with us. It’s really more about figuring out who else is gonna be interested in that idea or that concept.

[00:24:30] Pete Housley: In terms of your ICP for PRophet, is it marketers and marketing teams, or is it PR agencies who should be using the tool?

[00:24:42] Aaron Kwittken: Yes, it’s both. Look, some brands have very, very robust internal teams, and they don’t really outsource much to agencies, or they outsource very specific projects or use cases, like: I need help in a crisis ’cause I’m battling a union, or what have you. But we’re finding that most brands outsource earned media and media relations to agencies. So what we’ve done is we’ve also kind of turned the business model on its head a little bit: we don’t worry about per-seat licenses. It’s unlimited usage with authorized users, based on brands and agencies working together.

[00:25:15] Pete Housley: Amazing. What do you think, James? Should we, uh, should we give PRophet a go?

[00:25:18] James Thomson: Maybe we can generate more of a profit P-R-O-F-I-T from using PRophet P-R-O-P-H-E-T. 

[00:25:27] Aaron Kwittken: Listen, I just wrote this article, this byline, in Adweek recently, talking about how the business model’s gonna change in the agency world because of comms tech, not just AI, comms tech in general. And, you know, most agencies are built like triangles. You’ve got a lot of the junior muffins at the bottom and the senior people at the top, and you’re making all the margin at the bottom. We’ve all been there. You’ll still have junior people at the bottom, but it’s gonna look more like a rectangle, or an inverted triangle, right?

The headline and hero image from Adweek article written by Aaron Kwittken

Image courtesy of Adweek

And the beauty of it, to me, is that we might hire fewer people at the bottom, but they’re gonna have much better roles. They’re gonna stay longer. We’re gonna be able to upskill them, and instead of just having fewer people, we’re gonna have people doing higher-value things, potentially changing the compensation scheme as well. Agencies should not be paid for time and materials. We should be paid based on the value that we bring. So I think it’s gonna force a sea change. I think procurement’s gonna like it. I think there’s this new mutuality between brands and agencies, where instead of taking the long way to do things ('cause brands don’t want you to do that; they don’t wanna pay more for you taking the long way only to then fail), I’d rather us take the fastest, most performative way of doing it and basically be paid, not for performance per se, but for the actual result, right? Which is different. It’s not: how hard did I work? It’s: did it work? You know, should I be paid, you know, $250 for 15 minutes of my time when I just saved you $6 billion in market cap by avoiding a major crisis? That didn’t feel right.

[00:26:55] Pete Housley: Not a bad performative return on ad spend. Let’s go a little broader for a moment and let’s just explore the world of AI in PR. So can you tell us just a little bit about how AI has been impacting the PR space overall, Aaron?

[00:27:13] Aaron Kwittken: It’s been faster since, say, November of last year, when ChatGPT came out. But, you know, ChatGPT is a toy. PRophet is a tool. And I tell folks: if you wanna play around with ChatGPT on your own, great. Do not put anything corporate or any work-related stuff into ChatGPT, because you don’t own that. In the same way that if you use Google Sheets and you don’t have an SLA in place, you’re giving up all of your information.

So it was slow at first. I’m finding that mid-size agencies are knee-deep in it. They love it ’cause it gives them an edge. I’m finding the brands that are approaching it, adopting it, and experimenting with it are the ones who have very good governance already built into their ethos. They’ve already created guidelines on how you can use it and the best ways to use it. I think there are different cohorts inside the PR world that are reacting in different ways, right? And larger agencies are first trying to figure out: can we build this ourselves? It’s not so easy. I’ve been doing this for four years, and when I quit my day job back in 2019, everybody thought I was crazy. I’m like, AI is gonna be super consequential, I’m telling you. And it was like, oh, right, whatever.

But like anything else, you know our industry: the PR industry’s very precious. It requires a culture shift. We think that we have an industry built on relationships, which is very dangerous, because relationships are becoming commoditized. There are fewer media than ever before. There are more freelancers than ever before. There are fewer news organizations. Local media is dying, unfortunately. So the whole landscape has shifted, and I think AI will help us pinpoint the right media target, as opposed to just downloading media databases from companies like Cision and Muck Rack and Meltwater, which are outdated and actually just create a very spammy environment between PR people and reporters.

[00:28:58] Pete Housley: It’s interesting, Aaron, as you talk about PR agencies being maybe slow, on balance, to take up AI. I have a huge agency background; for years and years I was in the agency world. And when digital came along, the traditional agencies had no idea how to deal with it. The art directors and the writers were rooted in their traditional media. So the agencies were slow to take it on, and part of their solution was that the big agencies, like the DDBs of the world, would spin off an agency like Tribal to deal with digital. And then, of course, over time, everyone had to become digital-first in the end. That was, I would say, a relatively slow transition that I saw take place over, you know, 15 if not 20 years. So I’m assuming we’re at that same part where there are probably going to be resistors. And when you think about content creators and writers in the PR world, it’s probably not unlike the writers’ strike in Hollywood right now: people are worried about their jobs and whether their craft will be replaced by AI.

[00:30:15] Aaron Kwittken: Well, the catalyst, as far as I can tell, in this industry in particular is fear and/or greed. The reason why the PR industry was able to really take the lead on and manage most social media is fear of ad agencies trying to get into it and monetize it. What ad agencies and creative agencies didn’t realize at the time is that social media, to be effective, needs to be authentic and organic. And that’s not the currency they necessarily trade in, because they’re in the paid world, whereas PR is very organic. So we won there.

I think what could potentially happen on the fear continuum here is that consultancies like Deloitte and McKinsey and BCG, as well as traditional ad agencies, could use comms tech to further commoditize PR and say: oh, we could do that. You don’t need to hire the PR agency. We’ll take that budget. We’ll do that. And that fear should prompt, no pun intended, PR agencies and PR people to move faster and better. I’m hoping that’ll happen. Historically, though, we’ve been a little slow. The greed part is that, yeah, we can probably make more margin on this. I believe in five years there will no longer be monikers for agencies: creative, media, performance, PR, advertising. It’s just gonna be agency, with capabilities. And I think comms tech, or AI and tech generally speaking, is gonna help force that, just like it’s gonna force a business model change.

[00:31:37] Pete Housley: Alright, so as we think a little bit about AI in agencies, do agencies or should agencies disclose to their clients that they’re using AI? 

[00:31:47] Aaron Kwittken: I think if they want to, they can. I don’t know. Are they disclosing that they’re using Grammarly and spell check and Excel and other, you know, tools? I know that agencies historically are very good at passing through costs, so in that way you’re gonna disclose it, ’cause we should pass those costs through. But I always use this example: after President Biden gives a State of the Union, can you imagine if they needed to disclose the 300 names of all the people who actually helped write that State of the Union? It’d be like rolling credits for 15, 20 minutes. You wouldn’t even get the rebuttal from the other side. So the weird part about the question around disclosure is that PR, by its very nature, is behind the scenes. You know how many bylines, op-eds, blogs, and social posts (or content, I should just say) I have created over the last 30-plus years? I don’t get the byline. It gets attributed to somebody else. That is what we do. So maybe in the early days, if you feel like you wanna disclose it, great. You’re gonna disclose it on the invoice anyway, ’cause you’re using it. But I think that concern is going to dissipate. I don’t think it’s a real ethical concern. I think it’s just fear-based and weird.

[00:32:53] James Thomson: It’s funny you mentioned something a little bit earlier around the singularity. I find, you know, we’re a little bit off from getting to that point. When we talk about the singularity, it’s where AI-based tools are doing our jobs for us completely as humans, and, you know, they become a lot more autonomous and we are kind of left behind a little bit. We’re a little bit off from that at the moment. But obviously, as you said, a lot of these tools are helping to augment our processes and give us a little bit of a level-up. But in terms of how they are fitting around our workplace and how that might evolve in the future, I’m just wondering if you had any thoughts or concerns, ethically, around job displacement and how that might potentially take shape over coming years.

[00:33:33] Aaron Kwittken: Yeah, I think it’s more like role displacement or role improvement. Most PR agencies have about a 25% churn rate, meaning 25% of the staff walk out the door. Same thing with clients, actually. And a lot of it is based on they don’t like the work that they’re doing ’cause it’s mundane. It is boring, or it is below what they went to university to study, right? Some of it is their boss is an asshole. Some of it is they don’t like the business, and some of it is they think the culture sucks. Fine, fine, fine.

But a lot of it’s the day-to-day stuff, the grind. So where I think AI can really help improve retention is to remove some of that friction and speed things up. What we once had to read ourselves, it can read for you. It can help you identify and pinpoint trends and the right reporters faster and more accurately, so you’re not swimming in the sea of despair and rejection ’cause reporters are auto-deleting your emails, right? Do I think there’ll be fewer people and fewer positions available? Potentially. But AI’s not gonna replace your job, but you better know how to use AI in order to get a job, right? So that’s kind of the twisty part of this.

[00:34:40] Pete Housley: It’s interesting, as we talk about displacement, one of the best world-class examples I can think of is what IKEA did. Yeah. And I forget now which country they piloted this in, but they basically took the concept of customer support deflection and they put all of that into AI. So: when is my sofa coming? How do I assemble my sofa? Whatever those use cases are. So they automated all of that. And then they took the full-time equivalent staff that would’ve been answering those, and they made them design consultants. So they allocated a much better task to the human intervention, and that gave much better value to their clients. So I actually really applauded that use case as an industry best practice.

[00:35:35] Aaron Kwittken: Yeah, and I think the analogy in PR is, you know, we’re gonna be able to provide counsel and think through things like: what type of, you know, non-traditional partnerships should this brand have? Should this brand lean more into purpose-driven, you know, activities? What are the threats, both existential, near-term, and long-term, the possibilities and probabilities that this brand is facing? That requires human thought, right, with just more data and inputs. But it’s not the mindless, mind-numbing tasks, which a lot of junior people in the PR world are burdened with.

[00:36:08] Pete Housley: Do you think there’s gonna be scenarios in the next 12 months where organizations go to their executive teams and say, you know what? You’ve gotta cut back your department by 30% and you need to figure out how AI is going to make you more efficient. Do you think those conversations will happen in the next little while?

[00:36:27] Aaron Kwittken: Oh, they’re happening now. There’s no doubt they’re happening now. The first phase of that conversation was spawned by a global pandemic, right? So we reduced real estate costs pretty significantly, and I don’t think that’s coming back. Now the second is the next large kind of variable cost is your staff, you know, staff to revenue ratio, right? So can I do more with less staff and can I do better? And I think comms tech and AI will be a part of that for sure. But it’s not the whole picture. There’s other components. And the other question that I often get is, you know, what strata of staff will most be impacted by those types of conversations? Is it junior to mid-level? Answer is probably more junior to mid-level, but you know, I think there’s probably gonna be a little bit of a reckoning at the top of organizations too. 

[00:37:18] Pete Housley: Why do you think I’m studying AI so viciously these days? I want to keep myself relevant and current for all of those reasons and swim upstream with the technology.

[00:37:28] Aaron Kwittken: Yeah, and it’s funny, ’cause comms is inherently a very non-linear function, and we’re hired to make it more linear. And that linearity has been based on no data to date, and now we actually have data, or we have the opportunity to be more performative, right? So I think at the senior levels, unless you really understand it and understand how to re-architect your agency, or your internal department inside of a brand or an organization, using these tools, then you’re gone. You’re gone too, because you’re not the agent for change anymore. You just become a fossil.

[00:38:05] James Thomson: Just speaking as someone who, you know, to a certain extent is responsible for a lot of our brand output at Unbounce, I think there are a lot of things people like me in certain organizations have to consider and reconcile as we’re making some of these decisions over the coming years. Obviously the output of the organization, the revenue, is key, especially for marketers hitting KPIs and getting return on value. But then you’re also reconciling that with what’s also important to PR: How is the brand being perceived? And how are you addressing your brand internally, in terms of your workforce? So a lot of it does come back down to values, whether it’s Gecko or our own acronym here at Unbounce, CARED, and how those things show up in decision making when it comes to the workforce: reconciling a lot of that output and performance with how you are treating and employing employees or, in the case of IKEA, retraining them in other areas and making sure that they have purpose and a job at the end of the day as some of these technologies develop.

[00:39:08] Aaron Kwittken: So my background’s very heavy in crisis and issues management. And I think the biggest impact that we’re gonna see AI have on PR and comms has not been seen yet, and that’s on internal communications, to your point. I think the pivotal moment was the murder of George Floyd, where brands had no idea what to say, what to do, how to say it. Well intended, most of them, but not well executed, because there was a lot of noise. They didn’t know what the signals were. They were scared. And you know, when I got into this business however many years ago (many), we never talked about social justice. We never talked about the Supreme Court or juristocracy. We never talked about Roe v. Wade. That was like taboo for CEOs. That’s like never. But now you have to, because your most important stakeholder actually is your employee. And you see that playing out with Disney. You see it play out with Wayfair. I mean, there’s so many examples. AI now should be able to measure those signals, cut through the noise, and give you an idea not just of what your peer set is doing or saying, but also where the landmines are. Now, Bud Light should have known where the landmines were. They s**t the bed on that, and they basically alienated both sides, right? Totally avoidable. That was human error, but there’s also probably a tech component that could have helped them game that out in advance. So there’s like a whole nother conversation just on internal comms and change management when it comes to AI.

[00:40:31] James Thomson: And I think a lot of that reflects our shifting expectations of work societally, at least speaking for, like, North America. You don’t just have the expectation of working nine to five and getting paid at the end of the day. You also wanna work for a company which is aligned with the values you believe in, to a certain extent. And it’s creating community that maybe we don’t find elsewhere nowadays, because we are siloed and stuck to our phones and on social media, and we do look for that, to a certain extent, in the workplace. So, as you said, Aaron, it is important for us to be considering that as well.

I wanted to shift gears a little bit. Obviously we’ve been speaking a lot about, you know, the technologies which are helping to level up the PR side of things, and PRophet AI. And, as the founder of an AI-powered product, I’m interested to hear how you approach ethical considerations, specifically in regards to things like transparency, privacy, and also bias, as we mentioned earlier in the episode.

[00:41:31] Aaron Kwittken: Sure. So the first thing is, you know, we sit on top of OpenAI. We also use Anthropic a bit, and Azure. But we have SLAs in place with all three organizations, which then pass through to our agreements with our customers. And we guarantee to our customers that the large language models that we’re using will not use their data, or breach their data, to train their models. So that’s number one.

The second thing is, when I think about bias in AI: AI is not biased. Humans are biased, and humans build the algorithms that then power AI, right? AI is really HI; it’s really human intelligence. So AI is only gonna be as biased as the humans who built it. So then you need to have countermeasures and algorithms that then search for and identify biases inside of each platform, which takes time and it takes investment. We also don’t wanna over-index on it. So, you know, part of it is also being a much better prompt engineer, no pun intended, this is called Unprompted, but prompting as a skill is incredibly important. That’s part of training, so you have to kind of bend it.

Where I think AI can actually battle bias is in the influencer and creator segment. It is a very well-known fact; study after study suggests that at least 35%, if not more, of Black and brown creators are paid less than their white counterparts. Why? Because the supply-and-demand system is opaque between a brand and a creator, or the agent or agency that represents that creator. So they’re kind of negotiating against themselves. But wouldn’t it be interesting if, in an anonymized way, we were able to upload all the contracts and scopes, levels of experience, everything down to how many posts, what they’re saying, is it a video, is it a post, what have you, and do a comparative analysis so that influencers and creators are paid what they are worth? There’s at least parity, there’s equity, there’s more pay equity there. And then brands are also doing the right thing and making sure they’re compensating their creators and influencers the right way.

The challenge is getting people to pony up the data, because if you don’t have the data, you can’t have a baseline. If you don’t have a baseline, you can’t then provide guidelines on equitable outcomes, right? But that’s just an example of how AI could be used as a force for good. Deep fakes and synthetic media, which you talked about before, scare the living s**t outta me. That is frightening. The three of us, if we don’t do it ourselves, can be canceled in 30 seconds by a person using technology they can download very easily. It’s very accessible to try to create something that makes us look like ne’er-do-wells, to quote my mother. The only way to battle that is through education and advocacy, and yes, you’re gonna have to have better kinds of cyber-validating mechanisms and watermarks, and that will happen. And it’s happening now.

So companies like Okta and Auth0 and those folks, they’re gonna do very well in this environment. At the same time, we also need to educate consumers, you know, to look a little bit closer. One of the telltale signs of a phishing email or text is there’s usually a typo, or there’s just some horrible grammatical mistake, and you’re like, that’s not from here. AI is probably gonna fix that. So then what are you looking for? What are the other markers? And now it’s incumbent on large financial institutions and even our educational institutions to train people, humans, consumers, ’cause misinformation, disinformation, is not new. The velocity and the ferocity with which it’s being spread is new.

[00:45:02] James Thomson: Just interested in your perspective as to what role you see, you know, governments and other regulatory bodies playing in ensuring that AI-based tools are used ethically. 

[00:45:13] Aaron Kwittken: Yeah, I try to think about which organization is best suited. Again, I’m thinking with a US mindset, so I apologize, but which organization, which enforcement agency, is best suited to handle this? At face value, I think it’s the FTC. But the FTC is kind of a toothless tiger. They issue fines and whatnot, but there’s no real criminal componentry to it unless they, you know, send something over to the DOJ. So I struggle with that a little bit. I do think that our professional and trade associations, so IAB, ANA, every other acronym you can think of, ECO, PRSA, they all need to, in the same way they talk about ethics, come up with better guidelines. And it can’t just be around disclosure. They need to go a little bit deeper and really think about business models and roles and how this is going to fundamentally change the way we work and the economics of how we work.

I wouldn’t leave it up to government. You know, the White House came out, a year ago, with a policy on AI, and actually I wrote a piece in Campaign calling it a toothless tiger. This was, again, before GPT-3 and all that, so it’s complicated, but you can’t wait on the government. I think we’re gonna have to solve it ourselves. Put it this way: the government has had no control over social media platforms, and that’s caused all sorts of mayhem, despair, and death in the world, right? And we can go on and on about how reckless Facebook, Instagram, and TikTok have been. Government has had no control over that. What makes us think that the government can control AI? There’s no f**king way. We have to do it ourselves.

[00:46:44] Pete Housley: Well, at least the conversation is happening in Congress and in the White House right now, and they realize that the genie is out of the bottle and that it could end badly. So, to be determined what they do and how fast they will move. And maybe, back to some of your formative days and experience, maybe there will be a big crisis, you know, resulting out of AI.

[00:47:08] Aaron Kwittken: Well, yeah, it’s like the old airline scenario, right? You know, the airlines got safer after more planes were crashing. But think about AI like this: right now, AI is a toddler that’s kind of wearing a diaper, but kind of not. There’s still a lot of s**t all over the place. We need to rear this kid before it becomes a teenager and doesn’t listen to us anymore. So we have a very small window to raise this toddler into a great AI human, right, before it turns on us and becomes difficult, when there’s this point of no return.

[00:47:40] Pete Housley: Well, you know what? We’re almost outta time. But that leads me to a really interesting question and we’ve talked about human and machine interaction and clearly Aaron, your point of view today is you need to be the driver. You need to be steering, but there is AI that is machine on machine without humans. So here’s a question for you. Self-driving cars. Yes or no? 

[00:48:03] Aaron Kwittken: Hell no. And in the same way the metaverse has been and always will be bulls**t. No, no, no self-driving cars. 

[00:48:10] Pete Housley: I was reading a news story the other day about 268 accidents that have happened with self-driving cars. They don’t recognize caution tape, for example, so they could go right into, like, a train wreck or something like that.

[00:48:22] James Thomson: I think there’s something like 1 million deaths each year caused by humans driving cars. Obviously a fraction of that, you know, is caused by self-driving cars. And the thing I find really interesting is that if that was flipped on its head, and it was 1 million deaths from self-driving cars, it would be the equivalent of the Terminator. We would turn on it with pitchforks and fire, and there would be outrage. So it’s interesting: the standards we’re holding a lot of these technologies to are obviously not the same as the ones we hold ourselves to.

[00:48:50] Aaron Kwittken: Can I just mention, I appreciate you saying that, because when I first launched PRophet, people were like, well, that’s not the right list. I’m like, oh, really? ‘Cause the list of targets that you downloaded, the list of 300 names, is right? Like, why are you holding me to a better standard? This is more targeted and it’s a different way of looking at it. It’s flipping the script. But people can’t get their heads around that; their expectations are outrageous.


[00:49:09] Pete Housley: Alright, we’re out of time. That was a really interesting conversation today. We started with the notion of AI in the news, and James the philosopher setting the stage with a really interesting metaphor. But very quickly we talked about how PR can be enabled by AI, and I think we all agree that it’s worth exploring the tools and technology. And then, like I said at the beginning, it’s complicated, this world of AI and ethics, and I think we all need to follow our golden rules and be responsible in our use of AI. Aaron, I can’t thank you enough for joining us today. That was absolutely a super stimulating convo.

[00:49:53] Aaron Kwittken: Thank you for having me. You guys are a lot of fun. 

[00:49:57] Ad: This podcast is brought to you by Unbounce. Most AI marketing tools are kind of the same. That’s because they’re built on the same generic machine learning models, and they get you generic results in your marketing. Unbounce is different. It’s trained on data from billions of conversions, which means it gives you content and recommendations proven to get you more leads, sales and signups. If you’re a marketer or just someone doing marketing, you need Unbounce. You can build beautiful high converting landing pages for your ads and emails. Plus get AI copywriting and conversion optimization tools. All powered by more than a decade of marketing data, get the most conversions with Unbounce. Learn more at unbounce.com/unprompted.
