How can artificial intelligence change neuroscience? Richard talks to Lee Rowland about his work with NATO using neuroscience to understand opposing forces, and how the introduction of artificial intelligence modelling enabled the creation of synthetic audiences. With synthetic audiences, messaging can be tested rapidly without affecting the people the messages will ultimately be delivered to. This opens the door to an iterative process of feeding traditional test data into audience models and then testing new messages against those models for further tuning. AI can accelerate effective messaging!
Brandon Wehn (00:08):
Welcome to the show. This is Brandon Wehn and you are listening to the Understanding Consumer Neuroscience Podcast brought to you by the folks at CloudArmy. In this episode, Richard talks to Dr. Lee Rowland, applied psychologist and behavioral scientist, about his work using artificial intelligence technologies with the military to understand opposing forces and how these tools can improve your messaging.
Richard Campbell (00:37):
Hi, this is Richard Campbell, thanks for listening to Understanding Consumer Neuroscience. Today, my guest is Lee Rowland who's an applied psychologist and behavioral scientist. And Lee, I think you have to introduce yourself too, you've had an interesting career.
Lee Rowland (00:50):
Hi, Richard. Yeah, nice of you to say so; I think so too, so far, and it's been quite unexpected. I started out as an academic psychologist teaching in universities and thought that was going to be my career path, and somehow, after a few years of doing that, found myself in an area I never expected. I found myself working with the military in the area of psychological warfare.
Richard Campbell (01:19):
Oh, wow.
Lee Rowland (01:20):
Yeah, that was quite some time ago.
Richard Campbell (01:21):
And that goes way back; militaries have always tried to demoralize the enemy. I'm thinking about World War II, the leaflet drops and things like that. That's always existed.
Lee Rowland (01:35):
Yeah. Actually, and I didn't know this until I got into this area and started to read around it, it goes right back to the ancient Greeks and probably before that, right back to things like the Peloponnesian War, Alexander the Great, some of the wars of the classical world. They were really hot on using psychological techniques then to demoralize and deceive their enemy, to stoke fear in them through various forms of symbolism and what have you. So, it's an ancient practice but, as you say, it got more scientific in the 20th century, a bit in the First World War but particularly in the Second World War. By then, they were really drawing on the social and behavioral sciences to try and improve their ability to psychologically outwit the enemy.
Richard Campbell (02:25):
Yeah, to get them into a state where they're not going to resist as effectively, so to speak. And of course, for better or worse, we're recording this in mid-2025 and there's a bunch of conflicts going on, and you're certainly seeing a lot of messaging, let's say, I'll be generous on that, from various parties trying to confuse the effects of the conflict and who's responsible for what actions. They're using the modern tools, showing up in the social media spaces and the like.
Lee Rowland (03:00):
Absolutely, yeah. Russia, really, they've been experimenting with this for a long time, for decades, and then things have rapidly scaled up and sped up since the invasion of Ukraine. Russia was always interested in this, right back to the early days of propaganda, and during the Cold War as well Russia was particularly keen on and interested in psychological techniques. But since the invasion of Ukraine and, obviously, with the technology and the tools that social media and the internet now provide, they have massively expanded and improved their capabilities.
(03:47):
And in Ukraine, they've been using these techniques, well, not just in Ukraine but around the Ukraine war, deploying them on Western audiences, on Ukrainian soldiers and civilians, and on their own Russian soldiers and civilians as well, trying to stoke fear, division, uncertainty and confusion, and they have a whole doctrine around this that they have become very adept at using. But I don't want to imply that only the Russians are using this-
Richard Campbell (04:21):
No.
Lee Rowland (04:22):
... many other military organizations.
Richard Campbell (04:24):
Well, I've also got to consider the idea that Western militaries need to respond to that, but they can't respond in kind. If you're going to be the good guys with a moral structure around it, some of these manipulations are very unethical.
Lee Rowland (04:40):
Yeah. We are bound by rules, restrictions, regulations, protocols and what have you that limit the extent to which we can use these psychological techniques. It is curious in some ways: it's okay to match the enemy weapon for weapon if you are using kinetic operations and traditional forms of war fighting, bombs, missiles, aircraft and things like that. When I say we, I'm talking about NATO and Western militaries, so including the United States, the most powerful military in the world. We can match our enemies in kinetic warfare tank for tank, aircraft for aircraft, battalion for battalion.
(05:27):
But when it comes to the psychological techniques, using communication effectively as a weapon, we can't do that; we are bound by strict rules and ethics that limit what we can do. It's a curious state of affairs. Well, I suppose there's that old idea of the pen being mightier than the sword and, in this case, I suppose using the pen is seen as more unethical than the sword. We can't just say what we want, because there's a need to be honest, fair and transparent that NATO militaries have to adhere to.
Richard Campbell (06:01):
Sure. Because ultimately that stuff comes back to you anyway. How do you know you're approaching this the right way or doing the right thing if you don't have a set of ethical guidelines that you follow?
Lee Rowland (06:12):
Yeah. And also as you say, it comes back to you if we are seen as being dishonest. If you're using communication, it's important to be seen as telling the truth. But once you are seen to be not telling the truth, that undermines your credibility and, therefore, it's really important for NATO especially. I suppose there's a moral component here when it comes to communication that goes to the heart of human psychology about who we are and, if you're seen to violate that, that has negative consequences.
Richard Campbell (06:48):
Sure. And we have constraints like this in the marketing space too like consumer goods where you can't lie about what's in the box.
Lee Rowland (06:54):
No.
Richard Campbell (06:55):
You have to be honest about what the product is and what its capabilities are.
Lee Rowland (06:59):
Absolutely. There are rules and restrictions around that, there are marketing guidelines, aren't there, and there are various bodies and councils and whatever who will ... If you are shown to be saying something that's a lie within your marketing, then there are consequences for that.
Richard Campbell (07:13):
There are consequences, yeah.
Lee Rowland (07:15):
They can pull the ad, for instance.
Richard Campbell (07:17):
Yeah, and fines and so on. And again, if you lose the trust of the customer, you really hurt your product. So, I don't want to make comparisons between dry goods on a shelf and kinetic warfare, but there's a case for honesty in all of these systems, there are definite benefits to that, but it does lead to somewhat of a constraint on what you're able to do and how you're able to respond.
Lee Rowland (07:43):
Yeah. This is quite new, certainly the age of social media-generated psychological warfare and what the military sometimes calls information operations. We are still really learning how to do it properly. Social media shook everything up, including in the marketing world, and it took a while for companies in the marketing field to catch up and work out how to use it properly.
(08:13):
Obviously, the younger people who grew up in that age became very proficient at using it, they led the charge, but then various marketing companies and, after that, other organizations and governments really struggled initially to learn how to deal with it. And it has been 10 years, a steep learning curve, where they've been trying to get on top of it, and I think they've got there now, but that's all changing with regard to AI.
Richard Campbell (08:46):
Sure. So, I've got to think one of the challenges the military has is that you can't test on the audience the same way we can in consumer marketing, where we will literally run different experiments with groups of folks to evaluate a message. If you're trying to counter misinformation or psychological warfare, how do you even do that in the military circumstance where you just don't have easy access to the enemy? Every attempt to gather data is difficult and expensive.
Lee Rowland (09:15):
Yeah. It has been done in the past. And in fact, before we go on to talk about the way it's being done at the moment, I just want to take you back a bit, say, 10 years or so, and think particularly about neuroscience and neuroscientific testing during the particularly intensive period of radicalization in the West from 2010 onwards. Then, various research projects were looking at testing radicalization videos, or the converse of that, anti-radicalization videos, on their target audiences, and they were using EEG testing, actually, to see-
Richard Campbell (09:59):
Oh, how?
Lee Rowland (10:00):
Yeah, how certain videos or certain messaging was affecting people emotionally, emotional appeals, and their cognitive responses to these videos as well in terms of their attention and their memory. But you could only do that on very small samples, around 20 people, and that was expensive to run. But projects were being run like that where, for example, the Department of Defense in the US was testing certain videos that they would get from the internet, videos designed for recruiting people into radicalized groups, and seeing how and why they were being effective and how they could counter that with counter-messaging.
Richard Campbell (10:40):
Interesting.
Lee Rowland (10:41):
But as you say, you could only do that on very small numbers and infrequently, it was very expensive to do.
Richard Campbell (10:46):
Sure. And each time you do, you're changing their behavior anyway whether it was effective or not so now the equation's different.
Lee Rowland (10:53):
Yeah, yeah. So, the question is how can we do that. And also, there's so much messaging, so many communications and so many messages flying around on the internet at the moment that it's quite a chaotic, confusing environment. It's not like you're just testing something like, say, a single video; we're talking about countless messages. And with the advent of things like TikTok and other video sharing apps, the amount of comms reaching the various audiences of interest to Western militaries has proliferated, making it extremely difficult, and it's also moving so quickly, it's such a fast-paced, changing environment.
(11:48):
So, one area that they're exploring is using synthetic audiences to model populations or groups which I can explain to you a bit more about how they do that if you're interested.
Richard Campbell (12:01):
Yeah, absolutely, right.
Lee Rowland (12:02):
And then they can test messages on those particular audiences.
Richard Campbell (12:04):
So, this is testing on a synthetic audience, not actual people?
Lee Rowland (12:09):
Not real people.
Richard Campbell (12:10):
So, talk to me about how you build a synthetic audience then. This seems fascinating to me.
Lee Rowland (12:15):
There are lots of approaches you can take but the typical one or the standard way at the moment is to take an existing data set. It could be a survey data set or any other existing data you can get about the group or a population including market research data or other data available on the internet, sometimes just statistical information.
Richard Campbell (12:37):
Yeah, any publicly available information might be interesting to load into a model like that.
Lee Rowland (12:41):
Anything, and there's plenty of it out there. For instance, there's a quite well known survey instrument called the World Values Survey, which is conducted every few years in most countries around the world. It asks hundreds of questions about people's values, moral values, things like that, and it's a great one for compiling a sense of where an audience population stands on particular issues.
Richard Campbell (13:07):
Mm-hmm. I got to imagine it's things like how you feel about recycling and climate change and other relevant topics for the day.
Lee Rowland (13:15):
Yeah, and some more spicy topics like the death penalty and things like that. You can compile that information, and the typical thing you'll do is generate a bunch of personas, artificial people, if you like, within a computer system, which embody the qualities and characteristics of an audience. And you can set the parameters: the age range you want, the mixture or percentage of males, females and others within your target audience, certain characteristics such as what professions they have, and then their likes and dislikes and their opinions on this, that and the other.
Richard Campbell (14:02):
All based on these various survey sources that you've gathered.
Lee Rowland (14:06):
Oh, yeah, all based on existing data. Sometimes you can go out and collect that data from scratch in advance and then build your synthetic audience from that or you can combine existing data sets with newly acquired data sets.
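To make the persona-building step Lee describes more concrete, here is a minimal Python sketch of sampling synthetic personas whose demographics and stated values are drawn from an existing survey file. The column names, the wvs_subset.csv file and the parameter defaults are illustrative assumptions for this sketch, not a real dataset schema or a tool used by the teams discussed here.

```python
# Minimal sketch of building a synthetic audience from an existing survey file.
# The column names and "wvs_subset.csv" are illustrative assumptions, not a
# real dataset schema.
import csv
import random
from dataclasses import dataclass

@dataclass
class Persona:
    age: int
    gender: str
    profession: str
    values: dict  # e.g. {"death_penalty": "oppose", "recycling": "support"}

def build_audience(survey_rows, n=500, age_range=(18, 45), gender_mix=None):
    """Sample personas whose demographics and answers mirror the survey data."""
    gender_mix = gender_mix or {"female": 0.5, "male": 0.48, "other": 0.02}
    pool = [r for r in survey_rows
            if age_range[0] <= int(r["age"]) <= age_range[1]]
    audience = []
    for _ in range(n):
        gender = random.choices(list(gender_mix),
                                weights=list(gender_mix.values()))[0]
        # Draw a real respondent with a matching gender and copy their answers,
        # so each persona's opinions stay grounded in observed data.
        candidates = [r for r in pool if r["gender"] == gender] or pool
        row = random.choice(candidates)
        audience.append(Persona(
            age=int(row["age"]),
            gender=gender,
            profession=row["profession"],
            values={k: v for k, v in row.items()
                    if k not in ("age", "gender", "profession")},
        ))
    return audience

with open("wvs_subset.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))
audience = build_audience(rows, n=500, age_range=(18, 45))
```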
Richard Campbell (14:23):
I've got to think this is tougher for the military because you can't really survey the enemy. That's a tricky thing to do.
Lee Rowland (14:28):
No. Not the enemy directly but you can-
Richard Campbell (14:33):
Yeah, that's a tough one.
Lee Rowland (14:33):
Exactly. Although, with the increasing use of social media, it does make it easier to collect large-scale digital data, and you can also survey people. So, maybe not Russian soldiers, for instance, but maybe people in the Russian population.
Richard Campbell (14:57):
Yeah, but those Russian soldiers are also using social media, so I wouldn't be surprised if you could grab that data.
Lee Rowland (15:02):
You can. You have to be aware of its limitations, it can't tell you everything you want, but it's amazing what you can pull. Through triangulation of lots of different sources, you can build a population or an audience group from that.
Richard Campbell (15:19):
And so, you're basically loading this into a large language model of some kind, and you now have a tool that'll answer your questions as those personas.
Lee Rowland (15:28):
Exactly. And then the great thing is you can test it as well because, if you've got existing survey data, you can train it on a subset of that data and retain some questions for the testing phase, questions that you know the answers to, so you'll know how a particular audience group responded. And I'm not talking about an enemy group; let's say the Ukrainian population, who've answered the World Values Survey. You can build an audience group from the training set and then keep, say, two or three questions back as your testing questions.
Richard Campbell (16:01):
Yeah, this is classical machine learning model fitting, right?
Lee Rowland (16:05):
That's exactly right.
Richard Campbell (16:05):
Where you have a training set and then you have an evaluation set as well.
Lee Rowland (16:10):
And then you run those evaluation questions and see how accurate the model is, whether the synthetic population is answering those questions close to or the same as the real population that we've got. But once you've validated that and you're confident that you've got an accurate model, you can then begin to ask it all sorts of questions that it's not seen before, that it's not been exposed to, new questions based upon your new communication strategies or intervention strategies. You can say: if we were to drop these leaflets saying this, or if we were to run this online communications campaign with the following messages. And it doesn't have to be limited to just words; you can test certain videos.
Richard Campbell (16:55):
Yeah, for sure.
Lee Rowland (16:56):
For instance, how does this synthetic audience react to this specific video? Is it having the kinds of positive effects that we want? Is it believable, credible? Is it motivating? Is it likely to change their behavior, change their intentions? And if you run it on the synthetic audience and the answers are what you want to see and hear, then you can roll it out in the real world or online.
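A rough sketch of the two steps Lee just walked through: validating the synthetic audience against held-out survey questions, then asking it to rate a new message. The ask_persona(persona, prompt) callable is a hypothetical wrapper around whatever language model hosts the personas, and the question names and rating dimensions are assumptions for illustration only.

```python
# Sketch of validating a synthetic audience on held-out survey questions, then
# testing a new message on it. ask_persona(persona, prompt) is a hypothetical
# wrapper around whatever language model hosts the personas.
from collections import Counter

HOLDOUT_QUESTIONS = ["q_trust_in_media", "q_death_penalty", "q_recycling"]

def validate(audience, real_answers, ask_persona):
    """Compare synthetic answer distributions with real survey answers (0..1 overlap)."""
    scores = {}
    for q in HOLDOUT_QUESTIONS:
        synthetic = Counter(ask_persona(p, q) for p in audience)
        real = Counter(real_answers[q])          # list of real respondents' answers
        total = sum(real.values())
        overlap = sum(min(synthetic.get(answer, 0) / len(audience), count / total)
                      for answer, count in real.items())
        scores[q] = overlap                      # 1.0 means identical distributions
    return scores

def test_message(audience, message, ask_persona):
    """Ask each persona to rate a candidate message on a few dimensions (1-5)."""
    dims = ["believable", "motivating", "likely to change your behavior"]
    ratings = {d: [] for d in dims}
    for p in audience:
        for d in dims:
            prompt = f"On a scale of 1 to 5, how {d} is this message to you? {message}"
            ratings[d].append(int(ask_persona(p, prompt)))
    return {d: sum(vals) / len(vals) for d, vals in ratings.items()}
```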
Richard Campbell (17:27):
Right. And these are similar kinds of constraints to when we think about consumer goods and, actually, I would think more broadly things like social good in a society. Again, your climate change, recycling, being a better citizen, and how we create messages that will help people, quote-unquote, do the right thing.
Lee Rowland (17:49):
Yeah, yeah. As you say, getting people to be better citizens, finding ways to, well, combat the threat of disinformation itself. How do we create citizens that are ... Well, not create citizens. How do we educate and inspire citizens to be more aware of the dangers of misinformation?
Richard Campbell (18:09):
Yeah. It just opens the doors to a bunch of new ways to help, say, a government get their citizens to do constructive things in society to support those big motions. Misinformation's got to be a big one there, just helping people think better.
Lee Rowland (18:26):
Yeah, and that's really needed; a lot of people ... The social media landscape has really, in many ways, confused people. We could once broadly rely on communications, trust one another and, to an extent, trust our institutions to tell the truth and act in our best interests, but that has been undermined a lot since the rise of social media. Somehow we need to explore ways of rebuilding that trust and faith in these institutions, and we need to re-educate people in how to do that.
Richard Campbell (19:03):
Yeah, and deal with these new tools. We've always had a problem with disinformation via the media tools of the day. Going all the way back to Gutenberg's printing press, we even had problems then. It's just these new tools are very pervasive and are good at creating bubbles around folks so that they think they're seeing the whole truth when they're only seeing a particularly shaped one.
Lee Rowland (19:30):
Yeah, it is an age-old problem. And with the advent of AI as well, we're discovering new ways to exploit that, exploit the human brain and weaponize information. One of the great things but also one of the dangerous things about AI is how quickly it learns and adapts. It's being put to use now to learn how to better exploit the neurological and psychological tendencies of the human brain by working out which messages can be more effective, and it can generate new permutations of messages so quickly. And then with synthetic audiences as well ...
Richard Campbell (20:12):
Test just as fast.
Lee Rowland (20:13):
Yeah, find out which ones are going to be most effective. But it also can test the effects in the real world because it can send out lots of different variations of messages based upon different psychological profiling. Going back to thinking about what Cambridge Analytica did back in-
Richard Campbell (20:30):
Yeah, absolutely.
Lee Rowland (20:31):
... yeah, 2016, one of the hallmarks of their approach was to generate psychological profiles. So, they would segment audiences into smaller and smaller groups.
Richard Campbell (20:43):
Yeah, even down, as I remember from the readings then, down to individuals.
Lee Rowland (20:48):
Down to individuals.
Richard Campbell (20:49):
And the big thing that the individual didn't know when they saw that, quote-unquote, ad is that they were the only ones seeing that ad.
Lee Rowland (20:55):
Yeah.
Richard Campbell (20:56):
That was the inherent deception there. It looked like it was a banner on a webpage and it was but only for them.
Lee Rowland (21:04):
Yeah, exactly. So, you use the data about the psychological profile of a group or an individual to tailor a message specifically, but they don't have to get it right the first time; they can try it. If that doesn't work, try another one, another variation, another permutation, until it is right, and then also measure. Because you would predict that these messages should have certain effects, even if it's just in cyberspace: whether people like, share or comment, how they react to messages. And if you're not getting the right reaction, you can modify and adjust it.
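The measure-and-adjust loop being described can be pictured as a simple selection routine: try variants, observe the reaction, keep whatever performs best, repeat. Everything in this sketch (generate_variants, get_engagement, the engagement score itself) is a hypothetical placeholder, purely to illustrate the shape of the loop rather than any specific operator's system.

```python
# Illustrative sketch of the measure-and-adjust loop: try message variants,
# observe engagement, keep whatever performs best and iterate again.
# generate_variants() and get_engagement() are hypothetical placeholders.
def refine_message(seed_message, generate_variants, get_engagement, rounds=3):
    best = seed_message
    best_score = get_engagement(best)         # e.g. likes + shares + comments
    for _ in range(rounds):
        for variant in generate_variants(best):
            score = get_engagement(variant)
            if score > best_score:            # keep only variants that do better
                best, best_score = variant, score
    return best, best_score
```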
Richard Campbell (21:40):
Modify the ... Yeah.
Lee Rowland (21:40):
Yeah.
Richard Campbell (21:40):
When you think about Cambridge Analytica and the like, a lot of people are aware of the negatives of using these tools, but I think you just painted the most positive picture I've seen of using these tools in a constructive way, to actually help bring people into clearer thought rather than more deceived.
Lee Rowland (21:57):
Yeah, how did I do that? [inaudible 00:21:59]
Richard Campbell (22:01):
Just the idea that we could build a synthetic audience to do the testing to help teach people to be more resilient, to be more resistant to these kinds of deceptions. I don't know, we're in a world right now, Lee, where a lot of folks are afraid of a lot of things and the fact that you've got a story here about using these tools in a way that can really help society, I'm feeling relieved at the moment. It's like, "Wow, that's a good thing that these tools aren't just being used to deceive folks but they can actually help them to think more clearly."
Lee Rowland (22:34):
Absolutely. So, we were talking earlier about communication and you talked about the Gutenberg press; the first book that it printed and widely distributed was the Bible, and then you mentioned the second one was the book about witch hunts.
Richard Campbell (22:51):
Yes. And it literally was part of the stimulus of the great witch hunts that happened in the 1500s. It was arguably a wildly more popular book, and it was a complete deception.
Lee Rowland (23:03):
Yeah. And then I'd said, didn't I, that the next great application and societally transformational use of that technology was science, to develop the first science journals and the first scientific articles that were able to be spread to scientific communities throughout Europe. So, communication in and of itself, and the tools used to distribute communication, are neither necessarily good nor bad; it really depends on the intent of the communicator, the person who is using them. Communication is obviously humanity's perhaps greatest-
Richard Campbell (23:45):
It's our superpower. If you talk about what's made society, it's our ability to communicate with each other.
Lee Rowland (23:52):
Yeah. And communication has effects. It has effects first on our minds; it changes the way we think, the way that we feel and, ultimately, the way that we behave. All communication does that, that's what it does. It is basically a behavioral change technology working through psychology. And the situation that we've got at the moment ... Obviously, it's used in many, many positive ways, but in this psychological warfare situation that's going on with disinformation, it's predominantly being used in ways that are having very negative, deleterious effects, well, throughout the world but particularly on Western audiences, because our enemies are trying to use this technology to undermine trust and what have you in our institutions, and it's having profound social effects.
(24:48):
But at the same time, it presents enormous opportunity for us as well because we can use the same technology and many of the same techniques, provided we do so within the right ethical constraints, to fight back and actually potentially restore trust, build social trust and strength within our societies again. And so, we're in an interesting situation, an interesting place at the moment, where we need very, very quickly to learn how to use AI and synthetic audiences as well as psychology and neuroscience. It's all part of the same bundle, it's all-
Richard Campbell (25:32):
Yeah.
Lee Rowland (25:33):
Yeah.
Richard Campbell (25:33):
No, it is an interesting grouping of tools; it speaks to another level of utilizing neuroscience for the benefit of society. And isn't it typical that it comes from the military, the same way that the military matured aircraft to create the commercial airlines that we have today. The particular problems the military had in this space of psychological conflict are generating a set of tools that we could now bring to consumers and really make a difference.
Lee Rowland (26:04):
Well, actually, yeah. No, it's a good point. The military have been there at the start of many of these technologies; they've always been interested. In fact, the military's interest in psychology goes back to its birth. The military were some of the first people to develop intelligence testing during the First World War. Many of the communication models and communication techniques used by the modern communications industry and marketing were developed during the Second World War through military funding and military interest, one of the classic ones being the Yale communication model, developed at Yale University by a psychologist called Carl Hovland and his team.
(26:46):
And basically, a lot of the modern idea of marketing communications comes from that work at Yale, the idea of there being a message, a messenger, a channel and a receiver. That was all developed during the Second World War for the propaganda effort within the US at the time. They funded a lot of that work and it transformed the way we understood communications in the wake of that. We still use a lot of it today. And the military have been interested in neuroscience as well since its inception, in programs many of our listeners will probably be aware of, things like Project MKUltra, the CIA mind control experiments-
Richard Campbell (27:41):
Famously, yeah.
Lee Rowland (27:42):
Yeah. They used brainwashing, psychological manipulation, drugs, hypnosis and other psychological techniques, and they used electroconvulsive therapy, ECT, as well, to experiment on individuals, military personnel as well as civilians and people in psychiatric hospitals. But this was a military program run by the CIA to better understand the human brain and how it could be manipulated and exploited during warfare, for military interest. That's one of the most infamous.
Richard Campbell (28:15):
Well, yeah. They've made movies out of it now too, right?
Lee Rowland (28:18):
Yeah. Things like The Bourne Identity, the brainwashing, neurological manipulation that went on there to transform him from a civilian into a lethal weapon.
Richard Campbell (28:34):
Into a Matt Damon.
Lee Rowland (28:37):
[inaudible 00:28:35]. Yeah, yeah, yeah.
Richard Campbell (28:38):
But you bring us all the way back to the beginning of this conversation and the ethical aspects of this. The problem is these tools are very powerful, and you need a moral construct around using them or you end up doing more harm than good.
Lee Rowland (28:51):
Yeah. And unfortunately, many of those out there using them against us are not bound by those moral constructs and, increasingly, bots are developing these messages themselves. So Russia, for instance, funds whole bot farms. They're staffed by people using multiple phones and channels to disseminate information and messages, but quite often they're not actually designing the communications themselves; AI is doing it, and that obviously doesn't have any moral constraint.
(29:31):
And so, they're just putting these systems to work. They're less interested in what the artificial intelligences themselves are doing than in the effects they're having; the way they achieve those effects is by the by. What they're particularly interested in is whether it's having noticeable, measurable effects on the audience that they're trying to target, and they're bound by no moral code whatsoever.
Richard Campbell (30:02):
No, no. And the good thing about being the good guys and being bound by those things is that you do have good intent and you are trying to help people, and that ultimately comes home to roost. You don't want to end up in the same bin as Cambridge Analytica.
Lee Rowland (30:19):
No. That changed the game and everything and tarnished the whole industry.
Richard Campbell (30:24):
Sure.
Lee Rowland (30:24):
It's given that area of work a bad name. The whole idea of psychological profiling and psychological targeting has been tarnished by the work of Cambridge Analytica. But the thing is, any communicator should do that as a matter of course, really. It's not that there's anything inherently wrong with psychological profiling or-
Richard Campbell (30:51):
Well, and arguably every communicator is doing it, it's just a question of level of effectiveness.
Lee Rowland (30:54):
Yeah. Well, if you're any good, you should be doing it. Even if you're just a natural, just an individual talking to somebody else, shaping your communication to what you understand about that person, that person's psychological profile, and delivering it in a way that will be most impactful and resonant to them is a skill. It's one of the first tenets of good communication, isn't it: know your audience.
Richard Campbell (31:21):
Yeah, know your audience.
Lee Rowland (31:22):
Yeah.
Richard Campbell (31:23):
And you've just introduced me to a whole new set of thinking on the kind of tools we could use to get to know our audience by modeling them in software and then being able to test various messages against that model before putting them in front of actual people.
Lee Rowland (31:40):
But the way I've always thought about it is, why wouldn't you do that? Is it better to just guess? So, it's okay if an individual giving a talk, say, at a conference tries to understand a bit about their audience first, tailors their message and thinks, well, how am I going to motivate my audience to, let's say, take action on this thing that I'm particularly passionate about; that's perfectly fine. But if you bring technology into it, or you suddenly start to see an institution taking the same approach to an audience and perhaps using technology, science and data, suddenly that's deemed to be unethical in some way. And yet, why would you compromise the quality of your communication by not doing the work, the scientific work, the data collection and the modeling?
(32:36):
So, what really matters is not so much, I don't think, the techniques and the approach that's used; it's the intent behind it. If you're doing it for a good purpose, then why not use all of the tools, techniques and what have you at your disposal? And yet the moment you mention you're using things like psychological profiling, psychological techniques, people suddenly get wary and take a step back, because it just seems to have this connotation, which I think was born in part from the Cambridge Analytica scandal, and the idea that the military and other institutions are doing that seems to make some people very uneasy.
Richard Campbell (33:14):
No, and I totally understand that. We've always had a bit of a battle in the neuroscience space with the creepiness factor and the only answer is to lead with your intent.
Lee Rowland (33:28):
Yeah. There is a huge creepiness factor to it, it's like it's associated with being a master manipulator or a trickster or something like that.
Richard Campbell (33:38):
Yeah. Material out of fiction, right? We've got books that tell us stories of this, the Mata Hari-type tales, but they are stories. In the end, we're people trying to be effective and trying to help folks, and we use every tool we can to be more effective at that. And this just introduces a really interesting tool, the possibility that we could build these models and just make more effective communication.
Lee Rowland (34:03):
I wonder why it is; I've often wondered about this. Is there something creepy about wanting to understand what's going on inside somebody else's head and their emotions, such that the moment you do that, you're immediately seen as being exploitative in some way?
Richard Campbell (34:21):
Right. Well, I wouldn't discount the value of drama and storytelling in that space, but we call them dramatic stories for a reason. The story's author always includes the conflicts because that's what makes the story exciting. The dull reality of crafting a good message, testing it well and actually helping people doesn't make a story nearly as popular as the dramatic ones.
Lee Rowland (34:45):
Yeah. And is it that we look more favorably upon people who just act, rather than people who are thinking ahead about what would move, inspire or change somebody else? We want people to be authentic, not so much concerned with what effect they're having in the world as with being a real, authentic person with inner integrity. And that's what we tend to admire: that they are just putting across their message because they really believe in it and they're really passionate about it.
Richard Campbell (35:17):
Well, I'm thinking you're opening a whole other topic here, Lee, of authenticity, which I think is a huge thing to talk about in this particular era. One of the issues with using these tools to craft our messages is that it somehow removes authenticity. I don't know that I subscribe to that. I think your intent is your authenticity; when you move away from your real intent, then you're being inauthentic.
Lee Rowland (35:42):
Yeah.
Richard Campbell (35:43):
Lee, this is a fascinating topic. Now I want to make things, I want to get building and trying and experimenting with these models, with this idea that I could validate messages against audiences without having to do the same level of work, so that the first time my message went out there, it would be well-tuned. It's exciting.
Lee Rowland (36:04):
It's an incredibly effective tool and a great, not just time saver, but scaler as well. It enables you to do so much testing. So much communication fails because it's not got right; people don't do the work upfront because it's so time-consuming.
Richard Campbell (36:22):
Sure. And it's why companies like CloudArmy even exist: we do that testing for you on a subset of the audience so that you can reach the larger audience better. But the idea that we could iterate more in software first and then validate with those tools, or take those tests that we've already done and incorporate them into the model, just speaks to amplifying this entire process.
Lee Rowland (36:40):
One of the great things you can do there is use AI initially to model your audience, develop messages, do some preliminary testing on them and whittle them down. The great thing is there are no bounds: you can generate dozens or hundreds of different messages, test them on these synthetic samples, find out which ones are likely to be most effective and then, using software like CloudArmy's testing platform, test the three or five that you think are most effective. I still think it's absolutely vital to test on real audiences, real brains.
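As a sketch of that whittling-down workflow, the routine below generates a pool of candidate messages, scores each one on the synthetic audience, and passes only the top few forward for real-audience testing. generate_candidates and score_on_audience are assumed helper functions standing in for whatever generation and synthetic-testing steps are actually used; this is not CloudArmy's API.

```python
# Illustrative pipeline: screen many candidate messages on a synthetic audience
# and keep only the strongest few for real-world testing. generate_candidates()
# and score_on_audience() are assumed helpers, not a real API.
def screen_candidates(audience, generate_candidates, score_on_audience,
                      n_candidates=100, keep_top=5):
    candidates = generate_candidates(n_candidates)
    scored = []
    for msg in candidates:
        ratings = score_on_audience(audience, msg)   # e.g. believability, motivation
        scored.append((sum(ratings.values()) / len(ratings), msg))
    scored.sort(reverse=True)
    # Only these few go forward to testing with real people (EEG, panels, etc.).
    return [msg for _, msg in scored[:keep_top]]
```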
Richard Campbell (37:19):
Sure. Well, and you see different places for it. It's those kinds of tests that can build the data set for the synthetic audience, but then it's also validated with further testing using those tools.
Lee Rowland (37:30):
Yeah. Because what AI can't do in the way that real testing can is context; the AI is not out there in the real world, in a real messy, chaotic environment. So, it's all very well testing some messages in a very controlled setting, but that's the same sort of problem, isn't it, as with laboratory experiments. Quite often in behavioral science you find that experiments that work well in a lab just don't work when you try to scale them up into the real world.
(37:58):
And I still think AI gives us a whole new suite of very effective tools to do some great things with both designing and testing messages, but we still need to actually test them on real audiences, using the technologies and methods we have available, to see what happens when you actually put them out there in the real world. Because people are not just receiving a single isolated message in an inert environment; these messages are being transmitted and received within chaotic, messy environments where there's lots else going on, not just in the information space but in the social space as well.
(38:40):
So, I think it's really vital, but what the AI can do is help us test a lot more and get more accurate more quickly before we actually test for real. Because one of the things you don't know: if you do some research on your audience and generate some messages that you want to test, you might come up with some good messages for a product or a brand, and they might be good, but what you don't know is all the things that you didn't test because you just didn't have the time. What you can do with these AI synthetic samples is test lots and lots of different variations and permutations, from different areas as well, with different neurological or psychological effects, or hypothesized effects, that you just wouldn't be able to test, or wouldn't even be bothered to, because it would be outside of your scope, your remit.
Richard Campbell (39:31):
Yeah.
Lee Rowland (39:31):
You wouldn't [inaudible 00:39:32] ... So, what you can do with AI is test things that you just wouldn't normally have the opportunity to test, things that you don't even have any idea whether they'd work, and you can try some really left-field ideas.
Richard Campbell (39:45):
Yeah. You can take some flyers, because there's just so little consequence to testing on a synthetic audience, and really push the boundaries and see how the software responds before you put it in front of people.
Lee Rowland (39:56):
Yeah, exactly. It costs nothing and there's no real consequence of failure.
Richard Campbell (40:03):
Yeah, you don't have any impact on people because there was no people involved.
Lee Rowland (40:06):
Exactly.
Richard Campbell (40:06):
Right?
Lee Rowland (40:07):
Yeah.
Richard Campbell (40:08):
You can ask far more inappropriate questions of software than you ever could of people, and who knows where that might lead.
Lee Rowland (40:14):
You can do some really unethical stuff on AI synthetic audiences and then find out ... And that potentially gives you some insight as well. If you can generate and test some, perhaps, unethical messages and find out whether they are likely to work on a particular audience, then, even though you can't actually go and deliver those messages, it can give you some insight into your audience, a space to work in where you think, well, how could we adapt this. We know what effects we want to achieve and which particular messages are likely to be resonant. What is it about those messages, why are they resonant, and how can we adapt them in a way that's more-
Richard Campbell (41:02):
That fits with our ethical model, yeah.
Lee Rowland (41:04):
... acceptable, that's more permissible, yeah.
Richard Campbell (41:05):
I appreciate that.
Lee Rowland (41:06):
So, I think the scope there, the opportunities that it opens up are vast.
Richard Campbell (41:10):
Yeah, that's fairly powerful. Lee, it's so much fun to talk to you about this. It's really fascinating stuff, and it does seem like the next generation in this neuroscience space is this ability to iterate over datasets for longer so that, before you get out to the world, you have a more tuned message. I really appreciate your time on this.
Lee Rowland (41:28):
Iteration is key. I think that's the really exciting development here.
Richard Campbell (41:31):
Yeah, absolutely.
Lee Rowland (41:32):
The ability to test.
Richard Campbell (41:33):
And the faster we iterate, the better we can be here.
Lee Rowland (41:35):
Yeah, with minimal consequences, in order to hone it and get it right, because that's always been a bit of a problem. Communication's always been a bit hit and miss, and I think this offers the opportunity to get it right much more quickly. And not even just more quickly; it gives us the opportunity to get things right in ways that we didn't even know were possible.
Richard Campbell (41:58):
Yeah, it's fascinating. Lee Rowland, thanks so much for coming on the show.
Lee Rowland (42:01):
Thanks, Richard. I really enjoyed it.
Richard Campbell (42:03):
And we'll talk to you next time on Understanding Consumer Neuroscience.