Video: CSPS Data Demo Week: Combatting Misinformation and Disinformation with Digital Public Square

Description

The Digital Public Square, a not-for-profit organization, uses approaches like gamification, digital storytelling and crowdsourcing, in Canada and internationally, to build empathy and trust and to engage communities. Tune in for a demonstration of the It's Contagious project by CEO and co-founder Farhaan Ladhani of the Digital Public Square. The presentation is followed by a rich Q&A session on the implications of such technologies for organizations across the federal public service.

Duration: 00:57:38
Published: April 27, 2021
Code: DDN2-V09

Event: CSPS Data Demo Week: Combatting Misinformation and Disinformation with Digital Public Square



Transcript

Transcript: CSPS Data Demo Week: Combatting Misinformation and Disinformation with Digital Public Square

[The animated white Canada School of Public Service logo appears on a purple background. Its pages turn, opening it like a book. A maple leaf appears in the middle of the book, which also resembles a flag with curvy lines beneath. Text beside it reads: Webcast | Webdiffusion.]

[It fades out, replaced by two title screens side by side in English and French. At the top, it shows three green maple leaves, each made of different textures. Text on screen reads:

CSPS Data Demo Week

Combatting Misinformation and Disinformation with Digital Public Square

GC data community]

[It fades out, replaced by a Zoom video call. The video window is filled with a man with glasses, a neat goatee and grey Henley. He sits in a home library.]

Taki Sarantakis (Taki): Good morning, good afternoon, and good evening to public servants across Canada and at embassies across the world. My name is Taki Sarantakis. I am the President of the Canada School of Public Service, and welcome to our second instalment of the CSPS Data Demonstration Week. Yesterday, we kicked off the week with artificial intelligence and the law, and today, we are going to be talking about one of the big issues of our time, which is disinformation and public trust. With us today, we have Farhaan Ladhani, who is the co‑founder and CEO of a really cool little not‑for‑profit that gestated and is gestating at the University of Toronto called Digital Public Square.

[A purple text box in the bottom left corner identifies him: "Taki Sarantakis, Canada School of Public Service." As Taki speaks, a second video frame appears, showing a man with tan skin, grey hair and bright yellow glasses. Behind him, a logo reads "Digital Public Square." Beside the lettering, multiple small squares with diagonal shading sit around a small central square.]

Taki: And he'll walk you through what he's been doing in the realm of misinformation and fake news, so to speak. But first, just a little bit on technology. Technology has been something that we've had a bit of a charged relationship with as human beings, and we go through very clear patterns.

[Taki's video panel fills the screen.]

Taki: Technology is our salvation and then technology is our demise. Then it's followed by technology is our salvation, and it's followed by technology is our demise. Just a few short years ago, the digital giants, they could do no wrong and right now it seems like the digital giants in the public domain can do no right. Obviously, the answer for those is neither. Technology is a tool and like all tools, it depends on how you use your tool. If you use the tool for things that are intrinsically bad for society, that's what technology is. If you use your tool for things that are kind of intrinsically good for society, then that's what your tool is in that context. Today, we're going to see one of those tools used overtly for the public good. Farhaan, my friend, welcome.

[Farhaan's panel reappears. His audio is slightly muffled.]

Farhaan Ladhani (Farhaan): Thank you, Taki. I'm really grateful to be here today.

Taki: We're very happy to have you. Are those your real glasses?

[Farhaan takes off the yellow glasses and waves them around a little, distorting his image in the lenses.]

Farhaan: They are. They're not fake, as you can see. They are not virtual, although I've been asked that before.

[Farhaan puts his glasses back on. He gestures to the wall behind him.]

Farhaan: This is a real wall. Also not fake. Surprisingly hard to tell the difference between the two nowadays sometimes.

[Farhaan chuckles.]

Taki: Yes. Exactly. Do you need the glasses to read or is this like your personal statement to the world?

Farhaan: No. This is just how I represent myself to my children and I feel like I should carry it around with me everywhere I go. It makes them laugh and so, hopefully it'll make others too.

Taki: Awesome, my friend. You have about 1000 people on the line. Start walking us through a little bit of technology for good.

[Farhaan's video panel fills the screen.]

Farhaan: Thank you, Taki, and thanks to everyone for joining us today. I'm really grateful for the opportunity to speak with all of you and to share a little bit about what DPS has been learning. Before I tell you a little bit more about Digital Public Square and the platform we've created, I'd like to set the stage with a little context.

[A purple text box in the bottom left corner identifies him: "Farhaan Ladhani, Digital Public Square."]

Farhaan: I saw a great article in the Evening Standard a few weeks ago. It was a review of a recipe that was making the rounds on TikTok. You take lots of tomatoes. You add a brick of feta cheese. You pour in what's probably an unacceptable amount of olive oil, garlic, and some spices. You bake it like Jamie Oliver would for 30 minutes. You take it out, add some pasta and voila, dinner. We're close to lunchtime and so maybe people are hungry. The dish catches on like wildfire. And apparently when it was first popularized by a Finnish blogger, Finland ran out of feta cheese. Stop to think about that. There was a run on grocery stores for feta cheese because of a recipe that showed up online, was clicked, shared, and commented on.

The problem, Taki, is that the dish is actually not very good. When American foodie Mackenzie Smith published her version of the recipe on TikTok, it blasted off into outer space. My fridge is so full of feta cheese right now, I can't fit the salad. My favourite part of the story in the Evening Standard was the journalist's review. I'm going to quote this because it's just so good. They said, "the recipe was not a dependable failsafe. It wasn't discovered at a dinner party, stolen from the host. It wasn't passed down from someone's grandparents. The recipe appealed perfectly to what the author describes as the get‑rich‑quick switch in all of our brains. It's the promise of instant satisfaction. The recipe is of the Internet, from social media—a place where our brain is conditioned for instant gratification." What does any of this have to do with misinformation and disinformation? Turns out, a lot. Many of you deal in a world of complexity. You're doing your level best to provide high‑quality information to everyone in a timely way.

In addition to the way we as human beings are wired, you're up against a few compounding challenges that frustrate this wiring. First, the very structure of the online world has changed dramatically in the three decades since the birth of the World Wide Web. The promise over 30 years ago was that the Internet would connect us all, every one of us, in a borderless world, in a free and open exchange of ideas. Today, the fissures in this model are clearly visible to everyone. There are the fissures exploited by both state and non‑state actors, who have been exposing the deficiencies in the open and borderless model and building out a competing vision for a bordered Internet. This presents a whole host of adversarial challenges that we need to successfully navigate. Second, there are the fissures created by the very business models, the attention economy, that drive much of the Internet today and the sale of feta cheese. Those business models are principally focused on generating clicks to sell ads or shaping our information diet. As a result, today's Internet makes it easy to find digital junk food. And the third, well, it's us. High‑sugar, emotion‑generating, and polarising material is abundant and oftentimes crowds out attention to the high‑quality content that all of you good people are trying to create.

You can find the high‑quality goods, but you typically need to go digging and it doesn't automatically rank high on your feed. Let me demonstrate with a simple example that I know every single one of us is guilty of. I know that I certainly am. How many of us end up on page five or six of our Google search—doesn't matter what we're looking for. Turns out study after study after study highlights how so many of us click the first organic link that shows up. In some cases, that's three out of ten of us and less than three in a hundred of us click the tenth link. It turns out the best place to hide a dead body is on page two of those search results.

So, in this world of information abundance and instant gratification, where we have more information at our fingertips than ever before and more people seeking mindshare from us than ever before, why do we trust what we consume less and less every single day? Why is it that trust erodes more quickly when we see, hear or read something that doesn't agree with our current view?

[Farhaan's video panel shrinks and shifts to the right-hand side of the call. A presentation fills the majority of the page. The first slide reads "Investing in technology to lift communities and rebuild trust, Digital Public Square, April 2021." The Digital Public Square logo sits in the bottom right corner of the slide.]

Farhaan: I'm going to share a couple of slides here to walk people through a little bit of the analysis.

[The slide switches. This one is split into two sides, the left-hand white side and the right-hand blue side. Each side has half of the title. It reads:

"We have trust issues…And that has an impact."

The white side reads: "People are losing trust in governments, businesses and the media. People are conditioned to believe and share bad news, and our information diet makes this worse. It's cheaper, faster and easier than ever to launch an attack on your brand."

The blue side reads: "More than 55% of people think these institutions purposefully mislead the public. Fake information spreads faster online than the truth, and is 70% more likely to be shared. Where people get their news, and who they trust is making this worse. It costs as little as $400,000 to meaningfully influence trade agreements, elections and referendums. It's less for a brand."]

Farhaan: So, let's start with the fact that we've got trust issues. Many of you will have seen this in the slides I shared with you yesterday. Malign influence campaigns, conspiracy theories, political ads, and unchecked opinions sold as fact all contribute to the erosion of trust. This toxic information includes a range of false, misleading and divisive narratives. It includes disinformation, intentionally manufactured and distributed by malicious actors, as well as misinformation, rumours, opinions, and inaccurate stories that are characterized as facts. What makes it so dangerous? It's designed to be exciting. It's built to take advantage of how we engage online.

In a world where we scroll quickly and mindlessly through large volumes of content, these false stories catch our eye with ideas that are provocative, entertaining or surprising. The good folks at MIT found that false news stories on Twitter are 70% more likely to be retweeted than true news stories, and that false news reaches at least fifteen hundred people six times faster than true news. One of the reasons why false news spreads faster, deeper, and more broadly than true news is because of its novelty. It's more likely to elicit surprise and disgust, whereas true news is more likely to prompt sadness, joy, and trust. And it's cheaper than ever. The cost of running a disinformation campaign that has the power to meaningfully influence trade agreements, elections, and referendums is just $400,000. Let that sink in. $400,000 to meaningfully influence a trade agreement, an election or a referendum. Imagine what that can do to a brand. I know some of you will be asking now, what does the impact on a brand have to do with me and my work? If what your institution stands for and what it means to people matters, if it impacts their decision making, whether they will adopt or make use of a policy or program, or a life saving measure, then this matters to you now more than ever before. So, what's not working? We've spent considerable time so far talking about what's not working, so I won't belabour the point, but we also have a pretty clear picture of what does work, what it takes to build trust. It turns out that all of us intuitively know that each of these elements on the right-hand side of the ledger are critical to building trust, but they're notoriously difficult to scale.

[The slide changes. This next one features a global map with various points trailing to icons with labels. Text beside the map reads:

"Digital Public Square has been on a mission to foster healthy communities enabled by good technology. Our platform has served communities in more than 20 countries around the world, engaging tens of millions of people on challenging issues, rebuilding trust in public, private and community institutions."

Points on the map all over the world lead to issues such as immigration, COVID-19, religious intolerance, violence against women, freedom of expression, workers' rights and political interference.]

Farhaan: What if we could find a way to deliver high quality information that people wanted while still satisfying our need for instant gratification? Could we build knowledge and participation while fostering community around the shared knowledge? Can we do this at scale? DPS has spent the better part of 10 years tackling exactly this issue in more than 20 countries around the world.

[The slide switches. The next one is titled "What we've built," and features a phone screen with a cartoon interface. Text on the slide reads:

"A gamified learning and engagement platform built to help private and public sector organizations efficiently:

  • correct misinformation rapidly
  • address knowledge gaps that impede action
  • foster and mobilize communities around shared issues and interests"]

Farhaan: We spent the last five years innovating on a gamified learning and engagement platform that helps to rapidly correct misinformation, address knowledge gaps that impede action, and foster and mobilise communities around shared issues and interests.

[The slide changes again. The new slide's title reads "How we are scaling." Beside it, a pink circle shows another image of the cartoon interface. Text on the slide reads:

"Enhanced automation will support scaling. This includes misinformation detection, content creation and learning profiles that adapt to the user."]

Farhaan: And we're using technology to help us scale. This will allow us to tackle a broad array of challenges through increased automation and the use of machine learning that's intended to improve our detection of misinformation, as well as our ability to deliver customised learning profiles that adapt to each individual. Let's step into a quick case study of how we deployed this gamified learning platform to combat misinformation and disinformation related to COVID‑19.

[The slide changes. The new slide is titled "A recent case study." An image shows a phone and a laptop, both with the cartoon interface. The interface features one hand giving a thumbs-up and one hand giving a thumbs-down above and below a speech bubble. Little cartoon virus cells float in the background. A caption reads "It's Contagious! Misinformation spreads faster than a virus. Can you tell the truth from fiction?" Text beside the image reads:

"In early 2020, the Canadian Government was concerned about the impact of misinformation on social cohesion and public health. We launched the latest version of LIFT — aimed at combating the alarming rise in misinformation on COVID-19."]

Farhaan: Let me bring everyone back to a little over a year ago when the news of a pneumonia‑like virus was just circulating in China. Turns out when about a billion people start talking about something, we should probably pay some attention, so we did. What we saw back in January and February of last year was a watershed of misinformation related to COVID‑19. "Bill Gates is responsible for it, 5G is the underlying reason, also, you can take vitamin C and you'll be fine." What we did was take about 175,000 pieces of content that clustered into these 17 different types of narratives, we packed them into a product—the version one of the product I'm going to walk you through today—and launched on six continents.

We saw more engagement on this product than anything we've ever produced before, and we generate tens of millions of engagements every time we create things that are powerful. That demand for information was palpable. So, we went back, retooled, and built version two. And it turned out the Canadian Government was concerned about the impact of misinformation on social cohesion and public health. We were able to take that product and launch version two last year, aimed at combating the alarming rise of misinformation on COVID‑19.

[The slide changes. The next slide's title reads "Proof it's working" and its text reads "We significantly increased knowledge." On the right-hand side of the slide, stats sit in a semicircle. They read:

"180,800+ players from Sept-Dec 2020

2.1M+ questions answered

41.5% players complete the entire game

14% people chose to read more

11+ avg. statements guessed per game"]

Farhaan: And it generated an impact. From September to December, we had 180,000+ players on the product. 2.1 million statements were actually answered. And those make us feel really good because the numbers are significant. But here's what really matters: Four in 10 people who started the process finished it. 14% of those people chose to read more; they were engaging deeply. And we weren't delivering one piece of content. We were delivering 11 doses of good information intended to inoculate people against the harmful misinformation and disinformation that was circulating in our midst.

[The slide changes. The new slide title reads "We are tackling two interlocking problems." Below it, there are two numbered points:

"1. Designing content to promote forms of cognitive engagement known to revise misconceptions.

2. Delivering content by a new method that circumvents the negative emotions and biased reasoning."]

Farhaan: So, what's the problem we were trying to tackle? It was an interlocking problem. The first was: we needed to design content to promote forms of cognitive engagement known to revise misconceptions. [indistinct] did lab-based work on this. We've seen lots of studies. But scaling has been a challenge. The second was delivering content by a new method that circumvents two key blocks in our ability and desire to update our beliefs: negative emotions and biased reasoning. And if we could find a way around those two blocks while dealing with the behaviour that we talked about before around instant gratification, could we accomplish significant effects? We reached a lot of people, but what does that mean?

[The slide changes. The next slide is titled "We are able to revise misconceptions." The slide features a graph. The vertical axis shows "% Correct on Knowledge Test" and the horizontal axis shows two conditions: a control group, those who didn't play the game, and a treatment group, those who did. The control group has a marker covering the bottom range of the graph while the treatment group sits at the top range. Text beside the graph reads:

"Within a randomized control trial among a nationally representative sample, game play resulted in a significant 15% growth in knowledge."]

Farhaan: We're actually able to revise misconceptions. Within a randomised controlled trial amongst a nationally representative sample, gameplay resulted in a significant 15% growth in knowledge, and the effects were most pronounced in the most polarized communities. They were actually agreeing to a basic set of facts.

[The slide changes. The title reads "They retain information — meaning we are building resiliency." Below it, text reads "New corrective information is retained at high levels (86%) within sessions of gameplay." Beside the text, a graph illustrates this by tracking the percentage of correct answers over time.]

Farhaan: When people engage, you're actually retaining information. We're building resiliency to that misinformation and disinformation in our midst. New corrective information was retained at high levels, 86% within sessions of the gameplay.

[The slide changes. It's titled "We are reducing vaccine hesitancy." It reads: "Within a US randomized control trial, game play resulted in a higher increase in intent to receive a COVID-19 vaccine." A graph illustrates.]

Farhaan: And finally, we're actually reducing vaccine hesitancy. We ran a randomised controlled trial in the United States where gameplay resulted in a higher increase in intent to receive a COVID‑19 vaccine over an active control. What does this all mean? It means if I take the same piece of content and stick it on a website or put it in a news feed, and I take that same piece of content and stick it in our platform, we get a measurably different response to the content in the product than in an active control.

[The slide changes. It shows a graphic from the interface asking "How likely are you to share this with friends and family?" Farhaan reads the text of the slide directly.]

Farhaan: We're also generating network effects. 65% of people who responded said that they were either likely or very likely to share the corrected statement—the good, high‑quality information.

[The slide changes. Its title reads "It's scalable." Text on the slide reads "We are consistently generating impact across themes, languages and geographies." On the right-hand side of the slide, three images of phones with the interface sit in a blue circle.]

Farhaan: Finally, it's scalable. That same product is live in Myanmar today. The product has been live in the United States and in Canada and it's producing consistent effects, meaning that the product is doing some work in helping to address the key deficiencies we talked about at the front end: our behavioural needs and the demands of the information environment that we're living in today. I'm going to jump into a quick demo of the platform and Taki, I hope you'll play with me here before we get into questions.

[The game interface takes over the screen. It shows a yellow background with cartoon virus cells floating around. A thought bubble sits in the middle of the screen, displaying text. Sitting above it and below it is a blue hand with its thumb down and a pink hand with its thumb up.]

Farhaan: Folks, here is "It's Contagious". It's a really simple design. It didn't get to simplicity as a consequence of moving quickly. It was thoughtful and iterative and it's very much designed and intended to produce a series of behavioural effects. The first is reduce stress. We have enough of it in our lives. This is a stressful problem. We're all living under massive amounts of uncertainty. And as a consequence, our behaviours are going to be very much restricted from responding to things that don't comport to our existing beliefs. That's already true. Add stress and it makes it much worse. It's fun. It's intended to incentivise motivation and additional gameplay, which is why we're able to deliver 11 doses of good information. So, how does it work?

[Farhaan clicks a button at the bottom of the screen reading "play the game." The two hands move to the bottom of the screen, the pink thumbs up being labelled "true" and the blue thumbs down being labelled "false." Text reads "True or False? It is normal to feel stressed by the coronavirus pandemic." Text in a circle below the question reads "Drag your answer here" as a timer counts down from 45.]

Farhaan: This lands in your news feed and you click onto the platform. You get presented with the first set of questions then you start playing. True or false: Taki, is it normal to feel stressed by the coronavirus pandemic?

Taki: Well, you know you brought up feta earlier. I don't know whether you're a friend or foe because I wasn't sure if you were complimenting feta or denigrating feta, and as a Greek, that's a big deal to me. But, I am going to go with it's normal. True.

[The "true" hand is dragged into the circle, and the question becomes new text reading:

"Correct! The pandemic affects everyone in some way or another. It is normal to feel sad, stressed, confused, scared or angry during a crisis. Self-care, exercising and talking to people you trust can help." A small button below it reads "more about this."]

Farhaan: Good. Okay. We start moving you through a process. The first question isn't a right or wrong answer. It's just how you feel and what you'll see is how you feel is an integral part of the whole process. You get presented with correct information. That's got a couple of different considerations that we can talk about as you move on. We're also able to see high‑quality additional sources of information.

[Farhaan clicks the "more about this" button. A dialogue box shows the answer's sources and another button reading "get better information?" Farhaan clicks it, and a correction submission form appears.]

Farhaan: So that if you feel like you want to learn about where the material comes from, you can dive in deep. Importantly, there's a prompt for you to disagree with us—get better information, tell us what you think the truth looks like. It's important because we get to learn about people very quickly who may disagree with the basis of facts that are presented to them. And as a consequence, we can integrate that into our learning model as we go. Let's keep going.

[Farhaan exits the dialogue box and moves to the next screen. On it, a star shows the number 2 and reads "Level up! You've levelled up to Health Newb." Underneath it, a hand has a thumbs up with a smiling face, glasses and propeller beanie.]

Farhaan: We're going to answer a couple of questions. We're going to get a little better at this and then we're going to learn about the recovery process, which is integral for us to learn—not just how you think about things and what you know, not just how you feel, but how it affects a whole host of public policy issues.

[The screen changes to a new question. Farhaan reads it.]

Farhaan: So, Taki, true or false, having COVID‑19 antibodies, whether from natural infection or treatment, guarantees you're immune to reinfection.

Taki: Now, what I'm going to do is I'm going to answer it a bit like an uninformed person, because not everybody reads the news constantly like a lot of us do. I'm just going to assume that I've had my first shot and my friends tell me, "Once you have your first shot, you're good to go. You can travel all around the world. You can go to bars. You can go to restaurants." I'm going to say it's true. It guarantees me now that I'm good.

[Farhaan drags the "true" hand into the answer circle. A new screen appears reading "Wrong. The presence of antibodies does not guarantee permanent immunity to reinfection. A growing body of research shows the COVID-19 antibodies can be present for months after infection however it is unsure how much, if any, immunity is gained from the antibodies."]

Farhaan: Just in time. One second to go. Now, you get a corrected statement. You get immediate information and feedback that gives you a response to what might be an uncertainty. Again, we can learn more about this. You get the data that tells you where the source of information is. We're seeking to bring clarity and high‑quality information.

[Farhaan clicks on the "more about this" button and a dialogue box pops up with sources. He exits it, and a level screen shows the thumb character as just a thumb with glasses. A star shows the number 1. The screen reads "Level Down :(. Oh no, you've levelled down to A Thumb."]

Farhaan: Now, regrettably, we just levelled down. Loss aversion starts to kick in. Do we want to recover our rank?

Taki: Yeah, let's give it a shot.

[Farhaan clicks a button reading "recover your rank." The screen changes, reading "Recover your rank! There's no right or wrong answer to the following questions. Please just answer honestly!" Farhaan clicks a "continue" button and a question appears on screen.]

Farhaan: Now, there's no right or wrong answer to the series of following questions. We just want people to answer honestly. Now, we're learning about the emotive state that people have when they read a piece of information. What the data has told us is that once we understand how people feel about a piece of information, it has an incredibly high level of predictability around whether or not they believe it, whether or not they're going to share it, and whether or not it's going to have resilience in that belief for any length of time. So, "how does it make us feel when we learn that the presence of antibodies does not guarantee permanent immunity to reinfection?"

[Farhaan drags a hand labelled "anxious" into the answer circle. A new question appears with a slider style response.]

Farhaan: Well, for most people, it's going to make them feel anxious. As a result, we're now drawing a relationship between what you think, what you know about a piece of information, how it makes you feel, and then over time, as we gather data, what the effect of that has on a series of policy preferences, because we can draw relationships between them. So, now we ask, "do you support or oppose this policy: restricting the size of groups in public spaces?"

[Farhaan slides a thumb all the way into the "support" side of the slide. A short "level recovered" screen shows the thumb in the propeller beanie before moving to another true or false question.]

Farhaan: As we go on, we get another question. "True or false, you could still have COVID‑19 even if you recently tested negative." So, here we go again, Taki.

Taki: I'm going with true on this one.

[Farhaan drags the "true" thumb into the answer circle, and a "Correct!" screen pops up.]

Farhaan: Now we get motivation that keeps us going. And what you'll see over time is that the fluency of questions—the stuff that's really easy, the stuff that's really hard—is designed to encourage people to get a few things right and to get a few things wrong. One of the questions I always get asked is: "why does this work over a piece of information on a given website?" Well, it turns out that when you play games, you're expecting to get a couple of questions wrong and a couple of questions right.

[As Farhaan speaks, he answers another true or false question. The "Wrong" screen pops up.]

Farhaan: And as a result, your expectations of what might be true or false start to open up, and that opens up the opportunity for you to update those beliefs. So, this one was wrong. Let's ask again. Let's go to another question.

[Farhaan moves onto the next question.]

Farhaan: "True or false, packages delivered through the mail are key sources of COVID‑19 spreading." Let's select true again and get to another recovery question. Again, I'm wrong.

[Farhaan selects "true" and a "Wrong" screen pops up. It transitions to a "levelled down" screen. His thumb avatar is once again a thumb with glasses.]

Farhaan: Now we're back down to level one. Let's recover again.

[Farhaan navigates through questions.]

Farhaan: Again, now we start getting into more questions about how people feel and the relationship to a series of policies. This makes me feel angry. It turns out that disagreement with policies and anger correlate really, really closely. How much emotion drives our ability to consume, process, and then retain information, plus the effect it has on our behaviours, is really, really significant. So, here we go. I'm going to oppose this, and the game continues. You get a series of questions.

The last thing that we're not seeing here, Taki, because I'm mindful of time, is the questions on demographics. But here's an important one. "Would you be willing to receive a vaccine for COVID‑19 now that it's available in Canada?" Again, when we ran this in a randomised controlled trial, assessing the responses to this question as it pertains to the content, the same material in two different places delivered through two different methods, we're seeing a difference between the two that's statistically significant, suggesting that the platform is doing a bunch of work for us and actually driving these types of outcomes. I'm going to stop there, and we can get into a discussion. Is that okay, Taki?

[Farhaan's screen sharing disappears. Farhaan and Taki's video panels fill the screen.]

Taki: Absolutely. Thank you, Farhaan. That was amazing. So! I want to go back to the beginning, not just at the beginning of your presentation, but almost to the beginning of us as human beings. One of the things I took away from your presentation is that we have a lot of things that are deeply, deeply wired into our being, into the thing between our ears. Even though we live in a civilized world, and we live in a world of technology, a lot of what we bring to that comes from hundreds and hundreds of thousands of years of biological interaction between ourselves and our environments. What are some of the things that are deeply, deeply ingrained in our minds in 2021?

Farhaan: So, look. I think that we're always trying to classify everything because it helps us make sense of the world. And that's, you know, from many, many eons ago, when we were trying to figure out if the rustling in the bush was the tiger that was going to come eat us or a rabbit that I wanted to catch. As a result, we're trying to process large volumes of information to classify it, so we know how to behave and react. Back then, it was in order to make sure that we could protect ourselves. Today, it's about what you believe and what you don't believe on an ongoing basis. If you step back and try to find a statistic that talks about the actual volume of information we process today—and I went to page five and six of the Google search, by the way, folks—some of the best data on this is still years old. And the volume of information we are presented with today, on a daily and ongoing basis, is exponentially greater than ever before. The two are incomparable, and it's only been like 50 to 60 years.

Taki: So let's, before we get to how much information we're consuming, which is an amazing point, let's just spend a moment on, kind of, how we consume information. I think what I took from that is that every time we get a stimulus, an external stimulus, one of the things our brain does right away is play a little game of friend or foe. Is that a way of putting it?

Farhaan: Yes. Exactly. Is it something I already believe or is it new? And if it's new, am I ready to be open to new? How do I classify this?

Taki: Is it safe or is it dangerous? Am I comfortable with it or uncomfortable with it? Is it familiar or new? Our brains are making these near instantaneous diagnoses of how we're interacting with the world. And we know this to be true from psychology. We know this to be true from sociology, from anthropology. And recently, we've been learning this from—recently, last 15, 20 years—from behavioural economics, where a lot of people like Daniel Kahneman are telling us, a lot of these things that you've been taught at school, that you're a rational human being and you bring a focus to the market, and you calculate in your head costs and benefits and risks and rewards. Not instantaneously, you don't. In fact, you have to overcome a lot of biology to get to that level of rationality. Tell us now how the not good guys use this. You talked a little bit about page one, page two, but what are some of those biological strings that the not good guys are trying to tug at?

Farhaan: Yeah, so they're going to your crocodile brain. They're going to your emotional centre. They're finding fissures in your existence in the world, whether that's what you might believe, who you might listen to, which communities you belong to, the grievances that you have, and they are attaching themselves, very surgically, to those grievances. Because those are pre‑existing conditions. It's not like someone's manifesting a new grievance that you have. That takes a lot of time and a lot of energy. It's way cheaper and faster to figure out what makes you mad and then to make you mad. And the simplest and fastest way of making you mad is giving you a piece of information that you firmly disagree with. Turns out you can make people really mad, even on weakly held views.

Taki: So, are you saying that making us mad is profitable for people?

Farhaan: It's incredibly profitable. Go back and look at the returns from the ads that were on a bunch of the sock puppet pages that were set up in a bunch of European countries around the 2016 presidential election, driving a whole bunch of false narratives. Those pages were collecting advertising dollars as a consequence of people landing on those pages. So think about the perversion of what's happening here. You have an open system where you could run a bunch of ads. Those ads are great. They are supposed to drive people to a particular place. In this case, they're intended to get you mad, get you to a place where it reinforces your previous belief about why you might be mad at this politician or that politician, and then when you land there, a bunch of those people are going to click on a bunch of links to sell you products, and they're profiting as a consequence.

Taki: So, anger is profitable, and, as you said, it's not just profitable, it's wildly profitable. Like, I kind of look around sometimes as a little bit of a naive Canadian, which I am. You look around at some of the things in the world and you're like, "why would they be doing this? Why would they be saying this? Why would they be propagating this third thing?" It turns out there's kind of a lot of money in this. That's anger. Talk to us a little bit about fear.

Farhaan: Both fear and anger drive some of the same emotions. Fear can be leveraged to move from fear to anxiety to potential change. It's a harder road to travel. What we've learned is that when people are happier, they're more likely to accept new pieces of information. And if you can drive them to the place where they can be motivated to be happier, then they're more likely to accept new pieces of information. But fear drives in the opposite direction. It helps to reinforce a whole bunch of pre‑existing beliefs and it creates that negative emotion that's a block to being able to update information. Fear and anger very much run in tandem. Like that saying that Yoda had in Star Wars many moons ago. These two things are related. They sometimes operate on a continuum and sometimes operate independently. They don't need to be on a continuum. You can drive either one of these emotions. And to be totally honest, the adversary doesn't care. It doesn't matter which one is going to be most effective as long as it produces the cheapest cost per click. Which means that you're testing both of them to identify fear or anger amongst a particular audience. Which one drives the emotive response I want? Which one gets you to the place that I need you to be in? And whichever one drives the lowest cost per click, it doesn't matter; I get my outcome.

Taki: There's an ask‑question feature on the platform where you can raise your hand. So, raise your hand. We're starting to invite questions and they'll be fed through. We'll do them in a little bit of a coherent session rather than just throwing random things at Farhaan. But now Farhaan, talk to us a little bit about—

[Farhaan sips from a mug.]

Taki: Did you just change coffee cups or did you turn it?

Farhaan: No. I've got water and coffee.

Taki: I think you're trying to trick us right now. I think you're trying to make a....

Farhaan: Which one did I start the conversation with?

Taki: Exactly. I don't know. I don't know what's true anymore.

[Farhaan laughs.]

Farhaan: I don't know what's fake. In your demonstration, you were showing us the reliable sources. I saw things that are mainstream media. And mainstream media, I know there's a section or a segment of the population that right away just turns off. They're like, "mainstream media. Oh my God. Look who he's quoting. Look who he's saying is an authority." Doesn't that in some populations reinforce the fact that you're wrong?

Farhaan: A hundred percent. Let me give you two different answers. One on the trust side and then one on the mechanics. On the trust side, look, we've seen a really significant downward trend in trust across all of our institutions—government, business, and the media. The latest statistic I saw was that more than 55% of people think that these institutions are purposefully misleading the public. And so as a consequence, when people see information in media sources that they either have a pre‑existing view on or are unclear about, they're going to question the veracity of the information. There's no question about that. How do we deal with that from a mechanics perspective? And here, this is where machine learning becomes really, really valuable, right. Our expectation is that different sources are going to be necessary for different types of people. If we come back to vaccine hesitancy as an example, what we know around vaccine hesitancy is that there are many different drivers for why someone might be vaccine hesitant. Maybe it's how you grew up. Maybe it's what your parents thought. Maybe it's what level of education you had. Maybe it's what community you belong to. There's a whole array of issues that are related to driving this complex vaccine hesitancy. And for some people, understanding the source of the information is going to have a big relationship to whether or not they trust that information. So we're tackling that in the context of a dynamic environment where when we know a little bit about you, we can identify a little bit more about the content that is most likely to have the kind of response we want, which is a little bit more trust in the information itself, meaning that the sources are going to be dynamic because they're going to reflect where you are as a person in order to be tailored to you. And it's absolutely critical for complex issues. There is no one‑size‑fits‑all. It doesn't exist. 
And the second that we start to figure that out, the faster we'll be able to deal with an array of issues that are related to our ability to process scientific information in particular. It's why we need tailored approaches for particular groups and communities. Sources are a really important part of that story. A dynamic product that reflects the needs of the person who is learning is going to be way more successful at addressing what that knowledge deficit is.

Taki: So, now I'm getting a lot of questions in the realm of...If I were to summarise the questions of this block, they would be in the realm of: how do you or how does your platform determine what's true?

[Taki makes air-quotes with his fingers.]

Taki: So, you've got this instrument and you're combating misinformation or disinformation, but I guess another way of putting it is, how are you the arbiter? How are you the umpire? Isn't this just another form of manipulation? People are manipulated on the X axis and now you're manipulating them on the Y axis?

Farhaan: Who decides what's facts and what's alternative facts? So, look. My view on this is we have some facts. They are evidence based. They have been rigorously tested. They have been arrived at through a number of challenges and on an array of issues. It's entirely possible to agree on a basic set of considerations that most people can agree on if they choose to. The fact that medicine can help people doesn't mean that every type of medicine will help every type of person all the time, but the fact that we've got some medicine that helps some people some of the time is a fact. As we start to disambiguate that into very specific types of initiatives, specific types of projects on very specific themes, you can come down to a pretty basic set of facts that most people can agree upon. But then you need to learn from them what it is that they believe to be true. This opens us up to a broader understanding of where we need to investigate and learn a little bit more about what people believe and why they believe it, so that we can update the basic set of facts. Or we can discard them and inform people about why we might be doing that. We're not the arbiter of whether or not there is a fact and what that fact might be, but it's pretty clear that you can find a basic set of facts on just about any issue and use that as the basis of starting that conversation, to get an agreement on a basic platform on which we can start to learn more about each other in much more nuanced ways. My view is that our platform starts at that basic part, which is let's get people to agree on a basic set of facts and start a conversation around where the delta exists, not to be able to determine who's the winner and the loser, that happens over time, but to agree on the basis of how we can have a conversation about how we get to what a new set of facts looks like.

Taki: Now, in the old days and as an old person, I remember the old days. But, in the old days, we used to not disagree on facts in the public sector. Unemployment was X. The number of people who had access to childcare was Y. Broccoli is good for you. Pizza, even though it's delicious, is not amazingly good for you. And then we would take a common set of facts, and then we would jump off of that to say, "Well, pizza is not good for you or broccoli is good for you, and this is what we're going to do." We would disagree on "no, you must ban pizza." "No, you must cram broccoli on every plate of every Canadian child, etc." But facts now seem to be much more—I don't know how to put this—pliable. They seem to be much more contestable. Some people have even gone as far as saying the fundamental problem in our current society is that we've weaponized facts.

Weber said that facts were completely distinct from who said them and who's observing them. Now, others are saying, "No, no, no, there's no such thing as a fact. It's all a lie, and therefore we've all got to weaponize facts." Give us your two cents on that.

Farhaan: So look, we've asked a whole bunch of people—to go back to the first set of comments—to do binary classification. It's a 'this' or a 'that'. We're constantly doing a 'this' or a 'that'. Then we said, we're going to move into a paradigm of the world where science is going to start to set a frame around how we should think about the world. And the basic consideration with a hypothesis is to disprove it. We're going to say something is true and we're going to spend all our time on disproving it.

Well then, you get that same set of people, who are now sitting in a room, saying, "but you just said that pizza is not good for you and you're going to spend all the rest of your time trying to disprove that and tell me that it's actually good for me. How do you expect me to come along for the ride in this total world of uncertainty?" I guess my answer to this is, I think that we're into a phase of more rapid learning and evolving and being comfortable with that uncertainty. Because as we peel back layers of the onion, pizza is not just good or bad for you, there are good things about it, and there are bad things about it. As we peel back the layers of the onion and learn more and more about the complexity of why something is good for you or why something is bad for you, we as people need to find ourselves in a world where we're comfortable with that rapid learning and those updates. Because it's not going to be as simple as a binary classification in absolute terms. It isn't that pizza is always good for you or pizza is always bad for you. It's that there are some parts of pizza that are good for you and there are some parts of pizza that are bad, and it's not bad for you all of the time. That's way more complexity than we're used to dealing with, particularly when we simply want to classify things to make them nice, neat, and easy.

And so, now we're getting into a philosophical conversation around belief and why people might be looking for ways of making sense of the world that are easy, that are neat, and that conform to their pre‑existing views. I would say that's a big driver behind why conspiracy thinking works. It works because it gives you a neat picture of the world where it's a 'this' or a 'that'. Those are bad guys and they're only doing bad things. You can document everything they do into the bad category. See, they're bad guys. It's really easy to drive people to a place because they already want to get to binary classification. It's like pushing a cart that's already on its way downhill.

Taki: Yeah, it's fascinating, in Western culture especially. We love to think in binary terms like good, evil, black, white, on, off. There's this either/or and it's really, really fascinating how that impacts on public policy. Because public policy, as we all know and as you were talking about indirectly, is messy. It's about predation. It's about nuance. Yet the human brain isn't built automatically for nuance. You have to work at nuance. That's something for the people listening today, who are writing policies or designing programs or delivering programs or writing press releases or reviewing social benefit applications. You have to constantly remember that. You are doing nuance and it's like "no, section seven of the application says…." But a lot of the people on the other side, your counterparties, they're not on section seven. They won't even go to the second page of the Google search. You have to meet them on their own terms, which is that a lot of them come to this in binary terms. Am I eligible? Am I not eligible? Should I take the shot? Should I not take the shot? Et cetera. So, that was a little bit of philos- go ahead.

Farhaan: I was going to say, and just to make that more difficult, layer on all the other stuff. You've got this person reading a website that's trying to figure out whether section seven is meaningful to them or not, and they're worried about whether or not they should eat pizza, and whether it's good for them or not, and whether or not they should pick up the kids in five minutes or whether it's safe to let them play in the yard with a bunch of other kids. You think about all the complexity of all the decisions that they're making consciously and subconsciously, and how do you not step back and say "it's pretty understandable why people get overwhelmed." And it's pretty understandable why the delivery of these types of binary choice information streams that are intended to overwhelm our senses, get us into fear and grievance mode, are really, really easy and how we've ended up where we are today. Because the information environment in which we live reinforces that all the time. Breaking out of that bubble requires us to be conscious of, exactly what you said, where they are.

Taki: Yeah, I've read different figures on how many decisions an average human being makes a day. The one that I see most often—it doesn't necessarily mean it's true, but the one that I've seen most often—is the average human being makes about 3000 decisions a day. Most of those, obviously if you're making 3000 decisions a day, you've got to make them quickly. There's a handful of decisions every day that are consequential, that are impactful, and you should really take a lot of care, but at a base level, human beings are decision‑making machines. If we weren't decision‑making machines, we wouldn't survive. We would have been paralyzed a long, long time ago. As you mentioned now in this new environment, we're constantly- the number of decisions that we are taking all the time is just exploding. Even just when you're driving. Should I turn left? Should I turn right? Should I pass this person? Should I go back? That's a lot of strain on our CPU. Another thing that Policy and Program Officers in the Government of Canada should constantly, constantly remember. We have another question that is in the realm of…away from philosophy and right back deep into your platform. The questions in this second batch are related to: who's coming to your platform? Are they predisposed to believe or not to believe? Are you actively…So, what is the counterparty bringing to your platform when they come on your platform?

Farhaan: I would say everything. We're addressing both a very general set of the population as well as very specific groups. I'll use the example of the case that we just saw as the context for this. Looking for the vulnerable segments related to COVID‑19 misinformation and disinformation around a baseline study allowed us to assess what the drivers of vulnerability are. Where are you vulnerable to specific types of information? What might predispose you to that vulnerability? Does your background have a role in it? Does your age have a role in it? Do your psychographics have a role in it, in terms of the way that you behave both online as well as in the offline world? We took that back and we built a model that allowed us to understand who the vulnerable communities were. And then we actively went out and sought those vulnerable communities in online spaces to get them onto the platform. In parallel, we did the same with a general audience. And so, we can say with a high degree of confidence that we're addressing both the broad array of people across society from a nationally representative sample, as well as very specific communities. We recruited, specifically, to understand whether or not we were getting people who are predisposed to playing games and who are more likely to be open to changing their views and opinions. It turns out that we were able to address both the segments that are representative of the population, as well as specific subgroups, not only those that may have been predisposed to believe or not believe, but those that may have been predisposed to specific types of information. I can tell you that you can cluster them, as a result of understanding the data, quite neatly in some cases. For example, if you believe that COVID‑19 is the work of Bill Gates tinkering away in the basement in Seattle or wherever he lives these days-

Taki: No, no, it's 5G. It's the 5G.

Farhaan: Right, exactly. It's the 5G guys. So, if you believe Bill Gates is responsible, you will believe the 5G piece, and you'll believe a whole raft of other considerations. There's evidence that we've seen that those considerations don't belong to only a subset of people who have specific ideological beliefs or people who belong to specific gender cohorts or age cohorts, that they run the gamut of society, and that there are some markers that give us indications and a high level of predictability as to what those clusters are. But our tool, we believe, addresses both the general population and then, as we talked about in the customization of information for specific learners, to address very particular vulnerabilities and back to the tech, that's where the machine learning is most useful.

Taki: So now your presentation has generated a flood of outrageously great questions. So…I am going to pick a question that's philosophical,

Farhaan: OK.

Taki: …And then I'll pick a question that is empirical, and then I'm going to pick a question that's personal that I think a lot of us have had some issues with over the last year. So we'll start with the philosoph- or do you have a preference?

[Farhaan puts up a hand and shakes his head.]

Taki: Okay, so. We'll start with the philosophical. You have presented a very interesting tool developed in the non‑profit sector to combat misinformation. What is the role of public policy and public servants in combating misinformation? Is there a risk in the government stepping into this space? That's our philosophical question.

Farhaan: I may give the practical answer first, which is: there's always risks. Those don't go away. It's about how you manage those risks. Philosophically, I think it's absolutely critical that the public sector is fully engaged on understanding where Canadians are every single day on an array of issues that affect how we exist as a society, how we govern ourselves as a society, how we make decisions about ourselves as a society, and what community means to us. Community isn't an outcome. It is part and parcel of the thing which you create all the time. Misinformation and disinformation both have very significant effects at stretching the fabric of those communities, both in purposeful and accidental ways. And so, as a consequence, how do you step back and not participate and not get engaged in seeking to stitch that fabric up as tightly as you possibly can in order for this society to have the best possible shot at figuring out what it cares about, what its values are, and how it wants to behave today and in the future? Fundamentally, if you don't get engaged, I think I have to ask the question: what's the public service's purpose if not to serve the public? And so, philosophically, I fundamentally believe it's not just necessary or desirable, I think it's part of the mandate.

Taki: That's a super interesting question and a super interesting response, because implicit in the question is there's a big danger if we play in this area and explicit in your answer is there's a big danger if we vacate this. Now our empirical question: what can you tell us about the characteristics of those that are found to be well‑informed versus misinformed? So let me- I'll add some stuff to the question. Generally speaking, is it education? Is it gender? Is it age? Is it region? What are some of the characteristics of these groups?

Farhaan: I wish there was a one‑size‑fits‑all response. There isn't, unfortunately. There are a couple of considerations that I think are important. Education had a relationship, but the relationship is a confounding one. Where you had high levels of education or high levels of political ideological beliefs, it just made you susceptible to different things. There's a great piece in The New York Times off the back of a Pew study about a couple of months ago, and I'll flip it to you to share with others.

'Cause I think it's a really important read. What it basically went back and looked at was the stuff that both Republicans and Democrats believe. And it turns out that both of them are vulnerable to just different types of information. As a consequence, their beliefs are predicated on some of those markers, like ideology as drivers, but it's not that one was simply better off. It's that it just led you to be susceptible to different things. Democrats in this example (and I don't want to torture the fact too much, which is why I'll send you the article) made the claim, or produced the evidence, that they over torque the danger of the disease and the proportion of people who end up in a hospital and who end up in an ICU unit. They are overwhelmingly informed about a piece of information that turns out isn't true, but it's because they believe the disease might make you sicker than it actually might. Conversely, with Republicans it's the exact opposite. So it wasn't that higher or lower education, or political ideology, simply drove one outcome or another. It's that as you start to cluster these things, what you learn is it's just different sets of vulnerabilities for different groups of people. It's why this is not simple. It's why it's complex.

Taki: Exactly. I love that point, Farhaan, because we often as public servants, we have certain characteristics that are not in the Canadian norm. We're a little bit more educated. We're a little bit more…We have a lot more job security, et cetera. So, we bring a lot of that to the world and we look around and we go, "how come they have these prejudices or these biases?" We have them too. We just have different prejudices and different biases and different triggers. Keep that in mind, because if you come to this with the humility that you yourself are subject to these pressures, you have a very different policy response and a very different kind of sympathy with the people that are being more overtly played with on things that you think, "oh, it is really easy. People should just read the newspaper and they'll be informed." The last question is the personal one. This one, I think, is something that a lot of us have dealt with during the course of the pandemic. I'm going to read it like the other two because they've been so great. I'm going to read it verbatim. So, here we go. "During the pandemic, I have felt particularly disappointed with a few of my friends who mistrust government sources and scientific advice about the usefulness of vaccines. I tried to debate rationally and patiently with these friends, but in vain. I am aware of our different background and value system. What is the best way in tackling this kind of situation?"

Farhaan: Taki, I think you had the right answer: empathy. As soon as we see the person we're talking to as opposition to our belief or we see them as adversarial to our pre‑existing view, and we try to convince them and we try to focus on facts and argumentation that are intended to disrupt their thinking and get them on to our side, I think we start from a place of failure. The best social science research explains to us why blowback exists. It's because it attacks your fundamental being. I use this story all the time. Taki, if I told you I know that you grew up thinking that the sky was blue. I know that you did. I know that you read it in books. I know your parents told you. I know you watched it on TV shows. I know you learned it in school. And if I tell you, "Taki, it turns out that the good folks at Pantone, the experts on colour, the experts on colour—there are no bigger experts on colour. They do all the work on colour—have decided that because of the changes in the ozone and the way that light refracts, the sky is purple. It's not blue. It's actually purple." If I walk into a room with you and say, "Taki, the sky is purple. Here are all the facts around why it's purple. Here's why we've decided it's purple. Here's reams of evidence about why it's purple." You're going to think two things. One: I have horns growing out of my head. Two: you're going to walk away from that conversation believing the sky was more blue than when we started talking. It's because we expect that what is going to shape and shift someone's views and opinions will be consistent with our own, when their drivers for those beliefs, regardless of some of the characteristics we share, are fundamentally different from our own. And as a consequence, if we can't figure out how to uncover and draw some relationships between what those drivers are, no amount of the wrong fact is going to change the outcome. A good amount of some of the right ones for the right person might.

Taki: Farhaan, what a wonderfully thoughtful, provocative, and interesting hour you have given to about a thousand public servants in the Government of Canada. Thank you for sharing your time with us. Thank you for sharing your insight with us. And thank you for being a friend of Canada's public service. We continue our CSPS data week tomorrow with another cool little Canadian entity that is bringing forward technology to help turn the reams and reams of raw data that we have in the public sphere in the Government of Canada into refined data. I hope you'll join us. Thank you. Be well.

[Taki and Farhaan wave and the Zoom call fades out. The animated white Canada School of Public Service logo appears on a purple background. Its pages turn, closing it like a book. A maple leaf appears in the middle of the book that also resembles a flag with curvy lines beneath. The government of Canada Wordmark appears: the word "Canada" with a small Canadian flag waving over the final "a." The screen fades to black.]
