The New Economy Series: The Economics of Attention (LPL1-V09)

Description

This event recording explores key policy questions that have emerged from the attention economy, and the role of government in countering its harmful repercussions for individual and societal health.

Duration: 01:30:21
Published: January 12, 2022
Type: Video

Event: The New Economy Series: The Economics of Attention



Transcript: The New Economy Series: The Economics of Attention

[The animated white Canada School of Public Service logo appears on a purple background. Its pages turn, opening it like a book. A maple leaf appears in the middle of the book, which also resembles a flag with curvy lines beneath. Text beside it reads: "Webcast | Webdiffusion." It fades away to a video chat panel. It's a close-up on a dark-haired man with a headset and square glasses, Owen Ripley. He sits in front of a black decorative shelving unit. As he speaks, a purple title card fades into the bottom left corner for a moment, reading "Owen Ripley, Canadian Heritage."]

Owen Ripley: Hello, everybody. Good afternoon to those in central Canada and on the East Coast, and still a good morning to those joining us from the West Coast. Welcome to the ninth event in the New Economy series, a partnership between the Canada School of Public Service and the Centre for International Governance Innovation, otherwise known as CIGI. My name is Owen Ripley and I lead the team at the Department of Canadian Heritage responsible for broadcasting, copyright, and the creative marketplace.

It's my pleasure to be your moderator and to guide today's conversation with this great panel. At the outset, I would like to take some time to acknowledge that I am joining you from the national capital region, which is on the traditional territory of the Algonquin Anishinaabe people. Among other things, we do this land acknowledgement to show respect for and be more inclusive of First Nations who have a long history with this land. On that note, I'd also be remiss if I didn't acknowledge that a year ago today, George Floyd was killed by a police officer, thrusting the Black Lives Matter movement much more into the public consciousness. Since then, organizations, institutions, and society have grappled with hard questions related to systemic racism. The public service is no exception. For those of you who may have interacted with me, you know that I'm a passionate believer in the role and mandate of the public service, but I also know that it risks losing credibility over time unless it better reflects the Canadians it serves.

Canadians deserve a public service that is more inclusive of Black people, of Indigenous peoples, of people of colour, of persons with disabilities. Today, we should just take a moment to remember that it's incumbent on all of us as public servants to help build that.

Turning now to the event at hand, I'd just like to go over a few housekeeping items. The first one is that simultaneous translation is available in the language of your choice through the portal. You should have received instructions that were sent with your webcast link. I'd also like to note that we hope that today's panel discussion will be interactive. We'd like you to get involved and to submit your questions. While we have a long list of questions that we'd like to put to the panel, it will be better if you participate. I'd really invite you, as a question occurs to you, to use the Q&A function of Zoom and to submit your question and we'll do our best to hit as many of those questions as we can. You can do that by just clicking on the option in your Zoom interface there.

 As I mentioned, I lead the team at Canadian Heritage that's grappling with a number of digital policy issues in the space of broadcasting, in the space of online harms, and in the space of news media remuneration. I've had the opportunity to hear all of our panellists today in one form or another or to have interacted with them. We really have a great line-up of experts to shed some insight on this phenomenon called the attention economy. For some of you, you may spend a lot of your time thinking about the attention economy, and you may be involved in a policy area that's implicated by it. For some of you, you may be here today just to learn a little bit more about it.

Today's panellists, regardless of where you're coming from, do great service in terms of shedding some insight. We have Heidi Tworek, who is an Associate Professor of Public Policy and International History at the University of British Columbia. She is a Senior Fellow at the Centre for International Governance Innovation, or CIGI, and a Non-Resident Fellow with the German Marshall Fund of the United States.

[Three more panels appear, sending Owen's panel to the bottom right corner. In the top left panel, a bald man, Bob Fay, sits in a white attic space with a banner that reads "Centre for International Governance Innovation" and shows a tall building. In the top right panel, a young woman with long brown hair and headphones, Samantha Bradshaw, sits in a home office in front of bookshelves. In the bottom left panel, a woman with pulled-back brown hair and glasses, Heidi Tworek, sits in front of a white wall and a tall combination lamp and bookshelf.]

She is also a Non-Resident Fellow at the Canadian Global Affairs Institute and Co-Editor of the Journal of Global History. Heidi is an expert on platform governance, the history of media technologies, and health communications. We have Samantha Bradshaw, who is a Post-Doctoral Fellow at Stanford University, where she contributes to the work of the Internet Observatory and the Digital Civil Society Lab. She is also a Senior Fellow at CIGI and an active contributor to international expert panels, particularly concerning the effects of technology on democracy. Samantha is an expert on the producers and drivers of disinformation and on how technology enhances and constrains its spread online. Finally, and not least, we have Bob Fay, who is the Managing Director of the Digital Economy at CIGI, where his research focuses on complex global governance issues arising from digital technologies. He has held senior roles at the Bank of Canada, including serving as Deputy Director of the International Department and as special assistant to the Bank of Canada Governor, Mark Carney. Once again, welcome to all three panellists. We agreed amongst ourselves that each panellist would take a few minutes at the outset—not to give a long speech, but to spend three or four minutes providing some opening remarks and shedding a little bit of light to get us started on this phenomenon of the attention economy. Bob, I think you're kicking us off.

Bob Fay: Great. Thank you very much, Owen, and it's great to be here and to be on this panel with Sam and Heidi who really are the experts.

[Bob's panel fills the screen. As he speaks, a purple title card fades in for a moment on the bottom left corner. It reads "Robert Fay, Centre for International Governance Innovation."]

I've learned a lot from them, and I've learned a lot from what the Canadian government and our public service are doing in this area. I thought I would start by just describing what we mean by the attention economy. I was just talking to my wife and one of my daughters, and they go, "What does that mean?" Because from a parent's perspective, when kids get onto the Internet, it's the inattention economy. They don't want to listen to us. Of course, the reason why is that when they're on, there are a lot of things on these websites that keep their attention, and our attention more generally, because we go online to find information, to link with our friends, and to do a whole bunch of things. But when we do that—some of you may be aware of what's going on behind the scenes, but I'll describe it anyway—essentially, we're bartering with our personal data to get a service. Each time we go onto a website or an app, we continually give up data. Sometimes we knowingly do it, and sometimes we have no idea it's happening because of opaque terms of agreement and things like that. The implications of what we're doing when we're engaging in this attention economy, I think, are much broader than that simple barter.

First, there's this ad tech model that runs in the background. When you enter a website, the information that is available to Google is essentially auctioned off, and this auction takes place as you're loading the website. The information about you is released to advertisers, who decide how much they want to pay based on the amount of information they have about you, because the more they have available to them, the more likely their ads will be successful. This all takes place while the website loads. By the time you've loaded the website, the ad is in place or it's not, depending on how the auction went. Of course, just imagine this taking place billions and billions of times a day. As an economist, I find this incredibly impressive, but as an individual and as a citizen, it can actually be pretty scary. Alphabet and Facebook make billions and billions of dollars off of their advertising. This attention economy is literally worth a fortune.
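
To make the mechanics Bob describes a little more concrete, here is a minimal, illustrative sketch of a simplified real-time bidding auction. The bidder names, user-data fields, and second-price rule below are assumptions chosen for illustration, not any platform's actual system.

```python
# A simplified, illustrative real-time ad auction (not any platform's actual system).
# Each hypothetical bidder prices the impression higher the more user data it can match.

def run_auction(user_profile, bidders):
    """Collect one bid per advertiser for a single page load and settle a second-price auction."""
    bids = []
    for advertiser, bid_fn in bidders.items():
        amount = bid_fn(user_profile)  # each bidder prices the impression from the data it sees
        if amount > 0:
            bids.append((amount, advertiser))
    if not bids:
        return None  # no ad is placed on this page load
    bids.sort(reverse=True)
    winning_amount, winner = bids[0]
    # Second-price rule: the winner pays the runner-up's bid (a common auction design).
    price = bids[1][0] if len(bids) > 1 else winning_amount
    return winner, price

# Hypothetical bidders: more matched data leads to a higher bid.
bidders = {
    "shoe_brand": lambda u: 0.40 if "running" in u.get("interests", []) else 0.05,
    "travel_site": lambda u: 0.30 if u.get("recent_search") == "flights" else 0.02,
}

user = {"interests": ["running"], "recent_search": "flights"}
print(run_auction(user, bidders))
# ('shoe_brand', 0.3): the slot is filled in the instant before the page finishes rendering.
```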

And then, websites use a lot of techniques to keep you on the website. They're called dark patterns, and you can look them up, but they do other things, too. I went into the Google privacy settings to find out what was tracking me. If you do that, you'll eventually find out; in my case, I think I had 200 companies tracking me. I'd never heard of any of them. I don't recall agreeing to let any of them see my data. Every single one of them says they're there to help me do something better. That's really nice, but I certainly didn't know about it. There are a lot of questions that arise: do you know what data have been collected about you? Do you know what they say about you or what they say about other people? In the case of our kids, do we really know what our kids are being targeted with and what data they are giving up? There was a survey that says—I'm looking at my screen here—kids get an ad once every 10 seconds when they're online. Just think of that. Just think of how many ads they're getting over the course of a session. All of this data creates what we call a data value chain, and the data feed into the algorithms. Those algorithms are used by platforms to do something. What are all the issues that could arise? Obviously, there's a whole bunch of issues related to the consent and control of the data, how the data are used, and the issues around algorithms. You've heard about bias in algorithms. Bias in algorithms could come from the algorithms themselves, or from the data that enter them, and a whole bunch of other things. Something that I know Owen probably deals with, certainly much more than me, is the concern about how private actors are using our personal data.

There's so much concern about Big Brother and how the government may use our data. It's only recently that people have started to say: what about those social media platforms? How are they using our data? The issues are very broad. I'll finish with this: they truly are profound. They're global in nature. They touch on things like democratic integrity, genocide, radicalization, surveillance capitalism, fake news, and misinformation and disinformation that can create public safety and public health risks. There's what's called affective AI and how algorithms can be used to nudge our behaviour. There's a whole bunch of competition issues related to some of these technologies. Of course, what makes it even more difficult for a policymaker is that all of these areas are interrelated. I'll stop there for my opening salvo.

[All four panels return.]

Owen Ripley: Thanks. Thanks, Bob. Not a small order of things to tackle at all. I think we're going to Sam next to pick up that thread.

Samantha Bradshaw: Awesome. Yes. Thanks so much. Thanks, Bob. Thanks for that overview of the data that really powers the attention economy.

[Samantha's panel fills the screen. As she speaks, a purple title card fades in for a moment on the bottom left corner. It reads "Samantha Bradshaw, CIGI Fellow."]

I like the analogy of the attention economy and these social media platforms as being contemporary empires, but they're empires of our minds. That's something that's always really stuck with me. It's a good way of framing it. The data that you've described is really what they use to fuel the building and expansion of these empires.

I wanted to focus my remarks a little bit, not just on the data and the stuff behind it, but also particularly on the algorithms and the big systemic structures in place that will nudge, persuade, direct users towards certain information. A lot of my work centres around disinformation, fake news, conspiracy theories, and how this content can spread online, and particularly how social media platforms, through their technologies, might enhance or constrain the spread of certain kinds of harmful information. I think one big part of the attention economy has a lot to do with the algorithms and the ways that they'll nudge users towards certain kinds of content. When we think about algorithms and the way that they're designed to keep us hooked and keep us connected, like Bob described very well. They personalize content and tailor content to us so that it's stuff that we already like, that we'll find engaging, that we'll want to look at. The kind of content that tends to also go viral, the things that people tend to look at, tends to be content that's much more negative, content that makes us afraid, and content that makes us angry. Of course, there's content that makes us happy, too, that will go viral. That's why we see cat memes and cute dog videos or "Charlie Bit my Finger" going viral. Although, I heard that that video was recently sold. It's not going to be on the Internet anymore, which is a whole other interesting issue.
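
As a minimal, illustrative sketch of the kind of engagement-weighted ranking Samantha describes: the feature names and weights below are invented for illustration and are not any platform's actual formula.

```python
# A toy engagement-weighted feed ranking (weights and features invented for illustration).
# Posts predicted to provoke stronger reactions score higher and surface first.

def engagement_score(post):
    """Score a post by predicted engagement; emotive reactions weigh heavily."""
    return (
        1.0 * post["predicted_likes"]
        + 2.0 * post["predicted_comments"]          # comments signal stronger engagement than likes
        + 3.0 * post["predicted_shares"]            # shares spread content the furthest
        + 1.5 * post["predicted_anger_reactions"]   # outrage keeps people typing and scrolling
    )

def rank_feed(posts):
    """Order a feed so the highest-engagement posts appear first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"id": "cute_dog_video", "predicted_likes": 120, "predicted_comments": 10,
     "predicted_shares": 15, "predicted_anger_reactions": 0},
    {"id": "outrage_headline", "predicted_likes": 40, "predicted_comments": 60,
     "predicted_shares": 30, "predicted_anger_reactions": 80},
])
print([p["id"] for p in feed])  # ['outrage_headline', 'cute_dog_video']
```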

Back to the point, content that is very emotive and that tends to elicit negative emotions tends to travel much further and much faster, partly because of human nature. It's the stuff that we like. People like conspiracy. We like rumour. We like to gossip and chit-chat. At the same time, algorithms reinforce a lot of those emotions because they're the things that get people typing, that get people engaged and using the platforms. The more time we spend on these platforms looking, scrolling, chatting, liking, typing, commenting, and getting outraged, the more advertisements will be delivered to us. There's this tension here with the business model, where the kinds of content that tend to go viral and keep us on are not necessarily good for us personally. They're probably not good for our democracy either, or for finding common ground and negotiating consensus with one another. I think I'll leave my opening remarks there, but I'm looking forward to more discussion with my fellow panellists.

[All four panels return to the screen.]

Owen Ripley: Thanks. Thanks, Sam. Let's turn to Heidi.

Heidi Tworek: Thanks so much. Bob and Sam have laid this out very well for me. I'm going to start by putting on my historian hat and say that what we're confronting here is, I think, a whole host of old fears about new technologies embedded in new realities.

[Heidi's panel fills the screen. As she speaks, a purple title card fades in for a moment on the bottom left corner. It reads "Heidi Tworek, University of British Columbia, CIGI senior fellow."]

For example, I teach a course on the history of news, and I always show my students a whole newspaper from 1900, where most of it is comprised of reports about grisly deaths and weddings of people who met in parks. If you think about the stuff that goes viral, that basically describes the commercialization of newspapers in the late 19th century, and we find parliamentarians and others also concerned about the way that information going to the masses will create all sorts of protests and problems, and about how information needs to be controlled in different kinds of ways. I'd say lots of these fears that we have are old—even the instructions about what we should put out to go viral are old. You can find in a manual from 1930 an instruction that cats are cuter than dogs, so you should put more cat photos in newspapers. Even the idea that cat memes are better than dog memes—something I fundamentally disagree with as a dog lover—is actually quite old. Yet, these fears, I think, are embedded in fundamentally new realities, and in a couple of different ways.

One is the strength of these companies, which don't simply deal in content. They are doing all sorts of other things that track you in different ways, and their sheer scale and power—the top five major companies are worth more than the entire DAX index of German companies—is on a scale that we didn't see in the past. Another aspect of this is the algorithmic aspect that is so micro-targeted. I want to emphasize that that sometimes has really terrible consequences that go beyond what Bob and Sam have mentioned, and that includes things that are really psychologically harmful. For example, women who have had a miscarriage and previously researched pregnancy will continue to have ads around pregnancy and babies delivered to them. Only recently—this is the first year—have some social media companies allowed people to opt out of receiving notifications around Mother's Day, because, of course, for some people, if you've lost your mother, or you're estranged, or all sorts of other circumstances, being continually notified of Mother's Day is actually something that can be quite psychologically harmful. I think even beyond thinking about the attention economy and the fears for democracy, there are also the ways in which this can psychologically harm individuals, because we simply don't have the option to opt out of some of these aspects of tracking—something that is only now being discussed, often after quite a lot of campaigning and awareness raising.

The other aspect of this that I would want to bring up is this question of the lack of choice. What we actually see is that when people do have a choice, they don't want to be tracked this much. Many of you may have read in the newspapers about the fight around Apple's updates around privacy, which are allowing people to approach this in a different way. Rather than having to opt out, you have to actually opt in to be tracked. What we know so far is that it seems like only four percent of people are actually opting in to be tracked. That tells us that a whole system was constructed on something that 96 percent of Apple users didn't want. That, I think, should give us pause when we think about the way in which this entire attention economy was constructed.

I'll end with just one final thing: to pick up two words that I think were implicit in what Bob and Sam were saying. One is that this is an ecosystemic problem. It encompasses all of the different nouns that Bob listed, which makes it really tough. The second is that it is indeed global in all sorts of ways, which also makes it very difficult, because when we think about what national solutions work, we're also confronting global platforms. Obviously, we often mention Facebook, Google, etc., but there are also platforms like TikTok, for example. We're also not confronting something where all of these platforms have their headquarters in the United States; some of them are headquartered in China and elsewhere. How we solve ecosystemic and global problems is a real challenge that we have to confront, and I think one where we have to bring together interdisciplinary groups like this, but also bring in voices, for example, from the global south, who can often tell us what harms were rendered by these platforms years before they arrived on the shores of North America or Europe. Looking forward to the discussion.

[Owen's panel fills the screen.]

Owen Ripley: Thanks. Thanks, Heidi, Sam, and Bob. I already have a number of questions, but maybe, Heidi, I'll pick up on the point about iPhone users. I'm an iPhone user.

[All four panels return to the screen.]

I've started getting the notification that this app wants to track you: do you want to allow it or do you not want to allow it? It gives the user the ability to make that choice. But it raises the question—you know, Bob started at the outset talking about the economics of the attention economy—what are your thoughts on the ripple effect of that? What begins to happen to the model if suddenly your data is not necessarily paying for the service that you've been used to getting for free?

Heidi Tworek: Yes. I think there are a whole host of questions that arise from that. One is: how effective was micro-targeting versus broad, category-based advertising? That's number one, because the companies say it's really effective, but some academic studies suggest it's not that much more effective than other types of advertising.

[Heidi's panel fills the screen.]

I guess we might see answers to that. The second is that we don't really know, as outsiders, how effective some of these advertising techniques are. I think that's one element. How far is this a company selling advertisers on the idea that "micro-targeting is brilliant and it's going to up the number of conversions"? How far is that not necessarily true? We shall see that unfold. The other aspect of this that's really crucial is the question of how far micro-targeting may or may not have been violating laws around discrimination. We know from investigations, for example, out of The Markup, which is a fantastic media outlet that I'd suggest you follow if you're interested in these sorts of things. We know from The Markup that lots of advertisers around housing on Facebook, for example, were actually contravening HUD—the United States Department of Housing and Urban Development—guidelines on housing, for example by using zip codes as a proxy for race. This is another really crucial aspect to think about in Canada as well. How far was micro-targeting used, perhaps not intentionally, in ways that are actually discriminatory and illegal? That's another question that opens up.

The final aspect of that is: how do we solve all these questions? We need some transparency. We know that the companies don't reveal huge amounts of the data, for secret-sauce reasons, but perhaps also because they themselves are not investigating it—something that I and others have called agnotology, or a love of ignorance. Companies say they're tracking you. They've got all sorts of information. But when we actually dig into what questions they look at, we know that there are some questions they do not investigate. For example, somebody who was working for YouTube was explicitly told not to investigate whether algorithms were pulling people into far-right rabbit holes. Transparency is one way, through regulators, that we can at least start to solve some of these questions, because I honestly can't give you a straight answer as to what these changes will entail.

[All four panels return.]

Owen Ripley: Thanks, Heidi. Bob or Samantha, either of you want to jump in on that one?

Bob Fay: You go first, Sam.

Samantha Bradshaw: Sure. Heidi, I think those are super great questions, even when we're thinking about regulatory responses to them around micro-targeting and putting limits and restrictions on it. When we're thinking about elections, we can pull broad demographic information about people. That kind of broad advertising could still work as part of the business model. When we get down to my very specific interests—that I like yoga and eat avocado toast—and these factors about me go into the advertising that I see,

[Samantha's panel fills the screen.]

I think that's really when people start to become very uncomfortable with this technology. Thinking more strategically about regulating privacy there and addressing this micro-targeting issue is super important. As a researcher and an academic in this space, the calls for transparency, for more data, and for more insight into the ways that algorithms are tailoring, personalizing, or nudging people towards certain content over others—this is a super important question because, like you said, right now, we just don't actually know. We can't actually measure the true harm of these technologies. We can't disentangle the technological problem from other problems that we're seeing within society and our information ecosystems without having access to this kind of data, which is held tightly by the various platforms.

One of the biggest questions is: do certain users get pushed towards more extreme content than others? We just don't know that, because we can't audit the algorithms from this personal standpoint; what I see on my YouTube reel is going to be different from what everybody else here sees. Maybe I, as a person, would be more likely to be pushed down one of these rabbit holes compared to others. We have no way of measuring this and looking at these trends at a whole-society level without more insight, more data, and more transparency into the platforms.

[All four panels return.]

Owen Ripley: Bob.

Bob Fay: I agree with everything Heidi and Samantha just said. I was thinking about something while Heidi was talking.

[Bob's panel fills the screen.]

I worked on a project 20 years ago at the OECD, and the idea was that you start with the question that you want to answer and then you figure out the data you need. It was about how you create statistical models to figure out who's going to become long-term unemployed before that happens, because it's extremely harmful for individuals and for society at large when people become long-term unemployed. The US had started up some work in this area—they called it profiling—and some of the questions that we addressed 20 years ago are just as relevant today. In terms of the statistical models, you could not use age, you could not use gender, and you could not use race, because that would violate the law in the United States. That all seems good, but what do you do? You do what Heidi mentioned: you find variables that are correlated, and that's pretty easy to do. Educational attainment will tell you a lot.

[All four panels return to the screen. Owen sips from a mug and Heidi nods.]

A lot of the questions we're dealing with today, I find, are ones that have been around for a long time, but we still haven't gotten a handle on how to address them.

[Bob's panel fills the screen.]

There's this really interesting project that's been started up by something called The GovLab, working with the OECD and others, called The 100 Questions. It's got five different areas: what are the questions that we need to answer, and, from those questions, what are the data that we need? That's where the transparency comes in: the questions guide the transparency, and they guide the metrics as well. At CIGI, we started something called the Global Platform Governance Network, where we're looking at these issues right now. There are many colleagues in the Canadian government participating in this network. It is a global network of civil servants, legislative staff, and regulators that come together to discuss what countries are doing in the area of social media platform governance, which can be quite broad. When we heard the types of questions that were being asked in this forum, we came up with three working groups. One is on transparency. One is on metrics. One is on how governments actually do research to try and understand these questions. I can go into that later on, but I'll point the listeners today to that. If people in the government want to participate, there's space for them.

[All four panels return.]

Owen Ripley: Thanks. Thanks, Bob, and colleagues. Just a reminder to folks listening: you are welcome to submit your own questions if you like. There's a little hand icon in the corner of your screen, and you have the option to either insert a question or upvote an existing question. Just to remind you about that. Sam and Heidi, you both spoke about the need for additional transparency in this space, along with privacy regulation. Sitting on the side of government, I think one of the criticisms we frequently face is that government has a tendency to approach problems based on its existing organizational structures and where something sits in the government portfolio. Partly, that reflects the incremental nature of public policymaking, but it obviously also reflects existing structures. I'd be interested in your thoughts on whether there is a country out there that's on the right track for dealing with some of these issues, which are obviously more horizontal in nature, and on how governments can pivot to connecting all the dots related to this issue. Why don't you lead us off, Heidi?

Heidi Tworek: I can go first if you want. I think that's the point of looking at this globally. I'm not sure any one country has solved it or will ever solve it. It's more a question of: how do we approach it in a way that we'll be able to deal with upcoming challenges?

[Heidi's panel fills the screen.]

One basic example is a place like Taiwan, which actually has a Digital Minister. That already helps: instead of having this sit in one portfolio, there's somebody who's really thinking about these digital questions across a whole host of different areas. The Digital Minister, Audrey Tang—you can watch an interview with them on the CIGI website—is really thinking about this from a whole host of different perspectives, including personal experience in building and making transparent maps and all sorts of stuff. I think that South Korea is another place to look at. There are definitely lots of things where they struggle, like hate speech, misogynist abuse, and the ways that AI and chatbots can run amok and very quickly become abusive. These are all questions that, for example, South Korea is grappling with, but also in the context of a much more digitized society. I guess my basic point would be that there's no one country that I would point to as having "solved" this. We see some where they've created new portfolios and really tried to coordinate across different ministries. That, in turn, really played out when things like the COVID-19 outbreak happened,

[Samantha's panel appears for a moment. She nods before Heidi's panel fills the screen.]

-because they were able to really take advantage of digital connectivity swiftly and put out rapid government guidelines on a whole host of different channels by working together in a way that most European and North American governments frankly struggle to do. I think there's a lot to learn from other governments there.

[All four panels return.]

Samantha Bradshaw: I think there's a lot to learn as well in terms of when governments have introduced policies or measures that haven't worked or that have had a lot of negative or unintended consequences.

[Samantha's panel fills the screen.]

In particular, designing new laws that tackle content only—dealing with a lot of the issues of the attention economy at this highly visible level of content rather than looking at some of the systemic structures behind why this content goes viral in the first place—is a solution that I don't think works very well and that has a lot of unintended consequences. We often see laws being introduced to target disinformation, fake news, or COVID-19 conspiracies, but then being used as a tool to repress freedom of speech, to repress any political opposition, and to attack freedom of the press. There is this flip side to the crisis that we're currently seeing with social media and the attention economy, and a lot of these questions around content and democracy and speech, where governments are using this as an opportunity to further exercise censorship and repression on their own populations.

[All four panels return.]

Owen Ripley: I guess it begs the question of what the appropriate role for government is in this conversation. Your thoughts, then, Sam, or others: what is the appropriate role for regulation, if any, in this space? If that's not the role of government, then what is? Any takers?

Samantha Bradshaw: This is the million-dollar question here. I could take a first stab, and then, Bob and Heidi, I'd love to hear your thoughts.

[Samantha's panel fills the screen.]

For me, first of all, the role of government in terms of pushing for more transparency, like we talked about earlier, and more data access is crucial, because we can't do good policymaking without good data. If we don't understand the problem, any solution that we propose could have a lot of these unintended consequences. It could have a much greater negative effect than doing nothing if we don't really understand the problem and the extent to which harm is coming from these platforms. For me, that's step number one in terms of what governments can do. Then, I think there's a bunch of low-hanging fruit in terms of policy responses. We already talked about privacy and about data: putting limits, restrictions, and requirements on the kinds of data that are collected and used about people, and providing a framework that gives users more control and more choice over their information. I think starting to control user data there is one of the lower-hanging fruit in the overall systemic problem.

[All four panels return.]

I'd like to hear from Bob or Heidi about questions around antitrust. This is an area that I know less about, but I know there's a lot of policymaking and thinking happening in the space of competition. I'll stop there and let you guys take over.

[Samantha chuckles.]

Owen Ripley: It's always this balance: is the answer to the machine more machine? We see that even with Apple rolling out its feature for being able to stop tracking. It's a technological response to this business model that you've commented on. Bob, you talked about how you actually are able to go into your Google settings and control your privacy settings. We all get prompted now to do our privacy check-up and go through that step. You're seeing these technology companies roll out technological answers to some of these things. I think the question is: is that the answer to this, or is there more?

Samantha Bradshaw: Oh, sorry. Owen, I was just going to say that's a great point too. A lot of the regulation and the responses do focus on the technology rather than on other trends. If we're thinking about the relationship with disinformation and elections and democracy, for example, people tend to focus on the social media companies only.

[Samantha's panel fills the screen.]

We don't focus on the fact that polarization within society more broadly has been increasing since before we had smartphones or computers. The decline in trust in governments and institutions also predates computers and the internet. The decline of political parties more broadly has been happening around the world. These are bigger systemic trends within our societies that don't necessarily have connections to technology. Technology might certainly enhance or exacerbate some of these problems, but they're also pre-existing problems. If we deal with the social media challenge or this challenge around the attention economy, these other problems aren't going to just disappear and go away. As regulators, too, we can think more broadly about solutions that don't just involve technology.

[All four panels return.]

Bob Fay: Yeah, I'd like to reinforce that point. There seems to be this tendency to really focus on the technology and, of course, given the name of my institution with governance in it, we really think the solutions are at the governance level.

[Bob's panel fills the screen.]

But it's easier said than done. Take one of the points that Sam raised on low-hanging fruit and privacy. I agree, but if you look at Bill C-11 that's going through, it seems to have stalled. It's somewhere in the parliamentary process. One of the questions was: why didn't it have a human rights focus? Why is Canada not pursuing a human rights focus, whereas that's what the EU has done as the basis of its legislation? It's essential we deal with this. The way forward is still open for debate. More generally, one of the issues that we raised on the horizontal nature of these technologies is that they affect all areas, and governments are not set up for that. They're set up to work vertically. Of course, we know that there are mechanisms put in place in the Canadian government and elsewhere to get that horizontal cooperation, but how that disseminates throughout the public service is unclear. I've had my experience in those areas, and it's very difficult when it comes to priorities. To pick up on competition: in our trade agreement with the United States, we have open data flows. Economists will say open data flows are what you need to harness the social benefits that can come from aggregated data. You really want to put minimum restrictions on data flows, whether internal or cross-border, subject to some protections. The one thing we also know, going back to something I raised earlier, is that the more data we allow to flow across borders, the more we increase the monopoly power of certain companies that are headquartered in the United States. There are trade-offs there. Who should make those trade-offs? How should they be made? It's an extremely difficult question, and it poses a challenge for virtually all areas of policy.

[All four panels return.]

Heidi Tworek: Yes. I'll jump in with some further thoughts. I agree with all of this. Having studied previous technologies and how they were also used to foment political, economic, and cultural issues, we should be wary of making this just a technology problem. I want to throw out a couple of things.

[Heidi's panel fills the screen.]

One is that, as scholars are pointing out, we have sometimes lost imagination about what the internet could look like and what a public internet could be. For example, people have pointed to the idea of libraries in the late 19th century. That was a form of democratizing knowledge that was public. Yet, it seems almost unimaginable that the government would do that in the early 21st century, which is quite strange. Joan Donovan at the Shorenstein Center said: why don't we just hire 10,000 librarians to help curate the internet, as a thought experiment in how the internet could be different? Ethan Zuckerman, who is at UMass Amherst, has written a new book thinking about things like public infrastructure online. I'd say that's one potential thing for government to at least think about. Why are we so constrained in our imagination, thinking only about how we regulate the companies that already exist instead of thinking—and I think that's the job of academics, right?—about the big picture: what could another internet be? The truth is, there are some companies like that. For all its flaws, Wikipedia is an imagination of what that internet could be. Yet, we only ended up with one Wikipedia and not many. How far was that a result of the policy infrastructure that existed? Is there another policy infrastructure that could lead to more Wikipedias rather than more Facebooks? I think that is a question that we should at least take very seriously. That's a big question. That is not a low-hanging-fruit question, but I think it's an important one if we're not to constrain our imaginations.

Also, of course, companies have lobbyists, including in the United States. When companies lobby for regulation, we just have to make sure we're a bit sceptical about why they're lobbying for a particular type of regulation and not another. For example, is lobbying for a certain type of antitrust or competition legislation actually going to unintentionally lock in the current big players? We do actually have historical examples of that. To give you one concrete example, just to emphasize that we need to think very carefully about some of these things and bring in these historical examples: the Associated Press was obviously one of the big news agencies in the US in the first half of the 20th century, but it was actually a closed franchise system where only some other newspapers were allowed to join. Other newspapers said: this is anti-competitive, we need to have access to this market, the other news agencies aren't as good, etc. That, in the end, led to the US government launching an antitrust case against the Associated Press. The Associated Press lost in 1945 and had to open up its franchise to everyone. You'd think this is a good news story about antitrust, but ironically, what ended up happening is that, within 30 years, all the other news agencies went bust. The United Press didn't exist anymore; INS didn't exist anymore, because everybody flocked to the AP. It's just to say that we have to be a bit careful about assuming antitrust is always going to lead to more competition in the market. We actually find some historical examples where the opposite occurs, unfortunately.

A couple of low-hanging fruit I think we can think about from the government perspective. One is creating due process in thinking about content moderation. I've suggested e-courts as an example of that, to try to get away from the problem that Sam described of how legislation around content has actually led to suppression of speech. What if we created some online due process around content regulation? A second thing to think about is the role of government in bringing civil society into the mix. While the Christchurch principles have been talked about a lot, something that was incredibly notable about them was that they initially only involved governments and platforms. What was the role of civil society? That has changed over the course of the last couple of years. I think we can think about the role of government as also elevating and bringing in marginalized and other voices from civil society who deserve to be heard by platforms but aren't, and who deserve to have a role in saying how these platforms should develop. The role of government doesn't always have to be regulation. It can also be lifting up voices and ensuring that they have a stake, and that we follow the principle of "nothing about us without us." Thinking about the role of government very expansively is what I would advocate for.

[All four panels rejoin.]

Owen Ripley: We maybe have some folks who are listening to this and thinking: this is all very interesting, but these ideas and problems seem super big. "You know, my recommender algorithm recommended The Social Dilemma to me on Netflix, and I watched it. I'm really worried about this." At a citizen level, though, what are some concrete things that folks can do to either better educate themselves on this or just exercise more control over these issues? Any thoughts on that one?

Bob Fay: I wonder if I could maybe not quite answer that first, but go back to governance. One of the areas that the Canadian government is actively involved in is standard setting, through the Standards Council of Canada, but also through something called the CIO Strategy Council, which the Treasury Board works with.

[Bob's panel fills the screen.]

It brings the public and private sectors together to work on standards. Standards are quite interesting because they're things that we don't think about, but they pretty well govern most aspects of our lives in many ways. There are standards over technology, but there are also standards that allow technologies to talk to each other—our iPhones can talk to Android phones and things like that. Standards can also be used to embed our values. That's tremendously important when it comes to technology, because we want to make sure our values are respected and that they can be brought into the standard setting. The Canadian government is actually involved in that, and it's not just the government; the private sector and a whole host of private sector institutions can get involved. We don't want to exclude the private sector. I think there's a tendency to say we can't talk to the platforms because of the vested interests. One of the questions we found in our Global Platform Governance Network is: who do you want to talk to at the platforms? Is it the policy guys or is it the tech guys within them? They may have very different perspectives on, for example, the uses of the technology that they develop.

The government has its directive on the use of automated systems, which is groundbreaking in many respects. There's the algorithmic impact assessment questionnaire, which I think is a really good step in creating mechanisms for responsible use of technology and venues for people to come back and say, "I don't understand this decision the government made. What sort of recourse do I have?" There are a lot of things underway. Once again, it's how you coordinate all of that. There's the domestic level, and then, as Heidi said, there's the whole international perspective on these things, because there's a whole bunch of stuff going on outside of Canada that directly influences Canada, whether we like it or not. The GDPR, the General Data Protection Regulation of the EU, is a classic example that people point to as a standard on the control and use of personal data. But for a country like Canada, if our firms want to do business in the UK and the EU and use personal data, they have to comply with that; otherwise they cannot do business in the EU. All of a sudden, in Canada, we have to follow those rules, whether we like it or not, and they become a de facto standard, and we had no input into the development of the GDPR. My perspective would be that we should have some input into that. But how do you do it? At CIGI, we promoted the concept of what we call the Digital Stability Board, which is essentially a mechanism to get regulators, policymakers, and standard setters to come together globally and have these important discussions.

[All four panels return.]

Owen Ripley: Thanks. Thanks, Bob. Sam, I know you've thought about this question a fair bit—the role of citizens in this. Any thoughts or words of wisdom for folks who are struggling to see how an individual Internet user fits into the equation?

[Samantha's panel fills the screen.]

Samantha Bradshaw: Yes. I think there are a couple of things that people can do to help enhance the integrity of our information ecosystem and also exert more control and autonomy over their own data. First of all, just go through all of your privacy settings on your apps and make sure that they are in line with what you want. I think it's hard right now because there isn't necessarily a lot of choice. I've personally gotten off the Facebook platform and WhatsApp recently. There are other alternatives starting to emerge that are a little bit more privacy-forward when it comes to data. Do a little bit of research, see if there are alternative solutions for you, and then decide whether the trade-off of what you get for your data is worth it to keep these services.

When it comes to disinformation and the spread of other kinds of harmful content online, I think, as citizens, we all have a responsibility and a role to play in our democracies and in creating an environment that is conducive to public debate and to finding and negotiating that consensus we need for our democracy to work. Of course, this is incredibly hard in our post-truth era, where nobody can even agree on the basic facts. That makes it really hard for everyday users to feel like they have some control without just getting angry seeing all of the negative content that we see every day online: all of the anger, all of the outrage, all of the fear, and all of the rumours and conspiracies that are constantly going viral. A lot of the research in this area is starting to show that the best thing users can do, from an individual standpoint, is simply not to share this kind of content. It seems to have the most effect in terms of reducing the spread and the harm of this kind of content and, overall, is the most impactful thing that we can do.

I don't like to put too much focus on the individual, though, and on what we should be doing as people, because there are huge systemic structures in place that create the environments we interact with and within, and that already amplify a lot of the negative content. There is, of course, a role for users, but there is a very important role for the social media platforms themselves to play in fixing a lot of the harms that we're starting to realize are coming from these technologies. A lot of the platforms' responses have been just labelling information, for example. To me, that just puts the onus back on the user to be a literate reader of the information they come across online, instead of the platforms taking accountability for these systems that are constantly pushing users towards extreme positions or pushing them to share conspiracies and disinformation. Thinking more strategically about what platforms could do at the systemic level is really key here, and it's a huge part of the solution to the problem that takes the citizen out of it.

[All four panels return.]

Heidi Tworek: If I can just add something super quickly, which is to underscore this point about the individual user versus the systemic, and I'll give one concrete example about how hard it is. In the EU, there was a whole case around what browsers should be preinstalled on smartphones.

[Heidi's panel fills the screen.]

Because, of course, we all tend to use the thing that's simpler. If Google Chrome is preinstalled on your Android, you'll use it, but there are other browsers that are more privacy-forward, like Ecosia and DuckDuckGo, whatever. This case was about how far the default settings should offer more choice. I think that's one thing we can think about as users: there are almost always alternative services—not always, but often, even with browsers. But you need to have a choice that's relatively simple for people. I think the Apple opt-in/opt-out example shows us that there's a role for providing those choices, and that, when given them, people often do make the more privacy-oriented choice.

There's a lot of this that is really about ease of use. It is very hard to get off, for example, WhatsApp, because a bunch of your family might only be on WhatsApp, and persuading them all to move to something else can be very difficult unless you have, for example, mandated interoperability between one service and another. That is to say, I think there's a bit of a balance here: the role of government, for example in figuring out what a level playing field looks like in terms of default choices, can be another way to think about the role of government in these systemic issues that then impact individual users.

[All four panels return.]

Owen Ripley: Yeah. I think that's right—in one of the questions, Heidi alludes to that. This conversation is focusing a lot on the negative externalities of these platforms, but there are all the positive externalities as well. Many of these companies are very good at designing products and services that help us be more efficient and more effective. In that respect, I was going to ask whether COVID has been a petri dish in some respects in terms of digital policy interventions, as well as digital products and services. We've seen the economy, the government, and the public health response all have to shift to being digital. I wanted to ask—I think, Heidi, you've done some thinking about this or reflecting on this—what have we learned through COVID about the attention economy and how governments have either navigated that space effectively or not so much?

Heidi Tworek: I did a study last year where we looked at the first six months of COVID communications in nine democracies on five different continents.

[Heidi's panel fills the screen.]

We found that governments like Senegal, South Korea, Taiwan, and New Zealand used all of the channels available to them incredibly effectively to meet their citizens where they were at and to get government guidelines out fast. One thing that we also found was that the swifter you got your government guidelines out, the fewer problems you had with conspiracy theories, disinformation, etc. That was also backed up by other big-data studies, which found that the swifter governments got out clear guidelines, the fewer people searched online for quack cures like silver or bleach. We can see that there are actually really positive things that came out of governments that were savvy enough to use all of these channels in different ways. That's everything from using text message systems, to having chatbots, to making sure that when you post on social media—as Senegal did—you bring in people who are really important in civil society, which in Senegal were imams and priests. You could see all sorts of positive ways that governments could use this to inform people. I don't mean this in a propagandistic way, like selling Nike shoes or something, but really using it to tailor that information to meet people where they were at in a forward-thinking way. Those are, I think, some really useful lessons that we can draw on in Canada.

We've also seen places, like B.C., that rarely used things like social media and still had a decently effective response. But it raises the question: how much more effective could it have been if things like social media and emergency text message systems had been used to their full efficacy, as we saw in some other places? I think it's basically just a plea to say that there are ways we can use these tools responsibly to inform citizens, meet them where they're at, and, finally, use them as two-way communication tools. One of the things we saw in places like South Korea or Taiwan was that it wasn't just about using this as a broadcast system; it was also about trying to understand: what questions do people have? What do we need to clarify? How can we improve these sorts of things? There's actually more of a two-way conversation. Those are some of the things that one can use social media for, even despite the many potential harms that come with it.

[All four panels return.]

Owen Ripley: Bob or Sam, any insights stemming from COVID where you've sat?

Samantha Bradshaw: I think the coronavirus pandemic has really shed light on some of the real challenges with content moderation on social media.

[Samantha's panel fills the screen.]

In particular, just the way that anti-vax groups or vaccine-hesitant narratives spread online, and the real limits of platform power to moderate these issues. In my work at Stanford, we have the Virality Project, which is tracking, in real time, a lot of the vaccine conspiracy theories that are spreading across social media. We work in partnership with the platforms and with public health agencies to combat a lot of this rhetoric online, in real time. A lot of the stuff that we're seeing that platforms have a hard time actioning against is content that's not falsifiable. It's very easy for platforms to label or take down content that is explicitly false, but as soon as people start using "I heard from a friend" or "a friend of a friend of mine experienced this," then all of a sudden it becomes not falsifiable and is much harder to take action against. Asking questions about the vaccine is a clever way to spread a lot of hesitancy and scepticism about its safety without actually making a statement that is falsifiable. How platforms take action against this kind of content raises a lot of really interesting and difficult free speech questions. For me, what we've learned from the coronavirus pandemic is the real challenge of addressing these issues at the content level.

[All four panels return.]

Owen Ripley: I was going to ask you earlier, Sam. You alluded in your opening remarks to the Science article from a couple of years ago that talked about how much more quickly falsehood spreads compared to the truth.

[Samantha nods.]

Humans are wired to be more attracted to that kind of content, and that plays into this model. On the one hand, the internet is a distributed network; that's partly what makes the internet so great. I think the challenge is: how do you guard against these outcomes? We saw, after the events at the US Capitol on January 6th, that social media platforms are capable of making some of these hard decisions and moderating some of these practices. At the same time, you saw the migration of users to different platforms, which then raises the whack-a-mole problem. I wanted to ask you—and this is for all three of you—is there a platform that you've seen out there, or a case study of a platform that is wired differently, countercultural in terms of how it is wired, that presents the possibility of something else being possible? Have any of you seen something like that?

Samantha Bradshaw: I think Wikipedia is one example of a different platform with a different model that just works very well. Of course, it still has its challenges, but in terms of its structure and its governance, it's very effective at keeping a lot of misinformation and disinformation off of the platform.

[Samantha's panel fills the screen.]

Reddit is another interesting model. Reddit has some high-level guidelines in terms of what can happen on the platform, high-level meaning things like no incitement to violence and no hate speech. Every board has its own set of community moderators who have further rules and moderate the content themselves, depending on the board, the topic, and the kinds of people that are involved. That's another really interesting model that doesn't get talked about a lot. When we think about the governance of social media platforms, we tend to focus on Google/YouTube, Facebook, and Twitter—the big three—but there are other models out there that work in various ways, that are more effective and provide more control to users, but that also have their own trade-offs too.

[All four panels return.]

Bob Fay: I would just add to what Sam was saying: what she basically described is the governance. It's not the technology; it's the governance of the technology. I think that's really what stands out from the examples that she raised.

[Bob's panel fills the screen.]

I think we do get caught up in the technology because it's cool; it can do all these interesting things. But the governance of this technology is completely under our control. I think sometimes we forget that, and sometimes for good reason, because technology moves fast and governance mechanisms tend to move quite slowly. But I think there are ways to speed that up, and they're being used in some sectors of the economy. We can create structures to allow regulators to test out new things. It's being used in the financial sector. It's being proposed in the UK to have e-sandboxes, as they call them, as part of the duty of care responsibilities they're going to put in place. I think there are a lot of things that can be done, and I think the focus has to be on the governance.

[Heidi's panel fills the screen.]

Heidi Tworek: I was going to say exactly the same two examples as Sam, but also to say we have to be careful in our expectations of platforms too. Something like Wikipedia in many ways replicates many of the problems we have in society: it is overly dominated by male editors, there are many more articles about men, and articles about women get rejected, including, very famously, the woman associate professor who was later awarded a Nobel Prize but had been rejected for a Wikipedia page multiple times. I think it's a good example of how you can have excellent governance structures, but you can't necessarily expect an Internet company to sit outside of other social and cultural forces. I know that, of course, Wikipedia is working very hard to change this in all sorts of different ways.

I have a couple of things to say. One is that I think this points us to the way in which scale makes it hard to solve any of these problems. One of the reasons why Reddit still vaguely has a handle on this is that it operates only in English. Where does Facebook get in trouble? When you start opening up in places where you have zero content moderators in a language, whether that was in Myanmar, or also in Ethiopia, where they are just pulling in some people in Amharic but nobody, as far as I know, working in Oromo, which is one of the major languages spoken in Ethiopia. I think another thing to bear in mind is that there's a complex relationship between scalability and the ability to govern. Often, what we see is that platforms start to get out of control when they go big, because they don't have enough infrastructure in place to be able to deal with these questions. We've just talked about this from the content perspective, but you could move on to thinking about fraud on Amazon's Mechanical Turk; it's almost any of these. The minute you get to a very large scale, you start to have a lot of problems, potentially unexpected ones. I think that's another way to think about this. One: how far can platforms escape from or undo more dominant social forces? It turns out that's hard. Two: what does scalability do to all these questions?

[All four panels return.]

Owen Ripley: You started, Heidi, by saying that these are old fears embedded in new technologies. I think I agree with that, but I think that scale problem is really the huge challenge: the power of these companies. The fears may be old, but we've never before been at a moment in time where a message or piece of content can have such powerful distribution within hours. I think that's really the challenge. As a historian, have there been other moments in history where you've seen a public policy response that we can look to? I think all three of you alluded to antitrust. Is that the response you would point to if you're looking for a previous historical example of how to deal with the problem of scale?

Heidi Tworek: Yeah, depending on the problem we want to solve, there's a different history that we can learn from in some way, a usable history. I do want to correct one thing, which is the idea that this is the first time we've had things go so fast, that we've never dealt with this before. Actually, in a weird way, things were much faster in the early 20th century. I have one example that I look at in the book behind me, News from Germany, which is the actually completely false statement that the Kaiser of Germany had abdicated on the 9th of November 1918. I traced, minute by minute, how long it took for the announcement to get onto the streets. It took 10 to 15 minutes. Newspapers would print extras. It would go out over telegraphy. Cities were very closely intertwined. Actually, in a weird way, because you had newspaper boys who would run around the streets shouting things like "extra, extra, read all about it," announcing what had happened, that was not so far off from a couple of minutes on Twitter. You couldn't really escape it if you were sitting inside a building and heard someone shouting outside; you couldn't escape it in the way that you can today by turning off your phone. So even these accelerated timescales are less clear-cut than we think, and I think it also points to the fact that these things are disjointed. What I'm describing is a world where that was true for people living in cities; it absolutely wasn't true for people living outside of major cities. Today, many people are connected to the internet, but lots of people, billions, are still disconnected, or they're only connected via something like Facebook. I think it teaches us to be a little bit careful in those assumptions and to think about things like the physical infrastructure of who is actually connected and who is not. That's, I think, one thing that we can draw from: really thinking about the infrastructural question, which is also a really crucial one in Canada, oft debated.

There's also harm from not being connected to this, harm in terms of where you can live and what you can do in your life. A couple of historical things I might just point to: we have had moments in the past where public policy makers realized that a disaster required them to work together and create technologies that are actually interoperable. The classic example here is the Titanic disaster. When the Titanic disaster happened, there were two types of wireless operators. One was Marconi, which wanted to create a monopoly by having Marconi devices that would only speak to Marconi devices. There were other wireless devices as well. When the wireless operator on the Titanic was sending out distress signals, those could only be heard by some of the ships around, not everyone. Potentially, more people could have been rescued had this wireless been more open. Actually, a radio conference with policymakers right afterwards mandated interoperability. That's why you had wireless, and then radio sets, that could actually speak to each other. I think we've had some Titanic moments, and yet we haven't yet come together. I'm hoping we don't have to have another Titanic moment before we actually invest in the Global Platform Governance Networks that Bob and CIGI are leading.

[Owen's panel fills the screen.]

Owen Ripley: We have a question here about the fact that we often talk about these platforms as the new public square, the digital square, while recognizing that at the end of the day, they are, for the most part, private service providers.

[All four panels return.]

Obviously, that's the tension that I think is at the heart of today's conversation. I did want to spend some time, as we near the end here, recognizing that we will have folks from a variety of different business lines listening in: program officers, admin folks, and some who don't directly work in this space. For a public servant who is not directly implicated in the policy work on some of these issues but who is thinking about their business line, any thoughts on how they should approach this digital square and these issues?

Bob Fay: I'm just looking at the questions. There are some very good questions here, and one of them is on user consent agreements.

[Bob's panel fills the screen. As he speaks, a purple title card fades in for a moment on the bottom left corner. It reads "Robert Fay, Centre for International Governance Innovation."]

Has anyone ever read them? How many do you read through every day? There's a clear role for standards here: come up with a standard template that the companies have to follow. As for standard setters, the government could come in and just mandate it, or you could work with the private sector. There are many ways to put them in place, but we do need something more standardized. One of the things that I've been thinking about is that a lot of the focus right now is on regulating the outcomes of the business models and much less so on regulating the business model itself. Regulating the business model would imply that competition policy tools need to be used, for example. The danger with just using competition policy, and some of the examples that could be raised, like the EU court case against Microsoft and the US one as well, is that these can take years and years to resolve, and by the time they get resolved, all the bad stuff has already happened. There are tools that can be used by competition authorities, interim measures and things like that, that can be brought in to help deal with it. We're in this world now where we have to actually do both. We have to regulate the business models more than we have, using a set of tools, and we also have to think about those harmful outcomes and, as we address them, not destroy all the good stuff that comes with them, too. These actually are all areas for government action. There are different approaches within them: how Australia decided to handle news versus how the EU thinks about handling news. We'll see where Canada lands on that one. There are examples out there that we can draw upon as we make our own decisions. I know you're already doing that, but it's just for the broader audience to know that those exist, and a place like CIGI is where we can add value too, because that's the nature of our work. Civil society can help out. As Heidi said, we want to bring civil society into these discussions because they will bring particular perspectives that we may just not be aware of, given the nature of how we do our work.

[All four panels return.]

Owen Ripley: Sam.

[Samantha's panel fills the screen. As she speaks, a purple title card fades in for a moment on the bottom left corner. It reads "Samantha Bradshaw, CIGI Fellow."]

Samantha Bradshaw: Owen, going back to your question, and looking at one of the questions here about online platforms playing some of the more traditional roles of infrastructure, like news media and libraries, and thinking about this tension between public versus private spaces. One way of thinking about this comes down to a debate that we've had for a long time in Internet studies and the Internet policy community more broadly, which is whether or not social media platforms should be considered news providers, given that they have an important role in curating certain kinds of news content, and whether, if that content isn't fair, balanced, equal, or representative, they can then face some kind of punishment, whether it be fines or whatnot.

There are a lot of challenges with framing platforms as simply news providers, because they do more than that. Traditionally, this debate was framed as intermediary liability: whether platforms should be liable for the content that appears on their platform. Take Google search, for example. The way Google search works is that it crawls the Internet and essentially organizes everything for us, so that when we do searches, it delivers content to us based on previous people's searches, based on keywords, on the various web pages, and whatnot. If Google search brought some kind of hateful or harmful content to the top when a user was making a search, and if Google were held liable for that content, Google search as it exists today wouldn't be there, because they wouldn't be able to crawl and archive the entire web without content that other people create, which might be harmful, accidentally making its way up into the search results. There's a whole industry that tries to game search engine algorithms to get certain content to the top of Google search. This intermediary liability question is very important for platforms in some senses, because the content they are sharing is user generated. Holding them liable might have other unintended consequences for the benefits that we as users get from the platform in terms of having a working, awesome search engine. Thinking about this tension between public and private infrastructure, I think we should move the debates away from these questions around intermediary liability and towards what Heidi was talking about earlier in terms of imagining a different kind of public infrastructure, whether it's an Internet curated by librarians or a whole new model in general and what that could be. I think that's where we need some creativity and ingenuity, and to disentangle the private from the public a little bit more.

[All four panels return.]

Owen Ripley: I'm curious. If we think of government as a service provider, and obviously governments are big service providers, and I would include municipal and provincial in that as well, I guess my question for this table is: is government doing a good job of leading by example, in terms of being more transparent about the data that goes into decision making? Are there examples you can think of, and again, it doesn't have to be federal, but are there levels of government in Canada that are doing a good job of putting their money where their mouth is and living by some of the principles that you've set out, if we think of them as a service provider?

Heidi Tworek: This is a fascinating question. Also, thinking about COVID, where so much of the debate has been around: how much transparency should you have around COVID cases?

[Heidi's panel fills the screen. As she speaks, a purple title card fades in for a moment on the bottom left corner. It reads "Heidi Tworek, University of British Columbia, CIGI senior fellow."]

How many dashboards, etc.? We've also seen the downside of transparency when people have been accused of being "COVID carriers". There was a particularly terrible case in the Atlantic provinces where this also intersected with racialized abuse, because a Black doctor was accused of being the person who had brought in COVID-19, even though it turned out that he wasn't. I think one thing to bear in mind is that transparency can be tremendously important, but there are also potential downsides that can be very problematic in terms of leading to stigmatization. I think figuring out and really working through where that balance lies is going to be crucial for governments moving forward. Sometimes that has to do with volume. How do you present aggregates? And so on. It goes back to the point that Bob made about thinking about this 20 years ago. I think there's a lot that we can learn from how different provinces approached this and what databases people wanted, because in some cases, collecting certain types of data really heightened the ways in which we understood that racialized Canadians were bearing the brunt of COVID-19. Sometimes that didn't lead to the policy changes one would have needed to actually implement things. This brings me back to the great piece Sun-ha Hong wrote for CIGI about why transparency isn't enough.

Transparency means you know something, but then you have to be able to do something about it. Just because we know something, it doesn't mean that we enact the policy initiatives needed to solve it. I think that's another thing we should bear in mind. I just want to add one quick thing about the question you asked before, Owen, which is: why should I care if I don't have an internet portfolio? You never know when your portfolio is going to become the hot spot. Which public health official, if you'd asked them three years ago, would have thought that they would be at the centre of online death threats, abuse, and praise in some cases, but also all these other things: conspiracy theories, etc.? They couldn't have predicted that. Yet there they were, finding themselves at the centre of this storm in all sorts of ways. I think this is one reason why it's important for anybody, regardless of their portfolio, to understand these dynamics. You can think about: how do we get information out there? What are the ways it works? What would we do if we were confronted and drawn into the centre of one of these storms? Those are some of the reasons why I advocate that no policy portfolio is exempt from potentially being the one that gets picked up next week in some way.

[All four panels return.]

Owen Ripley: Any thought on either of those questions, Bob?

Bob Fay: I fully agree with Heidi, and it just reinforces the point that you have to bring a multi-disciplinary approach to answering these really tricky governance questions. I'm an economist. We have a historian. I'm not sure what everyone's backgrounds are, but you really need the collective voices to come together, because that's the only way we're going to really understand how these technologies are diffusing. Then, we can figure out the transparency and the metrics and things like that that are essential for governance. It's very tricky.

[Bob's panel fills the screen.]

There are no silos anymore. Everything's interconnected, but that creates challenges.

I just want to raise one other point too, because when COVID really did come, there were a lot of questions about the data and how the data could be used for good. For example, we can collect data that really can help us understand how the pandemic is ripping, or rippling, through different communities, depending on the community. Yet we know it can be used for harm as well. We have a statistical agency that's well versed and knows how to do all this stuff. We have a data trust in Canada, called Statistics Canada, that knows how to manage really sensitive information. I think people might be surprised about the level of detail the statistics agency knows about us, and for good reason, because it's one input into creating good policies. There is a lot of talk about the role of data trusts, and there are different structures that can be used to manage data. They all have their strengths and weaknesses, but we have expertise in this country to do that. We've got open data. We've got the directive on automated decision-making systems and the algorithmic impact assessment tool. There's a whole bunch of things being done that our public servants have come up with, undoubtedly through good consultation, that are being used. We just have to figure out how to do more of that.

[All four panels return to the screen.]

Owen Ripley: I know 90 minutes always seems like a long time, but it has gone quickly. We have just a couple minutes left. I'll just see if there's anything else you want to throw on the table in a minute or less that you haven't had a chance to raise or put out there. Heidi, why don't we start with you? Any final thoughts?

Heidi Tworek: One thing I think we can do in terms of reframing our mindset is to remember that for Silicon Valley and other companies, including TikTok, the motto is "move fast and break things."

[Heidi's panel fills the screen.]

The government is the opposite. One way we could think about this is: what is the role of government in putting in place things like precautionary principles? Bob's example of the algorithmic impact assessment is one of those. Social media companies don't do that; they A/B test on entire nations. It has dramatic consequences if you're an independent media outlet in Serbia and you suddenly wake up one day and nobody's looking at your Facebook page because Facebook decided to A/B test on you. That's, I think, one way we can think about the role of government: where does it need to put in place precautionary principles for companies that have moved fast and potentially broken a lot of things?

[All four panels return.]

Owen Ripley: Sam?

Samantha Bradshaw: I'll add on top of that to say that maybe more procedural accountability is also needed, and rules about the rules, in order to put up those roadblocks and barriers so that companies don't just have an open highway to drive down and move fast and break things.

[Samantha's panel fills the screen.]

If there is more procedural accountability, rules about the kinds of things that can go into terms of service agreements, more rules about the ways that algorithms are audited, and certain principles instilled around technology, design, and use, I think that's another good route to slow down fast-moving technology, so that we're not continually growing at scale without any protection for, or thought for, the users the technology will be affecting.

[All four panels return to the screen.]

Owen Ripley: Bob.

Bob Fay: We're not in this alone. Every country is dealing with these issues and encouragingly, so is the United States, where these platforms are headquartered.

[Bob's panel fills the screen.]

We have our domestic issues, and then we have the international dimension layered on top. You know, there's a big opportunity for Canada to lead here. I think we're known as a country with strong values, and values should be determining how the technology is used. There's a tremendous opportunity for our public service to continue the good work it's doing and to bring an international scope where it's really needed.

[All four panels return.]

Owen Ripley: I think that's a good note to end the conversation on, Bob: values driving the policy. This has been a real treat for me to moderate this discussion. I'd like to thank all three panellists, Heidi, Sam, and Bob, for making time in your busy schedules to do this. I know it's always a little bit weird in these Zoom calls; we see each other and don't see the multitude that is out there, but I will speak on their behalf and thank you.

[Owen's panel fills the screen. A purple title card fades into the bottom left corner for a moment reading "Owen Ripley, Canadian Heritage."]

For those who are listening, I draw your attention to our other events that may be of interest to you. The Canada School of Public Service Digital Academy will be hosting an event on Embracing Digital Disruption this Thursday, May 27th. If that's of interest, I encourage you to check it out and sign up. The final event in the New Economy Series will be on Thursday, June 10th. It will look back on the key themes from across the series, do a bit of a wrap-up, and think about their implications for the public service. On that note, I think it's more or less afternoon from coast to coast to coast now, so I will wish you all a good afternoon. Thank you very much for tuning in. Bye now.

[The chat fades to the animated white Canada School of Public Service logo on a purple background. Its pages turn, closing it like a book. A maple leaf appears in the middle of the book, which also resembles a flag with curvy lines beneath. The Government of Canada wordmark appears: the word "Canada" with a small Canadian flag waving over the final "a." The screen fades to black.]
