Interviews

Some of Frank Pasquale’s ‘New Laws of Robotics’: “Robots should not fake human characteristics. AI should not intensify arms races”

Image: Martin Kraft (photo.martinkraft.com) License: CC BY-SA 3.0 via Wikimedia Commons


Movies such as I, Robot or Ex Machina have depicted a dystopian future of robots threatening humanity. We are now witnessing rapid, massive transformations in Artificial Intelligence and its applications in different fields. Do they threaten human obsolescence? Is the process of technology dissemination developing fairly? What about the unknown sides of data collection? What about regulating AI? How can robots truly serve humanity, and to what extent? All these questions and more are addressed by Frank Pasquale, who talks to AIKA about his new book “New Laws of Robotics”, AI in the age of COVID-19, AI regulation, and other relevant topics.

Frank Pasquale is an expert on the law of artificial intelligence, algorithms, and machine learning, and author of New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press, 2020). His widely cited book, The Black Box Society (Harvard University Press, 2015), developed a social theory of reputation, search, and finance, and promoted pragmatic reforms to improve the information economy, including more vigorous enforcement of competition and consumer protection law. The Black Box Society has been reviewed in Science and Nature, published in several languages, and the fifth anniversary of its publication was marked with an international symposium in Big Data & Society.

Pasquale is Professor of Law at Brooklyn Law School, an Affiliate Fellow at the Yale Information Society Project, and the Minderoo High Impact Distinguished Fellow at the AI Now Institute. He is also the Chairman of the Subcommittee on Privacy, Confidentiality, and Security of the National Committee on Vital and Health Statistics at the U.S. Department of Health and Human Services.

  • In your new book “New Laws of Robotics”, you highlight the importance of creating teams or “partnerships” of humans and robots in different fields. Tell us more about the importance of such partnerships and the challenges that stand in the way of their success. What are possible solutions to overcome those challenges?

Let me give some examples from the first few chapters of the book. In medicine, there’s a really interesting set of partnerships developing between nurses and robots. One of these involves the Robear, a robot designed to help nurses lift patients, especially heavier ones, from the bed. This is a really important innovation, because a lot of nurses have orthopaedic problems from lifting extremely heavy patients, or patients who are very vulnerable and need to be lifted very carefully. The robot is designed to enable a transfer of the patient from, say, one bed to another, or from a bed to a chair, without demanding excessive physical exertion from nurses. This, I think, is a very good example of the first new law of robotics in my book: AI and robotic systems should complement professionals, not substitute for them. It is a relatively narrow and well-defined task; the nurse is always present and brings in the robot to assist. I think we are going to see more and more examples of robots in routinized tasks, for example, carrying drugs around hospitals.

Now, in terms of challenges, there clearly are challenges with this type of AI: is it too costly? Does it get in the way? Can we include more and more tasks in a robot like Robear? There are going to be really interesting questions for the future. Some AI developers want to build very extensive AI systems that are not just doing manual tasks but are also taking on roles like care, trying to offer something like empathy, or at least to look empathetic. So you can imagine a robot that tries to look sad when a patient is in pain, or happy when a patient, say, takes a few steps beyond what they normally would.

That is the situation where I think the robot has gone beyond complementing a professional to substituting for them and, more importantly from my book’s perspective, counterfeiting humanity: the robot is faking feeling. What I mean by that is that the robot is mimicking human emotions, even though robots can’t actually feel them. That is a disservice to patients and to nurses, who as professionals are trained to expressively connect with patients and empathize with them. Persons can authentically do that because they have experienced pain and disappointment in their own lives, and also joy and a sense of accomplishment. A robot cannot.

So the first two laws of robotics in my book are that robots should complement professionals rather than substitute for them, and that robots should not mimic or fake humanity. Mimicry, fakery, and counterfeiting are three terms that I use in the book and try to define very carefully, because I think the idea of counterfeiting has a resonance with counterfeit money that is appropriate here. My fear is that we will have robots and AI systems that fake human emotions and try to claim our attention the way human beings can. That is like living in an economy where bad money drives out good, where fake money, once in circulation, reduces our faith in the value of existing money. If robots or machines fake human capacity and empathy, that will diminish our valuation of genuine human empathy, or we will be confused about when it actually exists and when it is mere mimicry of humans, or, worse, humans mimicking the machine mimicry of humans.

Other sorts of partnerships could be in the military, where lots of robotic systems are being developed. There is something called the Octoroach, a robot designed to act like a roach but with eight legs, hence the ‘octo’ in its name. It can crawl into buildings and surveil things. There are drones that could become autonomous; some could launch missiles on their own, a form of autonomous killing machine. There are killer robots that automatically fire upon individuals. So, for example, in particularly contested war zones, you can have a robot controlling a machine gun that, via machine vision or machine hearing systems, senses someone coming who looks or sounds like an enemy. Those are all examples that I find troubling, primarily because they violate my third new law of robotics: that robotic and AI systems should not intensify arms races. This is a literal arms race; once one nation has a fleet of autonomous killer drones, other countries are going to develop their own, and that leads to ever-increasing investment in machinery that could have vastly destructive consequences. So, that to me is a really critical problem of AI, and something we need to address.

The first law in the book, that a robot complements a professional, would dictate that any military robot be controlled, or at least controllable, by a person. But we also need to be sure we are in an environment where such control can be maintained. There are military futurists who worry that if we don’t join the arms race, we lose the ability to act fast enough to counteract an attack. From this thinking developed the idea of a push-button war, and I am trying to help us avoid that! I do not think we want to live in a world where some great power, or even a lesser power, develops a robotic system so fast at attack that everyone else has to have robotic defences to countervail it, which in turn become targets for new forms of attack, and so on.

So that might be an example, in response to the question, of the challenges that stand in the way of successful partnerships. It’s fascinating to think about success in a military context: what is it? This was the hardest chapter of the book to write, because from the perspective of a country’s military, being successful means being so intimidating to other countries that they don’t even try to fight you. But that certainly cannot be the definition of success for the world! Success for the world has to refer to something more like multi-polarity: the idea that there are multiple poles of power in the world, not just a few countries with the power to intimidate or destroy everyone else.

  • Tell us about your favourite use case when it comes to “partnering with technology”. Why do you see it as a successful example?

My laws favour partnering, as opposed to substituting robotics for professionals. I think one of the most successful examples I see happening right now in medicine is doctors who are pattern recognizers working with artificial-intelligence systems to avoid errors. The idea here is that you still have dermatologists looking at skin abnormalities. Say there’s a mole on someone’s hand: the dermatologist will examine that mole and diagnose it as a melanoma or not, based on their existing expertise and their knowledge of the medical literature. But there is sometimes this fear at the back of their mind: what if it does not look like a melanoma but is one? Say they are 80% sure it is not, but maybe it is. Is a biopsy in order? That may be painful and inconvenient for the patient.

I think that in the future you are going to see more and more scans being done by machine vision systems that will give people a much better sense of how likely something is to be a cancer, to help specialists avoid errors. That, I think, is a very important side of AI: it could do a lot to avoid error and help a lot of people in dermatology, pathology, radiology, all those forms of pattern recognition, and help physicians feel confident that they won’t be sued for missing an unusual form of cancer. It is just as AI in driving is becoming much more successful as a preventer of error than as a driver itself. Of course, I expect that driving will eventually become totally automated, but I do not expect the same result in medicine, precisely because in medicine AI is only one information input that needs to be understood and applied by a responsible professional.
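To make this “second reader” idea concrete, here is a minimal sketch of how a classifier’s probability estimate might be used to flag cases where the clinician and the model disagree. The function name, threshold, and numbers are hypothetical illustrations, not anything from Pasquale’s book or a real clinical system:

```python
# Minimal sketch of AI as a "second reader" in diagnosis.
# All names, thresholds, and numbers here are hypothetical.

def flag_for_review(clinician_suspects_cancer: bool,
                    model_probability: float,
                    threshold: float = 0.5) -> bool:
    """Return True when the model's estimate conflicts with the
    clinician's judgment strongly enough to warrant a second look."""
    model_suspects = model_probability >= threshold
    return model_suspects != clinician_suspects_cancer

# Example: the dermatologist is ~80% sure the mole is benign,
# but the (hypothetical) model assigns a 0.62 melanoma probability.
if flag_for_review(clinician_suspects_cancer=False, model_probability=0.62):
    print("Disagreement: consider biopsy or specialist referral.")
```

Note the division of labour: the model never decides; it only prompts the responsible professional to look again.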

Of course, there is the question of whether the AI is also wrong! But in general, my hope is that these will be very powerful ways of ensuring less error in the medical system, which is good, of course, because a lot of people die of medical errors. We want to minimize that as much as possible. I think there was a medical report in 1999 that estimated over 90 thousand deaths in the US per year due to medical errors, and we haven’t really done that much better since then. So the important question becomes: how do we use the best technology to do better?

I also think that in education there are some good partnerships, where you have robots that can teach children lessons that are not available from local teachers. And that’s particularly important when you think about younger children. For instance, imagine a family that wants its young child to learn Chinese, but doesn’t know Chinese, and neither does anyone around them. In this case you can have an interactive, entertaining, well-designed robot teach them Chinese. That’s really important as a new opportunity.

But I think that this is also going to be a situation where we don’t want to be substituting for the teachers themselves. Because personal interaction is constitutive of good teaching—there has to be a responsible person interacting with students, mediating between them and all the technology that could help or hurt, aid or parasitize, interest or distract them.

There is also a question of democracy, of maintaining diverse priorities and ways of knowing in society. Teachers are going to teach subjects where they have particular expertise or a particular viewpoint, and they provide a socially interactive model for all the students. But they can’t do it alone; they certainly won’t know all the languages of the world, or other, more niche interests in math, arts, coding, culture, history, the social sciences, and so on. In all those areas AI and robotics can be incredible supplements, and the teacher can also stand as a quality evaluator, helping students and their families know which AI is most useful and which is less so. So I think that’s an important example of partnership with technology.

  • In the introduction to “New Laws of Robotics”, besides raising questions about making the best use of AI through “robotic and human interaction” within a regulated framework, you highlight the importance of democratizing decision making rather than leaving it in the hands of the powerful few. What do you mean by this? How can such democratization be achieved?

I am going to use the example I just gave of teaching robots: teachers and bots or AI that could be used to teach certain lessons. Imagine that we go much further than what I just described, in a society like the US with, say, 10,000 teachers of American history in public high schools. Then school boards say, “you know, we are taxed way too much to support these 10,000 teachers of American history, so let’s just have a teaching robot with one prescribed history course, and we are going to roll that out to all classrooms, and it’s going to be just one history class for the whole United States.” I think that would be a very troubling development. There are people in different schools, with different backgrounds, with different ideas about what’s important in history and what’s not. I want to see all of those people diversely teaching history throughout the country; I don’t want everyone to receive the same version of history.

Of course you could say that we could try to program all of them into some robot that has 10,000 different types of teaching in it, but I still think that would miss the point of democratization, because part of distributing power and expertise is ensuring that there are people who have some control over their daily life, their own part of the world. This idea of having control over some corner of life is one reason that so many more people support a job guarantee (or at least job-supporting government programs) than support universal basic income.

Going to work in any particular context gives you some level of control. Even when the boss is controlling, you still have a degree of control or autonomy in your position, and you have socialization in the common project of the workplace, as part of a larger group of workers. So I think that is part of the key to democratization in the future of automation policy: we want human beings to be able to help govern their workplaces and to govern the development of technology in them.

You see this idea in Elizabeth Anderson’s book ‘Private Government’. She has described the problem we have today, that so many workplaces are ruled by bosses with an iron fist; it’s like a dictatorship, and she says we need to democratize workplaces. I think she’s right about that, but I also think she underestimates how, even in very controlled workplaces, there is still some room for autonomy. Even a night assistant manager at a drug store, for example, might have a chance to get to know other workers there, arrange the store in a certain way, try to find ways to deliver packages or otherwise accommodate a shut-in person, and so on.

There are feelings one has in meaningful work, of participation in the world. They may exist only partially or imperfectly in many places. But that is also part of what the book addresses: that we should listen closely and try to understand the experiences of people in all walks of life. Moreover, law and policy can cultivate the feeling (and reality) of participation, autonomy, and governance. When jobs include more technology, workers should have some say in how that technology is integrated and how it is designed.

  • Transparency is a term often used when addressing the ethical dimension of Artificial Intelligence technologies in different fields. In your book “The Black Box Society” you present transparency as an essential beginning towards giving users control over the use of their data. However, there is an argument that full algorithmic transparency is complex to achieve, for instance for economic reasons, such as firms not wanting to reveal their technology to competitors, as well as the complexity of sharing technical information with non-technical users. In light of this, let me ask you:

How would you define transparency in this context?

Do you believe that transparency is the most important ingredient to guarantee accountability?

How far is this area currently addressed in related laws and regulations?

I think that, with respect to transparency, chapter 5 of The Black Box Society has a good chart showing a spectrum of both the timing and the depth of transparency. You can make something completely transparent immediately, or you can wait years and years to make it only partially transparent, and there are many points in between those two poles. My argument with respect to trade secrecy, one dimension of the question here, is that even if trade secrets are valuable right now or for a few years, there must be some sort of disclosure. That’s the lesson of patent law: you disclose the information, and in exchange your right to exclude others from practicing it is protected.

Now, transparency with respect to time and scope is about process. The substance here includes transparency with respect to the data, the algorithms, and their uses. Some say that it’s impossible to do much about AI and big data transparency once systems reach a certain level of complexity. But I have a piece on the LSE (London School of Economics) blog called ‘Bittersweet Mysteries of Machine Learning’, where I say that at a very minimum, even in systems that people call completely unexplainable or too complex for anyone to know how they work, we should still be able to demand to know what the sources of data are, what data is being fed into the system, and what the outputs are.

There may be a black box in the middle, but we still deserve to know what data is going in and what inferences or data are coming out. Now, when it comes to the centre of things, I think one of the issues is that, if we are dealing with something so complex that there is just no way to narratively describe it, to explain its algorithms in terms comprehensible to human beings, then we need to think very deeply about all the ways in which we do not want to allow that sort of AI to affect human beings’ life chances, how we do not want it to affect classifications, rankings, and evaluations of people.
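As a minimal illustration of that input/output demand, one might wrap an opaque model so that everything entering and leaving it is recorded for later review. This is a hypothetical sketch, not a real auditing standard; the wrapper and log format are invented here:

```python
import json
import datetime

def audited_predict(opaque_model, features: dict,
                    log_path: str = "audit_log.jsonl"):
    """Call an opaque model while recording its inputs and outputs.
    The model's internals may be a black box, but the data going in
    and the inference coming out are preserved for later scrutiny."""
    prediction = opaque_model(features)
    record = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "inputs": features,       # what data is fed into the system
        "output": prediction,     # what inference comes out
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
    return prediction
```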

We don’t want that sort of unexplainable AI judging people because, essentially, we have laws stating that certain ways of gathering data about persons (and using that data to judge them) are illegal, and if we can’t understand what data was used, or how it was used, then we can’t know that the system doesn’t violate those laws. We now have so many books and so much research on transparency and machine learning showing the biases in algorithmic systems. Virginia Eubanks, Safiya Noble, Ruha Benjamin, Andrew Ferguson, Ari Ezra Waldman, Margaret Hu [and others]: so many scholars have exposed these problems that it is no longer safe to trust black box systems with controversial human classifications.

In terms of how this is currently addressed, the General Data Protection Regulation (GDPR) limits non-transparent profiling in various ways, and in the US there are some privacy and financial laws that also protect individual rights and important social values in this area. I think we have to go much further, because [the answer here] will be to require that rankings, ratings, and evaluations of persons be completed only on the basis of articulable criteria; they have to be done in an articulable way in order to maintain our standards of fairness in the process. If you don’t do that, you get rid of those standards, and those standards are inextricably intertwined with language as the core of law, not algorithms and not computational evaluations.

As for the question of whether I believe transparency is the most important ingredient to guarantee accountability: no. I think it is ultimately an ingredient, but there are many other forms of accountability out there, involving, for example, post hoc analysis of these systems in order to audit their impact on groups. In addition, we may rightly say that in certain instances, more narratively intelligible explanations, or simpler and transparent standards (applied with discretion and flexibility), should replace machine learning, AI, or algorithms. I also think that, in terms of accountability, more accountable systems would involve showing people how the algorithmic world works, which is one step towards legitimacy. But this always has to be tested against other, non-algorithmic ways of ordering those affairs.
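One very simple form of the post hoc group-impact audit mentioned above is to compare a deployed system’s selection rates across groups. The data and the 0.8 threshold below are illustrative assumptions (the threshold echoes the “four-fifths rule” from US employment practice, not anything in Pasquale’s text):

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs from a system."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 are often treated as a red flag."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative audit over hypothetical loan decisions:
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
print(disparate_impact_ratio(audit))  # 0.5 -> below 0.8, warrants scrutiny
```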

  • The use of AI to provide tech solutions to the current COVID-19 crisis has been witnessed in different countries through government applications that track COVID cases. Such initiatives have been met with doubts from many who are mainly concerned about privacy, access to data on mobile phones, and government surveillance. What do you think about these applications? How far do you believe such doubts and concerns are justified, considering that the same users may have other applications that collect data all the time?

Excellent last point! I’ll start there: if you have concerns about privacy and the data collection of COVID tracking apps, the next important step in articulating those concerns is to explain exactly what the margin of lost privacy is beyond already existing losses of privacy, given that the person involved is usually already using other forms of technology. Of course there may be persons who are not using cell phones at all. But for those who have these sorts of systems on board, the marginal loss is something that has to be calculated.

With respect to the apps themselves, what I have seen is that there are places where these apps seem doomed to fail and places where they seem to have played a role in excellent pandemic response. Let me start with the successful countries. Some of the literature on South Korea, in English-language journals and other venues, indicates that in the wake of the 2015 MERS epidemic, South Korea amended its privacy laws to ensure rapid coordination and collection of data, informing the Korean authorities about exactly where everyone was moving: whether they had just come into the country, or had been exposed to a COVID case. South Korea’s success at tracking clusters of the disease came from rapidly understanding exactly where a person who seemed to be a super-spreader had moved, and from quickly identifying exposed individuals to put them in quarantine, support them in quarantine, and know where they were during quarantine. All of those things weigh in favour of exceptionally broad and comprehensive data collection for a very narrow purpose, which is public health.

And that suggests to me that these COVID tracking apps could play a very important role, particularly at the start of epidemics. If we contrast the South Korean example with the introduction of COVID tracking apps in, let’s say, Europe or the UK right now, there is a clear distinction: the EU/UK/US governments have not marshalled the serious resolve and state capacity that South Korea and Taiwan did. So in a sense they don’t deserve (as much as the more capable states do) to have access to the relevant data. If they were more competent, the equities might be different.

My question is: how would the tracking app lead to a better allocation of resources in a reasonable way in the EU/UK/US? And I have my doubts! There are just so many people who now have the illness that it is very hard to imagine the tracking app being very effective at helping us better understand the spread. There are also problems like this: two people could be in one place but never actually meet, because there is a wall between them; they work in different departments, yet the tracking app will flag someone who was never infected and had no exposure to the infected person, simply because they were in the same building. Those are the sorts of failures that could occur.
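To see why that failure mode arises, consider a deliberately naive sketch of distance-based exposure flagging, which has no notion of walls, floors, or departments. The coordinates and threshold are invented for illustration; real apps typically infer proximity from Bluetooth signal strength, which has analogous blind spots:

```python
import math

def flags_exposure(pos_a, pos_b, threshold_m: float = 5.0) -> bool:
    """Naive proximity check: flag exposure whenever two people were
    within threshold_m metres, ignoring walls and floors entirely."""
    dx = pos_a[0] - pos_b[0]
    dy = pos_a[1] - pos_b[1]
    return math.hypot(dx, dy) <= threshold_m

# Two workers in different departments, 3 metres apart on the map
# but separated by a wall:
print(flags_exposure((0.0, 0.0), (3.0, 0.0)))  # True: a false positive
```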

It is also a huge data collection issue, and a differential one. When it’s early, or things are well controlled, the results of data collection impinge on the lives of relatively few people (who must quarantine). Later, in the middle of a pandemic, suddenly you are talking about collecting data with consequences for thousands or millions of people with respect to COVID exposure. That risks possible discriminatory data use, and worse.

In general: AI-enhanced public health surveillance is a good way of helping a competent and rapidly acting public health authority to stop a pandemic and nip it in the bud. Moreover, the invasion of privacy entailed by that sort of ongoing location tracking of everyone for public health purposes is a far less harmful restriction of liberty than what you see happening when a pandemic gets out of control. I realize that this will be controversial as a model for the future of pandemic control, but if you look at how Korea did it, it was essential, and also essential to the Chinese response, though I am not so sure about Vietnam or Taiwan (which relied a lot on border control). Nevertheless, I look at the freedoms those countries have now enjoyed for many months (freedom from fear of deadly disease), and the small sacrifice they made at the beginning to achieve them, and I feel that they are in fact much more free, in a sense, than the liberal democracies haplessly “advising” citizens to stay home, stay safe, and so on.

Of course, in thinking comparatively, there may be different paths out of the crisis. Maybe there is an Australian path of extremely strict lockdown for an extended period of time; there is also the South Korean high-tech surveillance approach; and in China there seems to be a combination of the two, that is, a long closure of borders plus very close tracking of everyone. But I think what is beyond question is that this is the time for the US, the EU, South America, and Central America to really think deeply about what the successful countries did right, because this is a world-historical problem. The death and illness are horrifying. And their effects will not end when the pandemic ends (if it ends; incompetent handling of it has now effectively created the opportunity for mutant forms, like we saw in the Danish mink farms, to arise, and perhaps defeat or reduce the effectiveness of vaccines). For example, because of the loss of economic growth and opportunity over just the past seven months, every American household is predicted to lose (on average) $125,000 in future earnings. And that is to say nothing of the enormous suffering and loss of life.

  • Nowadays, with the current situation witnessed by the whole world, there is significantly higher dependence on technological platforms in many fields. For example, we see platforms such as Google dominating distance learning, Zoom rising for business and working from home, and Facebook and Twitter maintaining their role of providing quick bites of information and updates, accentuating platform capitalism. What benefits and risks do you see in this process?

I think it’s an extremely risky process! It gives enormous global power to largely American companies (and also some Chinese firms), and I don’t trust many of these mega-firms. By contrast, I think nations around the world need to develop more forms of technological sovereignty.

Also, governance needs to be distributed. Distributed governance in education involves teachers mediating between technology and students, instead of technology interacting with students directly. I think something very similar should apply to platforms. I would ideally like to see, in each country, multiple search engines and multiple social networks (with APIs for interoperability, of course, and data sharing for the search engines). I hope to see that on the horizon, because what we suffer from now is an enormous concentration of power.

I also hope that we see more break-ups of mega-firms. For example, it’s ridiculous that Facebook, WhatsApp, and Instagram are controlled by a single company led by one man with exceptional influence over its board, management, and users. He’s basically an emperor, as I suggest in my work on “functional sovereignty”. In multinational corporations, CEOs have the power to choose many important people on their board over time; the board is said to run the company, but if the CEO has chosen the board and can knock people off the board, then who’s really in charge? And in these big tech firms, the CEO is often even more powerful than the average corporate CEO. With these CEOs having all this power, there has to be a look at breaking up these firms. Break apart Google and YouTube; break apart Facebook, WhatsApp, and Instagram. There are many ways to do this that Lina Khan, Elizabeth Warren, Sally Hubbard, Stacy Mitchell, Tim Wu, and others have proposed, and I think we should.

  • The year 2016 has been regarded as a turning point, marking the beginning of a tangible impact of oligarchy in engineering and reshaping the public sphere and manipulating public opinion. For example, Brexit and the US elections were two events that reflected the impact of Facebook on politics. After the user-privacy scandals and attempts to regulate such platforms, and with the current US election scene, how far do you see progress in that sense?

I think that these platforms are trying to look busy, but I believe they have taken very few significant [actions] to control highly suspect interventions, both by authoritarian populist, nationalist, and white-supremacist political parties and by foreign agents that support them and generally sow chaos, and I think that is very problematic! [Such examples represent] fundamental challenges to the idea of self-regulation by these platforms. As you note, there are some concerns about free expression; however, these are private companies that use free-expression laws to limit the government’s ability to regulate them. They therefore have to take on that regulatory role and govern their own speech, or they have to allow government regulators to take on that role (by admitting they are common carriers). Doing neither is a recipe for chaos and a descent into authoritarianism (including the incredibly damaging lies now spread by Trump about US elections).

I hope that in the future we see many more interventions by governments to maintain the integrity of elections, because there are enormous problems with the fact that the President of the United States, President Trump, just outright lies, and many of his followers, many in the Republican Party, do the same thing. Examples around the world proliferate; I give many in chapter 4 of my book New Laws of Robotics, on automated media. I think that [this situation is] incredibly troubling and that we need to see governments start to impose basic standards of truthfulness and decency on these platforms, including through anti-hate speech laws.

And if they don’t, my prediction is that the government will get taken over by the people who use those cheap tricks of political appeal to take over democracy. In other words: we either democratically control the public sphere, or we allow it to be subverted by demagogues who will control it in an authoritarian way.

I mean, we’ve seen that with respect to authoritarian leaders around the world; there are so many examples. I think the problem only gets worse until you have progressive governments, with some notion of fair play and decency in political appeals, strongly intervening to ensure a public sphere that is truly respectful of the liberty of all citizens, one that will not feature the horrors of efforts to terrorize, harm, baselessly stigmatize, or spread lies about political parties, minority ethnic groups, and other vulnerable groups. Mary Ann Franks, Carrie Goldberg, Danielle Citron, and K-Sue Park are brilliant on this front; they are intellectual leaders of a movement for a better public sphere.

I think we really have to think deeply about this type of issue, and we really have to reframe things, because of how easily social media enables complete lies, complete fabrications, to spread. Since 2016 we have been learning more and more about the prerequisites of democracy: we need an informed populace, not one that is continuously exposed to lies, disinformation, and propaganda.

  • What are the main challenges standing in the way of platforms and algorithmic regulation? And how do you see future progress in this area?

I think the main problem in this area is that there is not enough appreciation that governance happens. It’s not as if we can just say we are going to completely deregulate platforms and have no government interference in them, and then there is no governance and complete freedom prevails. In fact, governance will happen, and good governance is necessary to freedom.

We must also recognize that people often feel, objectively, a need to be on a platform, and therefore don’t have a real choice about whether to be on it or not. In those environments, the ability of professionals to intervene and set some rules is crucial. We’re not freeing people by keeping government out; in fact, we’re often un-freeing them, leaving them to be manipulated or marginalized by a platform.

I know there is an easy counter-argument here, which would be: “you just called out certain leaders as authoritarian and now you want governments to put rules on my life, what gives? You want authoritarians to do that!” My answer is: no, I don’t want authoritarians to do that, but I do want countries that are not authoritarian to quickly recognize how easy it is for authoritarians to take advantage of the current information environment, the platform environments, and to stop that kind of thing from happening there. That, I think, is the critical issue.

With respect to other issues of platform governance, one of the biggest problems is that governments are too slow to try to redistribute the bounty from platforms. If you look at the revenue of Facebook, Google, Amazon, and the like, that is revenue that could easily be redistributed to the firms these platforms are squeezing out of business, including much local media. There are many ways in which we could redistribute such funds. I think we should think deeply about that, because right now those funds are concentrated primarily in the hands of shareholders and top managers of these firms. We have to think about what sort of power this gives them, and how we can ensure that that level of power and wealth doesn’t grow so great that it overwhelms democratic processes.