Every way of seeing (like a platform) is also a way of not seeing
Courts put a hold on Florida's social media law and cut down the FTC case against Facebook.
A tech policy newsletter from an outsider.
Free at the point of consumption.
Delivered every Monday (+/- a couple hours).
News, notes, and quotes
The wheels of justice have been grinding. The Florida social media law has been temporarily enjoined. I first saw the news from Brad Heath, who was kind enough to link to the decision.
But was anyone really surprised? The First Amendment was always going to be the biggest hurdle for these kinds of laws, which restrict how private platforms moderate speech.
This one gets a big oh my: “The Texas Supreme Court ruled Friday that Facebook can be held liable for any sex trafficking on its platform, despite the protections of Section 230.” Here is the decision.
Big changes on the competition front as well. A federal judge threw out antitrust complaints brought against Facebook by the Federal Trade Commission (FTC) and more than 40 states, saying, “The FTC has failed to plead enough facts to plausibly establish a necessary element of all of its Section 2 claims — namely, that Facebook has monopoly power in the market for Personal Social Networking (PSN) Services.” Here is that decision.
The PSN market definition was always the weakest part of the FTC case against Facebook, which is in turn among the weaker of the big tech cases. The government has 30 days to file an amended complaint; otherwise the case will be dismissed.
Hal Singer took to Twitter to say:
Pushing the separations bill will make it easy to break up Facebook. But the benefits of separation aren’t clear and convincing.
I haven’t seen much research on the topic, but one paper modeling the breakup of Facebook found that consumer surplus would fall by 44 percent. I’ve got some concerns about how the scenarios are constructed, but interoperability mandates and advertising taxes do seem to benefit consumers.
I cobbled together this table of the data since it was spread over two sections:
Policy wonks should be focused on consumer surplus, and under the breakup scenario, everyone, not just Facebook, fares worse.
Meanwhile, House Minority Leader Rep. Kevin McCarthy laid out a framework for regulation, which includes accountability, transparency, and strengthening of antitrust review.
Also on the competition front, the FTC rescinded its 2015 competition policy statement 3-2 along party lines, the first move by new FTC Chair Lina Khan. The DOJ still doesn’t have a lead for its antitrust division, but I assume that will change soon. The White House is putting together an executive order on competition, so expect movement on the EO and an announced antitrust lead around the same time.
A little late to the game on this one, but the Colorado Privacy Act passed at the beginning of June. Here is a LexBlog article on the law; the text of the act itself is located here.
From MeFi, I learned about Jonathan McDowell, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics. Since 1989, he has self-published Jonathan's Space Report, a monthly free web-based newsletter that recaps satellite launches, launches and reentries of manned spacecraft, and other recent spaceflight activity. His back catalog of issues, all the way back to 1989, is also available in plain text.
The Internet and social media have upended traditional power dynamics: “After leaving Nike in a high-profile breakup in 2019, American track star Allyson Felix—the country’s most decorated female track and field Olympian—couldn’t reach a sponsorship deal with any other footwear brands. So when the pandemic delayed Felix’s quest to make a fifth Olympic team last spring, she did something nearly unheard of in the sports business: Felix decided to build a shoe company of her own.”
I like Crawford’s framing here,
Protagoras is famous for saying that there are two sides to every question. But this is badly misinterpreted as suggesting that there are two answers to every question. Protagoras's point was the spirit of contradiction: questions needed to be turned back on themselves, in the way Crawford is doing here. Antilogic is a powerful tool.
Papers and research
Liya Palagashvili just released a new and comprehensive study through the Mercatus Center on how regulations are influencing technology startups’ business directions, products, and margins of innovation. I will have more on this paper later, but she found “About 70 percent of startup executives believe they operate in a moderately or highly regulated industry. There is also some evidence that startups in highly regulated industries face a barrier to obtaining venture capital funding. The paper finds that a majority of startups rely on contract labor because they require flexibility and face uncertainty in their early stages. Moreover, survey results show that about one third of technology startups hire employees or contractors who are high-skilled foreign workers and that startup executives indicate that they need greater access to the international market in order to grow and succeed.”
Carl Bergstrom did an interview with Vox’s Shirin Ghaffary on his new paper sounding the alarm on social media: “My sense is that social media in particular — as well as a broader range of internet technologies, including algorithmically driven search and click-based advertising — have changed the way that people get information and form opinions about the world. And they seem to have done so in a manner that makes people particularly vulnerable to the spread of misinformation and disinformation.”
At what point do we collectively admit that trolling behavior is a sign of illness and psychopathy, not an inevitable byproduct of tech? More research on this point.
“Dilemmas in a General Theory of Planning” by Horst W. J. Rittel and Melvin M. Webber is a classic in public policy. I reread it this week. Here is the meat of the argument: “The search for scientific bases for confronting problems of social policy is bound to fail, because of the nature of these problems. They are ‘wicked’ problems, whereas science has developed to deal with ‘tame’ problems. Policy problems cannot be definitively described. Moreover, in a pluralistic society there is nothing like the undisputable public good; there is no objective definition of equity; policies that respond to social problems cannot be meaningfully correct or false; and it makes no sense to talk about ‘optimal solutions’ to social problems unless severe qualifications are imposed first. Even worse, there are no ‘solutions’ in the sense of definitive and objective answers.”
Evan Swarztrauber alerted me to this newish paper from BCG on “Boosting Broadband Adoption and Remote K–12 Education in Low-Income Households.” As is noted, “this report identifies solutions and best practices to accelerate internet adoption through sponsored-service programs.”
This intro had me hooked: “The gist of the book is that reputation management is the best lens to understand the FDA, not ‘public interest’ vs ‘regulatory capture’. The political and regulatory power of the FDA is bound up inextricably with how Congress, the pharmaceutical industry, academic medicine, and consumer protection groups view it. By virtue of the size of the market it regulates and its pre-market approval power, the FDA is likely the most powerful regulatory agency in the world.”
Every way of seeing (like a platform) is also a way of not seeing
Neil Chilson has a new paper out, connecting the work of James C. Scott to big tech platforms and their possible regulation. I hate to just copy the abstract, but it is the précis of the work and a great beginning to the argument,
Scott argues that the success of the Industrial Revolution motivated a “high modernism” mindset in government, with state leaders seeking to fundamentally reshape and improve society. To pursue such ambitious tasks governments needed a society that was legible, often achieved by eliminating important complexities and ignoring local knowledge. Scott argues that when “schemes to improve the human condition” include characteristics of imposed legibility, a high-modernist mindset, strong central control, and weak social or political constraints, they are doomed to fail, sometimes in horrific ways.
To avoid such outcomes, Scott’s work suggests four lessons for anyone who would intervene in complex systems: (1) minimize simplistic legibility; (2) temper ambitious plans with prudence and humility; (3) reduce the planner’s ability to impose a plan; and (4) increase the ability of participants to resist or shape such plans.
As governments and tech platforms seek to address the concerns driving the “techlash,” these lessons provide guidance on how to avoid the worst pitfalls that could adversely affect efforts to improve the human condition online.
A couple of years back, Chilson shopped around the idea and has since worked it into a full paper. I have been waiting for this paper for some time, and it lives up to expectations. Chilson uses Scott’s work on legibility in Seeing Like a State (SLAS) to analyze both platforms and the regulation of platforms.
Extra: Venkatesh Rao’s summary of its main themes remains the best review of SLAS. For an application of the idea to TikTok, check out Eugene Wei’s post.
Chilson is right that “the ‘techlash’ is a symptom of digital technology increasing the legibility of the world,” triggered by concerns that people are losing their privacy, being taken advantage of, and being misinformed, just to name a few. For policymakers, the best path is to minimize simplistic legibility, temper ambitious plans, reduce the ability to impose a plan, and increase the power of the participants.
But I want to focus on a narrow aspect of platform legibility: its antithesis, illegibility.
Scott defined legibility as “a state's attempt to make society legible, to arrange the population in ways that simplified the classic state functions of taxation, conscription, and prevention of rebellion.” As a political theorist, Scott views legibility as a problem of statecraft: “The premodern state was, in many crucial respects, partially blind; it knew precious little about its subjects, their wealth, their landholdings and yields, their location, their very identity.” Until more modern times, the state “lacked, for the most part, a measure, a metric, that would allow it to ‘translate’ what it knew into a common standard necessary for a synoptic view.”
Because it is synoptic, legibility means the loss of certain kinds of local practices. Scott uses the term metis to describe this kind of local knowledge, which one writer explained as “a sort of know-how that balances experienced and abstract knowledge, and which is deeply rooted in local cultural practices.” While not exactly the same concept, the idea of tacit knowledge is in the same ballpark.
The importance of metis shines through when things go wrong. Here is Rao’s summary of Scott’s first case:
The book begins with an early example, “scientific” forestry (illustrated in the picture above). The early modern state, Germany in this case, was only interested in maximizing tax revenues from forestry. This meant that the acreage, yield and market value of a forest had to be measured, and only these obviously relevant variables were comprehended by the statist mental model. Traditional wild and unruly forests were literally illegible to the state surveyor’s eyes, and this gave birth to “scientific” forestry: the gradual transformation of forests with a rich diversity of species growing wildly and randomly into orderly stands of the highest-yielding varieties. The resulting catastrophes — better recognized these days as the problems of monoculture — were inevitable.
In other words, legibility is a fraught project, where slippage occurs and information doesn’t properly capture reality.
Similarly, platforms collect information about individuals in order to understand them, but that information is never a perfect picture. Illegibility exists.
Platforms are often thought to be all-knowing. Indeed, some research finds that Facebook Like data can be used to accurately predict highly sensitive personal attributes: sexual orientation, ethnicity, religious views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, gender, and, most important for this discussion, political opinions.
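To make that technique concrete, here is a minimal sketch of the Likes-to-traits pipeline, assuming the approach that line of research reports: compress a sparse user-by-Like matrix with SVD, then fit a linear classifier on the components. Everything below is fabricated for illustration; the matrix is random noise, and none of the names or numbers come from Facebook or the study, so the printed accuracy will hover around chance. The point is the shape of the pipeline, not the result.

```python
# Toy sketch of predicting a trait from a binary user-by-Like matrix,
# loosely following the SVD-plus-regression recipe described above.
# All data is randomly generated; nothing here is real Facebook data.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical sparse matrix: 1 where a user Liked a page.
n_users, n_likes = 5_000, 20_000
likes = sparse_random(n_users, n_likes, density=0.002, random_state=0)
likes.data[:] = 1.0

# Hypothetical binary trait to predict (e.g., a political-leaning label).
trait = rng.integers(0, 2, size=n_users)

# Step 1: compress the sparse Like matrix into 100 dense components.
components = TruncatedSVD(n_components=100, random_state=0).fit_transform(likes)

# Step 2: fit a linear classifier on the components.
X_train, X_test, y_train, y_test = train_test_split(
    components, trait, test_size=0.2, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On random data the AUC should sit near 0.5 (chance); the research
# reported far higher numbers on real Likes for traits like politics.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```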
But the cracks are there if you look for them.
For some time, for example, Facebook has categorized individuals into segments, which it calls affinities, for advertising purposes. When Pew surveyed users in 2019 and asked how well these categories actually track their preferences, only 13 percent said the categories describe them very accurately. Another 46 percent thought the categories were somewhat accurate. On the negative side of the ledger, 27 percent of users “feel it does not represent them accurately,” and another 11 percent weren’t assigned categories at all. Add those last two groups together and 38 percent of users, more than a third, are effectively illegible to Facebook.
I’ve been collecting other examples of platform illegibility:
Mike Masnick has a series on why content moderation at scale is impossible to do well. A big takeaway from the history of this space is that it is really hard to automatically detect and take down nefarious content. Spam filters are always blunt instruments.
A Phoenix man is suing the city because location data obtained from Google was faulty. According to the report, “Police had arrested the wrong man based on location data obtained from Google and the fact that a white Honda was spotted at the crime scene. The case against Molina quickly fell apart, and he was released from jail six days later. Prosecutors never pursued charges against Molina, yet the highly publicized arrest cost him his job, his car, and his reputation.”
One admittedly older study of actual traffic statistics found that Facebook likes correlate with traffic at only 0.67, an r-squared of about 0.45, meaning likes explain less than half of the variation in traffic.
Google’s organic search team once penalized the Google Ads team for violating Google’s own webmaster guidelines.
Positive emotional expressions on Facebook did not correlate with life satisfaction, whereas negative emotional expressions within the past 9-10 months (but not beyond) were significantly related to life satisfaction. (NIH)
Alexis C. Madrigal: “Facebook’s control over its platform has significant limitations.”
By its own admission, Facebook’s series of get-out-the-vote experiments in 2010 produced statistically significant effects, but the impacts were smaller than in nearly every study before it. The resulting study explained that “users who received the social message were 0.39% more likely to vote than users who received no message at all.” For comparison, rain decreases election turnout by about 0.8 percent, and when asked to vote on a tax increase to fund schools, people vote about 2 percent more for the referendum if the polling place is a school, controlling for the political views and demographics of the voter.
In 2020, 29 percent of mobile users and 47 percent of desktop users reported that they had an ad blocker enabled. Four in ten social media users scroll past ads, a behavior known as ad blindness.
Facebook’s controversial 2015 study, which exposed people to specific kinds of content, slyly admitted that the process of ranking content for the News Feed meant a loss of information. In the official Science writeup of the study: “We found that after ranking, there is on average slightly less crosscutting content.” This result could only come about if ranking meant that some content was discarded.
Marketers lament the difficulty of identifying users across sites and devices. A group of them even brought a case against Facebook, claiming the site inflated audience numbers by some 400 percent. The Verge reviewed the case while it was still live. BuzzFeed has also reported extensively on Facebook’s problems with fraud and spam.
“During our field study, users continued through a tenth of Mozilla Firefox's malware and phishing warnings, a quarter of Google Chrome's malware and phishing warnings, and a third of Mozilla Firefox's SSL warnings. This demonstrates that security warnings can be effective in practice; security experts and system architects should not dismiss the goal of communicating security information to end users. We also find that user behavior varies across warnings. In contrast to the other warnings, users continued through 70.2% of Google Chrome's SSL warnings. This indicates that the user experience of a warning can have a significant impact on user behavior. Based on our findings, we make recommendations for warning designers and researchers.” [Google Research]
Facebook’s inaccuracy in measuring key metrics has been thoroughly documented at MarketingLand. In one two-year period, Facebook admitted to misreporting the average watch time of Facebook page videos, the organic reach of Facebook Pages (in two different ways), the ad completion rate, the average time spent reading Instant Articles, the referral traffic to external websites, the iPhone traffic for Instant Articles, ad link clicks, the number of mobile video views, and the number of video views in Instant Articles.
Siva Vaidhyanathan: “Does anyone, even Mark Zuckerberg and Sundar Pichai, really understand these massive, complex, global information systems with their acres of infrastructure, billions in revenue, and billions of users almost as diverse as humanity itself? I think not. That’s the thing about complex systems. Almost no one understands any of them.”
In their aptly titled paper, “The Unfavorable Economics of Measuring the Returns to Advertising,” Randall A. Lewis and Justin M. Rao say bluntly that “most advertisers do not, and indeed some cannot, know the effectiveness of their advertising spend.”
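The core of their argument is statistical: the effect of an ad campaign on purchases is minuscule relative to the variance in purchasing behavior, so experiments need enormous samples before the signal clears the noise. Here is a back-of-the-envelope power calculation that illustrates the point. The formula is the textbook two-sample size rule; the spend figures are hypothetical stand-ins for the kind of high-variance purchasing data they describe, not numbers from their paper.

```python
# Back-of-the-envelope power calculation illustrating the Lewis-Rao point
# that ad effects are tiny relative to the variance in purchasing.
# All dollar figures below are hypothetical.
from scipy.stats import norm

alpha, power = 0.05, 0.80          # conventional test size and power
z_a = norm.ppf(1 - alpha / 2)      # ~1.96
z_b = norm.ppf(power)              # ~0.84

mean_spend = 7.0                   # hypothetical per-customer spend ($)
sd_spend = 75.0                    # spending is wildly dispersed
lift = 0.05 * mean_spend           # a 5% lift, generous by ad standards

# Standard two-sample formula: users per arm to detect `lift` reliably.
n_per_arm = 2 * (z_a + z_b) ** 2 * sd_spend ** 2 / lift ** 2
print(f"Users needed per arm: {n_per_arm:,.0f}")   # roughly 720,000
```

On numbers like these, even a generous 5 percent lift takes roughly 720,000 users per experimental arm to detect, which is the flavor of the unfavorable economics in the title.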
Many assume that platforms have put the knowledge problem to rest, but platforms still have limited, synoptic views. Illegibility limits the scope of action for a platform.
All of this is a long and meandering way to say: Chilson’s paper is a good starting point, but the discussion now needs to continue along other paths. Future work will need to tackle the limits of legibility.