Opinion: Google has been force-feeding us ads. Now one big antitrust case could change the internet forever
The feds are coming after Google. In January, the Department of Justice filed an antitrust lawsuit alleging that the technology giant monopolized the lucrative space of digital advertising. A sure-to-be exhaustive trial underway in the U.S. District Court in Washington is expected to last until November.
The lawsuit focuses on three key players: advertisers, publishers and vendors of ad tech tools that match publishers with advertisers and orchestrate ad targeting and delivery. However, monopolization also affects another key group that remains largely unprotected by U.S. laws: you and me, the end users. Google’s dominance has reduced ad targeting to a single approach, leaving us with little choice and no legal safeguards in the current setup.
Many of us are familiar with incessant social media ads on Facebook and Instagram. That’s called “closed web” advertising and is operated in-house by those companies. But how does digital advertising on the “open web” work, say, when we search Google?
As explained in the Justice Department complaint, the process begins when the user opens a publisher’s website that’s programmed to show personalized ads. During the short time that the website loads, an automatic auction takes place on an ad exchange to determine which ads to show to the user. The ad exchange receives information about the website and the user from the publisher, supplements it with any additional information it may have about the user’s demographics, location, interests and web browsing history, and then matches it against bids from the advertisers.
The more information the ad exchange has about the user, the better it can personalize the ads and maximize the likelihood that the user will click on an ad, and ultimately buy the product or subscribe to the service. Google owns the leading ad exchange, AdX, along with the leading publisher ad server, Google Ad Manager (formerly DoubleClick for Publishers), and the leading platform used by advertisers, Display & Video 360. As the leading search engine, it is also in control of much of its users’ data. All of this lets Google control the ad tech market, receiving the lion’s share of the profit and suppressing competition from other vendors.
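To make the mechanics concrete, here is a deliberately simplified sketch of how such a real-time auction could work in principle. It is illustrative only: the function and field names are hypothetical, the data is invented, and the actual systems described in the complaint are far more complex.

```python
from dataclasses import dataclass

@dataclass
class AdRequest:
    """What a publisher's page sends to an ad exchange while it loads (hypothetical fields)."""
    site: str
    user_id: str
    page_context: dict

@dataclass
class Bid:
    advertiser: str
    amount: float   # dollars offered for this single impression
    creative: str   # the ad to display if the bid wins

# Hypothetical profile store standing in for the extra data an exchange may hold about a user.
USER_PROFILES = {
    "user-123": {"location": "Los Angeles", "interests": ["running", "apartments"]},
}

def run_auction(request: AdRequest, bidders) -> Bid | None:
    """Enrich the request with stored profile data, collect bids, and pick a winner."""
    profile = USER_PROFILES.get(request.user_id, {})
    enriched = {**request.page_context, **profile}

    bids = [b for b in (bidder(enriched) for bidder in bidders) if b is not None]
    if not bids:
        return None
    # Many exchanges have used second-price rules: the top bid wins
    # but pays the runner-up's price (simplified here).
    bids.sort(key=lambda b: b.amount, reverse=True)
    winner = bids[0]
    if len(bids) > 1:
        winner.amount = bids[1].amount
    return winner

# Example: two hypothetical advertisers bidding on one impression.
shoe_brand = lambda user: Bid("ShoeCo", 2.50, "shoe_ad.png") if "running" in user.get("interests", []) else None
realtor = lambda user: Bid("RentFast", 1.75, "apartment_ad.png")

winning = run_auction(AdRequest("news-site.example", "user-123", {"section": "sports"}),
                      [shoe_brand, realtor])
print(winning)  # ShoeCo wins the impression but pays the second-highest price, 1.75
```

The point of the sketch is the asymmetry it makes visible: the more complete the profile the exchange can merge into the request, the more precisely advertisers can bid, and the user has no say at any step.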
Google has faced antitrust litigation before. In 2017, the European Commission fined the company 2.42 billion euros for breaching European Union antitrust rules and giving an illegal advantage to Google Shopping, by promoting it in search results and demoting competing services.
The federal U.S. complaint identifies monetary harms to users: “As publishers make less money from advertisements, fewer publishers are able to offer internet content without subscriptions, paywalls, or alternative forms of monetization.” But the harms to users are far more than monetary. They also concern our collective loss of agency — our loss of power to decide what types of ads we wish to see, and to influence the ad delivery strategy to which we are subjected.
Why is this an issue? Because ad delivery does not simply predict our preferences; it also shapes them, steering us toward specific choices and opinions.
The contract is simple and familiar: When we agree to use a free service — read the news online, listen to music or search for apartments — we implicitly agree to “pay” with our attention by seeing ads. Google is uniquely positioned to steer our preferences because it has access not only to how we respond to ads, but also to how we search and navigate the web. This large-scale surveillance of our lives through our data makes Google’s steering very efficient, and it is usually done without our knowledge or permission.
Personalized ads can be persistent and annoying. Remember how that pair of shoes that you really wanted to buy, but really didn’t need, followed you around online? Or how you kept seeing ads for apartments after you already signed your lease?
These ads can also be convincing. They don’t just nudge us to buy things we don’t need, but also often change our preferences and opinions. Remember the Cambridge Analytica scandal? Facebook user data harvested by the political consulting firm connected to Donald Trump’s 2016 campaign was used to try to sway voters. Ads also affect our opportunities to build skills and make a living, and may even hurt our reputation by falsely suggesting to others that we may have a criminal record. The list goes on.
Google’s ad technology aims to maximize engagement — to show an ad to a user who would feel compelled to click on it. But engagement is not always a reasonable goal. It often needs to be balanced with considerations such as curbing misinformation that fuels political polarization or with goals such as equity and non-discrimination. For example, one study found women were less likely than men to be shown ads on Google for high-paying jobs.
Federal regulators previously took steps to shield users against unlawful discrimination in housing ad delivery on Facebook. But such action is reactive and slow.
As users, we need more expansive legal protections. In the U.S., we are especially vulnerable because, unlike in the European Union, we lack comprehensive data protection laws. And we need new technologies to enforce these protections.
Breaking Google’s ad tech power could allow new vendors to enter the market and encourage the development of systems to make digital advertising safer by prioritizing privacy and user control.
We all are bearing the cost of Google’s monopolistic practices, by losing control over what ads we are fed, and force-fed. We must break up the ad tech market to make way for a new digital world in which each of us has more power over what we see and consume.
Julia Stoyanovich is an associate professor and director of the Center for Responsible AI at New York University.