What Do We Mean When We Say Digital Discrimination?

This December, the Federal Communications Commission (FCC) is poised to take action to combat digital discrimination. Equitable access to broadband is an important objective, and the agency has long been charged with ensuring that telecommunications be made available “without discrimination on the basis of race, color, religion, national origin, or sex.” But a key question is how one defines “discrimination.” The breadth of the agency’s proposed definition could have unintended consequences, for the telecommunications industry and for antidiscrimination law generally.

Back in 1996, Congress amended the Communications Act to include antidiscrimination as part of American telecommunications policy. Last year’s Infrastructure Investment and Jobs Act reinforced this initiative, directing the FCC to adopt rules “to facilitate equal access to broadband,” including by “preventing digital discrimination of access.” To fulfill this mandate, the FCC has placed a notice of proposed rulemaking on the agenda for its December 21 open meeting.


Perhaps the agency’s most important decision is how to define “discrimination,” which Congress left to the agency’s discretion. In everyday parlance, the term implies an intentional decision to treat people differently based on a prohibited ground. Merriam-Webster defines discrimination as a “prejudiced or prejudicial outlook, action, or treatment,” or “the act, practice, or instance of discriminating categorically rather than individually.” Similarly, intent is the touchstone for constitutional discrimination claims: The Supreme Court has ruled that state action violates the Equal Protection Clause only when the law or policy has a discriminatory purpose.

But courts have interpreted some antidiscrimination statutes more broadly. For example, under the Fair Housing Act, a defendant’s conduct may constitute housing discrimination if it has the effect of treating white and minority applicants differently, even if the difference is unintentional. The FCC has proposed adopting this broader “disparate impact” theory as part of its definition of digital discrimination:

We propose to adopt a definition of “digital discrimination of access” that encompasses actions or omissions by a provider that differentially impact consumers’ access to broadband internet access service, and where the actions or omissions are not justified on grounds of technical and/or economic infeasibility. We seek comment on whether this definitional approach should depend on whether, and for what reason(s), the provider intended to discriminate on the basis of a protected characteristic.

This impulse is understandable: To someone denied a service because of his or her race, the harm is significant whether intended or not. But the potential breadth of disparate impact liability is concerning, because so many innocuous (or even important) decisions can have different consequences for different populations. For example, to return to the housing context, using credit scores to determine mortgage eligibility or choose a tenant will disproportionately disadvantage minority applicants, because of the correlation between race and socioeconomic status. But that information is important to banks and landlords seeking some security that the applicant will pay every month.

Importantly, not all cases of differential treatment result in liability: The bank may prevail if it shows that security is important and cannot be achieved in a less discriminatory way. Similarly, the FCC’s proposed rule would exempt acts that are “justified on grounds of technical and/or economic infeasibility.” But the fear of litigation can have a chilling effect on industry behavior. Some realtors, for example, refuse to answer seemingly innocuous questions such as a minority home buyer’s inquiry about the local school’s racial composition, for fear of being sued for “steering” the applicant to or from potential neighborhoods based on race.

Moreover, while the Supreme Court has previously upheld statutory disparate impact claims, the current Court may not be as favorably disposed. The 2015 Fair Housing Act case, for example, was decided 5–4, with since-departed Justice Anthony Kennedy providing the deciding vote. The odds of this proposed rule reaching the high court are small, but if it did, the case could provide a vehicle for the Court to halt the growth of disparate impact claims (and potentially undermine settled areas of antidiscrimination law).

That's not to deny that access issues often do differentially impact minority groups. But rural availability gaps and urban affordability problems are largely driven by the economics of network deployment. The proposed rule is unlikely to remedy many such issues, as they would likely fall under the economic feasibility exemption. These systemic issues are better addressed by subsidies that help correct the economic drivers of those gaps—and by broader social initiatives designed to disrupt the correlation between race and income.

Discrimination is a loaded term—and for good reason. Its rhetorical power should be deployed against those pockets of intentional race-based harm that still exist, not diluted by association with justifiable decisions whose unintended negative effects are better addressed by other laws.
