LOS ANGELES—In the lead-up to the Federal Communications Commission’s net neutrality repeal vote in 2017, a government website designed to gather public input on the proposal was flooded with about 2 million fake comments, most of them supporting the repeal. News outlets have been trying ever since to figure out exactly how the bogus comments were able to dominate the FCC’s “public comment” system.
Now, a new study by a Harvard University technology researcher has found that creating automated, phony comments to manipulate debate on government sites is not especially difficult. In fact, Harvard student Max Weiss discovered, comments created by his “bot” — an automated program — were virtually indistinguishable from comments written and posted by actual human beings.
Weiss’s bot created comments that were posted on a federal website where the public was invited to weigh in on a proposed reform to Medicaid, the government health care program. The student posted 1,001 bot-generated “deepfake” comments, stopping when his fake comments made up more than half of all comments on the system (he then removed the fake comments).
“When humans were asked to classify a subset of the deepfake comments as human or bot submissions, the results were no better than random guessing,” Weiss wrote in the findings of his study.
FCC Chair Ajit Pai admitted in 2018 that approximately 500,000 of the fake comments on the net neutrality discussion site came from Russian email addresses. Media investigations have since reported that another 1.5 million were generated by a professional lobbyist for the broadband industry.
A later analysis revealed that of the 22 million comments received by the FCC during the net neutrality repeal public comment period, 96 to 97 percent were likely generated by bots, according to a TechCrunch report by Weiss and two other Harvard researchers.
“But even after investigations revealed the comments were fraudulent and made using simple search-and-replace-like computer techniques, the FCC still accepted them as part of the public comment process,” the researchers wrote.
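To see why the researchers called those techniques unsophisticated, here is a minimal sketch of how a “search-and-replace”-style generator works: a fixed sentence template whose slots are filled from small synonym lists. The template and word lists below are invented for illustration; they are not taken from the actual campaign or from Weiss’s study.

```python
import random

# A fixed template with interchangeable synonym slots — the hallmark of
# the crude, easily detected generators the investigations described.
TEMPLATE = "{opener}, I {verb} the proposed {noun} and urge the FCC to {action}."

SLOTS = {
    "opener": ["As a concerned citizen", "As an American", "As a taxpayer"],
    "verb": ["oppose", "reject", "disagree with"],
    "noun": ["regulations", "rules", "restrictions"],
    "action": ["repeal them", "roll them back", "reverse them"],
}

def generate_comment(rng: random.Random) -> str:
    """Fill each slot with a randomly chosen synonym."""
    return TEMPLATE.format(**{k: rng.choice(v) for k, v in SLOTS.items()})

def total_variants() -> int:
    """Every combination of slot choices yields one distinct comment."""
    product = 1
    for choices in SLOTS.values():
        product *= len(choices)
    return product

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        print(generate_comment(rng))
    print(total_variants(), "possible variants")  # 3 * 3 * 3 * 3 = 81
```

Because every output shares the same skeleton, such comments cluster into obvious duplicate families — which is exactly how investigators spotted them, and why language-model-generated text like Weiss’s is so much harder to flag.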
The Harvard researchers described the bots used to generate net neutrality comments as “relatively unsophisticated,” adding that “our demonstration of the threat from bots submitting deepfake text shows that future attacks can be far more sophisticated and much harder to detect.”
Federal comment sites, mandated under the 2002 E-Government Act, are often the only viable way for members of the general public to express their views and provide information about proposed government policies that affect them. That is why, the researchers concluded, “we must adopt better technological defenses to ensure that deepfake text doesn’t further threaten American democracy during a time of crisis.”
Photo by Luis Gomes / Pexels