Sports Illustrated is the latest publication to face scandalous revelations about its use of AI to produce articles. More precisely, SI claims that it innocently bought AI-generated articles from AdVon, which AdVon claims were human-written even though they carried fake bylines with fake headshots and fake bios. One seasoned sportswriter pointed out that “The conventions of sports writing—the who, what, when, where and how—are so established at this point that they are unusually easy to emulate by a robot.”
But SI is not the only miscreant to use generative AI content without copping to it. Pew Research found that about 20% of kids who know about ChatGPT have used it to help with their homework. The proportion is higher for more affluent kids, and these are the kids who are most likely to have heard of ChatGPT in the first place. AI tools give them yet another advantage over the young people on the other side of the digital divide.
And it’s not just school and leisure that are being affected by AI dishonesty. Lawyers have used ChatGPT to write briefs; after all, automating jobs that are dull, dirty, and dangerous is normal behavior. Unfortunately, they’ve repeatedly found that their robot helper makes things up, including cases cited as precedents. In a case that threatened sanctions for the lawyer discovered using AI, the attorney explained that he “did not understand it was not a search engine, but a generative language-processing tool.”
Regulations?
This kind of issue doesn’t come up in manufacturing. Our use of automation is straightforward. We don’t pretend that an artisan painstakingly hand-carved the goods we churn out in our factories. When we use AI to plan maintenance or to optimize workflow, we’re more likely to brag about it than to hush it up.
But that’s on the factory floor. Do you use Microsoft Outlook’s email suggestions, sending “No problem! Have a great day!” with one click instead of thinking through a personal answer? Maybe you’ve got a chatbot on your company’s website, pretending to be a caring customer service representative when it’s actually powered by an algorithm? Chances are good that you’re using AI somewhere in your hiring process. These actions could trigger complaints if regulations requiring honest admission of AI use were enacted, as has been proposed.
The White House’s proposed AI regulations require the Secretary of Commerce to come up with ways to do the following things:
(i) authenticating content and tracking its provenance;
(ii) labeling synthetic content, such as by watermarking;
(iii) detecting synthetic content.
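To make items (ii) and (iii) a little more concrete: one approach that has gotten attention in recent research (Kirchenbauer et al., 2023) has the language model quietly favor a pseudorandom “green list” of words at each step, so a detector can count how often a text lands on that list. The toy sketch below is purely illustrative—the hashing scheme and function names are my own invention, and nothing like it is mandated by the proposed regulations:

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Seed a pseudorandom coin flip with the preceding token, so the
    # "green list" changes at every position but is reproducible by
    # anyone who knows the hashing scheme.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    # Ordinary human text should score near 0.5; a watermarked
    # generator that deliberately favors green tokens scores higher.
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)

sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(sample):.2f}")
```

The point of the sketch is that detection is statistical, not certain: a watermark shifts the odds, and light editing or paraphrasing can wash it out. That gap between what regulators want and what the technology can reliably deliver is part of why these requirements are hard.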
The White House proposals also include this: “In the workplace itself, AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions.” We can imagine ways that covert use of AI could lead to exactly these kinds of issues.
It’s clear that we as a society aren’t fully accepting of the use of generative AI the way we accept the use of calculators. It also seems fairly clear that we don’t have full consensus that all use of AI should be openly acknowledged. We probably need to make decisions about this soon.