Recently, we’ve been seeing headlines like these:
- “Tesla engineer attacked by robot at company’s Giga Texas factory”
- “Tesla robot goes haywire on engineer in Texas factory: ‘Trail of blood’”
- “Robot attacks Tesla engineer”
The reports concern a freak robot accident at a factory two years ago; the incident report only recently surfaced. News sources talk about “a brutal and bloody malfunction” and describe the robot “digging its claws” into the worker.
Another story claims that Microsoft’s generative AI tool, Image Creator, “keeps slashing people’s throats.” Specifically, when prompted in a particularly sneaky way, the tool will create graphic, violent images. Geoffrey Fowler of the Washington Post tut-tuts that Microsoft is dismissing this behavior as the fault of the people who came up with the prompts.
But whose fault is it?
Are the machines at fault?
Fowler very specifically argues that blaming people who trick AI tools into misbehavior is a cop-out. The tools should, he figures, have safeguards against bad human behavior.
The news sources describing the Tesla robot “attacking” an engineer don’t explain their word choice, but native speakers of English know that “attack” implies intentional, malevolent harm. Would we describe a car accident as an attack by a car on a human victim?
As CleanTechnica pointed out, “the way it’s framed in various headlines, it comes across like some robots came to life and decided to go after the nearest humans. In actuality, it is certainly not a general-AI robot. Basic programming in the machine seemingly combined with a weird circumstance, leading to the terrifying situation for the Tesla engineer who got mauled a bit and then fell down a chute. What should be made clear, though, is the robot didn’t start thinking about world domination and consciously try to eliminate the nearest humans.”
Our concern about the way these stories are phrased isn’t out of sympathy for the machines. It’s out of concern that we’re giving some human beings a free pass.
Industrial accidents happen, and factories are intrinsically more dangerous workplaces than, say, offices. But the frequency and likelihood of “robot attacks” depend on the safety precautions taken by the humans working with those robots. AI tools create unseemly images, but only when people ask for them.
Safeguards are important because people do things that they shouldn’t. That doesn’t make the machines the bad guys, and it doesn’t make the people innocent victims.
How are your machines?
Indramat motion control systems have an excellent safety record. They are also very reliable. When you need service or support, though, you should call us first. We are Indramat specialists, and we have the nation’s largest stock of emergency replacement units. Call (479) 422-0390 for immediate assistance.