South Korea is the site of the first robot suicide, according to reports. The robot, known as Robot Supervisor, was employed by the Gumi City Council to travel around a multi-story building, delivering items to various offices and generally keeping an eye on operations. It worked from 9:00 a.m. to 4:00 p.m., like the human employees, and it used elevators to make its rounds. It had been on the job for just under a year and was reportedly a popular employee.
At the end of a normal workday, Robot Supervisor was seen circling a single area, looking perplexed. The next time it was seen by coworkers, it was at the bottom of a staircase. It looked as though the robot had thrown itself down the stairs, a distance of about two meters.
Suicide?
If this was in fact a suicide, it would be the first documented case of robotic suicide. Coworkers speculated that Robot Supervisor had been overworked and might have deliberately ended its life. They thought it unlikely that the fall was an accident, since Robot Supervisor was accustomed to the building and to moving from one floor to another.
The erratic circling behavior, though, suggests a problem with the robot's sensors or programming. Robots don't have the capacity for emotion, and none of the human workers had seen any indication that Robot Supervisor was unhappy or under stress before the brief circling episode. A technical glitch is probably the culprit. The robot's remains are being investigated to determine the exact cause.
Anthropomorphism
South Korea is known for having the highest robot-to-human worker ratio in the world. It's also known for its punishing work ethic: South Korean workers log among the longest average workweeks, and the country has one of the highest suicide rates, in the developed world.
It would not be surprising if Robot Supervisor's human colleagues projected their feelings of burnout onto their mechanical coworker. Research shows that people feel empathy toward robots, even robots with non-humanoid shapes. Since robots don't have emotions, the feelings humans perceive in them must actually be their own.
What can we learn?
This incident highlights the need for robust safety features and clear protocols for robot operation. Robot Supervisor was a social robot and posed no danger to its human coworkers. Still, we have to wonder: if the coworkers believed Robot Supervisor capable of suicide, would they have tried to save their beloved robot colleague had they seen the fall? That would certainly have been a dangerous form of heroism.
Much modern work in robotics focuses on improving human attitudes toward robots, encouraging trust, and increasing feelings of connection. Could these efforts lead people into physical or emotional danger?
It’s something to think about.