The Minuteman

The Official Newark Academy Newspaper

Who’s in Charge?

By Meghna Padmanabhan ’17, Section Editor

Decisions are tough. The dreaded “would you rather” game drives people nuts by forcing them to choose between two equally desirable or equally despicable options. Though hypothetical, these decisions feel like life or death, and the imaginary consequences of choosing wrongly are surely enough to keep you up at night.

But what if these decisions could be made for you? I mean, wouldn’t it be amazing if there were an algorithm or some kind of system that could instantly tell you which choice is the better of the two? Would it really be that much of a relief to relinquish control for a bit, to hand the reins over to someone else? What if it meant that you couldn’t decide who lives and who dies in a true life-or-death situation? And is the decision ever really out of your hands?

Technology has brought us to this fork in the road with the introduction of self-driving cars. According to MIT cognitive scientist Iyad Rahwan on Business Insider, “Every time the car makes a complex maneuver, it is implicitly making trade-offs in terms of risks to different parties.” Because it is pre-programmed to protect its passengers, a self-driving car’s first instinct is to keep them from injury, even at the expense of any innocent pedestrians who may be in the way. If someone were to run across a busy road, and swerving into a tree were the only way to avoid hitting the pedestrian, the car would have no qualms about running him or her over as long as the passenger stayed safe. In response to this dilemma, MIT has created a “Moral Machine” to test these mechanical choices from a human perspective, with emotions and instinct included in the decision-making process. It is interesting to see how many people choose to kill a criminal rather than a lawyer, even though the killing of either is still murder.
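To make that trade-off concrete, here is a minimal, entirely hypothetical Python sketch of what a “passenger-first” choice between maneuvers could look like. The maneuvers, risk numbers, and weights are my own illustration, not any manufacturer’s actual code.

# Hypothetical sketch of a "passenger-first" maneuver picker.
# Every name and number here is invented for illustration.
maneuvers = [
    # estimated risk of serious harm (0.0 to 1.0) for each option
    {"name": "brake hard",       "passenger_risk": 0.2, "pedestrian_risk": 0.6},
    {"name": "swerve into tree", "passenger_risk": 0.9, "pedestrian_risk": 0.0},
    {"name": "hold course",      "passenger_risk": 0.1, "pedestrian_risk": 0.9},
]

# A passenger-first policy weighs harm to the passenger far more
# heavily than harm to anyone else; that imbalance is the dilemma.
PASSENGER_WEIGHT = 10.0
PEDESTRIAN_WEIGHT = 1.0

def cost(m):
    return (PASSENGER_WEIGHT * m["passenger_risk"]
            + PEDESTRIAN_WEIGHT * m["pedestrian_risk"])

print(min(maneuvers, key=cost)["name"])  # prints "hold course": the pedestrian loses

Flip the two weights and the same code would sacrifice the car instead; the entire ethical question lives in two constants.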

You pick. (featuring Megha Gupta ’17 and Will Schwartz ’17)

Companies like Apple, Tesla, Uber, Audi, and many more have been developing the technology to create and expand upon the idea of self-driving cars, making their passengers their #1 priority. Google has taken some measures to counteract unnecessary fatalities by designing its cars to hit the smaller of two objects when a collision is unavoidable, limiting the damage to the passengers, yet this could still turn small cars or pedestrians into primary targets. In March 2016, Google’s self-driving car leader at the time, Chris Urmson, insisted that, “(Their) cars are going to try hardest to avoid hitting unprotected road users: cyclists and pedestrians. Then after that they’re going to try hard to avoid moving things.” In other words, there is a ranked hierarchy of whom to spare, sketched in code below.

As a relatively new driver who has been in a situation where my car spun out of control after impact, I know I’d feel safer knowing that the technology would be doing its best to ensure that no pedestrian or other vehicle would be harmed, yet I would also feel a bit uneasy about relinquishing total control to a machine. According to junior Abbey Zhu, “I don’t like that at all. I feel like you definitely should have a human behind the wheel because as someone who is not too experienced at driving, I am in danger of hitting people 99% of the time when I’m in a parking lot, but I can stop that and hit the brakes!! I have that control!! I don’t think I could live with myself if that was taken away from me and someone else died in order for me to live.”

It comes down to whether human beings can be trusted more than automated machines. Either way, the problem boils down to each person’s own ethical standpoint, forcing them to ask whether they’re the one making the “safe” decision or whether it’s being made for them. In a situation like this, are YOU really in charge?
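For readers curious what Urmson’s ranking could look like under the hood, here is a short, purely hypothetical Python sketch combining “avoid unprotected road users first, then moving things” with the reported “hit the smaller object” rule. The categories, sizes, and ranking are my own assumptions, not Google’s code.

# Hypothetical sketch of the avoidance hierarchy Urmson describes,
# combined with the reported "hit the smaller object" rule.
# Categories, sizes, and the ranking itself are invented for illustration.
AVOIDANCE_PRIORITY = {
    "pedestrian": 0,     # try hardest to avoid
    "cyclist": 0,
    "vehicle": 1,        # then other moving things
    "static_object": 2,  # a tree, a sign, a barrier
}

def pick_unavoidable_target(obstacles):
    """If some collision cannot be avoided, choose the obstacle that is
    lowest on the protection list, breaking ties by smaller size."""
    return max(obstacles, key=lambda o: (AVOIDANCE_PRIORITY[o["kind"]], -o["size"]))

obstacles = [
    {"kind": "pedestrian",    "size": 1},
    {"kind": "vehicle",       "size": 4},
    {"kind": "static_object", "size": 6},
]
print(pick_unavoidable_target(obstacles)["kind"])  # prints "static_object"

Note that in this sketch, “hit the smaller object” only breaks ties within a category; a small pedestrian still outranks a large truck for protection.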

