Physicist Sabine Hossenfelder reflects a little on a list she found of the way some machines that "learn" don't necessarily do so in a way that we might appreciate.
Among the unforeseen consequences: When hooking a type of learning processor called a "neural net" to a Roomba automatic vacuum in order to increase its speed by limiting bumper contacts (those are when the Roomba bumps into something, backs up, and starts off again in a new direction). So the Roomba learned to drive backwards, since it doesn't have bumpers on the back. Not really any faster and perhaps a little wearing on the device's housing, since it will still bump into things.
Another person set up a neural net that will "reward" a self-driving car that it is able to drive faster. So the net began driving the car around in small but speedy circles.
Perhaps a good thing to remember if we want to understand what it might mean to literally take things "literally,"
No comments:
Post a Comment