I believe that self-driving cars will be ready for the roads before the general population is ready to embrace them. Self-driving cars have been shown to be safer than human drivers; however, they’re not perfect, as the fatal Tesla accident in Florida showed. The benefits of self-driving cars are easy to see. With our hands off the steering wheel and our eyes off the road, we could be reading a book, watching a movie, taking a nap, etc. Long work commutes would be forever changed. However, current self-driving cars are still a long way from that reality.

Even though Tesla says its cars have Autopilot, they aren’t ready to drive completely on their own. The article “Tesla’s ‘Autopilot’ Will Make Mistakes. Humans Will Overreact.” explains, “The company (Tesla) says: ‘Always keep your hands on the wheel. Be prepared to take over at any time.’ But if you call your system ‘Autopilot,’ you can’t be surprised when some drivers watch a movie while using that mode. We all want self-driving cars so badly that some people are behaving as if they’re already here.”

The article “What we know and what we don’t know about accidents involving self-driving cars” discusses the accidents that Google’s self-driving cars have been involved in. The director of Google’s self-driving car program, Chris Urmson, said that not once was their self-driving software the cause of an accident; in each instance, it was human error. He also explains that even when an accident wasn’t their fault, it was something to learn from. I read the comments on this article, and someone made an interesting point: it seems unfair to judge the abilities of self-driving cars based on accidents while the software is still in development and not available to the general public. Just like any software still in its testing phase, of course there are going to be some errors and crashes.
Google has not come out and said that its cars are ready for roads outside of California, so why are we surprised when they make mistakes? This same article also brought up a moral dilemma: “What would happen if an autonomous car were faced with a decision to either drive off the road into a pedestrian or to collide with a school bus full of children?” It also mentioned that self-driving cars are better at staying within the speed limit and at sensing vehicles in blind spots, so this situation is less likely than if a human driver were at the wheel. But if it were to occur, how would a self-driving car be held accountable compared to a human driver? I think the issue many people would have with the self-driving car’s decision to hit either the bus or the pedestrian is that the choice was programmed. Someone at some point decided the logic that would make this decision. So would the programmer be held accountable?