Recently, Elon Musk (the entrepreneur behind SpaceX, Tesla, and PayPal) and Stephen Hawking have been sounding the alarm about the dangers of artificial intelligence. They believe that artificial intelligence carries the seeds of our destruction. Their concerns are centred on the principle that technology evolves far faster than humans do, and that machines will soon surpass us and may ultimately destroy us.
Should there be laws that regulate the development of artificial intelligence? Would these laws even make a difference? Is it possible to embed parameters that avert an apocalyptic scenario? Can we embed morals into the design of artificial intelligence? Are people just overreacting?
Elon Musk is doing more than drawing attention to the risks of artificial intelligence. He is donating $10 million to AI safety research through the Future of Life Institute.