We live in an age of unprecedented technological growth. While many students at the University have grown up in a high-tech world, that reality is relatively new. Consider that the computing power required to send the first Apollo mission to the moon now fits comfortably inside the average smartphone.
But what happens when we reach the next stage of technological development? Ray Kurzweil, renowned futurist and Director of Engineering at Google, recently sat down with the New York Times for an interview in which he made several predictions about how technology will develop over the next few decades. He predicted, for example, that by 2030 nanotechnology will progress to the point that mankind will be able to cure disease simply by letting nanobots swim through the bloodstream and support the immune system.
Artificial intelligence is another area of possible explosive growth. By 2029, Kurzweil predicted, computers will possess human-like emotional intelligence and become as convincing as real people. That is a remarkable leap from where we are now, when even publicly available programs like Siri can only use voice recognition and sentence comprehension to respond accurately to commands.
If Kurzweil’s predictions are correct, however, it is only a matter of a few decades before Siri begins to tell you that she worries about your heavy workload, or that she thinks you should relax more. Where, then, does the line between human and machine blur?
Setting aside the question of artificial intelligence, other developments could lead in equally amazing directions.
New developments in fabrication technology have led to the slow rise of the 3D printer and 3D scanner, devices capable of using high-durability resins to build or copy nearly any object, given the proper computer software. Some implications are good. Finding rare parts for machines has become much easier, as it is now simply a matter of downloading the right specifications and printing a new one. Invention is also easier, as an initial concept can be programmed into a computer and then printed into reality.
Other implications are more challenging to consider. Defense Distributed, a Texas-based nonprofit corporation, has successfully developed the first printable firearms and firearm parts. Now, anyone with a 3D printer can download and manufacture functional guns on their own. In the context of the current debate over the Second Amendment, this means it is now nearly impossible for even the most adamant gun control advocates to succeed in getting guns off the streets.
Ethically, I do not know if society is ready for many of these developments. Our limited perspective has blinded us to our own advancements. Rather than marvel that a device in our pockets can instantly send or receive information from hundreds of miles away, we get frustrated when the program that automatically corrects our spelling and grammar makes a mistake and causes a socially awkward moment.
How, then, would the average user respond to the prospect of AI on the scale that Kurzweil suggests? Would he or she navigate the tricky questions about emotional intelligence and moral responsibility before using such an emotionally aware machine? Or would he or she simply treat that machine as another replaceable and disposable device, completely skipping over the moral problems in the process?
With technology like 3D printing, what happens to concepts like copyright? If physical objects can be pirated like a song or a video, how do we address the ethics of intellectual property and ownership? As for gun printing, how will we respond to the challenge of weapons manufactured at will? Will possession of the data files needed to print a gun be made illegal to combat their proliferation?
Thankfully, we still have several decades to prepare ourselves for many of these questions. Others, however, such as those raised by the 3D printer, need to be seriously addressed soon. It is our sense of morality that will guide our use and understanding of these technologies, either to our own peril or to our growth and success as the human race.
David Giffin is a second year Masters in Theological Studies student at the Candler School of Theology from Charleston, Ill.