Nest, part of Alphabet (née Google), recently announced that it will shut down the servers necessary for a home automation product it acquired several years ago. While tech devices routinely have support discontinued, this case is unusual in that the product will cease to function entirely for all users. Moreover, this is not some fledgling startup forced to discontinue a service because it has gone out of business. This is Alphabet, one of the world's most valuable companies.
As our devices become increasingly smart and connected, what guarantees should we have that they'll continue to work? Companies cannot support these devices forever. As devices come to rely more and more on external servers, are we giving these companies too much power and putting ourselves at risk of potentially critical products ceasing to function at the companies' whims? Could a subscription-based model improve things - for example, renting a device instead of owning one?
Watch comedian John Oliver report on this debate here.
On February 16th, a U.S. District Court in California issued an order, at the FBI's request, requiring Apple to install a specialized update on the San Bernardino shooter’s iPhone. This hypothetical software update would not compromise the encryption scheme used in iPhones, but simply disable certain password-entry features that make a brute-force approach to opening the phone risky. Still, Apple argues that creating this software, built only to compromise the security of the dead shooter’s phone, would nevertheless jeopardize the security and privacy of all iPhones that depend on the same features.
The FBI unquestionably has jurisdiction to break into the terrorist’s phone, and Apple has already cooperated by providing the terrorist’s cloud data. All the FBI needs to access the encrypted data on the phone is the correct password for the lock screen. The problem they face, however, is that the shooter may have enabled a feature that deletes the phone’s decryption key after 10 incorrect unlock attempts, rendering the data all but permanently inaccessible. If Apple disables that feature via a tailor-made update, then the FBI can simply guess passwords by brute force until the phone unlocks. Thus the U.S. District Court has invoked the All Writs Act to order Apple to patch the shooter’s phone as part of the investigation.
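To get a feel for why disabling the retry limit matters so much, here is a rough back-of-the-envelope sketch. The per-guess cost of about 80 milliseconds (imposed by the phone's key-derivation hardware) and the 4- and 6-digit passcode lengths are assumptions for illustration, not figures from the text above:

```python
# Rough worst-case time to brute-force a numeric passcode, assuming the
# 10-try erase feature and escalating delays have been disabled.
# SECONDS_PER_ATTEMPT is an assumed hardware key-derivation cost per guess.

SECONDS_PER_ATTEMPT = 0.08

def worst_case_hours(digits: int) -> float:
    """Hours to try every possible passcode of the given length."""
    attempts = 10 ** digits  # search space for an all-numeric passcode
    return attempts * SECONDS_PER_ATTEMPT / 3600

for digits in (4, 6):
    print(f"{digits}-digit passcode: up to {worst_case_hours(digits):.1f} hours")
```

Under these assumptions, a 4-digit passcode falls in well under an hour, which is why the auto-erase feature, not the encryption itself, is the real obstacle.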
But other criminals might also be using this feature to hide their information while U.S. law enforcement holds their phones. If Apple complies with the FBI’s warrant, then other investigations could request similar tailor-made updates to facilitate breaking into more phones. So long as the requests persist, Apple would continue to hold, in CEO Tim Cook’s terms, a “master key” that would allow any iPhone to be opened, criminal or not. There is even the possibility that a malicious actor might somehow obtain the “master key,” putting countless iPhone owners at risk of being hacked.
Does the potential for widespread security and privacy breaches, contingent on Apple’s own security, outweigh the potential to gain critical intelligence on a foreign terrorist group? Is the U.S. Government justified in using the All Writs Act, a law dating back almost to the start of our Constitutional government, to require software companies to compromise their own software features, encryption or otherwise? Come and discuss all of this and more this Thursday at Pugwash.
- The Federalist's Gabriel Malor argues that Apple has no good reason to not make the patch.
- Gizmodo's Kate Knibbs argues that the update would jeopardize everyone's security.
As always, free food and drinks will be provided!
Gun rights and restrictions are among today's most contentious topics. Let's take the Pugwash perspective on this debate.
According to the Atlantic article "Is There Such a Thing as a Smart Gun?", companies are attempting to create technological solutions to the moral and practical problems posed by firearms. The idea of smart guns is that, like your iPhone, only authorized users will be able to operate the device. Unlike with your iPhone (hopefully), a malfunction that allows unauthorized users access, or denies authorized users access, could mean unnecessary deaths.
Smart guns authenticate in one of two ways. One method is fingerprint scanning. However, this requires relatively clean hands, a prerequisite that may not be met in a firefight. The other option is RFID, yet this opens the gun to hacking, a risk very few gun owners will be willing to take. Furthermore, more gun technology means more complex weapons. Many gun owners take solace in the simplicity of their weapons, in that they can easily understand how they operate. A gun with computing technology could diminish the folksiness of firearms. In addition, more technology could increase the risk of failure.
Do smart guns promise a technological solution to our country's seemingly irreconcilable disagreement over weapons? Or are they a false prophet: a technology that will only introduce further problems and complications into an already volatile situation?
As always, free pizza and drink will be provided.
The B61-12 is an atomic bomb with missile guidance systems and a low, configurable yield. As part of a revitalization program that will "Modernize to Downsize" our nuclear arsenal, the US Government plans to convert several older, more powerful models into this more controlled version of the A-bomb. But could this mini-nuke actually represent an even greater threat?
Far less destructive than many bombs in our arsenal, this mini-nuke's accuracy and dial-a-yield system nevertheless make it incredibly lethal and effective at destroying a specific target while minimizing collateral damage and nuclear fallout. With more reliable weapons, the current administration argues, we can maintain the same presence with fewer nukes. On the other hand, precisely because it need not cause catastrophic damage, the B61-12 might not evoke the same moral qualms as the larger bombs sitting in our stockpile. Could the capability to launch a weaker nuke tempt us to finally reintroduce the atomic bomb to warfare after seven decades of resisting the use of horrendously powerful nuclear weapons?
As always, free pizza and drink will be provided.
New York Times: As U.S. Modernizes Nuclear Weapons, ‘Smaller’ Leaves Some Uneasy
National Interest: The Most Dangerous Nuclear Weapon in America's Arsenal
What if we could eradicate diseases by permanently modifying mosquitoes? What if we could permanently modify humans to accomplish a similar goal?
Scientists are working on “gene drives,” which cause genetic changes that are passed on from generation to generation, often with close to 100% inheritance rates. The first successful instance changed brown fruit flies to yellow. Although this may not seem particularly important, it could lead the way to altering the genes that allow mosquitoes to carry malaria or West Nile virus, among other diseases.
Gene drives aren’t limited to insects, though. Changing genes permanently, for an organism and all its offspring, is something that could theoretically be done to any creature using a similar method. Plants that produce their own pesticides or animals that are poisonous to invasive species could be released into the wild, where the changed genes would quickly propagate through natural populations.
However, this comes with a danger: is changing an entire population’s genetics a good thing? Can we predict the outcomes well enough to know that we won’t be introducing something even worse than what we already have? The idea of a gene drive is that it’s easy to spread, but on the flip side, it’s hard to combat if something goes awry.
The idea of having an “off switch” has been proposed, but isn’t as developed as the gene drives themselves are. Even the act of modifying so many organisms’ DNA could be dangerous, as mutations could spring up that could hitchhike along with the engineered modifications. Even more interesting to consider — and of course, less of an immediate concern, as the most interesting topics usually are — is the idea of using gene drives on humans. Would it be ethical to send a gene drive through the human population that made people less susceptible to cancer or other diseases with genetic causes? The effects of the gene drive would last for generations, and it would be impossible to receive informed consent from every individual affected, but the potential for eliminating certain diseases is a powerful draw.
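The reason a gene drive is "easy to spread" is that it cheats Mendelian inheritance: instead of a heterozygote passing the allele on half the time, the drive copies itself so it is inherited nearly always. A minimal deterministic sketch makes the difference vivid. The starting frequency (1%) and the conversion rate (c = 0.95) are illustrative assumptions, not figures from the text above, and the model ignores fitness costs and mutation:

```python
# Deterministic model of drive-allele frequency under random mating.
# A heterozygote transmits the drive allele with probability (1 + c) / 2;
# c = 0 recovers ordinary Mendelian inheritance, c near 1 models a drive.

def next_freq(p: float, c: float) -> float:
    """Drive-allele frequency in the next generation."""
    # homozygotes (p^2) always transmit; heterozygotes (2p(1-p)) transmit
    # with probability (1 + c) / 2
    return p * p + p * (1 - p) * (1 + c)

def spread(p0: float, c: float, generations: int) -> list[float]:
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_freq(freqs[-1], c))
    return freqs

mendelian = spread(0.01, 0.0, 10)   # ordinary allele: frequency stays put
drive = spread(0.01, 0.95, 10)      # drive allele: races toward fixation
print(f"after 10 generations: mendelian {mendelian[-1]:.2f}, drive {drive[-1]:.2f}")
```

Under these assumptions, an allele introduced at 1% is essentially fixed in the population within ten generations, while the same allele under ordinary inheritance never budges. This asymmetry is also why an "off switch" is hard: anything meant to undo the drive must spread just as aggressively.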
In recent months, there has been a resurgence of interest on the impact of technology on the labor market. The usual story is about a near future in which a large fraction of the workforce might be made obsolete by various forms of technology and automation. I will begin the lecture with evidence demonstrating the effect technology has had on labor markets in the recent and remote past. I will then revisit the usual story of human obsolescence, in particular looking at its potential “blind-spots” and hidden assumptions. We will discuss how different assumptions might generate different outcomes for the future of the workforce. The lecture will conclude by reviewing policy recommendations for dealing with technological change, focusing on traditional fiscal instruments such as income taxation and unemployment insurance.
Laurence Ales is an Associate Professor of Economics at the Tepper School of Business. He has a Ph.D. in economics from the University of Minnesota and a B.S. in physics from Tor Vergata University. His research interests include inequality, macroeconomics, public finance, and contract theory; recent publications include “Technical Change, Wage Inequality, and Taxes” and “A Theory of Political and Economic Cycles.” Ales is the recipient of the 2012 Richard M. Cyert Award for Excellence in Teaching.
This lecture is hosted in collaboration with the School of Computer Science.
Pugwash is a student discussion group focused on the ethics of science and technology, with free pizza at every meeting.
As students at Carnegie Mellon, we are offered a front-row seat to the promise of computers and robotics in the 21st century. Suddenly, tasks that our society had considered reserved for humans - such as medical diagnosis, driving, and even certain forms of writing - do not seem beyond the ability of currently implementable algorithms.
It is possible that within the next few decades, we will see vast sectors of human labor replaced by cheaper-than-flesh electronics.
What will be the economic impacts of this? Will a majority of society become unemployed? Or will we stay ahead of the tide of automation, shuffling society into ever more "human" work? And if most of society does become unemployable, can we move to a sustainable post-work world?
On a more psychological and philosophical level: can the mindset of the modern man tolerate being inferior to a bot? Are our minds so conditioned toward a life of work that the absence of labor will usher in a mass-scale existential crisis? Or is this the beginning of something far greater than the daily grind?
Join us to discuss the ethics of technological automation. And don't fret if you can't make it; we have something to take your place.
Frequently our weekly discussions have centered on how developments in science and technology affect our privacy. Most recently we saw this in our discussions of Free Software, the Internet of Things, and David Cameron's response to the Charlie Hebdo shootings.
But for this meeting we want to step back from the current events that generally guide our discussions and ask the fundamental underlying question: does privacy even matter? While some people seem perpetually concerned with maintaining their privacy, others don't seem to value it at all. We often hear the latter group argue that privacy isn't important because "they have nothing to hide". In many ways this is a very understandable sentiment.
So please join us this Thursday to debate whether privacy really matters. We look forward to seeing you there.
Ready to elect next year's officers? Then join us at our meeting on Thursday for our annual elections.
Ready to BE one of next year's officers? While we have a few nominees already, three of the current officers are seniors, so we need some more members to step up. In particular, we're looking to fill the following positions: President, Vice President, Treasurer, Secretary, and Scheduler (the person who prepares the emails!)
We've become heavily reliant on satellites for a wealth of services, including communication, navigation, and weather forecasting. However, the roughly five thousand satellites launched to date have started to congest the space around Earth. Furthermore, only seven percent of the seventeen thousand monitored man-made objects in Earth orbit are still functional; the rest are mostly debris created by satellite explosions and collisions. If the accumulation of space debris is left unregulated, spaceflight and satellite operations will become significantly more challenging as the probability of collisions increases dramatically.
This video by the European Space Agency provides an overview of the problem and some of the possible solutions.
At this week's Pugwash, we'd like to ask:
- How problematic is space debris?
- What measures should we pursue to reduce space debris?
The era of smart devices is here. Devices like televisions, lighting, thermostats, and watches are becoming increasingly computationally sophisticated. Together they form what has been termed the "internet of things". As the name suggests, many of these new smart products connect to the internet, allowing individuals and companies to remotely control and administer the devices and to collect data from them. However, given all of the personal data that these devices collect and the lack of regulation within the industry, many individuals, as well as the FTC, have expressed privacy concerns.
At this week's Pugwash, we'd like to ask:
- How useful is this new internet of things?
- How problematic are the potential privacy issues?
- What legal frameworks should be in place to protect consumers?