Many people rely on the pharmaceutical industry to provide reliable treatments, including medicines, whenever they have an ailment. Some are provided with whatever they need and continue to live their lives undisturbed. For others, nothing can be done except to comfort them and ease their suffering. And for a select few, there exist treatments that could heal them, but because those treatments are unproven, they are not always available.

Is it ethical to let somebody suffer who willingly wants to undergo an experimental procedure, just because that procedure is untested? Join us as we discuss Experimental Medicine.

Despite the potential to provide teaching without restriction of time or place, educational technology has seen mixed results. Students have gained opportunities to learn at their own pace and study curricula that their schools may not otherwise be able to provide. On the other hand, the effectiveness of their learning depends on the quality of the technology, which may be outdated or poorly designed. For instance, many teaching programs use simple, procedural methods to instill knowledge, which research has found to be less effective in some cases than a focus on concepts and critical thinking.

What are the differences between effective and ineffective teaching programs? What benefits do applications of technology have over traditional teaching methods, and what is the proper place of technology in education?

Mathematical Paradoxes in Democratic Election Systems - Why the US Might Want to Keep the Electoral College

This week, we are hosting J. Gregory Yoest's lecture on the decision science involved in election systems, particularly the USA's electoral college. There has been much consternation and debate regarding the electoral college, especially after two recent elections where the victor did not lead in the popular vote. Research by 20th century social choice theorists, however, generated several arguments in favor of the current system. Join us this Thursday as Mr. Yoest examines various possible election systems and compares them against the electoral college.
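As a taste of the paradoxes social choice theorists study (this particular example is our own illustration, not taken from Mr. Yoest's lecture), consider Condorcet's voting paradox, in which majority rule produces no consistent winner:

```python
from itertools import combinations

# Condorcet's paradox: three voters with these preference orders
# produce a majority cycle -- no candidate beats every other head-to-head.
ballots = [
    ["A", "B", "C"],  # voter 1 prefers A > B > C
    ["B", "C", "A"],  # voter 2 prefers B > C > A
    ["C", "A", "B"],  # voter 3 prefers C > A > B
]

def pairwise_winner(x, y):
    """Return whichever of x, y a majority of voters ranks higher."""
    x_votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if x_votes > len(ballots) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y)}")
# A beats B and B beats C, yet C beats A -- a cycle.
```

Every candidate loses some pairwise majority vote, so "the will of the majority" is not even well defined here; results like this underlie the arguments that no election system is paradox-free.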


Polling in recent years has shown some major inaccuracies. Back in 2012, many polls predicted the presidential election to be far closer than it actually was. This summer, Bernie Sanders confounded pollsters by taking a 1.5-percentage-point victory in Michigan’s Democratic primary election, while the polls had predicted a 21-point victory for Hillary Clinton on average. And finally, in last month’s presidential election, Donald Trump claimed victory via key swing states in the electoral college, despite polls estimating that most of those states would swing in Clinton’s favor. These and even more flagrant polling misses, including the Scottish independence and European Union exit referenda in Britain, suggest a widespread systematic error in current polling.

Experts have cited reliance on landline numbers as a major sampling bias for polling. Many people have abandoned use of landline phones in favor of cell phones in recent years, but polls continue to primarily survey landline numbers due to a federal law against automated calls to cell phones. As a result, polls tend to underrepresent populations who favor cell phones over landlines, such as young adults and lower income groups. The sample is further limited by landline users who distrust automatic or unrecognized callers and thus do not respond to polling calls or ask to be placed on do-not-call lists. Even when polls try countermeasures, they introduce new biases, such as adjusting weights to match other polls or oversampling heavily ideological groups. What other problems do today’s polls face, and how should they seek to remedy them?

Read more:
The Week: The problem with polls

Several universities and private companies have been interacting in ways that worry proponents of academic purity. As tech companies recruit top minds in fields like Artificial Intelligence, academia loses some of its best researchers while industry gains greater expertise to propel itself forward. Some decry this poaching practice as the cause of a brain drain that will set back research by years just so that companies can increase their profits. Others support this recruitment strategy for empowering scientists with a competitive choice, creating a free market of employment that may ultimately improve fields and draw more new researchers.

Does the industrial practice of recruiting researchers from academia harm or help fields of research? Is there a good inherent in academic research that industry's interference detracts from? Do relations between academic and industrial parties break through some stagnation that academia suffers when in isolation?

Read more:

Nest, part of Alphabet (née Google), recently announced that they'd be shutting down the servers necessary for a home automation product that they acquired several years ago. While tech devices routinely have support discontinued, this case is unique in that the product will cease to function entirely for all users. Furthermore, this is not just some fledgling startup that must discontinue a service because it's out of business. This is Alphabet, one of the world's most valuable companies.

As our devices become increasingly smart and connected, what guarantees should we have that they'll continue to work? Companies cannot support these devices forever. By having devices that rely more and more on external servers, are we giving these companies too much power and putting ourselves at risk of these potentially critical products ceasing to function at the companies' whims? Could a subscription-based model improve things - for example, renting a device instead of owning one?

Read More:

Watch comedian John Oliver report on this debate here.

On February 16th, the U.S. District Court in California gave the FBI a warrant requiring Apple to install a specialized update on the San Bernardino shooter’s iPhone. This hypothetical software update would not compromise the encryption scheme used in iPhones, but simply disable certain password entry features that make a brute-force approach to opening the phone risky. Still, Apple argues that creating this software, built only to compromise the security of the dead shooter’s phone, would nevertheless jeopardize the security and privacy of all iPhones depending on the same features.
The FBI unquestionably has jurisdiction to break into the terrorist’s phone, and Apple has already cooperated in providing the terrorist’s cloud data. All the FBI needs to access the encrypted data on the phone is the correct password for the lock screen. The problem that it faces, however, is that the shooter may have enabled a feature that deletes the phone’s decryption key after 10 incorrect attempts to unlock the phone, thus rendering the data almost permanently inaccessible. If Apple disables that feature via a tailor-made update, then the FBI can simply use brute-force guessing to unlock the phone. Thus the U.S. District Court has invoked the All Writs Act to order Apple to patch the shooter’s phone as part of the investigation.
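To see why the auto-erase feature is the crux, here is a back-of-the-envelope sketch. The passcode length and per-guess delay below are illustrative assumptions on our part, not details from the court filings:

```python
# Rough arithmetic: why disabling the 10-try wipe makes brute force trivial.
# Assumes a 4-digit passcode and ~80 ms per hardware-enforced guess,
# both illustrative figures.
passcodes = 10 ** 4             # 10,000 possible 4-digit passcodes
seconds_per_guess = 0.08        # assumed hardware-enforced delay per attempt
worst_case_hours = passcodes * seconds_per_guess / 3600
print(f"{passcodes} passcodes, worst case ~{worst_case_hours:.2f} hours")
```

Under these assumptions, exhausting every 4-digit passcode takes well under a day, so the erase-after-10-failures feature is effectively the only thing standing between the FBI and the data.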

But other criminals might also be using this feature to hide their information while U.S. law enforcement holds their phones. If Apple complies with the FBI’s warrant, then other investigations could request similar tailor-made updates to facilitate breaking into more phones. So long as the requests persist, Apple would continue to hold, in CEO Tim Cook’s terms, a “master key” that would allow any iPhone to be opened, criminal or not. There is even the possibility that a malicious actor might somehow obtain the “master key,” putting countless iPhone owners at risk of being hacked.

Does the potential for widespread security and privacy breaches, depending on Apple’s own security, outweigh the potential to gain critical intelligence on a foreign terrorist group? Is the U.S. Government justified in using the All Writs Act, a law dating back almost to the start of our Constitutional government, to require software companies to compromise their own software features, encryption or otherwise? Come and discuss all of this and more this Thursday at Pugwash.

Read more:

As always, free food and drinks will be provided!

Gun rights and restrictions are a topic of extreme contemporary contention. Let's take the Pugwash perspective on this topic.

According to the Atlantic article "Is There Such a Thing as a Smart Gun?", companies are attempting to create technological solutions to the moral and practical problems posed by firearms. The idea of smart guns is that, like your iPhone, only authorized users will be able to use the device. Unlike your iPhone (hopefully), a malfunction that allows unauthorized users access, or denies authorized users access, could mean unnecessary deaths.

Smart guns will authenticate in one of two ways. One method is fingerprint scanning. However, this requires relatively clean hands, a prerequisite that may not be met in a firefight. The other option is RFID. Yet this opens the gun to hacking, a risk very few gun owners will be willing to take. Furthermore, increased gun technology means increased complexity of weapons. Many gun owners take solace in the simplicity of their weapon, in that they can easily understand how it operates. A gun with computing technology could diminish the folksiness of firearms. In addition, more technology could increase the risk of failure.

Do smart guns promise a technological solution to our country's seemingly irreconcilable disagreement on weapons? Or are they a false prophet: a technology that will only introduce further problems and complications into an already volatile situation?

As always, free pizza and drink will be provided. 


The B61-12 is an atomic bomb with missile guidance systems and a low, configurable yield. As part of a revitalization program that will "Modernize to Downsize" our nuclear arsenal, the US Government plans to convert several older, more powerful models into this more controlled version of the A-bomb. But could this mini-nuke actually represent an even greater threat?

Far less destructive than many bombs in our arsenal, this mini-nuke's accuracy and dial-a-yield system nevertheless make it incredibly lethal and effective at destroying a specific target while minimizing collateral damage and nuclear fallout. With more reliable weapons, the current administration argues, we can maintain the same presence with fewer nukes. On the other hand, because it need not cause catastrophic damage, the B61-12 might not evoke the same moral qualms as the larger bombs left sitting in our stockpile. Could the capability to launch a weaker nuke tempt us to finally reintroduce the atomic bomb to warfare after seven decades of resisting the use of horrendously powerful nuclear weapons?

As always, free pizza and drink will be provided.

Read More:
New York Times: As U.S. Modernizes Nuclear Weapons, ‘Smaller’ Leaves Some Uneasy
National Interest: The Most Dangerous Nuclear Weapon in America's Arsenal

What if we could eradicate diseases by permanently modifying mosquitoes? What if we could permanently modify humans to accomplish a similar goal?

Scientists are working on “gene drives,” which cause genetic changes that are passed on generation to generation, often with close to 100% success rates. The first successful instance was changing brown fruit flies to yellow. Although this may not seem particularly important, it could lead the way to changing the genes that allow mosquitoes to carry malaria or West Nile virus, among other diseases.

Gene drives aren’t just limited to insects, though. Changing genes permanently, for an organism and all its offspring, is something that could theoretically be done to any creature using a similar method. Plants that produce their own pesticides or animals that become poisonous to invasive species could be released into the wild, where the changed genes would quickly propagate throughout natural populations.
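The mechanics of that propagation can be sketched with a toy model (a deliberate simplification, not a real population-genetics simulation): under ordinary inheritance an edited allele reaches only half of a carrier's offspring, but a drive allele converts the partner chromosome in carriers, so its frequency compounds every generation.

```python
# Toy model of why a gene drive spreads where a normal edit would not.
# A drive allele copies itself onto the partner chromosome in heterozygotes,
# so carriers pass it to (nearly) all offspring instead of half.
def allele_frequency(generations, p0=0.01, drive=True):
    p = p0  # initial frequency of the edited allele (here, 1% of the population)
    for _ in range(generations):
        if drive:
            # Heterozygotes (frequency 2p(1-p)) are converted to homozygotes,
            # so the drive allele's frequency becomes p^2 + 2p(1-p).
            p = p * p + 2 * p * (1 - p)
        # Without a drive (and no fitness advantage), p stays unchanged.
    return p

print(f"normal edit after 10 generations: {allele_frequency(10, drive=False):.3f}")
print(f"gene drive after 10 generations:  {allele_frequency(10, drive=True):.3f}")
```

Starting from just 1% of the population, the drive allele approaches 100% within about ten generations, while the ordinary edit stays at 1%; this is the "easy to spread" property discussed below.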

However, this comes with a danger: is changing an entire population’s genetics a good thing? Can we predict the outcomes well enough to know that we won’t be introducing something even worse than what we already have? The idea of a gene drive is that it’s easy to spread, but on the flip side, it’s hard to combat if something goes awry.

The idea of having an “off switch” has been proposed, but isn’t as developed as the gene drives themselves. Even the act of modifying so many organisms’ DNA could be dangerous, as new mutations could spring up and hitchhike along with the engineered modifications. Even more interesting to consider — and of course, less of an immediate concern, as the most interesting topics usually are — is the idea of using gene drives on humans. Would it be ethical to send a gene drive through the human population that made people less susceptible to cancer or other diseases with genetic causes? The effects of the gene drive would last for generations, and it would be impossible to receive informed consent from every individual affected, but the potential for eliminating certain diseases is a powerful draw.

Suggested reading:
Powerful 'Gene Drive' Can Quickly Change An Entire Species
The Risks of Assisting Evolution


Subscribe to Pugwash