
New Perspectives on the Physics of Black Holes

A colorful graphic illustration of a black hole.

The extreme properties of black holes make them ideal laboratories for thought experiments, allowing us to test our best theories against the edge of what we know. Paradoxes thus brought to light are shaking up the world of theoretical physics in exciting ways.

Published August 15, 2013

By Diana Friedman

The New York Times recently ran a fascinating article on the black hole firewall paradox. The puzzle and the contradictions it seems to imply are being debated this week at UC Santa Barbara’s Kavli Institute for Theoretical Physics.

The crux of the issue is a conflict between central tenets of general relativity theory and quantum mechanics. Basically, either the equivalence principle (a foundational concept for general relativity) doesn’t hold, entangled particles can “cheat” on each other, or information can be lost. The latter two are both forbidden by quantum mechanics.

This probably needs some explanation! Joseph Polchinski, a theoretical physicist at the Kavli Institute and one of the authors of the paper that pointed out the firewall paradox, describes the conundrum in a guest blog for Cosmic Variance.

Briefly, in 1974, Stephen Hawking showed that, contrary to the nomenclature, black holes are not black. In fact, they radiate a constant stream of particles known now as Hawking radiation. When virtual particle pairs pop into existence near an event horizon, one can fall into the black hole, leaving the other to radiate away from the black hole rather than annihilate with its twin.
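For reference, the standard result of Hawking's calculation (the formula itself is not quoted in the article) is a radiation temperature inversely proportional to the black hole's mass,

T_H = \frac{\hbar c^{3}}{8 \pi G M k_B},

which is why the effect is utterly negligible for astrophysical black holes: a black hole of one solar mass radiates at only about 6 \times 10^{-8} kelvin.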

Problematically, Hawking said, the radiation would be totally random, containing no information about the states of its composite particles and their anti-twins, which is anathema to quantum mechanics. “There is strong evidence that [the conservation of quantum information] is an inviolable principle of physics, and we don’t really know how to make sense of quantum mechanics without it,” says Caltech theoretical physicist John Preskill in this Quantum Frontiers post.

Different Conditions, Different Outcomes

Juan Maldacena, a theoretical physicist now with the Institute for Advanced Study, explains, “In quantum mechanics (as in classical mechanics) the information about a system is not lost. Different initial conditions lead to different outcomes…The radiation coming out of black holes would be completely thermal and devoid of the information of what fell into black holes. Thus, black holes appear to be sinks of information, perverse monsters that threaten the fundamental laws of quantum mechanics.”

Maldacena implied a solution with the anti-de Sitter/conformal field theory correspondence (AdS/CFT for short), which offers elegant mathematical demonstrations of the holographic principle. The idea is that everything occurring in 3D space is actually a projection of things happening on a 2D boundary, and you can translate between the two using the AdS/CFT. If that sounds conceptually bizarre and unconvincing, fair enough! But the math is so compelling as to have been near-universally accepted in the theoretical physics community.

“This meant that even 3D black-hole evaporation could be described in the 2D world, where there is no gravity, where quantum laws reign supreme and where information can never be lost. And if information is preserved there, then it must also be preserved in the 3D world. Somehow, information must be escaping from the black holes,” explains Zeeya Merali in Nature.

The “how” turned out to be less straightforward. (I know: Straightforward?! Bah!) Stanford physicist Leonard Susskind proposed that information could be salvaged from black holes via quantum entanglement between radiated particles. But this ends up violating another core concept of quantum mechanics, the monogamy of entanglement. A radiating particle can’t be entangled with another, earlier radiated particle, because it was born entangled with its anti-twin (remember, the one that fell into the black hole?). Preskill explains the monogamy-of-entanglement issue in more detail here. (H/T Jennifer Ouellette)

Some Revolutionary Implications for Cosmology

Polchinski—along with colleagues Ahmed Almheiri, Donald Marolf, and James Sully—published a paper stating that, in order to preserve information, the entanglement between the virtual particles formed near the event horizon has to be severed. This is where the challenge to relativity comes in. Merali elaborates, “‘It’s a violent process, like breaking the bonds of a molecule, and it releases energy,’ says Polchinski.

The energy generated by severing lots of twins would be enormous. ‘The event horizon would literally be a ring of fire that burns anyone falling through,’ he says. And that, in turn, violates the equivalence principle and its assertion that free-fall should feel the same as floating in empty space—impossible when the former ends in incineration.”

A possible solution to the puzzle lies in the idea, formulated by Susskind and Maldacena, that wormholes connect particles on either side of an event horizon. “The conjecture seems to allow us to view the early radiation with which the black hole is entangled as a complementary description of the black hole interior,” explains Preskill. This would mean that one particle could be faithfully entangled with two joined particles on either side of the event horizon, because the connected particles would actually be the same.

This could have some revolutionary implications for cosmology—the wormholes connecting all these entangled units of information might turn out to be the very stuff of space! “If true, this insight would be a step toward a longtime dream of theorists of explaining how space and time emerge from some more basic property of reality, in this case, bits of quantum information,” explains NYT author Dennis Overbye.

Also read: The Anthropic View of the Universe

New Scientific Explorations on the Red Planet

A photo of the planet Mars.

The Curiosity rover just celebrated its first Martian anniversary. Between Curiosity’s especially public mission and Commander Chris Hadfield’s amazing updates from the International Space Station, it’s been a great year for engagement with space science!

Published August 08, 2013

By Diana Friedman

Last week was the one-year anniversary of the Curiosity rover’s landing on Mars. To celebrate, NASA and the Jet Propulsion Lab released this video (H/T Phil Plait, who’s written great posts on the rover’s activities), providing a glimpse into another world and sharing Red Planet highlights from the last year. In that time, Curiosity has already found evidence of an ancient riverbed that may have been capable of supporting life and provided clues about the thinning of the Martian atmosphere.

For more on the rover’s current and future work, see this New Scientist article. You can also watch Dr. Ashwin R. Vasavada, Deputy Project Scientist at the Mars Science Laboratory, reveal some of Curiosity’s recent results and discuss upcoming Martian science via webcast Thursday, August 15.

Dustyn Roberts, a roboticist who helped design and build Curiosity, talks about the engineering involved and more in this podcast. Curiosity’s landing on Mars was fraught, as explained awesomely in NASA’s 7 Minutes of Terror video. It takes signals about a quarter of an hour to transmit from Mars to Earth. That’s a seriously tense window of uncertainty while you wait to find out whether something you made is successfully on another planet or a smoldering wreck! The elation that erupted at NASA headquarters upon confirmation of Curiosity’s smooth descent is contagiously exciting.

“Hitch a Ride to Mars”

If you want to experience that rush for yourself, it’s just gotten a little easier. Tiny DIY satellites called CubeSats (cubes measuring just 10 centimeters on a side) are opening up new avenues for citizen scientists to participate in space research. NASA recently partnered with over 100 international government agencies and NGOs to sponsor the International Space Apps Challenge. The challenge “Hitch a Ride to Mars” invites teams to design a Martian mission using DIY CubeSats.

As Congress stymies plans to lasso an asteroid for research, it’s encouraging to think about the various new ways in which innovation allows people to engage with space science. 

“All the commercial and private endeavors are great. The competition sparks innovation, and that’s what we need. NASA should be supporting these projects and also doing great basic research. Should we go to Mars? Definitely! Start working on asteroids? Yes!” says astronaut Dr. Charlie Camarda.

“The more people we have up there and the more ideas and challenges we think about, the more inspired people will be to come up with even more ideas and solutions—students and NASA scientists alike. It used to be that only test pilots could go up, but now it’s getting more popular. It’s still really expensive, but I hope soon it will be a more accessible experience. With a more diverse group of minds inspired to think and dream about space, we’ll start to see really great stuff happen.”

To conclude on an inspiring video note, here’s Commander Chris Hadfield’s Space Oddity!

Also read: There’s a Star Man Waiting in the Sky

How Does New York City Prepare for Flooding?

A shot of the lower Manhattan skyline taken from the New Jersey side of the river.

Mayor Bloomberg recently released a report detailing plans to make NYC more resilient in the face of rising sea levels and climate change. Philip Orton, PhD, a research scientist at Stevens Institute of Technology who studies physical oceanography and storm surges, consulted on the report. Here he shares his perspective on the science behind the protection and adaptation strategy.

Published June 23, 2013

By Diana Friedman

Image courtesy of Rawf8 via stock.adobe.com.

Mayor Bloomberg recently released a plan to make NYC better prepared for, and adaptive to, rising sea level and extreme weather threats. The report, titled “A Stronger, More Resilient New York,” is based on input from scientists with diverse areas of expertise as well as community organizations. Dr. Philip Orton, a research scientist at Stevens Institute of Technology who studies physical oceanography and storm surges, describes the report as solidly grounded in science and quantitative methods.

The report takes climate change as an unequivocal given that demands acknowledgement and adaptive action. At a press conference, Mayor Bloomberg explained, “Our city will be much more vulnerable to flooding in the decades ahead…We expect that by mid-century up to one-quarter of all of New York City’s land area, where 800,000 residents live today, will be in the floodplain.”

“If we do nothing, more than 40 miles of our waterfront could see flooding on a regular basis, just during normal high tides…[Hurricane] Sandy cost our City $19 billion in damages and lost economic activity. And we now forecast that a storm like Sandy could cost nearly five times that much by mid-century—around $90 billion.”

“We wish that everyone had agreed that there was this threat from storm surges before Sandy,” says Dr. Orton, “but putting up the protections that we should have put up before is a huge step in the right direction. Just having that attitude that yes, we do get hit by hurricanes, is impactful. Considering sea level rise on top of that, as the Mayor’s plan does, can help protect us from future storm surges.”

Localized Measures

The Coastal Protection chapter of the report outlines the strategies to fortify and defend NYC’s diverse coastal areas. The recommendations focus mainly on local projects (such as restoring and widening sand dune systems, cultivating oyster reefs and wetland areas, and installing tide gates) rather than harbor-wide, in-sea barrier structures. Such large-scale projects, it is estimated, would cost between $20 billion and $25 billion and take decades to construct. Harbor barrier structures would also have hydrodynamic consequences, deflecting damaging storm surges onto vulnerable areas outside the barriers.

Dr. Orton explains, “The fluid dynamics is a definite reason for not building barriers in the harbor. Any barrier raises the flood level somewhere else some amount. At Stevens Institute, we’ve run models and quantified the storm surge increase off of hypothetical barriers due to the reflection of the storm surge back out to sea. Since we’re right next to the open ocean, the extra water radiates out to sea and the storm surge increase is not actually that large. But even a decimeter is still too much for some high-risk, low-lying neighborhoods to find it palatable.”

What Does Such Research Entail?

Will localized measures prove adequate for the task of protecting coastal neighborhoods? According to Dr. Orton, sand dune expansion “is a proven method of effectively reducing flooding. It had a huge positive influence on areas along open shores during Sandy.” Other recommended measures, such as wetland expansion and oyster bed growth, require more research before the degree of their efficacy can be fully understood. Current models indicate that, in larger areas such as Jamaica Bay and New York Harbor, such systems are indeed likely to be vast enough to reduce flooding as well as waves. More quantitative research on natural shorelines and their influence on flooding remains necessary. “The operative word being ‘quantitative,'” Dr. Orton emphasizes.

Dr. Orton elaborates, “It’s hugely interesting that we have the tools to study this now. The same models that are used for storm surge forecasting can be used for testing the effects of adaptation strategies, and the modeling techniques have improved dramatically over recent years, especially as computing power increases. This has been the same with climate modeling. We apply conservation principles for water velocity, momentum, mass, heat, and other variables. Storm surge doesn’t involve biology, carbon cycles, or chemistry, so it’s a simpler problem than climate and these are easier predictions to make,” he said.

“It’s just about the physics of the movement of water and it’s very reliable, given good weather forecasts (though those have a lot of uncertainty). In the fluid dynamics models, you input values for factors like water depths and land elevation, and you add in obstructions to test your adaptation strategies. You can put in barriers or frictional elements in places to simplistically represent wetlands or oysters or whatever, and observe the outcomes.”
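A minimal sketch of the kind of numerical experiment Dr. Orton describes, assuming a one-dimensional channel and entirely invented parameter values (this is a toy illustration in Python, not the Stevens Institute models):

import numpy as np

# Toy 1D shallow-water channel: a surge enters at the open-ocean boundary
# (left) and travels toward the shore (right). A "wetland" is crudely
# represented as a band of increased bottom friction near the shore.
# All parameter values are made up for illustration.
g = 9.81                      # gravity (m/s^2)
nx, dx = 500, 100.0           # 500 cells of 100 m = 50 km channel
depth = 10.0                  # resting water depth (m)
dt = 0.5 * dx / np.sqrt(g * depth)   # CFL-limited time step (s)

eta = np.zeros(nx)            # water surface elevation (m)
u = np.zeros(nx + 1)          # velocity at cell faces (m/s)

drag = np.full(nx + 1, 1e-4)  # background bottom friction (1/s)
drag[int(0.8 * nx):] = 5e-3   # hypothetical wetland band: higher friction

def surge(t):
    # Idealized surge hydrograph at the ocean boundary, peaking at hour 6.
    return 2.0 * np.exp(-((t - 6 * 3600.0) / (2 * 3600.0)) ** 2)

t, t_end, peak = 0.0, 12 * 3600.0, np.zeros(nx)
while t < t_end:
    eta[0] = surge(t)                                   # force the boundary
    # Momentum: du/dt = -g * d(eta)/dx - drag * u
    u[1:-1] += dt * (-g * np.diff(eta) / dx - drag[1:-1] * u[1:-1])
    # Continuity: d(eta)/dt = -depth * du/dx
    eta -= dt * depth * np.diff(u) / dx
    peak = np.maximum(peak, eta)
    t += dt

print(f"Peak water level at the shoreline cell: {peak[-1]:.2f} m")

Increasing the wetland drag and re-running shows the kind of peak-level reduction the report’s natural-shoreline measures aim for, which is exactly the sort of what-if experiment Dr. Orton describes.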

Adaptation Versus Retreat

Reeva Dua at Columbia’s Center for Climate Change Law Blog and Eric Goldstein at the Natural Resources Defense Council Blog point out that there is some controversy over climate change response strategies based on adaptation versus retreat. Some predictions place certain low-lying areas underneath six feet of water by the end of the century. Governor Cuomo has proposed an alternative floodplain buy-out program to acquire property within flood zones and convert it to natural buffers.

“With the accelerated sea level rise that we expect to kick in really rapidly in the next century, eventually there will need to be more plans,” says Dr. Orton. A 500-year coastal flood (a flood with a 0.2 percent chance of occurring at a given location in any given year) in the next century could overwhelm the protections being devised now.
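As a back-of-the-envelope illustration of what that 0.2 percent annual probability implies (my calculation, not a figure from the report), the chance of at least one such flood over a planning horizon of N years is

P = 1 - (1 - 0.002)^{N},

which works out to roughly 18 percent over 100 years and about 63 percent over 500 years, so even a “500-year” event is far from negligible over the multi-decade horizons the report considers.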

There’s likely to be ongoing debate on this point. In the meantime, refusing to cede ground “gives a really strong message of support for all low-lying, high-risk neighborhoods,” says Dr. Orton. “Some of those areas are very high-value, like lower Manhattan, and some of them are sparsely populated or have lower property values, like the shore of Staten Island. Insisting on defending all these areas sends a strong message of solidarity.”

Also read: When Waters Rise: Cross-Border Science for Global Flood Response

The Biological and Legal Implications of Gene Patents

A logo representing gene patenting.

The Supreme Court rules that genes cannot be patented, though cDNA can be. Biologist and lawyer Dr. John Murray discusses the issues involved and the ruling’s ramifications.

Published June 13, 2013

By Diana Friedman

The Supreme Court ruled last week that genes cannot be patented, but cDNA can be. This website provides some background on the case, Association for Molecular Pathology et al. v. Myriad Genetics, Inc., et al. Myriad identified and isolated two genes that are associated with high risk for breast cancer, BRCA1 and BRCA2, and sought patents for the isolated genes. The case hinged largely on a debate over the relation of patents to progress.

The Association for Molecular Pathology and the American Civil Liberties Union argued that granting exclusive claims to genes would stymie further research, limiting what scientists unaffiliated with Myriad would be able to do. On the other hand, such exclusivity incentivizes research and development, the costs of which could be otherwise untenable.

“There have been mixed reactions, everything from ‘Armageddon!’ to ‘meh,'” says Dr. John Murray. Dr. Murray has a PhD in genetics and practices intellectual property law. “I think the truth is somewhere in the middle.”

What Observers Expected

Based on the arguments, the ruling matched what observers expected. It overturns thirty years of patent office decisions in which gene claims were granted, and concern has been expressed that thousands of prior patents might now be endangered. However, points out Daniel Fisher in this Forbes article, many of those patents are old and near expiration.

Dr. Murray elaborates, “The stuff about it being a disaster for biotech is just overwrought. We’re in a post-Human Genome Project and Encode Project world. In the early 90s, you could just ride out into the frontier of the human genome, stake your claim, and they’d give you a patent on it. How many biotech companies are dedicating their resources to isolating individual human genes now? I would guess none or very few, because at this point it’s mostly already been done.

Now it’s more about developing ways of using these genes, diagnostic techniques, or ways of manipulating the DNA. To get a patent, you have to be able to demonstrate that your invention is non-obvious. But when it comes to isolated gene sequences, these days I can just go on my computer and get those, so it’s not at all non-obvious. The kind of research this might have killed has already been done.”

The Impact of the Ruling

The Court was careful to explicitly circumscribe the impact of the ruling. The Opinion states, “It is important to note what is not implicated by this decision. First, there are no method claims before this Court. Had Myriad created an innovative method of manipulating genes while searching for the BRCA1 and BRCA2 genes, it could possibly have sought a method patent…Similarly, this case does not involve patents on new applications of knowledge about the BRCA1 and BRCA2 genes…Nor do we consider the patentability of DNA in which the order of the naturally occurring nucleotides has been altered.”

The compromise seems unlikely to devastate biotech companies. While some companies will take a hit, says Dr. Murray, it won’t be a fatal one. “Any smart biotech company, including Myriad, will have claims on their ‘killer apps’ every which way from Sunday. Moving forward, there will be a real premium in drafting methods claims, identifying what you’re doing differently from other people.”

A Lack of Scientific Understanding

Unless you work for a biotech company with an ossified business model, the most depressing element of this case might be the lack of scientific understanding on display throughout the proceedings. According to Justice Thomas, cDNA is patentable because “the lab technician unquestionably creates something new when cDNA is made.” Dr. Murray calls the ruling “scientifically and intellectually incoherent.” The debated claims cover the information expressed in the exons.

“When exons are separated from introns naturally, it’s unpatentable. If the information is separated in a lab to make cDNA, it’s mysteriously patentable, even though it’s the same information. So you could certainly apply better science, which the Court didn’t do. They were clearly just trying to make sure they didn’t blow up the biotech industry by invalidating cDNA,” says Dr. Murray.

Dalila Argaez Wendlandt, a partner with Ropes & Gray, points out that the DNA/cDNA distinction could allow for technical loopholes. “What if you took that same cDNA sequence and added non-functional introns?” she asks in Fisher’s piece.

Dr. Steven Salzberg, Professor of Medicine and Biostatistics at the Institute of Genetic Medicine at the Johns Hopkins University School of Medicine, highlights some of the Court’s basic errors in this article. “It’s troubling that the highest court in the land can’t get even the basic facts of molecular biology right when writing a decision that has such fundamental importance to genetic testing, the biotechnology industry, and health care,” he says.

On the plus side, screening for the BRCA genes is likely to become much more affordable, says John Wilbanks, chief commons officer at Sage Bionetworks, in this Wired article. “By making that data free, there is a lot of room for public good and public and private innovation.”

Also read: Law Experts Give Advice for Scientific Research

Can We Resurrect the Wooly Mammoth with Science?

A colorful painting of a wooly mammoth.

What does the recent discovery of the “best preserved mammoth in the history of paleontology” mean for de-extinction? Is it possible to clone these prehistoric beasts?

Published June 2, 2013

By Diana Friedman

Wooly mammoths near the Somme River, part of an American Museum of Natural History mural. Image courtesy of Charles R. Knight via Wikimedia Commons. Public Domain.

The purportedly “best preserved mammoth in the history of paleontology” was discovered by a team of Russian scientists in Siberia last week. According to lead researcher Semyon Grigoriev, head of the Museum of Mammoths of the Institute of Applied Ecology of the North at the North Eastern Federal University, “when we broke the ice beneath her stomach, the blood flowed out.” This is raising excited speculation about the imminent possibility of resurrecting the extinct species.  

The mammoth de-extinction effort is somewhat monopolized by the South Korean bioengineering firm Sooam Biotech Research Foundation, led by stem cell scientist Hwang Woo-suk. Hwang achieved scientific stardom when he claimed to have created human embryonic stem cells and the world’s first cloned dog, then fell from grace when it turned out he had fabricated some of his human stem cell data. (The dog is real.) Despite his tarnished reputation, Russian scientists with the North-Eastern Federal University of the Sakha Republic entered a research partnership with Sooam last year. According to the Siberian Times, the deal gives the South Korean scientists “exclusive rights” on cloning wooly mammoths from Siberian samples.

“If we dream about it, the ideal case would be finding a viable [wooly mammoth] cell, a cell that’s alive,” says Hwang in Carl Zimmer’s National Geographic article, “Bringing Them Back to Life.” Zimmer explains that such a cell could be used to create millions more cells. “These could be reprogrammed to grow into embryos, which could then be implanted in surrogate elephants, the mammoth’s closest living relative.”

Serious Hurdles

According to Grigoriev, the discovery “gives us a really good chance of finding live cells which can help us implement this project to clone a mammoth.” Does this mean we’re about to see a wooly mammoth come back? Not so fast.

There are serious hurdles. Even if living cells are used to create embryos, implanting them into elephant ova is no simple task. “Nobody has ever even cloned a living elephant,” Zimmer said in a recent interview. “Furthermore, nobody knows how to get an egg out of an elephant or how to implant an egg back into an elephant. Nobody knows what combination of hormones you’d need to make sure the whole thing works right.”

Furthermore, it’s not at all clear that the recent mammoth discovery will yield up any living cells. Scientific American editor Kate Wong spoke with a colleague of Grigoriev, Daniel Fisher of the University of Michigan. Fisher had this to say:

“They have not found any ‘living cell’—at most they could hope to find what the cloning enthusiasts might call a cell with ‘viable’ DNA, meaning that it would be intact enough to use in the context of a cloning effort. In fact, although there is much talk of ‘viability’ of this sort, I think it remains to be demonstrated that any DNA from a mammoth meets this criterion. In general, ancient DNA is highly fragmented and by no means ‘ready to go’ into the next mammoth embryo.”

Do We Want to Bring Back Wooly Mammoths?

This doesn’t mean there’s no hope for cloning a mammoth. It’s theoretically possible to reconstruct a wooly mammoth genome from DNA fragments. DNA does degrade over time, and it’s not yet possible to construct whole molecules from digitized computer sequences. However, scientists can synthesize stretches of about a hundred thousand base pairs at a time. (Synthetic biology pioneer George Church is working on using this method to recreate Neanderthals.)

It might be possible to identify genes that distinguish wooly mammoths from elephants—such as sequences that code for traits like wooly hair or special hemoglobin that helped mammoths to survive extreme cold—and transplant those sequences into elephant DNA. “In theory, this would produce a wooly mammoth baby, or at least something that looks a lot like a wooly mammoth baby,” says Zimmer.

All this raises the question: Do we want to bring back wooly mammoths? Some conservationists argue that de-extinction distracts from efforts to maintain critically endangered populations of living animals. There’s also the non-trivial question of where mammoths would go should scientists manage to bring them back. Covered in vast grasslands when mammoths roamed, the Siberian tundra is now blanketed in moss, which wouldn’t be able to support mammoths.

Reintroducing Biodiversity

Sergey A. Zimov, however, believes reintroducing Pleistocene era biodiversity to the tundra would restore the grasslands. The mega herbivores played a critical role in the steppe ecosystem by keeping the soil broken up and fertilized and by suppressing moss growth. “Moss communities, once they are in place, create and sustain their own environment,” says Zimov. Furthermore, he continues, “Northern Siberia will influence the character of global climate change. If greenhouse gas-induced warming continues, the permafrost will melt. At present, the frozen soils lock up a vast store of organic carbon. With an average carbon content of 2.5%, the soil of the mammoth ecosystem harbors about 500 gigatons of carbon, 2.5 times that of all rainforests combined.”

To study the feasibility of steppe renewal and its possible climate implications, Zimov has conceived and implemented Pleistocene Park. The Park is an area of about 160 square km of Siberia into which large, grazing herbivore species like bison, musk ox, moose, horses, and reindeer have been introduced. Zimov told Zimmer he’d be happy to welcome de-extinct mammoths in his park, “but only my grandchildren will see them…Be prepared to wait.”

Also read: Prehistoric Sloth-Like Creatures May Have Roamed the US

There’s A Star Man Waiting in the Sky

A man in astronaut gear poses for the camera. His helmet is in the foreground, with an American flag and mini replica space shuttle in the background.

NASA astronaut Charlie Camarda talks about his experiences in space, managing life or death situations, and the future of the U.S. space program.

Published June 1, 2013

By Tamara Johnson

Charlie Camarda

Astronaut Charlie Camarda was a mission specialist on NASA’s 2005 STS-114 Discovery flight, the Return to Flight Mission. He is now senior advisor for innovation to the Office of Chief Engineer, Johnson Space Center. Camarda recently visited The New York Academy of Sciences (the Academy) to address more than one hundred K-12 students.

*some quotes have been edited for length and clarity*

What’s it like to go to space?

It’s such an exciting ride! To tell you the truth, when we took off, it was so hyped up that I had actually expected there to be a lot more vibration and sound than there was. Whether it was the weather or I just had a really good flight, it was actually very smooth. You’re prepared for it.

We were the Return to Flight Mission after the Columbia accident, so we had lots of work to do. And we had lots of supplies to bring and lots of new technology to put in place and evaluate to make sure the rest of the crews would be safe. We were very busy, and that’s how typical flights are. Most of your time is tightly budgeted and controlled by the ground.

What did it mean to be the first crew to fly following the Columbia tragedy?

It was harder on our families. I grew up as a research engineer at NASA Langley Research Center, and my particular area of expertise was very close to what caused the accident. I worked on high temperature structures, heat transfer, and leading edges, so I was very aware of the dangers of things striking the thermal protection system and how fragile the thermal protection system was.

As far as being worried as to whether or not we were ready to fly, though, I was very confident we were. I felt very safe. The emotional significance of flying after three of my classmates and seven very close friends had passed away—that was a little tough. It takes a while to come to grips with that.

What were the aims of your mission?

We had several priorities. We were testing the new technology we’d developed to make sure that each successive mission (and our mission!) would be safe: how to inspect the vehicle and send the data down to Earth; how to make sure what we thought we were seeing was correct; and collaborating and coordinating to make sure the data aligned with our predictions. We developed a lot of inspection technology and also repair technology. But we wanted to be sure that, if we did get hit, astronauts could go outside and repair the vehicle. We did the first repair on orbit, I believe.

How do you deal with anomalies in space?

One of the new procedures we did was what’s called an R-bar pitch maneuver. When we’re on the radius vector directly underneath the space station, about 600 feet below the station, the entire shuttle does a back flip. ISS Expedition 11 commander Sergei Krikalev and flight engineer John Phillips photographed the shuttle’s belly from the Space Station to see if there was any damage. You have shuttle tiles, about 30,000 of them, with a black coating. If you get hit, it’s real easy to see because beneath the black tiles there are white silica materials.

As we were doing the back flip, they saw a small piece of what’s called the gap filler. It’s Nomex material, about the size of a very thin felt pad, about the thickness of six pieces of paper. Two pieces of Nomex came out and were sticking out about an inch. So we had to inspect it and understand what it would mean. When we sent the image of the material down to the ground, the experts in aerothermodynamics said we had to go out and pull [the loose pieces] out of there. If we didn’t, they would trip the boundary layer, the layer of air that hugs the surface, and shed these vortices in a wedge-type angle. Those vortices would hit the wing leading edge and burn us up. Can you imagine? You just had these very small pieces of material sticking out.

How do you train to manage life or death situations?

We fly in the back of a T-38 and we learn what’s called crew resource management. It’s what pilots, navigators, and crew do on aircraft, so when they see emergencies, they know exactly what their jobs are. There’s an economy of words, a scripted procedure that each person has to follow. You know exactly what you have to do. We train like that as a team, doing navigation, flying the vehicle, talking to the ground, trying to make sense of what’s going on around you in all kinds of conditions. It gets you ready.

What do you think of the future of space science?

Well, I think we’ve started to lose our edge, to be honest, but all the commercial and private endeavors are great. The competition sparks innovation, and that’s what we need. NASA should be supporting these projects and working on basic research. Should we go to Mars? Definitely! Start working on asteroids? Yes!

The more people we have up there and the more ideas and challenges we think about, the more inspired people will be to come up with even more ideas and solutions, students and NASA scientists alike. It used to be that only test pilots could go up, but now it’s getting more popular. It’s still really expensive, but I hope soon it will be a more accessible experience. With a more diverse group of minds inspired to think and dream about space, we’ll start to see really great stuff happen.

Also read: Inspiring Scientists – Ready, Set, Robots!

How Do You Predict the Success of a Spinoff?

Universities are fast becoming ground zero for the commercialization of new technologies based on internal IP.

Published March 1, 2013

By Christopher S. Hayter, PhD

Universities have long been touted for their role in regional and state economic development, not only for their well-established role in education and research but, increasingly, for the commercialization of new technologies. New spinoff companies, based on intellectual property stemming from university R&D, offer a promising vehicle for technology commercialization and have the potential to generate jobs and even enhance the quality of traditional faculty responsibilities. Furthermore, university, state, and federal policymakers are increasingly seeking ways to encourage the establishment of university spinoff companies and support their growth.

A recent study examines factors of success among university spinoffs, offering practical insights for entrepreneurs, policymakers, and scholars alike. Spinoff success is defined as technology commercialization, measured by whether or not these early-stage companies have sales. The study, entitled “Harnessing University Entrepreneurship for Economic Growth: Factors of Success Among University Spinoffs,” appears in the February issue of Economic Development Quarterly, a peer-reviewed journal that focuses on economic development and revitalization, primarily in the United States. The study is based on a unique, nation-wide sample of faculty entrepreneurs at public universities who have established spinoff companies in a variety of technology areas and are at different stages of development.

Factors Affecting Sales

The study finds that a number of entrepreneur-, firm-, and university-specific factors significantly predict spinoff success. For the individual faculty member, consulting with industry provides insights and experiences that positively impact their ability to understand markets and technology development. At the firm-level, spinoffs that have research joint ventures with other companies, external sources of intellectual property, professional (non-faculty) management, and venture capital funding are more likely to commercialize their technology compared to those that do not.

Joint ventures and IP sourcing from other companies and universities provide valuable technical solutions while professional management addresses an important challenge recognized from other studies: academic researchers do not usually have the skills needed to effectively run and grow a company. And according to faculty entrepreneurs in the sample, venture capitalists are not only important sources of funding, they also provide mentoring and networking services and technical expertise important to spinoff performance.

Other factors in the study were shown to negatively impact spinoff success. Specifically, spinoffs attempting to commercialize technologies in the life science industry have an especially tough challenge: results show that these companies are approximately 40 percent less likely to commercialize their technology. Finally, spinoffs that rely primarily on a university for entrepreneurship services are less likely to commercialize their technology.

In short, these findings show that all spinoffs are not created equally. Spinoffs in the life sciences face especially acute challenges with staggering capital requirements, complex scientific issues involving the human body, and regulatory hurdles with the Food and Drug Administration. Beyond industry-specific considerations, spinoffs with access and strong external linkages to new technologies, ideas, funding, and management are more likely to commercialize their technology.

Need to Strengthen External Networks

Combined with the (negative) findings related to university entrepreneurship services, the study shows that networks are critical for spinoff success. In other words, if the findings are generalizable to broader populations of spinoffs, then policies and programs designed to spur academic entrepreneurship should establish and strengthen dense networks of funders, professional managers, support services, potential customers, and a variety of innovation sources to improve commercialization.

Entrepreneurial support networks have long existed in specific technology focus areas—like social networks to support the medical device industry in the Minneapolis, MN, metropolitan area. In other areas, these networks need to be built or strengthened, an acute challenge for most rural regions in the U.S. and beyond. This study suggests that while university spinoffs may not automatically lead to new jobs and prosperity, its findings can at least equip policymakers to fashion policies and programs that improve the likelihood of commercialization and, therefore, economic development.

Also read: What Happens When Innovative Scientists Embrace Entrepreneurship?

The Rise of Big Data: The Utility of Datasets

Data visualization and machine learning will be key to analyzing large datasets in this new scientific revolution.

Published March 1, 2012

By Diana Friedman

The importance of observation—the crux of the scientific method—remains unchanged from the early days of scientific discovery. The methods by which observations are made, however, have changed greatly. Consider astronomy. In the early days, under a black expanse of night punctuated by brilliant fiery lights, a group of science-minded people looked up at the sky and recorded what they saw—the fullness of the moon, the locations and formations of the stars.

Observation with the naked eye was the norm until the 17th century, when the invention of the telescope revolutionized astronomy, allowing scientists to see beyond what their eyes could show them—a literal portal into the unknown.

A New Revolution

Now, a new revolution is taking place, in astronomy and across nearly all scientific disciplines: a data revolution. Scientific data collection has become almost entirely automated, allowing for the collection of vast amounts of data at record speed. These massive datasets allow researchers from various organizations and locales to mine and manipulate the data, making new discoveries and testing hypotheses from the contents of a spreadsheet.

“The astronomy community was able to switch to the idea that they can use a database as a telescope,” says Alex Szalay, Alumni Centennial Professor, Department of Physics and Astronomy, Johns Hopkins University, as well as a researcher in the Sloan Digital Sky Survey (SDSS), a 10+ year effort to map one-third of the sky.

Thanks to projects like the SDSS and open access data from the Hubble Space Telescope, would-be Galileos don’t need access to a telescope, or even a view of the night sky, to make discoveries about our universe. Instead, huge data sets (so-called “big data”) can provide the optimal view of the sky, or, for that matter, the chemical base pairs that make up DNA.

How Big is ‘Big Data’?

It is hard to estimate exactly how much data exists today compared to the early days of computers. But, “the amount of personal storage has expanded dramatically due to items like digital cameras and ‘intellectual prosthetics,’ like iPhones,” says Johannes Gehrke, professor, Department of Computer Science, Cornell University. “For example, if you bought a hard drive 20 years ago, you would have had 1.5 to 2 gigabytes of storage. Today, you can easily get 2 terabytes. That’s a factor of 1,000.”

It is not just the amount of data that has changed; the way we interact with and access that data has changed too, says Gehrke, a 2011 winner of the New York Academy of Sciences Blavatnik Awards for Young Scientists. “There is an entire industry that has sprung up around our ability to search and manage data—look at Google and Microsoft,” says Szalay.

But what is big data? Is a 2-terabyte file considered big data? Not anymore. “It’s a moving target,” says Szalay. “In 1992, we thought a few terabytes was very challenging.” Now, the average portable, external hard drive can store a few terabytes of data. An easy definition of big data is “more data than a traditional data system can handle,” says Gehrke.

Searching for Structure

Scientists working on large-scale projects, like the SDSS, or those in genomics or theoretical physics, now deal with many terabytes, even petabytes, of information. How is it possible to make sense of so much data?

“We have the data—we can collect it—but the bottleneck occurs when we try to look at it,” says Szalay. Szalay is currently working on a project at Johns Hopkins to build a data-driven supercomputer (called a data scope) that will be able to analyze the big datasets generated by very large computer simulations, such as simulations of turbulence. “We are able to provide scientists who don’t usually have access to this kind of computing power with an environment where they can play with very large simulations over several months; with this computer we are providing a home to analyze big data.”

The rub? Scientists need to be fluent in computation and data analysis to use such resources. “Disciplines in science have been growing apart because they are so specialized, but we need scientists, regardless of their specific niche, to get trained in computation and data analytics. We need scientists to make this transition to ultimately increase our knowledge,” says Szalay.

Two fields in particular are garnering attention from scientists for their ability to provide structure when data is overwhelming: data visualization and machine learning.

Picture This

Data visualization takes numbers that are either generated by a large calculation or acquired with a measurement and turns them into pictures, says Holly Rushmeier, chair and professor, Department of Computer Science, Yale University, and a judge for the Academy’s Blavatnik Awards for Young Scientists. For example, a project might take numbers representing flow going through a medium and turn them into an animation.

“Visualization allows you to look at a large volume of numbers and look for patterns, without having a preconceived notion of what that pattern is,” says Rushmeier. In this way, visualization is both a powerful debugging tool (allowing researchers to see, through the creation of a nonsensical picture, if there might be a flaw with their data) and an important means for communication of data, whether to other researchers or to the general public (as in the case of weather forecasts). So perhaps the old adage needs to be re-written: Is a picture now worth a thousand lines of code?
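A minimal sketch of the idea in Python (the swirling flow field below is synthetic, invented purely for illustration; it is not data from any project Rushmeier describes):

import numpy as np
import matplotlib.pyplot as plt

# Numbers that could have come from a flow simulation: a velocity
# vector (u, v) at each point of a 2D grid.
y, x = np.mgrid[-2:2:25j, -2:2:25j]
u = -y / (x**2 + y**2 + 0.5)          # made-up swirling flow
v = x / (x**2 + y**2 + 0.5)

# Turning the numbers into a picture: arrows immediately reveal a
# rotational pattern that would be invisible in a table of values.
plt.quiver(x, y, u, v)
plt.title("Synthetic flow field rendered as a picture")
plt.show()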

“There are many flavors of visualization,” says Rushmeier. Information can be mapped onto a natural structure, such as valves being mapped onto the heart, or an entirely new picture can be created (data without a natural structure is referred to as high-dimensional data). The classic example of high-dimensional data is credit card data, says Rushmeier, “but there is a lot of high-dimensional data in science.”

Mapping Information

Rushmeier is currently immersed in 3D mapping, working closely with an ornithologist who studies bird vision. He records light waves to which birds are sensitive, from the UV to the infrared, to get a better sense of how bird vision evolved and for what purposes (e.g., mating and survival). Through 3D mapping, Rushmeier is able to take the ornithologist’s numerical data and simulate the actual viewpoint of the bird onto different 3D surfaces.

“To stop a conversation dead in its tracks, I tell people I work in statistics. To get a conversation going, I say I work in artificial intelligence,” jokes David Blei. Both are true—Blei, associate professor, computer science, Princeton University, works in the field of machine learning, a field that encompasses both statistical and computational components.

The goal of machine learning is to build algorithms that find patterns in big datasets, says Blei. Patterns can either be predictive or descriptive, depending on the goal. “A classic example of a predictive machine-learning task is spam filtering,” says Blei. A descriptive task could, for instance, help a biologist pinpoint information about a specific gene from a large dataset.
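A minimal sketch of the predictive task Blei mentions, written as a toy bag-of-words spam filter (the messages, labels, and use of scikit-learn are my own illustration, not Blei’s code):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A handful of invented training messages with known labels.
messages = [
    "Win a free prize now", "Limited offer, claim your reward",
    "Meeting moved to 3pm", "Here are the slides from today's seminar",
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words counts + naive Bayes: the model learns which word
# frequencies are associated with each label.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free reward today"]))   # expected: spam
print(model.predict(["Slides for the 3pm meeting"]))     # expected: ham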

Part of our Daily Lives

Machine learning is not only used by technology companies and scientists—it is a part of our daily lives. The Amazon shopping and Netflix recommendations that pop up almost instantaneously on our computer and TV screens are a result of complex machine-learning algorithms, and the recommendations are often eerily spot-on. But it is important to remember that getting from data to real information requires a step, says Blei. This is especially true when machine learning is applied to science and medicine.

“We need more work in exploratory data analysis,” says Blei, as well as careful validation of algorithms, to avoid making irresponsible conclusions. Interestingly, Blei says that quality of data is not as important to the final result as it might seem; instead, quantity of data is paramount when it comes to drawing conclusions through machine learning. And enormous datasets abound in science—just consider all of the raw data generated by The Human Genome Project.

Now, says Blei, the analysis of data sources (like Twitter) poses an equally big challenge. “Unlike a dataset, a data source has no beginning and no end.”

A prediction that doesn’t require a complex algorithm? The fields of data visualization and machine learning, as well as other forms of data science, will continue to grow in importance as datasets and data sources get bigger over time and everyone, from neuroscientists to corporations, looks for a way to turn data into meaningful information.


Particle Detectives: Physicists and Data

Physicists at Brookhaven National Laboratory manage and analyze petabytes of data from the ATLAS Experiment at CERN’s Large Hadron Collider in search of answers about the universe’s smallest constituents.

Published March 1, 2012

By Diana Friedman

One hundred meters below the Swiss-French border, an enclosed 27-kilometer ring serves as an exclusive race track of sorts. Instead of cars with souped-up engines, bunches of protons race around this track and physicists, not pit crews, keep things running smoothly. The goal here is not to cross the finish line first, but rather to cause collisions. When bunches of protons collide, the ATLAS (A Toroidal LHC Apparatus) detector records the photo finish—that is, the particles that are the byproducts of the collision.

The ATLAS Experiment, part of the Large Hadron Collider (LHC)—the world’s biggest particle accelerator—at CERN (the European Organization for Nuclear Research), represents a worldwide effort to answer big questions about the smallest particles. “We want to know, what are the fundamental particles in the universe, what are they made of, could there be additional particles that we don’t know about, and how do they interact?” says Howard Gordon, U.S. ATLAS deputy operations program manager at Brookhaven National Lab (BNL) in New York, the host laboratory for ATLAS in the U.S.

Finding the answers to these questions—such as what gives particles mass and what comprises dark matter—could greatly advance not only the world’s knowledge of high-energy physics, but a variety of fields in science and beyond. “We don’t necessarily know what the applications of this research will be at this point—this is inquiry-based research,” says Gordon.

Identifying the Universe’s Fundamental Particles

The task of identifying the universe’s fundamental particles is complex enough that it takes 3,000 scientists at 174 institutions around the world to operate the ATLAS detector and analyze the data generated. Data generated at CERN, the Tier-0 computing facility, is transferred to Tier-1, -2, and -3 centers through a federated grid-based computing system. BNL, the largest ATLAS Tier-1 center in the world, is responsible for 23% of the total Tier-1 computing and storage capacity, says Michael Ernst, manager, Relativistic Heavy Ion Collider/ATLAS Computing Facility at BNL.

Srini Rajagopalan, a physicist at BNL who is currently transitioning to Gordon’s position, explains the ATLAS detector as a “gigantic, multibillion pixel camera.” Bunches of protons pass through each other at the center of the detector 20 million times every second (currently one crossing every 50 nanoseconds). “Imagine that you take that many pictures over and over, every 50 nanoseconds of every minute, every hour for months,” says Rajagopalan. The amount of data that is produced is far too much to record and store, especially given ATLAS’ limited running time (about 30% of the year).

“We just don’t have the technology to write out that many events so we have to run algorithms to get the numbers down,” says Rajagopalan, who for the past five years has been working on ATLAS’ trigger—the name for the algorithms that are programmed to look for specific patterns associated with different physics phenomena.

Carefully Programmed Algorithms

Physicists at Brookhaven National Lab monitor ATLAS data from the ATLAS remote monitoring room.

The algorithms must be carefully programmed to suppress what Rajagopalan calls “fakes” (or background events) and to keep events which could lead to new physics results. Even with the aid of the trigger, “We write 300 events of 1.5 megabytes (MB) each to disk every second. That’s a Justin Bieber CD’s worth of information every second,” says the father of a teenage girl with a laugh.

When the LHC first started, there were significantly fewer protons per bunch, but as time goes on the luminosity (a measure of how many collisions are happening) of the LHC is increasing. And more collisions equal more data. Rajagopalan’s challenge is to fine-tune the trigger to keep up with this influx of data, while preserving the most potentially useful information (events that are not picked up by the trigger are not stored for future use because of the immense volume of data coming in).

“We have to look at 20 million events every second, pick around 300 events most interesting to physics under study, and trash the rest immediately.” To do this well, the algorithms through which the data passes must be both fast and accurate.
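Putting the figures quoted above together gives a sense of the data-reduction problem (this is just illustrative back-of-the-envelope arithmetic in Python, not ATLAS software):

# Bunch crossings every 50 nanoseconds; roughly 300 selected events of
# about 1.5 MB each written to disk per second (figures from the article).
crossing_interval_ns = 50
crossings_per_second = 1e9 / crossing_interval_ns     # 20 million per second

events_kept_per_second = 300
event_size_mb = 1.5

rejection_factor = crossings_per_second / events_kept_per_second
disk_rate_mb_per_s = events_kept_per_second * event_size_mb

print(f"Bunch crossings per second: {crossings_per_second:,.0f}")
print(f"Trigger rejection factor:   roughly {rejection_factor:,.0f} to 1")
print(f"Data written to disk:       about {disk_rate_mb_per_s:.0f} MB/s")

The roughly 450 MB written to disk each second is consistent with Rajagopalan’s CD comparison, and a rejection factor of tens of thousands to one is why the trigger algorithms must be both fast and accurate.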

Data Management and Analysis

ATLAS raw data, which originates at the LHC, is maintained in a distributed fashion at Tier-1 centers through the Worldwide LHC Computing Grid. A grid system (as opposed to one central data repository) is ideal not only because of the sheer amount of data generated, but because a federated grid spurs a diversity of approaches that leads to the adoption of best practices, says Ernst. “Scientific innovation in data-intensive science requires distributed access to that data.”

Through the Worldwide LHC Computing Grid, BNL receives its share of raw data, almost instantly, from CERN. BNL reprocesses this data with improved processing capabilities (interpreting the electronic signals produced by the detector to determine the original particles that passed through, their momenta and directions, and the primary vertex of the event) and then adds it back into the complete data set (comprised of the results from all Tier-1 facilities).

While a complete dataset is kept at CERN, it is not stored in a form that can be used for analysis. Instead, BNL and other Tier-1 facilities create derived datasets for physicists to use for further analysis, including duplicate datasets for the most popular data. “We must provide access to a huge data volume, requested by several thousand users worldwide simultaneously, in addition to managing more than 50,000 concurrently running user analysis jobs,” says Ernst. The data hosted at BNL was replicated to Tier-1, -2, and -3 sites at a rate of about 200 MB/second over the past six months.

The volume of raw ATLAS data totals about 1 petabyte a year, but that volume is multiplied several times over (there was a total of 7 petabytes of ATLAS data created in 2010) when secondary data analysis and derived datasets are included.

The Hunt for Higgs

One of the highest-profile ATLAS projects is the search for the Higgs boson, a hypothetical elementary particle. “We are looking for the Higgs because we believe it is what gives particles mass,” says Rajagopalan. Multiple triggers have been designed, each focusing on specific particles into which the Higgs might decay. Each trigger selects certain physics events that could provide evidence of the existence of the Higgs.

About 1 petabyte of raw data has been filtered through the Higgs triggers so far, and Ernst estimates it will take another petabyte of data (amounting to about another year of data collection) before physicists can hopefully confirm or rule out the existence of the Higgs. Thus far, physicists have noted a little bit of an excess of events that might point to a Higgs particle, but it is not sufficient to say definitively whether it exists or not.

Either way, work related to the mass of particles is far from over, says Rajagopalan. “If the Higgs is discovered, we’ll know that it’s there, but we’ll need to understand its properties, how it works, how it interacts with other particles. If it’s ruled out, we have to work to discover an alternative explanation of what gives particles mass.”

The Future of ATLAS

“The ATLAS detector will run for as long as the next 20 years,” says Gordon, given periodic breaks for maintenance and upgrades. Many of the physicists working on ATLAS at BNL contributed to the original construction of ATLAS detector parts. Now, they have a role in upgrading the parts to allow for more physics capabilities.

“We have to improve the trigger to extract the events of interest,” says Gordon. “We have some ideas about how to improve the trigger when the intensity of beams gets higher.”

The next scheduled ATLAS shutdowns are in 2013-2014, 2018, and 2022. In 2022, physicists are scheduled to replace parts that will have become damaged by radiation. The upgrades are not inexpensive—but they are necessary, says Rajagopalan.

“The investment in science provides a foundation for our future. Where we are today—all of the advancements in technology, science, and medicine—is because of a solid foundation in basic research. It’s important to continue to build that foundation so we have a brighter future tomorrow.”


How An Innovation Challenge Advances Scientific Research

A group of students present their school project in front of onlookers.

Innovation challenges not only provide an interactive way for students and other innovators to embrace science, but they can also play a direct role in making the world a better place.

Published December 1, 2011

By Adrienne J. Burke

In a day and age when “thinking outside the box” is universally touted as the fastest path to scientific and technological innovation, incentive prize contests have come to be seen as one of the most creative ways to generate groundbreaking ideas. Here’s how it works: Broadcast a challenge with specific parameters and reward whoever solves it first. This simple but increasingly popular approach to tackling scientific problems goes so far outside the box, in fact, that winning solutions frequently come from completely unexpected or even unknown entities.

Consider the solvers in some recent contests: It was a concrete industry chemist in Illinois who figured out how to separate frozen oil from water in an Exxon Valdez oil spill cleanup challenge. A human resources professional posed a winning research question in a Harvard diabetes challenge. A Columbia University experimental astrophysicist won a Bill and Melinda Gates Foundation challenge for suggesting a new approach to controlling malaria. And a team of West Philadelphia high-school kids built a super-efficient car that was a strong contender for an X Prize.

Even one of the most celebrated incentive contests in history is legendary for its surprising winner: a self-educated English watchmaker won Parliament’s £23,000 Longitude Prize for inventing the marine chronometer in the 18th century.

Ideas from Untapped Sources

Extracting ideas from untapped sources is largely the point of incentive contests. Proponents of the approach, which is sometimes called crowdsourcing or open innovation, frequently quote the wisdom of Sun Microsystems founder Bill Joy: “No matter who you are, most of the smartest people work for someone else.” When a problem has stumped your field’s experts, they say, casting the net over a broader, more diverse, and multidisciplinary population can yield amazing solutions. In fact, studies by Harvard Business School professor and innovation researcher Karim Lakhani have shown that winning solutions in challenge contests are most likely to come from solvers whose area of expertise is six disciplines removed from the problem.

At Scientists Without Borders, a program conceived by The New York Academy of Sciences (the Academy) in conjunction with the United Nations’ Millennium Project, a web-enabled platform for seeking and suggesting solutions to science and technology challenges in the developing world is yielding input from a global and multidisciplinary set of innovators. The same is true at the Gates Foundation, where Program Officer Andrew Serazin says the five-year, $100 million Grand Challenges Explorations initiative to promote innovation in global health has successfully harvested ideas from a highly diverse set of people. “We’ve gotten some promising projects out of it, and we’ve gotten as much value out of reading applications,” he says.

Low Startup Costs

The startup costs for getting into the challenge-posing game can be surprisingly low. Platforms such as Scientists Without Borders, along with businesses like InnoCentive, IdeaConnection, NineSigma, and OmniCompete that facilitate contests for so-called “seekers,” make it easy for anyone to post a problem online and field solutions from around the world. You don’t need to offer a huge monetary reward to sponsor a successful incentive contest, either. Serazin contends that as little as a few thousand dollars can draw contestants, and plenty of seekers on the Scientists Without Borders platform get input without offering any reward at all.

Even if your organization isn’t ready to post its challenges to the outside world, simply employing the philosophies and practices of incentive contests can spur innovation within your own workplace. InnoCentive CEO Dwayne Spradlin notes, “The challenge-based approach is a fun way to get people inside an organization involved in solving a problem.”

Henry Chesbrough, executive director of the Center for Open Innovation at the University of California, Berkeley’s Haas School of Business, says, “Any organization has biases, myopia, previous experiences that advantage certain approaches and discourage or discount others. A contest can transcend these cognitive barriers.”

Contest Limits and Benefits

While useful, contests also have their limits, and not every scientific puzzle lends itself to the challenge format. Experts agree that, to be suitable, a problem must be very well defined, and the parameters for winning very clear.

“An explicitly identified goal is essential to focusing the world’s attention on a challenge,” Serazin says, “and the achievement of the goal must be measurable.” He points to contests such as the Ansari X Prize, which promised $10 million to the team that could build and launch a spacecraft capable of carrying three people to 100 kilometers above the earth’s surface twice within two weeks. Contestants’ performance could be measured so that it would be clear who the winner was. “In health and biomedicine, getting that kind of specificity is not easy,” he warns.

Nor should incentive contests be seen as a cheap way to outsource R&D. Forming and managing a challenge requires substantial internal knowledge and resources. The genome researcher Craig Venter hosted a DNA sequencing challenge for several years before turning it over to the X Prize Foundation to administer. With the level of expertise and management the contest demands, he says, “it costs several million dollars to run a contest to give away $10 million.”

As Chesbrough notes, prize competitions aren’t going to render the internal R&D department obsolete, but they can complement, extend, and inform it. A small but growing segment of the business world agrees with him. According to a widely cited study by the consulting firm McKinsey, almost $250 million was awarded to prize-winning problem solvers between 2000 and 2007.

Meeting the Challenge

Large corporations, small businesses, philanthropies, universities, government agencies, and nonprofits—from GE to the Gates Foundation, from NASA to Scientists Without Borders—are among the organizations now offering cash to outsiders who can meet their challenges. InnoCentive, one of the best known companies serving the incentive contest market, has hosted more than 1,000 challenges since 2001 and boasts a solver community of more than 200,000 individuals in 200 countries. Robynn Sturm, advisor for open innovation at the White House Office of Science and Technology Policy, says challenges should be a part of any innovation portfolio. Today, analysts estimate the incentive-based prize market at $2 billion and growing.

President Obama is responsible for some of that projected growth. He recently called on federal agencies to increase their use of prizes and challenges to spur innovation. “Prizes and challenges are not the right tool for every problem, but right now they’re being so underutilized that it’s safe for us to call on all agencies to increase their use,” says Sturm. Already, the White House-sponsored Challenge.gov website features nearly 60 government challenges, and a banner there encourages government agency leaders to “challenge the world.” Government-sponsored contests are inspiring citizens of all stripes to offer up novel solutions to national problems such as childhood obesity, energy storage, and keeping astronauts’ food fresh in outer space.

Tom Kalil, OSTP Deputy Director for Policy, says that, in addition to increasing the number and diversity of minds tackling a problem, contests offer several advantages over traditional grantmaking, including freeing the government to pay only for results, not for unfruitful research. The approach, he says, also “allows us to establish a bold and important goal without having to choose the path or the team that is most likely to succeed.”

Different Approaches

Adds Sturm, “Prizes and challenges allow you to see a number of different approaches all at once. With a grant or contract, you have to pick your course and cross your fingers. With a prize, you can say, ‘This is our goal, and we’re happy to pay anyone who hits it, however they do it.’”

Scientists Without Borders uses challenges as one part of an open innovation platform designed specifically to generate scientific and technological breakthroughs in global development. It enables members of the community to work together and combine their resources and expertise to take action and accelerate progress. Organizers believe the challenge approach will move the needle by generating, refining, or unearthing effective solutions and then getting them deployed as widely as possible.

Craig Venter notes one more benefit of incentive contests: they can serve as truth serum against exaggerated claims and marketing spiel. When Venter joined forces with the X Prize Foundation to establish the $10 million Archon Genomics X Prize, the idea was to incite progress in genomic sequencing technologies and to get beyond what he considers to be industry spin about the state of the art.

The winner will be, specifically, the first team to build a device and use it to sequence 100 human genomes within 10 days or less, with an accuracy of no more than one error in every 100,000 bases sequenced, with sequences accurately covering at least 98 percent of the genome, and at a recurring cost of no more than $10,000 per genome. “You can’t fake it,” Venter says. “There will be clear winners for a set of standards.” If prizes and contests can incentivize people and provide a reality check of all the claims that are out there, he says, “then they can really help science move ahead.”
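
Because those criteria are stated as hard numeric thresholds, judging an entry becomes a mechanical check rather than a matter of interpretation. The short Python sketch below is a hypothetical illustration of that point; the submission fields and the function are invented here, not the X Prize Foundation’s actual scoring procedure.

    # Hypothetical check of an entry against the prize criteria quoted above.
    # The data structure and function are invented for illustration only.

    def meets_archon_criteria(entry):
        return (
            entry["genomes_sequenced"] >= 100
            and entry["days_elapsed"] <= 10
            and entry["errors_per_base"] <= 1 / 100_000
            and entry["genome_coverage"] >= 0.98
            and entry["cost_per_genome_usd"] <= 10_000
        )

    example_entry = {
        "genomes_sequenced": 100,
        "days_elapsed": 9,
        "errors_per_base": 8e-6,       # fewer than 1 error per 100,000 bases
        "genome_coverage": 0.985,      # 98.5 percent of the genome covered
        "cost_per_genome_usd": 9500,
    }
    print(meets_archon_criteria(example_entry))  # True for this example entry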

Incentivized in Academia

What does a scientist, lab head, or manager need to know to enter the challenge arena? Tom Kalil points to the Harvard Catalyst/InnoCentive Type 1 Diabetes Ideation Challenge as an example of how the scientific community can use challenges—both within an organization and more broadly—to generate not just technological solutions, but new research ideas.

With funding from the National Center for Research Resources, the Harvard Clinical and Translational Science Center offered a cash reward for winning answers to the question, “What do we not know to cure type 1 diabetes?” Contestants were asked to formulate well-defined problems aimed at advancing knowledge about, and ultimately eradicating, the disease.

The challenge was open to the entire Harvard community as well as InnoCentive’s 200,000 solvers. Ultimately, nearly 800 respondents expressed interest in the contest, 150 submissions were evaluated, and 12 winners were each awarded a $2,500 prize. The winners included a patient, an undergraduate student, an MD/PhD student, a human resources representative, and researchers from unrelated scientific fields.

Promoting Collaboration

Eva Guinan, director of the Harvard Catalyst Linkages program and associate director of Clinical/Translational Research at Dana-Farber Cancer Institute, says the contest itself was an experiment to see how the model could work in an academic biomedical environment, given that researchers are traditionally disincentivized from collaborating. She says top-down management support was one key to securing widespread participation. In an email to the tens of thousands of members of the Harvard community, from deans to janitors, President Drew Faust endorsed every employee’s participation in the challenge, suggesting that it would “help stimulate innovative thinking and potential new understandings and therapies.”

“Companies need to open up and break down boundaries between departments,” Spradlin says. He points to a recent InnoCentive client—a large engineering organization that hosted an incentive contest internally, but opened the competition only to staffers with information technology backgrounds. “We told them to run the contest all over the company. The solution came from someone in the finance department.”

Be a Seeker and a Solver

Harvard’s Karim Lakhani suggests scientists can spur innovation in their own labs just by participating in contests, either as solvers or seekers. “Often scientists and PIs get narrowly focused in one area, but we know that being exposed to new questions and expanding your horizons can yield creativity,” he says. “There might be a very interesting problem out there that lets you directly export and apply knowledge from your field to a different field. That creative expression is worthwhile in itself, and working on another problem may unlock a problem in your field.”

For would-be seekers, he suggests a strategic approach: There might be problems you are stuck on, or a set of problems that aren’t high priority for your lab but need to be knocked off your list, he says. Those would be worth broadcasting to see if outsiders come up with interesting solutions. “Take a portfolio approach to your lab,” he says. “Decompose your problems and express them in modules. Then be strategic about them and say, ‘I think we’d benefit from outside perspectives here.’ It’s a very different way to do science.”

Not Just Motivated by Money

Edward Jung, founder and CTO of Intellectual Ventures in Seattle, says the problem statement is crucial to results. “If you’re trying to invent the Boeing 787, you don’t put out a request to invent an airplane,” he says. “You divide it up into smaller, tractable pieces such as, ‘design a more efficient way of modulating turbine blades.’”

And Harvard’s Eva Guinan adds a word of caution: Before launching a challenge, “you really have to be convinced that it’s what your organization wants to do. There are a lot of people who aren’t believers.” With internal challenges, beware of managers who don’t buy in. “There can be complaints such as, ‘This person is working for me, and I don’t appreciate that they’re sitting on their computer working for someone else,’” Guinan says.

Others can be so hung up on the belief that the PhD is the smartest person in the room that they’re not willing to consider input from anyone without an academic pedigree. “You have to be willing to push this as an issue of social and cultural change,” Guinan says. Karim Lakhani points to one more secret of incentive contests: participants often aren’t motivated by the money. “Most people know they’re going to lose, but they participate anyway,” he says.

Instead, participants are drawn by the opportunities to be part of a group effort, work on an interesting problem, learn something new, achieve a clear goal, and get feedback on their work. “This is at the heart of why people do science,” he says.

What’s Next in Incentivizing Science?

At the forefront of new models for hosting challenges is the grassroots, collaborative approach to problem solving that Scientists Without Borders enables. While the platform is also host to competitive incentive-prize contests, such as a current PepsiCo-sponsored challenge that seeks ideas for curbing folic acid deficiency, it also enables users to seek input from the broad and global Scientists Without Borders community—engendering a teamwork approach to solving the challenges of the developing world. Organizers don’t just want people to find each other—they want them to work together and combine their resources and expertise to take action and accelerate progress.

Unique among organizations that facilitate challenges, Scientists Without Borders provides user-friendly online modules that allow anyone to frame and post a challenge, offers an expert advisory panel for guidance, and enables users to help each other solve problems regardless of where the challenges exist or users reside. Organizers call it a bottom-up, user-generated challenge model that will surface barriers on the ground, in the field, or at the bench that might otherwise be overlooked.

Whether in the global development niche that Scientists Without Borders fills or in a scientific laboratory looking to ignite its members’ creativity, open innovation tools like incentive contests and challenges can be powerful and inspiring ways to tap human ingenuity.

Learn more about the Academy’s Innovation Challenges.