
What Happens When Innovative Scientists Embrace Entrepreneurship?


Deciding to make the leap from research to start-up doesn’t mean you have to leave your passion for science behind.

Published October 1, 2019

By Chenelle Bonavito Martinez

Sean Mehra
Chief Strategy Officer and co-founder, HealthTap

The days of lifetime employment with one employer are long gone. Most people will hold at least half a dozen jobs over a working lifetime, and possibly pursue two or three career paths. Many will also try their hand at starting their own business. Unfortunately, small business statistics show that more than half of those ventures will be gone within four years.

But scientists may have a distinct advantage when they decide to become entrepreneurs. Forbes contributor and STEM consultant Anna Powers writes in a 2018 article titled “One Scientific Principle Will Make You A Better Entrepreneur” that “…the process of entrepreneurship mirrors the process of innovation in science. A lot can be learned [about innovation and entrepreneurship] from science, which has formulated certain guidelines about the process of innovation. Perhaps that is why almost 30 percent of Fortune 50 CEOs are scientists.”

The key to easing the transition from employee to “boss” is recognizing how the skills you possess for one job translate into another. This applies not only to a direct transfer of specific technical knowledge or soft skills like communication and collaboration, but also to the ways skills specific to your current career are the same ones you need to become a successful entrepreneur.

What it Takes

So what does it take for a scientist to become an entrepreneur? Opinions vary, but mostly it starts with a question and a desire to make an impact. However, deciding to make the leap from research to start-up doesn’t mean you are leaving your passion for science behind.

Sean Mehra, Chief Strategy Officer and co-founder of HealthTap, a digital health company that enables convenient access to doctors, says, “People think of the decision to be an entrepreneur as a choice to leave your skills and knowledge as a scientist behind, when that’s not really the case.” Scientists are innovators and they can easily identify as entrepreneurs. Mehra cites several examples of skills developed in the lab that can be applied to starting a business.

“Writing grants to acquire funds for research is not much different than fundraising, corporate development and sales,” he says. “Conducting experiments is product R&D and market fit. If you have recruited postdocs to work in your lab and guided their work, then you have hired talent and managed a team. Publishing and presenting your research at conferences is pretty much like marketing your vision. And networking and connecting with colleagues in your field is no different than prospecting for business connections and talking to your customers.”

Myriam Sbeiti and Daniela Blanco, co-founders of Sunthetics, met in school and, as graduation approached, saw an opportunity to launch a viable business. In 2018 they developed a more efficient and more sustainable electrochemical manufacturing path for a chemical intermediate of Nylon 6,6. The process uses electricity rather than heat to power the reaction in a way that consumes 30 percent less raw material and energy, reducing a variety of harmful emissions in the process.

Sunthetics co-founders from left to right: Professor Miguel Modestino, Myriam Sbeiti, Daniela Blanco

Similar to the Scientific Method

In the future, Sbeiti and Blanco plan to apply this electrochemical process to a variety of reactions, making the chemical industry green, one reaction at a time. Sbeiti reflects that much of the research and many of the interviews conducted to figure out if their ideas were viable resembled the scientific method and scientific experiments: they created a hypothesis and then needed to validate it. The major difference was that they did not need to confirm their hypothesis through years of research; instead, they needed to talk to potential customers to find the right fit.

As scientists and researchers themselves, both emphasized that accepting failure was the hardest skill to master. “The chemical industry wasn’t really interested in our original idea and the fashion industry didn’t really see value.” After a round of customer interviews, they realized they were designing a product they thought the customer needed instead of the product the customer said they wanted. In addition, efficacy and cost were customer priorities, so Sbeiti and Blanco pivoted their idea to develop a product that fit the market. The Sunthetics team is shaping up to make the impact envisioned after graduate school. In fact, Blanco continues to pursue her technology as part of her PhD research and “thinks of it like R&D.”

Entrepreneurship is definitely a “higher risk and higher reward” scenario, says Mehra. Most traditional researchers have a lower risk tolerance than the average innovator or entrepreneur, and it can be very uncomfortable for a trained researcher turned entrepreneur to accept failure and pivot away from an original idea. But Mehra says that “even if the original idea isn’t quite right, there is still a lot of good knowledge acquired through the process.”

Unlocking the “Why”

Unlocking the “why” and a desire to create impact at scale are drivers behind making the shift into entrepreneurship. While contemplating his own career path, Mehra reflects that “thinking about my passion for technology, I realized that technology has a way to scale and have a one-to-many impact on the world. I started to think about ways I could use my technology skills to help people on a global scale instead of, for example, treating patients one-at-a-time as a doctor.”

Sbeiti and Blanco also began their journey by observing their surroundings and asking why. These common traits make up what Clayton Christensen, the Kim B. Clark Professor of Business Administration at Harvard Business School, and his co-authors call “The Innovator’s DNA.” After six years of studying innovative entrepreneurs, executives and individuals, they found this common skill set in every innovative entrepreneur they studied. Christensen et al. argue that if innovation can be developed through practice, then the first step on the journey to being more innovative is to sharpen the skills.

Studies of identical twins separated at birth indicate that one’s ability to think creatively comes one-third from genetics. “That means that roughly two-thirds of our innovation skills come through learning — from first understanding the skill, then practicing it, and ultimately gaining confidence in our capacity to create,” says Christensen. The most important skill to practice is questioning. Asking “why” or “what if” can help strengthen the other skills and allow you to see a problem or opportunity from a different perspective.

Ted Cho
StartupHoyas MED

A Search for Something That’s Never Been Done

Ted Cho, President of StartupHoyas MED, an organization dedicated to healthcare startups and innovators at Georgetown University, sees that skill in many of the innovators and entrepreneurs who are part of the StartupHoyas community. Like Drs. Jean-Marc Voyadzis and Hakim Morsli, who created Amie, a “virtual surgical assistant” to help patients prepare for surgery and recovery, entrepreneurs often create their companies by observing and questioning their surroundings, identifying a problem, and developing a solution.

Cho says that “one of the most common pitfalls for entrepreneurs is building solutions without problems. Oftentimes the most successful startups are those that are rooted in problems that the founders experienced firsthand. However, that doesn’t mean that you necessarily have to be an insider. Some of the most innovative ideas with the greatest potential to create impact have come from outsiders with fresh perspectives who aren’t locked into the conventions that seem to restrict many of the traditional players in the healthcare space.” While all of the innovators and entrepreneurs in the StartupHoyas community are focused on improving healthcare, not all are medical students. In fact, many are students and faculty from other areas of life sciences.

Starting one’s own company is much like scientific research — it’s the search for something that’s never been done before, because there is no product that is exactly like yours. But it’s important for researchers considering a business launch to stay flexible. As Cho says, “pick something you love, but be careful not to fall in love with your own science.”


Creative Intelligence

Innovative entrepreneurs have something called “creative intelligence,” which enables discovery, yet differs from other types of intelligence. This means innovators are more than “right-brained individuals.” They engage both sides of the brain and leverage what Christensen and his co-authors call the “five discovery skills” to create new ideas.

  • Associating: Connecting seemingly unrelated questions, ideas or problems from different areas.
  • Questioning: Challenging the status quo by asking “why,” “why not” and “what if.”
  • Observing: Scrutinizing common phenomena, particularly behavior.
  • Experimenting: Trying new ideas.
  • Networking: Meeting people with different viewpoints, ideas and perspectives to expand your knowledge.

Source: Excerpted from “The Innovator’s DNA.”


How to Advance Commonsense Artificial Intelligence


Professor and AI researcher Yejin Choi wants to build machines with “commonsense intelligence.” What is commonsense intelligence and how is she doing this?

Published June 11, 2019

By Robert Birchard

Natural language processing is a branch of artificial intelligence (AI) that studies the interactions between computers and human languages. Yejin Choi, associate professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and Senior Research Manager of the Allen Institute for Artificial Intelligence’s MOSAIC project, wants to build machines with commonsense intelligence. Dr. Choi recently spoke about her research and what she means when she says “common sense.”

What is the focus of your research?

My research addresses some of the fundamental limits of AI by modeling the common sense that humans have but today’s AI lacks. Specifically, it targets the inability of AI to navigate previously unseen situations or perform generalized tasks by relying on memory or external knowledge. Machine learning today is very task-specific and not very efficient—models work really well for only one purpose because they lack general knowledge of the world.

How do you define common sense?

Common sense is the basic level of practical knowledge and reasoning capabilities concerning everyday situations and events that are commonly shared among most people. For example, if we forget to close the fridge door, then we can anticipate that the food inside will spoil. Common sense is essential for humans to live and interact with each other in a reasonable and safe way. As AI becomes increasingly important in human life, it is crucial for AI to understand and reason about this fundamental component of human intelligence.

What differentiates human intelligence from AI?

Yejin Choi, PhD

One of the fundamental differences between human intelligence and AI is our understanding of how the world works, and our ability to reason based on that understanding.

AI excels at understanding taxonomic knowledge, like whether a penguin is a bird, or aspects of encyclopedic knowledge, like whether Washington is located in the United States. However, it struggles to reason about everyday common-sense situations: for example, if you need to break a window, it’s better to use a hard, heavy object like a bicycle lock than a soft, lightweight object like a teddy bear. This type of knowledge is difficult to process because people don’t explicitly state that bicycle locks are harder and heavier than teddy bears, and it’s difficult for machines to learn this just by reading text and processing language patterns.

The goal of my research is to acquire implicit knowledge for AI and construct a commonsense knowledge graph, which we will then use to build a deep learning representation that acts as external memory. This external memory can be used in other applications to enable faster learning based on less data.
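To make the idea concrete, here is a toy sketch of a commonsense knowledge graph stored as subject-relation-object triples that a downstream model could consult as external memory. This is illustrative only, not Choi’s actual system; the relation names and entries are invented.

```python
# Toy commonsense knowledge graph: subject-relation-object triples
# indexed for lookup. Relations and facts are invented examples.
from collections import defaultdict

class CommonsenseKG:
    def __init__(self):
        self.triples = defaultdict(set)  # (subject, relation) -> objects

    def add(self, subject, relation, obj):
        self.triples[(subject, relation)].add(obj)

    def query(self, subject, relation):
        """Return everything the graph 'knows' about (subject, relation)."""
        return self.triples.get((subject, relation), set())

kg = CommonsenseKG()
kg.add("fridge door left open", "causes", "food spoils")
kg.add("bicycle lock", "has_property", "hard and heavy")
kg.add("teddy bear", "has_property", "soft and light")

# A downstream model could consult the graph as external memory:
print(kg.query("fridge door left open", "causes"))  # {'food spoils'}
```

A learned system like the one Choi describes would embed such triples in a deep learning representation rather than store and match them literally.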

How do you teach artificial intelligence to reason implicitly?

I’m most excited about a new deep learning model that transfers representation between language and knowledge. A lot of knowledge is within the language. Nobody says, ‘My house is bigger than me,’ but if I did say that, you would understand my meaning. Our research involves a new language written for common sense, which everybody can understand and evaluate. This isn’t the natural language read in textbooks or spoken in daily dialogues, but it’s still technically a natural language, albeit a bit outside the scope of the usual use of language.

Between natural language and commonsense language there’s a significant overlap in the words and phrases that represent meanings, which allows us to perform transfer learning. We can use both typical language model training data and our new machine commonsense data. That’s a big plus, because today’s deep learning based neural language models are trained on enormous datasets; even though our machine commonsense dataset is very large in scale, it doesn’t match the scale of typical language model training data.
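As a rough illustration of the transfer-learning idea Choi describes (reusing representations learned from ordinary text when training on commonsense data), the sketch below fine-tunes an off-the-shelf pretrained language model on a couple of commonsense-style sentences. It is a minimal sketch, not the MOSAIC pipeline; the example sentences and hyperparameters are invented.

```python
# Minimal transfer-learning sketch: start from a model pretrained on
# ordinary text, then continue training on commonsense-style sentences.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # pretrained on web text
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

commonsense_corpus = [  # invented stand-ins for a commonsense dataset
    "If the fridge door is left open, the food inside will spoil.",
    "A bicycle lock is harder and heavier than a teddy bear.",
]

model.train()
for text in commonsense_corpus:
    batch = tokenizer(text, return_tensors="pt")
    # labels = input_ids trains the model to predict each next token
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because the pretrained weights already encode the overlapping words and phrases, far less commonsense data is needed than training from scratch would require.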

What are the benefits of your research?

By addressing the lack of common sense in deep learning and AI systems, we can expand the range of tasks an AI system is able to perform. With an understanding of implicit knowledge, we can teach new tasks with less data. Along with other researchers, our goals are improving performance against benchmarks for measuring common sense and reducing the amount of training data needed to address a range of tasks.

Can AI be taught human characteristics like empathy or curiosity?

Understanding social common sense knowledge and reasoning capabilities will improve a machine’s ability to simulate empathy, but it’s only mimicking what humans feel. AI doesn’t have feelings. Improved common sense will help AI better understand how humans might feel about a given situation, but it won’t instill AI with these characteristics.


Advancing Science in an App-Driven World

Apps and other digital platforms have become part of our daily lives for everything from social interaction to ordering dinner. These technologies are also providing intriguing opportunities to accelerate the use of science to improve our daily lives.

Published June 1, 2019

By Jennifer L. Costley and Chenelle Bonavito Martinez

Image courtesy of Pixel-Shot via stock.adobe.com.

According to the Pew Research Center, 77 percent of all Americans own smartphones. For 18-to-29-year-olds, this number increases to 93 percent and continues to rise. According to analysts who track such things, the number of apps downloaded daily across iOS and Google Play has reached 300 million, and the average number of apps downloaded to every iPhone/iPod touch and iPad is more than 60.

So it is safe to say that we are increasingly living in an app-driven world and that digital technology is now an integral part of how most of us manage our time and lives. Science is no exception — digital technologies are providing intriguing opportunities to accelerate the use of science to improve our daily lives.

This exciting trend is underlined by recent 5G announcements from Verizon and AT&T. The impact of 5G (fifth-generation wireless connectivity) has yet to be felt, but with transmission speeds much faster than current capabilities and a capacity for many more devices to connect simultaneously, it is clear that 5G is poised to transform our world.

A Network of “Solvers” from Around the Globe

Here at the Academy, the transformation has already begun. Virtual, cloud-based innovation challenges — sponsored by some of the world’s most dynamic companies — are enabling us to tap into a network of “solvers” from around the globe. Thus far, Academy challenges have generated potentially groundbreaking ideas on topics ranging from future aircraft design and wildfire management to alternative energy sources and sustainable urban development.

One recent example, sponsored by aerospace giant Lockheed Martin, was “Disruptive Ideas for Aerospace and Security.” In this challenge, researchers were invited to submit ideas for novel innovations utilizing autonomy, human augmentation or blockchain technologies. The entries included an extraordinary range of truly game-changing ideas, some with the potential to upend the aerospace industry.

And researchers are not the only ones getting involved. In the “Future of Buildings and Cities Challenge,” young people from around the world were invited to develop sustainable building concepts for future urban landscapes. The winners, six gifted teens from five countries, collaborated virtually to develop an ingenious “green” building design that incorporated a water recycling system, solar roof panels and “green walls” (a collection of vines, leaf twiners and climbers on a grid-like support to help purify the air and provide additional insulation). The concept also featured a clever “home assistant,” leveraging a series of indoor sensors to detect occupancy, light intensity, temperature, humidity and air quality, an idea that 5G connectivity could soon enable.
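The teens’ “home assistant” concept is easy to picture in code. Below is a toy sketch of a sensor-driven control loop over the kinds of readings they describe; the sensor names, values and rules are invented for illustration and are not the winning team’s actual design.

```python
# Toy "home assistant" loop: read indoor sensors, decide simple actions.
# All sensor names, readings, and rules are invented for illustration.

def decide_actions(readings):
    actions = []
    if not readings["occupancy"]:
        actions.append("dim lights")            # nobody home
    if readings["temperature_c"] > 26:
        actions.append("increase ventilation")  # too warm
    if readings["air_quality_index"] > 100:
        actions.append("activate green wall fans")  # stale air
    return actions

readings = {
    "occupancy": False,
    "light_lux": 120,
    "temperature_c": 27.5,
    "humidity_pct": 60,
    "air_quality_index": 130,
}
print(decide_actions(readings))
```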

Artificial Intelligence

But 5G is not the only game-changing technology at play. The field of artificial intelligence (AI) has also made astounding progress over the past decade. Machine learning and natural language processing are particularly dynamic subfields of AI, with the potential to revolutionize critical elements of the economy, including the media, finance, and healthcare sectors.

That’s why the Academy will be building upon the success of our annual Machine Learning Symposium to launch a new symposium series on natural language, dialog and speech in November of this year. We’re also thrilled that Yann LeCun, Chief AI Scientist at Facebook, and Manuela Veloso, Head of AI Research at J.P. Morgan, have agreed to serve as honorary chairs for the launch of a new initiative on applications of AI to critical sectors of the New York City economy.

We stand at the forefront of a massive shift in how society compiles, shares and learns from large data sets. But there are serious obstacles to overcome before we can unlock the potential of digital technology, AI, and big data to drive positive change. As advocates of evidence-based policy and decision-making, we in the scientific community must be at the forefront of efforts to ensure these new technologies are used to the benefit of humankind, and the planet upon which we live.

Science and Social Media: #facepalm or #hearteyes?

Beneath all the negative noise, science can flourish on social media, but users must be diligent, measured, and ethical with how they use this powerful platform.

Published June 1, 2019

By Kari Fischer, PhD

Image courtesy of Poramet via stock.adobe.com.

Somewhere in between those halcyon days of Facebook as a friendly college social media network and the acrimonious 2016 elections, meme-filled newsfeeds took over, and social media sites like Facebook, Twitter, YouTube and Pinterest transformed into new express lanes for the spread of misinformation. This development feels especially glaring in science.

As the use of social media expanded, it also became a major source for news and information. A 2018 Pew Research Center study found that 68 percent of American adults get news through social media sites. That change held not only for politically themed content, but for science too. Another 2018 Pew study found that most users report seeing science-related posts, and 33 percent view social media as a source for science news. Millions follow science-related pages, with the most popular including National Geographic, IFL Science, NASA, and ScienceAlert.

As news sources become increasingly fractured, it is difficult to dig through the mountains of contradictory articles, especially when we are asked to evaluate highly technical subjects that might be communicated poorly — sometimes intentionally so. The aforementioned list of influential “science-related” pages also includes those whose basis in empirical data is more loosely defined, like that of Dr. Mehmet Oz. In 2014, he was called before Congress for promoting sham supplements, and he recently tweeted about a purported link between astrology and health. His page has over 5.5 million followers.

Flawed information has a way of spreading quickly. Of the 100 most shared health-related articles in 2018, over half contained misleading or exaggerated statements, or even outright falsehoods. Some of those articles even came from reputable news sources.

The Pervasiveness of False Information

The pervasiveness of false information on social media may translate to an effect on public health. When measles outbreaks increased 30 percent worldwide, vaccine misinformation on the internet took center stage. A recent study in the United Kingdom from the Royal Society for Public Health showed that 50 percent of parents with young children were exposed to negative messages about vaccines on social media.

This did not happen entirely organically. Russian trolls engaged not only in spreading political falsehoods, but they heightened the debate around vaccines too. A study analyzing tweets from 2014 to 2017 revealed that known Russian accounts tweeted about vaccines at higher rates than average users. The content of their tweets presented both pro- and anti-vaccine messages, a known tactic that amplifies a sense of “debate” and therefore propagates a sense of uncertainty.

Why are these misleading posts so attractive? Dominique Brossard, professor and chair in the Department of Life Sciences Communication at the University of Wisconsin-Madison, pulls no punches in her assessment, “They’re using all the strategies that unfortunately the scientific community has not been using.” She emphasizes that they exploit the most fundamental driver of whether or not information is accepted: trust. “What are the main things that build trust? Concern, care and honesty.” Or at least the perception of honesty.

The strength of these tactics can be especially heightened when they are insulated from outside influence. Many anti-vaccine organizations structure their Facebook groups so that they are closed or private, allowing misinformation to circulate entirely unchecked and out of the public eye.

The Effect on Public Opinion

But, as all good scientists know, correlation does not equal causation. The pervasiveness of false information does not mean that there is a straight line of causality to an effect on public opinion. “It’s hard to quantify the effects of misinformation,” Brossard cautions. The same 2018 Pew study that found 68 percent of American adults get news on social media also found that 57 percent expect the news they see there to be largely inaccurate.

The public may also be changing how they’re interacting with social media. After the 2016 elections and the Cambridge Analytica scandal, some users needed a pause. On Facebook, 54 percent of adults modified their use in 2018: adjusting their privacy settings, deleting the app from their cellphone, or even taking extended breaks.

Social media companies are also modifying their approach. Pinterest blocked users from searching for vaccine-related terms. YouTube removed advertisements from anti-vaccine themed videos, and recently pledged to curb the spread of misinformation by modifying its recommendation algorithms — hopefully preventing users from following conspiracy-laden video rabbit holes.

And in spite of all the misleading content, which prompts all scientists to reply #headdesk or #facepalm — that’s social media speak for frustration or exasperation — there are many exciting online communities that may provide some redemption for these platforms.

Recognizing the opportunity to cater to the sci-curious, experts in science outreach jumped online as a way to spread a passion for science. YouTube accounts like AsapSCIENCE and Physics Girl have millions of subscribers, and take the time to break down complex subjects for their audiences.

Scientists and Instagram

On Instagram, science.sam is the account of Samantha Yammine, who uses the platform as a new line of communication with the public. While earning her PhD, she shares her daily life as a researcher through photos and videos both in and outside of the lab, with a humanizing effect. She also contributes to a research study nicknamed #ScientistsWhoSelfie, which is systematically exploring the effects of scientists’ Instagram posts on public perception of scientists.

Social media also provides a megaphone to amplify diverse voices in science, and remove hierarchies that exist offline. The accounts belonging to #VanguardSTEM link to live, monthly interviews with both “emerging and established women of color in STEM,” where they cover research, career advice and social commentary.

Kyle Marian Viterbo, social media manager at Guerilla Science and producer of The Symposium: Academic Stand-Up, cites her experience in biological anthropology groups on Facebook as some of the earliest examples of social forums for scientific discussion, where status and titles were stripped away. “We talked about papers and coverage of papers in depth, in a way that only an academic community can. It’s been an amazing experience to see that community grow, and add new scientists who have equal conversation power with folks who are emeritus professors.”

Scientists and Twitter

A 2017 study estimated that over 45,000 scientists use Twitter. From volcanologists, to climate scientists, to evolutionary biologists, they’re all online in a professional capacity. There, they share new papers, announce job openings in their labs, comment on published research and network with other scientists both in and outside of their field.

For science professionals who feel emboldened to get online, but don’t know how, Viterbo advises easing your way in, “My number one advice is to just lurk. You’re silent, you’re observing, it’s almost like an ethnography situation…you don’t have to be active. A lot of it is also getting to know what you want out of that experience, and you don’t really know that until you see other people doing it well, and it resonates.”

Once your field observations are complete, Viterbo says it’s time to experiment with a few posts: “You just have to play in this space, and allow yourself to make a few mistakes.” She reminds scientists that they have the instincts for learning how to do this well, but also need to get out of their own way: “Apply the scientific method to communication and social media, but also be more forgiving. We’re not necessarily the most forgiving of ourselves in science, but do it for fun!”

Communication Works Both Ways

If you plan on venturing into social media with an agenda in mind, perhaps take a cue from Tamar Haspel, a science journalist who writes the award-winning Washington Post column Unearthed. She spends much of her time researching controversial topics like pesticides, GMOs and diet recommendations, and cautions scientists to remember that “communication works both ways.”

Haspel makes a point to read thoughtful discussions from all sides, even on Twitter, “I have smart people with wildly different views in my feed, and I pay attention when they post something, because of course when we see something that we don’t want to believe we have a tendency to just scroll down. I try to stop, click through, and listen.” Her own posts are comprehensive explainers on the complex science of agriculture, and she also readily self-corrects and engages politely on divisive topics.

The result has positioned her as a trustworthy source for information. Haspel’s number one piece of advice for scientists who want to achieve the same? “We need to think less about being persuasive, and think more about being persuadable.”


Lockheed Martin Challenge Inspires Innovative Ideas


The winners of Lockheed Martin’s 2019 Challenge are developing innovative ways to advance national defense.

Published May 15, 2019

By Marie Gentile, Robert Birchard, and Mandy Carr

Big ideas come from the unlikeliest sources. Their only common attributes are the passion and ingenuity of their inventors. Recently, Lockheed Martin sponsored the “Disruptive Ideas for Aerospace and Security” Challenge to find the next big idea. Meet the winners who hope to transform the future with their innovative solutions.

Grand Prize Winner: IRIS

Bryan Knouse

The ability to make decisions can be compromised by cognitive overload, especially during stressful situations, so Bryan Knouse envisioned IRIS — a voice-controlled interface for Patriot Missile Systems — that would help people make better decisions.

“IRIS leverages software automation and speech technology in high pressure scenarios to reduce human cognitive overload and enable the operator to better focus on mission-critical decisions,” explained Mr. Knouse. “I came at this project thinking about using AI and software interfaces to make sophisticated experiences simpler and safer.”

A mechanical engineer by training, but an AI software and programming tinkerer by habit, Mr. Knouse believes voice interfaces present the greatest opportunity to make complicated and sophisticated processes simpler. In the aerospace and security field, simplicity is valued because complexity can cause poor decision-making that can cost lives.

“Artificial intelligence excels at not getting overwhelmed with scales of information. Unlike humans, a computer won’t get paranoid, or disturbed, or stressed out after being fed a spreadsheet with millions of rows of data. A computer will process the information.”

“This challenge was an awesome opportunity. Not just because I was able to build a cool project, but also to connect with a company that I’d otherwise not really have an opportunity to interface with,” Mr. Knouse concluded. “These kinds of technology competitions are a great way for the private sector and established companies to interface with innovators.”

Second Place: Improving Urban Situational Awareness

Dan Cornett

Ninety-four percent of vehicular accidents in the United States are caused by driver error, but what if assistive technologies could help drivers focus? This is the premise advanced by Garrett Colby and Dan Cornett, two solutions-oriented engineering students from the University of North Carolina at Charlotte.

While no technology can remove modern day distractions, a modular sensor array could collect data about roadside conditions and unobtrusively alert the driver to potential hazards.

The pair plan to combine neural networks, RADAR, LiDAR, and a 360-degree camera to continuously collect information on roadside conditions. The weakness of one sensor could be compensated for with the strength of another, while the data provided by each could be compared individually to ensure accuracy.
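One simple way to realize this compensation idea is confidence-weighted fusion: weight each sensor’s hazard estimate by how reliable that sensor currently is. The sketch below is illustrative only, not the students’ actual design; the sensors, confidences and numbers are invented.

```python
# Confidence-weighted sensor fusion sketch. Each sensor reports a hazard
# probability plus a confidence; low-confidence readings count for less.

def fuse_hazard_estimates(readings):
    """readings: list of (hazard_probability, confidence) per sensor."""
    total_weight = sum(conf for _, conf in readings)
    if total_weight == 0:
        return 0.0
    return sum(p * conf for p, conf in readings) / total_weight

readings = [
    (0.9, 0.2),  # 360-degree camera: sees a hazard, but degraded by glare
    (0.7, 0.9),  # RADAR: confident, moderate hazard
    (0.6, 0.8),  # LiDAR: confident, moderate hazard
]
print(f"fused hazard estimate: {fuse_hazard_estimates(readings):.2f}")
```

Comparing the per-sensor inputs against the fused output is also a cheap consistency check: a sensor that disagrees wildly with the others may be failing.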

Garrett Colby

“Challenges like this are a good illustration for potential engineers that anyone can make a difference,” said Mr. Colby. “This project was different in that the sky was the limit; being a conceptual project, you got to really think outside the box,” added Mr. Cornett.

“Challenges like this give young engineers a place to demonstrate their creativity.”

Third Place: Augmented Superluminal Communication

The sense of isolation experienced during space flight could contribute to the degradation of mission performance. Gabriel Bolgenhagen Schöninger, a physics student at the Technical University of Berlin with a communications technology background, believes his proposal could help lonely astronauts focus. His solution combines wearable technologies, biometric sensors and augmented reality to simulate conversation.

Gabriel Bolgenhagen Schöninger

The idea came from Mr. Bolgenhagen Schöninger’s own experience with the rigors of living far from his native Brazil.

“My intention was to create an environment where you can simulate a conversation by collecting communications data and then emulating this data in a virtual environment,” he explained.

In advance of space travel, information could be condensed and inserted into intelligent communications platforms. The compressed communications data could then be “reanimated” to respond to the astronaut. While he developed this idea for long distance travel, Mr. Bolgenhagen Schöninger believes it could have implications in the field of education.

“This challenge creates a great opportunity for young people to get feedback on their ideas,” he concluded. “It can help motivate young engineers to display their ideas, while developing more confidence in that idea.”


AI and Big Data to Improve Healthcare


The next decade will be a pivotal one for the integration of AI and Big Data into healthcare, bringing both tremendous advantages as well as challenges.

Suchi Saria, PhD

Published May 1, 2019

By Sonya Dougal, PhD

One of the most common causes of death among hospital patients in the United States is also one of the most preventable — sepsis.

Sepsis symptoms can resemble other common conditions, making it notoriously challenging to identify, yet early diagnosis and intervention are critical to halting the disease’s rapid progress. In children, for each hour that sepsis treatment is delayed, the risk of death increases by as much as 50 percent.

Innovations such as the one pioneered by Suchi Saria, director of the Machine Learning and Healthcare Lab and the John C. Malone Assistant Professor at Johns Hopkins University, are helping to reverse this trend. In 2013, Saria and a team of collaborators began testing a machine learning algorithm designed to improve early diagnosis and treatment of sepsis.

Using troves of current and historical patient data, Saria’s artificial intelligence (AI) system performs real-time analysis of dozens of inpatient measurements from electronic health records (EHRs) to monitor physiologic changes that can signal the onset of sepsis, then alert physicians in time to intervene.
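To make the general pattern concrete, here is a heavily simplified sketch of a streaming early-warning monitor over EHR measurements. It is not Saria’s model, which learns from historical patient data; this toy uses a hand-built score with invented thresholds and field names.

```python
# Toy early-warning monitor over streaming EHR snapshots. The score,
# thresholds, and field names are invented; a real system would learn
# its model from historical patient data.

SEPSIS_THRESHOLD = 0.7

def risk_score(obs):
    """Hand-built weighted score over a few physiologic measurements."""
    score = 0.0
    if obs["heart_rate"] > 100:
        score += 0.3
    if obs["temperature_c"] > 38.3:
        score += 0.3
    if obs["resp_rate"] > 22:
        score += 0.2
    if obs["wbc_count"] > 12.0:
        score += 0.2
    return score

def monitor(stream):
    for obs in stream:  # each obs is one timestamped EHR snapshot
        if risk_score(obs) >= SEPSIS_THRESHOLD:
            yield f"ALERT patient {obs['patient_id']}: possible sepsis onset"

feed = [{"patient_id": 42, "heart_rate": 118, "temperature_c": 38.9,
         "resp_rate": 24, "wbc_count": 13.5}]
for alert in monitor(feed):
    print(alert)
```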

“Some of the greatest therapeutic benefits we’re going to see in the future will be from computational tools that show us how to optimize and individualize medical care,” Saria said. She explained that the emergence of EHRs, along with the development of increasingly sophisticated AI algorithms that derive insights from patient data, will fuel a seismic shift in medicine — one that merges “what we are learning from the data, with what we already know from our best physicians and best practices.”

Nick Tatonetti, PhD

Electronic Health Records: A Gold Mine for Computer Scientists

EHRs have become a data gold mine for computer scientists and other researchers, who are tapping them in ways designed to improve physician-patient encounters, inform and simplify treatment decisions, and reduce diagnostic errors. As with many other technological advances, though, some physicians regard EHR systems with less enthusiasm.

A 2016 American Medical Association study revealed that physicians spend nearly twice as much time on EHR tasks as they do in direct clinical encounters. Physician and author Atul Gawande recently lamented in The New Yorker that “a system that promised to increase my mastery over my work has, instead, increased my work’s mastery over me.”

Yet data scientist Nicholas Tatonetti, the Herbert Irving Assistant Professor of Biomedical Informatics at Columbia University, envisions a day when such AI algorithms will enable physicians to deepen their interaction with patients by freeing them from the demands of entering data into the EHR. Tatonetti has designed a system using natural language processing algorithms that takes accurate notes while physicians talk with patients about their symptoms. Like Saria’s AI system, Tatonetti’s takes advantage of the vast amount of data captured in EHRs to alert physicians in real time to potentially dangerous drug interactions or side effects.

Unknown Interactions

Anyone who has filled a prescription is familiar with the patient information leaflet that accompanies each medication, detailing potential side effects and known drug interactions. But what about the unknown interactions between medications?

Ajay Royyuru, PhD

Tatonetti has also developed an algorithm to analyze existing data in electronic health records, along with information in the FDA’s “adverse outcomes” database, to tease out previously unknown interactions between drugs. In 2016, he published a study showing that ceftriaxone, a common antibiotic, can interact with lansoprazole, an over-the-counter heartburn medication, increasing a patient’s risk of a potentially dangerous form of cardiac arrhythmia.
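One standard technique for teasing such signals out of an adverse-events database is disproportionality analysis, for example computing a reporting odds ratio over a 2x2 contingency table of reports. The sketch below shows that calculation with invented counts; it illustrates the general method, not Tatonetti’s specific algorithm.

```python
# Reporting odds ratio (ROR): does an adverse event appear
# disproportionately often in reports that mention a given drug pair?
# All counts below are invented for illustration.

def reporting_odds_ratio(a, b, c, d):
    """
    a: reports with the drug pair AND the event
    b: reports with the drug pair, without the event
    c: reports without the drug pair, with the event
    d: reports without the drug pair, without the event
    """
    return (a / b) / (c / d)

ror = reporting_odds_ratio(a=30, b=970, c=200, d=98800)
print(f"reporting odds ratio: {ror:.1f}")  # values well above 1 suggest a signal
```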

As data-driven AI techniques become more accessible to clinicians, the treatment of conditions both straightforward, like hypertension, and highly complex, such as cancer, will be transformed.

A Paradigm Shift in Physician-Patient Interactions

Ajay Royyuru, vice president of healthcare and life sciences research at IBM and an IBM Fellow, explained that, “when a practitioner makes a patient-specific decision, the longitudinal trail of information from thousands of other patients from that same clinic is often not empowering that physician to make that decision. The data is there, but it’s not yet being used to provide those insights.”

In the coming years, physicians and researchers will be able to aggregate and better utilize EHR data to guide treatment decisions and help set patients’ expectations.

The ability to draw on information from tens or even hundreds of thousands of patients, in addition to a physician’s own experience and expertise, could represent a paradigm shift in physician-patient interactions, according to Bethany Percha, assistant professor at the Icahn School of Medicine at Mount Sinai, and CTO of the Precision Health Enterprise, a team that turns AI research into tangible products for the health system.

“Big Data offers us the promise of using data to have a real dialogue with patients — if you’re newly diagnosed with cancer, it means giving people a realistic, data-driven assessment of what their future is likely to be,” she said.

Biases and Pitfalls

Despite the surge of interest and investment in AI over the past two decades, significant barriers to its widespread application and deployment in healthcare remain.

AI systems that tap current and historical patient health data risk reinforcing well-noted biases and embedded disparities. Medical research and clinical trials have long suffered from a lack of both ethnic and gender diversity, and EHR data may reflect patient outcomes and treatment decisions influenced by race, sex or socioeconomic status. AI systems that “learn” from datasets that include these biases will inherently share and perpetuate them.

Percha noted that greater transparency within the algorithms themselves — such as systems that learn which features an algorithm uses to make a prediction — could alert users to obvious examples of bias. Removing bias from AI algorithms is a work in progress, but the research community’s awareness of the issue and efforts to address it mirror a greater push to eliminate bias and decrease inequities in medicine overall. Optimistically, Percha noted that Big Data and AI may ultimately help create a more level playing field in healthcare delivery.

“Clinical decisions made on the basis of data have the potential to be much more standardized across different health facilities, so people who are in a rural area, for example, might have access to the same decision-making benefits as someone in a city,” she said.

Patient Data Privacy

Ensuring patient data privacy is another hot-button issue. Training artificial intelligence systems requires access to massive troves of patient data. Despite the fact that this information is anonymized, some patient advocates and bioethicists object to this access without explicit permission from the patients themselves.

Another privacy issue looms equally large: how to safely collect and protect the streams of potentially useful health data generated by wearable devices and in-home technologies without making patients and consumers feel, in Royyuru’s words, “like they are living their lives in front of a camera.” Studies have shown that data from smartphone apps can provide valuable information about the progression of certain diseases, such as Parkinson’s.

Wearables and in-home IoT devices can also extend the realm of clinical observation well beyond the doctor’s office, revealing, for example, important details about a Parkinson’s patient’s ability to complete the tasks of daily living. Yet Royyuru emphasizes that unless patients trust that their data will be kept private and ethically utilized, these technologies will fizzle long before they’re widely adopted.

Building Trust

The next decade will be a pivotal one for the integration of AI and Big Data into healthcare, bringing both tremendous advantages as well as challenges. Some applications of AI, such as image recognition, are already especially well-suited to healthcare — AI algorithms often match or even outperform radiologists in interpreting medical images — while others are far from ready for widespread use.

Saria, who has deployed her system successfully at multiple hospitals, says, “physicians often greet news of AI breakthroughs with skepticism because they’re being over-promised results without clear data demonstrating this promise. True integration and adoption of AI requires not just careful attention to physician workflows, but transparency into exactly how and why an algorithm has arrived at a particular recommendation.”

Rather than replacing or challenging a physician’s place in the healthcare ecosystem, Saria believes that AI can lighten the load and, as algorithms improve, generate diagnostic and treatment recommendations that physicians and patients can both deem trustworthy.

“We are still figuring out how to make real-time information available so that it’s possible for physicians or expert decision-makers to understand, interpret and determine the right thing to do — and to do that in an error-free way, over and over again,” Saria said. “It’s a high-stakes scenario, and you want to get to a good outcome.”

Mark Shervey, Max Tomlinson, Matteo Danieletto, Sarah Cherng, Cindy Gao, Riccardo Miotto, and Bethany Percha, PhD, Mount Sinai Health System, Icahn School of Medicine at Mount Sinai.

The Cutting Edge: There’s An App for That


Researchers are making greater use of the increasing computational power found in smartphones. This means apps may soon be able to help improve human health outcomes.

Published May 1, 2019

By Charles Cooper

The Apple Watch Series 4 helps users stay connected, be more active and manage their health in powerful new ways.
Photo credit: Apple Inc.

Apple CEO Tim Cook has major ambitions to “democratize” the health sector. In a recent interview with CNBC, Cook said that “health will be the company’s greatest contribution to mankind.” He’s also enlisted an important ally to help Apple make that happen.

Atrial fibrillation, which affects 33 million people worldwide, can lead to blood clots, stroke and heart failure. Later this year, Johnson & Johnson (J&J), which developed a heart health application, will carry out a study of volunteer patients 65 and older who wear an Apple Watch, to understand whether smartphone technology can help clinicians detect, diagnose and treat the malady earlier and more accurately.

“Five years from now — and certainly within a decade — wearable devices will be an integral part of healthcare diagnosis and delivery,” said Paul Burton, MD, PhD, FACC, Vice President, Medical Affairs, Janssen Scientific Affairs, LLC, noting that the app will work in conjunction with the Apple Watch Series 4’s irregular rhythm notifications and ECG feature.

Real-Time Data

The diodes on the back of an Apple Watch Series 4 essentially look for a pulse to check blood flow, and the watch applies an algorithm to determine whether the pulse patterns are irregular. It can also take a high-fidelity ECG reading, which is then sent to a physician. That kind of real-time data is crucial when you consider that around 20 percent of individuals who experience a stroke are unaware of their underlying AFib condition.
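As a rough illustration of what “determine whether the pulse patterns are irregular” can mean, the sketch below flags a rhythm when the relative variability of inter-beat intervals crosses a threshold. Apple’s actual algorithm is proprietary; the threshold and sample intervals here are invented.

```python
# Flag an irregular rhythm when inter-beat intervals (seconds between
# successive pulses) vary too much. Threshold and data are invented.
import statistics

IRREGULARITY_THRESHOLD = 0.15  # coefficient of variation, illustrative

def is_irregular(beat_intervals):
    """beat_intervals: list of seconds between successive pulses."""
    mean = statistics.mean(beat_intervals)
    cv = statistics.stdev(beat_intervals) / mean  # relative variability
    return cv > IRREGULARITY_THRESHOLD

steady = [0.80, 0.82, 0.79, 0.81, 0.80]
erratic = [0.60, 1.10, 0.45, 0.95, 0.70]
print(is_irregular(steady))   # False
print(is_irregular(erratic))  # True
```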

The widening availability of digital tools, paired with advances in technologies like artificial intelligence and machine learning, is raising hopes that history will repeat itself. In the last decade, business applications helped organizations become more efficient and to better engage with their customers. Now researchers are making greater use of the increasing computational power found in smartphones and it’s no longer a stretch to imagine a future in which there’s an app for nearly every step in the research process.

Burton expressed excitement at the potential of apps to change behavior and improve health outcomes in ways that were unimaginable less than a decade ago. “I think this is an amazingly exciting point bound only by our imagination. I think the possibilities are endless,” Burton said. “AFib is treatable but you need definitive, compelling data to really make a difference in healthcare.” At the same time, Burton cautions that “apps don’t work if people download them but can’t be bothered to use them.” The point is that all the technology in the world won’t help if the people who need it most don’t incorporate the tools into their lifestyles.

Promises and Reality Checks

That challenge was faced head-on by University of Southern California research scientists Susan Evans and Peter Clarke, as they tested out a mobile app they developed to help low-income people who use food pantries to obtain fresh vegetables, which, while often plentiful in supply, may be limited in variety.

Though health-related mobile apps are now common, the promise and the performance often don’t match up. Clarke noted that fewer than one percent of the estimated 330,000 apps available on the Apple and Android download stores have been subjected to rigorous testing for effectiveness.

“Getting people to incorporate devices and apps into their lives is a whole separate science,” he said.

In developing their app, Evans and Clarke made sure the design incorporated user input early in the process, just as if they were creating a consumer app. For example, even though food banks collect fresh food and vegetables, many low-income people aren’t incorporating those offerings into their diet because they may not know how to cook and/or preserve the food that’s available.

Evans and Clarke, who started the project with certain assumptions about what was needed, were forced to refine their ideas about how to change dietary habits, and that refinement came only after extensive field research and conversations with the people they hoped would ultimately use the app.

“We had to customize the app in order to meet clients’ needs and not impose this on them from the top,” said Evans. “It took years of tinkering. In terms of functionality and navigation, we designed it over and over again to try and get it right.”

Technology Is Only as Good as the User

An app recipe for broccoli burritos. This user wanted Latino-flavored and kid-friendly recipes.

As scientists and researchers struggle with the alchemy of user engagement, they have the advantage of being able to lean on the experience of software developers working in the consumer and business markets. Unfortunately, there’s no one-size-fits-all answer explaining how to get a target audience not just to download the applications, but to also use them consistently.

University of Michigan computer scientist Kentaro Toyama struggled to understand the nuances surrounding successful user engagement when he worked as assistant managing director of Microsoft Research in India. Toyama’s team built several different digital apps in areas like healthcare and social services that performed well in the labs. But few survived the test of time after they were released to the public.

“When we did these research projects in relatively constrained contexts, we could show how technology has a positive impact,” he said. “However, when we scaled those projects, we found that it did not have the same impact. Technology can be extremely good at delivering what people want,” he said. “It’s not so good when it comes to encouraging [people] to become better versions of themselves.”

Marissa Burgermaster would probably agree. As an elementary and middle school teacher she became interested in how food and nutrition influenced the lives of the students she taught. Ultimately she decided to pursue a doctorate in behavioral nutrition.

Nutrition Education Interventions

During the course of her research, she also discovered a seeming contradiction: As a whole, nutrition education interventions didn’t produce tremendous results, but anecdotally they did appear to work for at least some students.

“What kept coming across from the data was … that different groups of kids … responded quite differently to the intervention,” she said. “That explained why an average intervention didn’t get great results — even though for some kids, it was exactly what they needed.”

Burgermaster said it underscored the importance of accumulating as much data as possible before the fact. She went on to do her post-doctoral research in biomedical informatics and now teaches in the Department of Nutritional Sciences at the University of Texas at Austin. Burgermaster kept the lesson in mind when she set out to develop an app that provides nutrition information to underserved communities.

“The reason why I was drawn to intervening via technology was not just to use data, but also it’s about meeting people where they are and get them to where they need to be. And let’s be honest: people are stuck in their phones,” she said.

The app, which is rolling out this spring in Austin, offers users tailored nutritional recommendations and interventions to help them reach their goals. Like J&J’s test project with Apple, it’s another indication of the potential for health practitioners to use smartphone and wearable technology to generate data about their patients to help with diagnoses.

A Mobile Lab in Every Home

Mobile Instruments — Ozcan Lab

When Aydogan Ozcan talks about the potential of smartphone apps to effect transformative changes, don’t expect to hear him riff about cool new ways to arrange virtual candies on a screen or share adorable cat videos. He has a far bigger goal in mind.

Over the years, Ozcan’s lab has focused on developing field-portable medical diagnostics and sensors for resource-poor areas, coming up with relatively inexpensive ways to equip smartphones with advanced imaging and sensory capabilities that once were only found in expensive high-end medical instruments.

In the last decade, he has come up with ways to exploit the functionality available in contemporary smartphone hardware and software to further bio- and nano-photonics research. For example, one technique allowed a smartphone to produce images of thousands of cells in samples that were barely eight micrometers wide — and at the cost of less than $50 in off-the-shelf parts.

More recently, Ozcan demonstrated how the application of deep learning techniques can generate smartphone images that approach the resolution and color details found in laboratory-grade microscopes using 3-D printed attachments that cost less than $100 apiece.

“Instrumentation is very expensive. The cost of advanced microscopes, for example, can run to hundreds of thousands of dollars,” said Ozcan, a professor of electrical and computer engineering and bioengineering at the UCLA Samueli School of Engineering, and a three-time Blavatnik National Awards for Young Scientists finalist.

Smartphones: Mobile Medical Labs

Smartphones are relatively inexpensive, and more than 3 billion people use them around the world, encouraging Ozcan to envision a future in which resource-poor nations have expanded access to advanced measurement tools that provide data to help local residents better treat medical conditions. Think of the average smartphone one day functioning as a mobile medical lab.

Ozcan also believes that people in their homes will soon be using a growing assortment of advanced mobile technologies and apps for preventive care, particularly when it comes to monitoring an aging patient or someone with a chronic condition.

“In the U.S., five percent of patients cause 50 percent of health expenditures per year. We can reduce that cost with better preventive care but for that, the home needs better technology. We should be able to provide that with mobile cost-effective systems so you can do some of the measurements that would normally require sending people to the hospital to take a sample, wait for the results and then go to the pharmacy with a prescription.”

While we may not be there yet, the world is fast approaching that tipping point where mobile apps lead to a veritable explosion of powerful, cost-effective alternatives to some of the most advanced biomedical imaging and measurement tools now in the market.



About the Author

Charles Cooper is a Silicon Valley-based technology writer and former Executive Editor of CNET.

Citizen Science in the Digital Age: Get Out the Maps


Mapillary aims to make the world a smaller place with maps that continually update street-level conditions.

Published May 1, 2019

By Robert Birchard

The term “citizen science” first entered the Oxford English Dictionary in 2014, but it describes a long-standing tradition of collaboration between professional and amateur scientists. Perhaps no field is as closely associated with citizen science as astronomy, where amateur stargazers continue to sweep the skies for unidentified heavenly bodies. Today, with the advent of smartphone technology, even more fields of scientific inquiry are open to the curious amateur.

Jan Erik Solem, CEO and Founder of Mapillary

Making the World a Smaller Place

With more than 440 million images from more than 190 countries, the street-level imagery platform Mapillary is trying to make the world a smaller place with maps that continually update street-level conditions.

“Carmakers can use the data to help train their autonomous systems — essentially ‘teaching’ cars to see and understand their surroundings — and mapmakers to populate their maps with up-to-date data. Cities can use it to keep inventories of traffic signs and other street assets among other things,” explained Jan Erik Solem, PhD, CEO and founder of Mapillary.

The data is collected by contributors who upload it onto Mapillary’s platform.

“The traditional approach to mapping places includes sending out fleets of cars to map cities and towns, but these places change faster than mapping corporations are able to keep up with,” Solem added.

Simple Tools Like Mobile Phones and Action Cameras

“Using simple tools like mobile phones or action cameras, anyone can go out, map their town and have data instantly generated from the images to update maps everywhere,” said Dr. Solem. “No one else collects data in this collaborative way.” The data is free for educational and personal use, he added. “The company is closely tied to the research community and we recognize how helpful it is for researchers to have access to the kind of data that’s hosted on our platform,” explained Dr. Solem. “Mapillary is a commercial entity, but we are driven by research and this is part of our way of paying it forward.”

The data that Mapillary receives is verified through computer vision technology and GPS coordinates, integrated with the mobile phones and cameras that map the roads. “Our computer vision technology detects and recognizes objects in images including things like traffic signs, fire hydrants, benches and CCTVs. Having diverse imagery from all over the world means we have a rich training dataset that enables us to build some of the world’s best computer vision algorithms for street scenes.”
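Mapillary’s production pipeline is proprietary, but the general technique Solem describes — detecting street objects like fire hydrants and benches in photos — is easy to sketch. The following is a minimal illustration only, not Mapillary’s code, using an off-the-shelf COCO-pretrained detector from torchvision; the file name street_scene.jpg is a placeholder.

```python
# A minimal sketch of street-scene object detection with torchvision's
# COCO-pretrained Faster R-CNN. Illustrative only; not Mapillary's pipeline.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

image = read_image("street_scene.jpg")      # placeholder input photo
batch = [weights.transforms()(image)]       # model expects a list of images

with torch.no_grad():
    detections = model(batch)[0]            # dict of boxes, labels, scores

categories = weights.meta["categories"]     # COCO names: "fire hydrant", "bench", ...
for label, score in zip(detections["labels"].tolist(),
                        detections["scores"].tolist()):
    if score > 0.8:                         # keep confident detections only
        print(f"{categories[label]}: {score:.2f}")
```

A production system like the one Solem describes would also need to geolocate each detection from the image’s GPS coordinates and reconcile repeated sightings across contributors’ uploads.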

Mapillary’s mobile app allows for instant updates with the latest road conditions.

Keeping Citizens in Science

Citizen science requires the enthusiastic participation of the public, but how can researchers keep the public engaged? This question was recently considered in a paper from Maurizio Porfiri, PhD, of the Dynamical Systems Laboratory at New York University, titled “Bring them aboard: Rewarding participation in technology-mediated citizen science projects.”

The team hypothesized that monetary rewards and online or social media acknowledgments would increase engagement of participants.

“People contribute to citizen science projects for a variety of different reasons,” said Jeffrey Laut, PhD, a postdoctoral researcher in Dr. Porfiri’s lab. “If you just want to contribute to help out a project, and then you’re suddenly being paid for it, that might undermine the initial motivation.”

“For example, one of the things we point out in the paper is that people donate blood for the sake of helping out another human,” explained Dr. Laut. “Another study found that if you start paying people to donate blood, it might decrease the motivation to donate blood.”

Proper Rewards for Participation

If a citizen science project is suffering from low levels of participation, researchers need to choose the level of reward carefully.

“I think with citizen science projects the intrinsic motivation is to contribute to a science project and wanting to further scientific knowledge,” said Dr. Laut. “If you’re designing a citizen science project, it would be helpful to consider incentives to enhance participation and also be careful on the choice of level of reward for participants.”

The technology used and the scope of information collected may have changed, but the citizen scientist’s role remains as important as ever.

“It is important that citizens understand the world in which they live and are capable of making informed decisions,” said Ms. Prieto. “It’s also important that all people understand science, especially to combat disinformation. From this point of view citizen science is vital and a needed contributor to the greater field of science.”



Law Experts Give Advice for Scientific Research

A dramatically lit gold justice scale backlit on a dark background - 3D render

With a bit of forethought, researchers can avoid the pitfalls of modern intellectual property management and data security.

Published May 1, 2019

By Alan Dove, PhD

Jim Singer

Google Docs. Open notebooks. The Internet of Things. Open source software. Cloud storage. For researchers, the ever-expanding world of digital data handling tools seems like a theme park built just for them. For intellectual property lawyers, security experts and technology transfer managers, however, it can look more like a house of horrors.

Scientists focused on their projects often set up collaborations, configure networked data-gathering equipment and write software with little thought about patents, copyrights or liability. Those choices can come back to haunt them years later. However, researchers can avoid many of the pitfalls of modern intellectual property management and data security with a bit of forethought.

The Other Kind of IP Address

The internet was originally built to let defense department research projects at multiple institutions collaborate over a single network, and it still excels at facilitating scientific collaboration, subject to caveats about data security.

“Collaboration has always occurred across research institutions, [but] online collaboration has made it happen more quickly,” says attorney Jim Singer, chair of the intellectual property department at the law firm Fox Rothschild in Pittsburgh, Pa. Singer adds that “often what we see is that the collaboration occurs … before the researchers have considered that they might have some intellectual property that’s worth protecting.”

Even simple online collaborations can lead to legal quagmires. “If you’re collaborating using, say, Google Docs … what you’re left with can be a joint work where it’s not clear what each party owns, and in fact, you end up with a co-ownership situation,” says Jeremy Pomeroy, an intellectual property attorney and founder of the Pomeroy Law Group in New York City. To be clear, that’s not always a good thing. “Often clients like the idea of joint ownership, it sounds friendly, [but] they don’t understand the implications of that,” says Pomeroy.

Joint Ownership of a Copyright

Jeremy Pomeroy

Without an agreement to the contrary, there is joint ownership of a copyright of “a work prepared by two or more authors with the intention that their contributions be merged into inseparable or interdependent parts of a unitary whole.” (17 U.S. Code § 101). Co-inventors on a patent are all considered equal contributors, each able to license or sell the invention however they like. Worse, it may not be up to the scientists to decide who to include on that list.

“Patent law decides who is an inventor in a patent application, and joint inventorship means joint ownership. If you’re claiming something in the patent application and a collaborator contributed anything to those claims, then that collaborator must be named as an inventor,” says Singer. A minor contributor could wield outsized influence over the fate of the invention.

Cloud-based storage and high-speed connections also make it easy to collaborate across continents and sometimes conflicting legal systems. Singer says that the U.S. Patent Act states that if you invent something in the U.S., you must file your patent application first in the U.S. before filing in other countries. That would be simple enough if China, India and other nations didn’t have similar types of requirements. Even if all of the scientists involved work for the same institution or company, “the question becomes ‘where can you file the patent application first?’” Singer says.

The rise of rapid publication platforms, such as preprint servers, has added yet another twist. Singer explains that the U.S. gives inventors a one-year grace period to file a patent after describing an invention in a publication, but most other nations don’t. Once the paper is published, the invention becomes unpatentable. “I’ve seen inventors lose international patent rights because of that,” he warns.

Officers Are Standing By

Andy Bluvas

While explaining the litany of risks inherent in collaboration, intellectual property experts emphasize that effective protection can be as easy as having each collaborator contact their institution’s technology transfer office as soon as the project begins. That office can then ensure appropriate allocation and documentation of collaborators’ respective intellectual property rights. The call will likely be well received. “We routinely have to educate new faculty members, [and] something we harp on is that before you put anything online you should come and talk to us first,” says Andy Bluvas, a technology commercialization officer at the Clemson University Research Foundation (CURF) in Clemson, S.C.

One of the most common collaborative activities online, sharing computer code, involves shifting legal nuances that many researchers don’t know about. In 2014, the U.S. Supreme Court ruling in Alice v. CLS Bank invalidated hundreds of software patent claims and made patenting new code much harder. The expression of ideas in software can still be subject to copyright protection, but it requires a different legal approach.

Scientists and engineers working on technologies for the “Internet of Things” (IoT) are also discovering complex interactions between patents, copyrights and product development cycles. “Usually they release the technology or the products with some sort of software inside, and then what happens is they incrementally improve it,” says Bluvas. He recommends that researchers working on those types of projects bring their attorneys aboard to keep the software and hardware designs aligned with the legal code.

Reusing Source Code

Chris Gesswein

The common programming practice of reusing source code can cause other problems. “We’ll have researchers that borrow from multiple different packages of software, and they may be open source or they may be proprietary,” says Chris Gesswein, executive director of CURF. Open source software allows such borrowing, but different open source licenses place different restrictions on how the borrowed code can be used, and proprietary software has even stricter limits. Also, use of open source code may, under the terms of the license, cause the entire software package to become subject to the license’s open source requirements. The result can be software covered by multiple conflicting licenses, making it difficult or impossible to commercialize.
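One practical safeguard is simply knowing which licenses a project already depends on before borrowing or bundling code. As a rough sketch (assuming a Python project; a real audit would also examine license texts, transitive dependencies and compatibility rules), the snippet below lists the license each installed package declares in its metadata:

```python
# A minimal license-audit sketch for a Python environment: print the license
# each installed package declares. Illustrative only; a real audit needs
# license texts, transitive dependencies and legal review.
from importlib.metadata import distributions

for dist in sorted(distributions(), key=lambda d: d.metadata.get("Name") or ""):
    name = dist.metadata.get("Name") or "unknown package"
    declared = dist.metadata.get("License") or "no license declared"
    print(f"{name}: {declared}")
```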

Academic researchers may be reluctant to involve administrators in their work, but technology transfer officers share the scientists’ priorities. “No matter what, our goal is to get [scientists’ results] out there … in peer-reviewed journals,” says Bluvas. Patent and copyright filings usually proceed faster than research journals’ publication cycles, so scientists don’t have to choose between timely publication and protecting their intellectual property.

Nonetheless, a majority of investigators don’t take advantage of the technology transfer officers’ expertise. Gesswein estimates that only 15 to 25 percent of Clemson’s research faculty interact with his office.

Who Let the Data Out?

Privacy laws present another challenge for many projects, whether researchers want to know about them or not. “I appreciate more than anybody that scientists don’t want to think about stuff like this, they just want to do the science,” says attorney Mark McCreary, chief privacy officer at Fox Rothschild. But ignorance of privacy laws can have serious consequences, especially for multinational collaborations.

Mark McCreary

McCreary points to the European Union’s recent implementation of the General Data Protection Regulation (GDPR) as an example. The law includes a controversial “right to be forgotten,” under which individuals may rescind their consent for the use of their data. Subjects could retroactively withdraw from a biomedical study, and researchers would have to delete the associated data. The regulation allows for medical research exceptions, but those exceptions are, as yet, untested, and it is unclear whether a narrow reading could lead to invalidated studies.

Failure to comply with such rules can be costly: the maximum fine for a GDPR violation is four percent of annual global revenue or 20 million euros, whichever is greater. “You’d have to really be a bad actor [to incur] something like that, but it’s there, it’s a possibility,” says McCreary.
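As a quick worked illustration of that ceiling (the function below is ours, purely for the arithmetic):

```python
# Worked arithmetic for the GDPR fine ceiling quoted above: the cap is the
# greater of 20 million euros or four percent of annual global revenue.
def gdpr_fine_ceiling(annual_global_revenue_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_global_revenue_eur)

# A firm with 1 billion euros in revenue faces a cap of 40 million euros;
# a small spin-out with 2 million euros still faces the 20-million floor.
print(gdpr_fine_ceiling(1_000_000_000))  # 40000000.0
print(gdpr_fine_ceiling(2_000_000))      # 20000000.0
```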

An Obligation to Protect Personal Data

Scientists may also have an obligation to protect the privacy of personal data, and cloud-based tools raise the risk of a breach. Hackers are unlikely to target a single project, but “when you put it into a service provider where they [have] tens of thousands of other organizations’ data, that becomes a lot bigger target, so there’s a lot more risk,” says McCreary.

Researchers developing or collecting data with IoT devices face more diffuse risks, as every device they add to the network is another potential security hole. “When you think about it from an attacker’s perspective, they’re going to go after the weakest links in your system,” says Vyas Sekar, assistant professor of electrical and computer engineering at Carnegie Mellon University in Pittsburgh, Pa.

IoT devices often fit that description. Sekar explains that many networked devices employ shoddy programming practices and receive inconsistent or nonexistent security updates. To combat those problems, he advocates delegating security to professionals in university or corporate technology departments.

While many of the specific legal and security risks of online collaboration and data collection are new, experts in the field agree that the fundamental principles aren’t. “You have security issues that come up, you have privacy issues that come up, but really a lot of the old laws still apply, it’s contract law and it’s intellectual property law, it’s just in a different venue,” says McCreary.

Also read: Imagining the Next 100 Years of Science and Technology

Big Data: Balancing Privacy and Innovation

Presented by:

Science & the City

Often cited as the “4th Industrial Revolution,” big data has the potential to transform health and healthcare by drawing medical conclusions from new and exciting sources such as electronic health records, genomic databases, and even credit card activity. In this podcast you will hear from tech, healthcare, and regulatory experts on potential paths forward that balance privacy and consumer protections while fostering innovations that could benefit everyone in our society.

This podcast was produced following a conference on this topic held in partnership between the NYU School of Medicine and The New York Academy of Sciences. It was made possible with support from Johnson & Johnson.