Big Data and the Future of R&D Management

The rise of big data and big data analytics will have significant implications for R&D and innovation management in the next decade.
Although big data offers intriguing possibilities for new approaches to R&D, new business models, and even new markets, for many, as Stephen Hoover has noted, "Big data isn't a solution--it's a problem" (Hoover 2015). The many are those who have not yet begun the journey to understand what big data is and how it can be used to change the way we do R&D or to generate value.
R&D organizations are increasingly moving into the realm of big data--some driven by advancements in technology that have increased the amount of data gathered in a single experiment to a level that requires special handling. However, few have considered what the changes in this data landscape will mean for R&D and R&D management. The goal of this study is to understand how big data will affect R&D management and R&D activities in the future.
Analytics: Answering Big Data Questions
Big data is a term that is widely used but has no commonly accepted definition. It is most commonly defined in terms of five Vs: volume, variety, velocity, value, and veracity. In other words, truly big data is large in volume, varied in type and source, and accessible quickly once it is generated--increasingly, these days, in real time; it may vary in composition and meaning over time, and it may or may not be trustworthy. The original definition, coined by Laney (2001), included just the first three characteristics; two additional terms--value and veracity--were added as it became evident that the potential of big data is in the value of the information and the need to ensure its integrity (Marr 2014). There are also a number of technical terms that are part of the conversation around big data, including open data (data that is typically in the public domain and readily available, such as government data), found data (data originally generated for a specific purpose that can be analyzed for a different purpose--for example, analyzing credit card transactions to discern consumer purchasing patterns), and datafication (the trend toward capturing more aspects of social and physical phenomena as digital data), as well as many others (Alexander, Blackburn, and Legan 2015).
Large quantities of published information are available on technical developments that support the creation, acquisition, storage, and analysis of big data. Another set of literature looks at its operational impact on companies, primarily focusing on customer-facing functions such as marketing and customer service. Publications on the impact of big data on research, to the extent that they exist, generally come from government research institutions and academia. Very little has been published about private-sector applications of big data to R&D, although this does not mean the private sector is not engaging with big data.
One problem with the term "big data" is that it does not describe a particular technology or approach. As Bill Pike of the Pacific Northwest National Laboratory told the research group, "Big data is data of sufficient size and complexity to challenge contemporary analytical techniques." For an organization, big data is any data of sufficient size and complexity to challenge the analytical techniques and technologies available to it. Thus, big data will mean different things to different organizations. For those accustomed to working with massive data sets, big data implies a scale beyond state-of-the-art data management technologies; for others, big data may be any data set that cannot be handled by Microsoft Excel. The more useful approach to defining big data is to look at the characteristics of what we call big data--the five Vs--and how those characteristics relate to the ways organizations are accustomed to accessing, processing, and using data.
In this context, big data is the problem to the extent that it challenges organizations' ability to absorb and harvest value from their data streams. The solution to that problem is advanced analytics--techniques such as machine learning, unstructured textual analysis, and other tools that can glean insights from large, complex data sets. Advanced analytics identify latent relationships between variables, uncovering patterns that are not discernable by humans alone. This interaction between data, models, and analysis is the core of the promise of big data for applications in R&D. For instance, artificial intelligence and machine learning systems are likely to play growing roles in both project and portfolio management, helping R&D leaders make smarter decisions and improving both the execution and value proposition of R&D (Farrington and Crews 2013).
Analytics may take a variety of forms, but they all have the same goal--to glean insights from raw data. Those insights can then be used to create predictive models organizations can apply to business processes and other elements, helping them to achieve business objectives. Robinson, Levis, and Bennet (2010) describe three types of analytics:
* Descriptive analytics use data to find out what has happened in the past.
* Predictive analytics use data to find out what might happen in the future (forecasting and estimation).
* Prescriptive analytics use data to identify the courses of action that are likely to produce the best outcomes under given conditions.
Descriptive analytics is comparable to traditional business intelligence--the compilation of statistics and major findings about past activities and conditions in a given time period (Delen 2014). Predictive analytics applies new techniques to traditional forecasting to create sophisticated models; modern approaches to predictive analytics use advanced statistical methods and machine learning algorithms to isolate and examine thousands of variables simultaneously in the context of a predictive model. This kind of analysis allows the interactions of many variables to be observed and the ones driving a potential result to be identified. The work of quantitative hedge funds in modeling the stock market is one example of predictive analytics at work.
Prescriptive analytics applies techniques such as optimization, simulation, and heuristics-based decision making to map the potential consequences of alternative strategies or courses of action. This type of analysis provides an understanding of the trade-offs between different options; it may improve the quality of decisions by integrating more factors and more complex interactions than humans are capable of processing unaided. One area where prescriptive analytics is making inroads is in computer-assisted diagnosis. In these systems, a physician enters observations about a patient into an engine that then scans the medical literature to identify diseases or disorders that might be generating those symptoms. The system can then analyze all of those inputs to identify the procedures or treatments most likely to be effective in that particular case (Haftner 2012).
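The three analytics types can be illustrated with a toy example. The sketch below is ours, not from any source cited here: it summarizes a hypothetical series of monthly defect counts (descriptive), extrapolates a naive linear trend (predictive), and compares two invented courses of action (prescriptive). All numbers are invented for illustration.

```python
# Illustrative sketch of descriptive, predictive, and prescriptive analytics
# on a toy data set. All figures are invented.
from statistics import mean

history = [12, 10, 9, 9, 7, 6]  # hypothetical monthly defect counts

# Descriptive: what happened in the past?
summary = {"mean": mean(history), "min": min(history), "max": max(history)}

# Predictive: what might happen next? (naive least-squares trend line)
n = len(history)
xs = range(n)
slope = (
    (sum(x * y for x, y in zip(xs, history)) - n * mean(xs) * mean(history))
    / (sum(x * x for x in xs) - n * mean(xs) ** 2)
)
intercept = mean(history) - slope * mean(xs)
forecast = intercept + slope * n  # projected count for the next month

# Prescriptive: which course of action looks best? Compare projected outcomes.
actions = {"do nothing": forecast, "extra QA pass": forecast * 0.7}
best = min(actions, key=actions.get)  # action with the lowest projected defects
```

Real predictive and prescriptive systems replace the naive trend fit and the hard-coded action effects with machine learning models and simulation, but the division of labor among the three types is the same.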
Big data analytics will inevitably have an impact on management. As big data gains a foothold, management decisions based purely on intuition or experience are increasingly being regarded as suspect (Economist Intelligence Unit 2012). LaValle and colleagues (2010) report that, in their study, "top-performing organizations use analytics five times more than lower performers." The challenge to management is that decisions about strategy and operations become more complex as the complexity of the data that must be considered increases (Zhao, Fan, and Hu 2014). The question is whether R&D management is prepared to cope with this changing environment.
The rise of big data, and big data tools, then, will present challenges and opportunities across the full range of R&D management responsibilities and activities. Going forward, it will increasingly inform innovation and the process a company uses to execute innovation, enable new approaches to R&D, and transform the practice of R&D. Some of these changes--particularly those with regard to how big data informs or enables innovation--may be largely incremental, driving toward accelerating R&D while driving down cost and risks. Larger challenges lie in the potential for big data and analytics to disrupt or transform current business models, for instance, as nontraditional players find ways to use big data to dislodge established market leaders, or established leaders radically reshape their structures and processes to make better use of big data and potentially remake their businesses, rendering competitors' models obsolete.
The design of this project was based on broad input from IRI members at workshops created to elicit areas of interest and early questions. The primary input came from 22 attendees at a workshop held during IRI's 2015 Winter ROR Meeting. That workshop uncovered a wide range of questions, reflecting a broad range of understanding of and involvement with big data in participants' operations. Some members reported explorations of big data involving large research programs and dedicated groups; other organizations were still trying to determine the meaning of big data and why they should be concerned with it. With this diversity of understanding and utilization of big data, the first challenge was to develop a common understanding of what big data is and identify an approach to frame the discussion of its likely impacts on R&D.
That approach emerged from early literature analysis and the IRI's 2015 Winter ROR Meeting workshop as we worked to define the scope of the Digitalization project. We identified three kinds of impacts--inform, enable, and transform/disrupt--across four key elements of R&D operations--strategy, people, technology, and process (Table 1). The impacts align with innovation frameworks, which typically define projects in terms of incremental, adjacent, and transformational outcomes. The elements of people, process, and technology have long been used in change management (Maltaverne 2015); strategy was added because of its importance to R&D.
To begin to answer the questions raised by this way of thinking, we looked to interviews with thought leaders, further review of the literature, and requests for examples from attendees at IRI meetings. Ultimately, we hoped to gather a set of examples and cases that would help demonstrate how big data is being used now and illuminate how it is likely to change R&D practices going forward.
We began the study by interviewing eight thought leaders who are recognized by IRI members as leaders in understanding and using big data, to help gain an understanding of what big data applications look like and how big data is likely to develop in the future. The hour-long interviews were recorded and summarized, then reviewed by the group to extract key points. These interviews provided insights that helped refine the framework and structure our questions for the case studies.
We then began to gather examples of the application of big data in R&D that we could align to the framework. A literature survey identified nine case studies, and we interviewed five IRI members who had offered information about applications of big data they had implemented in their organizations. Additional examples were collected from 71 participants in a workshop held at the 2015 IRI Member Summit, 109 participants in a workshop at the 2016 IRI Annual Meeting, and 150 participants in three workshops held at the 2016 IRI Member Summit. In each of these workshops, we talked through the framework, then asked participants to work in groups to describe examples that fit it. In the first two workshops, we discussed the examples participants brought up and placed them into the framework in a full-group discussion. The IRI case studies were identified from these examples.
In the 2016 IRI Member Summit workshop, we took a different approach. In this iteration, we asked participants to talk about both how they saw R&D being affected and where they had seen these impacts. This final session, which included 150 participants, generated a total of 237 ideas, demonstrating participants' recognition of the likely impact of big data for R&D.
After the workshops were complete, we reviewed each example--including both literature cases and examples collected in workshops--and placed it in the framework where it best fit; some examples were placed in more than one area of the framework. The final evaluation was organized by industry segments because we noted a broad distribution of knowledge and understanding of big data within the IRI members and we recognized an uneven impact of big data across industries, with some segments clearly ahead of others. We organized these data into eight categories: Industrial Manufacturing, Consumer Goods, Food & Beverage, High Tech, Energy, Chemicals, Health Care & Pharmaceuticals, and Government. We then classified each of these industry segments based on whether we identified many examples (4+), some examples (2-3), or one or no examples (0-1) of big data applications for R&D in that segment (Table 2).
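The many/some/few classification described above reduces to a simple binning rule over example counts. The counts below are invented placeholders, not the study's actual tallies:

```python
# Sketch of the study's binning rule: 4+ examples per segment is "many",
# 2-3 is "some", 0-1 is "one or none". Counts here are invented.
example_counts = {
    "High Tech": 7, "Government": 5, "Chemicals": 3,
    "Consumer Goods": 2, "Energy": 1, "Food & Beverage": 0,
}

def classify(count: int) -> str:
    if count >= 4:
        return "many examples (4+)"
    if count >= 2:
        return "some examples (2-3)"
    return "one or no examples (0-1)"

segment_classes = {seg: classify(n) for seg, n in example_counts.items()}
```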
The thought leader interviews were completed early in our efforts and helped to refine our framework. These interviews revealed several key themes. One was that every organization has a large amount of data that is being underutilized; big data analytics can pull value from this data. Another point that several interviewees highlighted was that the human capital required to deploy big data solutions effectively is different than it was in the past; today's data scientists require both scientific and data analysis skills. These interviews also pointed to the reality that some segments of industry are more affected than others. Finally, all of our early-stage interviewees expressed the belief that big data analytics will continue to develop and will become an accepted cost of doing business in the future.
The examples observed in the literature were quite diverse, including analysis of large data sets, cheminformatics (Bunger 2015), advanced analytics using approaches such as machine learning (Li 2011) and artificial intelligence (Wigley et al. 2016), pattern recognition, image analysis, text analytics (Markham, Kowolenko, and Michaelis 2015), virtual experimentation and simulation, forecasting (Huang et al. 2015), and bioinformatics and genomics (Stevens 2013). Many of these examples described applications that are quite complex.
In addition to providing examples for analysis, the workshops offered a sense of participants' understanding of and feelings about big data. These discussions made very apparent the diversity of understanding of big data. Often, statements or questions would be about the technology enabling big data rather than the application and impact of big data on R&D. In the small-group discussions, participants found it relatively easy to identify examples of big data informing R&D, more difficult to identify examples of big data enabling new approaches to R&D, and challenging to identify examples of big data disrupting or transforming R&D.
The examples we collected clearly demonstrate that all industry segments are being informed by big data and its uses in strategy, people, technology, and process integration. However, within a given segment, there can be a broad range of understanding and execution. While there are many examples of big data enabling new R&D approaches across industry segments, government and the more technology-focused industry segments are ahead in this regard. This same group leads in the use of big data to transform or disrupt R&D.
Insights provided by big data can inform both the kinds of innovations an organization pursues and the process it uses to produce new products and services. Big data can contribute to opportunity assessment, project selection, and even identification of potentially fruitful incremental product improvements.
For instance, Eastman Chemical Company engaged in a collaboration with North Carolina State University to apply big data to gain insight into 3D printing technology and the market environment. The project collected consumer responses to and attitudes toward relevant Eastman products and competitors' products from social media and used unstructured text analytics to identify consumer concerns and needs. Ultimately, the analysis highlighted the environmental impact of the products as a key consumer concern. Thus, big data delivered rapid opportunity assessment and identified a lucrative, underserved market space that could be addressed by Eastman's capabilities.
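The unstructured text analytics used in projects like Eastman's can be sketched, in greatly simplified form, as term-frequency mining over social media posts. The posts, stopword list, and vocabulary below are invented for illustration; real projects use full NLP pipelines rather than bare word counts.

```python
# Greatly simplified sketch of mining social media text for recurring
# consumer concerns via term frequency. All posts are invented.
from collections import Counter
import re

posts = [
    "Is this 3D printing filament recyclable? Worried about plastic waste.",
    "Love the print quality but what is the environmental impact?",
    "Too much waste from failed prints - any biodegradable options?",
]

STOPWORDS = {"is", "this", "the", "but", "what", "any", "too", "much",
             "from", "about"}

# Tokenize, lowercase, drop short tokens and stopwords, then count.
tokens = [
    w for post in posts
    for w in re.findall(r"[a-z]{2,}", post.lower())
    if w not in STOPWORDS
]
top_terms = Counter(tokens).most_common(5)
```

In these invented posts, "waste" surfaces as the most frequent concern term, a crude analogue of the environmental-impact signal the Eastman analysis identified.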
Eastman's use of big data spanned the boundary between marketing and R&D; consumer goods companies have also invested in big data capabilities in sales and marketing to provide leads and insights through, for example, monitoring social media feeds. These data, combined with data from consumer calls and external databases such as patents and scientific literature, generate insights for R&D in the form of recommendations for product improvements.
For Eastman and consumer goods companies, big data provides both marketing and innovation insight. In industrial manufacturing, big data also plays more than one role--both providing a customer service and feeding back into companies' innovation plans. The increasingly common incorporation of sensors and the Internet of Things (IoT) into industrial products provides performance data that companies use to support the provision of services to existing customers. That performance data can also help identify opportunities to improve product performance, heighten efficiency, or fill new customer needs. In other words, even as data supports an existing product, it gives R&D information about what the next generation of projects in the portfolio should look like.
If big data is to inform R&D, new skills and competencies will be needed. The thought leaders we interviewed noted that analysts need to be familiar both with data analytics and with the underlying business or research question being addressed, calling to mind the pi-shaped skill set described by Alexander, Blackburn, and Legan (2015). These observations, our interviewees said, point to a need for changes in both hiring and training in R&D organizations and highlight the importance of organizational willingness to invest in the infrastructure and human capital needed to support the use of big data.
Big data can enable more efficient and effective innovation by increasing the ability of researchers to obtain needed information, allowing faster iteration on designs and supporting virtual design and experimentation. Increasingly, software is a key to lab management, linking experimental results with search and analytics tools, and automating experimental design can save thousands of dollars in time and materials (Bunger 2015). Big data can also enable R&D to respond to unexpected events before they become crises by providing early warning of an emerging issue.
The most common application in this domain is in literature search. There are numerous big data tools that search across all literature and relevant databases and even internal document repositories to identify information that is pertinent to a researcher-defined query. For example, Meta.com offers a product that helps researchers learn from the vast amount of research being produced around the world each day.
One basic use of big data tools to enable R&D is to manage the information needed to support innovation. For instance, consulting firm Decernis maintains a very large database of worldwide regulations pertaining to food, cosmetics, over-the-counter pharmaceuticals, medical devices, packaging, and more, with all the raw materials used in these areas catalogued. The data are translated into 40 languages. This database permits the R&D organizations that use the firm's service to formulate products with greater confidence that the products will be accepted by regulatory agencies in the target market; it also provides insight into potential regulatory issues before they cause operational difficulties.
Another challenge in R&D management is managing unexpected events that impact a company's products. One consumer goods company we spoke to monitors social media streams when a new product is launched, using analytics to gauge consumer reaction and monitor for unexpected issues. In one case, a packaging issue was identified by comments in social media; the company's R&D organization was able to create a correction and put it in place before any complaints were received in the company's call center. IBM's Watson has also been used to monitor social media streams for product "scares" or potential recalls.
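Post-launch monitoring of the sort just described ultimately reduces to an alerting rule over a stream of mentions. The threshold, multiplier, and function below are hypothetical stand-ins, not any company's actual system:

```python
# Hypothetical sketch of post-launch social media monitoring: flag an
# emerging issue when the share of negative mentions in a time window
# far exceeds a historical baseline. All parameters are invented.
BASELINE_NEGATIVE_RATE = 0.05   # assumed historical share of negative posts
ALERT_MULTIPLIER = 3            # alert when the rate triples the baseline

def should_alert(negative: int, total: int) -> bool:
    """Return True if the negative-mention rate warrants escalation."""
    if total == 0:
        return False
    return (negative / total) > BASELINE_NEGATIVE_RATE * ALERT_MULTIPLIER

# e.g., 24 negative posts out of 120 mentions in the hour after launch
launch_hour_alert = should_alert(24, 120)
```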
We observe from those who have embraced big data that, for it to enable R&D, management must be looking for new, more efficient ways to carry out research and be willing to invest both the time to learn these tools and the funds to purchase them. R&D management must also understand the change management required when these new tools and techniques are introduced to the R&D organization. Learning organizations will have an advantage, as they are quicker to absorb new knowledge and embrace new ways of working.
As big data increasingly enables new approaches to R&D, and new business models that change the market itself, R&D leaders must consider how big data developments will impact the future of their organizations. By driving down the transaction costs involved in innovation, big data can lessen many of the traditional advantages enjoyed by large R&D organizations. Eventually, organizations will have to embrace these tools--and transform themselves in the process--in order to keep up, or face the risk of being disrupted by competitors who deploy big data to drive new product development, streamline R&D to get to market faster, and move into new markets.
Some industries are already seeing these forces in play, forcing organizations to look not only at what work is being done but also at how that work is structured. One interviewee told us that the US intelligence community has responded by moving from a hierarchical organizational model to a network model. By aggregating large central data repositories, accessible through a secure network using a standardized suite of tools, intelligence agencies can perform multiple analyses simultaneously and collaborate organically, instead of routing all analytical work to one organization. This approach allows the community to process more data and exploit its insights more efficiently.
Big data is also changing the way organizations pursue open innovation. In an approach pioneered by Procter & Gamble in the early 2000s, companies form open networks of individual and organizational collaborators that share information, gleaning insights from widely distributed resources that can then be applied to R&D (Kastelle 2012; Ozkan 2015). This approach allowed P&G to integrate external resources into its innovation process; ultimately, the company was able to streamline its innovation infrastructure and reduce R&D spending. However, to accomplish this, P&G had to develop ways to manage and screen the flow of ideas and knowledge. This is an example of a transformation and disruption of R&D through the collection of ideas from a large group of external contributors, supported by a big data analytics solution.
Other organizations are deploying big data to support approaches to R&D that would not have been feasible without big data and analytics. DARPA's Big Mechanism program, for instance, seeks to accelerate cancer research by leveraging the entire research literature (Cohen 2015). The project is using big data tools to "read" every scientific article related to cancer, extract all instances suggesting a causal pathway, assemble those instances in context to create large-scale causal models (signaling networks) and derive new hypotheses about cancer mechanisms, and test those hypotheses in virtual experiments. Although the program is focused on cancer biology, the overarching goal is to develop technologies to support a new kind of science, one in which research is integrated more or less immediately--automatically or semi-automatically--into causal, explanatory models of unprecedented completeness and consistency.
Case Study: Big Data in the Pharmaceutical Industry
The pharmaceutical industry is one of the sectors where big data is having the most impact on R&D; our data set included multiple articles and case study examples from pharma. In part, this is driven by the nature of pharmaceutical R&D. The industry relies on clinical trials involving thousands of patients; often, those trials document inherently variable responses to a given compound because of genetic and physiological diversity across the patient population. These variations can obscure the true outcome of a trial and make it difficult to map a new drug's actual effects. To deal with this challenge, pharma companies have developed very sophisticated data analysis capabilities.
However, the industry has not definitively solved the big data problem. Traditional pharmaceutical research has become increasingly less productive over the past two decades, at least in part because of a "lack of data or lack of appropriate analysis of the available data" (Tormay 2015, 88). At the same time, certain kinds of data have rapidly gone from being unavailable or difficult to access to being overabundant, largely driven by advances in genome sequencing technology and a concomitant reduction in the costs associated with gene sequencing. The Human Genome Project announced its first draft sequence in 2000 and its first finished genome in 2003; this first effort took more than 10 years and cost approximately $2.7 billion with 20 different institutions collaborating (National Human Genome Research Institute 2003). Today, a human genome can be sequenced in a matter of hours for around $1,000; one high-throughput sequencer can deliver 400 billion base pairs per day and up to 12 human genome sequences and 1.5 terabytes of data per 3.5-day run (Illumina 2015), providing access to larger amounts of genomic data faster than ever before--and increasing the computing power needed to gather insight from that data.
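As a rough check, the throughput figures cited above are internally consistent. In the sketch below, the genome size and 30x coverage depth are our assumptions, not figures from the article:

```python
# Back-of-envelope check of the sequencing throughput cited above
# (Illumina 2015). Genome size and coverage depth are our assumptions.
GBP_PER_DAY = 400              # billions of base pairs per sequencer per day
RUN_DAYS = 3.5                 # length of one run
run_output_gbp = GBP_PER_DAY * RUN_DAYS  # ~1,400 billion bp per run

HUMAN_GENOME_GBP = 3.2         # approximate haploid human genome, billions of bp
COVERAGE = 30                  # typical whole-genome sequencing depth (assumed)
genomes_per_run = run_output_gbp / (HUMAN_GENOME_GBP * COVERAGE)  # ~14.6
```

At an assumed 30x coverage, one run's output supports roughly 14 whole genomes, in line with the "up to 12 human genome sequences" per run cited above.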
These capabilities are rapidly informing, enabling, and transforming pharmaceutical R&D in a number of ways:
* Informing. The power to rapidly generate genetic data has been harnessed in the UK's 100,000 Genomes Project, which is sequencing entire genomes from a diverse set of subjects, including patients with rare diseases. The resulting knowledge and insight should help clinicians to improve diagnosis and outcomes (Genomics England n.d.). This is but one project that uses such approaches. The resulting torrent of data, when linked with information on medical conditions and disease states, will provide insights to identify new potential drug targets for a host of once-intractable conditions.
* Enabling. In the typical drug development process, once new targets for treatment have been identified, a researcher looks for a compound that can interact with the target in a desirable way, historically through wet chemistry and biological screening. More and more, however, this process is moving to virtual screening, in which computer models examine millions of compounds for potential interaction with targets and identify the most promising ones. Only a small subset undergoes traditional biological screening (Storrs 2015). This approach allows more potential molecules to be identified more quickly and at lower cost, bringing drugs to clinical trial and eventually to market more quickly.
* Transforming and disrupting. Extrapolating only a little, it is not hard to see how the revolution in the availability of genomic data and data analysis tools could change the nature of pharmaceutical discovery by permitting the identification of compounds with high efficacy in targeted genetic groups, even if that efficacy cannot be distinguished from a placebo when compared across the general population. Bernie Meyerson, Chief Innovation Officer at IBM and one of the preeminent thinkers on big data, suggested in his interview with us that health care is where big data will have the greatest societal impact (at least in the United States) and that the opportunities presented by the application of big data in pharmaceuticals will ultimately be a contributor to that outcome.
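The virtual screening step described under "Enabling" can be sketched as a score-and-filter pass over a compound library. The library names, random stand-in scores, and cutoff below are invented; real pipelines compute scores with molecular docking or machine learning models.

```python
# Illustrative virtual-screening sketch: rank a compound library by a
# computed affinity score and pass only the top slice to wet-lab screening.
# Compounds and scores here are invented stand-ins for docking results.
import random

random.seed(42)
library = [f"CMPD-{i:06d}" for i in range(100_000)]   # virtual compound library
scores = {c: random.random() for c in library}        # stand-in affinity scores

TOP_FRACTION = 0.001  # only ~0.1% advances to biological screening
n_hits = int(len(library) * TOP_FRACTION)
hits = sorted(library, key=scores.get, reverse=True)[:n_hits]
```

The economics follow directly: only the 100 best-scoring candidates out of 100,000 incur the cost of traditional biological screening.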
Big data will profoundly affect R&D, changing both what innovation looks like and how it is managed. We are already seeing this impact. Although R&D has not generally been at the forefront of big data applications, companies are starting to exploit these capabilities. GE's heavy investment in data analytics for its aircraft engine unit and other businesses (Winig 2016) is one manifestation of that trend. Looking at the evolution in companies like GE, which are early adopters of big data approaches, can give some sense of how the future might unfold for R&D in all industries.
We believe that the framework we developed for this study can be used as a guide in considering the impact big data is likely to have in a given industry. Further development of this framework could lead to a maturity model to define the progression of R&D organizations in building the big data capabilities they will need in the future.
The authors gratefully acknowledge the active participation of members of the working group in conducting the research and formulating the opinions presented in this article, including Jennifer Bennett (Phillips 66), Rob Fuchsteiner (Ingersoll Rand), Lou Reyes (J. M. Smucker), Mark Zhuravel (TCS), Daniel Toton (RTI), and Travis Gray (Archer Daniels Midland).
IRI Research Profile
Big Data and the Future of R&D Management
Exploring big data and its implications for R&D management.
Goal: To develop a common understanding of what big data is and explore how it will impact R&D management and activities in the next decade.
Co-Chairs: Jeffrey Alexander (RTI International), Michael Blackburn (Cargill), David Legan
Subject Matter Expert: Diego Klabjan (Northwestern University)
For more information, contact Mike Blackburn at firstname.lastname@example.org.
Alexander, J., Blackburn, M., and Legan, D. 2015. A Primer on Big Data for Innovation. Arlington, VA: Industrial Research Institute. http://www.iriweb.org/sites/default/files/Big%20 Data%20Primer_0.pdf
Bunger, M. 2015. Big data and analytics in chemicals: From cheminformatics and LIMS to Launch. State of the Market Report. Lux Research, December 29. https://members. luxresearchinc.com/research/report/18362
Cohen, P. 2015. DARPA's Big Mechanism Program. Physical Biology 12(4), July 15.
Delen, D. 2014. Real-World Data Mining: Applied Business Analytics and Decision Making. Upper Saddle River, NJ: Pearson FT Press.
Economist Intelligence Unit. 2012. The deciding factor: Big data and decision-making. Capgemini Consulting, June 4. https://www.capgemini.com/resource-file-access/resource/ pdf/The_Deciding_Factor_Big_Data_Decision_Making.pdf
Farrington, T., and Crews, C. 2013. The IRI2038 scenarios: Four views of the future. Research-Technology Management 56(6):23-32.
Genomics England, n.d. About Genomics England, https:// www.genomicsengland.co.uk/about-genomics-england/
Haftner, K. 2012. For second opinion, consult a computer? New York Times, December 3. http://www.nytimes.com/ 2012/12/04/health/quest-to-eliminate-diagnostic-lapses.html
Hoover, S. 2015. Disruptive Technologies in Manufacturing. Presentation given at the IRI Annual Meeting, April 28, 2015, Seattle, WA.
Huang, Y., Schuehle, J., Porter, A. L., and Youtie, J. 2015. A systematic method to create search strategies for emerging technologies based on the web of science: Illustrated for 'Big Data'. Scientometrics 105(3):2005-22.
Illumina. 2015. HiSeq 3000/HiSeq 4000 sequencing systems. Specification sheet: Sequencing. http://www.illumina.com/content/dam/illumina-marketing/documents/products/datasheets/hiseq-3000-4000-specification-sheet-770-2014-057.pdf
Kastelle, T. 2012. Procter & Gamble--Using open innovation to become a world class innovator. The Discipline of Innovation, May 30. http://timkastelle.org/blog/2012/05/procter-gamble-using-open-innovation-to-become-a-world-class-innovator/
Laney, D. 2001. 3D data management: Controlling data volume, velocity, and variety. Application Delivery Strategies, File 949. Meta Group, February 6. https://blogs.gartner.com/doug-laney/files/2012/01/ad949-3D-Data-Management-Controlling-Data-Volume-Velocity-and-Variety.pdf
LaValle, S., Lesser, E., Shockley, R., Hopkins, M. S., and Kruschwitz, N. 2010. Big data, analytics and the path from insights to value. MIT Sloan Management Review, December 21. http://sloanreview.mit.edu/article/big-data-analytics-and-the-path-from-insights-to-value/
Li, G.-Z. 2011. Special issue on massive data processing by using machine learning. International Journal of General Systems 40(4):351-54. doi:10.1080/03081079.2010.530025
Maltaverne, B. 2015. People, process and technology: The 3 crucial ingredients for a successful change management. Pool4Tool Procurement, September 14. https://blog.pool4tool.com/procurementworld/people-process-and-technology-the-3-crucial-ingredients-for-a-successful-change-management
Markham, S. K., Kowolenko, M., and Michaelis, T. L. 2015. Unstructured text analytics to support new product development decisions. Research-Technology Management 58(2):30-39.
Marr, B. 2014. Big data: The 5 Vs everyone must know. LinkedIn Pulse, March 6. https://www.linkedin.com/pulse/20140306073407-64875646-big-data-the-5-vs-everyone-must-know
National Human Genome Research Institute. 2003. The Human Genome Project completion: Frequently asked questions. https://www.genome.gov/11006943/
Ozkan, N. N. 2015. An example of open innovation: P&G. Procedia--Social and Behavioral Sciences 195:1496-502.
Robinson, A., Levis, J., and Bennett, G. 2010. INFORMS News: INFORMS to officially join analytics movement. OR/MS Today 37(5). https://www.informs.org/ORMS-Today/Public-Articles/October-Volume-37-Number-5/INFORMS-News-INFORMS-to-Officially-Join-Analytics-Movement
Sharwood, S. 2015. Forget big data hype, says Gartner as it cans its hype cycle. The Register, August 21. http://www.theregister.co.uk/2015/08/21/forget_big_data_hype_says_gartner_as_it_cans_its_hype_cycle/
Stevens, H. 2013. Life Out of Sequence: A Data-Driven History of Bioinformatics. Chicago, IL: University of Chicago Press.
Storrs, C. 2015. Screening goes in silico. The Scientist, February. http://www.the-scientist.com/?articles.view/articleNo/41979/title/Screening-Goes-In-Silico/
Tormay, P. 2015. Big data in pharmaceutical R&D: Creating a sustainable R&D engine. Pharmaceutical Medicine 29(2):87-92. doi:10.1007/s40290-015-0090-x
Wigley, P. B., Everitt, P. J., van den Hengel, A., Bastian, J. W., Sooriyabandara, M. A., McDonald, G. D., Hardman, K. S., Quinlivan, C. D., Manju, P., Kuhn, C. C. N., Petersen, I. R., Luiten, A. N., Hope, J. J., Robins, N. P., and Hush, M. R. 2016. Fast machine-learning online optimization of ultra-cold-atom experiments. Scientific Reports 6, Article number 25890. https://www.nature.com/articles/srep25890
Winig, L. 2016. GE's big bet on data and analytics. MIT Sloan Management Review, February 18. https://sloanreview.mit.edu/case-study/ge-big-bet-on-data-and-analytics/
Zhao, J. L., Fan, S., and Hu, D. 2014. Business challenges and research directions of management analytics in the big data era. Journal of Management Analytics 1(3):169-74. doi:10.1080/23270012.2014.968643
Michael Blackburn is the portfolio/program leader for Global Research Management at Cargill. Mike joined Cargill's corn milling business in 1980 and held multiple roles in quality assurance management and research and development management. In 2001, he became director of scientific resources, overseeing locations in the United States and Europe. He became Enterprise Architecture Modeling Lead in 2008 as part of Cargill's program to improve business processes and moved to his current role in 2011. He has a BS in biology and chemistry from Wright State University, an MS in engineering management from the University of Massachusetts, and a ThD from Christian Leadership University. email@example.com
Jeffrey Alexander is senior manager, innovation policy, at RTI International. He has more than 25 years of experience conducting in-depth analyses of high-technology markets, tracking and evaluating R&D strategies and policies, and advising national and regional governments on technology program funding and implementation. He is coauthor of Global and Local Knowledge: Glocal Transatlantic Public-Private Partnerships for Research & Technology Development (Palgrave 2006). He holds a PhD in the management of science, technology, and innovation from the George Washington University and a BA in international relations from Stanford University, where he completed the honors program in science, technology, and society. firstname.lastname@example.org
J. David Legan is an experienced technical leader with an international record of strategic project, team, and idea leadership; breakthrough innovation; client collaboration; and people development in large and small businesses. He managed the corporate microbiology laboratory and initiated a program of microbial modeling and new technology evaluation at Nabisco. At Kraft, he worked in an open innovation model with universities and other external partners to find, evaluate, and develop food safety and preservation technologies to protect consumers, especially technologies that offer alternatives to artificial preservatives. He has a BSc in applied biology from the University of Bath and a PhD in food technology from the University of Reading. email@example.com
Diego Klabjan is a professor at Northwestern University. He joined Northwestern's faculty as an assistant professor in 2001 and was promoted to full professor in 2012. His research focuses on applying machine learning in the areas of health care, transportation, and finance. He previously served as an assistant professor in the Department of Mechanical and Industrial Engineering and the Department of Civil and Environmental Engineering at the University of Illinois at Urbana-Champaign. He is the recipient of the INFORMS 2000 Transportation Science Dissertation Award and the Prešeren Award, given by the University of Ljubljana, Slovenia, for an outstanding undergraduate thesis. firstname.lastname@example.org
TABLE 2. Big data applications by industry (table not reproduced in this extract)
TABLE 1. Mapping big data's likely impacts on R&D

How will big data inform R&D/innovation?
- Strategy: How could R&D management improve through the use of big data?
- People: Who will use big data to inform R&D management, and what will they need to know?
- Technology: What big data technologies and systems will R&D leaders use to improve decision making?
- Process integration: How will R&D management practices and processes change as big data becomes pervasive?

How will big data enable new approaches to R&D/innovation?
- Strategy: What new capabilities and approaches to innovation will big data make possible?
- People: How will research teams' skills and knowledge need to change to make use of big data?
- Technology: What big data technology and systems will become part of the R&D process?
- Process integration: How will R&D activities change as big data becomes pervasive?

How will big data transform/disrupt existing approaches to R&D/innovation?
- Strategy: How can big data create/identify opportunities to disrupt markets and industries? How might competitors use big data against incumbents?
- People: Who will use big data as a tool for disruption?
- Technology: What big data technologies on the horizon will enable future disruptive opportunities?
- Process integration: What should companies do to predict and exploit disruptive opportunities presented by big data?
IRI RESEARCH
Authors: Michael Blackburn, Jeffrey Alexander, J. David Legan, Diego Klabjan
Date: September 1, 2017