
Google’s parent company Alphabet last week launched Isomorphic Labs, a subsidiary focused on AI-powered drug discovery. Helmed by DeepMind cofounder Demis Hassabis, Isomorphic will use AI to identify disease treatments that have so far eluded researchers, according to a blog post.

“Isomorphic Labs [is] a commercial venture with the mission to reimagine the entire drug discovery process from the ground up with an AI-first approach,” Hassabis wrote. “[Ultimately, we hope] to model and understand some of the fundamental mechanisms of life … There may be a common underlying structure between biology and information science — an isomorphic mapping between the two — hence the name of the company.”

The launch of Isomorphic underlines the pressure on corporate-backed AI labs to pursue research with commercial, as opposed to purely theoretical, applications. After racking up nearly £2 billion ($2.7 billion) in losses over previous years, DeepMind recorded a profit for the first time in 2020, notching £43.8 million ($59.14 million) on £826 million ($1.12 billion) in revenue. While the lab remains engaged in prestige projects like systems that can beat champions at StarCraft II and Go, DeepMind has in recent years turned its attention to more practical domains, like weather forecasting, materials modeling, atomic energy computation, app recommendations, and datacenter cooling optimization.

The change in priorities has reportedly fueled a power struggle within Alphabet, and it moves DeepMind further afield from its original mission of developing artificial general intelligence (AGI), or AI capable of tackling any task, in an open source fashion.

It’s not just DeepMind that’s leaning increasingly into commercialization. OpenAI, the company behind GPT-3, launched as a nonprofit in 2015 but transitioned to a “capped-profit” structure in 2019 in a bid to attract new investors. The strategy worked. In 2019, Microsoft announced it would invest $1 billion in OpenAI to jointly develop new technologies for Microsoft’s Azure cloud platform. In exchange, OpenAI agreed to license some of its intellectual property to Microsoft, which Microsoft would then package and sell to partners, and to train and run AI models on Azure as OpenAI worked to develop next-generation computing hardware.

Of course, embracing potentially more lucrative AI research directions isn’t necessarily a bad thing. Isomorphic arises from DeepMind’s work on protein structure prediction with its AlphaFold 2 system, which researchers at the University of Colorado Boulder and the University of California, San Francisco are using to study antibiotic resistance and the biology of SARS-CoV-2, the virus that causes COVID-19. However, when profit becomes the priority, important fundamental work can fall by the wayside.

“The tech industry is endangering its own future as well as progress in AI,” the CEO of SnapLogic, an enterprise app and service orchestration platform, wrote in a recent essay. “Besides nurturing tomorrow’s talent, [centers like] universities host the kind of blue-sky research that corporations are often reluctant to take on because the financial returns are unclear.”

As an example, Microsoft and Nvidia last month announced that they trained what they claim is one of the most capable language models to date, Megatron-Turing NLG. But building it didn’t come cheap. Experts peg the training cost in the millions of dollars, a total that exceeds the compute budgets of most startups, governments, nonprofits, and colleges. While the cost of basic machine learning operations has been falling over the past few years, it isn’t falling fast enough to close the gap, and techniques like network pruning prior to training are far from a solved science.
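To make the pruning reference concrete, here is a minimal sketch of magnitude-based pruning applied before training, using PyTorch’s built-in torch.nn.utils.prune utilities. The toy model and the 50% sparsity target are illustrative assumptions, not details from any system mentioned above:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy two-layer network; the layer sizes are arbitrary stand-ins.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the smallest-magnitude 50% of weights in each Linear layer
# before any training has happened, the "pruning prior to training"
# setting the paragraph refers to. Training would then proceed on the
# surviving connections only. The 50% amount is an illustrative choice.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# Report the resulting sparsity across all pruned layers.
linear = [m for m in model.modules() if isinstance(m, nn.Linear)]
zeros = sum((m.weight == 0).sum().item() for m in linear)
total = sum(m.weight.numel() for m in linear)
print(f"sparsity: {zeros / total:.0%}")  # roughly 50%
```

If most connections can be removed up front without hurting accuracy, the surviving network is far cheaper to train; the unsolved part is deciding which weights to cut before any training signal is available.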

“I think the best analogy is with some oil-rich country being able to build a very tall skyscraper,” Guy Van den Broeck, an assistant professor of computer science at UCLA, said in a previous interview with VentureBeat. “Sure, a lot of money and engineering effort goes into building these things. And you do get the ‘state of the art’ in building tall buildings. But there is no scientific advancement per se … I’m sure academics and other companies will be happy to use these [models] in downstream tasks, but I don’t think they fundamentally change progress in AI.”

In another instance of corporate ambitions run amok, Google last January released an AI model trained on over 90,000 mammograms that the company said achieved better results than human radiologists. Google claimed that the algorithm produced fewer false negatives, images that look normal but contain breast cancer, than previous work, but some clinicians, data scientists, and engineers took issue with that statement. In a rebuttal published in the journal Nature, a group of coauthors said that the lack of methods and code in Google’s research “undermines its scientific value.”

Academic investments

One paper found that corporate ties in AI research, whether funding or affiliation, doubled to 79% between the 2008–2009 and 2018–2019 periods. And from 2006 to 2014, the proportion of AI publications with a corporate-affiliated author increased from about 0% to 40%, reflecting the growing movement of researchers from academia to industry.

The solution might lie in increased investment in universities and other institutions with a greater appetite for risk. Recently, the U.S. government took a step in this direction with the National Science Foundation’s (NSF) funding of 11 new National Artificial Intelligence Research Institutes. The NSF will set aside upwards of $220 million for initiatives including the AI Institute for Foundations of Machine Learning, which will investigate theoretical challenges like neural architecture optimization, and the AI Institute for Artificial Intelligence and Fundamental Interactions, which aims to develop AI that incorporates the laws of physics. Both will also run workforce development, digital learning, outreach, and knowledge transfer programs.

This isn’t to suggest that the academic process is without flaws of its own. There’s a concentration of compute power at elite universities; AI research still has a reproducibility problem; and some researchers suggest the relentless push for progress might be causing more harm than good. A 2018 meta-analysis highlights troubling trends in machine learning scholarship, including a failure to identify the sources of empirical gains and the use of mathematics that obfuscates or impresses rather than clarifies.

Still, however it’s achieved, a greater focus on fundamental, basic AI research could lead to theoretical breakthroughs that significantly advance the state of the art. It could also promote values such as beneficence, justice, and inclusion, which strictly commercially motivated work tends to underemphasize.

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat
