Artificial intelligence and machine learning pioneers are rapidly extending techniques originally designed for natural language processing and translation into other domains, including critical infrastructure and the genetic language of life. That is according to the 2021 edition of the State of AI Report, authored by Nathan Benaich of Air Street Capital and angel investor Ian Hogarth.

Started in 2018, their report aims to be a comprehensive survey of trends in research, talent, industry, and politics, with predictions mixed in. The authors are tracking “182 active AI unicorns totaling $1.3 trillion of combined enterprise value” and estimate that exits by AI companies have created $2.3 trillion in enterprise value since 2010.

One of their 2020 predictions was that the attention-based transformer architecture for machine learning models would branch out from natural language processing to computer vision applications. Google made that come true with its vision transformer, known as ViT. The approach has also shown success with audio and 3D point cloud models and shows potential to grow into a general-purpose modeling tool. Transformers have also demonstrated superior performance in other areas, such as predicting chemical reactions, and the UK’s National Grid utility halved the error in its forecasts for electricity demand using a transformer-based model.
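To make the vision transformer idea more concrete, here is a minimal sketch, assuming an H x W x C image and a 16-pixel patch size, of the patch-embedding step ViT describes: the image is cut into fixed-size patches, each patch is flattened and linearly projected, and the resulting sequence stands in for word tokens. The function name and the random projection matrix below are illustrative placeholders, not the model’s learned weights.

```python
import numpy as np

def image_to_patch_tokens(image, patch_size=16, embed_dim=64):
    """Turn an (H, W, C) image into a sequence of patch embeddings,
    the way ViT prepares inputs for a standard transformer encoder.
    The projection here is random for illustration; in ViT it is learned."""
    H, W, C = image.shape
    assert H % patch_size == 0 and W % patch_size == 0
    n_h, n_w = H // patch_size, W // patch_size
    # Split into non-overlapping patches, then flatten each patch.
    patches = image.reshape(n_h, patch_size, n_w, patch_size, C)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(n_h * n_w, -1)
    projection = np.random.default_rng(0).normal(size=(patches.shape[1], embed_dim))
    return patches @ projection  # (num_patches, embed_dim) "token" sequence

tokens = image_to_patch_tokens(np.zeros((224, 224, 3)))
print(tokens.shape)  # (196, 64): 196 patch tokens, ready for attention layers
```

Once images are turned into token sequences this way, the rest of the model is essentially the same attention stack used for text, which is why the report frames ViT as transformers branching out rather than as a new architecture.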

Introduced in the 2017 paper “Attention Is All You Need,” transformers use an attention mechanism that weighs how relevant each word in a sentence is to every other word, dispensing with the step-by-step recurrence of earlier models and cutting the computing power needed for training. The Perceiver architecture from DeepMind, the AI research lab owned by Google’s parent company Alphabet, is another variation on the attention concept that has shown strong results across inputs and outputs of varying sizes and modalities, according to the report.
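As a rough illustration of that attention idea, the sketch below implements the scaled dot-product attention operation the paper defines, Attention(Q, K, V) = softmax(QKᵀ / √d_k)V, in plain NumPy. The toy four-token “sentence” and the reuse of raw embeddings as queries, keys, and values are simplifications for readability; in a trained transformer those come from learned projections.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    Each output row is a weighted average of the value vectors,
    weighted by how strongly that query attends to each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # pairwise relevance of every token to every other
    weights = softmax(scores, axis=-1)  # rows sum to 1: one attention distribution per query
    return weights @ V, weights

# Toy "sentence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output, attn = scaled_dot_product_attention(tokens, tokens, tokens)
print(attn.round(2))  # 4x4 matrix: how much each token attends to the others
```

Because every token attends to every other token in a single matrix operation, the whole sequence is processed in parallel, which is the efficiency gain described above.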

Amping up linguistic analysis

Making sense of human language is one of the toughest problems in AI, but lessons learned from linguistic analysis turn out to pay off in other realms such as computational biology and drug discovery.

As one example, researchers are “learning the language of COVID-19” to build a grammatical understanding of its genetics, showing the potential to identify possible future mutations that could produce the next threat akin to the Delta variant. This raises the possibility that future vaccines and treatments could be prepared to address those variants before they emerge, the authors suggest.

Investor dollars are flowing into AI-first biotech and drug discovery firms, most notably with the October IPO of Britain’s Exscientia at a valuation of over $3 billion. Recursion Pharmaceuticals of Utah raised $436 million in an April IPO.

Yet, for all the promise of AI in medicine, the report’s authors also note that “despite a loud call to arms and many willing participants, the ML community has had surprisingly little positive impact against COVID-19. One of the most popular problems – diagnosing coronavirus pathology from chest X-ray or chest computed tomography images using computer vision – has been a universal clinical failure.” They also caution against overstated claims about the applications of AI to domains such as radiology, noting that one study found 94% of AI systems designed to improve breast cancer screening were less accurate than a single radiologist.

Global rush for large language models and critical infrastructure

Large language models (LLMs) are proving so important that they “have become ‘nationalized,’ where every country wants their own LLM,” according to the report. These are models trained on enormous bodies of text to represent and generate a language, and the largest to date is the Chinese model, Wudao, with 1.75 trillion parameters. In general, China has emerged as the world leader in academic AI research – at the same time that U.S. universities are suffering a significant “brain drain,” according to the report.

In addition to being one of the most important fronts in AI research, linguistic understanding is one of the most fraught. The machine understanding that emerges often reveals racist and sexist biases absorbed from the human-written text the models are trained on: patterns that may accurately reflect human behavior, but not ones we want to promote. One of the recent scandals in the field was Google’s firing of Timnit Gebru, an AI researcher who says she was cut loose after raising ethical objections to the way Google was using LLMs. Alphabet/Google also quashed an effort by DeepMind to be spun off as a nonprofit research group, according to the report.

The report highlights these incidents in the context of a broader discussion of AI safety – the challenge of ensuring that AI progress is kept in alignment with human wellbeing – including worries over military applications like autonomous war-fighting machines.

These are just some of the many highlights included in the report, which was published as a 168-screen Google Slides deck. Among their predictions for the coming year are a wave of consolidation among AI semiconductor companies and the formation of a new research company focused on artificial general intelligence (the most ambitious branch of AI) that targets a vertical such as life sciences, critical infrastructure, or developer tools.
