2021 AI and Machine Learning Trends

Artificial intelligence (AI) and machine learning (ML) have become intertwined with most people's everyday lives. Available research backs this up: AI is built into 77% of the devices we use today. In a conversation with Tobi, Rasmus Rothe, founder of Merantix, a Berlin-based AI company builder, shares his take on the AI and machine learning trends to keep tabs on in 2021.

Table of Contents

- Language processing AIs will get exponentially better
- Growing popularity of biology lab automation for running experiments at scale
- Improved cancer diagnosis
- MLOps tools are going mainstream
- The amount of data needed to create intelligent algorithms is shrinking

Language processing AIs will get exponentially better

Last year, OpenAI published GPT-3, a general-purpose AI system with a "text in, text out" interface that users can try by submitting English text prompts. The API returns a text completion that attempts to match the pattern you give it. Since its launch, over 300 applications have been built on its search, text completion, conversation, and other advanced features. The Guardian produced an entire op-ed using GPT-3, with an editor's note saying that it took less time to edit than many human-written op-eds. This month, researchers in China published a language processing model ten times larger than GPT-3.

Unlike in the past, when you had to train language models on specific tasks, in the new paradigm the models crawl the entire internet and use all those texts to train the neural network. As a result, the neural network can complete all sorts of tasks, from classifying reviews to writing speeches and financial reports, even though it was never trained for those specific tasks.

Rasmus reveals that the German AI association (KI Bundesverband e.V.), which he co-founded, is pushing for a European AI-powered language processing model that is free and accessible to everyone. (OpenAI's GPT-3 is publicly unavailable and was launched in private beta.)
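
To make the "text in, text out" idea concrete, here is a minimal sketch of a few-shot prompt, assuming the openai Python client as it worked during the GPT-3 private beta; the prompt, engine name, and API key are illustrative, not from the podcast.

```python
# Minimal sketch of GPT-3's "text in, text out" interface, using the
# openai Python client as it worked around the private beta. The
# prompt, engine name, and API key are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # access required private-beta approval

# A few-shot prompt: the model continues the pattern it is given, here
# classifying review sentiment without any task-specific training.
prompt = (
    "Review: The battery dies within an hour.\n"
    "Sentiment: negative\n"
    "Review: Setup took seconds and it just works.\n"
    "Sentiment: positive\n"
    "Review: The screen cracked on day two.\n"
    "Sentiment:"
)

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine at launch
    prompt=prompt,
    max_tokens=1,
    temperature=0.0,    # deterministic output suits classification
)
print(response.choices[0].text.strip())  # expected: "negative"
```

The same call, with a different prompt, covers speechwriting, summarization, or report drafting; only the text pattern changes.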

Growing popularity of biology lab automation for running experiments at scale

Biology is an experiment-heavy field. The latest advancements in artificial intelligence have brought in the concept of lab automation, where you can run experiments cheaply and at scale. What's more, DNA sequencing costs have been falling fast, even faster than Moore's law would predict, which makes it possible to run far more experiments using robots.

You can also sequence the DNA encoding, say, a protein and use machine learning to analyze it. From the sequence, you can predict the protein's behaviour, run experiments in the lab to see whether it behaves the way you anticipated, and then iterate. Researchers are likewise sequencing mRNA and, based on that, trying to predict which experiments to run to optimize it.
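
As an illustration of that predict-then-test loop (a toy sketch, not Merantix's actual pipeline), the code below featurizes DNA sequences as 3-mer counts and trains a model on synthetic "activity" labels; in a real pipeline, the labels would be lab measurements and the top-ranked candidates would go back to the bench.

```python
# Toy sketch of the predict-then-test loop: featurize DNA sequences
# and train a model to predict a measured property. The data here is
# entirely synthetic; real labels would come from lab experiments.
from itertools import product

import numpy as np
from sklearn.ensemble import RandomForestRegressor

BASES = "ACGT"
KMERS = ["".join(p) for p in product(BASES, repeat=3)]  # all 64 3-mers

def kmer_counts(seq: str) -> np.ndarray:
    """Represent a DNA sequence as counts of its overlapping 3-mers."""
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(seq) - 2):
        counts[seq[i : i + 3]] += 1
    return np.array(list(counts.values()), dtype=float)

rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(BASES), size=120)) for _ in range(200)]
X = np.stack([kmer_counts(s) for s in seqs])
# Fake "activity" label so the example is self-contained.
y = X[:, KMERS.index("ATG")] + rng.normal(0, 0.5, len(seqs))

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Rank new candidate sequences; the best ones would be synthesized and
# tested by robots, and the measurements fed back to retrain the model.
candidates = ["".join(rng.choice(list(BASES), size=120)) for _ in range(20)]
scores = model.predict(np.stack([kmer_counts(c) for c in candidates]))
print("most promising candidate:", int(np.argmax(scores)))
```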

Improved cancer diagnosis

Early diagnosis has been the driver behind improved survival rates for cancer patients. Histopathological images and radiology have been the tried and tested ways of detecting cancer. However, the rapid growth in cancer cases has made these methods hard to scale: today, a radiologist needs to analyze more than 12 images every minute to keep up with daily workload demands. With such pressure come misdiagnoses and burnout, which are detrimental to physician and patient alike. What's more, these tasks are labour-intensive and suffer from high inter-rater variability and low reproducibility because the methods are quite subjective.

In contrast, AI models are improving the accuracy and efficiency of cancer diagnosis, and they are increasingly detecting cancers that doctors would have missed.
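
As a rough illustration of how such models are commonly built (the episode names no specific system), here is a transfer-learning sketch that fine-tunes an ImageNet-pretrained network to classify histopathology patches as tumour or normal; the dataset path and folder layout are hypothetical.

```python
# Illustrative sketch: fine-tune a pretrained CNN as a binary
# tumour/normal classifier for histopathology patches. The directory
# "patches/train" (one subfolder per class) is a hypothetical dataset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("patches/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)       # start from ImageNet features
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: tumour / normal

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```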

MLOps tools are going mainstream

Machine learning systems are hard to build and scale, which is why MLOps has been gaining traction in recent years. To make ML systems manageable and scalable, companies like Twitter, Facebook, Airbnb, Uber, and Netflix began sharing information about their in-house ML stacks. Now, the MLOps idea is increasingly becoming an industry standard, because most of the challenges you'll encounter in machine learning are strikingly similar: they revolve around scalability and reproducibility.
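
To make the reproducibility half of that concrete, here is a small sketch using MLflow (an assumption for illustration, not a tool named in the episode) to record an experiment's parameters, metric, and model so a run can be rerun and compared later.

```python
# Hedged sketch of experiment tracking, one core MLOps practice:
# log the inputs and outputs of a training run so it is reproducible
# and comparable. MLflow is an illustrative choice of tool.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline"):
    params = {"n_estimators": 100, "learning_rate": 0.1}
    mlflow.log_params(params)  # record the hyperparameters used

    model = GradientBoostingClassifier(**params).fit(X_train, y_train)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")  # store the model artifact
```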

The amount of data needed to create intelligent algorithms is shrinking

Traditionally, AI models require tons of data to become intelligent. One extensive area of development in AI is reducing this data requirement while still achieving sufficiently intelligent models. If a human being can learn how to drive after 30 hours of training, why can't an AI model? The gap between how humans learn and how intelligent AIs are built is still big, but it has shrunk year after year. People are increasingly building algorithms that learn from much less data and deliver the same performance as models trained on tons of it. The gross overgeneralization that more data always gives better results is misguided.

In this podcast, Tobi and Rasmus discuss several other things, including:

- Rasmus' nerd journey to Merantix
- Wife acceptance factor with smart home stuff
- The bottlenecks of machine learning solutions
- Stupidity detector in autonomous driving
- Rasmus' cloud provider recommendations for startups
- The startups under Merantix

Join alphalist and connect with top CTOs across Europe, the U.S., and beyond.

Tobias Schlottke

CTO @ saas.group

Tobias Schlottke is the founder of alphalist, a community dedicated to CTOs, and the host of the alphalist CTO podcast. Currently serving as the CTO of saas.group, he brings extensive experience in technology leadership. Previously, Tobias was the Founding CTO of OMR, notable for hosting Germany's largest marketing conference. He also founded the adtech lab (acquired by Zalando) and the performance marketing company adyard, which was sold to Ligatus/Gruner + Jahr in 2010.