People tend to think of AI as a business tool or a personal assistant — there are plenty of AI applications in customer service, data visualization and the like. But we rarely consider that AI is also an excellent tool for science.
Traditionally, we’ve learned about science through observation. Science has also advanced through simulation. Both observation and simulation help scientists generate hypotheses that can then be tested with further observations. Generative modeling differs from both of these approaches. Some researchers see it as a third way of doing science, somewhere between observation and simulation — a genuinely different way to attack a problem.
Scientists agree that AI is already having a huge impact and that its role in science will only grow. Brian Nord, an astrophysicist at Fermi National Accelerator Laboratory who uses artificial neural networks to study the cosmos, believes there’s nothing a human scientist does that will be impossible to automate. “It’s a bit of a chilling thought,” he said.
The best-known generative modeling systems are “generative adversarial networks” (GANs). Given enough training data, a GAN can repair images that have damaged or missing pixels, or transform blurry photographs into sharp ones. GANs learn to infer the missing information by means of a competition (hence the term “adversarial”): One part of the network, known as the generator, generates fake data, while a second part, the discriminator, tries to distinguish fake data from real data. As the program runs, both halves get progressively better. You may have seen some of the hyper-realistic, GAN-produced “faces” that have circulated recently — images of “freakishly realistic people who don’t exist,” as one headline put it.
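The generator-versus-discriminator loop can be sketched in a toy form. This is purely an illustration, not code from any system described here: the one-dimensional “data,” the linear generator and the logistic-regression discriminator are all simplifying assumptions chosen so the adversarial updates fit in a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from a 1-D Gaussian the generator must learn to imitate.
def sample_real(n):
    return rng.normal(loc=4.0, scale=1.25, size=n)

# Generator: a linear map from noise z to "fake" data (parameters a, b).
# Discriminator: logistic regression giving P(sample is real) (parameters w, c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.01

for step in range(2000):
    z = rng.normal(size=32)
    fake = a * z + b
    real = sample_real(32)

    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # Gradients of the binary cross-entropy loss w.r.t. w and c
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) toward 1 (i.e., fool the discriminator) ---
    d_fake = sigmoid(w * fake + c)
    g_grad = (d_fake - 1) * w          # d(-log D(fake))/d(fake), per sample
    a -= lr * np.mean(g_grad * z)
    b -= lr * np.mean(g_grad)

print("generator output mean:", np.mean(a * rng.normal(size=10000) + b))
```

As both sets of parameters update, the generator’s output drifts toward the region the discriminator labels “real” — the same pressure that, at vastly larger scale, yields photorealistic faces.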
Generally, generative modeling takes sets of data and breaks each of them down into a set of basic, abstract building blocks — scientists refer to this as the data’s “latent space.” The algorithm manipulates elements of the latent space to see how this affects the original data, and this helps uncover physical processes that are at work in the system.
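The latent-space idea can be made concrete with a deliberately simple stand-in: here PCA plays the role of the learned compression, and the two hidden “physical factors,” the mixing matrix and the nudge of 3.0 are all invented for the demonstration. Real generative models learn far richer, nonlinear latent spaces, but the manipulate-and-decode step is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "observations": each row mixes two hidden physical factors
# (say, brightness and size) through a fixed linear process.
factors = rng.normal(size=(500, 2))                # hidden factors
mixing = np.array([[1.0, 0.5, 0.2, 0.0],
                   [0.0, 0.3, 0.8, 1.0]])          # how each factor shows up in data
data = factors @ mixing + 0.05 * rng.normal(size=(500, 4))

# Compress the observations into a 2-D latent space with PCA (via SVD).
mean = data.mean(axis=0)
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
latent = (data - mean) @ Vt[:2].T                  # encode: data -> latent

# Manipulate one latent coordinate, decode back to data space, and see
# which observed quantities that latent direction controls.
probe = latent[0].copy()
probe[0] += 3.0                                    # nudge latent axis 0
reconstructed = probe @ Vt[:2] + mean
baseline = latent[0] @ Vt[:2] + mean
effect = reconstructed - baseline
print("change in each observed channel:", np.round(effect, 2))
```

The printed vector shows how a single latent direction projects back onto the observed channels — the kind of probe that lets researchers ask which underlying process a latent coordinate corresponds to.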
Researchers are unleashing artificial intelligence (AI), in the form of artificial neural networks, on these data torrents. Unlike earlier attempts at AI, such “deep learning” systems don’t need to be programmed with a human expert’s knowledge. Instead, they learn on their own, often from large training data sets, until they can see patterns and spot anomalies in data sets that are far larger and messier than human beings can cope with.
AI is transforming science; it speaks to you from your smartphone and takes to the road in driverless cars. For scientists, the prospects are mostly bright: AI promises to supercharge the process of discovery.
Unlike humans, neural networks can’t explain their thinking: The computations that lead to an outcome are hidden. Their rise has therefore spawned a field some call “AI neuroscience”: an effort to open up the black box of neural networks and build confidence in the insights they yield.
Understanding the mind inside of a machine is likely to become more crucial as AI’s role in science advances.