Visualizing the potential impact on people’s homes before a hurricane hits can help residents prepare and decide whether to evacuate.
Scientists at the Massachusetts Institute of Technology have developed a way to generate future satellite images depicting the state of an area after a potential flood. The method combines generative artificial intelligence models with physics-based flood models to create a realistic bird’s-eye view of a region, showing where flooding is likely to occur given the strength of the approaching storm.
As a test case, the team applied the method to Houston, producing satellite images showing what certain parts of the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same areas after Harvey’s impact, as well as with AI-generated imagery that did not incorporate a physics-based flood model.
The team’s physics-enhanced method produced more realistic and accurate satellite images of future flooding, whereas the AI-only method generated images showing flooding in locations where flooding was physically impossible.
The team’s approach is a proof of concept, aimed at demonstrating that generative AI models can produce realistic and trustworthy content when combined with physics-based models. To apply the method to other regions and depict flooding from future storms, the model must be trained on more satellite images to learn what flooding looks like elsewhere.
“The idea is that one day we could use this before a hurricane to provide an additional layer of visualization for the public,” says Björn Lütjens, a postdoctoral fellow in the Massachusetts Institute of Technology’s Department of Earth, Atmospheric, and Planetary Sciences, who led the research as a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is getting people to evacuate when they’re in danger. Perhaps this could be another visualization to help strengthen that preparedness.”
To illustrate the potential of this new method, which they dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.
The researchers reported their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. MIT co-authors of the study include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.
Generative adversarial images
This new research is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.
“Providing a highly localized view of climate seems to be the most effective way to communicate scientific results,” says Newman, senior author of the study. “People relate to the local environment in their zip code and where their family and friends live. Providing local climate simulations makes it intuitive, personal and relatable.”
In this study, the authors use a conditional generative adversarial network (GAN), a type of machine learning technique that generates realistic images using two competing, or “adversarial,” neural networks. The first, “generator” network is trained on pairs of real data, such as satellite images taken before and after a hurricane. The second, “discriminator” network is then trained to distinguish between real satellite images and those synthesized by the first network.
Each network automatically improves its performance based on feedback from the other. The idea is that this adversarial push and pull should eventually produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations,” or factually incorrect features that appear in otherwise realistic images and shouldn’t be there.
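The adversarial objective described above can be sketched numerically. This is a minimal toy illustration, not the team’s implementation: the “discriminator scores” below are made-up probabilities standing in for real network outputs, and only the loss computation is shown, not training itself.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy loss over a batch of probabilities."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return float(np.mean(-(target * np.log(pred) + (1 - target) * np.log(1 - pred))))

# Toy discriminator outputs: estimated probability that each image is real.
d_on_real = np.array([0.9, 0.8, 0.95])   # real satellite images
d_on_fake = np.array([0.1, 0.2, 0.05])   # the generator's synthetic images

# The discriminator is trained to label real images 1 and fakes 0 ...
d_loss = bce(d_on_real, np.ones(3)) + bce(d_on_fake, np.zeros(3))

# ... while the generator is trained to make the discriminator label
# its fakes 1, i.e., to fool it.
g_loss = bce(d_on_fake, np.ones(3))

# Early in training the discriminator wins easily, so the generator's
# loss is large; gradient updates on each loss drive the push and pull.
```

Minimizing these two losses in alternation is what produces the “push and pull”: each update to one network changes the loss landscape of the other.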
“Hallucinations can mislead viewers,” says Lütjens. “I started wondering whether we could avoid such hallucinations, so that generative AI tools could be trusted to inform people, especially in risk-sensitive scenarios. We were wondering how we could use these generative AI models in a climate-change context, where having reliable data sources is so important.”
Flood hallucinations
In the new study, the researchers considered a risk-sensitive scenario: generative AI is tasked with creating satellite images of future flooding that are trustworthy enough to inform decisions about how to prepare and, potentially, evacuate people out of harm’s way.
Typically, policymakers can get an idea of where flooding is likely to occur from color-coded maps. These maps are the final product of a pipeline of physical models, which usually starts with a hurricane-track model that is fed into a wind model simulating regional wind patterns and strength. This is combined with a flood or storm-surge model that predicts how the winds will push nearby bodies of water onto land. A hydraulic model then maps where flooding will occur based on the region’s flood infrastructure, producing a color-coded visual map of flood elevations over a particular area.
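The pipeline of physical models described above can be sketched as a chain of functions, each consuming the previous stage’s output. Everything here is a hypothetical stand-in: real hurricane-track, wind, storm-surge, and hydraulic models are far more complex than these toy formulas.

```python
import numpy as np

def track_model(storm):
    """Predict landfall from storm state (toy: a fixed eastward drift)."""
    return {"landfall_x": storm["x"] + 5, "intensity": storm["intensity"]}

def wind_model(track, shape=(4, 4)):
    """Simulate regional wind speed, decaying with distance from landfall."""
    _, x = np.indices(shape)
    dist = np.abs(x - track["landfall_x"] % shape[1])
    return track["intensity"] / (1.0 + dist)

def surge_model(wind, coast_row=3):
    """Winds push water onto land; surge height scales with coastal wind."""
    surge = np.zeros_like(wind)
    surge[coast_row, :] = 0.1 * wind[coast_row, :]
    return surge

def hydraulic_model(surge, elevation):
    """Flood depth: water stands wherever surge exceeds local elevation."""
    return np.maximum(surge - elevation, 0.0)

storm = {"x": 0, "intensity": 50.0}
elevation = np.linspace(0, 1.5, 16).reshape(4, 4)   # toy terrain grid
flood_map = hydraulic_model(surge_model(wind_model(track_model(storm))), elevation)
# flood_map is the per-cell flood depth behind the color-coded map.
```

The chained call at the end mirrors the pipeline’s structure: track into wind, wind into surge, surge plus terrain into a flood-depth map.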
“The question is: Can we add another level to this, with satellite-imagery visualizations that are a little more tangible and emotionally engaging than a red, yellow, and blue color-coded map, while still being trustworthy?” Lütjens says.
The research team first tested how generative AI alone would generate satellite images of future floods. They trained the GAN on real satellite images taken over Houston before and after Hurricane Harvey. When they asked the generator to create new flood images of the same areas, they found that the images resembled typical satellite imagery, but closer inspection revealed hallucinations in some of them, in the form of flooding in places where flooding should never occur (for example, at higher elevations).
To reduce hallucinations and increase the trustworthiness of the AI-generated images, the researchers combined the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as the trajectory of an approaching hurricane, storm surge, and flood patterns. With this physics-enhanced method, the team generated satellite images of the Houston area that depict, pixel by pixel, the same flood extent predicted by the flood model.
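One way to enforce pixel-by-pixel agreement with a physical model is to feed the model’s flood-extent mask to the generator as an extra input channel, so generated water can only appear where physics predicts it. The sketch below illustrates that conditioning idea with a trivial stand-in for the generator; the architecture, tile sizes, and water color are all illustrative assumptions, not the authors’ design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: a pre-storm satellite tile and a flood-extent
# mask produced by a physics-based model (True = flooded, False = dry).
pre_storm = rng.random((8, 8, 3))        # toy RGB tile
flood_mask = np.zeros((8, 8), dtype=bool)
flood_mask[5:, :] = True                 # physics: only low-lying rows flood

def fake_generator(image, mask):
    """Stand-in for a conditional GAN generator. The mask is stacked
    with the image as a fourth input channel, so the network is
    conditioned on where the physical model predicts water."""
    conditioned = np.dstack([image, mask.astype(float)])   # (8, 8, 4) input
    out = image.copy()
    water_color = np.array([0.2, 0.3, 0.4])                # murky flood-water RGB
    out[mask] = water_color                                # toy "generation"
    return out, conditioned

post_storm, conditioned = fake_generator(pre_storm, flood_mask)
# Dry pixels are untouched and flooded pixels show water, so the output
# agrees with the physical model's flood extent pixel by pixel.
```

A real generator would synthesize realistic textures rather than paint a flat color, but the conditioning input is what keeps its water inside the physically predicted extent.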
“We demonstrate a tangible way to combine machine learning with physics for a risk-sensitive use case, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” says Newman. “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level. This has the potential to make a huge difference and possibly save lives.”
This research was supported in part by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.