George Berkeley, the Irish philosopher best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”
What about AI-generated trees? They probably won’t make a sound, but they will nevertheless be important for applications such as adapting urban flora to climate change. To that end, the new “Tree-D Fusion” system, developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University, combines AI and tree-growth models with Google’s Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models from across North America.
“We are bridging decades of forest science with modern AI capabilities,” says Sara Beery, MIT assistant professor of electrical engineering and computer science (EECS), MIT CSAIL principal investigator, and co-author of a new paper on Tree-D Fusion. “This allows us to not only identify trees in cities, but also to predict how they will grow and impact their surroundings over time. We’re not ignoring the past 30 years of research into understanding how to build these synthetic tree models. Instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities across North America, and eventually around the globe.”
Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but takes them further by generating complete 3D models from a single image. While earlier attempts at tree modeling were limited to specific neighborhoods or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the back sides of trees that aren’t visible in Street View photos.
The practical applications of this technology extend far beyond mere observation. Urban planners could use Tree-D Fusion to see into the future, anticipating where growing branches might become tangled in power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air-quality improvements. These predictive capabilities could transform urban forest management from reactive maintenance to proactive planning, the researchers say.
Trees grow in Brooklyn (and many other places)
The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. This combination helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as possible regional temperature differences and changes in groundwater access.
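The hybrid idea can be sketched in miniature: a learned model predicts a coarse crown envelope from an image, and a procedural growth rule then generates branches that are pruned wherever they would leave that envelope. The sketch below is purely illustrative; the spherical envelope and the simple branching rule are hypothetical stand-ins, not the actual Tree-D Fusion models.

```python
import math
import random

def inside_envelope(x, y, z, radius=5.0, trunk_height=3.0):
    """Hypothetical crown envelope: a sphere floating above the trunk.
    In Tree-D Fusion, the envelope would come from a deep learning model."""
    return x * x + y * y + (z - trunk_height - radius * 0.5) ** 2 <= radius ** 2

def grow(pos, direction, length, depth, branches):
    """Toy procedural branching: recursively split each segment in two,
    keeping only segments whose endpoints stay inside the envelope."""
    if depth == 0 or length < 0.2:
        return
    end = tuple(p + d * length for p, d in zip(pos, direction))
    if not inside_envelope(*end):
        return  # prune growth outside the predicted crown shape
    branches.append((pos, end))
    for _ in range(2):  # two child branches per segment
        jitter = [random.uniform(-0.5, 0.5) for _ in range(3)]
        d = [di + j for di, j in zip(direction, jitter)]
        norm = math.sqrt(sum(c * c for c in d)) or 1.0
        grow(end, tuple(c / norm for c in d), length * 0.7, depth - 1, branches)

random.seed(0)
branches = []
# Start from the trunk base, growing straight up
grow((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 3.0, 6, branches)
print(len(branches))  # number of branch segments kept inside the envelope
```

Changing the envelope function (say, a taller or narrower shape for a different genus or climate scenario) changes which branches survive, which is the rough intuition behind pairing a learned shape constraint with genus-specific procedural growth.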
As cities around the world grapple with rising temperatures, this research provides a new window into the future of urban forests. In collaboration with MIT’s Senseable City Lab, teams at Purdue University and Google are embarking on a global study to reimagine trees as living climate shields. Their digital modeling system captures the complex dance of shading patterns throughout the seasons, revealing how strategic urban forestry can transform sweltering city blocks into more naturally cool areas.
“Every time a street mapping vehicle passes through a city now, we’re not just taking a snapshot; we’re watching these urban forests evolve in real time,” Beery says. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”
AI-based tree modeling has emerged as an ally in the pursuit of environmental justice. A sister project from the Google AI for Nature team helped uncover disparities in access to green space across different socio-economic areas by mapping urban tree canopies in unprecedented detail. “We’re not just studying urban forests, we’re trying to foster more equity,” Beery says. The team is now working closely with ecologists and tree health experts to refine these models and ensure that as cities expand their green canopies, the benefits are distributed equally to all residents.
Not out of the woods
Although Tree-D Fusion marks significant “growth” in the field, trees pose unique challenges for computer vision systems. Unlike the rigid structures of buildings or vehicles, which current 3D modeling techniques handle well, trees are nature’s shape-shifters: they sway in the wind, interweave their branches with neighbors, and constantly change form as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate a tree’s future shape under different environmental conditions.
“What’s exciting about this research is that it forces us to rethink fundamental assumptions in computer vision,” Beery says. “While 3D scene understanding techniques such as photogrammetry and NeRF (neural radiance fields) excel at capturing static objects, trees demand new approaches that account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from moment to moment.”
The team’s approach of creating a rough structural envelope that approximates the shape of each tree has proven to be highly effective, but several issues remain unresolved. Perhaps the most troublesome is the “tangled tree problem.” As neighboring trees grow into each other, their branches intertwine, creating puzzles that current AI systems cannot fully solve.
The scientists see their dataset as a springboard for future innovations in computer vision, and they are already exploring applications beyond Street View imagery, considering extending their approach to other platforms such as iNaturalist and wildlife camera traps.
“This is just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University doctoral student who developed, implemented, and deployed the Tree-D Fusion algorithm. “Together with my collaborators, I envision extending the platform’s capabilities globally. Our goal is to use AI-powered insights in service of natural ecosystems: supporting biodiversity, promoting global sustainability, and ultimately benefiting the health of our entire planet.”
Beery and Lee’s co-authors are Jonathan Huang, head of AI at Scaled Foundations (formerly of Google), and four others from Purdue University: doctoral students Jaejun Li and Boshen Li, remote sensing professor and department chair Songlin Fei, assistant professor Raymond Yeh, and computer science professor and associate dean Bedrich Benes. Their work builds on efforts supported by the U.S. Department of Agriculture’s (USDA) Natural Resources Conservation Service, and is directly supported by USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.