AI Startups Promise to Help Disaster Relief and Evacuation
(Bloomberg Businessweek) -- As record wildfires raced through California’s wine country last October, devouring thousands of homes, Sonoma County sheriff’s deputies drove up and down streets with sirens blaring, warning residents over bullhorns to evacuate. Twenty-four people didn’t make it, and more than half of them were at least 70 years old. That suggests they either couldn’t hear the warnings or couldn’t leave under their own power, according to Sarah Tuneberg. “Those people did not need to die,” she says.
Tuneberg’s company, Geospiza Inc., sells artificial intelligence software that scours data to help cities find and protect their most vulnerable residents during a disaster. She says the platform can check multiple databases to guess which residents, in this case, have hearing impairments or use personal-care attendants. It then alerts emergency managers about who’s likely to need assistance. “It’s just too hard in the fog of war, given our current technology, to pull those pieces together in a timely fashion,” Tuneberg says. “That’s what we’re trying to do.”
As climate change makes weather more extreme, a new crop of startups—including Geospiza, named for the adaptable finches made famous by Charles Darwin—is trying to harness AI to save lives by predicting damage from hurricanes, wildfires, and earthquakes better and faster than humans do. Although the services remain largely unproven, cities around the U.S., many still reeling from last year’s record disasters, have begun signing up for them. Geospiza has a contract with Redmond, Wash., and has pilot agreements with Multnomah County, Ore., which includes Portland, and Jefferson County, Fla.
Another startup, One Concern Inc., assesses building characteristics, elevation, soil types, weather, and other factors to predict earthquake damage block by block and identify which buildings are most in need of reinforcement. After a quake, the company says, it can recommend where to evacuate, send first responders, and set up shelters. “You can save an order of magnitude more lives with good planning,” says co-founder Nicole Hu. The company says it plans to introduce similar services for wildfires and floods later this year.
The One Concern software is already being used in San Francisco to plan drills, says Michael Dayton, deputy director for the city’s Department of Emergency Management. Dayton says the software can predict, for example, whether and where an earthquake is likely to cause fires, based on where that earthquake strikes and how strong it is. It can also predict the safest routes for bringing aid and other supplies into the city.
In Utah, the nonprofit Field Innovation Team is experimenting with AI software that can anticipate what people in shelters will need based on the ages and health of those most likely to lose their homes. That information can guide how the shelters are designed, what help they offer, and even what kinds of donations officials solicit from the public, says founder Desiree Matel-Anderson, former chief innovation adviser at the Federal Emergency Management Agency.
AI isn’t a panacea, especially if the relevant databases aren’t kept current. “That has to be an ongoing effort,” says Mark Ghilarducci, director of the California Governor’s Office of Emergency Services. “The last thing you want to do is make decisions on old or bad information.” Given the power of such systems, privacy is another concern. Just as important, the software can often be a black box that’s difficult to hold accountable, says Sarah Miller, chair of the Emerging Technology Caucus for the International Association of Emergency Managers. “If the AI somehow accidentally decides that those who have higher incomes are more worthy of saving, then it might redirect resources accordingly, and we might not know that,” she says.
Hu says One Concern will never sell or share any personal information, while Tuneberg says Geospiza ensures that such data are available only to people with “an appropriate and valid need to know.” She says that in the unlikely event AI inadvertently privileges some groups over others, as it can in the risk-assessment software sometimes used in criminal sentencing, developers would be able to spot those outcomes through regular testing. “Minorities and people with disabilities are ignored by the system through human approaches every single day,” she says. “This idea that AI is going to do worse by them, I would say, is ridiculous.”
To contact the editor responsible for this story: Jeff Muskus at firstname.lastname@example.org
©2018 Bloomberg L.P.