Quantifying Cascading Power Failure

Around 2 p.m. on August 14, 2003, an overhead transmission line carrying 345 kilovolts of electricity near Walton Hills, Ohio, sagged too close to a nearby tree and shorted out. By 4 p.m., more than 50 million people were affected by one of the largest blackouts in history.

In September 2011, an Arizona Public Service employee, performing a routine procedure at the North Gila substation near Yuma, tripped a 500-kilovolt line, setting off a series of failures that left more than 2 million people without power in the Southwest United States.

Both trigger events were small, seemingly inconsequential incidents. Both resulted in massive power outages by setting off an effect called cascading failure, a topic of considerable study for Ian Dobson, Arend J. and Verna V. Sandbulte Professor in Engineering.

“What happens is, a failure occurs somewhere and weakens the system a bit,” Dobson says. “On a bad day, something else happens. Usually it doesn’t, but on that day, let’s say, it does. If it’s a really bad day, then a third thing happens and the system becomes degraded. You’re in a situation where it’s more likely that the next failure is going to happen because the last failure already happened. That’s the idea of cascading failure.”

The failure of the Walton Hills line, a relatively minor occurrence given the size and scale of the power grid, reverberated through the network and helped cause a series of events that brought down a sizable chunk of the nation’s power infrastructure. The initial failure in Ohio shifted the power burden to other points down the line and made a malfunction at those points much more likely – a classic case of cascading failure.

“What we’re talking about is the big power grid that stretches from here to Florida and Maine and Canada – everything east of the Rockies is all connected together, all humming together,” Dobson says. “Everything in the power system is protected so it doesn’t fry when something goes wrong. Things can disconnect to protect the equipment, but if you disconnect enough things, you get a blackout.”

Those disconnects are usually the very thing keeping the grid from destroying itself during a large-scale cascading event. Failures in the grid are rare and typically unanticipated because, as Dobson says, everything that can be anticipated has usually already been engineered into the grid.

“Something trips out the line and the power system wobbles a little bit,” Dobson says. “Under normal operation you’ve already designed for normal faults. With anything that commonly goes wrong with the system, engineers and everyone in the utility industry rushes around and makes sure that it doesn’t happen again. Most common, understandable, or easy-to-figure-out things are already mitigated. Unusual stuff – rare interactions, unusual combinations of things when the system is already degraded – is a lot harder to control.”

Dobson’s research goes beyond what can be anticipated and attempts to figure out the overall likelihood of large-scale blackouts, like the events in 2003 and 2011, by studying the interactions between various points in the system using mathematical equations and simulations. In effect, Dobson is using models to simulate the “perfect storm” in the power grid, though he disputes the terminology.

“People always say, ‘It was the perfect storm,’” Dobson says. “But these large blackouts happen because of the cascading effect. You’re never going to get 20 different independent failures to happen at the same time because that’s vanishingly unlikely. But if the first couple of events make the next events more likely, then those events happen and make the next ones more likely – then you get those rare events happening. This is the typical way that large complicated systems have catastrophic failures, and it is not really a perfect storm.”

Cascading failure is difficult to analyze because of the huge number of unanticipated variables.
In other words, researchers don’t know what they don’t know. In addition, the dependence of individual failures on previous failures, and their effect on subsequent failures, creates an incredibly complex system of dependent variables. Large blackouts involve the failure of many interconnected components, each of which affects how components down the line interact with each other.

“Imagine you’re very, very tightly scheduled on a certain day,” Dobson says. “Then things start getting delayed in the morning, and things get worse and worse throughout the day. Because your first appointment was delayed, it’s more likely that the next one will be delayed. Pretty soon you start missing appointments altogether in the afternoon. That’s a very small example of cascading failure.”

There are a few common attributes, like critical loading, that researchers can look for when studying cases of cascading failure. A power grid’s critical loading is a point somewhere between very low load and very high load at which the risk of a blackout increases sharply. If the amount of electricity flowing through the system is higher than the grid’s critical load, the likelihood of a blackout spikes. The critical load acts as a reference point for cascading failure: stay below it, and the system will likely be fine; go above it, and the risk of a blackout is much more severe.

“If a transmission line carrying its usual load fails, other lines can pick up the slack without much trouble,” he says. “But if the power grid as a whole is carrying a load that is above its critical loading, its burden has a much greater effect on the other lines. That’s something we look for.”
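Dobson’s actual models are far more detailed, but the load-redistribution mechanism he describes can be sketched in a few lines of Python. Everything here – the number of lines, the capacities, and the rule for sharing a tripped line’s load among survivors – is an illustrative assumption, not the research model:

```python
import random

def simulate_cascade(n_lines=100, load=0.5, spread=0.2, trials=2000, seed=0):
    """Toy cascading-failure model. Each line starts below its capacity
    of 1.0; a tripped line's load is shared among the survivors, which
    may push more lines over capacity, and the cascade repeats.
    Returns the mean fraction of lines lost per trial."""
    rng = random.Random(seed)
    total_lost = 0.0
    for _ in range(trials):
        loads = [load + rng.uniform(0, spread) for _ in range(n_lines)]
        failed = {rng.randrange(n_lines)}           # the small trigger event
        while True:
            alive = [i for i in range(n_lines) if i not in failed]
            if not alive:
                break
            shed = sum(loads[i] for i in failed)    # load needing a new home
            extra = shed / len(alive)               # shared among survivors
            newly = {i for i in alive if loads[i] + extra > 1.0}
            if not newly:                           # cascade has died out
                break
            failed |= newly
        total_lost += len(failed) / n_lines
    return total_lost / trials

# Below critical loading, a trip stays a minor event: only the tripped
# line is lost.
print(simulate_cascade(load=0.5))        # ≈ 0.01
# Well above it, the same single trigger snowballs into a near-total
# blackout, even though no individual line got any weaker.
print(simulate_cascade(load=0.9, spread=0.1))
```

The point of the sketch is the sharp transition Dobson describes: the trigger is identical in both runs, and only the background loading decides whether it stays a hiccup or becomes a blackout.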

Dobson uses a number of models and power system simulations of cascading failure to develop risk analysis methods for the power grid. Much like businesses use risk analysis procedures to identify and assess potential shortcomings within a project or account, Dobson uses his models to quantify the size and cost of a blackout given data on the power grid and its internal interactions. His findings can eventually be used to recommend upgrades in the power grid and determine the value and necessity of those upgrades.

“There’s a difference between recommending power grid upgrades and recommending prudent and cost-effective power grid upgrades,” Dobson says. “We have to figure out the best places to upgrade and focus resources there.”

August 17, 2012 by Brock Ascher

Iowa State Engineer Discovers Spider Silk Conducts Heat as Well as Metals

Xinwei Wang, Guoqing Liu and Xiaopeng Huang, left to right, show the instruments they used to study the thermal conductivity of spider silk

Xinwei Wang had a hunch that spider webs were worth a much closer look.

So he ordered eight spiders – Nephila clavipes, golden silk orbweavers – and put them to work eating crickets and spinning webs in the cages he set up in an Iowa State University greenhouse.

Wang, an associate professor of mechanical engineering at Iowa State, studies thermal conductivity, the ability of materials to conduct heat. He’s been looking for organic materials that can effectively transfer heat. It’s something diamonds, copper and aluminum are very good at; most materials from living things aren’t very good at all.

But spider silk has some interesting properties: it’s very strong, very stretchy, only 4 microns thick (human hair is about 60 microns) and, according to some speculation, could be a good conductor of heat. But nobody had actually tested spider silk for its thermal conductivity.

And so Wang, with partial support from the Army Research Office and the National Science Foundation, decided to try some lab experiments. Xiaopeng Huang, a post-doctoral research associate in mechanical engineering, and Guoqing Liu, a doctoral student in mechanical engineering, helped with the project.

“I think we tried the right material,” Wang said of the results.

What Wang and his research team found was that spider silks – particularly the draglines that anchor webs in place – conduct heat better than most materials, including very good conductors such as silicon, aluminum and pure iron. Spider silk also conducts heat 1,000 times better than woven silkworm silk and 800 times better than other organic tissues.

A paper about the discovery – “New Secrets of Spider Silk: Exceptionally High Thermal Conductivity and its Abnormal Change under Stretching” – has just been published online by the journal Advanced Materials.

“Our discoveries will revolutionize the conventional thought on the low thermal conductivity of biological materials,” Wang wrote in the paper.

The paper reports that, using laboratory techniques developed by Wang – “this takes time and patience” – spider silk was measured to conduct heat at a rate of 416 watts per meter-kelvin. Copper measures 401. And skin tissues measure 0.6.
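Thermal conductivity translates into heat flow through Fourier’s law, q = kAΔT/L. A quick back-of-the-envelope comparison using the article’s conductivity figures shows what those numbers mean in practice; the fiber length and temperature difference below are illustrative assumptions, not measurements from the study:

```python
import math

def heat_flow_watts(k, area_m2, delta_t_k, length_m):
    """Steady-state conduction along a bar or fiber: q = k * A * dT / L."""
    return k * area_m2 * delta_t_k / length_m

# Conductivities from the article, in W/(m*K).
materials = {"spider silk": 416.0, "copper": 401.0, "skin tissue": 0.6}

# Illustrative geometry: a 4-micron-diameter fiber (the article's figure),
# 1 cm long, with an assumed 10 K difference between its ends.
area = math.pi * (2e-6) ** 2
for name, k in materials.items():
    print(f"{name}: {heat_flow_watts(k, area, 10.0, 0.01):.2e} W")
```

Run this and the silk fiber edges out an identical copper wire, and carries roughly 700 times the heat of the same geometry made of skin tissue – which is why the result surprised the researchers.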

“This is very surprising because spider silk is organic material,” Wang said. “For organic material, this is the highest ever. There are only a few materials higher – silver and diamond.”

Even more surprising, he said, is that when spider silk is stretched, its thermal conductivity also goes up. Wang said stretching spider silk to its 20 percent limit also increases conductivity by 20 percent. Most materials lose thermal conductivity when they’re stretched.

That discovery “opens a door for soft materials to be another option for thermal conductivity tuning,” Wang wrote in the paper.

And that could lead to spider silk helping to create flexible, heat-dissipating parts for electronics, better clothes for hot weather, bandages that don’t trap heat and many other everyday applications.

What is it about spider silk that gives it these unusual heat-carrying properties?

Wang said it’s all about the defect-free molecular structure of spider silk, including proteins that contain nanocrystals and the spring-shaped structures connecting the proteins. He said more research needs to be done to fully understand spider silk’s heat-conducting abilities.

Wang is also wondering if spider silk can be modified in ways that enhance its thermal conductivity. He said the researchers’ preliminary results are very promising.

And then Wang marveled at what he’s learning about spider webs, everything from spider care to web unraveling techniques to the different silks within a single web. All that has one colleague calling him Iowa State’s Spiderman.

“I’ve been doing thermal transport for many years,” Wang said. “This is the most exciting thing, what I’m doing right now.”

March 5, 2012 by Mike Krapfl

ISU Professor Takes on Threat of Espionage via Hacked Smartphones

It’s not exactly dinner-table conversation, but cyber insecurity is bearing down on everyone from company CEOs to generals at U.S. military bases overseas.

Recent incidents, particularly the hacking of government websites by the group Anonymous and the theft of confidential data from online retailers like Zappos, have raised questions about Internet safety. Congress’ recent introduction of the Stop Online Piracy Act exposed how complex the issue has become.

In an age when most American businesses rely on computers to help run their day-to-day operations, and citizens habitually keep their tablets or smart phones within reach, the task of locking out cyber threats has become increasingly difficult.

Suraj Kothari, a professor of electrical and computer engineering, is researching how to ward off cyber infiltration. His newest endeavor, a $4.1 million project to develop security software for Android-powered smart phones, could potentially affect every American with a hand-held mobile device.

“We hear about cyber security,” Kothari said. “For example, a computer can be attacked, and you will see things on your disk are wiped out, so you know something bad has happened. Now, there are new types of attacks that are going to happen or maybe are happening now. Your cell phone has been compromised, but you don’t even know it has been compromised.”

In conjunction with Iowa-based EnSoft Corp., a software management company, Kothari is developing a tool to analyze potentially malicious software on Android phones. His research, funded through the Defense Advanced Research Projects Agency (DARPA), will focus on software applications commonly used by members of the U.S. military who carry smart phones.

• • •

Since the 2005 incident in which Paris Hilton’s cell phone was hacked and explicit photos were leaked onto the Internet, the ease of hacking into personal devices has become ordinary for some and frightening for others.

In the case of military phones, keeping sensitive information out of the wrong hands could be key to American national security.

“Let’s say a general is talking to somebody else and that conversation is being leaked through the phone because the phone is interacting with the outside world … but somebody has now sneaked in software which is taking sensitive information and leaking it out to other sources,” Kothari said. “And the person who is using the phone doesn’t even know that’s what’s happening. That would be a very serious problem.”

Jeremías Sauceda, a co-principal researcher, said there haven’t been any major hacking incidents on military phones. But, he said, funding research in this area will hopefully help prevent dangerous episodes in the future.

“It’s not that some incident has happened and they are responding,” Sauceda said. “They are being proactive. Now they want to equip their personnel with smart phones. In the process of adopting that technology, they need to make sure it’s secure.”

Sauceda is a researcher for EnSoft Corp., a company located at ISU’s Research Park. Using Kothari’s innovations, Sauceda will develop a product that can be installed on military phones by the end of the 3 1/2-year project.

The idea isn’t simple, but it also isn’t new.

The project, which officially kicks off Feb. 22, will use techniques Kothari has been developing over a 15-year professional career in software analysis.

“Forty or 50 years ago, if somebody went to a doctor, the doctor would say, ‘OK, what are your symptoms?’ … The doctor is observing what’s going on in your body from the outside,” he said. “Testing is like that.”

Kothari’s analysis, however, looks at the software from the inside out, making his technique more like a modern doctor’s MRI machine.

“This is a very different way of analyzing and understanding software,” Kothari said, “and one application of it is to improve reliability.”

Downloadable mobile apps, which are often updated by their developer to improve usability, pose a tricky problem for software analysts who only rely on testing-based methods. Kothari said his goal is to develop a tool capable of probing a downloaded app and understanding its content, even after multiple updates or changes are made to the program.

February 13, 2012 by Hannah Furfaro