Tilting the Vocal Scales: One Loud Voice > 40

There exist a few situations in which taking a voice vote of “yea” or “nay” makes a lot of sense. If there’s only a handful of voting members, the common parliamentary procedure can expedite the voting process. The same goes for motions with a clear and overwhelming majority.

It would seem obvious to me, however, that taking a voice vote with several thousand people present, on an issue that is right on the cusp of being passed or defeated, is an absolutely horrendous idea. Not so, apparently, to the Democratic National Convention.

In the video above, taken from the 2012 Democratic National Convention, the large crowd is asked whether the party should reinsert the phrases “God-given talents” and “Jerusalem is and will remain the capital of the State of Israel” into the party’s platform document. The voice vote is so divided that the question is repeated three times before Los Angeles Mayor Antonio Villaraigosa names a winner, a call some say was erroneous.

Clearly this is a terrible way to do things when a two-thirds majority is required and the vote is nowhere remotely close to unanimous. One researcher from the University of Iowa, however, wondered just how awful an idea it really was.

Ingo Titze, professor of communication sciences and disorders at Iowa, and Anil Palaparthi, a research engineer with the National Center for Voice and Speech in Utah, assembled a voting body of 54 students in a large classroom, with five blindfolded judges at the front. They then had the students gradually shift votes from “yea” to “nay,” or had certain students become more acoustically aggressive in their voting.

The surprising result was that a single emotional voter with a nice set of pipes could make far more of a difference than sheer numbers alone. While it took at least two people switching their votes for the judges to notice a difference, a single person simply raising the volume could sway the result.

The researchers calculate that it would take at least 40 voices at normal loudness to overcome the bias of a single loud vote and establish roughly a two-thirds majority.
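For a rough sense of why one set of pipes can count for so many (this is my own back-of-the-envelope sketch, not the model from the paper): if you treat each voice as an independent sound source, their intensities add up roughly linearly, while loudness in decibels is logarithmic. So a hypothetical shouter yelling 16 dB above a normal voice is pumping out about as much acoustic energy as 40 ordinary voices combined.

```python
# Back-of-the-envelope sketch (not the paper's model): treat each voice as an
# incoherent sound source, so intensities add linearly while levels are in dB.

def db_to_intensity_ratio(db):
    """Convert a level difference in decibels to a linear intensity ratio."""
    return 10 ** (db / 10)

normal_voices = 40                   # ordinary-loudness voters on one side
loud_boost_db = 16                   # hypothetical shouter, 16 dB above normal

side_a = normal_voices * 1.0                      # 40 voices at unit intensity each
side_b = db_to_intensity_ratio(loud_boost_db)     # one very loud voice

print(f"40 normal voices:   total intensity = {side_a:.1f}")
print(f"1 voice at +16 dB:  total intensity = {side_b:.1f}")
# Both sides land near 40x a single normal voice, so to a blindfolded judge
# they sound roughly equally loud.
```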

So, yeah, I’m thinking it might be worth taking actual tallies in the future on things like a party platform that touches on emotionally charged topics.

The paper, “The accuracy of a voice vote,” was published in the Journal of the Acoustical Society of America by Titze and Palaparthi.

Posted in Iowa

Narcissism Great at Making Leaders, Terrible at Making Great Leaders

Douche

We’ve all had to deal with a jackass in a position of leadership at some point or another. It could be that your boss is about as competent as Nick Cage’s agent. Or perhaps the CEO of your company is too far removed from reality to remember what life is like as a peon with kids and a commute. But more likely than not, we’ve all run across the person in a leadership position who is a narcissistic asshole.

You know the one. Never listens to anyone else’s ideas because they are clearly inferior to their own. Doesn’t recognize when someone else has a major success. Spends their time looking down their nose at the morons who didn’t attend State University.

How do these people keep getting handed the reins?

Research on the subject is conflicted. About half of the literature says that narcissism is indeed linked to leadership, and in a positive way at that. But the other half says narcissistic tendencies make for the worst people in positions of power and privilege. So which is it?

A new study from the University of Illinois seeks to crack the self-indulgent code. Study leader Emily Grijalva and her team conducted a meta-analysis, reviewing the existing literature and teasing out its common threads. Their conclusion: narcissism is great at making leaders, but terrible at making leaders great.

As most anyone who knows what the words mean can tell you, narcissistic people tend to have extroverted interactions with those around them. And nothing forms a leader from a ball of personality putty like someone willing to throw their opinions into the ring early and often. So as one might expect, a certain level of narcissism is great at making leaders in the first place.

It’s once people are in positions of leadership that you start having too much of a good thing. The study found that people who think too highly of themselves eventually make terrible leaders. They’ll tell you that they’re great in their positions, but those around them have different opinions of their effectiveness.

That’s not to say that narcissism is all bad, though. The study concluded that, as with most things, narcissism works best in moderation. A certain healthy amount is needed for a leader to be confident in their decisions and, well, actually lead.
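To put the moderation idea in concrete terms (a purely illustrative sketch with made-up numbers, not the study’s data): a curvilinear, inverted-U relationship means rated effectiveness climbs with narcissism up to some sweet spot and then drops off, and you can find that sweet spot by fitting a quadratic and locating its peak.

```python
# Purely illustrative: synthetic ratings with a built-in inverted-U shape, fit with
# a quadratic to locate the "sweet spot" of narcissism for rated effectiveness.
import numpy as np

rng = np.random.default_rng(0)
narcissism = rng.uniform(0, 10, 200)                      # made-up trait scores
effectiveness = -0.4 * (narcissism - 5.0) ** 2 + 8.0      # true peak at a moderate score
effectiveness += rng.normal(0, 1.0, narcissism.size)      # observer-rating noise

b2, b1, b0 = np.polyfit(narcissism, effectiveness, 2)     # quadratic fit
peak = -b1 / (2 * b2)                                     # vertex of the parabola
print(f"estimated sweet spot: narcissism score of about {peak:.1f} on a 0-10 scale")
```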

So what’s the perfect amount? I would say Joffrey Lannister might be a bit far to the right on the bell curve, while Sansa Stark is too far to the left. I’d put my bets on Jon Snow – he seems just confident enough in his abilities to pull off something great.

That is, unless he starts using Nick Cage’s agent. Wait, has anyone seen Pompeii?

God damnit.

The study, “Narcissism and Leadership: A Meta-Analytic Review of Linear and Nonlinear Relationships,” was published in Personnel Psychology by Grijalva and her advisor, Chris Fraley.


Law of Physics Helps Explain Airplane Evolution

This graph shows how the mass and speed of man-made airplanes fits on a curve with the mass and speed of various kinds of animal locomotion. The supersonic Concorde was too far off the curve to be a winner. Photo courtesy Adrian Bejan.

Researchers believe they now know why the supersonic trans-Atlantic Concorde aircraft went the way of the dodo — it hit an evolutionary cul-de-sac.

In a new study, Adrian Bejan, professor of mechanical engineering and materials science at Duke University, shows that a law of physics he penned more than two decades ago helps explain the evolution of passenger airplanes from the small, propeller-driven DC-3s of yore to today’s behemoth Boeing 787s. The analysis also provides insights into how aerospace companies can develop successful future designs.

The Concorde, alas, was too far from the curve of these good designs, Bejan says. The paper appears online July 22 in the Journal of Applied Physics.

“The evolution of Earth’s species occurred on a timescale far too large for humans to witness,” said Bejan. “But the evolution of our use of technology and airplanes to transport people and goods has taken place in little more than a single lifetime, making it visible to those who look. Evolution is a universal phenomenon encompassing technology, river basins and animal design alike, and it is rooted in physics as the constructal law.”

Adrian Bejan

The constructal law was developed by Bejan in 1996 and states that for a system to survive, it must evolve to increase its access to flow. For example, the human vascular system has evolved to give blood easier access to flow through a network of a few large arteries and many small capillaries. River systems, tree branches and modern highway and road networks show the same forces at work, he says.

In the case of commercial aircraft, designs have evolved to allow more people and goods to flow across the face of the Earth. The constructal law has also dictated the main design features needed for aircraft to succeed: the engine mass has remained proportional to the body size, the wing size has been tied to the fuselage length, and the fuel load has grown in step with the total weight.

“The same design features can be seen in any large land animal,” said Bejan. “Larger animals have longer lifespans and travel farther distances, just as passenger airplanes have been designed to do. For example, the ratio of the engine to aircraft size is analogous to the ratio of a large animal’s total body size to its heart, lungs and muscles.”

To apply his theories to airplane design, Bejan teamed up with Jordan Charles, a researcher and development engineer, and Sylvie Lorente, a professor of civil engineering at the University of Toulouse, to mine the historical databases of successful commercial aircraft. As they plotted thousands of statistics including year of introduction, size, cruising speed, engine weight, fuel weight, range, wingspan and fuselage length, many patterns began to emerge.

But two in particular stood out.

This chart shows how bigger and bigger commercial aircraft evolved over the decades to join their behemoth brethren from previous years. Courtesy Adrian Bejan.

In one chart, a clear curve tracks the increasing size of commercial airplanes through nearly a century of aviation. As time moves on, new commercial airliners come in all sizes but the biggest are joined by even bigger models. In another chart, the line that best tracks the relationship of body mass to airplane speeds is nearly identical to mass and speed statistics from various mammals, lizards, birds, insects and more. Evolutionary constraints found in nature, in other words, can be seen at work in the airline industry.
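For the curious, here’s roughly what that kind of scaling analysis looks like in practice (my own sketch with invented numbers, not the team’s dataset): a power law V = c * M^a turns into a straight line on log-log axes, so fitting a line to log(speed) versus log(mass) recovers the exponent. Bejan’s earlier constructal analyses of flyers put that exponent near 1/6.

```python
# Illustrative only: fit a power law V = c * M**a to made-up (mass, speed) points,
# the same way one would fit the real airplane and animal data. Numbers are invented.
import numpy as np

mass_kg = np.array([1e-3, 1e0, 1e2, 1e4, 1e5, 4e5])      # hypothetical fliers
speed_ms = np.array([2.0, 8.0, 18.0, 45.0, 65.0, 90.0])  # hypothetical cruise speeds

# A power law is a straight line in log-log space: log V = a * log M + log c.
a, log_c = np.polyfit(np.log(mass_kg), np.log(speed_ms), 1)
print(f"fitted exponent a = {a:.2f}")        # constructal analyses of flyers put
print(f"prefactor c = {np.exp(log_c):.2f}")  # the real-world exponent near 1/6
```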

There was, however, one outlier on the chart—the Concorde.

“The Concorde was too far off from the ratios that evolution has produced in passenger jets,” explained Bejan, who points out that the doomed aircraft had limited passenger capacity, a low mass-to-velocity ratio, an off-the-charts fuselage-to-wingspan ratio, massive engines and poor fuel economy. “It would have had to adhere to the constructal design rules to succeed.”

Bejan said this analysis shows that the aviation industry has done well with its designs over the decades, and that the trends dominating the industry are indeed the most efficient. They also reveal the general design parameters that future passenger aircraft should follow to succeed economically.

“This study gives the rough sketch of what airplane designs will put you in the game,” said Bejan. “For design companies, it is money in the bank.”

Jose Camberos, research aerospace engineer and lead of design space exploration at the Multidisciplinary Science & Technology Center of the Air Force Research Laboratory at Wright-Patterson Air Force Base in Dayton, Ohio, said that the work will hopefully give the field better insight into where the design of airplanes is going.

“There is definitely an analogy to be understood and articulated to explain why engines and airplanes are sized the way they currently are and how that has evolved,” said Camberos, who was not involved with this study. “By looking at the development of aircraft in a larger context in these terms, it may be possible to gain insights into how best to achieve what nature has been able to accomplish already.”

This research was supported by the National Science Foundation.

# # #

CITATION:  “The Evolution of Airplanes,” A. Bejan, J.D. Charles, S. Lorente. Journal of Applied Physics, July 22, 2014. DOI: 10.1063/1.4886855

Posted in Duke

Men in Academia Just as Swiney as Uneducated Brethren

A pretty important paper came out last week from the University of Illinois on a topic that I don’t believe I’ve ever heard discussed before – the prevalence of sexual harassment during scientific field studies.

It stands to reason that there should be a decent amount of this going on. Field work is something that most disciplines require – from collecting skin samples from frogs in the wild to setting up camp at a new excavation site – and pretty much every one of them is dominated by men. Take further into account that most people in positions of power in academia are also men, and you have a situation ripe for unwanted sexual advances.

On the other hand, you’d hope that highly educated people trying to train the next generation of scientists would be able to keep their hands out of their pants during excursions.

But nope.

The survey asked 666 respondents – most of whom were female – about their experiences in the field. The questions used standard sexual harassment and assault terminology, relying on generalized wording and operational definitions rather than the words “harassment” or “assault” themselves, so respondents didn’t have to label their own experiences. And unfortunately, as one might expect, the results were pretty damning.

A majority – 64 percent of respondents – said that they had experienced sexual harassment, and more than 20 percent reported sexual assault. What’s more, the vast majority of those incidents occurred while the victims were trainees and were perpetrated by people in positions of power over them. More than 90 percent of the women and 70 percent of the men were trainees or employees at the time, women were 3.5 times more likely to have been harassed than men, and roughly half of all incidents reported by women were perpetrated by their superiors.

Worse still, it seems like there isn’t much help on the way. Very few people reported ever having sexual harassment guidelines or reporting mechanisms presented to them, and few of those who did report their incidents were satisfied with the outcome.

I would hope that such behavior would be rare among the supposedly educated, but apparently even the best of society has a long way to go. Still, this study is a good start: the first step is admitting there’s a problem, right?

The paper, “Survey of academic field experiences (SAFE): Trainees report harassment and assault,” was published in PLOS ONE by University of Illinois anthropology professor Kate Clancy.

Posted in Illinois

Dual Contrast Agent to Light Up Arterial Health Risks

Two degrees plus two scan energies and one heavy metal equals a new way to detect dangerous plaques in the coronary arteries.

Potentially.

Jeffrey Ashton, a biomedical engineering graduate student in Duke University’s MD-PhD program, has won an American Heart Association Fellowship to develop a new contrast agent for CT scans. Not only would the agent be able to detect plaque buildup in arteries, it would also reveal how likely the plaque is to rupture and cause a heart attack or stroke.

The prestigious fellowship, which comes with a two-year, $50,000 grant, is intended to help young researchers launch independent careers in cardiovascular and stroke research by obtaining significant scientific results under the supervision of a mentor.

Or in this case, two mentors. The research is made possible through a collaboration between Jennifer West, the Fitzpatrick Family University Professor of Engineering at Duke University, and Cristian Badea, a professor of radiology at Duke Medicine.

“CT scans are very effective for seeing where there’s a pathology and how big it is,” said Ashton. “But that information can’t accurately predict which plaques pose an imminent risk to the patient.”

A better predictor is the amount of proteases secreted by advanced plaques. Previous research has shown that plaques nearing their tipping point pump out more of these specialized enzymes than typical tissue does.

To find plaques and determine their chances of rupturing, the project will use a relatively new technology called dual-energy CT scanning. Aptly named, the technique conducts two scans simultaneously with x-rays of differing energies. This allows doctors to see multiple materials at the same time.

“If we see an atherosclerotic plaque with a normal CT scan, we could do a dual energy CT scan using this new contrast agent to determine the risk,” said Ashton.

The first material the dual energies will light up is iodine, a contrast agent commonly used in CT scans. The second is gold nanoparticles. But the two won’t be jumping into the pool alone; they’ll be joined at the hip.

Ashton plans to connect the two elements using peptides that are easily broken by the proteases secreted by advanced plaques. As the iodine builds up in the plaque, the attached gold nanoparticles will either stay put or break free. The former indicates the plaque is in no danger of rupturing; the latter indicates a need for intervention.
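For a sense of how the dual-energy math might shake out (a minimal sketch of generic material decomposition with invented attenuation values, not Ashton’s actual protocol): each scan energy gives one equation relating the measured attenuation to the iodine and gold concentrations, so two energies let you solve for both materials and see how much gold is still riding along with the iodine.

```python
# Sketch of dual-energy material decomposition (a generic illustration, not
# Ashton's actual protocol). All attenuation values below are invented.
import numpy as np

# Rows: low- and high-energy scans. Columns: attenuation contributed per unit
# concentration of iodine and of gold at each energy (hypothetical numbers).
A = np.array([[4.0, 7.5],    # low-energy scan
              [1.5, 4.5]])   # high-energy scan

# Measured extra attenuation in one plaque voxel at the two energies (hypothetical).
measured = np.array([5.2, 2.4])

iodine, gold = np.linalg.solve(A, measured)
print(f"iodine ~ {iodine:.2f}, gold ~ {gold:.2f}, gold/iodine ~ {gold / iodine:.2f}")
# In the proposed scheme, gold freed from a cleaved peptide washes out of the plaque,
# so a low gold-to-iodine ratio would flag high protease activity and rupture risk.
```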

“Working with Cristian has been great because, while we have done a lot of work in contrast agents in the past, we have no expertise in CT scanning technology, and he is one of the world’s leading experts in multi-modal CT imaging,” said West. “By coming together, we’ve been able to translate some of the approaches we’ve used in the past from an optical imaging platform onto a CT platform, which enables many more types of clinical applications.”

For his part, Ashton couldn’t be happier with the situation.

“Even before I started my PhD, I knew I was interested in contrast agents, but I didn’t think anyone at Duke was working with them,” said Ashton. “But rotating through Professor West’s and Professor Badea’s labs, I found that both were interested in the intersection between CT imaging and nanoparticle contrast agent development. It’s been great. I was lucky to find two labs with a need that I could fill, which also happened to be exactly what I was interested in.”

Posted in Duke

Watching Individual Neurons Respond to Magnetic Therapy

The light grey coil on the left is a conventional, commercially available TMS coil. The black coil on the right is the new, innovative version designed to fit a smaller non-human primate’s cranium and work with the neural monitoring device. Photo courtesy of Warren Grill.

Engineers and neuroscientists at Duke University have developed a method to measure the response of an individual neuron to transcranial magnetic stimulation (TMS) of the brain. The advance will help researchers understand the underlying physiological effects of TMS — a procedure used to treat psychiatric disorders — and optimize its use as a therapeutic treatment.

TMS uses magnetic fields created by electric currents running through a wire coil to induce neural activity in the brain. With the flip of a switch, researchers can cause a hand to move or influence behavior. The technique has long been used in conjunction with other therapies in the hopes of improving outcomes for conditions including depression and substance abuse.

While studies have demonstrated the efficacy of TMS, the technique’s physiological mechanisms have long been lost in a “black box.” Researchers know what goes into the treatment and the results that come out, but do not understand what’s happening in between.

Part of the reason for this mystery lies in the difficulty of measuring neural responses during the procedure; the comparatively tiny activity of a single neuron is lost in the tidal wave of current being generated by TMS. But the new study demonstrates a way to remove the proverbial haystack.

The results were published online June 29 in Nature Neuroscience.

“Nobody really knows what TMS is doing inside the brain, and given that lack of information, it has been very hard to interpret the outcomes of studies or to make therapies more effective,” said Warren Grill, professor of biomedical engineering, electrical and computer engineering, and neurobiology at Duke. “We set out to try to understand what’s happening inside that black box by recording activity from single neurons during the delivery of TMS in a non-human primate. Conceptually, it was a very simple goal. But technically, it turned out to be very challenging.”

First, Grill and his colleagues in the Duke Institute for Brain Sciences (DIBS) engineered new hardware that could separate the TMS current from the neural response, which is thousands of times smaller. Once that was achieved, however, they discovered that their recording instrument was doing more than simply recording.

The TMS magnetic field was creating an electric current through the electrode measuring the neuron, raising the possibility that this current, instead of the TMS, was causing the neural response. The team had to characterize this current and make it small enough to ignore.

Finally, the researchers had to account for vibrations caused by the large current passing through the TMS device’s small coil of wire — a design problem in and of itself, because the typical TMS coil is too large for a non-human primate’s head. Because the coil is physically connected to the skull, the vibration was jostling the measurement electrode.
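The team’s fixes were largely in hardware, but for a flavor of the general problem, here’s a toy software sketch (my own, not the authors’ method) of classic template subtraction for stimulus artifacts: average the recording around many pulses to estimate the artifact’s shape, subtract that template, and blank the samples during the pulse itself.

```python
# Toy sketch of stimulus-artifact suppression (not the hardware approach the study
# used): build an average artifact template around the TMS pulses, subtract it,
# and blank the samples during the pulse itself.
import numpy as np

def remove_artifact(signal, pulse_idx, window=200, blank=20):
    """Template-subtract the artifact in a window after each pulse onset."""
    snippets = np.array([signal[i:i + window] for i in pulse_idx])
    template = snippets.mean(axis=0)          # average artifact shape
    cleaned = signal.copy()
    for i in pulse_idx:
        cleaned[i:i + window] -= template     # subtract the template
        cleaned[i:i + blank] = 0.0            # blank the pulse itself
    return cleaned

# Hypothetical usage with a stand-in recording and known pulse onset samples:
fs = 30000                                                # samples per second
recording = np.random.default_rng(1).normal(0.0, 1.0, fs) # stand-in for neural data
pulses = np.arange(1000, fs - 300, 3000)                  # stand-in TMS pulse onsets
clean = remove_artifact(recording, pulses)
```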

Michael Platt, director of the Duke Institute for Brain Sciences, Center for Cognitive Neuroscience; Warren Grill, professor of biomedical engineering, electrical and computer engineering, and neurobiology; Marc Sommer, associate professor of biomedical engineering and neurobiology; and Tobias Egner, assistant professor of psychology and neuroscience. Photo courtesy of Duke University.

The researchers were able to compensate for each artifact, however, and see for the first time into the black box of TMS. They successfully recorded the action potentials of an individual neuron moments after TMS pulses and observed changes in its activity that significantly differed from activity following placebo treatments.

Grill worked with Angel Peterchev, assistant professor in psychiatry and behavioral science, biomedical engineering, and electrical and computer engineering, on the design of the coil. The team also included Michael Platt, director of DIBS and professor of neurobiology, and Marc Sommer, a professor of biomedical engineering.

They demonstrated that the technique could be recreated in different labs. “So, any modern lab working with non-human primates and electrophysiology can use this same approach in their studies,” said Grill.

The researchers hope that many others will take their method and use it to reveal the effects TMS has on neurons. Once a basic understanding is gained of how TMS interacts with neurons on an individual scale, its effects could be amplified and the therapeutic benefits of TMS increased.

“Studies with TMS have all been empirical,” said Grill. “You could look at the effects and change the coil, frequency, duration or many other variables. Now we can begin to understand the physiological effects of TMS and carefully craft protocols rather than relying on trial and error. I think that is where the real power of this research is going to come from.”

This research was supported by a Research Incubator Award from the Duke Institute for Brain Sciences and by a grant from the National Institute of Neurological Disorders and Stroke of the National Institutes of Health (grant R21 NS078687).

###

“Optimization Of Transcranial Magnetic Stimulation And Single Neuron Recording Methods For Combined Application In Alert Non-Human Primates.” Mueller, J.K., Grigsby, E.M., Prevosto, V., Petraglia III, F.W., Rao, H., Deng, Z., Peterchev, A.V., Sommer, M.A., Egner, T., Platt, M.L., Grill, W.M. Nature Neuroscience, June 29, 2014. DOI:10.1038/nn.3751

Posted in Duke

Big Ten Still Big in Research and Development

It’s time once again to revisit why the hell I’d choose the Big Ten as the conference to follow for science and research stories. Besides the obvious reasons that I’m from Ohio, went to Ohio State and Indiana, and worked for Michigan State, there’s also the small fact that it’s the best conference for research.

Don’t believe me? Just ask the National Science Foundation and their annual report on how much money is being spent on research at each and every school in the country. True, just spending a lot of money doesn’t necessarily mean that the research is awesome, but just name me one example of a company that spends a ton of cash and also sucks at what it does.

In any case, as you might expect with the Federal government all tied up in knots, overall spending across the board was flat between 2011 and 2012 (yup, it takes a while to get all of these statistics in). And yet, most Big Ten schools saw a small increase in their annual R&D expenditures.

Let’s take a look at their standings:

  • #2 – University of Michigan, Ann Arbor – $1.322 billion
  • #3 – University of Wisconsin, Madison – $1.169 billion
  • #14 – University of Minnesota, Twin Cities – $826 million
  • #18 – Pennsylvania State University – $797 million
  • #19 – Ohio State University – $766 million
  • #28 – Northwestern University – $631 million
  • #32 – Purdue University – $602 million
  • #33 – University of Illinois, Urbana-Champaign – $583 million
  • #36 – Michigan State University – $507 million
  • #42 – University of Iowa – $446 million
  • #51 – University of Chicago – $419 million

Not too shabby, eh? Besides having two schools in the top three and five in the top 25, the Big Ten just barely misses out on having all of its schools ranked in the top 50. And if you’re wondering why the University of Chicago is on the list, check out my “Why the Big Ten” page listed up there on the blog’s header.

Some of you smart folks out there might have noted that I left out the University of Nebraska. Yes, they’re now a part of the Big Ten and yes, they have a good football team. But are they up to snuff when it comes to science?

  • #83 – University of Nebraska, Lincoln – $253 million

In a word, nope.

What about next year’s class, with Rutgers and the University of Maryland bringing up the Big Ten to a weird total of 14?

  • #45 – Rutgers University – $434 million
  • #47 – University of Maryland – $433 million

Not too shabby. That’ll do.

Posted in Chicago, Duke, Illinois, Indiana, Iowa, Michigan, Michigan State, Minnesota, Nebraska, Northwestern, Ohio State, Penn State, Purdue, Virginia Tech, Wisconsin