papers


American Sociology’s Emergence and Separation from Political Economy

Published by Anonymous (not verified) on Sat, 25/07/2020 - 10:27am in

Rereading Philippe Steiner’s excellent, thorough and highly recommended Durkheim and the Birth of Economic Sociology (2011) — in which Steiner argues that there were two stages in Durkheim’s approach to the economy: a sociological critique of political economy and a sociology … Continue reading →

How Did Corporations Spread CSR from the US to the Rest of the World?

Published by Anonymous (not verified) on Sat, 30/05/2020 - 2:38am in

by Rami Kaplan & Daniel Kinderman* Why do firms adopt Corporate Social Responsibility (CSR) practices? How does CSR spread across the globe? Our paper “The Business-Led Globalization of CSR” provides surprising answers to these most fundamental questions, based on the … Continue reading →

Still 'Profiteering From Anxiety'

Published by Anonymous (not verified) on Thu, 07/02/2013 - 8:23am in


Late last year, the excellent Neurobonkers blog covered a case of 'Profiteering from anxiety'.

It seems one Nader Amir has applied for a patent on the psychological technique of 'Attentional Retraining', a method designed to treat anxiety and other emotional problems by conditioning the mind to unconsciously pay more attention to positive things and ignore unpleasant stuff.

For just $139.99, you can have a crack at modifying your unconscious with the help of Amir's Cognitive Retraining Technologies.

It's a clever idea... but hardly a new one. As Neurobonkers said, research on these kinds of methods had been going on for years before Amir came on the scene. In a comment, Prof. Colin MacLeod (who's been researching this stuff for over 20 years) argued that "I do not believe that a US patent granted to Prof Amir for the attentional bias modification approach would withstand challenge."

Well, in an interesting turn of events, Amir has just issued Corrections (1, 2) to two of his papers. Both articles reported that retraining was an effective treatment for anxiety, but in both cases he now reveals that there was

an error...in the article a disclosure should have been noted that Nader Amir is the co-founder of a company that markets anxiety relief products.

Omitting to declare a conflict of interest... how unfortunate.

Still, it's an easy mistake to make: when you're focused on doing unbiased, objective, original research, as Amir doubtless was, such mundane matters are the last thing you tend to pay attention to.

Amir, N., and Taylor, C. (2013). Correction to Amir and Taylor (2012). Journal of Consulting and Clinical Psychology, 81(1), 74. DOI: 10.1037/a0031156

Amir, N., Taylor, C., and Donohue, M. (2013). Correction to Amir et al. (2011). Journal of Consulting and Clinical Psychology, 81(1), 112. DOI: 10.1037/a0031157

Another Scuffle In The Coma Ward

Published by Anonymous (not verified) on Tue, 29/01/2013 - 6:22am in

It's not been a good few weeks for Adrian Owen and his team of Canadian neurologists.

Over the past few years, Owen's made numerous waves, thanks to his claim that some patients thought to be in a vegetative state may, in fact, be at least somewhat conscious, and able to respond to commands. Remarkable if true, but not everyone's convinced.

A few weeks ago, Owen et al were criticized over their appearance in a British TV program about their use of fMRI to measure brain activity in coma patients. Now, they're under fire from a second group of critics over a different project.

The new bone of contention is a paper published in 2011 called Bedside detection of awareness in the vegetative state. In this report, Owen and colleagues presented EEG results that, they said, show that some vegetative patients are able to understand speech.

In this study, healthy controls and patients were asked to imagine performing two different actions: moving their hand, or their toe. Owen et al found that it was possible to distinguish between the 'hand' and 'toe'-related patterns of brain electrical activity. This was true of most healthy control subjects, as expected, but also of some - not all - patients in a 'vegetative' state.

The skeptics aren't convinced, however. They reanalyzed the raw EEG data and claim that it just doesn't prove anything.

This image shows that in a healthy control, EEG activity was "clean" and generally normal. However, in the coma patient, the data's a mess. It's dominated by large slow delta waves - in healthy people, you only see those during deep sleep - and there are also a lot of muscle artefacts, which can be seen as a 'thickening' of the lines.

These don't come from the brain at all, they're just muscle twitches. Crucially, the location and power of these twitches varied over time (as muscle spikes often do).

This wouldn't necessarily be a problem, the critics say, except that the statistics used by Owen et al didn't control for slow variations over time, i.e. for correlations between consecutive trials (non-independence). If you do take these into account, there's no statistically significant evidence that you can distinguish the EEG associated with 'hand' vs 'toe' in any of the patients.
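To see why that matters, here's a toy simulation of the general point (my own sketch, with entirely made-up numbers - this is not Goldfine et al.'s reanalysis, nor the real EEG data): if trials within a run share a slowly varying artefact, treating each trial as an independent observation will produce 'significant' hand-vs-toe differences far more often than the nominal 5%, even when there's no difference at all, whereas analysing one value per block does not.

```python
# Toy sketch (mine, not Goldfine et al.'s): blocked trials that share a slowly
# varying artefact inflate significance if treated as independent observations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_null_experiment(n_blocks=8, trials_per_block=15, block_sd=1.0, trial_sd=1.0):
    """Alternating 'hand'/'toe' blocks with NO true difference; every trial in
    a block shares that block's artefact level (e.g. muscle tension that run)."""
    block_labels = np.arange(n_blocks) % 2
    block_offsets = rng.normal(0, block_sd, n_blocks)
    labels = np.repeat(block_labels, trials_per_block)
    feature = np.repeat(block_offsets, trials_per_block) + rng.normal(0, trial_sd, labels.size)
    return feature, labels, block_labels, trials_per_block

def trialwise_p(feature, labels, *_):
    # Treats every trial as an independent observation.
    return stats.ttest_ind(feature[labels == 0], feature[labels == 1]).pvalue

def blockwise_p(feature, labels, block_labels, tpb):
    # One number per block, so correlated trials can't masquerade as independent evidence.
    means = feature.reshape(-1, tpb).mean(axis=1)
    return stats.ttest_ind(means[block_labels == 0], means[block_labels == 1]).pvalue

n_sims = 2000
fp_trial = np.mean([trialwise_p(*one_null_experiment()) < 0.05 for _ in range(n_sims)])
fp_block = np.mean([blockwise_p(*one_null_experiment()) < 0.05 for _ in range(n_sims)])
print(f"False positive rate, trial-wise test: {fp_trial:.2f}")   # well above 0.05
print(f"False positive rate, block-wise test: {fp_block:.2f}")   # roughly 0.05
```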

However, in their reply, Owen's team say that:

their reanalysis only pushes two of our three positive patients to just beyond the widely accepted p=0.05 threshold for significance - to p=0.06 and p=0.09, respectively. To dismiss the third patient, whose data remain significant, they state that the statistical threshold for accepting command-following should be adjusted for multiple comparisons... but we know of no groups in this field who routinely use such a conservative correction with patient data, including the critics themselves.

I have to say that, statistical arguments aside, the EEGs from the patients just don't look very reliable, largely because of those pesky muscle spikes. A new method for removing these annoyances has just been proposed... I wonder if that could help settle this?

Goldfine, A., Bardin, J., Noirhomme, Q., Fins, J., Schiff, N., and Victor, J. (2013). Reanalysis of "Bedside detection of awareness in the vegetative state: a cohort study". The Lancet, 381(9863), 289-291. DOI: 10.1016/S0140-6736(13)60125-7

Is This How Memory Works?

Published by Anonymous (not verified) on Sun, 27/01/2013 - 8:46pm in


We know quite a bit about how long-term memory is formed in the brain - it's all about strengthening of synaptic connections between neurons. But what about remembering something over the course of just a few seconds? Like how you (hopefully) still recall what that last sentence was about?

Short-term memory is formed and lost far too quickly for it to be explained by any (known) kind of synaptic plasticity. So how does it work? British mathematician Samuel Johnson and colleagues say they have the answer: Robust Short-Term Memory without Synaptic Learning.

They write:

The mechanism, which we call Cluster Reverberation (CR), is very simple. If neurons in a group are more densely connected to each other than to the rest of the network, either because they form a module or because the network is significantly clustered, they will tend to retain the activity of the group: when they are all initially firing, they each continue to receive many action potentials and so go on firing.

The idea is that a neural network will naturally exhibit short-term memory - i.e. a pattern of electrical activity will tend to be maintained over time - so long as neurons are wired up in the form of clusters of cells mostly connected to their neighbours:


The cells within a cluster (or module) are all connected to each other, so once a module becomes active, it will stay active as the cells stimulate each other.

Why, you might ask, are the clusters necessary? Couldn't each individual cell have a memory - a tendency for its activity level to be 'sticky' over time, so that it kept firing even after it had stopped receiving input?

The authors say that even 'sticky' cells couldn't store memory effectively, because we know that the firing pattern of any individual cell is subject to a lot of random variation. If all of the cells were interconnected, this noise would quickly erase the signal. Clustering overcomes this problem.
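Here's a minimal simulation sketch of that idea (my own toy version, not the authors' code; the network sizes, threshold and noise level are all invented): threshold neurons wired into dense modules hold onto an imposed activity pattern despite noise, whereas the same neurons and synapses wired up at random lose it almost immediately.

```python
# Toy sketch of Cluster Reverberation (not the authors' model code): a pattern
# of activity persists in a modular network but dies in a randomly wired one.
import numpy as np

rng = np.random.default_rng(0)

def make_modular_network(n_modules=10, module_size=20, p_in=0.9, p_out=0.02):
    n = n_modules * module_size
    module = np.repeat(np.arange(n_modules), module_size)
    same = module[:, None] == module[None, :]
    w = (rng.random((n, n)) < np.where(same, p_in, p_out)).astype(float)
    np.fill_diagonal(w, 0)
    return w, module

def simulate(w, state, steps=50, threshold=0.5, noise=0.05):
    """Each unit fires if more than `threshold` of its inputs are active;
    `noise` is the per-step probability of a unit randomly flipping."""
    s = state.copy()
    for _ in range(steps):
        drive = w @ s / np.maximum(w.sum(axis=1), 1)
        s = (drive > threshold).astype(float)
        flips = rng.random(s.size) < noise
        s[flips] = 1 - s[flips]
    return s

w_mod, module = make_modular_network()
n = w_mod.shape[0]

# Control network: the same number of synapses, placed uniformly at random.
w_rand = np.zeros_like(w_mod)
w_rand.flat[rng.choice(n * n, size=int(w_mod.sum()), replace=False)] = 1
np.fill_diagonal(w_rand, 0)

# Switch on module 0 only, then let each network run.
pattern = (module == 0).astype(float)
for name, w in [("modular", w_mod), ("random", w_rand)]:
    s = simulate(w, pattern)
    print(f"{name}: active in stimulated module {s[module == 0].mean():.2f}, "
          f"elsewhere {s[module != 0].mean():.2f}")
```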

But how could a neural clustering system develop in the first place? And how would the brain ensure that the clusters were 'useful' groups, rather than just being a bunch of different neurons doing entirely different things? Here's the clever bit:

If an initially homogeneous (i.e., neither modular nor clustered) area of brain tissue were repeatedly stimulated with different patterns... then synaptic plasticity mechanisms might be expected to alter the network structure in such a way that synapses within each of the imposed modules would all tend to become strengthened.

In other words, even if the brain started out life with a random pattern of connections, everyday experience (e.g. sensory input) could create a modular structure of just the right kind to allow short-term memory. Incidentally, such a 'modular' network would also be one of those famous small-world networks.

It strikes me as a very elegant model. But it is just a model, and neuroscience has a lot of those; as always, it awaits experimental proof.

One possible implication of this idea, it seems to me, is that short-term memory ought to be pretty conservative, in the sense that it could only store reactivations of existing neural circuits, rather than entirely new patterns of activity. Might it be possible to test that...?

Johnson S, Marro J, and Torres JJ (2013). Robust Short-Term Memory without Synaptic Learning. PLoS ONE, 8(1). PMID: 23349664

Is Medical Science Really 86% True?

Published by Anonymous (not verified) on Fri, 25/01/2013 - 5:39am in

The idea that Most Published Research Findings Are False rocked the world of science when it was proposed in 2005. Since then, however, it's become widely accepted - at least with respect to many kinds of studies in biology, genetics, medicine and psychology.

Now, however, a new analysis from Jager and Leek says things are nowhere near as bad as that after all: only 14% of the medical literature is wrong, not half of it. Phew!

But is this conclusion... falsely positive?

I'm skeptical of this result for two separate reasons. First off, I have problems with the sample of the literature they used: it seems likely to contain only the 'best' results. This is because the authors:

  • only considered the creme-de-la-creme of top-ranked medical journals, which may be more reliable than others.
  • only looked at the Abstracts of the papers, which generally contain the best results in the paper.
  • only included the just over 5000 statistically significant p-values present in the 75,000 Abstracts published. Those papers that put their p-values up front might be more reliable than those that bury them deep in the Results.

In other words, even if it's true that only 14% of the results in these Abstracts were false, the proportion in the medical literature as a whole might be much higher.

Secondly, I have doubts about the statistics. Jager and Leek estimated the proportion of false-positive p-values by assuming that true p-values tend to be low: not just below the arbitrary 0.05 cutoff, but well below it.

It turns out that p-values in these Abstracts strongly cluster around 0, and the conclusion is that most of them are real.

But this depends on the crucial assumption that false-positive p-values behave differently from real ones - specifically, that they are equally likely to fall anywhere from 0 to 0.05.

"if we consider only the P-­values that are less than 0.05, the P-­values for false positives must be distributed uniformly between 0 and 0.05."

The statement is true in theory - by definition, p values should behave in that way assuming the null hypothesis is true. In theory.

But... we have no way of knowing if it's true in practice. It might well not be.

For example, authors tend to put their best p-values in the Abstract. If they have several significant findings below 0.05, they'll likely put the lowest one up front. This works for both true and false positives: if you get p=0.01 and p=0.05, you'll probably highlight the 0.01. Therefore, false positive p values in Abstracts might cluster low, just like true positives.

Alternatively, false p's could also cluster the other way, just below 0.05. This is because running lots of independent comparisons is not the only way to generate false positives. You can also take almost-significant p's and fudge them downwards, for example by excluding 'outliers', or running slightly different statistical tests. You won't get p=0.06 down to p=0.001 by doing that, but you can get it down to p=0.04.
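For what it's worth, that second scenario is easy to demonstrate with a toy simulation (my own sketch, with made-up numbers - nothing to do with Jager and Leek's actual dataset): even a modest amount of nudging destroys the uniformity their model relies on.

```python
# Toy sketch: mild p-hacking breaks the assumption that false-positive
# p-values are uniformly distributed between 0 and 0.05.
import numpy as np

rng = np.random.default_rng(0)
n_papers = 100_000

# Honest null results: one test per paper, no true effect, p ~ Uniform(0, 1).
p_honest = rng.uniform(0, 1, n_papers)

# Mild p-hacking: null results landing just above 0.05 get nudged below it
# (dropping an 'outlier', trying a slightly different test, and so on).
p_hacked = p_honest.copy()
near_miss = (p_hacked > 0.05) & (p_hacked < 0.10)
p_hacked[near_miss] = rng.uniform(0.03, 0.05, size=near_miss.sum())

for name, p in [("honest nulls", p_honest), ("nudged nulls", p_hacked)]:
    sig = p[p < 0.05]
    # Under uniformity, half of the significant false positives should lie
    # below 0.025; nudging piles them up just under 0.05 instead.
    print(f"{name}: {(sig < 0.025).mean():.2f} of significant p-values fall below 0.025")
```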

In this dataset, there's no evidence that p's just below 0.05 were more common. However, in many other sets of scientific papers, clear evidence of such "p hacking" has been found. That reinforces my suspicion that this is an especially 'good' sample.

Anyway, those are just two examples of why false p's might be unevenly distributed; there are plenty of others: 'there are more bad scientific practices in heaven and earth, Horatio, than are dreamt of in your model...'

In summary, although I think the idea of modelling the distribution of true and false findings, and using these models to estimate the proportions of each in a sample, is promising, a lot more work is needed before we can be confident in the results of this approach.

A Scuffle In The Coma Ward

Published by Anonymous (not verified) on Fri, 18/01/2013 - 5:12am in

A couple of months ago, the BBC TV show Panorama covered the work of a team of neurologists (led by Prof. Adrian Owen) who are pioneering the use of fMRI scanning to measure brain activity in coma patients.

The startling claim is that some people who have been considered entirely unconscious for years are actually able to understand speech and respond to requests - not by body movements, but purely on the level of brain activation.

However, not everyone was impressed. A group of doctors swiftly wrote a critical response, published in the British Medical Journal as fMRI for vegetative and minimally conscious states: A more balanced perspective

The Panorama programme... failed to distinguish clearly between vegetative vs. minimally conscious states, and gave the impression that 20% of patients in a vegetative state show cognitive responses on fMRI.

There are important differences between the two states. Patients in a vegetative state have no discernible awareness of self and no cognitive interaction with their environment. Patients in a minimally conscious state show evidence of interaction through behaviours...

The programme presented two patients said to be in a “vegetative state” who showed evidence of cognitive interaction on assessment using fMRI but the clinical methods used for the original diagnosis were not stated. In both cases, family members clearly reported that the patient made positive but inconsistent behavioural responses to questions... one of these patients was filmed responding to a question from his mother by raising his thumb and the other seemed to turn his head purposefully.

So Panorama stands accused of passing off patients who were really minimally conscious, as being in a vegetative state. To see signs of understanding on brain scans from the latter would be truly amazing because it would be the first evidence that they weren't, well, vegetative.

However if they were 'merely' minimally conscious patients, it's not as interesting, because we already knew they were capable of making responses.

Now the Panorama team - and Professor Owen - have replied in a BMJ piece of their own. Given that they're charged with misleading journalism and sloppy medicine, they're understandably a bit snarky:

Just by viewing this one hour documentary the authors felt able to discern that both the patients “said to be in a vegetative state” are “probably” minimally conscious... One of these patients, Scott, has had the same neurologist for more than a decade. Professor Young, who appeared in the film, made it clear that Scott had appeared vegetative in every assessment...

The fact that these authors took Scott’s fleeting movement, shown in the programme, to indicate a purposeful (“minimally conscious”) response shows why it is so important that the diagnosis is made in person, by an experienced neurologist, using internationally agreed criteria.

In other words, they were vegetative, and the critics who said otherwise, on the basis of some TV footage, were being silly.

In other words...it's on.

Turner-Stokes L, Kitzinger J, Gill-Thwaites H, Playford ED, Wade D, Allanson J, Pickard J, & Royal College of Physicians' Prolonged Disorders of Consciousness Guidelines Development Group (2012). fMRI for vegetative and minimally conscious states. BMJ (Clinical research ed.), 345. PMID: 23190911

Walsh F, Simmonds F, Young GB, & Owen AM (2013). Panorama responds to editorial on fMRI for vegetative and minimally conscious states. BMJ (Clinical research ed.), 346 PMID: 23298817

Drunk Rats Could Overturn Neurological Orthodoxy

Published by Anonymous (not verified) on Tue, 15/01/2013 - 9:41am in

A form of brain abnormality long regarded as permanent is, in fact, sometimes reversible, according to an unassuming little paper with big implications.

Here's the key data: some rats were given a lot of alcohol for four days (the "binge"), and then allowed to sober up for a week. Before, during and after their rodent Spring Break, they had brain scans. And these revealed something remarkable - the size of the rats' lateral ventricles increased during the binge, but later returned to normal.

Control rats, given lots of sugar instead of alcohol, did not show these changes.

This is really pretty surprising. The ventricles are simply fluid-filled holes in the brain. Increased ventricular size is generally regarded as a sign that the brain is shrinking - less brain, bigger holes - and if the brain is shrinking that must be because cells are dying or at least getting smaller. So bigger ventricles is bad.

Or so we thought... but this study shows that it might not always be true: alcohol reversibly increases ventricular volume over a timescale of days. It does so, the authors say, essentially by drying brain tissue out; like most things, if you dry the brain out, it gets smaller (and the ventricles get bigger) but when the water comes back to the tissues, it expands again.

As you can see here in Figure 2...

Maybe. I admit that just eyeballing this, it looks more like the ventricles are getting brighter, rather than bigger, but I'm not familiar with the details of water scanning. Maybe some readers will know more about it.

If it's true, this is big - maybe it's not just high doses of alcohol that do this. Maybe other drugs or factors can shrink or expand the ventricles, or even other areas, purely by acting on tissue water regulation, rather than by anything more 'interesting'.

Take the various claims that some psychiatric drugs boost brain volume while others decrease it, just for starters...could they be headed for a watery grave?

Of course, this is in rats - and it might not translate to humans... we need to find out, and I for one am keen to apply for a grant. Here's my draft:

Participants: 8 healthy-livered neuroscientists.
Materials: 1 MRI scanner, 1 crate Jack Daniels.
Methods: Subjects will confer to pick a Designated Operator, who will remain sober. If no volunteers for this role are forthcoming, selection will be randomized by Bottle Spinning. All other participants will consume Jack Daniels ad libitum, and take turns being scanned. Once all Jack Daniels is depleted, participants will continue to be scanned until fully sobered up (defined as when they can successfully spell "amygdalohippocampal").
Instructions to Participants: i) what happens in the magnet, stays in the magnet. ii) If you 'dirty' the scanner, you clean it up. iii) Bottle caps are not MRI safe!

Er... seriously though, someone should check.

Zahr NM, Mayer D, Rohlfing T, Orduna J, Luong R, Sullivan EV, and Pfefferbaum A (2013). A mechanism of rapidly reversible cerebral ventricular enlargement independent of tissue atrophy. Neuropsychopharmacology. PMID: 23306181

DSM-5: A Ruse By Any Other Name...

Published by Anonymous (not verified) on Sun, 13/01/2013 - 8:45pm in

In psychiatry, "a rose is a rose is a rose" as Gertrude Stein put it. That's according to an editorial in the American Journal of Psychiatry called: The Initial Field Trials of DSM-5: New Blooms and Old Thorns.

Like the authors, I was searching for some petal-based puns to start this piece off, but then I found this "flower with an uncanny resemblance to a MONKEY" which I think does the job quite nicely:
Anyway, the editorial is about the upcoming, controversial fifth revision to the Diagnostic and Statistical Manual (DSM) of the American Psychiatric Association (APA).

A great deal has been written about the DSM-5 over the past few years, as "the rough beast, its hour come round at last / Slouches towards Bethlehem to be born" (see, I can reference early-20th-century poetry too).

But now the talk has moved into a new phase, because the results of the DSM-5 'field trials' are finally out. In these studies, the reliability of the new diagnostic criteria for different psychiatric disorders was measured. The new editorial is a summary and discussion of the field trial data.

Two different psychiatrists assessed each patient, and the agreement between their diagnoses was calculated as the kappa statistic, where 0 indicates agreement no better than chance and 1 indicates perfect agreement.
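For readers who haven't met it, kappa is just observed agreement corrected for the agreement you'd expect by chance, given how often each rater uses each diagnosis. Here's a minimal sketch with invented diagnoses (not the field-trial data):

```python
# Cohen's kappa for two raters: (observed - chance) / (1 - chance).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label independently.
    chance = sum(count_a[label] * count_b[label]
                 for label in set(count_a) | set(count_b)) / n ** 2
    return (observed - chance) / (1 - chance)

# Two hypothetical psychiatrists diagnosing ten patients:
dr_x = ["MDD", "MDD", "Bipolar", "MDD", "None", "MDD", "Bipolar", "None", "MDD", "MDD"]
dr_y = ["MDD", "Bipolar", "Bipolar", "None", "None", "MDD", "MDD", "None", "MDD", "Bipolar"]
print(round(cohens_kappa(dr_x, dr_y), 2))  # 0.38 - positive, but a long way from perfect
```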

It turns out that the reliabilities of most DSM-5 disorders were not very good. The majority were around 0.5, which is at best mediocre. These included such pillars of psychiatric diagnosis as schizophrenia, bipolar disorder, and alcoholism.

Others were worse. Depression had a frankly crap kappa of 0.28, and the new 'Mixed Anxiety-Depressive Disorder' came in at -0.004 (sic). It was completely meaningless.

The American Journal editorial was written by a group of senior DSM-5 team members. I'm sure they wanted to write a triumphant presentation of their work, but in fact the tone is subdued, even apologetic in places:

As for most new endeavours, the end results are mixed, with both positive and disappointing findings... Experienced clinicians have severe reservations about the proposed research diagnostic scheme for personality disorder... like its predecessors, DSM-5 does not accomplish all that it intended, but it marks continued progress for many patients for whom the benefits of diagnoses and treatment were previously unrealized.

Remember: this is the journal published by the organization responsible for the DSM and even they don't much like it.

But the real story is even worse. The previous editions of the DSM also conducted field trials. These trials had a system to describe different kappa values: for example, 0.6-0.8 was 'satisfactory'.

However, the new DSM-5 studies used a different, lower threshold. They simply moved the goalposts, deeming lower kappa values to be good. At one point, they wrote that values of above 0.8 would be 'miraculous' and above 0.6 a 'cause for celebration', yet this wasn't the view of previous DSM developers.

The indispensable 1boringoldman blog has a nice graphic showing the results of the DSM-5 trials, with the kappas graded according to the old vs. the new criteria. As you can see, the grass is greener on the new side.

The fact is that the DSM-5 field trial results are worse than the results from DSM-III, the 1980 version that's served mostly unchanged for 30 years (DSM-IV made fairly modest changes). The reliabilities have got worse - despite the editorial's claims of 'continued progress'. It's true that the DSM-5 field trials were a lot bigger and conducted rather differently, but still, it's a serious warning sign.

Finally, there was great variability in the results between different hospitals - in other words the reliability scores were not, themselves, reliable. Some institutions achieved much higher kappa values than others, but it's anyone's guess how they managed to do so.

Still, there's great news: the DSM-5 is just a piece of paper (well, a big stack of them). Any psychiatrist is free to ignore it - as the creator of the more reliable DSM-IV (not III, oops) is now urging them to do.

Freedman R, Lewis DA, Michels R, Pine DS, Schultz SK, Tamminga CA, Gabbard GO, Gau SS, Javitt DC, Oquendo MA, Shrout PE, Vieta E, and Yager J (2013). The Initial Field Trials of DSM-5: New Blooms and Old Thorns. The American Journal of Psychiatry, 170(1), 1-5. PMID: 23288382

Smart People Say They're Less Depressed

Published by Anonymous (not verified) on Sat, 12/01/2013 - 8:26pm in

The questionable validity of self-report measures in psychiatry has been the topic of a few recent posts here at Neuroskeptic.


Now an interesting new study looks at the issue from a new angle, asking: what kind of people report feeling more or less depressed? Korean researchers Kim and colleagues found that intelligence and personality variables were both linked to the tendency to self-rate depression more severely.

The study involved 100 patients who'd previously suffered an episode of depression or mania and who, according to their psychiatrist, had now recovered and were back to normal. Kim et al looked to see what the patients thought about their own mood by getting them to complete the Beck Depression Inventory (BDI) self-report questionnaire.

This was compared to the clinician-administered HAMD scale (another Neuroskeptic favourite), which is meant to be independent of self-report.

It turns out that the BDI and HAMD scores were only weakly correlated - with a coefficient of just r=0.32. That's really not very good considering that, in theory, they both measure the same thing: 'depression'. Many people reported being considerably depressed when their clinicians rated them as fine.

But more interestingly, certain characteristics of the patients were correlated with their self-report/clinician-rating discrepancy. Specifically, patients with a lower IQ, who were more impulsive, and less conscientious, tended to self-report more severe depression.
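The analysis behind that claim is essentially: compute each patient's self-report/clinician discrepancy, then correlate it with traits like IQ. Here's a minimal sketch with synthetic numbers (not Kim et al.'s data; the effect built into it is invented purely for illustration):

```python
# Toy sketch of the discrepancy analysis described above (synthetic data only).
import numpy as np

rng = np.random.default_rng(0)
n = 100

iq = rng.normal(100, 15, n)
hamd = rng.normal(5, 3, n)                       # clinician-rated depression
# Invented assumption: lower-IQ patients self-report relatively more symptoms.
bdi = hamd + 0.2 * (100 - iq) + rng.normal(0, 4, n)

discrepancy = bdi - hamd                         # over-reporting relative to the clinician
r = np.corrcoef(discrepancy, iq)[0, 1]
print(f"Correlation between BDI-HAMD discrepancy and IQ: {r:.2f}")  # negative, by construction
```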

Now, the uncharitable interpretation of these people is that they were just too sloppy to complete the form properly... the uncharitable interpretation of the psychiatrists is that it's their fault for underestimating depression in people less inclined to express themselves in 'the right way'. There's no way to know.

Either way, it's a serious problem because it shows that self-report and observer-report measures of depression aren't just poorly correlated, they're actually measuring different things for different people.

It could be even worse than it appears because the HAMD, although supposedly not a self-report measure, does in fact heavily rely on the patient's cooperation. So a 100% clinician-rated scale might be even further removed from self-report.

Kim EY, Hwang SS, Lee NY, Kim SH, Lee HJ, Kim YS, and Ahn YM (2012). Intelligence, temperament, and personality are related to over- or under-reporting of affective symptoms by patients with euthymic mood disorder. Journal of Affective Disorders. PMID: 23270973
