
RESEARCH REPORT

Explaining the Alluring Influence of Neuroscience Information on Scientific Reasoning

Rebecca E. Rhodes, University of Michigan
Fernando Rodriguez, WestEd, Los Alamitos, California
Priti Shah, University of Michigan

Previous studies have investigated the influence of neuroscience information or images on ratings of scientific evidence quality but have yielded mixed results. We examined the influence of neuroscience information on evaluations of flawed scientific studies after taking into account individual differences in scientific reasoning skills, thinking dispositions, and prior beliefs about a claim. We found that neuroscience information, even though irrelevant, made people believe they had a better understanding of the mechanism underlying a behavioral phenomenon. Neuroscience information had a smaller effect on ratings of article quality and scientist quality. Our study suggests that neuroscience information may provide an illusion of explanatory depth.

Keywords: neuroimaging, reasoning, scientific communication

The New York Times recently featured an article arguing that people are not merely addicted to their iPhones but actually feel love for them in the same way they feel love for a significant other (Lindstrom, 2011). This conclusion was based on an fMRI experiment that found similar levels of insular cortex activity when individuals thought about their significant others and when they thought about their iPhones. Explaining behavior by citing data from neuroscience has become a common trend in the media. The degree to which neuroscience evidence is particularly alluring has been a hot topic in research on scientific reasoning. Specifically, some work has suggested that providing neuroimaging data or irrelevant neuroscience explanations makes readers more likely to believe and be less critical of scientific information (McCabe & Castel, 2008; Weisberg, Keil, Goodstein, Rawson, & Gray, 2008).

Thus, one of the potential consequences of highlighting neuroscience information is that it may persuade individuals to believe a claim to be more valid than what the evidence actually implies.

Although some recent studies have found that neuroscience information or images affect readers’ evaluations of scientific studies, others have not found neuroscience to have a significant impact (see Farah & Hook, 2013). For example, although one study found significant effects of brain images on judgments of reasoning (McCabe & Castel, 2008), another found brain images to be no more influential than other types of images (Gruber & Dickerson, 2012). One possible reason for this discrepancy is that individual factors may influence how, or even whether, neuroscience information affects judgments. For instance, previous research on reasoning finds that individuals’ prior beliefs predict the tendency to engage in belief-biased reasoning. In situations where the presented evidence is incongruent with one’s prior beliefs, individuals may be more motivated to be critical and, therefore, less susceptible to the influence of neuroscience. Additional factors like scientific knowledge and thinking dispositions are also related to how individuals reason about claims, and those with more knowledge and sophisticated thinking styles may be less influenced by the presence of neuroscience information. In this study, we shed light on inconsistencies in the literature by exploring how the influence of neuroscience information may be moderated by individuals’ prior beliefs and their dispositions toward critical thinking.

The Influence of Neuroscience Information on Reasoning

Several recent studies have suggested that neuroscience information can influence the way people evaluate scientific evidence.

This article was published Online First May 12, 2014.

Rebecca E. Rhodes, Department of Psychology, University of Michigan; Fernando Rodriguez, WestEd, Los Alamitos, California; Priti Shah, Department of Psychology, University of Michigan.

Rebecca E. Rhodes, Fernando Rodriguez, and Priti Shah developed the study concept. Rebecca E. Rhodes and Priti Shah developed the study design. Data collection was performed by Rebecca E. Rhodes. Rebecca E. Rhodes performed the data analysis and interpretation under the supervision of Priti Shah. Rebecca E. Rhodes drafted the article, and Fernando Rodriguez and Priti Shah provided critical revisions. All authors approved the final version of the article for submission.

Correspondence concerning this article should be addressed to Rebecca E. Rhodes, Department of Psychology, University of Michigan, 530 Church Street, Ann Arbor, MI 48109-1043. E-mail: [email protected]

This document is copyrighted by the American Psychological Association or one of its allied publishers.

This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.

Journal of Experimental Psychology:

Learning, Memory, and Cognition, © 2014 American Psychological Association, 2014, Vol. 40, No. 5, 1432–1440. 0278-7393/14/$12.00 http://dx.doi.org/10.1037/a0036844

Weisberg et al. (2008) found that including an irrelevant sentence that contained neuroscience information made individuals evaluate poor explanations more favorably. Similarly, McCabe and Castel (2008) found that pictures of brain activations, compared to bar graphs or topographical brain maps, presented with a news article made individuals give higher ratings of the article’s scientific reasoning quality. Similar findings have been found in practical contexts, such as jury decision making (Greene & Cahill, 2012).

However, more recent studies have only found trivial effects of neuroscience (Hook & Farah, 2013; Michael, Newman, Vuorre, Cumming, & Garry, 2013).

Several explanations have been offered for the potential effect of neuroscience. One possibility offered by Weisberg et al. (2008) is that neuroscience information acts as a seductive detail. Literature on seductive details reveals that text that is considered fascinating but irrelevant can actually impair one’s ability to encode the important details of instructional material (Harp & Mayer, 1998; Rey, 2012). People tend to view neuroscience with fascination (Tallis, 2012), and, if it acts as a seductive detail, it is possible that neuroscience may distract people from paying due attention to other important details of a research study. Alternatively, another explanation may be that people prefer a reductionist explanation of complex phenomena (Keil, 2006; McCabe & Castel, 2008; Weisberg et al., 2008). For example, Keil (2006, p. 242) discussed an “illusion of explanatory depth” in which people can be misled into thinking they understand how complex systems work when they can visualize parts of that system. In other words, the concrete nature of neuroscience explanations may lead to a false sense of clarity and understanding, heightening its perceived value. To clarify the mechanism by which neuroscience information influences the evaluation of scientific evidence, the present studies test the possibility that irrelevant neuroscience information impairs the recognition of flawed evidence and/or increases perceived understanding of a phenomenon.

Prior Beliefs and Reasoning

Prior beliefs have a robust influence on our tendency to be critical of new information. Lord, Ross, and Lepper (1979) showed that, when giving participants the same data summaries from empirical studies, those who were initially in favor of the claim rated the study as more convincing evidence than those who disagreed with the claim.
Other research has found similar effects; for example, people who receive preference-inconsistent information are more likely to give sophisticated and skeptical responses than those who receive preference-consistent information (Ditto & Lopez, 1992), and people are more likely to use statistical principles such as base rates and the law of large numbers when the attainment of a desired solution requires that they do so (Ginossar & Trope, 1987; Sanitioso & Kunda, 1991). Dual process explanations suggest that belief-biased reasoning results from activation of the heuristic system when evidence is congruent with personal beliefs and the analytic system when evidence is incongruent (Evans, 2003; Klaczynski, 2000; Kunda, 1990). As such, people with congruent beliefs might be especially likely to use neuroscience information (even if irrelevant) as additional evidence to support their point of view, without considering whether that evidence actually improves the claim. Additionally, given that individuals are more likely to be critical of claims they do not agree with, it is possible that holding incongruent prior beliefs inoculates one against any potential influence of neuroscience.

Although prior beliefs share an important relationship with reasoning, they have been largely ignored in studies looking at the influence of neuroscience information.

Thinking Dispositions and Reasoning

Another factor relevant to understanding how neuroscience information influences reasoning is individual thinking dispositions.

Individuals differ in their ability to be flexible in their thinking and consider evidence that contradicts their beliefs. For example, research suggests that individuals high in actively open-minded thinking are less susceptible to being influenced by their prior beliefs (Stanovich & West, 1997). Additionally, individuals differ in their ability to override automatic responses and engage in deliberative processing, and this ability predicts performance on numerous classic heuristics and biases tasks (Toplak, West, & Stanovich, 2011). Both of these thinking dispositions are related to reasoning abilities and may influence the extent to which individuals are affected by neuroscience information and prior beliefs.

Overview

We conducted two experiments to examine the influence of irrelevant neuroscience information on lay reasoning in the context of public news reports about a scientific finding. Experiment 1 assessed the influence of neuroscience information on news article evaluations among participants who already had prior beliefs (either congruent or incongruent) about the claim, controlling for individual differences in thinking dispositions and knowledge.

Experiment 2 controlled for the number of words in the news article and measured evaluations for individuals with congruent, incongruent, and neutral prior beliefs. To address potential processes by which irrelevant neuroscience information affects evaluations, Experiment 2 tested whether neuroscience distracts people from attending to important details, increases one’s feeling of understanding, or both.

Experiment 1

The goal of Experiment 1 was to examine the influence of neuroscience information on scientific reasoning after controlling for other variables that affect everyday reasoning. We expected that prior beliefs (congruent/incongruent with the claim), knowledge of methodological principles, and thinking dispositions (e.g., the ability to think flexibly and the ability to override prepotent responses) would predict how favorably participants evaluate the evidence and that more consistent effects of neuroscience information would emerge after accounting for the variance associated with these factors.

Method

Participants. The study was conducted online through Amazon’s Mechanical Turk, a crowdsourcing system in which thousands of users can complete tasks for monetary compensation and that has been shown to yield high-quality data (Buhrmester, Kwang, & Gosling, 2011). Participants were 201 adults (110 female; median age = 30 years; range = 18–72) from the United States. Participants were told that they would spend 10–15 min reading a brief news clipping and answering some questions. Participants were then given a URL that randomly assigned them to the neuroscience (n = 103) or control (n = 98) condition. Five participants were removed for indicating that they spent little effort on the tasks in the study. Approximately half of the participants (n = 100 and 110, respectively) had earned a 4-year college degree and reported having taken a statistics class at some point in their education. Additionally, the majority of participants (n = 158, 176, and 183, respectively) reported being at least somewhat familiar with the principles of the scientific method and the importance of random samples and the law of large numbers. Participants were compensated $0.80.

Materials and procedure. We constructed a news-like article that introduced a type of claim often encountered in media reports of scientific studies. The article claimed that listening to music while studying was beneficial for learning and provided evidence to support that claim. Similar to Klaczynski (2000), the evidence consisted of a study with a significant sampling error: the participants in the fictitious study self-selected themselves into the two conditions, creating a clear selection bias (see Figure 1).

Participants in both conditions read the research study description. Similar to the approach used by Weisberg et al. (2008), the neuroscience condition saw the research study description preceded by the following two sentences that contained neuroscience jargon but did not provide any clear explanation for the effect described in the research study:

    Years of neuroscience research have made it clear that listening to music is associated with distinct neural processes. Functional MRI scans reveal that listening to music engages cortical areas involved in music and sound perception, and this activation is thought to be present even while doing other tasks, such as studying or learning new information.

Prior beliefs measure. Before reading the article, participants were first screened about their prior beliefs about the claim and were asked to choose one from the following list: (a) “listening to music has a negative impact on studying/learning,” (b) “listening to music has no impact on studying/learning,” (c) “listening to music has a positive impact on studying/learning,” (d) “I have no expectation about the relationship between listening to music and studying/learning.” Participants who chose option (a) or (b) were classified as having incongruent prior beliefs (n = 98) and those who chose (c) were classified as having congruent prior beliefs (n = 98). Participants who chose option (d) were not eligible for the study (n = 85). Prior beliefs were coded such that a higher score indicated congruent prior beliefs.

Evaluation measures. After reading the article, participants rated the quality of the article (1 = Very poor, 5 = Very good), the quality of the research study (1 = Very poor, 5 = Very good), and how convincing the article was as evidence of the claim (1 = Completely unconvincing, 7 = Completely convincing). Participants were also asked to justify their convincingness ratings in their own words. The first author (blind to condition) and an independent rater (blind to both condition and hypotheses) coded whether these open-ended justifications mentioned the methodological flaw. Interrater agreement was good (kappa = .73, p < .001).
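Interrater agreement of the kind reported here is conventionally quantified with Cohen’s kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch follows; the two rating vectors are illustrative, not the authors’ data.

```python
# Cohen's kappa for two raters assigning binary codes (flaw mentioned: 1/0).
# The example rating vectors below are made up for illustration.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: proportion of items coded identically.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal proportions.
    p_exp = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Perfect agreement yields kappa = 1.0; chance-level overlap pulls it to 0.
print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0
```

Values in the .61–.80 range are commonly labeled “substantial” or “good” agreement, which is consistent with the authors’ description of kappa = .73.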

Thinking disposition measures. After reading the article, participants completed the 41-item Actively Open-Minded Thinking (AOT) scale (Stanovich & West, 1997) and the three-item Cognitive Reflection Test (CRT; Frederick, 2005). High scores on the AOT scale indicate more flexible and open-minded thinking dispositions, and high scores on the CRT reflect an ability to engage in deliberative over automatic processing.

Knowledge measure. Finally, participants completed the following series of questions measuring overall knowledge and familiarity with scientific reasoning principles: (a) “Are you familiar with the general principles of the scientific method?” (1 = Not at all familiar, 5 = Very familiar); (b) “What is the highest grade or year of school you completed?” (1 = Elementary school only, 9 = Advanced graduate work or PhD); (c) “Are you familiar with the idea that, for the purpose of research, one must have a large enough sample size to draw generalizations about the results?” (1 = Not at all familiar, 5 = Very familiar); (d) “Are you familiar with the idea that, for the purpose of research, one must select a random sample of participants from the population of interest?” (1 = Not at all familiar, 5 = Very familiar). To create an overall knowledge score, the scientific method, sample size, random sample, and education variables were all converted to z-scores, and the average was computed.
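The composite described above (standardize each measure, then average within participant) can be sketched as follows; the small response matrix is hypothetical, invented only to make the computation concrete.

```python
# Sketch of the composite knowledge score: convert each measure to a
# z-score across participants, then average the z-scores per participant.
import statistics

def zscores(values):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample SD (n - 1 denominator)
    return [(v - mean) / sd for v in values]

# Hypothetical responses for four participants. Columns follow the paper's
# measures: scientific method (1-5), education (1-9), sample size (1-5),
# random sampling (1-5).
responses = {
    "scientific_method": [5, 3, 4, 2],
    "education":         [6, 4, 7, 3],
    "sample_size":       [5, 2, 4, 3],
    "random_sample":     [4, 3, 5, 2],
}

z_by_measure = {name: zscores(vals) for name, vals in responses.items()}
# Average the four z-scores within each participant.
knowledge = [
    statistics.mean(z_by_measure[name][i] for name in responses)
    for i in range(4)
]
print([round(k, 2) for k in knowledge])
```

Because each z-scored measure has mean zero across participants, the composite also averages to zero; only relative standing is preserved.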

Results

Overall, participants were poor at identifying the methodological flaw, regardless of whether neuroscience information was present. Forty participants (20.4%) mentioned the methodological flaw when explaining their ratings of convincingness, and this percentage did not differ significantly between the neuroscience and control conditions, 17.8% vs. 23.1%, respectively, χ²(1, N = 196) = 0.56, p > .4. Group means for the three evaluation measures can be seen in Table 1.

Figure 1. Research study description seen by all participants. See the online article for the color version of this figure.
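The reported chi-square appears to use Yates’ continuity correction, a standard adjustment for 2 × 2 tables. The sketch below reproduces the reported value from cell counts inferred from the percentages in the text (17.8% of n = 101 neuroscience participants and 23.1% of n = 95 control participants mentioning the flaw, i.e., 18 and 22 people); treat those counts as an assumption, not the authors’ raw data.

```python
# Pearson chi-square with Yates' continuity correction for a 2x2 table.
# Cell counts are inferred from the reported percentages and group sizes.

def yates_chi_square(table):
    """table = [[a, b], [c, d]]; returns the corrected chi-square statistic."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_sums = [a + b, c + d]
    col_sums = [a + c, b + d]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_sums[i] * col_sums[j] / n
            # Yates correction: shrink |O - E| by 0.5 before squaring.
            chi2 += (abs(obs - expected) - 0.5) ** 2 / expected
    return chi2

# Rows: neuroscience, control; columns: mentioned flaw, did not mention.
flaw_table = [[18, 83], [22, 73]]
print(round(yates_chi_square(flaw_table), 2))  # 0.56, matching the report
```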

In order to examine the influence of neuroscience information on reasoning, we conducted a multivariate analysis of variance (MANOVA) on the three evaluation measures. This revealed an overall effect of condition, F(3, 192) = 2.73, p < .05; Hotelling’s T² = .04, partial η² = .04. Neuroscience resulted in higher ratings of article quality, t(194) = 2.63, p < .01, d = 0.37, and ratings of study quality, t(194) = 2.16, p < .05, d = 0.30. Neuroscience did not have a significant effect on convincingness, t(194) = 1.19, p = .23.
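The effect sizes above are Cohen’s d, which can be recovered from group means and standard deviations alone. As a check, the sketch below uses the article-quality ratings from Table 1 (control: M = 3.30, SD = 0.90, n = 95; neuroscience: M = 3.63, SD = 0.84, n = 101); rounding of the table values explains any small difference from the reported d = 0.37.

```python
import math

# Cohen's d from summary statistics, using the pooled standard deviation.

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (m2 - m1) / math.sqrt(pooled_var)

# Article-quality ratings: control vs. neuroscience (values from Table 1).
d = cohens_d(3.30, 0.90, 95, 3.63, 0.84, 101)
print(round(d, 2))  # 0.38 from these rounded table values
```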

To control for individual differences, we constructed three regression models predicting ratings of convincingness, quality of the article, and quality of the study, with knowledge, prior beliefs, AOT, and CRT entered as covariates (Table 2). Condition became a significant predictor of convincingness, study quality ratings, and article quality ratings. In all cases, the presence of neuroscience resulted in more favorable evaluations. As expected, prior beliefs significantly predicted all three types of evaluations. Incongruent beliefs about the claim resulted in less favorable evaluations and congruent beliefs resulted in more favorable evaluations. In addition, participants with more methodological knowledge and higher scores on the AOT and CRT scales consistently gave less favorable evaluations.

There was a significant interaction between CRT and condition for predicting convincingness ratings, B = 0.36, SE(B) = 0.17, p < .05, and a marginal interaction between CRT and condition for predicting study quality, B = 0.19, SE(B) = 0.11, p = .07. To investigate the nature of the interaction for convincingness ratings, we compared the influence of condition on convincingness ratings for individuals with the lowest CRT score (score of 0) and the highest CRT score (score of 3). Among individuals with the lowest CRT score, condition had no effect on ratings (M = 4.78, SD = 1.41 for the control condition; M = 4.61, SD = 1.72 for the neuroscience condition), t(75) = 0.48, p = .63. However, among individuals with the highest CRT score, convincingness ratings were significantly higher for the neuroscience condition (M = 4.0, SD = 1.67) than for the control condition (M = 3.1, SD = 1.33), t(49) = 2.02, p < .05. There was no evidence of interactions between other individual difference measures (prior beliefs, knowledge, and AOT) and the presence of neuroscience (ps ranged from .23 to .79).

Discussion

Experiment 1 showed that neuroscience information had some effect on all three of the evaluation measures after individual differences were taken into account. Individual differences in prior beliefs, methodological knowledge, and thinking dispositions were consistently significant predictors of article evaluations, and the effect sizes were small-to-moderate. Interestingly, this experiment suggested that the influence of neuroscience is relatively independent of individual differences. This was surprising, since individual differences were expected to play a significant role in the way people responded to irrelevant neuroscience information. The only case in which this was true was for convincingness ratings, where there was a significant interaction between condition and CRT score. However, our prediction was that individuals with more sophisticated thinking dispositions would be less influenced by neuroscience information and, if anything, we found some evidence of the opposite: Although higher CRT scores led to lower convincingness ratings for the control condition, convincingness ratings remained relatively high in the neuroscience condition, regardless of CRT score.
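A condition × CRT interaction of this kind corresponds to a moderated regression with a product term. The sketch below fits that model with ordinary least squares on synthetic, noise-free data generated from known coefficients (chosen so the interaction term, 0.36, echoes the reported B; everything else is invented), simply to show the model specification and that the fit recovers the generating coefficients.

```python
import numpy as np

# Moderation model: convincingness ~ condition + CRT + condition x CRT.
# Data are synthetic and noise-free, generated from known coefficients.

rng = np.random.default_rng(0)
n = 200
condition = rng.integers(0, 2, n)   # 0 = control, 1 = neuroscience
crt = rng.integers(0, 4, n)         # CRT score, 0-3

# Generate ratings from a known model (no noise, for illustration only).
convincing = 4.8 - 0.55 * crt + 0.1 * condition + 0.36 * condition * crt

# Design matrix: intercept, condition, CRT, and the interaction term.
X = np.column_stack([np.ones(n), condition, crt, condition * crt])
coefs, *_ = np.linalg.lstsq(X, convincing, rcond=None)
print(np.round(coefs, 2))  # recovers [4.8, 0.1, -0.55, 0.36]
```

The positive interaction coefficient captures the pattern described in the text: the slope of CRT on convincingness is steeply negative in the control condition but much flatter when neuroscience information is present.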

Table 1
Group Means for Evaluation Measures

                              Quality of study    Quality of article    Convincingness
Condition          n          M       SD          M       SD            M       SD
Control            95         2.81    1.07        3.30    0.90          3.94    1.76
Neuroscience       101        3.13    1.04        3.63    0.84          4.24    1.75

Table 2
Regression Models Predicting Evaluation Measures After Controlling for Individual Differences

                             Model 1:                  Model 2:                  Model 3:
                             DV: Convincing            Quality of the article    Quality of the study
Predictor                    B      SE    p     f²     B      SE    p     f²     B      SE    p     f²
Condition: Neuroscience      0.44   0.22  .05   .02    0.36   0.11  .01   .04    0.40   0.13  .01   .04
Prior beliefs                1.26   0.21  .001  .17    0.33   0.11  .01   .04    0.39   0.13  .01   .04
Methodological knowledge    −0.41   1.16  .01   .03   −0.12   0.08  .16   .01   −0.27   0.10  .01   .03
AOT                         −0.46   0.18  .05   .04   −0.20   0.09  .05   .02   −0.40   0.11  .001  .03
CRT                         −0.27   0.09  .01   .04   −0.11   0.05  .05   .03   −0.14   0.05  .05   .06

Note. DV = dependent variable; AOT = Actively Open-Minded Thinking scale (Stanovich & West, 1997); CRT = Cognitive Reflection Test (Frederick, 2005). Model 1 fit: F(5, 188) = 14.42, p < .001, R² = .27. Model 2 fit: F(5, 188) = 6.86, p < .001, R² = .15. Model 3 fit: F(5, 188) = 11.46, p < .001, R² = .23. Cohen’s f² is a measure of local effect size and represents the proportion of variance uniquely accounted for by one predictor, above and beyond all other predictors (Cohen, 1988). Cutoff values for small, medium, and large effects are .02, .15, and .35, respectively.

Experiment 1 also showed that participants were relatively poor at identifying the methodological flaw, as less than a quarter cited the selection bias in their justification of their convincingness rating. Performance was equally poor for both conditions, which

provides some preliminary evidence against the possibility that neuroscience simply distracts people from attending to the methodological details of a research study. We test this distraction hypothesis more directly in Experiment 2.

It should be noted that, consistent with a recent meta-analysis (Michael et al., 2013), the effect sizes of neuroscience were small.

However, it is possible that these effect sizes were a result of our sample selection. We limited our sample to people who were already convinced one way or the other about the effect of music on studying, expecting to find that the influence of neuroscience would be exaggerated in these groups. We found no evidence that prior beliefs exacerbate the influence of neuroscience; instead, it is possible that people with strong prior beliefs are simply less influenced by additional information. As such, the effects of neuroscience may be more pronounced among participants who have no prior expectation of how listening to music should affect studying/learning. Experiment 2 addresses this issue by including participants with congruent, incongruent, or neutral prior beliefs about the claim.

Finally, a limitation of Experiment 1 was that the article in the neuroscience condition contained more words, so it is unclear whether an increase in ratings was due to the neuroscience jargon or simply because more information was present. A proper control for word count is needed to rule out this possibility. Additionally, regarding the lack of interactions between condition and individual differences, it is possible that we neglected to include other relevant individual difference measures in the model. For example, it is possible that the influence of neuroscience may be present only when people do not have adequate knowledge about the brain. We address this possibility in Experiment 2.

Experiment 2

Experiment 2 had four objectives: (a) eliminate the potential confound of article word count, (b) examine whether the effect of irrelevant neuroscience is greater among participants with neutral prior beliefs compared to participants with congruent and incongruent prior beliefs, (c) examine whether irrelevant neuroscience distracts people from attending to the details of the research methodology, and (d) test the possibility that irrelevant neuroscience inflates one’s feeling of understanding a behavioral phenomenon.

Method

Participants. Four hundred U.S. participants were recruited from Amazon’s Mechanical Turk (188 female; median age = 31 years; range = 18–72). Six participants were excluded for indicating that they spent little effort on the study, and five participants were excluded for having completed a previous version of the study; thus, all participants were seeing the stimuli for the first time. Approximately half of the participants (48.5%) reported that they had attained a 4-year college degree. Additionally, 54% of participants reported having taken at least one basic statistics or research methodology course, and most people reported being familiar with the principles of the scientific method, random sampling, and the importance of sample size (79.6%, 91.2%, and 94.0%, respectively).

Materials and procedure. The protocol for Experiment 2 was similar to Experiment 1. Participants were randomly assigned to the neuroscience (n = 204) or control (n = 185) condition and read a research study summary. Preceding the research summary was a paragraph that either contained neuroscience jargon (neuroscience condition) or described the popularity of listening to music while studying (control condition). The neuroscience paragraph was the same as in Experiment 1. Participants in the control condition read the following paragraph:

    Although some people prefer to work in silence, many people opt to listen to music while working or studying. In fact, due to the increased mobile access to music, a brief glimpse into a library or coffee shop will reveal dozens of individuals poring over their laptops and books with earphones wedged into their ears.

In contrast to the articles used in Experiment 1, both articles contained the same number of words (220). Participants were compensated $1.10.

Prior beliefs measure. We assessed prior beliefs at the beginning of the study using the same questions from Experiment 1.

Participants were categorized as having congruent (n = 99), incongruent (n = 98), or neutral (n = 192) beliefs about the claim that listening to music improves studying/learning. We predicted that neuroscience would be most influential for participants with neutral prior beliefs.

Evaluation measures. In addition to rating study quality, article quality, and convincingness of the article, participants also rated the quality of the scientist (“Please rate the quality of the scientist who conducted the study described in the article”) and how well they understood why music might have an influence on learning (“On a scale of 0 to 100, how well did this article help you understand WHY music may have an impact on learning/studying?”). If participants have a preference for reductionist explanations, we predicted that neuroscience language, even if irrelevant, would make them think they have a better understanding of the phenomenon. The ratings for all five dependent variables were done on a sliding scale that ranged from 0 to 100% to allow more flexibility in responses.

Participants also justified their convincingness ratings in their own words, and responses were coded according to whether the methodological flaw was mentioned, in the same manner as in Experiment 1. Interrater agreement was excellent (kappa = .94, p < .001).

Individual differences. We again collected measures of AOT, CRT, and methodological knowledge. Additionally, because the influence of neuroscience may depend on how much knowledge of the brain one has, we included five questions about neuroanatomy, collected from introductory psychology textbook companion websites (Morris & Maisto, 2002; Schacter, Gilbert, & Wegner, 2009).

The questions used are listed in the Appendix. The brain knowledge score was computed by adding up the number of questions they answered correctly (M = 2.37, SD = 1.26).

Recall measure. To test the possibility that neuroscience information distracts people from recalling the details of the study, we measured participants’ free recall of the study they read about at the beginning of the experiment, following a typical protocol used in the seductive details literature (see Harp & Mayer, 1998). We identified four main idea units in the study description: (a) participants in the study volunteered/self-selected into the conditions, (b) 75% of the class was in the music listening condition, (c) the students who listened to music received higher grades, and (d) the conclusion that

listening to music improves studying. Participants received a “1” for each main idea unit they were able to recall, and points were summed for a total possible score of 0–4. A primary rater (blind to the condition) coded all responses, and reliability was measured by having a second rater (blind to the hypotheses and condition) code 75% of the responses (kappa = 0.72, p < .001).

Results

Overall, 34% of participants identified the methodological flaw in the research study, and this percentage did not differ between the neuroscience and control conditions (M = 32.8% and M = 37.5%, respectively), χ²(1, N = 389) = 0.52, p > .4. The mean evaluation scores for each condition can be seen in Table 3. A MANOVA on the five evaluation measures revealed a significant overall effect of condition, F(5, 382) = 20.49, p < .001; Hotelling’s T² = .26, partial η² = .21. There were significant effects of neuroscience on ratings of scientist quality, t(386) = 4.59, p < .001, d = 0.46, and self-assessed understanding of the mechanism by which music might impact learning, t(386) = 9.27, p < .001, d = 0.94. In other words, the presence of neuroscience information increased perceived understanding of the mechanism underlying the effect. Neuroscience also had a marginal effect on ratings of article quality, t(386) = 1.82, p = .06, d = 0.18, and no effect on ratings of study quality or convincingness, t(386) = 1.01, p = .31, and t(386) = 1.23, p = .21, respectively.

As in Experiment 1, we constructed separate linear regression models predicting each of the evaluation measures and controlling for differences in AOT, CRT, prior beliefs, methodological knowledge, and brain knowledge. The pattern remained the same; the presence of neuroscience predicted ratings of article quality, scientist quality, and mechanistic understanding. In all cases, the addition of neuroscience information increased ratings, and the effect size was medium-to-large for the mechanistic understanding variable. Condition was not a significant predictor of ratings of study quality or convincingness.

The full models for these measures are presented in Table 4.

As expected, being more knowledgeable about methodological principles and having more sophisticated thinking dispositions (measured by the AOT and CRT) resulted in less favorable evaluations on all measures. Prior beliefs were positively associated with ratings of convincingness, such that having congruent beliefs led to higher ratings. Surprisingly, knowledge of the brain did not predict evaluations. Additionally, there were no significant interactions between condition and brain knowledge, methodological knowledge, AOT, or CRT (ps ranged from .17 to .88).

To provide an overall test of our hypothesis that the influence of neuroscience is larger for those with neutral prior beliefs, we first recoded the prior beliefs measure, assigning a "0" to participants with neutral prior beliefs and a "1" to participants with either congruent or incongruent beliefs. We ran a MANOVA on all five evaluation measures with condition and prior beliefs entered as between-subjects variables and found no significant interaction between beliefs and condition, F(5, 380) = 0.65, p = .66; Hotelling's T² = .01, partial η² = .01. Looking more specifically at all three prior beliefs groups, Figure 2 shows the ratings of the five evaluation measures broken down by each beliefs group: congruent, incongruent, and neutral. As Figure 2 illustrates, the effect of neuroscience is substantial for ratings of scientist quality and mechanistic understanding, regardless of prior beliefs.

To assess whether participants in the neuroscience condition were distracted from the methodological details of the study, we compared recall of the methodological flaw (that participants self-selected into the study conditions) across both conditions. There was not a significant difference in recall of the flaw between the control (M = 0.42, SD = 0.49) and neuroscience (M = 0.37, SD = 0.48) conditions, t(386) = 1.03, p = .29. We also looked more generally at the total number of main idea units recalled in both conditions, and again found no difference between the control (M = 2.04, SD = 1.16) and neuroscience (M = 1.90, SD = 1.06) conditions, t(386) = 1.11, p = .26. These results suggest that neuroscience did not interfere with participants' ability to recall the key points of the research study.

Discussion

Experiment 2 showed a general effect of neuroscience information, even after controlling for the number of words in the articles.

In particular, neuroscience increased ratings of scientist quality by 10% and ratings of mechanistic understanding by 26%. Similar to Experiment 1, we found that individual differences such as prior beliefs, knowledge, and thinking dispositions predicted evaluations. However, we again found no evidence to suggest that one's prior beliefs about a claim (whether congruent, incongruent, or neutral) moderate the effect of neuroscience.

Somewhat surprisingly, brain knowledge did not predict evaluations. It is possible that the questions we selected to measure brain knowledge were too difficult for participants, although we reasoned that many people would have had exposure to these brain knowledge questions in any introductory biology or psychology class. Nevertheless, future studies could explore different measures of brain knowledge. Experiment 2 also replicated the finding from Experiment 1 that participants in both the neuroscience and control conditions performed equally poorly at identifying the methodological flaw in the study, suggesting that neuroscience is not particularly distracting. Experiment 2 provided further evidence against the distraction hypothesis by showing that neuroscience did not affect the ability to recall the details of the study.

Table 3
Group Means for Evaluation Measures

                        Quality of      Quality of                      Quality of      Understanding
                          study           article       Convincing       scientist      of mechanism
Condition       n       M      SD       M      SD       M      SD       M      SD       M      SD
Control        185    54.11  23.17    67.74  17.53    53.42  22.30    44.04  23.78    20.25  25.82
Neuroscience   203    56.44  21.95    70.90  16.61    56.17  21.37    54.41  20.59    46.48  29.51

This document is copyrighted by the American Psychological Association or one of its allied publishers.

General Discussion

The present study investigated the influence of neuroscience information on evaluations of scientific evidence after controlling for individual differences. Overall, we found an effect of irrelevant neuroscience on evaluations that is small for subjective ratings such as article quality, study quality, and convincingness of the evidence. However, we did find moderate-to-large effects for ratings of scientist quality and mechanistic understanding.

The small effects of neuroscience information on subjective ratings are consistent with recent literature (Farah & Hook, 2013; Hook & Farah, 2013; Michael et al., 2013), but the large effect on understanding of the mechanism represents a novel finding. This self-assessed understanding of the mechanism provides insight into how irrelevant neuroscience information may influence the way people think about the relevant variables in the study. Specifically, the indication that participants' understanding improved suggests that they are making causal associations between listening to music and quality of studying; as such, the fact that neuroscience information increased these ratings of understanding suggests that it may be influencing participants' understanding of causation in the experiment. Future studies should further explore this effect on causal understanding.

Interestingly, we found no evidence to suggest that the influence of neuroscience interacts significantly with individual difference measures. Consistent with research on belief-biased reasoning (Klaczynski, 2000; Lord, Ross, & Lepper, 1979), participants with congruent prior beliefs were more convinced by the evidence than participants with incongruent prior beliefs, regardless of whether neuroscience was present. We reasoned that having neutral prior beliefs would allow for more movement in the evaluations and a potentially bigger effect size of neuroscience; however, Experiment 2 showed that this was not the case. Additionally, although methodological knowledge and thinking dispositions often predicted evaluations, we found no evidence to suggest that having more knowledge or more sophisticated thinking styles mitigated the influence of irrelevant neuroscience information.

Another goal of the present study was to investigate potential mechanisms for the effect of neuroscience information on reasoning and critical thinking. We tested the possibility that neuroscience behaves as a seductive detail (cf. Rey, 2012) but found little evidence to suggest that neuroscience itself distracts people from paying attention to the methodological aspects of the article. A minority of participants were able to identify the methodological flaw in the article, regardless of condition; additionally, performance on the recall measure was comparable for both conditions. Instead, we found evidence to suggest that the reductionist nature of neuroscience information may mislead people into thinking they are getting more information than they actually are. Although the neuroscience information in the present study was not particularly relevant or informative, participants indicated that it increased their understanding of the relationship between listening to music and studying. It is also worth noting that this effect may not be specific to neuroscience. For example, Eriksson (2012) demonstrated that meaningless mathematical equations made journal abstracts more likely to be accepted.

Taken together, these results support the notion that reductionist information, even if irrelevant or not fully understood, appears to provide valuable information and may even convince people that they understand a phenomenon better.

Table 4
Regression Models Predicting Evaluation Measures After Controlling for Individual Differences

Model 1 (DV: Convincing); fit: F(6, 378) = 9.18, p < .001, R² = .12
  Predictor                   B      SE     p      f²
  Condition: Neuroscience    2.66   2.10   .20    .00
  Prior beliefs              4.54   1.48   .01    .02
  Methodological knowledge   6.55   1.47   .001   .04
  AOT                        3.32   1.76   .06    .01
  CRT                        2.32   0.88   .01    .01
  Brain knowledge            0.29   0.83   .72    .00

Model 2 (DV: Quality of the article); fit: F(6, 378) = 8.40, p < .001, R² = .11
  Predictor                   B      SE     p      f²
  Condition: Neuroscience    3.08   1.65   .06    .01
  Prior beliefs              0.67   1.17   .56    .00
  Methodological knowledge   3.94   1.16   .001   .02
  AOT                        4.55   1.39   .01    .02
  CRT                        1.92   0.70   .01    .01
  Brain knowledge            0.65   0.66   .32    .00

Model 3 (DV: Quality of the study); fit: F(6, 378) = 10.12, p < .001, R² = .13
  Predictor                   B      SE     p      f²
  Condition: Neuroscience    2.20   2.15   .30    .00
  Prior beliefs              2.37   1.52   .12    .00
  Methodological knowledge   7.05   1.51   .001   .06
  AOT                        5.54   1.81   .01    .03
  CRT                        2.27   0.91   .05    .02
  Brain knowledge            0.08   0.86   .91    .00

Model 4 (DV: Quality of the scientist); fit: F(6, 378) = 12.18, p < .001, R² = .16
  Predictor                   B      SE     p      f²
  Condition: Neuroscience   10.29   2.14   .001   .05
  Prior beliefs              2.39   1.51   .11    .01
  Methodological knowledge   7.66   1.50   .001   .07
  AOT                        4.69   1.80   .01    .02
  CRT                        0.89   0.90   .32    .00
  Brain knowledge            0.01   0.85   .98    .00

Model 5 (DV: Mechanistic understanding); fit: F(6, 378) = 29.49, p < .001, R² = .31
  Predictor                   B      SE     p      f²
  Condition: Neuroscience   26.12   2.60   .001   .26
  Prior beliefs              1.44   1.85   .43    .00
  Methodological knowledge   5.68   1.84   .01    .01
  AOT                       12.44   2.20   .001   .07
  CRT                        2.19   1.10   .05    .00
  Brain knowledge            1.01   1.04   .29    .00

Note. DV = dependent variable; AOT = Actively Open-Minded Thinking scale (Stanovich & West, 1997); CRT = Cognitive Reflection Test (Frederick, 2005).

Given the popularity of neuroimaging and the attention it receives in the press, it is important to understand how people weight this evidence and how it may or may not affect people's decisions. While the effect of neuroscience is small in cases of subjective evaluations, its effect on the mechanistic understanding of a phenomenon is compelling. Future research should continue to examine the extent to which this is a neuroscience-specific effect or, more generally, an effect of any kind of concrete or reductionist explanation.

References

Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3–5. doi:10.1177/1745691610393980
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.
Ditto, P. H., & Lopez, D. F. (1992). Motivated skepticism: Use of differential decision criteria for preferred and nonpreferred conclusions. Journal of Personality and Social Psychology, 63, 568–584. doi:10.1037/0022-3514.63.4.568
Eriksson, K. (2012). The nonsense math effect. Judgment and Decision Making, 7, 746–749.
Evans, J. St. B. T. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7, 454–459. doi:10.1016/j.tics.2003.08.012
Farah, M. J., & Hook, C. J. (2013). The seductive allure of "seductive allure." Perspectives on Psychological Science, 8, 88–90. doi:10.1177/1745691612469035
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19, 25–42. doi:10.1257/089533005775196732
Ginossar, Z., & Trope, Y. (1987). Problem solving in judgment under uncertainty. Journal of Personality and Social Psychology, 52, 464–474. doi:10.1037/0022-3514.52.3.464
Greene, E., & Cahill, B. S. (2012). Effects of neuroimaging evidence on mock juror decision making. Behavioral Sciences & the Law, 30, 280–296. doi:10.1002/bsl.1993
Gruber, D., & Dickerson, J. A. (2012). Persuasive images in popular science: Testing judgments of scientific reasoning and credibility. Public Understanding of Science, 21, 938–948. doi:10.1177/0963662512454072
Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage: A theory of cognitive interest in science learning. Journal of Educational Psychology, 90, 414–434. doi:10.1037/0022-0663.90.3.414
Hook, C. J., & Farah, M. J. (2013). Look again: Effects of brain images and mind–brain dualism on lay evaluations of research. Journal of Cognitive Neuroscience, 25, 1397–1405. doi:10.1162/jocn_a_00407
Keil, F. C. (2006). Explanation and understanding. Annual Review of Psychology, 57, 227–254. doi:10.1146/annurev.psych.57.102904.190100
Klaczynski, P. A. (2000). Motivated scientific reasoning biases, epistemological beliefs, and theory polarization: A two-process approach to adolescent cognition. Child Development, 71, 1347–1366. doi:10.1111/1467-8624.00232
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480–498. doi:10.1037/0033-2909.108.3.480

Figure 2. Means of each evaluation measure, by condition and prior beliefs subgroup. CB = Congruent prior beliefs. IB = Incongruent prior beliefs. NB = Neutral prior beliefs. Bars represent standard errors of the mean.

Lindstrom, M. (2011, September 30). You love your iPhone. Literally. The New York Times. Retrieved from http://www.nytimes.com/pages/opinion/index.html
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098–2109. doi:10.1037/0022-3514.37.11.2098
McCabe, D. P., & Castel, A. D. (2008). Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition, 107, 343–352. doi:10.1016/j.cognition.2007.07.017
Michael, R. B., Newman, E. J., Vuorre, M., Cumming, G., & Garry, M. (2013). On the (non)persuasive power of a brain image. Psychonomic Bulletin & Review, 20, 720–725. doi:10.3758/s13423-013-0391-6
Morris, C. G., & Maisto, A. A. (2002). Psychology: An introduction. Upper Saddle River, NJ: Prentice Hall. Retrieved from http://cwx.prenhall.com/bookbind/pubbooks/morris5/
Rey, G. D. (2012). A review of research and a meta-analysis of the seductive detail effect. Educational Research Review, 7, 216–237. doi:10.1016/j.edurev.2012.05.003
Sanitioso, R., & Kunda, Z. (1991). Ducking the collection of costly evidence: Motivated use of statistical heuristics. Journal of Behavioral Decision Making, 4, 161–176. doi:10.1002/bdm.3960040302
Schacter, D. L., Gilbert, D. T., & Wegner, D. M. (2009). Introducing psychology. New York, NY: Worth.
Stanovich, K. E., & West, R. F. (1997). Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology, 89, 342–357. doi:10.1037/0022-0663.89.2.342
Tallis, R. (2012). Aping mankind: Neuromania, Darwinitis and the misrepresentation of humanity. London, England: Acumen. doi:10.1017/UPO9781844652747
Toplak, M. E., West, R. F., & Stanovich, K. E. (2011). The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition, 39, 1275–1289. doi:10.3758/s13421-011-0104-1
Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20, 470–477. doi:10.1162/jocn.2008.20040

Appendix
Brain Knowledge Questions

1. Functional neuroimaging (fMRI) directly measures the: [a] (Source: Schacter, Gilbert, & Wegner, 2009)
   a) Neural activity of the brain during a specific task
   b) Activity of oxygenated hemoglobin in the blood in the brain and body
   c) Electrical activity in the brain
   d) Different function of the brain's two hemispheres

2. The hippocampus, amygdala, and hypothalamus are all part of the:
   a) limbic system  b) brainstem  c) cerebral cortex  d) association areas  e) somatosensory cortex

3. The occipital lobe receives and interprets __________________ information. (Source: Morris & Maisto, 2002)
   a) pain  b) auditory  c) visual  d) bodily position

4. What structure connects the two hemispheres of the brain and coordinates their activities?
   a) amygdala  b) reticular formation  c) corpus callosum  d) hippocampus

5. A single long fiber extending from the cell body that carries outgoing messages is called a/an:
   a) axon  b) nerve  c) terminal  d) dendrite

[a] Both (a) and (b) were accepted as correct answers.

Received June 30, 2013
Revision received March 13, 2014
Accepted March 17, 2014
