Review Chapter 5, "Searching the Evidence"; Chapter 6, "Evidence Appraisal: Research"; and Chapter 7, "Evidence Appraisal: Nonresearch" in the Johns Hopkins Evidence-Based Practice for Nurses and Healthcare Professionals.

6 Evidence Appraisal: Research

Evidence-rating schemes consider scientific evidence (also referred to as research) to be the strongest form of evidence. The underlying assumption is that recommendations drawn from higher levels of high-quality evidence are more likely to represent best practices. While comparatively stronger than nonresearch evidence, the strength of research (scientific) evidence varies between studies depending on the methods used and the quality of reporting by the researchers. The EBP team begins its evidence search in the hope of finding the highest level of scientific evidence available on the topic of interest.

This chapter provides:
■ An overview of the various types of research approaches, designs, and methods
■ Guidance on how to appraise the level and quality of research evidence to determine its overall strength
■ Tips and tools for reading and evaluating research evidence

Dang, Deborah, et al. Johns Hopkins Evidence-Based Practice for Nurses and Healthcare Professionals, Fourth Edition, Sigma Theta Tau International, 2021.

ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/ucf/detail.action?docID=6677828.

Created from ucf on 2022-09-10 23:04:22.

Copyright © 2021. Sigma Theta Tau International. All rights reserved.

Types of Scientific Research

The goal of research is to produce new knowledge, by systematically following the scientific method, that can be generalized to a wider population. Research approaches are the general frameworks researchers use to structure a study and to collect and analyze data (Polit & Beck, 2017). They fall into three broad categories: quantitative, qualitative, and mixed methods. Researchers use these approaches across the spectrum of research designs (e.g., experimental, quasi-experimental, descriptive), which primarily dictate the research methods used to gather, analyze, interpret, and validate data during the study. The chosen technique depends on the research question as well as the investigators' background, worldviews (paradigms), and goals (Polit & Beck, 2017).

Quantitative Research

Most scientific disciplines predominantly use a quantitative research approach to examine relationships among variables. This approach aims to establish laws of behavior and phenomena that are generalizable across different settings and contexts. It relies on objective and precise data collection (such as observation, surveys, interviews, documents, audiovisual materials, or polls) to measure quantity or amount. Through numerical comparisons and statistical inference, data analysis allows researchers to describe, predict, test hypotheses, classify features, and construct models and figures to explain what they observe.

Qualitative Research

Qualitative research approaches, rooted in sociology and anthropology, seek to explore the meaning individuals, groups, and cultures attribute to a social or human problem (Creswell & Creswell, 2018). The researcher therefore studies people and groups in their natural setting and obtains data from the informants' perspective. Using a systematic, subjective approach to describe life experiences, qualitative researchers serve as the primary data-collection instrument. By analyzing data, they attempt to make sense of, or interpret, phenomena in terms of the meanings people bring to them. In contrast to quantitative research, qualitative studies do not seek to provide representative data but rather information saturation.


Mixed-Methods Research

A mixed-methods research approach intentionally incorporates, or "mixes," both quantitative and qualitative designs and data in a single study (Creswell & Creswell, 2018). Researchers use mixed methods to understand contradictions between quantitative and qualitative findings; assess complex interventions; address complex social issues; explore diverse perspectives; uncover relationships; and, in multidisciplinary research, focus on a substantive field, such as childhood depression.

Qualitative and quantitative research designs are complementary. However, while a quantitative study can include qualitative data, such as an open-ended survey question, it is not automatically considered mixed methods, because the design sought to address the research questions from a quantitative perspective (how many, how much, etc.). Likewise, a qualitative study may gather quantitative data, such as demographics, but only to provide further insight into the qualitative analysis. The research problem and question drive the decision to use a true mixed-methods approach, which leverages the strengths of both quantitative and qualitative designs to provide a more in-depth understanding than either would provide independently. If quantitative or qualitative designs alone would provide sufficient data, then mixed methods are unnecessary.

Types of Research Designs

Research problems and questions guide the selection of a research approach (qualitative, quantitative, or mixed methods), and within each approach there are different types of inquiry, referred to as research designs (Creswell & Creswell, 2018). A research design provides specific direction for the methods used in the conduct of the actual study. Additionally, studies can take the form of single research studies that create new data (primary research), summaries and analyses of existing data for an intervention or outcome of interest (secondary research), or summaries of multiple studies.


Single Research Studies

The evidence-based practice (EBP) team will typically review evidence from single research studies, or primary research. Primary research comprises data collected to answer one or more research questions or hypotheses. Reviewers may also find secondary analyses that use data from primary studies to ask different questions.

Single research studies fall into three broad categories: true experimental, quasi-experimental, and nonexperimental (observational).

Table 6.1 outlines the quantitative research designs, aims, distinctive features, and types of study methods frequently used in the social sciences.

Table 6.1 Research Design, Aim, Distinctive Features, and Types of Study Methods

True experimental
  Aim: Establish the existence of a cause-and-effect relationship between an intervention and an outcome.
  Features: ■ Manipulation of a variable in the form of an intervention ■ Control group ■ Random assignment to the intervention or control group
  Study methods: ■ Randomized controlled trial ■ Posttest-only with randomization ■ Pre- and posttest with randomization ■ Solomon 4 group

Quasi-experimental
  Aim: Estimate the causal relationship between an intervention and an outcome without randomization.
  Features: ■ An intervention ■ Nonrandom assignment to an intervention group; may also lack a control group
  Study methods: ■ Nonequivalent groups (not randomized): control (comparison) group posttest only; pretest–posttest ■ One group (not randomized): posttest only; pretest–posttest ■ Interrupted time series

Nonexperimental
  Aim: Measure one or more variables as they naturally occur, without manipulation.
  Features: ■ May or may not have an intervention ■ No random assignment to a group ■ No control group
  Study methods: ■ Descriptive ■ Correlational ■ Qualitative

Univariate
  Aim: Answer a research question about one variable, or describe one characteristic or attribute that varies from observation to observation.
  Features: ■ No attempt to relate variables to each other ■ Variables are observed as they naturally occur
  Study methods: ■ Exploratory ■ Survey ■ Interview

Source: Creswell & Creswell, 2018

True Experimental Designs (Level I Evidence)

True experimental studies use the traditional scientific method: independent and dependent variables, pretest and posttest, and experimental and control groups. One group (the experimental group) is exposed to an intervention; the other (the control group) is not. This design allows the highest level of confidence in establishing causal relationships between two or more variables because the variables are observed under controlled conditions (Polit & Beck, 2017). True experiments are defined by the use of randomization.

The most commonly recognized true experimental method is the randomized controlled trial, which aims to reduce certain sources of bias when testing the effectiveness of new treatments and drugs. Other true experimental methods that require randomization are listed in Table 6.1.


The Solomon 4 group design is frequently used in psychology and sometimes in the social sciences and medicine. It is used specifically to assess whether taking a pretest influences scores on a posttest.

A true experimental study has three distinctive criteria: randomization, manipulation, and control.

Randomization occurs when the researcher assigns subjects to a control or experimental group arbitrarily, similar to a roll of the dice. This process ensures that each potential subject who meets the inclusion criteria has the same probability of selection for the experiment. The goal is that people in the experimental group and the control group will be generally similar, except for the experimental intervention or treatment. This matters because subjects who take part in an experiment serve as representatives of the population, and as little bias as possible should influence who does and does not receive the intervention.

Manipulation occurs when the researcher implements an intervention with at least some of the subjects. In experimental research, some subjects (the experimental group) receive an intervention and other subjects do not (the control group). The experimental intervention is the independent variable, the action the researcher takes (e.g., applying low-level heat therapy) to try to change the dependent variable (e.g., the experience of low back pain).

Control usually refers to the introduction of a control or comparison group, such as a group of subjects to which the experimental intervention is not applied. The goal is to compare the effect of no intervention on the dependent variable in the control group against the experimental intervention's effect on the dependent variable in the experimental group. Control can be achieved through various approaches, including the use of placebos, varying doses of the intervention between groups, or providing alternative interventions (Polit & Beck, 2017).
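As an illustration of simple 1:1 randomization, the sketch below shuffles a pool of eligible subjects and splits it into experimental and control arms. This is a hypothetical minimal example; real trials typically use more elaborate schemes such as blocked or stratified randomization.

```python
import random

def randomize(subjects, seed=None):
    """Randomly assign eligible subjects to experimental or control arms.

    Simple 1:1 randomization: shuffle the eligible subjects, then split
    the shuffled list in half. Every subject has the same probability of
    ending up in either arm.
    """
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    mid = len(pool) // 2
    return pool[:mid], pool[mid:]  # (experimental arm, control arm)

# 150 hypothetical subject IDs, as in a trial enrolling 150 people
experimental, control = randomize(range(1, 151), seed=42)
print(len(experimental), len(control))  # 75 75
```

Passing a seed makes the allocation reproducible for auditing; omitting it gives a fresh random allocation each run.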


Quasi-Experimental Designs (Level II Evidence)

Quasi-experimental studies are similar to experimental studies in that they try to show that an intervention causes a particular outcome. Quasi-experimental studies always include manipulation of the independent variable (the intervention).

They differ from true experimental studies because it is not always possible to randomize subjects, and they may or may not have a control group. For example, an investigator can assign the intervention (manipulation) to one of two groups (e.g., two medical units): one unit volunteers to pilot a remote video fall-reminder system (intervention group) and is compared with the other unit, which continues delivering the standard of care (control group). Although the preexisting units were not randomly assigned, they can be used to study the effectiveness of the remote video reminder system.

In cases where a particular intervention is known to be effective, withholding that intervention would be unethical. In the same vein, it may not be feasible to randomize patients or geographic locations, or it may not be practical to perform a study that requires more human, financial, or material resources than are available.

Examples of important and frequently used quasi-experimental designs that an EBP team may encounter during its search include nonequivalent control (comparison) group, one-group posttest-only, one-group pretest–posttest, and interrupted time-series designs. EBP team members should refer to a research text when they encounter unfamiliar study designs.

Example: Experimental Randomized Controlled Trial
Rahmani et al. (2020) conducted a randomized controlled trial to investigate the impact of Johnson's Behavioral System Model on the health of heart failure patients. They randomized 150 people to a control group and an intervention group. The intervention group received care based on findings from a behavioral subsystem assessment tool, and the control group received care based on their worst subsystem scores over a two-week period. The researchers found that the intervention group showed significant improvement in six of the eight subsystems compared with the control group.


Nonexperimental Designs (Level III Evidence)

When reviewing evidence related to healthcare questions, EBP teams will often find studies of naturally occurring phenomena (groups, treatments, and individuals), situations, or descriptions of the relationship between two or more variables. These studies are nonexperimental because there is no interference by the researcher: there is no manipulation of the independent variable, no random assignment of participants to a control or treatment group, or both. Additionally, the focus of attention is the validity of measurements (e.g., physiologic values, survey tools) rather than the validity of effects (e.g., lung cancer is caused by smoking).

Nonexperimental studies fall into three broad categories: descriptive, correlational, and qualitative univariate (Polit & Beck, 2017). They can simultaneously be characterized from a time perspective. In retrospective studies, the outcome has already occurred (or, in the case of controls, has not occurred) in each subject or group before they are asked to enroll in the study; the investigator then collects data either from charts and records or by obtaining recall information from the subjects or groups. In contrast, in prospective studies the outcome has not occurred at the time the study begins, and the investigator follows subjects or groups over a specified period to determine the occurrence of outcomes. In cross-sectional studies, researchers collect data from many different individuals at a single point in time and observe the variables of interest without influencing them. Longitudinal studies look at changes in the same subjects over a long period.

Examples: Quasi-Experimental Studies
Awoke et al. (2019) conducted a quasi-experimental study to evaluate the impact of nurse-led heart failure patient education on knowledge, self-care behaviors, and all-cause 30-day hospital readmission. The study used a pretest and posttest design with a convenience sample in two cardiac units.

An evidence-based education program was developed based on guidelines from the American College of Cardiology and the American Heart Association. Participants were invited to complete two validated scales assessing heart failure knowledge and self-care. The researchers found a statistically significant difference in knowledge and self-care behaviors; a significant improvement in 30-day readmission was not found.


Descriptive Studies

Descriptive studies accurately and systematically describe a population, situation, or phenomenon as it naturally occurs. They answer what, where, when, and how questions, but not questions about statistical relationships between variables. There is no manipulation of variables and no attempt to determine that a particular intervention or characteristic causes a specific occurrence.

Answers to descriptive research questions are objectively measured using statistics, and analysis is generally limited to measures of frequency (count, percent, ratio, proportion), central tendency (mean, median, mode), dispersion or variation (range, variance, standard deviation), and position (percentile rank, quartile rank). A descriptive research question primarily quantifies a single variable but can also cover multiple variables within a single question. Common types of descriptive designs include descriptive comparative, descriptive correlational, predictive correlational, and epidemiologic descriptive studies (prevalence and incidence). Table 6.2 outlines the purpose and uses of quantitative descriptive study types.

Table 6.2 Descriptive Study Type, Purpose, and Use

Comparative
  Purpose: Determine similarities and differences, or compare and contrast variables, without manipulation.
  Use: Account for differences and similarities across cases; judge whether a certain method, intervention, or approach is superior to another.

Descriptive correlational
  Purpose: Describe two variables and the relationship (strength and magnitude) that occurs naturally between them.
  Use: Find out if and how a change in one variable is related to a change in the other variable(s).

Incidence (epidemiologic descriptive)
  Purpose: Determine the occurrence of new cases of a specific disease or condition in a population over a specified period of time.
  Use: Understand the frequency of new cases of disease development.

Predictive correlational
  Purpose: Predict the variance of one or more variables based on the variance of another variable(s).
  Use: Examine the relationship between a predictor (independent variable) and an outcome/criterion variable.

Prevalence (epidemiologic descriptive)
  Purpose: Determine the proportion of a population that has a particular condition at a specific point in time.
  Use: Compare prevalence of disease in different populations; examine trends in disease severity over time.

Correlational Studies

Correlational studies measure a relationship between two variables without the researcher controlling either of them. These studies aim to find out whether there is:
■ Positive correlation: Both variables change in the same direction.
■ Negative correlation: The variables change in opposite directions; one increases as the other decreases.
■ Zero correlation: There is no relationship between the variables.

Table 6.3 outlines common types of correlational studies, such as case-control, cohort, and natural experiments.


Table 6.3 Correlational Study Type, Purpose, and Use

Case-control
  Purpose: Examine possible relationships between exposure and disease occurrence by comparing the frequency of exposure in a group with the outcome (cases) to a group without it (controls). Can be either retrospective or prospective.
  Use: Identifies factors that may contribute to a medical condition; often used when the outcome is rare.

Cohort
  Purpose: Examine whether the risk of disease differs between exposed and nonexposed patients. Can be either retrospective or prospective.
  Use: Investigates the causes of disease to establish links between risk factors and health outcomes.

Natural experiments
  Purpose: Study a naturally occurring situation and its effect on groups with different levels of exposure to a supposed causal factor.
  Use: Beneficial when there has been a clearly defined exposure involving a well-defined group and an absence of exposure in a similar group.

Univariate Studies

Univariate studies, also referred to as single-variable research, use exploratory or survey methods and aim to describe the frequency of a behavior or an occurrence.

Univariate descriptive studies summarize or describe one variable rather than examine a relationship between variables (Polit & Beck, 2017). Exploratory and survey designs are common in nursing and healthcare. When little knowledge about the phenomenon of interest exists, these designs offer the greatest degree of flexibility; as new information is learned, the direction of the exploration may change. With exploratory designs, the investigator does not know enough about a phenomenon to identify variables of interest completely.


Researchers observe variables as they happen; there is no researcher control. When investigators know enough about a particular phenomenon to identify specific variables of interest, a descriptive survey design more fully describes the phenomenon. Questionnaire (survey) or interview techniques assess the variables of interest.

Qualitative Research Designs

Qualitative research designs seek to discover the whys and hows of a phenomenon of interest, in written rather than numerical form. Types of qualitative studies (sometimes referred to as traditions) include ethnography, grounded theory, phenomenology, narrative inquiry, case study, and basic qualitative descriptive.

With the exception of basic qualitative descriptive, each study type adheres to a specific method for collecting and analyzing data; each methodology is based on the researcher's worldview, which consists of beliefs that guide decisions and behaviors. Table 6.4 details qualitative study types.

Table 6.4 Qualitative Study Type, Purpose, and Use

Ethnography
  Purpose: Study people in their own environment to understand cultural rules.
  Use: Gain insights into how people interact with things in their natural environment.

Grounded theory
  Purpose: Examine the basic social and psychological problems/concerns that characterize real-world phenomena.
  Use: Used where very little is known about the topic, to generate data to develop an explanation of why a course of action evolved the way it did.

Phenomenology
  Purpose: Explore experience as people live it rather than as they conceptualize it.
  Use: Understand the lived experience of a person and its meaning.

Narrative inquiry
  Purpose: Reveal the meanings of individuals' experiences, combined with the researcher's perspective, in a collaborative and narrative chronology.
  Use: Understand the way people create meaning in their lives.


Case study
  Purpose: Describe the characteristics of a specific subject (such as a person, group, event, or organization), gathering detailed data to identify the characteristics of a narrowly defined subject.
  Use: Gain concrete, contextual, in-depth knowledge about an unusual or interesting case that challenges assumptions, adds complexity, or reveals something new about a specific real-world subject.

Basic qualitative (also referred to as generic or interpretive descriptive)
  Purpose: Create knowledge through subjective analysis of participants in a naturalistic setting by incorporating the strengths of different qualitative designs without adhering to the philosophical assumptions inherent in those designs.
  Use: The problem or phenomenon of interest is unsuitable for, or cannot be adapted to, the traditional qualitative designs.

Systematic Reviews: Summaries of Multiple Research Studies

Summaries of multiple research studies are one type of evidence synthesis that generates an exhaustive summary of current evidence relevant to a research question. Often referred to as systematic reviews, they use explicit methods to search the scientific evidence, summarize critically appraised and relevant primary research, and extract and analyze data from the included studies. To minimize bias, a group of experts, rather than an individual, applies these standardized methods to the review process. A key requirement of systematic reviews is transparency of methods, ensuring that rationale, assumptions, and processes are open to scrutiny and can be replicated or updated. A systematic review does not create new knowledge; rather, it provides a concise and relatively unbiased synthesis of the research evidence on a topic of interest (Aromataris & Munn, 2020).

There are at least 14 types of systematic review study designs (Aromataris & Munn, 2020), with critical appraisal checklists specific to each design type (Grant & Booth, 2009). Healthcare summaries of multiple studies most often use meta-analyses with quantitative data and meta-syntheses with qualitative data.


Systematic Review With Meta-Analysis

Meta-analyses are systematic reviews of quantitative research studies that statistically combine the results of multiple studies sharing a common intervention (independent variable) and outcomes (dependent variables) to create new summary statistics. Meta-analysis offers the advantage of objectivity because the study reviewers' decisions are explicit and the analysis integrates data from all included studies.

By combining results across several smaller studies, the researcher can increase the power, or the probability of detecting a true relationship between the intervention and the outcomes of the intervention (Polit & Beck, 2017).

For each of the primary studies, the researcher develops a common metric called effect size (ES), a measure of the strength of the relationship between two variables. This summary statistic combines and averages effect sizes across the included studies. Cohen's (1988) methodology for determining effect sizes rates the strength of correlation as trivial (ES = 0.01–0.09), low to moderate (0.10–0.29), moderate to substantial (0.30–0.49), substantial to very strong (0.50–0.69), very strong (0.70–0.89), or almost perfect (0.90–0.99).
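As a sketch of this averaging step, the snippet below pools hypothetical per-study effect sizes with a simple sample-size-weighted mean (real meta-analyses typically use inverse-variance weighting) and rates the pooled value against the bands quoted above:

```python
def rate_effect_size(es):
    """Classify a correlation-type effect size using the bands quoted
    in the text (Cohen, 1988)."""
    es = abs(es)
    if es < 0.10:
        return "trivial"
    if es < 0.30:
        return "low to moderate"
    if es < 0.50:
        return "moderate to substantial"
    if es < 0.70:
        return "substantial to very strong"
    if es < 0.90:
        return "very strong"
    return "almost perfect"

def pooled_effect(effects, weights):
    """Weighted average of per-study effect sizes (a simplified stand-in
    for the inverse-variance weighting used in real meta-analyses)."""
    return sum(e * w for e, w in zip(effects, weights)) / sum(weights)

# Hypothetical effect sizes from three primary studies, weighted by sample size
es = pooled_effect([0.42, 0.35, 0.51], [120, 80, 200])
print(round(es, 3), rate_effect_size(es))  # 0.451 moderate to substantial
```

Larger studies pull the pooled estimate toward their own effect size, which is why the result here sits closest to the 200-subject study.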

Researchers display the results of a meta-analysis of the included individual studies in a forest plot graph. A forest plot shows the variation between the studies and an estimate of the overall result of all the studies together. This is usually accompanied by a table listing the references (author and date) of the studies included

Systematic Reviews Versus Narrative Reviews
Systematic reviews differ from traditional narrative literature reviews. Narrative reviews often contain references to research studies but do not critically appraise, evaluate, and summarize the relative merits of the included studies.

True systematic reviews address both the strengths and the limitations of each study included in the review. Readers should not distinguish a systematic review from a narrative literature review based solely on the article's title. At times, the title will state that the article presents a literature review when it is in fact a systematic review, or state that the article is a systematic review when it is a literature review. EBP teams generally consider themselves lucky when they uncover well-executed systematic reviews that include summative research techniques that apply to the practice question of interest.

ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/ucf/detail.action?docID=6677828.

Created from ucf on 2022-09-10 23:04:22.

Copyright © 2021. Sigma Theta Tau International. All rights reserved.

Johns Hopkins Evidence-Based Practice for Nurses and Healthcare Professi\ onals, Fourth Edition 143 in the meta-analysis and the statistical results (Centre for Evidence-B\ ased Intervention, n.d.). Example: Meta-Analysis Meserve and colleagues (2021) conducted a meta-analysis of randomized \ controlled trials, cohort studies, and case series to evaluate the risks\ and outcomes of adverse events in patients with preexisting inflammatory b\ owel diseases treated with immune checkpoint inhibitors. They identified 12\ studies reporting the impact of immune checkpoint inhibitors in 193 patients wit\ h inflammatory bowel disease and calculated pooled rates (with 95% confi\ dence intervals [CI]) and examined risk factors associated with adverse outco\ mes through qualitative synthesis of individual studies. Approximately 40% o\ f patients with preexisting inflammatory bowel diseases experienced rela\ pse with immune checkpoint inhibitors, with most relapsing patients requirin\ g corticosteroids and one-third requiring biologics. Systematic Review With Meta-Synthesis Meta-synthesis is the qualitative counterpart to meta-analysis. It involves interpreting data from multiple sources to produce a high-level narrativ\ e rather than aggregating data or producing a summary statistic. Meta-synthesis s\ upports developing a broader interpretation than can be gained from a single pri\ mary qualitative study by combing the results from several qualitative studie\ s to arrive at a deeper understanding of the phenomenon under review (Polit & Beck,\ 2017). Example: Meta-Synthesis Danielis et al. (2020) conducted a meta-synthesis and meta-summary to \ understand the physical and emotional experiences of adult intensive car\ e unit (ICU) patients who receive mechanical ventilation. They searched \ four electronic databases and used the Critical Appraisal Skills Programme ch\ ecklist to evaluate articles on their methodological quality. Nine studies met the criteria. 
The researchers identified twenty-four codes across eleven c\ ategories that indicated a need for improvements in clinical care, education, and \ policy to address this populations’ feelings associated with fear, inability to communicate, and “feeling supervised.” Dang, Deborah, et al. Johns Hopkins Evidence-Based Practice for Nurses and Healthcare Professionals, Fourth Edition, Sigma Theta Tau International, 2021.

ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/ucf/detail.action?docID=6677828.

Created from ucf on 2022-09-10 23:04:22.

Copyright © 2021. Sigma Theta Tau International. All rights reserved.

Sources of Systematic Reviews

The Institute of Medicine (2011) appointed an expert committee to establish methodological standards for developing and reporting all types of systematic reviews. The Agency for Healthcare Research and Quality (AHRQ), the lead federal agency charged with improving the safety and quality of America’s healthcare system, awards five-year contracts to North American institutions to serve as Evidence-Based Practice Centers (EPCs). EPCs review scientific literature on clinical, behavioral, organizational, and financial topics to produce evidence reports and technology assessments (AHRQ, 2016). Additionally, EPCs conduct research on systematic review methodology.

Research designs for conducting summaries of multiple studies include systematic reviews, meta-analysis (quantitative data), meta-synthesis (qualitative data), and mixed methods (both quantitative and qualitative data). See Table 6.5 for national and international organizations that generate summaries of multiple studies.

Table 6.5 Organizations That Generate Summaries of Multiple Studies

Agency for Healthcare Research and Quality (AHRQ): The AHRQ Effective Health Care Program (see the AHRQ Methods Guide for Effectiveness and Comparative Effectiveness Reviews) has several tools and resources for consumers, clinicians, policymakers, and others to make informed healthcare decisions.

The Campbell Collaboration: The Campbell Collaboration is an international research network that produces systematic reviews of the effects of social interventions.

Centre for Reviews and Dissemination (CRD): The Centre for Reviews and Dissemination provides research-based information about the effects of health and social care interventions and provides guidance on the undertaking of systematic reviews.


Cochrane Collaboration: The Cochrane Collaboration is an international organization that helps prepare and maintain the results of systematic reviews of healthcare interventions. Systematic reviews are disseminated through the online Cochrane Library.

JBI: JBI (formerly the Joanna Briggs Institute) is an international not-for-profit research and development centre within the Faculty of Health and Medical Sciences at the University of Adelaide, South Australia, that produces systematic reviews. JBI also provides comprehensive systematic review training.

Mixed-Methods Studies

As with quantitative and qualitative approaches, there are different designs within mixed methods (see Table 6.6). The most common mixed-methods designs are convergent parallel, explanatory sequential, exploratory sequential, and multiphasic (Creswell & Plano Clark, 2018).

Table 6.6 Mixed-Methods Design, Procedure, and Use

Convergent parallel. Procedure: Concurrently conducts quantitative and qualitative elements in the same phase of the research process, weighs the methods equally, analyzes the two components independently, and interprets the results together. Use: Validate quantitative scales and form a more complete understanding of a research topic.

Explanatory sequential. Procedure: Sequential design with quantitative data collected in the initial phase, followed by qualitative data. Use: Used when quantitative findings are explained and interpreted with the assistance of qualitative data.


Exploratory sequential. Procedure: Sequential design with qualitative data collected in the initial phase, followed by quantitative data. Use: Used when qualitative results need to be tested or generalized, or for theory development or instrument development.

Multiphasic. Procedure: Combines the concurrent or sequential collection of quantitative and qualitative data sets over multiple phases of a study. Use: Useful in comprehensive program evaluations by addressing a set of incremental research questions focused on a central objective.

Determining the Level of Research Evidence

The JHEBP model encompasses quantitative and qualitative studies, primary studies, and summaries of multiple studies within three levels of evidence.

The level of research evidence (true experimental, quasi-experimental, nonexperimental) is an objective determination based on whether the study design meets the requirements of scientific evidence design: manipulation of a variable in the form of an intervention, a control group, and random assignment to the intervention or control group. Table 6.7 identifies the type of research studies in each of the three levels of scientific evidence. The Research Appraisal Tool (Appendix E) provides specific criteria and decision points for determining the level of research evidence.

Table 6.7 Rating the Level of Research Evidence

Level I: A true experimental study, randomized controlled trial (RCT), or systematic review of RCTs, with or without meta-analysis.


Level II: A quasi-experimental study or systematic review of a combination of RCTs and quasi-experimental studies, or quasi-experimental studies only, with or without meta-analysis.

Level III: A quantitative nonexperimental study; systematic review of a combination of RCTs, quasi-experimental, and nonexperimental studies, or nonexperimental studies only; or qualitative study or systematic review of qualitative studies, with or without meta-synthesis.

Appraising the Quality of Research Evidence

After the EBP team has determined the level of research evidence, the team evaluates the quality of the evidence against the expectations of the chosen study design. Individual elements to be evaluated for each piece of evidence will depend on the type of evidence but can include the quality (validity and reliability) of the researchers’ measurements, statistical findings, and quality of reporting.

Quality of Measurement

Findings of research studies are only as good as the tools used to gather the data.

Understanding and evaluating the psychometric properties of a given instrument, such as validity and reliability, allows for an in-depth understanding of the quality of the measurement.

Validity

Validity refers to the credibility of the research: the extent to which the research measures what it claims to measure. The validity of research is important because if the study does not measure what it intended, the results will not effectively answer the aim of the research. There are several ways to ensure validity, including expert review, Delphi studies, comparison with established tools, factor analysis, item response theory, and correlation tests (expressed as a correlation coefficient). There are two aspects of validity to measure: internal and external.


Internal validity is the degree to which observed changes in the dependent variable are due to the experimental treatment or intervention rather than other possible causes. An EBP team should question whether there are competing explanations for the observed results. Measures of internal validity include content validity (the extent to which a multi-item tool reflects the full extent of the construct being measured), construct validity (how well an instrument truly measures the concept of interest), and cross-cultural validity (how well a translated or culturally adapted tool performs relative to the original instrument) (Polit & Beck, 2017).

External validity refers to the likelihood that conclusions about research findings are generalizable to other settings or samples. Errors of measurement that affect validity can be systematic or constant. External validity is a significant concern with EBP when translating research into the real world or from one population/setting to another. An EBP team should question the extent to which study conclusions may reasonably hold true for their particular patient population and setting. Do the investigators state the participation rates of subjects and settings?

Do they explain the intended target audience for the intervention or treatment?

How representative is the sample of the population of interest? Ensuring the study participants’ representativeness and replicating the study in multiple sites that differ in dimensions such as size, setting, and staff skill set improve external validity.

Experimental studies are high in internal validity because they are structured and control for extraneous variables. However, because of this, the generalizability of the results (external validity) may be limited. In contrast, nonexperimental and observational studies may be high in generalizability because the studies are conducted in real-world settings but are low in internal validity because of the inability to control variables that may affect the results.

Bias plays a large role in the potential validity of research findings. In the context of research, bias can present as preferences for, or prejudices against, particular groups or concepts. Bias occurs in all research, at any stage, and is difficult to eliminate. Table 6.8 outlines the types of bias.


Table 6.8 Types of Research Bias, Descriptions, and Mitigation Techniques

Investigator bias. Description: The researcher unknowingly influences study participants’ responses; participants may pick up on subtle details in survey questions or their interaction with a study team member and conclude that they should respond a certain way. Mitigation: Standardize all interactions with participants through interview scripts, and blind the collection or analysis of data.

Hawthorne effect. Description: Changes in participants’ behavior because they are aware that others are observing them. Mitigation: Evaluate the value of direct observation over other data collection methods.

Attrition bias. Description: Loss of participants during a study and the effect on representativeness within the sample; this can affect results, as the participants who remain in a study may collectively possess different characteristics than those who drop out. Mitigation: Limit burden on participants while maximizing opportunities for engagement, communicating effectively and efficiently.

Selection bias. Description: Nonrandom selection of samples; this can include allowing participants to self-select treatment options or assigning participants based upon specific demographics. Mitigation: When possible, use a random sample. If not possible, apply rigorous inclusion and exclusion criteria to ensure recruitment occurs within the appropriate population while avoiding confounding factors. Use a large sample size.

Reliability

Reliability refers to the consistency of a set of measurements or an instrument used to measure a construct. For example, suppose a patient scale is off by 5 pounds.


When weighing the patient three times, the scale reads 137 pounds every time (reliability). However, this is not the patient’s true weight because the scale is not recording correctly (validity). Reliability refers, in essence, to the repeatability of a measurement. Errors of measurement that affect reliability are random. For example, variation in measurements may exist when nurses use patient care equipment such as blood pressure cuffs or glucometers. Table 6.9 displays three methods used to measure reliability: internal consistency reliability, test-retest reliability, and interrater reliability.

Evaluating Statistical Findings

Most research evidence will include reports of descriptive and analytic statistics of the study findings. The EBP team must understand the general concepts of common data analysis techniques to evaluate the meaning of study findings.

Measures of Central Tendency

Measures of central tendency (mean, median, and mode) are summary statistics that describe a set of data by identifying the central position within that set of data. The most well-known measure of central tendency is the mean (or average), which is used with both discrete data (based on counts) and continuous data (an infinite number of values divided along a specified continuum) (Polit & Beck, 2017). Although a good measure of central tendency in normal distributions, the mean is misleading in skewed (asymmetric) distributions and with extreme scores. The median, the number that lies at the midpoint of a distribution of values, is less sensitive to extreme scores and is therefore of greater use in skewed distributions.

The mode is the most frequently occurring value and is the only measure of central tendency used with categorical data (data divided into groups). Standard deviation is a measure of the scattering of a set of data from its mean. The more spread out a data distribution is, the greater its standard deviation. Standard deviation cannot be negative. A standard deviation close to 0 indicates that the data points tend to be close to the mean.
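These summary statistics can be illustrated with Python’s standard library. The hospital length-of-stay values below are hypothetical, chosen so that a single extreme score pulls the mean away from the median:

```python
import statistics

lengths_of_stay = [2, 3, 3, 4, 5, 5, 5, 6, 21]  # days; 21 is an extreme score

mean = statistics.mean(lengths_of_stay)      # pulled upward by the outlier
median = statistics.median(lengths_of_stay)  # less sensitive to the extreme score
mode = statistics.mode(lengths_of_stay)      # most frequently occurring value
sd = statistics.stdev(lengths_of_stay)       # spread of the data around the mean

print(f"mean={mean:.1f}, median={median}, mode={mode}, sd={sd:.1f}")
```

Here the mean (6.0 days) sits above the median (5 days) because of the one long stay, which is exactly why the median is preferred for skewed distributions.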


Table 6.9 Reliability Definitions and Statistical Techniques

Internal consistency. Definition: Whether a set of items in an instrument or subscale that propose to measure the same construct produce similar scores. Statistical techniques: Cronbach’s alpha (α) is a coefficient of reliability (or consistency); Cronbach’s alpha values of 0.7 or higher indicate acceptable internal consistency.

Test-retest reliability. Definition: Degree to which scores are consistent over time; indicates score variation that occurs from test session to test session as a result of errors of measurement. Statistical techniques: Pearson’s r correlation coefficient (expresses the strength of a relationship between variables, ranging from –1.00, a perfect negative correlation, to +1.00, a perfect positive correlation); scatter plot data.

Interrater reliability. Definition: Extent to which two or more raters, observers, coders, or examiners agree. Statistical techniques depend on what is actually being measured (correlational coefficients measure consistency between raters; percent agreement measures agreement between raters) and on the type of data:

■ Nominal data: also called categorical data; variables that have no numeric value (e.g., gender, employment status)
■ Ordinal data: categorical data where order is important (e.g., Likert scale measuring level of happiness)
■ Interval data: numeric scales with a specified order and exact differences between the values (e.g., blood pressure reading)
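As a rough sketch of how the internal-consistency coefficient in Table 6.9 is computed, the code below applies the standard Cronbach’s alpha formula, α = (k/(k−1))·(1 − Σ item variances / total-score variance), to hypothetical survey responses (the data and the formula’s use here are illustrative assumptions, not taken from the text):

```python
import statistics

# Rows = respondents, columns = items on a 5-point scale (hypothetical data)
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]

def cronbach_alpha(rows):
    """Cronbach's alpha from respondent-by-item scores (sample variances)."""
    k = len(rows[0])                                   # number of items
    items = list(zip(*rows))                           # column-wise item scores
    item_vars = [statistics.variance(col) for col in items]
    total_var = statistics.variance([sum(r) for r in rows])  # total-score variance
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")
```

For this made-up data set alpha comes out above 0.9, well over the 0.7 threshold the table cites for acceptable internal consistency.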


Measures of Statistical Significance

Statistical significance indicates whether findings reflect an actual association or difference between the variables/groups or are due to chance alone. The classic measure of statistical significance, the p-value, is a probability ranging from 0 to 1. The smaller the p-value (the closer it is to 0), the more likely the result is statistically significant. Factors that affect the p-value include sample size and the magnitude of the difference between groups (effect size) (Thiese et al., 2016).

For example, if the sample size is large enough, the results are more likely to show a significant p-value even if the effect size is small or clinically insignificant, as long as there is a true difference between the groups. In healthcare literature, the p-value for determining statistical significance is generally set at p < 0.05.
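One assumption-light way to see what a p-value means is a permutation test (this approach and the data are illustrative, not drawn from the text): shuffle the group labels many times and count how often chance alone produces a difference at least as large as the one observed.

```python
import random
import statistics

random.seed(42)  # reproducible shuffles

# Hypothetical outcome measurements for two groups
control = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 4.7, 5.4]
treated = [5.9, 6.1, 5.7, 6.0, 5.8, 6.2, 5.6, 6.3]

observed = statistics.mean(treated) - statistics.mean(control)

def permutation_p(a, b, observed_diff, n_iter=10_000):
    """Fraction of random relabelings with a difference >= the observed one."""
    pooled = a + b
    count = 0
    for _ in range(n_iter):
        random.shuffle(pooled)                        # chance-only relabeling
        diff = statistics.mean(pooled[:len(b)]) - statistics.mean(pooled[len(b):])
        if abs(diff) >= abs(observed_diff):
            count += 1
    return count / n_iter

p = permutation_p(control, treated, observed)
print(f"observed difference = {observed:.2f}, p = {p:.4f}")
```

Because the two hypothetical groups barely overlap, almost no random relabeling reproduces the observed difference, so the p-value lands well below the conventional 0.05 threshold.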

Though p-values indicate statistical significance (i.e., the results are not due to chance), healthcare research results increasingly report effect sizes and confidence intervals to more fully interpret results and guide decisions for translation. Effect sizes are the amount of difference between two groups. A positive effect size indicates a positive relationship: as one variable increases, the second variable increases. A negative effect size signifies a negative relationship: as one variable increases or decreases, the second variable moves in the opposite direction. Confidence intervals (CI) are a measure of precision and are expressed as a range of values (upper limit and lower limit) within which a given measure actually lies, based on a predetermined probability. The standard 95% CI means an investigator can be 95% confident that the actual values in a given population fall within the upper and lower limits of the range of values.

Quality of Reporting

Regardless of the quality of the conduct of a research investigation, the implications of that study cannot be adequately determined if the researchers do not provide a complete and detailed report. The Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network (https://www.equator-network.org) is a repository of reporting guidelines organized by study type.

These guidelines provide a road map for the required steps to conduct and report a robust study. While ideally researchers use standard reporting guidelines, the degree to which journals demand adherence to these standards varies. Regardless of the type of study, classic elements of published research include title, abstract, introduction, methods, results, discussion, and conclusion (Lunsford & Lunsford, 1996).

Title

Ideally, the title should be informative and help the reader understand the type of study being reported. A well-chosen title states what the author did, to whom it was done, and how it was done. Consider the title “Improving transitions in care for children with complex and medically fragile needs: a mixed-methods study” (Curran et al., 2020). The reader is immediately apprised of what was done (improve transitions in care), to whom it was done (children with complex and medically fragile needs), and how it was done (a mixed-methods study).

Abstract

The abstract is often located after the title and author section and graphically set apart by the use of a box, shading, or italics. The abstract is a brief description of the problem, methods, and findings of the study (Polit & Beck, 2017).

Introduction

The introduction contains the background and a problem statement that tells why the investigators have chosen to conduct the study. The best way to present the background is by reporting on current literature, and the author should identify the knowledge gap between what is known and what the study seeks to find out (or answer). A clear, direct statement of purpose and a statement of expected results or hypotheses should be included.

Methods

This section describes how a study is conducted (study procedures) in sufficient detail so that readers can replicate the study, including the study design, population with inclusion and exclusion criteria, recruitment, consent, a description of the intervention, and how data were collected and analyzed. If instrumentation was used, the methods section should include the validity and reliability of the tools. Authors should also include an acknowledgment of ethical review for research studies involving human subjects. The methods should read like a manual of the study design.

Results

Study results list the findings of the data analysis and should not contain commentary. Give particular attention to figures and tables, which are the heart of most papers. Look to see whether results report statistical versus clinical significance, and look up unfamiliar terminology, symbols, or logic.

Discussion

The discussion should align with the introduction and results and state the implications of the findings. This section explores the research findings and the meaning given to the results, including how they compare to similar studies. Authors should also identify the study’s main weaknesses or limitations and the actions taken to minimize them.

Conclusion

The conclusion should contain a brief restatement of the experimental results and implications of the study (Hall, 2012). If the conclusion does not have a separate header, it usually falls at the end of the discussion section.

The Overall Report

The parts of the research article should be highly interconnected (but not overlap). The researcher needs to ensure that any hypotheses flow directly from the review of literature, and that results support arguments or interpretations presented in the discussion and conclusion sections.


Determining the Quality of Evidence

Rating the quality of research evidence includes consideration of factors such as sample size (the power of the study to detect true differences), the extent to which the findings from a sample can be generalized to an entire population, and validity (whether findings truly represent the phenomenon you are claiming to measure).

In contrast to the objective approach to determining the level, grading the quality of evidence is subjective and requires critical thinking by the EBP team to make a determination (see Table 6.10).

Table 6.10 Quality Rating for Research Evidence

A (High): Consistent, generalizable results; sufficient sample size for the study design; adequate control; definitive conclusions; recommendations consistent with the study’s findings and include thorough reference to scientific evidence.

B (Good): Reasonably consistent results; sufficient sample size for the study design; some control; fairly definitive conclusions; recommendations reasonably consistent with the study’s findings and include some reference to scientific evidence.

C (Low): Little evidence with inconsistent results; insufficient sample size for the study design; conclusions cannot be drawn.

Experimental Studies (Level I Evidence)

True experiments have a high degree of internal validity because manipulation and random assignment enable researchers to rule out most alternative explanations of results (Polit & Beck, 2017). The tight control that produces internal validity, however, decreases the generalizability of the results (external validity). To uncover potential threats to external validity, the EBP team may pose questions such as, “How confident are we that the study findings can transfer from the sample to the entire population? Are the study conditions as close as possible to real-world situations? Did subjects have inherent differences even before manipulation of the independent variable (selection bias)? Are participants responding in a certain


way because they know the researcher is observing them (the Hawthorne effect)?

Are there researcher behaviors or characteristics that may influence the subject’s responses (investigator bias)? In multi-institutional studies, are there variations in how study coordinators at various sites managed the trial?”

Subject mortality and different dropout rates between experimental and control groups may affect the adequacy of the sample size. Additional items the EBP team may want to assess related to reasons for dropout of subjects include whether the experimental treatment was painful or time-consuming and whether participants remaining in the study differ from those who dropped out. It is important to assess the nature of possible biases that may affect randomization.

Assess for selection biases by comparing groups on pretest data (Polit & Beck, 2017). If there are no pretest measures, compare groups on demographic and disease variables such as age, health status, and ethnicity. If there are multiple data collection points, it is important to assess attrition biases by comparing those who did or did not complete the intervention. EBP teams should carefully analyze how the researchers address possible sources of bias.

Quasi-Experimental Studies (Level II Evidence)

As with true experimental studies, threats to internal validity for quasi-experimental studies include maturation, testing, and instrumentation, with the additional threats of history and selection (Polit & Beck, 2017). The occurrence of external events during the study (threat of history) can affect a subject’s response to the investigational intervention or treatment. Additionally, with nonrandomized groups, preexisting differences between the groups can affect the outcome. Questions the EBP team may pose to uncover potential threats to internal validity include, “Did some event occur during the study that may have influenced the results of the study? Are there processes occurring within subjects over the course of the study because of the passage of time (maturation) rather than from the experimental intervention? Could the pretest have influenced the subject’s performance on the posttest? Were the measurement instruments and procedures the same for both points of data collection?”


In terms of external validity, threats associated with sampling design, such as patient selection and the characteristics of nonrandomized patients, affect the generalizability of findings. External validity improves if the researcher uses random selection of subjects, even if random assignment to groups is not possible.

Nonexperimental and Qualitative Studies (Level III Evidence)

The evidence gained from well-designed nonexperimental and qualitative studies is the lowest level in the research hierarchy (Level III).

When looking for potential threats to external validity in quantitative nonexperimental studies, the EBP team can pose the questions described under experimental and quasi-experimental studies. In addition, the team may ask further questions such as, "Did the researcher attempt to control for extraneous variables with the use of careful subject-selection criteria? Did the researcher attempt to minimize the potential for socially acceptable responses by the subject? Did the study rely on documentation as the source of data? In methodological studies (developing, testing, and evaluating research instruments and methods), were the test subjects selected from the population for which the test will be used? Was the survey response rate high enough to generalize findings to the target population?

For historical research studies, are the data authentic and genuine?"

Qualitative studies offer many challenges with respect to the question of validity. There are several suggested ways to determine validity, or rigor, in qualitative research. Four common approaches to establishing rigor (Saumure & Given, 2012) are:

■ Transparency: How clearly the research process has been explained
■ Credibility: The extent to which data are representative
■ Dependability: Whether other researchers would draw the same or similar conclusions when looking at the data
■ Reflexivity: How the researcher has reported their involvement in the research and the ways they may have influenced the study results


Issues of rigor in qualitative research are complex, so the EBP team should appraise how well the researchers discuss how they determined validity for the particular study.

Systematic Reviews (Level I or II Evidence)

Teams should evaluate systematic reviews for the rigor and transparency they display in their search strategies, appraisal methods, and results. Systematic reviews should follow well-established reporting guidelines (Moher et al., 2009), in most cases the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). These guidelines call for a reproducible search strategy, a flow diagram of the literature screening, clear data extraction methodology and reporting, and a description of the methods used to evaluate the strength of the literature. Authors should ensure all conclusions are based on a critical evaluation of results.

Systematic Review With Meta-Analysis

The strength (level and quality) of the evidence on which recommendations are made within a meta-analytic study depends on the design and quality of the studies included in the meta-analysis as well as the design of the meta-analysis itself.

Factors to consider include the sampling criteria of the primary studies included in the analysis, the quality of the primary studies, and variation in outcomes between studies.

To determine the level, the EBP team looks at the types of research designs included in the meta-analysis. Meta-analyses containing only randomized controlled trials are Level I evidence. Some meta-analyses include data from quasi-experimental or nonexperimental studies; the level of evidence is then commensurate with the lowest level of research design included (e.g., if the meta-analysis included experimental and quasi-experimental studies, the meta-analysis would be Level II).
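The "weakest design sets the level" rule described above can be expressed as a small lookup. The sketch below is a hypothetical helper for illustration; the design names and level mapping follow the chapter's hierarchy, but the function itself is not part of the JHNEBP appraisal tool.

```python
# Illustrative sketch: a meta-analysis is leveled by the lowest-level
# (highest-numbered) design it includes. Mapping follows the chapter:
# RCTs -> Level I; quasi-experimental -> Level II;
# nonexperimental and qualitative -> Level III.
DESIGN_LEVELS = {
    "experimental": 1,
    "quasi-experimental": 2,
    "nonexperimental": 3,
    "qualitative": 3,
}

def meta_analysis_level(included_designs):
    """Return the level commensurate with the lowest-level design included."""
    return max(DESIGN_LEVELS[d] for d in included_designs)

# RCTs only -> Level I; adding quasi-experimental studies -> Level II.
print(meta_analysis_level(["experimental"]))                        # 1
print(meta_analysis_level(["experimental", "quasi-experimental"]))  # 2
```

Note that this determines only the level; quality must still be appraised separately, as discussed next.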

To determine the quality of the article, the team should first look at the strength of the individual studies included; for an EBP team to evaluate evidence obtained from a meta-analysis, the report must be detailed enough for the reader to understand the studies included. Second, the team should assess the quality of the meta-analysis itself. The discussion section should include an overall summary of the findings, the magnitude of the effect, the number of studies, and the combined sample size. The discussion should present the overall quality of the evidence and the consistency of findings (Polit & Beck, 2017). The discussion should also include a recommendation for future research to improve the evidence base.

Systematic Review With Meta-Syntheses (Level III Evidence)

Evaluating and synthesizing qualitative research presents many challenges. It is not surprising that EBP teams may feel at a loss in assessing the quality of a meta-synthesis. Approaching these reviews from a broad perspective enables the team to look for quality indicators that quantitative and qualitative summative research techniques have in common.

The following should be noted in meta-synthesis reports: explicit search strategies, inclusion and exclusion criteria, methods used to determine study quality, methodological details for all included studies, and the conduct of the meta-synthesis itself. Similar to other summative approaches, a meta-synthesis

should be undertaken by a team of experts since the application of multiple perspectives to the processes of study appraisal, coding, charting, mapping, and interpretation may result in additional insights, and thus in a more complete interpretation of the subject of the review (Jones, 2004, p. 277).

EBP teams need to keep in mind that judgments related to study strengths and weaknesses, as well as to the suitability of recommendations for the target population, are both context-specific and dependent on the question asked. Some conditions or circumstances, such as clinical setting or time of day, are relevant to determining a particular recommended intervention's applicability.


A Practical Tool for Appraising Research Evidence

The Research Evidence Appraisal Tool (see Appendix E) gauges the level and quality of research evidence. The tool contains questions to guide the team in determining the level and the quality of evidence of the primary studies included in the review. Strength (level and quality) is higher with evidence from at least one well-designed (quality) randomized controlled trial (RCT) (Level I) than from at least one well-designed quasi-experimental (Level II) or nonexperimental or qualitative (Level III) study. After determining the level, the tool provides additional questions specific to the study methods and execution to determine the quality of the research.

Recommendations for Interprofessional Leaders

Professional standards have long held that clinicians need to integrate the best available evidence, including research findings, into practice and practice decisions. This is the primary way to use new knowledge gained from research.

Research articles can be intimidating to novice and expert nurses alike. Leaders can best support EBP by providing clinicians with the resources to appraise research evidence. It is highly recommended that they make available research texts, mentors, or experts to assist teams in becoming competent consumers of research. Only through continuous learning can clinicians gain the confidence needed to incorporate evidence gleaned from research into individual patients' day-to-day care.

Summary

This chapter arms EBP teams with practical information to guide the appraisal of research evidence, a task that is often difficult for nonresearchers. It presents an overview of the various types of research evidence, including attention to individual research studies and summaries of multiple research studies. Strategies and tips for reading research reports guide team members on how to appraise the strength (level and quality) of research evidence.


References

Agency for Healthcare Research and Quality. (2016). EPC evidence-based reports. Content last reviewed March 2021. http://www.ahrq.gov/research/findings/evidence-based-reports/index.html

Aromataris, E., & Munn, Z. (2020). JBI systematic reviews. In E. Aromataris & Z. Munn (Eds.), JBI manual for evidence synthesis (Chapter 1). JBI. https://synthesismanual.jbi.global

Awoke, M. S., Baptiste, D. L., Davidson, P., Roberts, A., & Dennison-Himmelfarb, C. (2019). A quasi-experimental study examining a nurse-led education program to improve knowledge, self-care, and reduce readmission for individuals with heart failure. Contemporary Nurse, 55(1), 15–26. https://doi.org/10.1080/10376178.2019.1568198

Centre for Evidence-Based Intervention. (n.d.). https://www.spi.ox.ac.uk/what-is-good-evidence#/

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Academic Press.

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE Publishing.

Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE Publishing.

Curran, J. A., Breneol, S., & Vine, J. (2020). Improving transitions in care for children with complex and medically fragile needs: A mixed methods study. BMC Pediatrics, 20(1), 1–14. https://doi.org/10.1186/s12887-020-02117-6

Danielis, M., Povoli, A., Mattiussi, E., & Palese, A. (2020). Understanding patients' experiences of being mechanically ventilated in the Intensive Care Unit: Findings from a meta-synthesis and meta-summary. Journal of Clinical Nursing, 29(13–14), 2107–2124. https://doi.org/10.1111/jocn.15259

Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

Hall, G. M. (Ed.). (2012). How to write a paper. John Wiley & Sons.

Institute of Medicine. (2011). Finding what works in health care: Standards for systematic reviews. National Academies Press.

Jones, M. L. (2004). Application of systematic review methods to qualitative research: Practical issues. Journal of Advanced Nursing, 48(3), 271–278. https://doi.org/10.1111/j.1365-2648.2004.03196.x

Lunsford, T. R., & Lunsford, B. R. (1996). Research forum: How to critically read a journal research article. Journal of Prosthetics and Orthotics, 8(1), 24–31.

Meserve, J., Facciorusso, A., Holmer, A. K., Annese, V., Sandborn, W., & Singh, S. (2021). Systematic review with meta-analysis: Safety and tolerability of immune checkpoint inhibitors in patients with pre-existing inflammatory bowel diseases. Alimentary Pharmacology & Therapeutics, 53(3), 374–382.

Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G., for the PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ, 339, b2535. https://doi.org/10.1136/bmj.b2535


Polit, D. F., & Beck, C. T. (2017). Nursing research: Generating and assessing evidence for nursing practice (10th ed.). Lippincott Williams & Wilkins.

Rahmani, B., Aghebati, N., Esmaily, H., & Florczak, K. L. (2020). Nurse-led care program with patients with heart failure using Johnson's Behavioral System Model: A randomized controlled trial. Nursing Science Quarterly, 33(3), 204–214. https://doi.org/10.1177/0894318420932102

Saumure, K., & Given, L. M. (2012). Rigor in qualitative research. In L. M. Given (Ed.), The SAGE encyclopedia of qualitative research methods (pp. 795–796). SAGE Publishing.

Thiese, M. S., Ronna, B., & Ott, U. (2016). P value interpretations and considerations. Journal of Thoracic Disease, 8(9), E928–E931. https://doi.org/10.21037/jtd.2016.08.16
