Review Chapter 5 (Searching the Evidence), Chapter 6 (Evidence Appraisal: Research), and Chapter 7 (Evidence Appraisal: Nonresearch) in the Johns Hopkins Evidence-Based Practice for Nurses and Healthcare Professionals.

A distinguishing feature and strength of EBP is the inclusion of multiple evidence sources. In addition to research evidence, clinicians can draw from a range of nonresearch evidence to inform their practice.

Such evidence includes personal, aesthetic, and ethical ways of knowing (Carper, 1978)—for example, the expertise, experience, and values of individual practitioners, patients, and patients’ families. In this chapter, nonresearch evidence is divided into summaries of evidence (clinical practice guidelines, consensus or position statements, literature reviews); organizational experience (quality improvement and financial data); expert opinion (commentary or opinion, case reports); community standards; clinician experience; and consumer preferences. This chapter:

■ Describes types of nonresearch evidence
■ Explains strategies for evaluating such evidence
■ Recommends approaches for building clinicians’ capacity to appraise nonresearch evidence to inform their practice

7 Evidence Appraisal: Nonresearch

Dang, Deborah, et al. Johns Hopkins Evidence-Based Practice for Nurses and Healthcare Professionals, Fourth Edition, Sigma Theta Tau International, 2021.

ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/ucf/detail.action?docID=6677828.

Created from ucf on 2022-09-10 23:04:58.

Copyright © 2021. Sigma Theta Tau International. All rights reserved.

Summaries of Research Evidence

Summaries of research evidence such as clinical practice guidelines, consensus or position statements, integrative reviews, and literature reviews are excellent sources of information relevant to practice questions. These forms of evidence review and summarize all research, not just experimental studies. They are not themselves classified as research evidence because they are often not comprehensive and may not include an appraisal of study quality.

Clinical Practice Guidelines and Consensus/Position Statements (Level IV Evidence)

Clinical practice guidelines (CPGs), as defined by the Institute of Medicine (IOM) in 2011, are statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care (IOM, 2011). CPGs are tools designed to provide structured guidance about evidence-based care, which can decrease variability in healthcare delivery, improving patient outcomes (Abrahamson et al., 2012).

A key aspect of developing a valuable and trusted guideline is creation by a guideline development group representing stakeholders with a wide range of expertise, such as clinician generalists and specialists, content experts, methodologists, public health specialists, economists, patients, and advocates (Sniderman & Furberg, 2009; Tunkel & Jones, 2015). The expert panelists should provide full disclosure and accounting of how they addressed intellectual and financial conflicts that could influence the guidelines (Joshi et al., 2019). The guideline development group should use a rigorous process for assembling, evaluating, and summarizing the published evidence to develop the CPG recommendations (Ransohoff et al., 2013). The strength of the recommendations should be graded to provide transparency about the certainty of the data and values applied in the process (Dahm et al., 2009). The Grading of Recommendations, Assessment, Development and Evaluations (GRADE) strategy is a commonly used system to assess the strength of CPG recommendations (weak or strong), as well as the quality of evidence (high, moderate, low/very low) they are based on (Neumann et al., 2016). For example, a 1A recommendation is one that is strong and is based on high-quality evidence. This system has been widely adopted by organizations such as the World Health Organization (WHO), as well as groups such as the American College of Chest Physicians (CHEST). Use of consistent grading systems can align evidence synthesis methods and result in more explicit and easier-to-understand recommendations for the end user (Diekemper et al., 2018).

Consensus or position statements (CSs) may be developed instead of a CPG when the available evidence is insufficient due to lack of high-quality evidence or conflicting evidence, or in scenarios where assessing benefits and risks of an intervention is challenging (Joshi et al., 2019). Consensus statements are broad statements of best practice based on consensus opinion of the convened expert panel and possibly small bodies of evidence; are most often meant to guide members of a professional organization in decision-making; and may not provide specific algorithms for practice (Lopez-Olivo et al., 2008).

Hundreds of different groups have developed several thousand different CPGs and CSs (Ransohoff et al., 2013). It has been noted that the methodological quality of CPGs varies considerably by developing organization, creating clinician concerns over the use of guidelines and potential impact on patients (Dahm et al., 2009). Formal methods have been developed to assess the quality of CPGs.

A group of researchers from 13 countries, the Appraisal of Guidelines Research and Evaluation (AGREE) Collaboration, developed a guideline appraisal instrument with documented reliability and validity. It has been shown that high-quality guidelines were more often produced by government-supported organizations or a structured, coordinated program (Fervers et al., 2005). The AGREE instrument, revised in 2013, now has 23 items and is organized into six domains (The AGREE Research Trust, 2013; Brouwers et al., 2010):

■ Scope and purpose
■ Stakeholder involvement
■ Rigor of development
■ Clarity of presentation
■ Applicability
■ Editorial independence

Despite the availability of the AGREE tool and others like it, the quality of guidelines still varies greatly in terms of how they are developed and how the results are reported (Kuehn, 2011). A recent evaluation of more than 600 CPGs found that while quality of the CPGs has increased over time, the quality scores assessed by the tool have remained moderate to low (Alonso-Coello, 2010). In response to concern about CPG quality, an IOM committee was commissioned to study the CPG development process. The committee (IOM, 2011) developed a comprehensive set of criteria outlining “standards for trustworthiness” for clinical practice guideline development (see Table 7.1).

Table 7.1 Clinical Practice Guideline (CPG) Standards and Description

Establish transparency: Funding and development process should be publicly available.

Disclose conflict(s) of interest (COI): Individuals who create guidelines and panel chairs should be free from COIs. Funders are excluded from CPG development. All COIs of each guideline development group member should be disclosed.

Balance membership of guideline development group: Guideline developers should include multiple disciplines, patients, patient advocates, or patient consumer organizations.

Use systematic reviews: CPG developers should use systematic reviews that meet IOM’s Standards for Systematic Reviews of Comparative Effectiveness Research.

Rate strength of evidence and recommendations: Rating has specified criteria rating the level of evidence and strength of recommendations.

Articulate recommendations: Recommendations should follow a standard format and be worded so that compliance can be evaluated.


External review: External reviews should represent all relevant stakeholders. A draft of the CPG should be available to the public at the external review stage or directly afterward.

Update guidelines: CPGs should be updated when new evidence suggests the need, and the CPG publication date, date of systematic evidence review, and proposed date for future review should be documented.

For more than 20 years, the National Guideline Clearinghouse (NGC), an initiative of the Agency for Healthcare Research and Quality (AHRQ), US Department of Health and Human Services, was a source of high-quality guidelines and rigorous standards. This initiative was ended due to lack of funding in 2018. At that time, the Emergency Care Research Institute (ECRI), a non-profit, independent organization servicing the healthcare industry, committed to continuing the legacy of the NGC by creating the ECRI Guidelines Trust. The trust houses hundreds of guidelines on its website, which is free to access (https://guidelines.ecri.org). ECRI summarizes guidelines in snapshots and briefs and appraises them against the Institute of Medicine (IOM) Standards for Trustworthy Guidelines by using TRUST (Transparency and Rigor Using Standards of Trustworthiness) scorecards (ECRI, 2020). This source can be useful to EBP teams needing guidelines for evidence appraisals.

Literature Reviews (Level V Evidence)

Literature review is a broad term that refers to a summary or synthesis of published literature without systematic appraisal of evidence quality or strength.

The terminology of literature reviews has evolved into many different types, with different search processes and degrees of rigor (Peters et al., 2015; Snyder, 2019; Toronto et al., 2018). Traditional literature reviews are not confined to scientific literature; a review may include nonscientific literature such as theoretical papers, reports of organizational experience, and opinions of experts. Such reviews possess some of the desired attributes of a systematic review, but not the same standardized approach, appraisal, and critical review of the studies. Literature review types also vary in completeness and often lack the intent of including all available evidence on a topic (Grant & Booth, 2009). Qualities of different types of literature reviews and their products are outlined in Table 7.2.

Specific challenges may arise in conducting or reading a literature review. One challenge is that not all the articles returned in the search answer the specific questions being posed. When conducting a literature search, attention to the details of the search parameters—such as the Boolean operators or using the correct Medical Subject Headings (MeSH)—may provide a more comprehensive search, as described in Chapter 5. Shortcomings of available literature should be described in the limitations section. If only some of the articles answer the questions posed while reading a literature review, the reader must interpret the findings more carefully and may need to identify additional literature reviews that answer the remaining questions. Another common problem in literature reviews is double counting of study results, which may influence the results of the literature review. Double counting can take many forms, including simple double counting of the same study in two included meta-analyses, double counting of control arms between two interventions, imputing data missing from included studies, incomplete reporting of data in the included studies, and others (Senn, 2009). Recommendations to reduce the incidence of double counting include vigilance about double counting, making results verifiable, describing analysis in detail, judging the process rather than the author in review, and creating a culture of correction (Senn, 2009).

Integrative Reviews (Level V Evidence)

An integrative review is more rigorous than a literature review but lacks the methodical rigor of a systematic review with or without meta-analysis. It summarizes evidence that is a combination of research and theoretical literature and draws from manuscripts using varied methodologies (e.g., experimental, non-experimental, qualitative). The purpose of an integrative review varies widely compared to a systematic review; these purposes include summarizing evidence, reviewing theories, defining concepts, and other purposes. Well-defined and clearly presented search and selection strategies are critical. Because diverse methodologies may be combined in an integrative review, quality evaluation or further analysis of data is complex. Unlike the literature review, however, an integrative review analyzes, compares themes, and notes gaps in the selected literature (Whittemore & Knafl, 2005).

Table 7.2 Types of Literature Review

Literature review: An examination of current literature on a topic of interest. The purpose is to create context for an inquiry topic. Lacks a standardized approach for critical appraisal and review. Often includes diverse types of evidence. Result: summation or identification of gaps.

Critical review: Extensive literature research from diverse sources, but lacks systematicity. Involves analysis and synthesis of results, but focuses on conceptual contribution of the papers, not their quality. Result: a hypothesis or model.

Rapid review: Time-sensitive assessment of current knowledge of a practice or policy issue, using systematic review methods. Saves time by focusing on a narrow scope, using a less comprehensive search, extracting only key variables, or performing a less rigorous quality appraisal. Increased risk of bias due to limited time frame of literature or quality analysis. Result: timely review of a current event or policy.

Qualitative systematic review: Method to compare or integrate findings from qualitative studies, to identify themes or constructs in or across qualitative studies. Useful when knowledge of preferences and attitudes is needed. Standards for performing this type of review are in early stages, so rigor may vary. Result: new theory, narrative, or wider understanding.

Scoping review: Determines nature and extent of available research evidence, or maps a body of literature to identify boundaries of research evidence. Limitations in rigor and duration increase risk of bias. Limited quality assessment. Result: identifies gaps in research, clarifies key concepts, reports on types of evidence.


State-of-the-art review: A type of literature review that addresses more current matters than a traditional literature review. May encompass only a recent period, so could miss important earlier works. Result: new perspectives on an issue or an area in need of research.

Systematized review: Includes some, but not all, elements of a systematic review. Search strategies are typically more systematic than other literature reviews, but synthesis and quality assessment are often lacking. Result: forms a basis for a subsequent complete systematic review or dissertation.

Interpreting Evidence From Summaries of Research Evidence

Evaluating the quality of research that composes a body of evidence, for the purpose of developing CPGs or CSs or performing a literature review, can be difficult. In 1996, editors of leading medical journals and researchers developed an initial set of guidelines for reporting results of randomized controlled clinical trials, which resulted in the CONsolidated Standards of Reporting Trials (CONSORT) Statement (Altman & Simera, 2016). Following revisions to the initial CONSORT flowchart and checklist, the Enhancing the QUality and Transparency Of health Research (EQUATOR) program was started in 2006.

Since then, the EQUATOR Network has developed reporting guidelines for many different types of research, including observational studies, systematic reviews, case reports, qualitative studies, quality improvement reports, and clinical practice guidelines (EQUATOR Network, n.d.). These guidelines have made it easier to assess the quality of research reports. However, no similar guidelines exist for assessing the quality of nonsystematic literature reviews or integrative reviews.

The Institute of Medicine (IOM, now the National Academy of Medicine) publication Clinical Practice Guidelines We Can Trust established standards for CPGs to be “informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options” (IOM, 2011, p. 4). This standardization reduced the number of guidelines available in the National Guideline Clearinghouse by nearly 50% (Shekelle, 2018). However, the number of guidelines then rapidly increased, often covering the same topic but developed by different organizations, which led to redundancy if the guidelines were similar, or uncertainty when guidelines differed between organizations (Shekelle, 2018).

Free access to the National Guideline Clearinghouse was eliminated in 2018, and it was replaced with the ECRI Guidelines Trust. Limited evidence or conflicting recommendations require that the healthcare professional utilize critical thinking and clinical judgment when making clinical recommendations to healthcare consumers.

Guidelines are intended to apply to the majority of clinical situations, but the unique needs of specific patients may also require the use of clinician judgment.

Clinical practice guideline development, as well as utilization, also needs to consider health inequity: avoidable differences in health that are rooted in lack of fairness or injustice (Welch et al., 2017). Characteristics to consider are captured by the acronym PROGRESS-Plus: place of residence, race/ethnicity/culture/language, occupation, gender, religion, education, socioeconomic status, and social capital; plus others, including age, disability, sexual orientation, time-dependent situations, and relationships. Barriers to care associated with these characteristics are related to access-to-care/systems issues or provider/patient behaviors, attitudes, and conscious or unconscious biases (Welch et al., 2017). Some of these characteristics merit their own guideline; for example, a guideline related to asthma care for adults may not be applicable to the care of children with asthma.

Because guidelines are developed from systematic reviews, EBP teams should consider that although groups of experts create these guidelines, which frequently carry a professional society’s seal of approval, the opinions of guideline developers that convert data to recommendations require subjective judgments that, in turn, leave room for error and bias (Mims, 2015). Actual and potential conflicts of interest are increasingly common within organizations and experts who create CPGs. Conflicts of interest may encompass more than industry relationships; for example, a guideline that recommends increased medical testing and visits may also serve the interests of clinicians, if they are the sole or predominant members of the guideline development group (Shekelle, 2018). Conflicts of interest may also be the result of financial or leadership interests, job descriptions, personal research interests, or volunteer work for organizations, among others. The IOM panel recommended that, whenever possible, individuals who create the guidelines should be free from conflicts of interest; if that is not possible, however, those individuals with conflicts of interest should make up a minority of the panel and should not serve as chairs or cochairs (IOM, 2011). In addition, specialists who may benefit from implementation of the guideline should be in the minority.

Professional associations have purposes besides development of clinical practice guidelines, including publishing, providing education, and advocacy for public health as well as their members through political lobbying (Nissen, 2017). Relationships between the medical industry, professional associations, and experts who develop guidelines must be carefully assessed for actual or potential conflicts of interest, and these conflicts must be transparently disclosed and managed.

Relationships between healthcare providers and the medical industry are not inherently bad; these relationships foster innovation and development, allow a partnership of shared expertise, and keep clinicians informed of advances in treatment as well as their safe and effective use (Sullivan, 2018). However, there is potential for undue influence, so these relationships must be leveraged with transparency to prevent abuses (Sullivan, 2018).

Key elements to note when appraising Level IV evidence and rating evidence quality are identified in Table 7.3 and in the JHNEBP Nonresearch Evidence Appraisal Tool (see Appendix F). Not all these elements are required, but the attributes listed are some of the evaluative criteria.

Table 7.3 Desirable Attributes of Documents Used to Answer an EBP Question

Applicability to topic: Does the document address the particular practice question of interest (same intervention, same population, same setting)?

Comprehensiveness of search strategy: Do the authors identify search strategies beyond the typical databases, such as PubMed, PsycInfo, and CINAHL? Are published and unpublished works included?

Methodology: Do the authors clearly specify how inclusion and exclusion criteria were applied? Do the authors specify how data were analyzed?

Consistency of findings: Are the findings organized and synthesized in a clear and cohesive manner? Are tables organized, and do they summarize the findings in a concise manner?

Study quality assessment: Do the authors clearly describe how the review addresses study quality?

Limitations: Are methodological limitations disclosed? Has double counting been assessed?

Conclusions: Do the conclusions appear to be based on the evidence and capture the complexity of the topic?

Collective expertise: Was the review and synthesis done by an expert or group of experts?

Adapted from Conn (2004), Stetler et al. (1998), and Whittemore (2005).

Organizational Experience

Organizational experience often takes the form of quality improvement (QI) and economic or program evaluations. These sources of evidence can occur at any level in the organization and can be internal to an EBP team’s organization or published reports from external organizations.

Quality Improvement Reports (Level V Evidence)

The Department of Health and Human Services defines quality improvement (QI) as “consisting of systematic and continuous actions that lead to measurable improvement in health care services and health status of targeted patient groups” (Connelly, 2018, p. 125). The term is used interchangeably with quality management, performance improvement, total quality management, and continuous quality improvement (Yoder-Wise, 2014). These terms refer to the application of improvement practices using tools and methods to examine workflows, processes, or systems within a specific organization with the aim of securing positive change in a particular service (Portela et al., 2015). QI uses process improvement techniques adapted from industry, such as Lean and Six Sigma frameworks, which employ incremental, cyclically implemented changes with Plan-Do-Study-Act (PDSA) cycles (Baker et al., 2014).

QI projects produce evidence of valuable results in local practice and may be published as quality improvement reports in journals (Carter et al., 2017). EBP teams are reminded that the focus of QI studies is to determine whether an intervention works to improve processes, and not necessarily for scientific advancement, which is the focus of health services research. Thus, lack of generalizability of results is a weakness, as is the lack of structured explanations of mechanisms of change and the low quality of reports (Portela et al., 2015).

During their review of nonresearch evidence, EBP team members should examine internal QI data relating to the practice question as well as QI initiatives based on similar questions published by peer institutions. As organizations become more mature in their QI efforts, they become more rigorous in the approach, the analysis of results, and the use of established measures as metrics (Newhouse et al., 2006). Organizations that may benefit from QI reports’ published findings need to make decisions regarding implementation based on the characteristics of their organization.

As the number of quality improvement reports has grown, so has concern about the quality of reporting. In an effort to reduce uncertainty about what information should be included in scholarly reports of health improvement, the Standards for Quality Improvement Reporting Excellence (SQUIRE) were published in 2008 and revised in 2015 (http://www.squire-statement.org). The SQUIRE guidelines list and explain items that authors should consider including in a report of system-level work to improve healthcare (Ogrinc et al., 2015). Although evidence obtained from QI initiatives is not as strong as that obtained by scientific inquiry, the sharing of successful QI stories has the potential to identify future EBP questions, QI projects, and research studies external to the organization.


An example of a quality improvement project is a report from an emergency department (ED) and medical intensive care unit (MICU) on transfer time delays of critically ill patients from the ED to the MICU (Cohen et al., 2015). Using a clinical microsystems approach, the existing practice patterns were identified, and multiple causes that contributed to delays were determined. The Plan-Do-Study-Act model was applied in each intervention to reduce delays. The intervention reduced transfer time by 48% by improving coordination in multiple stages. This Level V evidence is from one institution that implemented a quality improvement project.

Economic Evaluation (Level V Evidence)

Economic measures in healthcare facilities provide data to assess the costs associated with practice changes. Cost-savings assessments can be powerful information as the best practice is examined. In these partial economic evaluations, there is not a comparison of two or more alternatives but rather an explanation of the cost to achieve a particular outcome. Economic evaluations intended to evaluate quality improvement interventions in particular are mainly concerned with determining whether the investment in the intervention is justifiable (Portela et al., 2015).

Full economic evaluations apply analytic techniques to identify, measure, and compare the costs and outcomes of two or more alternative programs or interventions (Centers for Disease Control and Prevention [CDC], 2007). Costs in an economic analysis framework are the value of the resources, either theoretical or monetary, associated with each treatment or intervention; the consequences are the health effects of the intervention (Gray & Wilkinson, 2016). A common economic evaluation in healthcare decision-making is cost-effectiveness analysis, which compares the costs of alternative interventions that produce a common health outcome in terms of clinical units (e.g., years of life). Although the results of such an analysis can provide justification for a program, empirical evidence can provide support for an increase in program funding or a switch from one program to another (CDC, 2007). Another type of full economic evaluation is cost-benefit analysis, in which both costs and benefits (or health effects) are expressed in monetary units (Gomersall et al., 2015). An EBP team can find reports of cost-effectiveness and economic evaluations (Level V) in published data or internal organizational reports. One example is “The Value of Reducing Hospital-Acquired Pressure Ulcer Prevalence” (Spetz et al., 2013), a study that assessed the cost savings associated with implementing nursing approaches to prevent hospital-acquired pressure ulcers (HAPUs).
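The arithmetic behind these two evaluation types is straightforward. The following Python sketch contrasts them using entirely hypothetical figures invented for illustration; neither the numbers nor the function names come from the chapter or its sources:

```python
# Illustrative sketch only: all costs and outcomes below are hypothetical.

def cost_effectiveness_ratio(delta_cost, delta_effect):
    """Incremental cost per unit of clinical outcome (e.g., per life-year gained)."""
    return delta_cost / delta_effect

def net_benefit(benefit, cost):
    """Cost-benefit analysis: both sides expressed in monetary units."""
    return benefit - cost

# Hypothetical comparison of two pressure-ulcer prevention programs
cost_a, effect_a = 120_000.0, 40.0   # program A: total cost ($), life-years gained
cost_b, effect_b = 180_000.0, 52.0   # program B

icer = cost_effectiveness_ratio(cost_b - cost_a, effect_b - effect_a)
print(f"Incremental cost-effectiveness: ${icer:,.0f} per life-year gained")

# Cost-benefit: monetized harm avoided minus program cost
print(f"Net benefit of program B: ${net_benefit(250_000.0, cost_b):,.0f}")
```

Note the difference in interpretation: a cost-effectiveness ratio is meaningful only relative to alternatives or to a willingness-to-pay threshold, whereas a positive net benefit can be read directly in monetary terms.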

Financial data can be evaluated as listed on the JHNEBP Nonresearch Evidence Appraisal Tool (Appendix F). When reviewing reports that include economic analyses, examine the aim, method, measures, results, and discussion for clarity.

Carande-Kulis et al. (2000) recommend that standard inclusion criteria for economic studies have an analytic method and provide sufficient detail regarding the method and results. It is necessary to assess the methodological quality of studies addressing questions about cost savings and cost-effectiveness (Gomersall et al., 2015). The Community Guide “Economic Evaluation Abstraction Form” (2010), which can be used to assess the quality of economic evaluations, suggests considering the following questions:

■ Was the study population well described?

■ Was the question being analyzed well defined?

■ Did the study define the time frame?

■ Were data sources for all costs reported?

■ Were data sources and costs appropriate with respect to the program and population being tested?

■ Was the primary outcome measure clearly specified?

■ Did outcomes include the effects or unintended outcomes of the program?

■ Was the analytic model reported in an explicit manner?

■ Were sensitivity analyses performed?

When evaluating an article with a cost analysis, it is important to recognize that not all articles that include an economic analysis are strictly financial evaluations.
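The abstraction-form questions above lend themselves to a simple tally when a team reviews a report. The following sketch is an invented illustration: the questions are quoted from the form, but the yes/no scoring scheme is not part of the Community Guide instrument:

```python
# Hypothetical tally of the Community Guide abstraction-form questions.
# The scoring scheme is illustrative only, not part of the form itself.

APPRAISAL_QUESTIONS = [
    "Was the study population well described?",
    "Was the question being analyzed well defined?",
    "Did the study define the time frame?",
    "Were data sources for all costs reported?",
    "Were data sources and costs appropriate to the program and population?",
    "Was the primary outcome measure clearly specified?",
    "Did outcomes include unintended effects of the program?",
    "Was the analytic model reported explicitly?",
    "Were sensitivity analyses performed?",
]

def summarize(answers):
    """answers: dict mapping question -> True/False; returns (criteria met, total)."""
    met = sum(1 for q in APPRAISAL_QUESTIONS if answers.get(q))
    return met, len(APPRAISAL_QUESTIONS)

# Example: a report that satisfies every item except the sensitivity analysis
answers = {q: True for q in APPRAISAL_QUESTIONS}
answers["Were sensitivity analyses performed?"] = False
met, total = summarize(answers)
print(f"{met}/{total} appraisal criteria met")
```

A tally like this is only a screening aid; unmet items should prompt a closer read of the report rather than automatic exclusion.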

Some may use rigorous research designs and should be appraised using the Research Evidence Appraisal Tool (see Appendix E). For example, a report by Yang, Hung, and Chen (2015) evaluated the impact of different nurse staffing models on patient safety, quality of care, and nursing costs. Three mixed models of nurse staffing, in which the proportion of nurses relative to nurse aides was 76% (n = 213), 100% (n = 209), and 92% (n = 245), were applied during three different periods between 2006 and 2010. Results indicated that units with a 76% proportion of RNs made fewer medication errors and had a lower rate of ventilation weaning, and units with a 92% RN proportion had a lower rate of bloodstream infections. The 76% and 92% RN groups showed increased urinary tract infections and nursing costs (Yang et al., 2015). After reviewing this study, the EBP team would discover that it was actually a descriptive, retrospective cohort design, which is why using research appraisal criteria to judge the strength and quality of the evidence would be more appropriate.

Program Evaluation (Level V Evidence)

Program evaluation is the systematic assessment of all components of a program through the application of evaluation approaches, techniques, and knowledge to improve the planning, implementation, and effectiveness of programs (Chen, 2006). To understand program evaluation, we must first define “program.” A program has been described as the mechanism and structure used to deliver an intervention or a set of synergistically related interventions (Issel, 2016). Programs can take a variety of forms but have in common the need for thorough planning to identify appropriate interventions and the development of organizational structure to implement the program interventions effectively. Monitoring and evaluation should follow so that findings can be used not only for continued assessment and refinement of the program and its interventions but also to measure the position, validity, and outcomes of the program (Issel, 2016; Whitehead, 2003).

Although program evaluations are commonly conducted within a framework of scientific inquiry and designed as research studies, most internal program evaluations are less rigorous (Level V). Frequently, they comprise pre- or post-implementation data at the organizational level accompanied by qualitative reports of personal satisfaction with the program. For example, in a program evaluation of a patient navigation program, program value and effectiveness were assessed in terms of timeliness of access to cancer care, resolution of barriers, and satisfaction in 55 patients over a six-month period (Koh et al., 2010). While these measures may be helpful in assessing this particular program, they are not standard, accepted measures that serve as benchmarks; thus, close consideration of the methods and findings is crucial when the EBP team considers this type of evidence (AHRQ, 2018).

Expert Opinion (Level V Evidence)

Expert opinions are another potential source of valuable information. Consisting of views or judgments by subject matter experts, based on their combined knowledge and experience in a particular topic area, they can include case reports, commentary articles, podcasts, written or oral correspondence, and letters to the editor or “op-ed” pieces. Assessing the quality of this evidence requires that the EBP team do due diligence in vetting the author’s expert status.

Characteristics to consider include education and training, existing body of work, professional and academic affiliations, and previous publications and communications in the area of interest. For instance, is the author recognized by state, regional, national, or international groups for their expertise? To what degree have their publications been cited by others? Do they have a history of being invited to give lectures or speak at conferences about the issue?

One exemplar is an article by Davidson and Rahman (2019), who share their expert opinion on the evolving role of the Clinical Nurse Specialist (CNS) in critical care. This article could be rated as high quality because both authors are experienced CNSs, are doctorally prepared, belong to several well-established professional organizations, and are leaders in their respective specialties.

Case Report (Level V Evidence)

Case reports are among the oldest forms of medical scholarship and, although appreciation for them has waxed and waned over time (Nissen & Wynn, 2014), they have recently seen a revival (Bradley, 2018). Providing a clinical description of a particular patient, visit, or encounter, case reports can help support the development of clinicians’ judgment, critical thinking, and decision-making (Oermann & Hays, 2016). They frequently involve unique, interesting, or rare presentations and illustrate successful or unsuccessful care delivery (Porcino, 2016). Multiple case reports can be presented as a case series, comparing and contrasting various aspects of the clinical issue of interest (Porcino, 2016). Case reports and case series are limited in their generalizability, lack controls, and entail selection bias (Sayre et al., 2017); being based on experiential, nonresearch evidence, they are categorized as Level V.

Case studies, in comparison, are more robust, intensive investigations of a case.

They make use of quantitative or qualitative data, can employ statistical analyses, investigate the case of interest over time, or evaluate trends. Thus, case studies are considered quantitative, qualitative, or mixed-methods research (Sandelowski, 2011) and should be evaluated as such. (See Chapter 6 for more information about the different types of research.)

Community Standard (Level V Evidence)

For some EBP topics, it is important to gather information on community practice standards. To do so, the team identifies clinicians, agencies, or organizations to contact for relevant insights, determines their questions, and prepares to collect the data in a systematic manner. There are myriad ways to access communities: email, social media, national or regional conferences, online databases, or professional organization listserv forums. For instance, the Society of Pediatric Nurses has a web-based discussion forum for members to query other pediatric nurses on various practice issues and current standards (http://www.pedsnurses.org/p/cm/ld/fid=148).

In an example of investigating community standards, Johns Hopkins University School of Nursing students were assisting with an EBP project asking: “Does charge nurse availability during the shift affect staff nurse satisfaction with workflow?” An EBP team member contacted local hospitals to determine whether charge nurses had patient assignments. Students developed a data sheet with questions about the healthcare facility, the unit, the staffing pattern, and staff comments about satisfaction. The students reported the number of units contacted and responses, the information source, and the number of sources using the Nonresearch Evidence Appraisal Tool (Appendix F). Additionally, this approach provided an opportunity to network with other clinicians about a clinical issue.

Clinician Experience (Level V Evidence)

Clinician experience, gained through first-person involvement with and observation of clinical practice, is another possible EBP information source. In the increasingly complex, dynamic healthcare environment, interprofessional teams must work collaboratively to provide safe, high-quality, and cost-effective care. This collaboration allows many opportunities for collegial discussion and sharing of past experiences. Newer clinicians tend to rely more heavily on structured guidelines, protocols, and decision aids while increasing their practical skill set. With time and exposure to various practice situations, experiences build clinical expertise, allowing for an intuitive and holistic understanding of patients and their care needs. When seeking clinician experience to inform an EBP project, the team should evaluate the information source’s experiential credibility, the clarity of the opinion expressed, and the degree to which evidence from various experienced clinicians is consistent.

Patient/Consumer Experience (Level V Evidence)

Patients are consumers of healthcare, and the term consumer also refers to a larger group of individuals using health services in a variety of settings. A patient-centered healthcare model recognizes an individual’s unique health needs and desired outcomes as the driving force of all healthcare decisions and quality measurements (NEJM Catalyst, 2017). Patient-centered healthcare expects that patients and families play an active, collaborative, and shared role in decisions, both at an individual level and a system level.
Professional organizations and healthcare institutions are increasingly incorporating patient and family expertise into guidelines and program development. Guidelines that do not recognize the importance of the patient’s lived experience are set up to fail when they do not meet the needs of patients and families. Unique characteristics related to personal and cultural values shape an individual’s experience of health and their goals for health (DelVecchio Good & Hannah, 2015).

The expert healthcare provider incorporates patient preferences into clinical decision-making by asking the following questions:

■ Are the research findings and nonresearch evidence relevant to this particular patient’s care?

■ Have all care and treatment options based on the best available evidence been presented to the patient?

■ Has the patient been given as much time as necessary to allow for clarification and consideration of options?

■ Have the patient’s expressed preferences and concerns been considered when planning care?

Answering these questions requires ethical practice and respect for the patient’s autonomy. Healthcare providers should also carefully assess the patient’s and family’s level of understanding and provide additional information or resources if needed. Combining sensitivity to and understanding of individual patient needs with thoughtful application of the best evidence leads to optimal patient-centered outcomes. The mission of the Patient-Centered Outcomes Research Institute (PCORI), established by the Affordable Care Act of 2010, is to “help people make informed healthcare decisions, and improve healthcare delivery and outcomes, by producing and promoting high-integrity, evidence-based information that comes from research guided by patients, caregivers, and the broader healthcare community” (PCORI, n.d., para. 2). To achieve this goal, PCORI engages stakeholders (including patients, clinicians, researchers, payers, industry, purchasers, hospitals and healthcare systems, policymakers, and educators) in the components of comparative-effectiveness research through stakeholder input, consultation, collaboration, and shared leadership (PCORI, n.d.).

Engaging consumers of healthcare in EBP goes beyond individual patient encounters. Consumer organizations can play a significant role in supporting the implementation and utilization of EBP. Consumer-led activities can take the form of facilitating research to expedite equitable adoption of new and existing best practices, promoting policies for the development and use of advocacy tool kits, and influencing provider adoption of EBP (DelVecchio Good & Hannah, 2015). Many consumer organizations focus on patient safety initiatives, such as Campaign Zero and preventable hospital harms, or the Josie King Foundation and medical errors. In examining the information provided by consumers, the EBP team should consider the credibility of the individual or group. What segment and volume of the consumer group do they represent? Do their comments and opinions provide any insight into your EBP question?

Best Practices Companies

A relatively recent addition to sources of evidence and best practices is companies that provide consultative business development services. One example, founded in 1979, is The Advisory Board Company, whose current mission is to improve healthcare by providing evidence and strategies for implementing best practices (Advisory Board, 2020). The Advisory Board Company has many specific subgroups based on clinical specialty (e.g., cardiovascular, oncology, imaging) and profession (physician executive and nursing executive, among others). Membership allows access to a wide variety of resources, including best practices research, strategic and leadership consultation, and organizational benchmarking using proprietary databases (Advisory Board, 2020). Benefits of using this type of data in developing EBP include the power of collective, international experience and the ability to source data from one’s own institution, but EBP teams should also consider the cost of membership as well as the inherent focus on organizational efficiency, which may introduce some degree of bias.
Recommendations for Healthcare Leaders

Time and resource constraints compel leaders to find creative ways to support the integration of new knowledge into clinical practice. The amount of time the average staff member can devote to gathering and appraising evidence is limited.

Therefore, finding the most efficient way to gain new knowledge should be a goal of EBP initiatives. Healthcare leaders should not only support staff education initiatives that teach how to read and interpret nonresearch evidence but also become familiar themselves with the desired attributes of such information so that they can serve as credible mentors in the change process.

Another challenge for clinicians is combining the contributions of the two evidence types (research and nonresearch) in making patient care decisions. According to Melnyk and Fineout-Overholt (2006), no “magic bullet” or standard formula exists with which to determine how much weight should be given to each of these factors when making patient care decisions. It is not sufficient to apply a standard rating system that grades the strength and quality of evidence without determining whether the recommendations made by the best evidence are compatible with the patient’s values and preferences and the clinician’s expertise. Healthcare leaders can best support EBP by providing clinicians with the knowledge and skills necessary to appraise quantitative and qualitative research evidence within the context of nonresearch evidence. Only through continuous learning can clinicians and care teams gain the confidence needed to incorporate the broad range of evidence into the more targeted care of individual patients.

Summary

This chapter describes nonresearch evidence and strategies for evaluating it and recommends approaches for building clinicians’ capacity to appraise nonresearch evidence to inform their practice. Nonresearch evidence includes summaries of evidence (clinical practice guidelines, consensus or position statements, literature reviews); organizational experience (quality improvement and financial data); expert opinion (individual commentary or opinion, case reports); community standards; clinician experience; and consumer experience. This evidence provides important information for practice decisions.

For example, consumer preference is an essential element of the EBP process with the increased focus on patient-centered care. In summary, although nonresearch evidence does not have the rigor of research evidence, it does provide important information for informed practice decisions.

References

Abrahamson, K. A., Fox, R. L., & Doebbeling, B. N. (2012). Facilitators and barriers to clinical practice guideline use among nurses. American Journal of Nursing, 12(7), 26–35. https://doi.org/10.1097/01.NAJ.0000415957.46932.bf

Advisory Board. (2020). About us. https://www.advisory.com/en/about-us

Agency for Healthcare Research and Quality. (2018). Patient self-management support programs: An evaluation. https://www.ahrq.gov/research/findings/final-reports/ptmgmt/evaluation.html

The AGREE Research Trust. (2013). The AGREE II instrument. http://www.agreetrust.org/wp-content/uploads/2013/10/AGREE-II-Users-Manual-and-23-item-Instrument_2009_UPDATE_2013.pdf

Alonso-Coello, P., Irfan, A., Sola, I., Delgado-Noguera, M., Rigau, D., Tort, S., Bonfil, X., Burgers, J., & Shunemann, H. (2010). The quality of clinical practice guidelines over the past two decades: A systematic review of guideline appraisal studies. BMJ Quality & Safety, 19(6), e58. https://doi.org/10.1136/qshc.2010.042077

Altman, D. G., & Simera, I. (2016). A history of the evolution of guidelines for reporting medical research: The long road to the EQUATOR Network. Journal of the Royal Society of Medicine, 109(2), 67–77. https://doi.org/10.1177/0141076815625599

Baker, K. M., Clark, P. R., Henderson, D., Wolf, L. A., Carman, M. J., Manton, A., & Zavotsky, K. E. (2014). Identifying the differences between quality improvement, evidence-based practice, and original research. Journal of Emergency Nursing, 40(2), 195–197. https://doi.org/10.1016/j.jen.2013.12.016

Bradley, P. J. (2018). Guidelines to authors publishing a case report: The need for quality improvement. ACR Case Reports, 2(4). https://doi.org/10.21037/acr.2018.04.02

Brouwers, M. C., Kho, M. E., Browman, G. P., Burgers, J. S., Cluzeau, F., Feder, G., Fervers, B., Graham, I. D., Hanna, S. E., & Makarski, J. (2010). Development of the AGREE II, part 1: Performance, usefulness and areas for improvement. Canadian Medical Association Journal, 182(10), 1045–1062. https://doi.org/10.1503/cmaj.091714

Carande-Kulis, V. G., Maciosek, M. V., Briss, P. A., Teutsch, S. M., Zaza, S., Truman, B. I., Messonnier, M. L., Pappaioanou, M., Harris, J. R., & Fielding, J. (2000). Methods for systematic reviews of economic evaluations for the guide to community preventive service. American Journal of Preventive Medicine, 18(1S), 75–91. https://doi.org/10.1016/s0749-3797(99)00120-8

Carper, B. A. (1978). Fundamental patterns of knowing in nursing. ANS Advances in Nursing Science, 1(1), 13–24. https://doi.org/10.1097/00012272-197810000-00004

Carter, E. J., Mastro, K., Vose, C., Rivera, R., & Larson, E. L. (2017). Clarifying the conundrum: Evidence-based practice, quality improvement, or research? The clinical scholarship continuum. Journal of Nursing Administration, 47(5), 266–270. https://doi.org/10.1097/NNA.0000000000000477

Centers for Disease Control and Prevention. (2007). Economic evaluation of public health preparedness and response efforts. http://www.cdc.gov/owcd/EET/SeriesIntroduction/TOC.html

Chen, H. T. (2006). A theory-driven evaluation perspective on mixed methods research. Research in the Schools, 13(1), 75–83.
Cohen, R. I., Kennedy, H., Amitrano, B., Dillon, M., Guigui, S., & Kanner, A. (2015). A quality improvement project to decrease emergency department and medical intensive care unit transfer times. Journal of Critical Care, 30(6), 1331–1337. https://doi.org/10.1016/j.jcrc.2015.07.017

Community Guide economic evaluation abstraction form, Version 4.0. (2010). https://www.thecommunityguide.org/sites/default/files/assets/EconAbstraction_v5.pdf

Conn, V. S. (2004). Meta-analysis research. Journal of Vascular Nursing, 22(2), 51–52. https://doi.org/10.1016/j.jvn.2004.03.002

Connelly, L. (2018). Overview of quality improvement. MEDSURG Nursing, 27(2), 125–126.

Dahm, P., Yeung, L. L., Galluci, M., Simone, G., & Schunemann, H. J. (2009). How to use a clinical practice guideline. The Journal of Urology, 181(2), 472–479. https://doi.org/10.1016/j.juro.2008.10.041

Davidson, P. M., & Rahman, A. R. (2019). Time for a renaissance of the clinical nurse specialist role in critical care? AACN Advanced Critical Care, 30(1), 61–64. https://doi.org/10.4037/aacnacc2019779

DelVecchio Good, M.-J., & Hannah, S. D. (2015). “Shattering culture”: Perspectives on cultural competence and evidence-based practice in mental health services. Transcultural Psychiatry, 52(2), 198–221. https://doi.org/10.1177/1363461514557348

Diekemper, R. L., Patel, S., Mette, S. A., Ornelas, J., Ouellette, D. R., & Casey, K. R. (2018). Making the GRADE: CHEST updates its methodology. Chest, 153(3), 756–759. https://doi.org/10.1016/j.chest.2016.04.018

Emergency Care Research Institute (ECRI). (2020). https://guidelines.ecri.org/about-trust-scorecard/

EQUATOR Network. (n.d.). Reporting guidelines for main study types. EQUATOR Network: Enhancing the QUAlity and Transparency of Health Research. https://www.equator-network.org

Fervers, B., Burgers, J. S., Haugh, M. C., Brouwers, M., Browman, G., Cluzeau, F., & Philip, T. (2005). Predictors of high quality clinical practice guidelines: Examples in oncology. International Journal for Quality in Health Care, 17(2), 123–132. https://doi.org/10.1093/intqhc/mzi011

Gomersall, J. S., Jadotte, Y. T., Xue, Y., Lockwood, S., Riddle, D., & Preda, A. (2015). Conducting systematic reviews of economic evaluations. International Journal of Evidence Based Healthcare, 13(3), 170–178. https://doi.org/10.1097/XEB.0000000000000063

Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

Gray, A. M., & Wilkinson, T. (2016). Economic evaluation of healthcare interventions: Old and new directions. Oxford Review of Economic Policy, 32(1), 102–121.

Institute of Medicine. (2011). Clinical practice guidelines we can trust. The National Academies Press. http://www.nationalacademies.org/hmd/Reports/2011/Clinical-Practice-Guidelines-We-Can-Trust/Standards.aspx

Issel, L. M. (2016). Health program planning and evaluation: What nurse scholars need to know. In J. R. Bloch, M. R. Courtney, & M. L. Clark (Eds.), Practice-based clinical inquiry in nursing for DNP and PhD research: Looking beyond traditional methods (1st ed., pp. 3–16). Springer Publishing Company.
Joshi, G. P., Benzon, H. T., Gan, T. J., & Vetter, T. R. (2019). Consistent definitions of clinical practice guidelines, consensus statements, position statements, and practice alerts. Anesthesia & Analgesia, 129(6), 1767–1769. https://doi.org/10.1213/ANE.0000000000004236

Koh, C., Nelson, J. M., & Cook, P. F. (2010). Evaluation of a patient navigation program. Clinical Journal of Oncology Nursing, 15(1), 41–48. https://doi.org/10.1188/11.CJON.41-48

Kuehn, B. M. (2011). IOM sets out “gold standard” practices for creating guidelines, systematic reviews. JAMA, 305(18), 1846–1848.

Lopez-Olivo, M. A., Kallen, M. A., Ortiz, Z., Skidmore, B., & Suarez-Almazor, M. E. (2008). Quality appraisal of clinical practice guidelines and consensus statements on the use of biologic agents in rheumatoid arthritis: A systematic review. Arthritis & Rheumatism, 59(11), 1625–1638. https://doi.org/10.1002/art.24207

Melnyk, B. M., & Fineout-Overholt, E. (2006). Consumer preferences and values as an integral key to evidence-based practice. Nursing Administration Quarterly, 30(2), 123–127. https://doi.org/10.1097/00006216-200604000-00009

Mims, J. W. (2015). Targeting quality improvement in clinical practice guidelines. Otolaryngology—Head and Neck Surgery, 153(6), 907–908. https://doi.org/10.1177/0194599815611861

NEJM Catalyst. (2017, January 1). What is patient-centered care? NEJM Catalyst. https://catalyst.nejm.org/doi/abs/10.1056/CAT.17.0559

Neumann, I., Santesso, N., Akl, E. A., Rind, D. M., Vandvik, P. O., Alonso-Coello, P., Agoritsas, T., Mustafa, R. A., Alexander, P. E., Schünemann, H., & Guyatt, G. H. (2016). A guide for health professionals to interpret and use recommendations in guidelines developed with the GRADE approach. Journal of Clinical Epidemiology, 72, 45–55. https://doi.org/10.1016/j.jclinepi.2015.11.017

Newhouse, R. P., Pettit, J. C., Poe, S., & Rocco, L. (2006). The slippery slope: Differentiating between quality improvement and research. Journal of Nursing Administration, 36(4), 211–219. https://doi.org/10.1097/00005110-200604000-00011

Nissen, S. E. (2017). Conflicts of interest and professional medical associations: Progress and remaining challenges. JAMA, 317(17), 1737–1738. https://doi.org/10.1001/jama.2017.2516

Nissen, T., & Wynn, R. (2014). The clinical case report: A review of its merits and limitations. BMC Research Notes, 7, 264. https://doi.org/10.1186/1756-0500-7-264

Oermann, M. H., & Hays, J. C. (2016). Writing for publication in nursing (3rd ed.). Springer.

Ogrinc, G., Davies, L., Batalden, P., Davidoff, F., Goodman, D., & Stevens, D. (2015). SQUIRE 2.0. http://www.squire-statement.org

Patient-Centered Outcomes Research Institute. (n.d.). Our vision & mission. https://www.pcori.org/about-us/our-vision-mission

Peters, M. D. J., Godfrey, C. M., Khalil, H., McInerny, P., Parker, D., & Soares, C. B. (2015). Guidance for conducting systematic scoping reviews. International Journal of Evidence-Based Healthcare, 13(3), 141–146. https://doi.org/10.1097/xeb.0000000000000050

Porcino, A. (2016). Not birds of a feather: Case reports, case studies, and single-subject research. International Journal of Therapeutic Massage & Bodywork, 9(3), 1–2. https://doi.org/10.3822/ijtmb.v9i3.334


Portela, M. C., Pronovost, P. J., Woodcock, T., Carter, P., & Dixon-Woods, M. (2015). How to study improvement interventions: A brief overview of possible study types. BMJ Quality and Safety, 24(5), 325–336. https://doi.org/10.1136/bmjqs-2014-003620

Ransohoff, D. F., Pignone, M., & Sox, H. C. (2013). How to decide whether a clinical practice guideline is trustworthy. JAMA, 309(2), 139–140. https://doi.org/10.1001/jama.2012.156703

Sandelowski, M. (2011). "Casing" the research case study. Research in Nursing & Health, 34, 153–159. https://doi.org/10.1002/nur.20421

Sayre, J. W., Toklu, H. Z., Ye, F., Mazza, J., & Yale, S. (2017). Case reports, case series—From clinical practice to evidence-based medicine in graduate medical education. Cureus, 8(8), e1546. https://doi.org/10.7759/cureus.1546

Senn, S. J. (2009). Overstating the evidence—Double counting in meta-analysis and related problems. BMC Medical Research Methodology, 9, 10. https://doi.org/10.1186/1471-2288-9-10

Shekelle, P. G. (2018). Clinical practice guidelines: What's next? JAMA, 320(8), 757–758. https://doi.org/10.1001/jama.2018.9660

Sniderman, A. D., & Furberg, C. D. (2009). Why guideline-making requires reform. JAMA, 301(4), 429–431. https://doi.org/10.1001/jama.2009.15

Snyder, H. (2019). Literature review as a research methodology: An overview and guidelines. Journal of Business Research, 104, 333–339. https://doi.org/10.1016/j.jbusres.2019.07.039

Spetz, J., Brown, D. S., Aydin, C., & Donaldson, N. (2013). The value of reducing hospital-acquired pressure ulcer prevalence: An illustrative analysis. Journal of Nursing Administration, 43(4), 235–241. https://doi.org/10.1097/NNA.0b013e3182895a3c

Stetler, C. B., Morsi, D., Rucki, S., Broughton, S., Corrigan, B., Fitzgerald, J., Giuliano, K., Havener, P., & Sheridan, E. A. (1998). Utilization-focused integrative reviews in a nursing service. Applied Nursing Research, 11(4), 195–206. https://doi.org/10.1016/s0897-1897(98)80329-7

Sullivan, T. (2018, May 5). Physicians and industry: Fix the relationships but keep them going. Policy & Medicine. https://www.policymed.com/2011/02/physicians-and-industry-fix-the-relationships-but-keep-them-going.html

Toronto, C. E., Quinn, B. L., & Remington, R. (2018). Characteristics of reviews published in nursing literature. Advances in Nursing Science, 41(1), 30–40. https://doi.org/10.1097/ANS.0000000000000180

Tunkel, D. E., & Jones, S. L. (2015). Who wrote this clinical practice guideline? Otolaryngology—Head and Neck Surgery, 153(6), 909–913. https://doi.org/10.1177/0194599815606716

Welch, V. A., Akl, E. A., Guyatt, G., Pottie, K., Eslava-Schmalbach, J., Ansari, M. T., de Beer, H., Briel, M., Dans, T., Dans, I., Hultcrantz, M., Jull, J., Katikireddi, S. V., Meerpohl, J., Morton, R., Mosdol, A., Petkovic, J., Schünemann, H. J., Sharaf, R. N., … Tugwell, P. (2017). GRADE equity guidelines 1: Considering health equity in GRADE guideline development: Introduction and rationale. Journal of Clinical Epidemiology, 90, 59–67. https://doi.org/10.1016/j.jclinepi.2017.01.014

Whitehead, D. (2003). Evaluating health promotion: A model for nursing practice. Journal of Advanced Nursing, 41(5), 490–498. https://doi.org/10.1046/j.1365-2648.2003.02556.x

Whittemore, R. (2005). Combining evidence in nursing research: Methods and implications. Nursing Research, 54(1), 56–62. https://doi.org/10.1097/00006199-200501000-00008


Whittemore, R., & Knafl, K. (2005). The integrative review: Updated methodology. Journal of Advanced Nursing, 52(5), 546–553. https://doi.org/10.1111/j.1365-2648.2005.03621.x

Yang, P. H., Hung, C. H., & Chen, Y. C. (2015). The impact of three nursing staffing models on nursing outcomes. Journal of Advanced Nursing, 71(8), 1847–1856. https://doi.org/10.1111/jan.12643

Yoder-Wise, P. S. (2014). Leading and managing in nursing (6th ed.). Mosby.
