
QUESTION

Comparing Efficacy Research and Program Evaluation - Peer Responses

There needs to be a separate response to each peer's posting, and each response needs to be supported with at least two references.

1st Peer Posting

What differences do you note between efficacy research and program evaluation?

The primary difference between efficacy research and program evaluation is the scientific aspect. A program evaluation's “primary purpose is to provide data that can be used by decision makers to make valued judgements about the processes and outcomes of a program” (Sheperis, Young, & Daniels, 2010). In other words, it lets the agency know what needs to be changed to make the program effective for its clientele. Efficacy research, by contrast, is based on empirical data, which is essential to the scientific method. In efficacy research, clients are placed in controlled environments where interventions can be tested.

What are the key strengths of efficacy research?

The key strength of efficacy research is the scientific process. In the article “The Efficacy of Child Parent Relationship Therapy for Adopted Children with Attachment Disruptions,” the researchers wanted to test child parent relationship therapy (CPRT), which “is an empirically based, manualized counseling intervention for children presenting with a range of social, emotional, and behavioral issues” (Carnes-Holt & Bratton, 2014). The purpose was to test this intervention with adoptive families, so a control group was designed to test CPRT. The researchers used the Child Behavior Checklist-Parent Version (CBCL) and the Measurement of Empathy in Adult-Child Interaction (MEACI). Both are empirical tests: the CBCL measures the child's behavior problems as reported by the parents, whereas the MEACI is an operational measure of empathy between parent and child during play. These tests are conducted in controlled environments where no outside distractions are permitted and the researcher's hypothesis can be tested.

What are the key strengths of program evaluation?

The key strength of program evaluation is that the participants are the program's own clients, so the evaluation shows whether the interventions used are effective for them. This lets the researcher know what changes are needed for the agency to be successful. Surveys are used to collect data from the participants, their parents, the people who work with the clients, and the clients' caregivers, which captures the opinions of the people directly or indirectly receiving services. In the article “Evaluating Batterer Counseling Programs: A Difficult Task Showing Some Effects and Implications,” a multisite evaluation was conducted, and the participants were “administered a uniform set of background questionnaire, personality inventory (MCMI-III; Millon, 1994), and alcohol test (MAST; Selzer, 1971)” (Gondolf, 2004). This gave the researcher the opinions of the clientele across the four sites and showed which treatments were and were not working. The evaluation concluded that “the batterer programs, in our evaluation, appear to contribute to this outcome— there is a ‘program effect’” (Gondolf, 2004) and that “referral to the gender-based, cognitive–behavioral programs, moreover, seems to be appropriate for the majority of men” (Gondolf, 2004).

What contribution does each of these types of research make to the counseling field?

The contribution that efficacy research makes to the counseling field is scientific evidence that the interventions used with a client will work if they are utilized correctly. Efficacy research gives the counselor confidence in providing treatment interventions because there is evidence that they will help the client's mental health. Program evaluation aids the counselor in knowing which interventions are and are not working for the client population they serve. Program evaluations make sure the agency has the clients' best interests in mind and is using the best interventions and treatment planning to serve its clients. Program evaluation also helps the counselor increase their knowledge base of treatments, interventions, assessments, and diversity for the clients they serve. “Counselors recognize the need for continuing education to acquire and maintain a reasonable level of awareness of current scientific and professional information in their fields of activity. Counselors maintain their competence in the skills they use, are open to new procedures, and remain informed regarding best practices for working with diverse populations” (ACA, 2014).

What is a point from any of the articles that you can apply in your current work setting or your ideal counseling fieldwork setting?

A main point that stood out to this learner was the subjectivity of program evaluation: “Evaluation is, consequently, not an objective or purely scientific process that produces unbiased and conclusive results. In this view, a program evaluation is a process with a subjective outcome.” Thus, the researcher must be careful not to impose his or her own values and views when evaluating a program or interpreting the data. According to the ACA Code of Ethics (2014), Standard A.4.b. states, “Counselors are aware of—and avoid imposing—their own values, attitudes, beliefs, and behaviors. Counselors respect the diversity of clients, trainees, and research participants and seek training in areas in which they are at risk of imposing their values onto clients, especially when the counselor’s values are inconsistent with the client’s goals or are discriminatory in nature.”

References

American Counseling Association. (2014). ACA code of ethics. Alexandria, VA: Author.

Carnes-Holt, K., & Bratton, S. C. (2014). The efficacy of child parent relationship therapy for adopted children with attachment disruptions. Journal of Counseling & Development, 92(3), 328-337. doi:10.1002/j.1556-6676.2014.00160.x

Gondolf, E. W. (2004). Evaluating batterer counseling programs: A difficult task showing some effects and implications. Aggression and Violent Behavior, 9(6), 605-631. doi:10.1016/j.avb.2003.06.001

Sheperis, C. J., Young, J. S., & Daniels, M. H. (2010). Counseling research: Quantitative, qualitative, and mixed methods [Bookshelf Online]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781323128015/cfi/0

2nd Peer Posting

U1D1_KDM Powell_Comparing Efficacy Research and Program Evaluation

Differences

Efficacy research involves a general investigation to determine whether a certain program is effective (Royse, Thyer, & Padgett, 2016). Program evaluation involves assessing whether the program is supplying what the client needs to attain his or her goals (Royse, Thyer, & Padgett, 2016). Each serves a variety of purposes. Program evaluations are practical: they do not rely on theory or academics to be performed and can be conducted for one person or a group (Royse, Thyer, & Padgett, 2016). Efficacy research offers the researcher the answers needed to understand whether a program is doing what it set out to do. The effectiveness or usefulness of a program can mean the difference between expanding a program and creating change. Program evaluation looks at the efficacy research to determine whether the information supplied can be utilized in the program. With this in mind, a program can be made better, which ultimately means the people involved in the program receive better service toward their needs.

Key strengths

Efficacy research digs deep through a process and examines whether the information presented is meaningful or misguided. The amount of information available can offer a clearer view of the courses of action that can be followed to make a client's participation in a program successful. The amount of research compiled offers information about the pitfalls and viable assets of a program, because if the research was done correctly it could be replicated and come to the same conclusion, which would produce validity in what was found (Royse, Thyer, & Padgett, 2016). Understanding how the research was handled and what is revealed within that research can serve as a viable foundation for future research.

With regard to program evaluation, a program that works in one setting may not work in another setting, even though clients may be receiving the same or a similar program (Royse, Thyer, & Padgett, 2016). Program evaluation looks at how the program relates to the clients in that particular setting. As mentioned, with this evaluation, changes can be made so the program becomes more beneficial. The developers and facilitators of a program can review whether the interventions used are what is best for their client population. Also, when a program is based on research, the evaluation can assess what research was used as the basis for the decisions about the interventions being used.

Contribution

There are as many programs out there as there is research. There are options that can be utilized to help in counseling, but one specific thing stands out as definitive: how and which interventions are being used. Gondolf (2004) maintains that how effective a program is depends on the interventions incorporated into it. Research and evaluation can set a program apart from all others. Gondolf (2004) also believed that defining a program is a major issue. With the use of research and evaluation, defining the program can dictate which client base would be better served, the most suitable setting, and the effectiveness of the program as a whole.

Point

Information that is out there about the evaluation of programs may not be entirely truthful. Gondolf (2004) expressed that producing definitive results can be overwhelming, and also that results can be fabricated to produce validation. There should be consideration as to how the results are interpreted based on the research. Sometimes it is good to do one's own research and evaluation. Relying solely on others' research and evaluation could put at risk the good that one is trying to do, as well as one's reputation.

References:

Royse, D., Thyer, B. A., & Padgett, D. K. (2016). Program evaluation: An introduction to an evidence-based approach (6th ed.). Boston, MA: Cengage Learning.
