
Comparison of Gun Control Policies Across the United States

Student Name
College of Safety and Emergency Services
Columbia Southern University
Dr. Charles T. Kelly, Jr.
Date

Author Note

Abstract

Your abstract should be no more than 250 words, and you do not indent in the abstract.

Keywords:

Comparison of Gun Control Policies Across the United States

Introduction

Your introduction should be two or three solid paragraphs long.

Review of the Literature

A qualitative review of the literature is perhaps the most critical phase in the research process in qualitative, quantitative, and hybrid research studies. While there are untold benefits derived from a review of the literature, perhaps the most important is the knowledge gained from others who have studied and written about the specific topic or issue under review. The review of the literature is much more than a simple summary of a few collected works. It is a complex undertaking: an explanation of a collection of published and/or unpublished documents available from various sources on a specific topic that optimally involves summarization, analysis, evaluation, and synthesis of those documents. Your review of the literature should be several pages long.

Research Method

The research design for this paper is qualitative and will be a White Paper. A White Paper is a written document used by the writer to inform readers or an interested audience on a particular topic or issue.

Further, it uses authoritative knowledge (government reports, after-action reports, congressional testimony, policy and procedure manuals, books, monographs, academic/scientific peer-reviewed journal articles, etc.). This paper attempts to provide information relative to gun control policies across the United States. More specifically, this paper will focus on the perceived problems associated with gun possession, analyze the Second Amendment, document any need for gun control legislative change, examine the history of gun control, examine the potential causes of the problems, evaluate previous strategies, laws, or interventions, clarify the relevant stakeholders, conduct a criminal justice systems analysis, and identify any barriers to change and support for change.

If the writer had employed a quantitative research method or a hybrid method of investigation, two research questions and multivariate equations would have been necessary. However, since this is a qualitative White Paper, no statistical data were collected, analyzed, or reported. It is possible, however, that research articles that include statistics will be used and some of that data discussed.

Discussion

This section is where the writer provides his or her contribution to the subject and the literature. This is where you investigate or examine the meaning, importance, and relevance of your conclusions based upon your extensive review of the literature, which might include after-action reports, government documents, peer-reviewed journal articles (academic and scientific), books, monographs, etc. Under no circumstances should you use any of the following: newspapers (NY Times, Washington Post, USA Today, LA Times, etc.), news services (ABC, CBS, NBC, CNN, FOX, MSNBC, etc.), cable television, anything Wiki, the Associated Press, Reuters, Time, Newsweek, or items identified as unknown. Additionally, you are prohibited from using blogs, .coms, or anything that is not academic in nature.
While there are many ways to write a discussion section, the simple rule is to interpret the findings of those you focused on in the Review of the Literature, determine the implications of what others found, recognize and articulate the limitations of those findings, and then provide recommendations based upon what you have learned and tested. Sometimes the discussion section will blend into the conclusions, but that is fine.

Summary of Key Findings

This section can be four to five paragraphs, maybe slightly more.

Recommendations

This is entirely left up to the writer.

Conclusion

This section should be no less than one page.

References

(These are samples that have nothing to do with gun control)

Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic course evaluations: Does an online delivery system influence student evaluations? The Journal of Economic Education, 37(1), 21–37. https://doi.org/10.3200/JECE.37.1.21-37

Berk, R. A. (2012). Top 20 strategies to increase the online response rates of student rating scales. International Journal of Technology in Teaching and Learning, 8(2), 98–107.

Berk, R. A. (2013). Top 10 flashpoints in student ratings and the evaluation of teaching. Stylus.

Boysen, G. A. (2015a). Preventing the overinterpretation of small mean differences in student evaluations of teaching: An evaluation of warning effectiveness. Scholarship of Teaching and Learning in Psychology, 1(4), 269–282. https://doi.org/10.1037/stl0000042

Boysen, G. A. (2015b). Significant interpretation of small mean differences in student evaluations of teaching despite explicit warning to avoid overinterpretation. Scholarship of Teaching and Learning in Psychology, 1(2), 150–162. https://doi.org/10.1037/stl0000017

Boysen, G. A., Kelly, T. J., Raesly, H. N., & Casner, R. W. (2014). The (mis)interpretation of teaching evaluations by college faculty and administrators. Assessment & Evaluation in Higher Education, 39(6), 641–656.
https://doi.org/10.1080/02602938.2013.860950

Buller, J. L. (2012). Best practices in faculty evaluation: A practical guide for academic leaders. Jossey-Bass.

Dewar, J. M. (2011). Helping stakeholders understand the limitations of SRT data: Are we doing enough? Journal of Faculty Development, 25(3), 40–44.

Dommeyer, C. J., Baum, P., & Hanna, R. W. (2002). College students' attitudes toward methods of collecting teaching evaluations: In-class versus on-line. Journal of Education for Business, 78(1), 11–15. https://doi.org/10.1080/08832320209599691

Dommeyer, C. J., Baum, P., Hanna, R. W., & Chapman, K. S. (2004). Gathering faculty teaching evaluations by in-class and online surveys: Their effects on response rates and evaluations. Assessment & Evaluation in Higher Education, 29(5), 611–623. https://doi.org/10.1080/02602930410001689171

Feistauer, D., & Richter, T. (2016). How reliable are students' evaluations of teaching quality? A variance components approach. Assessment & Evaluation in Higher Education, 42(8), 1263–1279. https://doi.org/10.1080/02602938.2016.1261083

Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002). Heuristics and biases: The psychology of intuitive judgment. Cambridge University Press. https://doi.org/10.1017/CBO9780511808098

Griffin, T. J., Hilton, J., III, Plummer, K., & Barret, D. (2014). Correlation between grade point averages and student evaluation of teaching scores: Taking a closer look. Assessment & Evaluation in Higher Education, 39(3), 339–348. https://doi.org/10.1080/02602938.2013.831809

Jaquett, C. M., VanMaaren, V. G., & Williams, R. L. (2016). The effect of extra-credit incentives on student submission of end-of-course evaluations. Scholarship of Teaching and Learning in Psychology, 2(1), 49–61. https://doi.org/10.1037/stl0000052

Jaquett, C. M., VanMaaren, V. G., & Williams, R. L. (2017). Course factors that motivate students to submit end-of-course evaluations.
Innovative Higher Education, 42(1), 19–31. https://doi.org/10.1007/s10755-016-9368-5

Morrison, R. (2011). A comparison of online versus traditional student end-of-course critiques in resident courses. Assessment & Evaluation in Higher Education, 36(6), 627–641. https://doi.org/10.1080/02602931003632399

Nowell, C., Gale, L. R., & Handley, B. (2010). Assessing faculty performance using student evaluations of teaching in an uncontrolled setting. Assessment & Evaluation in Higher Education, 35(4), 463–475. https://doi.org/10.1080/02602930902862875

Nulty, D. D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), 301–314. https://doi.org/10.1080/02602930701293231

Palmer, M. S., Bach, D. J., & Streifer, A. C. (2014). Measuring the promise: A learning-focused syllabus rubric. To Improve the Academy: A Journal of Educational Development, 33(1), 14–36. https://doi.org/10.1002/tia2.20004

Reiner, C. M., & Arnold, K. E. (2010). Online course evaluation: Student and instructor perspectives and assessment potential. Assessment Update, 22(2), 8–10. https://doi.org/10.1002/au.222

Risquez, A., Vaughan, E., & Murphy, M. (2015). Online student evaluations of teaching: What are we sacrificing for the affordances of technology? Assessment & Evaluation in Higher Education, 40(1), 210–234. https://doi.org/10.1080/02602938.2014.890695

Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83(4), 598–642. https://doi.org/10.3102/0034654313496870

Stanny, C. J., Gonzalez, M., & McGowan, B. (2015). Assessing the culture of teaching and learning through a syllabus review. Assessment & Evaluation in Higher Education, 40(7), 898–913. https://doi.org/10.1080/02602938.2014.956684

Stark, P. B., & Freishtat, R. (2014). An evaluation of course evaluations. ScienceOpen Research.
https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AOFRQA.v1

Stowell, J. R., Addison, W. E., & Smith, J. L. (2012). Comparison of online and classroom-based student evaluations of instruction. Assessment & Evaluation in Higher Education, 37(4), 465–473. https://doi.org/10.1080/02602938.2010.545869

Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological Bulletin, 76(2), 105–110. https://doi.org/10.1037/h0031322

Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty's teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22–42. https://doi.org/10.1016/j.stueduc.2016.08.007

Venette, S., Sellnow, D., & McIntyre, K. (2010). Charting new territory: Assessing the online frontier of student ratings of instruction. Assessment & Evaluation in Higher Education, 35(1), 101–115. https://doi.org/10.1080/02602930802618336

Webb, E. J., Campbell, D. T., Schwartz, R. D., & Sechrest, L. (1966). Unobtrusive measures: Nonreactive research in the social sciences. Rand McNally.