
QUESTION

Need someone who knows statistics and can write papers.

Topic: Project Assignments 3 and 4: The Assessment Structure and the Measurement Model.

The item structure that I have chosen for my research study is interviewing. I chose this assessment structure for several reasons. First, according to Phellas, Bloch, and Seale (2011), "the interview is a more flexible form than the questionnaire, it can be used to gather information of greater depth and can be more sensitive to contextual variation in meaning" (p. 183). Given the life themes I am seeking to explore, this structure would be most effective. It will allow me to describe and identify the meaning of the central theme of my study, in this case the effects of social promotion versus retention. Additionally, this type of item structure will be especially useful in eliciting the kind of information that I, as the researcher, am seeking. Further, this assessment structure will afford me the latitude to ask follow-up questions of respondents when necessary, such as when there is ambiguity in a respondent's answer.

Within this framework, the other interview types have their own advantages, such as the ease of analyzing and comparing responses in a standardized open-ended interview, or the simplicity of set questions and answers in a closed, fixed-response interview. Based on my literature review and the goals of my study, however, it is my firm belief that a general interview guide is the ideal structure for my purposes. This assessment structure is ideal because it ensures that the general areas of information I seek are collected from each person interviewed, which in turn keeps the focus on the topic being researched, social promotion versus retention. The approach has the added advantage of allowing me, the interviewer, enough freedom and adaptability in drawing out the information I seek from the interviewee. I am fully aware that this form of assessment has a few disadvantages, such as the difficulty of analyzing the responses and the possible introduction of interviewer bias; however, my goal in conducting this research study will not be hampered or restricted by those constraints. I will devise the necessary tools to compensate for these extraneous variables in my item responses and data analysis.

Proposed Items:

1. What are the criteria for social promotion?

2. Who are the individuals that are socially promoted?

3. What are the demographics of the socially promoted? Please be specific.

4. Are there disparities among the individuals being socially promoted?

5. How do you account for such disparities, if they exist?

6. Is there data/evidence of the drawbacks of social promotion at your level?

7. What are the short-term benefits of social promotion?

8. What are the long-term benefits of social promotion?

9. What are the short-term negative effects of social promotion?

10. What are the long-term negative effects of social promotion?

11. What are the criteria for retention?

12. What are the demographics of students being retained? Please be specific.

13.  Based on your data, are there certain demographics that are retained more than others?

14. How do you account for such a disparity if it does exist?

15. Are there short-term benefits to retention?

16.  If so, what are those benefits?

17. Are there long-term benefits to retention?

18.  If so, what are those benefits?

19. What are your feelings about social promotion?

20. What are your feelings about retention?

21. Who are the people involved in the decision making process for retention vs. promotion?

22. What are their roles in the decision making process?

23.  Based on your experience with the process, are there any stigmas attached to those being socially promoted or retained?

24. What are those stigmas?

25. Is there any other information you would like to share that may shed light on this issue?

Instrument Reliability and Validity

Reliability, by definition, is the degree to which the instrument is consistent. That consistency can be determined by the amount of error in the instrument; according to Wilson (2005), "there are many possible sources of measurement errors" (p. 140). Where there is more error there is less reliability, and where there is less error there is more reliability. To check for reliability in my instrument, I will conduct a split-half reliability review. This test will enable me to detect errors in measurement. According to Korb (2013), "The test for reliability will be split into two halves. Then these two halves will be correlated to determine how reliable the instrument is within itself" (p. 183). Because this procedure measures the internal consistency of an instrument, a reliability coefficient will be calculated for every variable measured by more than two items. The items will then be grouped by the variables they measure, and a low split-half coefficient (under 0.70) will be taken to indicate a poorly developed instrument. In that case, corrective measures will be taken, such as redesigning the instrument or modifying parts of it, for example by restating questions or moving the interview to a more suitable venue.
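Below is a minimal sketch of how such a split-half coefficient, with the Spearman-Brown correction, could be computed once the interview responses are coded numerically. All data, sample sizes, item counts, and variable names here are hypothetical placeholders for illustration only, not results or procedures from this study.

    # Minimal sketch of a split-half reliability check (hypothetical data).
    import numpy as np

    # Hypothetical scored responses: rows = respondents, columns = 10 items
    # coded on a 1-5 scale after the interview answers are quantified.
    rng = np.random.default_rng(0)
    scores = rng.integers(1, 6, size=(30, 10))

    # Split the items into odd and even halves and total each half per respondent.
    half_a = scores[:, 0::2].sum(axis=1)
    half_b = scores[:, 1::2].sum(axis=1)

    # Correlate the two halves, then apply the Spearman-Brown correction
    # to estimate the reliability of the full-length instrument.
    r_halves = np.corrcoef(half_a, half_b)[0, 1]
    spearman_brown = (2 * r_halves) / (1 + r_halves)

    print(f"Half-to-half correlation: {r_halves:.2f}")
    print(f"Spearman-Brown estimate:  {spearman_brown:.2f}")
    # A corrected estimate below roughly 0.70 would flag the instrument for revision.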

Validity, by definition, is how well the instrument measures what it is supposed to measure. To check for validity, the researcher will closely examine three types of validity evidence for the instrument: content validity, criterion validity, and construct validity. First, for content validity, per Korb (2013), "the researcher will check to see whether the items on the instrument covers the entire content that it should cover" (p. 9), in this case information related to social promotion versus retention. Next, the researcher will check for criterion validity, in this case how well the instrument designed for this study relates to other instruments that measure similar variables. This will be achieved by calculating the correlation of my instrument with a set criterion. In this process I will be looking for evidence of convergent, divergent, and predictive validity. My instrument would demonstrate validity if it correlates strongly with measures of related constructs (convergent and predictive validity) and only weakly with measures of unrelated constructs (divergent validity); strong correlations with unrelated measures would indicate a lack of validity. Again, as in the test of reliability, if the instrument is found to be lacking in any factor contributing to validity, it will be redesigned or modified to compensate for those shortcomings.
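As a rough illustration of the criterion-related checks described above, the sketch below correlates hypothetical instrument totals with one related and one unrelated measure. The measure names, sample size, and data are invented placeholders, not instruments or findings from this study.

    # Minimal sketch of a criterion-related validity check (hypothetical data).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 30

    # Hypothetical total scores from the interview instrument.
    instrument_total = rng.normal(50, 10, n)

    # Hypothetical criteria: an established measure of the same construct
    # (for convergent evidence) and an unrelated measure (for divergent evidence).
    related_measure = instrument_total + rng.normal(0, 5, n)
    unrelated_measure = rng.normal(50, 10, n)

    convergent_r = np.corrcoef(instrument_total, related_measure)[0, 1]
    divergent_r = np.corrcoef(instrument_total, unrelated_measure)[0, 1]

    # Evidence of validity: a high correlation with the related measure
    # and a low correlation with the unrelated one.
    print(f"Convergent correlation: {convergent_r:.2f}")
    print(f"Divergent correlation:  {divergent_r:.2f}")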

References

Korb, K. (2013). Conducting educational research: Steps in conducting a research study. Retrieved from https://pt.scribd.com/document/

Phellas, C., Bloch, A., & Seale, C. (2011). Structured methods: Interviews, questionnaires and observation. Thousand Oaks, CA: Sage.

Wilson, M. (2005). Constructing measures: An item response modeling approach. New York, NY: Psychology Press.
