Formative Assessment in Postsecondary Quantitative Reasoning Courses

Budhathoki, Deependra

Abstract Details

2022, Doctor of Philosophy (PhD), Ohio University, Curriculum and Instruction Mathematics Education (Education).
Quantitative reasoning is an individual's ability to understand quantitative information in context, represent and model such information in various forms, solve real-world problems using mathematical and statistical knowledge, and communicate ideas using quantitative arguments. Quantitative Reasoning (QR) courses are increasingly popular as gateway mathematics courses for students whose majors lie outside science, technology, engineering, and mathematics. Professional organizations and policy documents recommend incorporating innovative assessment approaches in QR courses, including continuous and formative assessment. However, many QR instructors are unsure about what types of assessments to use and how to implement them to evaluate and support student learning. Moreover, previous studies, including my pilot research, have found that many QR instructors use traditional, summative assessments.

In this dissertation study, I explored instructors' intentions, behaviors, and reflections on formative assessment, aiming to answer two sets of research questions: (a) What kinds of assessments do QR instructors use? To what extent do the instructors use their assessments as stated in their course syllabus? (b) How do QR instructors implement their assessments to support student learning? In particular, how do instructors implement questioning, feedback, and peer- and self-assessment? What are their related perceptions and experiences? To examine these questions, I employed a collective case study design, recruiting eight instructors from eight public postsecondary institutions in Ohio that offered Ohio Transfer 36-approved QR courses. The data sources included the instructors' course documents, semistructured interviews with the instructors, and observations of their teaching. I used six Instructional Quality Assessment (IQA) rubrics for the class observations. I conducted individual and cross-case analyses to compare and contrast the instructors' perceptions and practices of formative assessment.

The individual case analyses revealed that the QR instructors planned and implemented several categories of assessments with a wide range of frequencies and weights. The instructors also generally implemented one or more categories of assessments differently from how they were described in the course syllabus. The instructors' implemented assessments were often innovative but not always collaborative; still, there was a strong positive association between their collaborative and innovative assessments. After several failed attempts at identifying a key factor to use as the basis for grouping instructors for cross-case analysis, post hoc individual case analyses revealed instructional autonomy as the critical variable for QR assessments. By instructional autonomy, I mean that the QR instructor had the freedom to select, develop, and implement assessments as they wished rather than being required to follow institutional or departmental mandates. Based on this variable, I placed each instructor into either the autonomous group or the nonautonomous group. I then conducted a cross-case analysis of the instructors in these two groups, comparing and contrasting their perceptions and practices of using formative assessments, mainly questioning, feedback, and peer- and self-assessment. Via this cross-case analysis, I developed 13 themes.

The themes indicated that most instructors had positive experiences using the three formative assessment strategies of (a) questioning, (b) feedback, and (c) peer- and self-assessment in their QR teaching. However, some instructors in both groups were concerned about the effectiveness of these assessment strategies in online teaching. In addition, the instructors in the autonomous and nonautonomous groups differed substantially in their perceptions and practices of these three formative assessment strategies. Their perceptions and practices also varied depending on whether they used individual or group projects. The autonomous instructors mainly selected and implemented collaborative and innovative assessments, made mid-term adjustments to their planned assessments, used cognitively rich tasks even in traditional assessments, and earned higher scores on the six IQA rubrics than the nonautonomous instructors. A post hoc cross-case analysis revealed group projects as a critical variable for formative assessment in QR. The instructors using group projects provided open-ended, less structured, cognitively rich tasks and employed more formative assessment strategies than those using individual projects. In addition, the group projects generally were the instructors' most collaborative and innovative assessments and addressed reasoning-based QR competencies.

I confirmed and extended my initial conceptual model based on the findings of the individual, cross-case, and post hoc analyses. In the revised model, I use instructional autonomy and group projects as stimulants for formative assessment in QR teaching. I also refined my initial conceptualization of quantitative reasoning: I now perceive the ideal QR course as one that addresses the 4Cs: (a) Quantitative Content and Skills, (b) Critical Thinking, (c) Real-World Contexts, and (d) Collaboration. The key recommendations for future pedagogical practice include providing instructional autonomy to QR instructors, or involving them in deciding the assessment categories for their QR sections, and assigning group projects with cognitively rich tasks. The key recommendations for future research include examining the impact of instructional autonomy in other courses and at other instructional levels, examining the associations between collaborative and innovative QR assessments, and replicating this study among instructors with similar instructional settings.
Gregory Foley (Committee Chair)
Allyson Hallman-Thrasher (Committee Member)
Mathew Felton-Koestler (Committee Member)
Gordon Brooks (Committee Member)
407 p.

Recommended Citations

  • Budhathoki, D. (2022). Formative Assessment in Postsecondary Quantitative Reasoning Courses [Doctoral dissertation, Ohio University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou165903392100886

    APA Style (7th edition)

  • Budhathoki, Deependra. Formative Assessment in Postsecondary Quantitative Reasoning Courses. 2022. Ohio University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=ohiou165903392100886.

    MLA Style (8th edition)

  • Budhathoki, Deependra. "Formative Assessment in Postsecondary Quantitative Reasoning Courses." Doctoral dissertation, Ohio University, 2022. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou165903392100886

    Chicago Manual of Style (17th edition)