U.S. school teachers and their students seem caught in an endless loop of standardized tests.
Teachers spend countless hours preparing, administering and grading student assessments, while students are locked in a seemingly never-ending cycle of gearing up for and taking these tests. The assessments include not only college entrance exams but also the numerous tests required by the state of Florida.
Yet, despite all this time and attention, the assessments provide little information beyond a single score; they yield scant details about the specific areas where a student needs the most help.
UF College of Education’s Ren Liu and Anne Corinne Huggins-Manley are among the scholars trying to change that.
They recently published an award-winning research paper that shows how to retrofit current assessments to give teachers and students more meaningful and actionable information. Huggins-Manley, an assistant professor in the college’s Research and Evaluation Methodology (REM) program, serves as adviser to Liu, a doctoral candidate in REM.
[Photo: Ren Liu and Anne Corinne Huggins-Manley (https://education.ufl.edu/etc/files/2017/08/Liu-Manley-300x244.jpg)]
Their article recently won the annual best paper award from the Florida Educational Research Association (FERA). The award recognized the paper’s importance to the field of education, the soundness of its research and the quality of Liu’s presentation at the organization’s annual conference, said Robert Dedrick, professor of education at the University of South Florida and chair of the FERA award committee.
The paper, “Retrofitting Diagnostic Classification Models to Responses from IRT (Item Response Theory)-Based Assessment Forms,” was also recently published in the academic journal Educational and Psychological Measurement.
The scholars’ research provides a step-by-step method to revamp assessment tests using various tools and formulas of psychometrics, the science of measuring mental capacities and processes. Their 27-page paper is filled with the work of modern educational researchers: statistical models, tables, formulas and scatter plots.
“Developing and scoring assessments using new diagnostic measurement approaches is an increasingly important area in education research,” Liu said. “It can provide insights that allow teachers and their students to better understand specific areas where a student may need help.”
Consider this hypothetical example. Jane, a high school junior, scores a 1200 on her SAT college-admission test. Her score of 700 on the math portion of the test was pretty good. But the results fail to show that while she has mastered geometry, she struggles in algebra. By offering more precise measurements, a retrofitted test can show the student and teacher where to take action to improve Jane’s content knowledge, Liu said.
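What a retrofitted score report computes is easiest to see in a small sketch. The Python below is a minimal illustration, not the paper’s actual procedure: it applies a toy DINA (“deterministic inputs, noisy and”) model, one of the most common diagnostic classification models, to an invented five-item response pattern like Jane’s. The Q-matrix, slip and guessing parameters, and responses are all hypothetical.

```python
# Toy DINA-style diagnosis: invented Q-matrix, item parameters and responses.
import itertools
import numpy as np

# Q-matrix: which skill each item requires (columns: algebra, geometry).
Q = np.array([
    [1, 0],  # item 1 requires algebra
    [1, 0],  # item 2 requires algebra
    [0, 1],  # item 3 requires geometry
    [0, 1],  # item 4 requires geometry
    [1, 1],  # item 5 requires both
])
slip = np.full(5, 0.1)   # P(wrong answer despite holding every required skill)
guess = np.full(5, 0.2)  # P(right answer while missing a required skill)

# Hypothetical responses: algebra items missed, geometry items correct.
x = np.array([0, 0, 1, 1, 0])

# Score every possible skill profile (alpha) against the responses.
profiles = np.array(list(itertools.product([0, 1], repeat=2)))
posterior = []
for alpha in profiles:
    eta = np.all(alpha >= Q, axis=1)    # holds all skills item j requires?
    p = np.where(eta, 1 - slip, guess)  # P(correct) for each item
    posterior.append(np.prod(p**x * (1 - p)**(1 - x)))
posterior = np.array(posterior) / np.sum(posterior)  # uniform prior on profiles

# Marginal mastery probabilities -- the diagnostic report a single score hides.
for k, skill in enumerate(["algebra", "geometry"]):
    print(f"P(mastered {skill}) = {posterior[profiles[:, k] == 1].sum():.2f}")
```

Run on this pattern, the sketch puts roughly a 0.95 probability on geometry mastery and nearly zero on algebra, exactly the kind of detail a single 700 math score conceals.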
“Retrofitting also can inform teachers about areas to improve curriculum design and material preparation,” Liu said.
But why retrofit tests at all? Why not develop them from the ground up to yield such information?
“The reason we need to retrofit tests is that current tests are not developed under the diagnostic measurement framework,” Liu said. “The diagnostic measurement framework is a new psychometric tool that has developed a lot in research, but it is not yet ingrained in practice.”
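For readers curious what a model in this framework looks like, the DINA model used in the sketch above is typically written as follows; this is textbook notation, not necessarily the exact model the paper retrofits:

```latex
% DINA: eta flags whether student i holds every skill that item j requires.
\eta_{ij} = \prod_{k} \alpha_{ik}^{\,q_{jk}},
\qquad
P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i) = (1 - s_j)^{\eta_{ij}}\, g_j^{\,1 - \eta_{ij}}
```

Here α_ik indicates whether student i has mastered skill k, q_jk whether item j requires it, and s_j and g_j are item j’s slip and guessing probabilities. Retrofitting, roughly speaking, means writing down a Q-matrix for an existing test and estimating such a model from the responses the test has already collected.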
In the future, more tests will be developed and scored under the diagnostic measurement framework, he said. For now, with only a limited number of assessments built this way, scholars must retrofit the scoring of current assessments to obtain the more precise information.
Source: Ren Liu
Writer: Charles Boisseau, 352-273-4449