Evaluation of Research Regarding Audience Response Systems

Thomas D. Monteverde
Technology in the Classroom
Dr. Devon Duhaney
20 November 2009

__Evaluation of Research Regarding Audience Response Systems__

Assessment is a crucial component of education. Student evaluations are necessary to determine whether students comprehend the course content. Students require feedback from professors to determine whether their personal conceptualization of the curriculum is correct, and professors require feedback from students to determine the success of the pedagogical methods applied in the classroom. Assessments are the key to both students' and professors' understanding of the course curriculum. Testing also has positive effects on student practice and review of course content. Students tend to study harder before tests, so through continuous testing professors are able to positively influence student study time. To be successful, student evaluations must connect to the course content, must give immediate feedback, and must be performed continuously throughout the duration of the course.

It has been shown that there is a direct correlation between the frequency of assessment and student academic achievement. A study by Robert L. Bangert-Drowns and colleagues reviewed 40 research articles that studied the effects of frequent testing. The studies had to meet four criteria in order to be considered for the review: they had to take place in real classrooms, they had to compare groups that received identical instruction but different testing frequencies, the tests administered had to be conventional classroom tests, and any study with significant methodological flaws was excluded from the analysis (Bangert-Drowns, p. 91). In thirty-five of the forty articles analyzed, student learning was measured through achievement examinations administered at the end of the research period. The statistical data showed that the frequency of tests had a positive impact on final examination results. The figure above shows that increasing testing frequency in a course from five to twenty tests yields an average gain of six percentile points in student achievement. The gains grow smaller as testing frequency increases, but students still benefit from each additional assessment. The final conclusion of the analysis is that teachers can improve the affective outcomes of instruction by testing students more often (Bangert-Drowns, p. 97).

Science, and chemistry in particular, can benefit greatly from increased assessment. Chemistry concepts are generally abstract and difficult to comprehend, so students need more opportunities to receive feedback from their professors in order to be successful. Assessment, however, can be extremely time consuming. Testing takes up valuable class time and inundates professors with enormous amounts of grading. Large-enrollment introductory courses are especially difficult to assess. There are, however, several new technologies and pedagogical methods suggested in recent research that could mitigate the demand that assessing large-enrollment classes places on professors. Analysis of recent research on new methods and technologies in education is essential in determining proper pedagogical modifications to course lectures.

Adjunct questioning incorporated into daily instruction can be very beneficial to both students and educators. Questions placed sporadically throughout a lecture can keep students engaged and give them the opportunity to practice and process new material. Stimulating student practice and engagement can be difficult in courses like general chemistry, where many of the students are not chemistry majors. The ability to challenge and engage each individual student can be an almost insurmountable obstacle in large lecture halls. Recent technology in the form of personal student response systems has given many professors the ability to question and give feedback to large groups of students in a short period of time.

Personal response systems have drawn negative responses from critics. Implementing the system can be quite costly for the educational institution, and the cost of the personal devices can be an expensive burden on individual students as well. Critics also feel that polling the class to assess comprehension takes away valuable lecture time that should be devoted to covering the course content. Academic integrity has also been an issue in discussions of integrating personal response technology into the classroom, but many feel that the positive aspects of the system far outweigh the negative.

Klaus Woelk, a chemistry professor at Missouri University of Science and Technology, recently developed a taxonomy of personal student response system uses in large-enrollment introductory courses. Personal student response systems can be used to take attendance, assess lecture preparedness, create interest in the lecture content, ensure that students are paying attention, evaluate conceptual comprehension, master real-world application of lecture content, and initiate interest in curriculum outside of the classroom. In addition to the taxonomy, Woelk has also illustrated the effectiveness of personal student response systems in a situation similar to a large lecture setting. Attempting to demonstrate the usefulness of personal response systems while speaking at the Fifteenth Annual Teaching Renewal Conference in Columbia, Missouri, on February 25, 2005, Woelk conducted an experiment with the conference audience. Personal response devices were handed out to half the audience, and after a brief introduction to some chemistry concepts the audience was asked a question; eighty-eight percent of the responses were correct. Woelk then repeated the chemistry concepts previously introduced, under the pretense that he was attempting to improve the comprehension of the previously tested portion of the audience. Before polling the audience a second time, he asked that the personal response devices be passed to members of the audience who had not had one previously. Only fifty-six percent of the second portion of the audience responded correctly to the second multiple-choice question, despite having received the instruction twice. Because the audience of the second poll did not anticipate the test, their engagement level was significantly lower (Woelk, p. 1402). The test remarkably demonstrated the well-known effect that the expectation of being quizzed leads to improved engagement (Woelk, p. 1402).

The taxonomy of personal student response systems was an interesting categorization of the different uses of the technology, but no research was presented to illustrate its effectiveness in a classroom setting. Woelk's conference experiment was little more than an interesting anecdote. There was no information regarding the sample size, and the intellectual makeup of the population tested was unknown. The conference anecdote suggested that personal response devices are useful additions to large lecture courses, but it did not give definitive evidence of the devices' effectiveness in the classroom. There are, however, other studies with statistical data to corroborate claims of the technology's effectiveness.

At the University of Kentucky's College of Pharmacy, professors were having instructional difficulties with the physiological chemistry/molecular biology sequence of the pharmacy program. Instructors of the course decided to integrate a personal response system into instruction in order to engage students, monitor student progress, improve students' grades, and improve students' dispositions towards the course and its content. Failure rates for the sequence were the highest in the college's curriculum, and the courses had received student evaluations commonly below the college mean for the two years prior to integration of the technology. The study compares the year in which the technology was integrated with the two previous years. Students' grades and students' dispositions were the criteria analyzed for the study.

The study included the one hundred and thirty students registered for the two courses and their instructors. The technology was used during the first twenty-eight lectures of the sequence by inserting six to seven questions intermittently throughout each lecture. The questions were used to keep students engaged and to provide on-the-spot assessment for professors, enabling them to deviate from the planned lecture if necessary. At the end of the sequence, eighty-seven percent of the students in the course completed an anonymous in-class questionnaire; all respondents preferred the use of personal response systems in the lecture, and ninety-eight percent felt they benefited from the discussions following the personal response questions. The overall rating of the course increased from a 2.4 to a 3.3 on a four-point scale. A student focus group, consisting of twelve students chosen at random, met twice a semester to discuss the positive and negative aspects of the course. The facilitator of the focus group summarized the content of the group meetings and relayed the information to the instructors of the course. The reports of the focus group were favorable, and its members felt that they benefited from the inclusion of personal response technology in the course. To illustrate the effectiveness of the technology, the mean final grade of students in the year of personal response system use was compared with that of the two preceding years. A table demonstrating this comparison is shown at the bottom of the previous page. The higher grades in this semester's course suggest that the instructional strategy contributed to a positive effect on student grades (Cain, p. 5). Attendance also improved with the integration of the technology: course absences diminished from twenty-five percent in the previous semesters to two percent during the semester when personal response technology was used.
Instructor self-assessment of the lecture modifications was also positive. Instructor evaluation of the course also rose, from a 2.7 to a 3.5 on a four-point scale. Instructors felt that personal response technology helped clarify misconceptions, improved the pace of instruction, and improved student-teacher relationships.

There are several criticisms that could be made regarding the study. A limitation of the grades analysis was that this was not a controlled experimental study, so no definitive conclusion can be drawn about the effects of the personal response strategy (Cain, p. 5). The dispositions of both professors and students were generally favorable towards the use of the devices, but the final mean grade data was inconclusive. The standard deviation of the mean final grades was 2.9, which was within the ninety-five percent confidence range, and the available data set was too small to support any viable statistical calculations. Other data comparisons were made that had no actual merit whatsoever. A comparison was made between physiological chemistry/molecular biology I, which did not use personal response devices, and physiological chemistry/molecular biology II, which did. The difference in content between the two courses is a large enough variable to dismiss this comparison entirely.

Personal response technology is new, and research regarding its integration is just beginning to become available. Long-term studies with large research populations have not yet had the chance to develop due to the newness of the technology, and as a result most research regarding the technology's positive effects on students' grades has not been indisputable. Despite the inconclusive effects that personal response systems have on student grades, they do have proven positive effects on both students' and professors' dispositions. An effective motivational tool that has a positive impact on student dispositions is a necessity in courses like chemistry, where motivation and engagement can be difficult.

__References__

Bangert-Drowns, R. L., Kulik, C. C., & Kulik, J. A. (1991). Effects of Frequent Classroom Testing. //Journal of Educational Research//, 85(2), 89-99.

Cain, J., Black, E. P., & Rohr, J. (2009). Instructional Design and Assessment: An Audience Response System Strategy to Improve Student Motivation, Attention, and Feedback. //American Journal of Pharmaceutical Education//, 73(2), Article 21.

Woelk, K. (2008). Optimizing the Use of Personal Response Devices (Clickers) in Large Enrollment Introductory Courses. //Journal of Chemical Education//, 85(10), 1400-1405.