In this work we present the results of incorporating a wireless classroom response system into laboratory practice. The study was conducted within the framework of the Introductory Physics Laboratory, a first-year undergraduate course at the Physics Department of the Aristotle University of Thessaloniki. In particular, a novel approach to the peer instruction method was implemented during two successive academic years in two classes attended by a total of 23 students. Nine questions from the Force Concept Inventory were used in order to record the didactic effectiveness of the approach. According to our results, and as measured by a gain index, the students perform much better than with the more widespread version of the peer instruction method.
Pierratos, T., Polatoglou, M. H. (2011). Enhancement of Peer Instruction in an Introductory Physics Laboratory Course Using Classroom Response Systems. Proceedings of the 11th International IHPST and 6th Greek History, Philosophy and Science Teaching Joint Conference, pp. 593-598. ISBN 978-960-458-325-6.
Introduction
Wireless Classroom Response Systems (CRS), often referred to as “clickers”, address two important issues in teaching, namely how to actively engage students in the educational process and how to assess in real time the status of conceptual understanding (Hake, 1998).
The main components of a CRS are a computer, a data projector to present concept tests (normally multiple-choice questions aimed at testing students’ understanding of a concept), a set of handsets (transmitters similar to television and video remote controls), sensors (receivers) that allow students to signal their responses to the concept test, and software that allows class responses to be collected and immediately displayed as a histogram for students to see (i.e. feedback to students).
In a regular classroom, feedback can be acquired by multiple means, including a show of hands, asking volunteers to share answers, or using colored cards to represent multiple-choice responses (Draper, Cargill, & Cutts, 2002; McCabe, 2006). However, these methods have notable disadvantages. A show of hands, for example, is limited because it is difficult to obtain a quick, accurate sense of class understanding, particularly in a large lecture. Furthermore, some students are inclined to copy the responses of others. In addition, when hands are lowered, information is lost (Abrahamson, 2006). Relying on volunteers is also restrictive because only the confident students raise their hands (Banks, 2006). Note also that with a show of hands or a request for volunteers, anonymity is lost. Using a CRS improves the feedback process by guaranteeing anonymity, ensures quick and efficient collection and summary of student responses, and prevents students from copying the answers of their peers.
The use of a CRS can offer real-time feedback to both instructors and students as to how well concepts are being understood (formative assessment) and alter the course of classroom instruction. Experienced teachers can quickly modify explanations or mode of instruction (contingent teaching) or students can gauge and discuss their understanding with their peers (peer-instruction). Extensive evidence suggests that using a CRS helps provide effective formative assessment (Bergtrom, 2006; Bullock et al., 2002; Caldwell, 2007; Dufresne & Gerace, 2004; Elliott, 2003).
After CRS feedback is presented to the class, students are able to compare their understanding with that of their fellow classmates. There is some evidence to suggest that students like to see how well they are doing relative to their peers (Caldwell, 2007; Draper & Brown, 2004). It is unclear from the research to date, though, why students like to compare responses. It could be that the use of a CRS promotes a competitive atmosphere, a goal that may not promote a strong sense of community. Alternatively, some students may want to monitor their progress, while others may want assurance that they are not alone in their misunderstanding of key concepts.
Nevertheless, many studies suggest that frequent and positive interaction occurs when CRSs are used (Banks, 2006; Beatty, 2004; Bergtrom, 2006; Caldwell, 2007; Trees & Jackson, 2007). Specifically, researchers have reported greater articulation of student thinking (Beatty, 2004), more probing questions, an increased focus on student needs (Siau et al., 2006), effective peer-to-peer discussions (Bergtrom, 2006), and active learning (Kennedy et al., 2006).
The use of a CRS increases the quantity and quality of class discussions, particularly when employed with a strategy known as ‘‘peer instruction” (Beatty, 2004; Brewer, 2004; Draper & Brown, 2004; Nicol & Boyle, 2003). Peer instruction occurs when a teacher presents a question using a CRS, collects student responses and presents the responses of the class, but does not provide the correct answer. Instead, the class is instructed to discuss possible solutions in pairs and the students are then given the opportunity to answer a second time. After the second round of answers, the issues are resolved through class discussion and clarifications from the instructor. The research indicates that students feel they are better able to discuss and calibrate their understanding of specific concepts when peer instruction is employed (Draper & Brown, 2004) and that they are more interested or engaged in concepts presented and discussed using a CRS (Bergtrom, 2006; Hu et al., 2006; Preszler et al., 2007; Simpson & Oliver, 2007). Some students prefer hearing explanations of CRS questions from their peers, who share a similar language and can therefore explain problems and solutions more effectively than the instructor (Caldwell, 2007; Nicol & Boyle, 2003). Other students claim that using a CRS pushes them to think more about the important concepts (Draper & Brown, 2004). Still others believe that the use of a CRS helps them discover and resolve misconceptions (d’Inverno et al., 2003).
In addition, extensive qualitative research suggests that learning performance increases as a result of using CRSs (Brewer, 2004; Caldwell, 2007). Many experimental studies report that classes using CRSs significantly outperform those using traditional lecture formats (Fagan, Crouch, & Mazur, 2002).
One drawback noted by several instructors is that not as many concepts can be addressed when using a CRS (Caldwell, 2007; Elliott, 2003). However, many of these same instructors acknowledge that reduced content coverage is more than compensated for by the depth of material that students truly understand (Elliott, 2003). So, using a CRS appears to emphasize the depth of student understanding, not the amount of material “covered”.
Given that all previous research in this area has been conducted in the USA or in countries other than Greece, our research allowed us to test the robustness of these instruction methods and tools in a different cultural and educational context. In addition, we implemented the CRS in a laboratory course attended each time by only a small number of students (11-12 students), in contrast to the usual case of addressing a large lecture-hall audience. In particular, the present study was conducted within the framework of the Introductory Physics Laboratory, a first-year undergraduate course at the Physics Department of the Aristotle University of Thessaloniki. Various teaching strategies were designed in order to cover different aspects of the educational process. For practical reasons we decided to implement only those strategies which could enhance peer learning and which are related to the pre-laboratory conceptual understanding of key concepts connected to the experimental part.
The wireless Classroom Response System
The system used in the present study is the Hitachi Verdict plus (Fig. 1). The system consists of 30 student control units, one teacher’s control unit and one transmitter/receiver. The teacher holds a control unit that has a display through which he or she receives the students’ answers and a keyboard which can be used to send messages to all or to individual students, to control the educational process by changing the displayed PowerPoint page, and to start or stop the voting process.
Each student’s clicker includes a similar display through which the student can receive messages, see the answer he or she has selected before actually submitting it, and get a report on the correctness of the answer if the teacher allows it. A unique identity number is assigned to each individual clicker.
In order to utilize the above-mentioned hardware, the Verdict plus 1.6.4.0 application was used. This application supports the use of PowerPoint presentations and can produce, after the educational process, analytical reports concerning each question, each student, or the whole class. The data can be exported in different standard formats, so the results can be further analyzed with appropriate tools.

Figure 1. Hitachi Verdict plus CRS.
Methodology
The introductory laboratory class consists of nine laboratory sessions of four hours each, with each session devoted to the study of one particular phenomenon. The sessions are designed in such a way as to give insight into physics notions. We chose to intervene using the CRS in six laboratory sessions in order to study students’ alternative ideas and how these ideas change during the laboratory work. The CRS was also utilized to assess whether it can promote teaching between peers (Mazur, 1997), i.e. working in groups (Fig. 2).

Figure 2. Students confer together in groups of three.
The intervention was implemented during two successive academic years, namely 2009-2010 and 2010-2011, in two classes attended by a total of 23 students (11 and 12 students, respectively).
For each class a short introduction was given to familiarize the students with the clickers and their use. Special emphasis was placed on explaining that anonymity was assured in the use of the CRS controls. In order to assist the analysis of the results and to follow the behavior of each student over the six sessions, we suggested that each student pick a clicker at random but always use the same control unit thereafter. In this way the identity of each particular student remained deliberately unknown, and the students could answer freely without having to worry that their answers would be used to grade them.
Various teaching strategies were implemented under the umbrella of the peer instruction methodology. Usually this methodology involves two rounds of answers from the students; in our novel version the students were asked to answer three times.
In particular, in the first round each student is asked to answer the posed question individually. While supplying their answers, the students could see on the screen not the particular answers given but only how many students had already answered. Immediately after the conclusion of the first answering round, the number of students who had chosen each answer was displayed in the form of a histogram, similar to the one in Fig. 3.
Without disclosing the correct answer, the students are asked to work in groups of three and to try to persuade each other of the correctness of their choice or, in case all the students within a group have chosen the same answer, to discuss why other students chose a different one. After 2-3 minutes the students are asked to answer the same question again and the distribution of all the answers is displayed once more. Each possible answer is then defended by a volunteer student, who tries to argue for his or her point of view. This triggers a broad discussion among all the students, who exchange arguments and reveal their way of thinking to the teacher. The students then answer the same question for a third time. The distribution of all the answers is displayed, the teacher discloses the correct answer and explains why the other answers are not correct. The students who have given wrong answers are asked to reconsider their statements and reflect on the possible weak points of their argumentation. The implementation of this novel peer instruction method aims to study the effectiveness of introducing the conversation among groups of students, complementing the conversation within each group. If this proves effective, the extra time required could be justified.
In order to pick questions to support each instructional module, standardized inventories such as the Force Concept Inventory (FCI) were used, as well as questions used during the last 10 years by the instructors who teach this particular course.
Results
The Introductory Physics Laboratory comprises nine instruction sessions, two of which are related to Newton’s first law and to projectile motion. The present study focuses on the intervention in these two particular sessions. Questions from standardized inventories such as the Force Concept Inventory (Hestenes, Wells & Swackhamer, 1992) are very suitable for such a study. According to the subject matter of the two sessions, we chose the questions numbered 4, 6, 8, 28, 26, 27, 3, 16 and 29, which in the following will be referred to by the numbers 1 to 9 in the above sequence. Figure 3 displays the distribution of the answers of the 23 students over the three rounds for question 6 of the FCI. Such a distribution can provide a teacher with valuable information about students’ misconceptions and about how hard it sometimes is to restructure them during instructional practice. However, because of the limited number of possible answers in a multiple-choice question like those of the FCI, it is quite difficult for every student’s misconception to be revealed. Although using a carefully constructed questionnaire like the FCI reduces this limitation, the extended discussions that took place among the students between the second and the third round of answering showed that this is not sufficient.
Figure 3. Distribution of the answers of the 23 students over the three rounds for question 6 of the FCI.
Figure 4 displays the percentage of correct answers given by the 23 students to the nine FCI questions during the two sessions. In agreement with other researchers (Crouch & Mazur, 2001) and with our previous work (Pierratos et al., 2010), it was recorded that the number of students who respond correctly increases after the interaction within their groups (2nd round) and especially after the interaction with other groups (3rd round).
Figure 4. Percentage of correct answers given by the 23 students to the nine FCI questions during the two sessions.
A possible measure of the average effectiveness of an instruction method in promoting conceptual understanding is the average normalized gain <g> (Hake, 1998). The latter is defined as the ratio of the actual average gain to the maximum possible average gain.
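Written out, and assuming the standard definition given by Hake (1998), the gain for a single question is computed from the percentage of correct answers before ($S_i$) and after ($S_f$) an instructional step as

$$ g = \frac{S_f - S_i}{100\% - S_i}, $$

so that, for example, a class moving from 40% to 70% correct answers corresponds to a gain of $g = (70-40)/(100-40) = 0.5$.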
Figure 5. Normalized gain for each of the nine questions asked.
Figure 5 displays the normalized gain for each of the nine questions asked, while Table 1 shows the percentage of correct initial responses (Si%), the percentage of correct responses after the interaction within the groups (Sf1%), the percentage of correct responses after the discussion among the groups, the normalized gain g1 due to the interaction within the groups (2nd round vs. 1st round) and, finally, the normalized gain gtot for the overall process.
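In the same notation, and writing $S_{f2}$ for the percentage of correct responses after the discussion among the groups (a symbol the table does not name explicitly), the two gain columns correspond to

$$ g_1 = \frac{S_{f1} - S_i}{100\% - S_i}, \qquad g_{tot} = \frac{S_{f2} - S_i}{100\% - S_i}. $$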
Conclusions
The peer instruction method has been established worldwide as a powerful method that engages all the students in the teaching process, leading to a higher level of conceptual understanding. However, various studies have shown that a high percentage of correct answers does not necessarily mean that the students really understand the concepts (Henriksen & Angell, 2010). The discussions that took place between the 2nd and the 3rd round of answering in this study, which is based on the novel three-round peer instruction, gave us the opportunity to show the students that correct answers are not always supported by correct reasoning. Inserting the 3rd round of answering thus provokes an extended exchange of arguments among all the students and drives them to better understanding. In addition, it provides the teacher with a valuable real-time tool for dealing with possible alternative students’ ideas. In this study we found that for some questions there is no noticeable increase in the gain index, although there was an improvement in the reasoning of the students. Consequently, this method helps students to develop metacognitive abilities and to self-evaluate their cognition.
It would be interesting to study the necessity of the discussion within the groups before the discussion in the whole class. Our research, which is currently under way, seems to indicate that the discussion within the groups helps students to express their ideas and argue for them in a more comfortable way. The implementation of the present approach to peer instruction was doubtless facilitated by the small size of our classes, and incorporating the same method in a large lecture hall might be more difficult. In our classes, however, it gave all students the opportunity to express their thoughts in detail and promoted their active and constructive participation.
Bibliography
Abrahamson, L. (2006). A brief history of networked classrooms: Effects, cases, pedagogy, and implications. In D. A. Banks (Ed.), Audience response systems in higher education. Hershey, PA: Information Science Publishing.
Banks, D. A. (2006). Reflections on the use of ARS with small groups. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 373–386). Hershey, PA: Information Science Publishing.
Beatty, I. (2004). Transforming student learning with classroom communication systems. EDUCAUSE Research Bulletin, 2004(3), 1–13.
Bergtrom, G. (2006). Clicker sets as learning objects. Interdisciplinary Journal of Knowledge and Learning Objects, 2.
Bullock, D. W., LaBella, V. P., Clinghan, T., Ding, Z., Stewart, G., & Thibado, P. M. (2002). Enhancing the student–instructor interaction frequency. The Physics Teacher, 40, 30–36.
Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. Life Sciences Education, 6(1), 9–20.
Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970–977.
D’Inverno, R., Davis, H., & White, S. (2003). Using a personal response system for promoting student interaction. Teaching Mathematics and Its Applications, 22(4), 163–169.
Draper, S. W., & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20(2), 81–94.
Draper, S. W., Cargill, J., & Cutts, Q. (2002). Electronically enhanced classroom interaction. Australian Journal of Educational Technology, 18, 13–23.
Dufresne, R. J., & Gerace, W. J. (2004). Assessing-to-learn: Formative assessment in physics instruction. The Physics Teacher, 42, 428–433.
Fagan, A. P., Crouch, C. H., & Mazur, E. (2002). Peer instruction: Results from a range of classrooms. The Physics Teacher, 40(4), 206–209.
Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics text data for introductory physics courses. American Journal of Physics, 66(1), 64–74.
Henriksen, E. K., & Angell, C. (2010). The role of ‘talking physics’ in an undergraduate physics class using an electronic audience response system. Physics Education, 45, 278-284.
Hestenes, D., Wells, M., & Swackhamer, G. (1992). Force Concept Inventory. The Physics Teacher, 30(3), 141–158.
Kennedy, G. E., Cutts, Q., & Draper, S. W. (2006). Evaluating electronic voting systems in lectures: Two innovative methods. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 155–174). Hershey, PA: Information Science Publishing.
Mazur, E. (1997). Peer Instruction: A User’s Manual. Upper Saddle River, NJ: Prentice-Hall.
McCabe, M. (2006). Live assessment by questioning in an interactive classroom. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 276–288). Hershey, PA: Information Science Publishing.
Nicol, D. J., & Boyle, J. T. (2003). Peer instruction versus class-wide discussion in large classes: A comparison of two interaction methods in the wired classroom. Studies in Higher Education, 28(4), 457–473.
Pierratos, T., Evagelinos, D., Polatoglou, H., & Valasiadis, O. (2010). Evaluation of educational activities and peer instruction by the use of a Classroom Response System. Proceedings of the 13th Panhellenic Conference of EEF, ISBN 978-060-9457-00-2 (in Greek).
Siau, K., Sheng, H., & Nah, F. (2006). Use of classroom response system to enhance classroom interactivity. IEEE Transactions on Education, 49(3), 398–403.
Trees, A. R., & Jackson, M. H. (2007). The learning environment in clicker classrooms: Student processes of learning and involvement in large university courses using student response systems. Learning, Media and Technology, 32(1), 21–40.