Teachers sometimes find it challenging to engage students in speaking activities. Some students shy away from the task because they do not have the confidence for public speaking, especially in a second language, or they simply find the tasks inauthentic and thus see little value in participating. However, researchers claim that there is more to students’ poor participation in class speaking activities than the above. Juzwik, Borsheim-Black, Caughlan, and Heintz (2014) maintain that while student-led and student-centered talk should be the ultimate goal of any educational assignment, teacher talk tends to dominate. Teacher talk is necessary and often required as an organizational tool, but when it dominates, it robs students of the opportunity to participate and to improve their speaking skills. Juzwik et al. (2014) also point out that oftentimes the students who are more willing to speak dominate the discussion and make it harder for the less outspoken ones to participate even when they want to.
Thus comes the concept of student participation in assessment: first, to increase engagement in the task and, second, to help students take ownership of their own learning through more authentic, real-life activities. In this regard, Tarighat and Khodabakhsh (2016) observe that when students were encouraged to employ the meta-skills of reflection and self- and peer-assessment, they monitored their speech more closely and became more aware of structure. Students also expressed a positive attitude toward the equal opportunity these methods gave all members of the class to talk.
In general, there are two overarching methods of speaking assessment: holistic and analytical. This paper aims to discuss tools and strategies that teachers can use for each method of assessment to encourage student engagement and to ensure their involvement not only in their own learning but also in assessing their progress and that of their peers.
Holistic Assessment
Cabezas (2015) defines holistic assessment as an overall impression of the student’s ability reflected in a single score. The advantages of holistic assessment are that it is fast, practical, and cost- and time-effective. However, holistic assessment leaves greater latitude for scorer subjectivity, and unless it is followed by feedback, it does not tell learners how they can improve or what areas of strength and weakness exist in their speaking. One way teachers have tried to retain the time and cost efficiency of holistic assessment while still giving learners helpful, workable feedback is to have their peers offer that feedback. In a study that involved 17 female advanced English learners, students were asked to record their speaking on a WhatsApp group chat and have their peers assess it (Tarighat & Khodabakhsh, 2016). Final questionnaires revealed that students were generally happy with peer assessment and found it helpful in advancing their speaking skills, but they were also concerned about the possibility of bias, unfairness, or harshness in their peers’ grading. Students also maintained that, while they were satisfied with using critical reflection to evaluate their peers on a shared goal, they often found it difficult to elaborate on what had been achieved and to set specific goals for their peers. Moreover, they complained that being occupied with assessment at times took away the opportunity to follow their peers’ presentations or to appreciate them thoroughly. It also has to be pointed out that this method of peer assessment was used with advanced-level students; the study did not identify how the method would be received, or whether it would even be possible, with lower-level students.
The following tool was designed to offer some remedies to the above concerns. It was initially planned for lower-level students but could be modified for use with students at any level. Moreover, the tool involves a number of open-ended questions that can be varied or rephrased to address all aspects of individual presentations, instead of giving a final mark or letter grade that might seem vague and undetailed to the receiver. Teachers could instruct students to choose only one question on the list to answer for each presentation; in this way, teachers can ensure that all students give and receive feedback to and from everyone in the class community and that students stay on task and remain engaged throughout all the presentations.
It is worth noting that the above tool is best used for individual speaking activities such as presentations and monologues. However, it can also be used for drama or role play activities that involve a group of students where each student among the audience could evaluate a different member of the group speaking activity.
As has been discussed, holistic methods of speaking assessment can be helpful when followed by feedback. However, a more detailed method of assessing students’ speaking is the analytical method. In the following section, I will discuss the analytical method and tools that employ self and peer assessment within this method.
Analytical Assessment
In contrast to holistic assessment, analytical assessment looks at independent criteria of a learner’s speaking performance and evaluates the performance on each criterion separately. The final score is the sum of the grades given on each of these criteria (Tuan, 2012). Owing to its detailed examination of the speaker’s performance, analytical assessment is better able to provide an interpretable assessment, in the sense that it offers diagnostic information about students’ speaking abilities. For example, some second-language learners may have excellent speaking skills in terms of content and organization but much lower grammatical control; others may have excellent control of sentence structure but may not know how to organize their speech in a logical way. In this regard, analytic scoring scales can show students that they have made progress over time in some or all dimensions when the same rubric categories are used repeatedly (Moskal, 2000).
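As a simple illustration (the criteria, scales, and point values here are invented for this example, not taken from Tuan, 2012), the analytic total is just the sum of the separate criterion scores:

$$\text{Total} = \sum_{i=1}^{n} s_i$$

where $s_i$ is the grade on criterion $i$. A learner scoring 4/5 for fluency, 3/5 for accuracy, 5/5 for content, and 4/5 for organization would receive 16/20 overall and can see at a glance that accuracy is the dimension pulling the total down.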
However, despite its many merits, analytical assessment is not without disadvantages. Hughes (2003) warns that in scoring analytically, the criterion scored first may affect the scores given to criteria scored later, so that the rating of an individual criterion reflects the overall impression of the speech rather than that criterion alone, a phenomenon that Fulcher (2009) calls the halo effect of analytical scoring. Yet if analytical scoring is used formatively for self and peer assessment, this disadvantage can be attenuated. This will be discussed in further detail in what follows, but first it is important to point out the individual criteria that analytical assessment looks at in speaking.
Criteria for Analytical Assessment
Knight (1992) identifies a comprehensive list of broad categories of assessment; however, within each category there are many detailed criteria. For example, Knight’s list includes categories that measure fluency, accuracy, and lexical and syntactic complexity; moreover, it includes the more global measures of non-verbal, conversational, and sociolinguistic skills. Within the non-verbal category, more specific criteria such as eye contact, body posture, gestures, and facial expressions are pinpointed.
This exhaustive list of analytical assessment criteria gives teachers the opportunity to choose the areas they want students to focus on in their self-assessment and peer assessment. It also means that teachers can put a different weight on each criterion according to the context and purpose of assessment. For example, in assessing presentations, more weight might be put on the non-verbal category (e.g., eye contact and facial expressions) than in a group discussion, where more weight is put on conversation maintenance and topic development. Moreover, different assessment tools might be needed for self-assessment and peer assessment, since the criteria are specific and formative, as opposed to global and summative as is the case with holistic assessment. In what follows, I will show examples of one tool and one strategy of formative assessment for self and peer assessment, respectively.
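Extending the hypothetical example above, weighting can be expressed as a weighted sum in which the weights, chosen by the teacher for the task at hand, add up to one (the particular weights below are illustrative only, not prescribed by any of the sources cited here):

$$\text{Total} = \sum_{i=1}^{n} w_i s_i, \qquad \sum_{i=1}^{n} w_i = 1$$

For a presentation, for instance, a teacher might give the non-verbal category a weight of 0.3 and topic development a weight of 0.1, and reverse those weights for a round-table discussion, without changing the underlying criteria or scales.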
A Tool for Analytical Self-Assessment
One speaking assignment that can be given to students to assess their performance individually, rather than as part of a group, is the Recorded Speaking Assignment (RSA) (Knight, 1992). This assignment requires students to record themselves reading a passage that includes certain pronunciation features or, at more advanced levels, talking extensively for a minute or two about a topic of interest. Teachers can prepare rubrics to assess the first submission of the assignment and give it back to the students to work on the areas of weakness pointed out in the feedback. Students are then asked to submit the assignment a second time for holistic assessment and a final grade after they have, presumably, worked on the areas mentioned in the teacher’s rubric. I propose here that, between the first and the second submission, students be given a self-assessment tool to help ensure that they have noticed and focused on the areas of the teacher’s concern. The following example shows what this self-assessment tool could look like. Of course, teachers can modify it according to the requirements of the assignment and the specific speaking criteria they want to focus on.
I call this tool the speaker’s checklist. Speakers highlight the areas on the list that appeared in the teacher’s rubric and feedback. Then they self-reflect by checking off the highlighted items after they have made the second recording. In this way, they evaluate how far they have responded to the teacher’s feedback before they make the second submission, which includes both the second recording and the checklist; the checklist can also count toward the final grade.
A Tool for Analytical Peer Assessment
The second tool is more suitable for the assessment of speaking within a group. Group speaking activities resemble real-life tasks and often require more sociolinguistic and pragmatic skills from students than individual speaking activities do. For example, conversation management, taking and giving the floor, interrupting, and negotiating meaning are all skills that are required more in group speaking activities than in individual speaking assignments. Examples of group activities are round-table discussions; fish bowls, where students sit in two circles, an inner one for the speakers and an outer one for the audience; advanced presentations; and drama activities. Teachers often find it difficult to assess these activities as they happen because assessment requires scoring many presenters on several criteria at once. Therefore, they might find it more feasible to record the students doing the activity and assess them after class. This, however, makes students lose the benefit of immediate feedback while the speaking task is still current and fresh in their memory. With the right peer assessment tool, teachers can employ peer assessment to regain some of these immediate-feedback benefits for the performers and to ensure that the students in the audience stay on task and remain focused on the activity.
In this method of peer assessment, I suggest giving each student in the audience only one criterion on which to assess each performer. Students can be asked to assess different criteria for different performances. In the end, the speakers receive feedback on all criteria from many class members. This also ensures that the assessors are not overwhelmed by the task of having to assess all performers on all criteria, or even one student on all criteria (Tarighat & Khodabakhsh, 2016). It also addresses any concern performers might have about bias or harsh grading by their peers, since they receive that feedback from more than one source. Moreover, in large classes, teachers can set up a number of small groups in which students exchange the roles of speaker and assessor without the teacher needing to be present and watching all the time. This helps save a great deal of wait time during class, as students who are not speaking are engaged in listening to their peers and assessing them.
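To make the rotation concrete, the short sketch below is a hypothetical illustration only; the student names, the criteria, the function assessment_roster, and the simple round-robin rule are my assumptions rather than part of Tarighat and Khodabakhsh’s (2016) study or any published tool. It shows one way a teacher could pre-assign a single criterion to each audience member for each performance so that, provided the audience is at least as large as the list of criteria, every speaker is assessed on every criterion and each assessor rates a different criterion from one performance to the next.

# A minimal sketch of a round-robin peer-assessment roster (hypothetical example).
# Assumption: each audience member rates exactly one criterion per performance,
# and the criterion they rate rotates from one performance to the next.

from itertools import cycle

criteria = ["fluency", "accuracy", "eye contact", "turn-taking"]  # example criteria only
students = ["Ana", "Bilal", "Chen", "Dana", "Elif"]               # example names only

def assessment_roster(students, criteria):
    """Return {performer: [(assessor, criterion), ...]} using a simple rotation."""
    roster = {}
    for turn, performer in enumerate(students):
        audience = [s for s in students if s != performer]
        # Shift the criterion cycle by the performance number so that each
        # assessor rates a different criterion for each new performance.
        offset = turn % len(criteria)
        rotated = cycle(criteria[offset:] + criteria[:offset])
        roster[performer] = [(assessor, next(rotated)) for assessor in audience]
    return roster

if __name__ == "__main__":
    for performer, pairs in assessment_roster(students, criteria).items():
        print(performer, "is assessed by:", pairs)

The same roster could, of course, be drawn up on a paper chart; the point of the sketch is only that the assignment rule is mechanical, easy to rotate, and spreads each speaker’s criteria across several different peers.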
Conclusion
Hinkel (2010) outlines two essential teaching and learning objectives related to using the integrative approach to language learning, in this case integrating listening and speaking in assessment. These objectives are a focus on needed language features and the use of these features in situations and contexts similar to the real world. The above approach and its related tools make it possible for language teaching to be more focused on the thematic and cohesive elements of discourse and communication (cf. Cabezas, 2015). Moreover, the engagement of the majority of the class community is guaranteed, as this engagement has to remain at the center of the teacher’s planning. To be more specific, involving students in assessment requires teachers to remain intentional in their assessment of speaking, with clear goals and reasonable, realistic expectations. It also requires students to be intentional in engaging their listening skills so that they are able to assess themselves and their peers. In doing so, an integrated and learner-centered education is promoted, and oftentimes achieved, in second language teaching.
References
Cabezas, E. D. (2015). The relationship between listening proficiency and speaking improvement in higher education: Considerations in assessing speaking and listening. Higher Learning Research Communications, 5(2), 34–56. http://dx.doi.org/10.18870/hlrc.v5i2.236
Chalkia, E. (2012). Self-assessment: An alternative method of assessing speaking skills. Research Papers in Language Teaching and Learning, 3(1), 225–239.
Fulcher, G. (2009). Rating scales and the halo effect. Retrieved July 17, 2011, from http://languagetesting.info/gf/glennfulcher.php
Hughes, A. (2003). Testing for language teachers. Cambridge, UK: Cambridge University Press.
Juzwik, M., Borsheim-Black, C., Caughlan, S., & Heintz, A. (2014). Inspiring dialogue: Talking to learn in the English classroom. New York, NY: Teachers College Press.
Knight, B. (1992). Assessing speaking skills: A workshop for teacher development. ELT Journal, 46(3), 294–302. https://doi.org/10.1093/elt/46.3.294
Tarighat, S., & Khodabakhsh, S. (2016). Mobile-assisted language assessment: Assessing speaking. Computers in Human Behavior, 64, 409–413.
Tuan, L. T. (2012). Teaching and assessing speaking performance through analytic scoring approach. Theory and Practice in Language Studies, 2(4), 673–679. https://doi.org/10.4304/tpls.2.4.673-679
Author’s bio
Nermine Abd Elkader is an instructor of Academic Listening and Speaking in the International Foundation Program at the University of Toronto. She has a Master’s degree in Teaching English as a Second Language and a PhD in Education with a concentration in multicultural education and sociocultural and communal approaches to education. Her research focuses on how to incorporate dialogue in linguistically and culturally diverse educational settings.