Technology, Feedback, Action! Literature Review

Page history last edited by Stuart Hepplestone 10 years, 9 months ago


Literature review

 

Sheffield Hallam University is exploring the potential of technology-enabled feedback to improve student learning. This project aims to evaluate how a range of technical interventions might encourage students to engage with feedback and formulate actions to improve future learning.

 

The focus of this literature review is current publications and research regarding the importance of feedback and good feedback practice, with specific regard to the application of technology to support both the delivery and use of feedback. The review does not specifically cover the language and dialogue of feedback, or self- and peer-feedback, although these references are highlighted within a separate bibliography.

 

The importance of feedback

 

Feedback is an integral feature of effective and efficient teaching and learning, and can be one of the most powerful ways in which to enhance and strengthen student learning. Feedback enables learning by providing information that can be used to improve and enhance performance. There is clear evidence (Black & Wiliam, 1998; Gibbs & Simpson, 2004) that changes to assessment practice that strengthen the formative use of feedback, such as peer assessment (Falchikov, 2001) and ‘feed-forward’ techniques (Hounsell et al, 2007a), produce significant and substantial learning gains.

 

Current issues

 

Traditional and current practices of providing feedback are no longer effective (Bloxham & Boyd, 2007; Hounsell, 2008; Race, no date; Rowe & Wood, 2007; Rust et al, 2005). Students do not exploit assessment to improve their learning (Maclellan, 2001), and current pressures in the HE sector (DfES, 2003), resulting in modularisation and semesterisation, have seen the 'bunching' of assessment tasks, limiting the scope for assessment practices that feed forward (Price & O'Donovan, 2008; Race, no date) and forcing feedback to be written under tight time constraints (Chanock, 2000). This also reduces opportunities for students to carry forward and build on what they have learned from feedback on previous tasks (Higgins et al, 2002; Yorke, 2001), and means that assessment does not take place at the beginning of the module or when students themselves feel ready (Maclellan, 2001). The result has been a negative impact on the student experience of feedback. This is supported by responses to the National Student Survey (HEFCE, 2007), in which students expressed dissatisfaction with the adequacy of the feedback they receive in terms of both timing and usefulness (Mutch, 2003), echoed by recent large-scale (Hounsell & Entwistle, 2007b) and small-scale (Crook et al, 2006) studies into the student experience of assessment and feedback. There is evidence that students view late feedback as 'disrespectful' (Rowe & Wood, 2007), and the use of 'implicit criteria' means that students do not view feedback on their learning as helpful (Maclellan, 2001).

 

Staff complain that feedback does not work (Weaver, 2006) and that students do not act on feedback (Mutch, 2003), being concerned only with their marks (Wojtas, 1998) or seeing feedback as a means to justify the grade (Price & O'Donovan, 2008). Some authors have claimed that reports of student disengagement with feedback are based on sceptical or 'anecdotal evidence' from tutors (Carless, 2006; Higgins et al, 2002; Weaver, 2006). Higgins et al (2002), in their research into the impact of feedback, questioned whether students are driven by the 'extrinsic motivation' of their mark and only engage with feedback if it is 'perceived to provide correct answers'. Rust et al (2005) have reported on two studies (Hounsell, 1987; Lea & Street, 1998) suggesting that students may not read their feedback because they do not understand it. This is echoed by Winter & Dye (2005), who researched the reasons for uncollected student work, and by Chanock (2000), who claimed students often misunderstand their tutors' comments or are too agitated to take in exactly what the tutor is saying ('emotional static'). Carless (2006) and Higgins et al (2002) also found that problems with understanding academic language can inhibit students' engagement with feedback. Handwritten feedback comments are problematic as they are time-consuming and can be daunting for staff to write, in particular for large class sizes, and can be difficult for students to decipher (Bloxham & Boyd, 2007; Higgins et al, 2002; Race, no date).

 

Despite arguments that feedback is currently ineffective, Price & O'Donovan (2008) claimed that there is still a strong belief among staff that feedback supports student learning, and they found that students respond to their feedback in different ways and at different times, yet there is no attempt to measure the extent of student engagement. Furthermore, Higgins et al (2002), Rowe & Wood (2007) and Weaver (2006) declared that students' perceptions of the value of feedback in higher education are under-researched, and there are further calls to investigate exactly how students receive and respond to feedback (Higgins et al, 2002; Mutch, 2003).

 

Improving student engagement with feedback

 

Price & O'Donovan (2008) argued that feedback should be incorporated into the learning and teaching process to both improve student engagement with feedback and to enable the effectiveness to be measured. Maclellan (2001) argued that students should be monitoring their own performance in order to make effective use of feedback to generate improvement in learning, and this has been supported by Carless (2006) who suggested that students should be provided with the 'means to distinguish accurately their achievements in different assignments'.

 

Several authors have indicated that disengaging the mark from feedback promotes student learning (Carless, 2006). Research by Potts (1992) claimed that withholding grades encourages students to engage with feedback, as they are 'obliged to find for themselves value in what they did'. This is further echoed in the work of Black & Wiliam (1998), who argued that the 'effects of feedback were reduced if students had access to the answers before the feedback was conveyed', and Butler (1998), who found that students performed better on tasks when they received comments rather than grades. This practice has been endorsed by Race (no date) and Rust et al (2005), as well as by the Re-Engineering Assessment Practices in Scottish Education (REAP) project (Nichol, 2007), which suggested giving 'feedback before marks to encourage students to concentrate on the feedback first', and by Boud & Falchikov (2006), who argued that marks should be 'subordinated' to qualitative feedback to promote long-term learning. Further research (Winter & Dye, 2004) has found that students do not collect marked work when they know the mark in advance. An internal review of feedback in the Faculty of Development and Society at Sheffield Hallam University (Garner, 2006) suggested that uncoupling the processes of providing grades, providing comments and returning scripts brings benefits in both the speed and the quality of feedback.

 

Such practice resolves an issue raised by an action research project at the University of Sunderland (Ecclestone & Swann, 1999): how to encourage students to read feedback and use it to improve their subsequent work. It reflects the widely held view that feedback can only support learning if it involves both the production of evidence and a response to that evidence by using it in some way to improve learning. Higgins et al (2002) believed in a more reflective approach and the development of reflective skills to encourage student engagement with feedback, and there have been suggestions that such reflective activity be built into personal development planning (Bloxham & Boyd, 2007; Mutch, 2003; Race, no date; Rust et al, 2005).

 

Feedback grids tailored to the assignment can speed up the provision of feedback (Bloxham & Boyd, 2007), though McDowell et al (2005) have highlighted that students may find it difficult to interpret 'checkbox' feedback. Race (no date) suggested linking feedback directly to the achievement of learning outcomes to help students make 'better use of the learning outcomes as targets'.

 

Technology-enabled feedback

 

The most popular use of technology to provide students with formative feedback is through computer-based testing or assessment using multiple-choice or similar objective question types (Denton et al, 2008). Such software can deliver detailed formative feedback for each individual question more efficiently than is possible with traditional assessment (Brown et al, 1999; Gipps, 2005), and it has been reported that students favour the immediacy of such feedback as it keeps the activity and result closely connected (Charman, 1999; Denton et al, 2008). However, the validity of automated formative assessment has been questioned by Gipps (2005).

 

It has been claimed that sending tutors' comments electronically by email (Bloxham & Boyd, 2007; Denton, 2001a, 2001b, 2003; Price & Petre, 1997; Race, no date), or via the internet or a virtual learning environment (Denton et al, 2008; Gipps, 2005), can enhance the way in which students receive and engage with feedback. Students receive their individual feedback in privacy, enabling them to respond to it in different ways and at different times (Price & O'Donovan, 2008). A number of other studies have reported on the greater impact of electronic or online feedback (van den Boom et al, 2004; Guardado & Shi, 2007; Tuzi, 2004). However, Rowe & Wood (2007) have suggested that further examination of how students receive and respond to electronically delivered feedback is required.

 

Examples of producing feedback electronically include using track changes and comments to alter and annotate the student's original word-processed work (Race, no date), typing comments in a separate document, and using digital ink on a tablet PC (Plimmer & Mason, 2006) to provide individual feedback on student work. Race (no date) has claimed that the benefits of technology-enabled feedback include the ability to edit comments before returning them to students, to track what feedback has been given to which students, and to build up evidence relatively quickly for external review. Additional benefits discussed by Bridge & Appleyard (2005), Denton et al (2008), Jones & Behrens (2003) and Price & Petre (1997) include the legibility of electronic feedback, reduced assignment turnaround time, administrative efficiency and a reduction in paper used.

 

Some institutions have developed their own in-house systems for producing and returning feedback, including electronic marksheets (Joy & Luck, 1998) and the use of MS Office applications, templates and the computer-supported generation of feedback statements from a bank of comments, developed to improve quality in response to increased student numbers (Denton, 2001a, 2001b, 2003; Denton et al, 2008; Hepplestone & Mather, 2007; Price & Petre, 2007). However, Denton (2001a, 2001b) reported that despite their potential value, marking assistants are not widely used in higher education.

 

References
