Why am I evaluating this thingummygig?
Abstract
Why do I need to evaluate the new technologies? In my undergraduate days I doubt that anyone evaluated their teaching, and my learning was in spite of their teaching! The long, slow haul to change the teaching and learning culture has put excessive pressure on some forms of learning experience, and new technologies are one of them. This makes sense when you consider the amount of money we have been investing in developing computer-based learning materials. Ever since we, in First Year Biology, started to develop teaching and learning materials using information technologies, we have endeavoured to understand how these materials are being used and what, if anything, students gain from using them. Early on we did a lot of usability studies and checks on accuracy of content (formative evaluations) and so improved the materials. We also developed expertise in instructional design. With this in place, we concentrated more on the impact of the materials on student perceptions (Did they like using them? Did the materials help their understanding?). While this is also a type of formative evaluation, it gave us some idea of how students were using the materials. The big issue, however, is 'Do the materials have an effect on student learning outcomes, such that one would argue they are better than other forms of learning material?' This is difficult to answer without fairly exhaustive studies of students' use of the computer-based learning materials. By the time the formative evaluation stage is over, there is often little time available to ask these questions: we are too busy; there are too many students to cope with; and so on.
In a recent CUTSD-funded study, Shirley Alexander reviewed 104 teaching development projects and reported that in approximately 90% of cases the project leaders indicated that they had intended to improve student learning outcomes, but only a third could report this as an actual outcome, since only that third actually evaluated student learning outcomes (Alexander, 1999; Alexander and McKenzie, 1998). Alexander goes on to argue that most of the project evaluations fell within the first of the four levels of outcome on which evaluation evidence should focus, as described by Kirkpatrick (1994), that is, 'reaction to the innovation', and that a minority of evaluations fell within the second level, 'achievement of learning objectives'. Only one project fitted the third level, 'transfer of new skills to the job or task', and no project evaluated the 'impact on the organisation' (Alexander, 1999). It would seem that we all need to lift our performance in this area and ensure that at least levels one and two are fulfilled.
Published: 2012-11-28
Section: Non-refereed Papers