
Evaluation and Quality Assurance

Page history last edited by christopherdtaylor@gmail.com 13 years, 1 month ago




Evaluation - 'The process of making judgements about the worth (costs and values) of something' (Oliver 2000)


Evaluating the portability and adaptability of your resources is a vital step in the Open Educational Resources release process. The resources need to be sufficiently robust to stand alone and be re-used outside of their original environments, and open enough for others to be able to adapt and build upon. Ensuring you have considered these issues - outlined below - will greatly increase the likelihood of others adopting and adapting your OERs.


In our experience, the first piece of information potential users of a resource would like is a measure of its worth. "Is it any good? Who else has used it, and how did it work for them?" academics often ask. A positive history is crucial to securing further adoption on a wider scale. However, evaluation is often carried out retrospectively (and often independently), so opportunities to refine the resource during development are lost. This makes it more difficult to produce a resource suitable for use by a wide range of institutions.


There are some excellent resources for teaching available on the internet, but it is often difficult to find other users of the same resource, descriptions of how they deploy it, their experiences, and the benefits they gained from adopting it within their teaching. With the significant budget challenges facing higher education in 2010, it is essential that we maximise the value of all learning and teaching materials.


So, how do we evaluate effectively? How can we reduce the effort and maximise the return? There are some quite advanced frameworks available (Yuan, 2009), but we would initially recommend a simplified approach if you are new to Open Educational Resources. Bear in mind, however, that a successful OER acquires multi-user, multi-author perspectives as it develops.


The first step is to identify the key questions. Typically (based on Conole 2000) these might include:

  • How effective is this?
  • Is it more effective than the previous mechanism for teaching this? (in effect, why it was developed)
  • How effectively do these [solutions / systems / resources] support learning?
  • Is it more effective than face-to-face learning? (is any reduction in effectiveness minimal, and are other benefits significant?)
  • How much time does adopting or adapting the resource take before it yields a gain?
  • What skills do teachers and students need in order to use [the resource] effectively?
  • Is support and encouragement available from other users?


Why Evaluate?

Performing an evaluation is essential for an open educational resource to be a success, depending on how we define success. Being useful at its original point of origin is not enough for an OER: it should be easily adopted by others in different contexts and situations. True openness goes further and is achieved by enabling others to adapt the work, i.e. to enhance and develop it further (hopefully with more than one other contributor).


How to evaluate?

Do not be scared off by the potential overhead: there are evaluation frameworks available which will support the gathering of information for your stakeholders. These provide a structured list of questions and issues, with tips for how to go about gathering the evidence. The example below is based on one used in an OER pilot project; write your own questions and decide what evidence you would capture to answer them.


Developing, managing and sharing OERs

  • Key questions: Which models are appropriate for different contexts?
  • Outputs/sources of evidence: Evidence of quality enhancement through development and use of a subject-specific taxonomy

Guidance and support mechanisms

  • Key questions: What guidance and support needs to be offered?
  • Outputs/sources of evidence: User guidelines produced; feedback and reflections on user guidelines obtained from the community

Business cases and benefits realisation

  • Key questions: What are effective business cases for different stakeholders?
  • Outputs/sources of evidence: Evidence of benefit in promoting/marketing the discipline and the quality of UK HE in a specific subject

Cultural issues

  • Key questions: What motivates and supports/enables individuals to make their content open?
  • Outputs/sources of evidence: Evidence of different attitudes and practices among HE, FE, industry and professional bodies

Institutional issues (strategy, policy, practice)

  • Key questions: To what extent do existing policies and strategies support the opening of learning resources?
  • Outputs/sources of evidence: Evidence of changed policies and practices in partner institutions relating to the release of OERs and wider use of Web 2.0 technologies

Legal issues

  • Key questions: General IPR/legal issues (securing IPR, data protection issues...)
  • Outputs/sources of evidence: Findings on different institutional attitudes to IPR and ownership of resources created by staff and students

Technical and hosting issues

  • Key questions: What kinds of metadata are essential, what desirable, and what are the issues in creating and managing metadata?
  • Outputs/sources of evidence: Project metadata schema and automated process for partners to classify/tag their own resources

Quality issues

  • Key questions: Are OERs perceived to be of high quality? What impact do perceptions of quality have on the release process and sustainability?
  • Outputs/sources of evidence: Review of released materials against criteria of open, accessible, and comprehensive coverage of the UG curriculum

Pedagogy/end-use issues (not a primary focus of evaluation)

  • Key questions: Which types of OER are used by different stakeholders?
  • Outputs/sources of evidence: OER testing and feedback from staff and learners from a range of partner institutions

Learner and other stakeholder involvement

  • Key questions: What role have learners played in shaping the programme outcomes? How have projects engaged learners, if at all?
  • Outputs/sources of evidence: User surveys, questionnaires and interviews to investigate

Programme and project management issues

  • Key questions: What challenges arise from consortium approaches?
  • Outputs/sources of evidence: Discussions online and at programme events

Is it worth the additional time and effort?

Yes. If the evaluation is 'bolted on' afterwards, the retrospective work of gathering evidence can be significant and onerous. If evaluation is planned into the project from the start, however, the workload is reduced: decisions on how to capture usage data, feedback, and user and teacher experiences can be made at the development stage, so that users and contributors each carry a small part of the workload on your behalf. The only additional work is the analysis, which is much easier once you have decided what to capture and how.


The results of the evaluation then become valuable for publications and presentations at meetings and conferences, and may even serve to increase uptake of the resource.


Evaluation towards a goal/purpose

The goal should be for this resource on this topic to become a (hopefully the) reference resource that others take forward and enhance. Wikipedia demonstrates the multiple-contributor model quite well, including its pitfalls; few can doubt it has been a significant influence on education. Medpedia is more academic in nature and has become very popular with students. Being a wiki, it is easy to enhance, so you may find a supporting wiki useful whatever your resource. If you promote your resource through a repository, there is a good chance that some wiki-like functionality will be available to gather feedback, tips and experiences from users of your OER. Consider how you might achieve this. For example:


Evaluation of content (quality of materials)

  • Identify a critical friend for checking content
  • Process critical friend feedback into resource
  • Test on pilot consumers
  • Process pilot feedback
  • Declare level and prior expectations of knowledge for use (educational level for instance)
    • Have any 3rd party materials been evaluated separately?
  • Have you added an evaluation comment form/weblink to gather 'downstream' feedback?


Evaluation of re-usability i.e. delivery, interface, installation

  • Have you identified a critical friend for adoption?
  • Have you declared suitable metadata, keywords?
  • Have you included any existing 'adopter' reports since the first version of the resource was created?
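On the metadata point above, one common lightweight approach is to embed Dublin Core elements in the resource's HTML pages. A minimal sketch follows; the Dublin Core element names are standard, but every content value shown here is a hypothetical illustration rather than a real resource description:

```html
<!-- Sketch only: Dublin Core metadata embedded in an OER's HTML page.
     All content values below are hypothetical examples. -->
<head>
  <meta name="DC.title"       content="Introduction to Enzyme Kinetics" />
  <meta name="DC.creator"     content="A. N. Author" />
  <meta name="DC.subject"     content="biochemistry; enzyme kinetics; ukoer" />
  <meta name="DC.description" content="Reusable lecture resource with self-test questions." />
  <meta name="DC.type"        content="InteractiveResource" />
  <meta name="DC.rights"      content="Creative Commons Attribution licence" />
  <meta name="DC.language"    content="en" />
</head>
```

Repositories and search tools that understand Dublin Core can then index the resource by these fields, and the keywords in DC.subject serve the same discoverability purpose as the tags recommended later on this page.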


Other factors

  • Have accessibility issues been suitably addressed?
    • Try to avoid creating barriers; provide cover documentation for any non-accessible components
  • Have you adhered to suitable technical standards?
  • Are you aware of current best practice in this area?


Stakeholder engagement

  • How were your stakeholders involved?
  • How are they notified of developments?


Is the resource

  • Effective?
  • Useful?
  • Valuable within the discipline?
  • Easy to adopt?
  • Easy to adapt?
  • Of sufficient quality?
  • Authentic? 


Was it evaluated formatively DURING development to enable a better final output?

Was it summatively evaluated by an adopter (in effect, the end point of a successful release)?


We would also recommend associating a tag with your resource (such as #bioukoer) so that those who reuse your resources, write about them in their blogs or discuss them on social media can be identified.


The above should provide a starting point for you when developing, evaluating and releasing your OERs.




Oliver, M. & Conole, G. (2000) Assessing and enhancing quality using toolkits. Journal of Quality Assurance in Education, 8(1), 32-37.

Conole, G. (2004) The role of evaluation in the quality assurance of elearning. Learning and Teaching in Action, 3(2). Available at http://www.celt.mmu.ac.uk/ltia/issue8/conole.shtml

Yuan, L. (2009) Developing a framework for understanding and evaluating the impact of Open Educational Resources. Workblog.

JISC Infonet (2010) Open Educational Resources Infokit.




