The transformation of assessment practices at an institutional level is a persistent challenge (Macdonald & Joughin, 2009). Transitioning from an assessment of learning to an assessment for learning culture at the institutional level is an arduous undertaking. The case of a higher education institution in the United Kingdom is presented. Firstly, the broad context of institutional priorities in assessment transformation is described. Secondly, the case pays specific attention to the institutional transformation of attitudes and practices towards student engagement in assessment. Conceptually, the model of student engagement is derived from the theoretical description of evaluative judgement (Boud et al., 2018). Its implementation in practice requires the development, planning and integration of multiple instructional strategies aimed at fostering students’ self-regulation (e.g. from rubric development through to self, peer and co-assessment). While not a new concept, implementing this range of practices effectively presents practitioners with the challenge of understanding new ideas and developing multiple skills. Taking both a practitioner and an institutional view, the case describes the approach taken to tackle these challenges. Institutional leadership in driving change has consisted of multiple strategies, including growing a broad base of local champions from the bottom up. Support to develop and evaluate practices with colleagues has played a crucial role in building confidence and credibility. However, complementary top-down approaches (e.g. policy and quality assurance) are also necessary for cultural transformation. Lessons learnt and persisting challenges in influencing common practices, beliefs and, ultimately, culture are shared.

Context

In line with sector-wide challenges (Bloxham, Hughes & Adie, 2016; Bloxham, den-Outer, Hudson & Price, 2016; Boud, 2017; Medland, 2016; Tomas & Jessop, 2019; Jessop & Tomas, 2017; Tomas, 2014), the institution has formulated the following priorities for improving and transforming assessment practice:

  • Programme level design of assessment and curricula
  • Student engagement in assessment and feedback
  • Efficiency and effectiveness of processes (e.g. marking, moderation)

Internally, this agenda of priorities has been contextualised through evaluation and consultation with faculties across the University to draw up a joint plan. Assessment practices across the institution are heterogeneous, with varying degrees of effectiveness. Despite the local academic cultures associated with different disciplines and backgrounds, the elements described above present a common challenge.

Insights from this case will be of most interest to managers and leaders of change at the departmental or institutional level. The focus is on an area of utmost importance: student engagement in assessment and feedback. The paper describes the steps taken in planning and delivering institution-wide transformation and reflects on persisting challenges.

Step 1 Defining institutional models of good practice for student engagement

The recent conceptual formulation of evaluative judgement provides the basis for the institutional model of student engagement in assessment (Boud et al., 2018). Evaluative judgement makes an essential contribution to instruction in that it frames existing theoretical notions and empirical research (Nicol & Macfarlane-Dick, 2006) into a cohesive set of practices with an explicit focus on promoting students’ autonomy (Zimmerman, 2000). In line with this conceptualisation, a baseline for institutional practice has been established: enhancing student support in advance of assessments has been translated into a set of basic requirements for institutional practice:

  • Communication of criteria to students, which presupposes the development of valid rubrics
  • Student engagement in understanding criteria (in-class scheduled activities)
  • Opportunities to practise the method of assessment (practice tests, coursework, exams)
  • Involvement of students in understanding expectations (standards) with the use of peer, self and co-assessment practices

The implementation of these elements across an institution is challenging because there are multiple strategies, each requiring an understanding of its design and implementation. The institution is promoting the adoption of these elements as a cohesive model, with the aim of offering students consistent and planned exposure to the different forms of support, including across programmes of study. The vision is to promote student autonomy and understanding of assessment expectations in a consistent manner.

The barriers encountered in practice in pursuing this ambitious goal are:

  • The absence of programme planning of teaching and learning activities
  • Culturally, most of the methods above are considered optional, with the exception of publishing rubrics. This suggests that, in practice, the role and significance of these activities (e.g. engaging students in understanding quality or rubrics) is not fully understood
  • Development of rubrics is not usually a guided and supported process, which results in rubrics of varying quality and possibly low construct validity.

Culture, awareness, skills and programme-level planning therefore pose challenges, each with its own particularities. Thus, in practice, the implementation of instructional approaches that can promote evaluative judgement needs to be broken down, and changes paced, to enable practitioners to become skilled.

Step 2 Initial steps: rubric design, use and sharing with markers and students

In the initial stages, a bottom-up approach to cultural transformation was adopted. This phase focussed on intense collaboration and work across the institution to develop individuals who would, locally, create adapted models of the institutional guidelines. A total of sixteen trials across all Faculties have now taken place.

Central support: development and evaluation

Support has been provided to develop and evaluate these practices, and this has ensured the effectiveness of the trials. Developing rubrics requires the articulation of abstract and implicit constructs. Moreover, introducing new types of activities such as co-assessment (i.e. where staff and students together assess exemplars of work using rubrics) requires a great deal of skill and confidence building. Support was initially offered centrally (by the author); however, as the number of institutional examples has grown, worked models have been made available to colleagues via resources and seminars held periodically to disseminate cases. Nevertheless, guidance and support for rubric design in the initial stages remain essential.

The support offered in the development of the first exploratory trials in different disciplines consisted of breaking the challenge of implementing the model down into a series of steps. While the approach to development has followed the same steps across all contexts, the speed of development and implementation has varied across settings according to the judgement of local leads.

The steps in developing student engagement in assessment and feedback have aimed, in the first instance, to create module-level examples of practice, illustrating a consistent set of elements used in a similar order:

  • Development of rubrics: the first step in the process consisted of a review of existing rubrics. Teams were involved in a variety of ways, for example by gathering tutors’ descriptions of quality associated with different levels of performance. Analysis of these descriptions provided an initial basis for constructing rubrics, which then underwent a series of checks also involving academic teams.
  • Sharing the rubrics with other academic staff and introducing them for marking
  • Sharing the rubrics with students
  • Engaging students in the use of rubrics (e.g. co-assessment scheduled activities)

The desired model is conceptual and describes a scenario in which all elements work in an integrated fashion. However, transforming the practice of individuals and of collectives sometimes requires pacing the changes. For example, the development of rubrics, their implementation and student engagement (e.g. timetabled in-class activities such as co-assessment or peer assessment) impose a high demand on top of the normal workload. Other disciplinary and contextual variations emerged, such as whether rubrics are used for feedback only or as marking guides. These examples show that implementation requires practitioners’ judgement about the readiness of their colleagues and contexts to work through the introduction of each element, as well as about how to build trust and confidence in the new practice.

Models and parameters are suggested centrally; nevertheless, local implementation in each discipline takes account of readiness for change. All trials across the institution have now transitioned from a trial period into business-as-usual practice. Several modules across the institution now offer consistent communication of expectations (rubrics) together with engagement of students and staff as communities (e.g. co-assessment, peer and/or self-assessment). Support from local directors of teaching has been necessary at the implementation stage to overcome cultural barriers.

The evaluation of the various trials has been conducted using the same protocol (adjusted to each context) by the author of this article. An intense period of evaluation of the sixteen trials, across disciplines in the institution, has played a crucial role in enabling local teams to consider impact and effects and, consequently, in instilling the confidence needed for the new practices to become established as usual practice. Institution-wide, the growth of a critical mass of local leaders has also enabled cross-disciplinary sharing and exchange, both in the use of rubrics and in strategies for implementing co-assessment sessions, in which both students and staff mark samples of work and discuss them. A community has grown over the years (since 2014). Dissemination and showcase events, and opportunities to discuss peer and co-assessment for different assessment types and scenarios (e.g. engaging students in large classes with co-assessment or peer assessment; summative and formative peer assessment), have been essential to growing the base of local institutional leads.

Step 3 Developing advanced examples of evaluative judgement (2018-19)

Building on this basis, during the academic year 2018-19 more advanced practices have been integrated into year-long plans of student engagement which, in addition to the use of rubrics and co/peer assessment, now also introduce self-assessment as part of the scheduled activities.

These practices are being embedded across multiple modules, integrating multiple strategies with year-long, sustained exposure to practising and developing students’ own evaluative judgement. They are currently under investigation and review to answer questions about instructional design, effectiveness and student experience.

Step 4 Integration with quality assurance processes

Lastly, various quality assurance mechanisms are currently being reviewed to ensure that future quality assurance processes mirror the models of good practice described above and the institutional framework for enhancement.

Reflections and lessons learnt

The institutional case described offers an example of an approach to facilitating change. Primarily, the method relies on collaboration between central and local leadership. This collaboration has essentially consisted of a division of responsibilities and areas of influence, as detailed below.

The scope of the central leadership has consisted of:

  • Defining a model for practice
  • Offering support in development and implementation by helping to break down the steps, provide models and support development (e.g. creation of rubrics, development of workshops for students)
  • Collecting examples and creating a repository of resources
  • Offering support with evaluation (both design of the evaluation protocol and implementation)
  • Leading on dissemination events institution-wide
  • Leading on gaining institutional approval and influencing additional mechanisms (e.g. quality assurance, policy)

The scope of the local leadership has consisted of:

  • Adjusting and contextualising models and steps
  • Deciding on the pace of the introduction of changes
  • Reflecting and reviewing based on the evidence gathered
  • Developing trust in the community of practice

The success of the approach can be seen in the fact that, in all sixteen trials, the initial exploration has become core practice in the modules concerned. That is a testimony to the effectiveness of the approach, despite the intensity of the labour required.

The experience has borne many successes. Nevertheless, some of the initially desired goals have not yet been realised. For example, widespread embedding of the model at the programme level has not yet occurred; the initial experience suggests it may require long-term and different approaches. Persisting barriers include the absence of cohesive programme cultures, given the tendency to work in modular silos, and staff perceptions and beliefs. One assumption made at the start was that the use of evidence would facilitate transformation. In reality, despite evidence of impact on student learning, beliefs may remain unchanged and practitioners unpersuaded. Establishing the instructional model of evaluative judgement as central to student learning will demand more sustained effort in the future.

As illustrated, organic approaches to transformation are necessary and successful to a reasonable extent. The modules that have incorporated the model are having an impact on their immediate communities of practice. However, the adoption and transformation process unfolds slowly, and this reveals something significant: transforming assessment cultures and traditions is necessarily a slow process. Leaders of transformation and managers should not underestimate the role played by practitioners’ deeply held beliefs and previous experience about what constitutes good practice. In this particular case, at this point in the institutional journey and mission to advance the assessment for learning agenda, regulation will be introduced. The aim is not to mandate but to reinforce messages and models whose presence in practice has grown organically, as described above.

About the Author: 

Dr Carmen Tomas

Dr Carmen Tomas leads the transformation of assessment practice at the University of Nottingham (United Kingdom). In her role as Assessment Advisor, she leads on policy and strategy, as well as on development and research in this context, in collaboration with Faculty leads. She is an associate editor for Assessment, Testing and Applied Measurement (Frontiers). Her full publication record can be found at https://orcid.org/0000-0001-6163-2907. You may contact her by email with any queries at carmen.tomas@nottingham.ac.uk.

References

Bloxham, S., Hughes, C., & Adie, L. (2016). What’s the point of moderation? A discussion of the purposes achieved through contemporary moderation practices. Assessment and Evaluation in Higher Education. 41 (4): 638-653.

Bloxham, S., den-Outer, B., Hudson, J., & Price, M. (2016). Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment and Evaluation in Higher Education. 41 (3): 466-481. DOI: 10.1080/02602938.2015.1024607.

Boud, D. (2018). Assessment could demonstrate learning gains, but what is required to do so? Higher Education Pedagogies. 3 (4): 4-6.

Boud, D. (2017). Standards-based assessment for an era of increasing transparency. In Scaling up assessment for learning in higher education edited by David Carless et al., 19-31. Springer: Singapore.

Boud, D., Ajjawi, R., Dawson, P., & Tai, J. (2018). Developing evaluative judgement in higher education. Abingdon Oxon, Routledge.

Jessop, T. & Tomas, C. (2017). The implications of programme assessment patterns for student learning. Assessment and Evaluation in Higher Education. 42 (6): 990-999. doi: 10.1080/02602938.2016.1217501.

Macdonald, R. & Joughin, G. (2009). Changing assessment in higher education: A model in support of institution-wide improvement. In Assessment, Learning and Judgment in Higher Education edited by G. R. Joughin, 193-213. Dordrecht: Springer.

Medland, E. (2016). Assessment in higher education: drivers, barriers and directions for change in the UK. Assessment and Evaluation in Higher Education. 41 (1): 81-96.

Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education. 31 (2):199-218. doi: 10.1080/03075070600572090.

Panadero, E., & Broadbent, J. (2018). Developing evaluative judgment: A self-regulated learning perspective. In D. Boud, R. Ajjawi, P. Dawson, & J. Tai (Eds), Developing evaluative judgement in higher education: Assessment for knowing and producing quality work, pp. 81-89. London: Routledge.

Tomas, C. (2014). Marking and feedback provision on essay-based coursework: A process perspective. Assessment and Evaluation in Higher Education. 39 (5): 611-624.

Tomas, C. & Jessop, T. (2019). Struggling and juggling: a comparative study of student assessment loads across institution types. Assessment and Evaluation in Higher Education. 44 (1):1-10. doi: 10.1080/02602938.2018.1463355.

Zimmerman, B.J. (2000). Attaining self-regulation: A social cognitive perspective. In M. Zeidner, P.R. Pintrich and M. Boekaerts (Eds), Handbook of self-regulation, pp. 14-19. San Diego, CA: Academic Press.
