Assessment faces many challenges. One in particular is that assessment practices are typically designed by individuals without specialist test design skills; another is that those practices are then “validated” by individuals who lack specific skills in test evaluation. The result is wide variation in the relative quality of assessments across RTOs, with the consequence that there is equally wide variation in outcomes.
These observations apply equally to the assessment of skills and of knowledge; here, however, I will focus on challenges specific to the assessment of knowledge.
Four Questions About Knowledge Assessment
I will address this by proposing answers to four questions below:
1. Why do we assess Knowledge in VET?
In VET we commonly assess knowledge for several reasons, including (but not limited to) those below:
- To predict transferability of observed skills to different contexts
- To predict future recall of knowledge
- To diagnose the reasons for observed performance, distinguishing gaps in skill from gaps in knowledge
- To identify knowledge gaps
2. How do we assess knowledge in VET?
The list below is not exhaustive; it covers the methods I have encountered in my time in VET:
- Quizzes and choice-based exams (formative and summative) under a variety of test conditions
- Verbal responses to questions while performing a task in the workplace (or simulated workplace)
- Verbal responses to questions in a competency interview
- Written responses embodying declarative knowledge (short answer, essay)
3. What’s wrong with this?
- Accommodation of multiple attempts, open-book conditions and unlimited time
- Collection of inauthentic evidence (copying/plagiarism)
- Excess assessor support (“scaffolding”), such as telling candidates what to write or providing them with the test questions before the test
- Poor fit with the CBA “evidence-capture” model, for example the acceptance of “copy and paste” into a template as a surrogate for a genuine response
- Knowledge assessment tasks at the wrong level
- Insufficient “forgetting time” (not testing recall)
- Ineffective validation techniques that fail to identify inappropriate assessment of knowledge
4. What should be done about it?
The following are several suggestions for improving knowledge assessment practice in VET:
- Require that high-stakes summative quizzes be extensive, timed, closed-book and invigilated
- Build plagiarism detection and remediation into the standard assessment process
- Prescribe clear limitations to assessor support/scaffolding within the Assessment Tool
- Validate to distinguish clearly between the quality of the evidence captured and the quality of the knowledge/learning
- Design tasks to assess knowledge at the right level (the design should explicitly reference the AQF level, and perhaps even the relevant Bloom’s Taxonomy category)
- Test knowledge recall, not just short-term memory
- Validate knowledge test results for both predictive and concurrent validity, and for test-retest reliability
- Externally retest a sample of RTO graduates against a national standard test (as part of ASQA RTO renewal audits)
I have framed my proposal as a list of suggestions in the hope of promoting thoughtful discussion. I have also provided it to support the view that creating knowledge assessments that are fit for purpose is a specialist skill, one for expert instructional designers working in collaboration with subject matter experts and experienced assessors; it should not be left to assessors alone. Finally, I propose that the skills of validation practitioners be improved (particularly those of ASQA-appointed auditors and so-called “Validation specialists”) to ensure that Australian VET knowledge assessment practices achieve the outcomes to which they aspire.
Published 5 December 2019
About the Author:
Sean Kelly is a VET Trainer/Assessor and Consultant to a broad cross-section of the VET world. He is the author of the essay “Is there a ‘blacklist’ or a ‘whitelist’ for the TAE40116 Upgrade?”, which is published HERE.