
Monday, May 08, 2017

Statement bank feedback: worthwhile?

After a long sleep, time to dust off the old blog. This paper (just published) is highly relevant to research I'm currently engaged in, so it's of interest to me. It should be of interest to you too, given that staff time is the major pressure on feedback and shortcuts such as statement banks are only going to grow in prominence. An interesting study, but it uses the language of "assessment literacy", which I don't buy into. My mental model of the "all or nothing" behaviour observed is different: student responses to assessment are yet another proxy for engagement, as evidenced by the higher marks of the email responders. Beyond not reading their feedback (or their email), we don't know what the non-responders were doing. So if students aren't going to read it, let's save staff time by using statement banks.


Response of students to statement bank feedback: the impact of assessment literacy on performances in summative tasks.
Assessment & Evaluation in Higher Education 7 May 2017, doi: 10.1080/02602938.2017.1324017

Efficiency gains arising from the use of electronic marking tools that allow tutors to select comments from a statement bank are well documented, but how students use this type of feedback remains under explored. Natural science students (N = 161) were emailed feedback reports on a spreadsheet assessment that included an invitation to reply placed at different positions. Outcomes suggest that students either read feedback completely, or not at all. Although mean marks for repliers (M = 75.5%, N = 39) and non-repliers (M = 57.2%, N = 68) were significantly different (p < .01), these two groups possessed equivalent attendance records and similar submission rates and performances in a contemporaneous formatively assessed laboratory report. Notably, average marks for a follow-up summative laboratory report, using the same assessment criteria as the formative task, were 10% higher for students who replied to the original invite. It is concluded that the repliers represent a group of assessment literate students, and that statement bank feedback can foster learning: a simple ‘fire’ analogy for feedback is advanced that advocates high-quality information on progress (fuel) and a curricular atmosphere conducive to learning (oxygen). However, only if students are assessment literate (ignition) will feedback illuminate.
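To make the mechanics concrete: a statement-bank tool lets the marker pick pre-written comments rather than typing them fresh, and the study's trick was to bury a reply invitation at different depths in the emailed report to test how far students read. Here's a minimal sketch of that idea; the bank entries, codes and function names are invented for illustration, not taken from the study's actual marking tool:

```python
# Illustrative sketch only: the statement bank entries, codes and the
# reply-invite mechanics are invented, not the paper's actual tool.

STATEMENT_BANK = {
    "SB01": "Formulae: use absolute cell references where a value is reused.",
    "SB02": "Charts: label both axes and include units.",
    "SB03": "Good use of conditional formatting to flag outliers.",
}

REPLY_INVITE = "If you have read this far, reply to this email."

def build_report(student: str, codes: list[str], invite_position: int) -> str:
    """Assemble a feedback report from selected bank statements, placing
    the reply invite at a chosen position (as the study varied it)."""
    lines = [STATEMENT_BANK[c] for c in codes]
    lines.insert(min(invite_position, len(lines)), REPLY_INVITE)
    return f"Feedback for {student}:\n" + "\n".join(f"- {l}" for l in lines)

print(build_report("A. Student", ["SB01", "SB03"], invite_position=2))
```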

Wednesday, October 05, 2016

Microsoft Madness #OneNoteFail

TIL that the desktop version of Microsoft OneNote 2016 (but, as far as I can tell, none of the many, many other versions of OneNote) allows students to alter the creation dates of documents.

I can't imagine what the possible justification for this is, but as we were relying on OneNote to datestamp student diaries, it looks as if Microsoft may just have consigned OneNote to oblivion as far as we are concerned.

Why Microsoft, why?
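For what it's worth, one possible workaround (a sketch only, not something we have deployed) is to stop trusting client-side metadata altogether: record an independent server-side timestamp plus a content hash at submission time, so a tampered creation date no longer matters. The filenames and log format below are illustrative:

```python
# Sketch of an independent datestamp: hash each submitted diary file and
# record a server-side UTC timestamp, so client-side metadata (like a
# OneNote creation date) no longer has to be trusted.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("submission_log.jsonl")

def datestamp(diary_file: str) -> dict:
    digest = hashlib.sha256(Path(diary_file).read_bytes()).hexdigest()
    record = {
        "file": diary_file,
        "sha256": digest,
        "received_utc": datetime.now(timezone.utc).isoformat(),
    }
    with LOG.open("a") as log:
        log.write(json.dumps(record) + "\n")
    return record

# Any later edit changes the hash, and the received time is ours,
# not the student's.
```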





Thursday, September 22, 2016

Use of statistics packages

Statistics in science as a whole is a mess. Ecology is no different from the rest of science, although maybe slightly better than some parts. Statistical analysis is becoming more sophisticated, but one thing is clear - R is winning the race.



The mismatch between current statistical practice and doctoral training in ecology. Ecosphere 17 August 2016. doi: 10.1002/ecs2.1394
Ecologists are studying increasingly complex and important issues such as climate change and ecosystem services. These topics often involve large data sets and the application of complicated quantitative models. We evaluated changes in statistics used by ecologists by searching nearly 20,000 published articles in ecology from 1990 to 2013. We found that there has been a rise in sophisticated and computationally intensive statistical techniques such as mixed effects models and Bayesian statistics and a decline in reliance on approaches such as ANOVA or t tests. Similarly, ecologists have shifted away from software such as SAS and SPSS to the open source program R. We also searched the published curricula and syllabi of 154 doctoral programs in the United States and found that despite obvious changes in the statistical practices of ecologists, more than one-third of doctoral programs showed no record of required or optional statistics classes. Approximately one-quarter of programs did require a statistics course, but most of those did not cover contemporary statistical philosophy or advanced techniques. Only one-third of doctoral programs surveyed even listed an optional course that teaches some aspect of contemporary statistics. We call for graduate programs to lead the charge in improving training of future ecologists with skills needed to address and understand the ecological challenges facing humanity.
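To see what the shift the paper documents actually looks like, here's a minimal sketch on simulated data, contrasting the "1990s" t-test with a mixed-effects model that treats site as a random effect. The paper's ecologists would do this in R; the Python/statsmodels version below is just illustrative, and the data is made up:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
sites = np.repeat(np.arange(10), 20)        # 10 sites, 20 plots each
treatment = np.tile([0, 1], 100)
site_effect = rng.normal(0, 2, 10)[sites]   # site-level variation
y = 5 + 1.5 * treatment + site_effect + rng.normal(0, 1, 200)
df = pd.DataFrame({"y": y, "treatment": treatment, "site": sites})

# Old approach: two-sample t-test, ignoring the site structure entirely
print(stats.ttest_ind(df.y[df.treatment == 1], df.y[df.treatment == 0]))

# Contemporary approach: mixed-effects model with a random site intercept
model = smf.mixedlm("y ~ treatment", df, groups=df["site"]).fit()
print(model.summary())
```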





Friday, September 16, 2016

Motivation to learn

Five theories:
  1. Expectancy-value
  2. Attribution
  3. Social-cognitive
  4. Goal orientation
  5. Self-determination


Motivation to learn: an overview of contemporary theories. Medical Education 15 September 2016 doi: 10.1111/medu.13074
Objective: To succinctly summarise five contemporary theories about motivation to learn, articulate key intersections and distinctions among these theories, and identify important considerations for future research.
Results: Motivation has been defined as the process whereby goal-directed activities are initiated and sustained. In expectancy-value theory, motivation is a function of the expectation of success and perceived value. Attribution theory focuses on the causal attributions learners create to explain the results of an activity, and classifies these in terms of their locus, stability and controllability. Social-cognitive theory emphasises self-efficacy as the primary driver of motivated action, and also identifies cues that influence future self-efficacy and support self-regulated learning. Goal orientation theory suggests that learners tend to engage in tasks with concerns about mastering the content (mastery goal, arising from a ‘growth’ mindset regarding intelligence and learning) or about doing better than others or avoiding failure (performance goals, arising from a ‘fixed’ mindset). Finally, self-determination theory proposes that optimal performance results from actions motivated by intrinsic interests or by extrinsic values that have become integrated and internalised. Satisfying basic psychosocial needs of autonomy, competence and relatedness promotes such motivation. Looking across all five theories, we note recurrent themes of competence, value, attributions, and interactions between individuals and the learning context.
Conclusions: To avoid conceptual confusion, and perhaps more importantly to maximise the theory-building potential of their work, researchers must be careful (and precise) in how they define, operationalise and measure different motivational constructs. We suggest that motivation research continue to build theory and extend it to health professions domains, identify key outcomes and outcome measures, and test practical educational applications of the principles thus derived.


Nicely contextualizes Carol Dweck's work.
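Incidentally, the expectancy-value summary above is often written as a multiplicative shorthand (my gloss from the textbook convention, not the paper's notation):

```latex
% Textbook shorthand for expectancy-value theory: motivation is the
% product of expectancy of success and perceived task value, so a zero
% in either term zeroes out motivation regardless of the other.
M = E \times V
```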




Thursday, September 15, 2016

Assessing Student Learning

For those of us involved in curriculum redesign, the University of Leicester Learning Institute has put together a useful page on assessment and feedback in the form of a "traffic lights" system covering a wide range of assessment strategies (not just essays and exams!). You may or may not agree with all their estimates of how long each form of assessment takes, but overall it's a very useful "thinking aid" when you are trying to redesign assessments - worth bookmarking.

www2.le.ac.uk/offices/lli/developing-learning-and-teaching/assessment-and-feedback




Monday, August 08, 2016

A quiet life

"Students collude with academics in a ‘disengagement compact’, which is code for ‘I’ll leave you alone if you’ll leave me alone’... This promotes a cautious culture of business-as-usual. In most institutions, internal quality assurance departments prefer a quiet life and reinforce the status quo. In the end, the challenge of motivating students to undertake formative tasks surmounts the potential value of those tasks. The idea that well-executed formative assessment could revolutionise student learning has not yet taken hold."



The implications of programme assessment patterns for student learning. Assessment & Evaluation in Higher Education 2 August 2016 doi: 10.1080/02602938.2016.1217501
Evidence from 73 programmes in 14 UK universities sheds light on the typical student experience of assessment over a three-year undergraduate degree. A previous small-scale study in three universities characterised programme assessment environments using a similar method. The current study analyses data about assessment patterns using descriptive statistical methods, drawing on a large sample in a wider range of universities than the original study. Findings demonstrate a wide range of practice across programmes: from 12 summative assessments on one programme to 227 on another; from 87% by examination to none on others. While variations cast doubt on the comparability of UK degrees, programme assessment patterns are complex. Further analysis distinguishes common assessment patterns across the sample. Typically, students encounter eight times as much summative as formative assessment, a dozen different types of assessment, more than three quarters by coursework. The presence of high summative and low formative assessment diets is likely to compound students’ grade-orientation, reinforcing narrow and instrumental approaches to learning. High varieties of assessment are probable contributors to student confusion about goals and standards. Making systematic headway to improve student learning from assessment requires a programmatic and evidence-led approach to design, characterised by dialogue and social practice.
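For a feel of the numbers involved, here's a toy version of the paper's descriptive analysis: per-programme counts of summative versus formative tasks, variety of assessment types, and coursework share. The three programmes below are invented (the paper analysed 73):

```python
import pandas as pd

# Invented assessment data for three hypothetical programmes
tasks = pd.DataFrame({
    "programme": ["P1"] * 6 + ["P2"] * 5 + ["P3"] * 4,
    "purpose":   ["summative"] * 5 + ["formative"] +
                 ["summative"] * 4 + ["formative"] +
                 ["summative"] * 3 + ["formative"],
    "type":      ["exam", "essay", "report", "exam", "presentation", "quiz",
                  "exam", "exam", "essay", "portfolio", "draft",
                  "exam", "report", "poster", "peer review"],
    "coursework": [False, True, True, False, True, True,
                   False, False, True, True, True,
                   False, True, True, True],
})

summary = tasks.groupby("programme").agg(
    summative=("purpose", lambda p: (p == "summative").sum()),
    formative=("purpose", lambda p: (p == "formative").sum()),
    variety=("type", "nunique"),
    coursework_share=("coursework", "mean"),
)
summary["summative_to_formative"] = summary.summative / summary.formative
print(summary)
```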







Friday, July 29, 2016

You are who you learn: authentic assessment and team analytics?

Forget facts - that's the learning of the past. The learning of the future is soft skills. But how do you assess them? Peter Williams argues that we can approach the problem through learning analytics. Along the way he provides the best theoretical introduction to authentic assessment that I've read yet, and flags Lombardi's definition of authentic learning:

  1. Real-world relevance: the need for authentic activities within a realistic context.
  2. Ill-defined problem: confronting challenges that may be open to multiple interpretations.
  3. Sustained investigation: undertaking complex tasks over a realistic period of time.
  4. Multiple sources and perspectives: employing a variety of perspectives to locate relevant and useful resources.
  5. Collaboration: achieving success through division of labour and teamworking.
  6. Reflection (metacognition): reflection upon individual and team decisions.
  7. Interdisciplinary perspective: encouraging the adoption of diverse roles and thinking.
  8. Integrated assessment: coinciding the learning process with feedback that reflects real-world evaluation.
  9. Polished products: achieving real and complete outcomes rather than completing partial exercises.
  10. Multiple interpretations and outcomes: appreciating diverse interpretations and competing solutions.

So this paper is worth reading for the above reasons. But am I convinced by his pitch for learning analytics as the way forward? No - it's completely fanciful and unsupported by any evidence. Which makes me feel better - we agree on the problem, and it's not just me being thick because I can't quite figure out the solution.




Assessing collaborative learning: big data, analytics and university futures. Assessment & Evaluation in Higher Education 28 July 2016 doi: 10.1080/02602938.2016.1216084
Assessment in higher education has focused on the performance of individual students. This focus has been a practical as well as an epistemic one: methods of assessment are constrained by the technology of the day, and in the past they required the completion by individuals under controlled conditions of set-piece academic exercises. Recent advances in learning analytics, drawing upon vast sets of digitally stored student activity data, open new practical and epistemic possibilities for assessment, and carry the potential to transform higher education. It is becoming practicable to assess the individual and collective performance of team members working on complex projects that closely simulate the professional contexts that graduates will encounter. In addition to academic knowledge, this authentic assessment can include a diverse range of personal qualities and dispositions that are key to the computer-supported cooperative working of professionals in the knowledge economy. This paper explores the implications of such opportunities for the purpose and practices of assessment in higher education, as universities adapt their institutional missions to address twenty-first century needs. The paper concludes with a strong recommendation for university leaders to deploy analytics to support and evaluate the collaborative learning of students working in realistic contexts.
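To be fair to the idea, here's the sort of thing "assessing collective performance" from activity data might mean at its most basic: a participation-balance score computed from per-member activity counts in a shared project log. The metric and the data are invented for illustration - the paper proposes no specific formula:

```python
import math

def participation_balance(activity_counts: dict[str, int]) -> float:
    """Normalised entropy of members' activity shares: 1.0 means perfectly
    even participation, values near 0.0 mean one person did everything."""
    total = sum(activity_counts.values())
    shares = [c / total for c in activity_counts.values() if c > 0]
    entropy = -sum(s * math.log(s) for s in shares)
    return entropy / math.log(len(activity_counts))

# Hypothetical activity counts from a four-person team project
team = {"alice": 42, "bob": 37, "carol": 5, "dave": 1}
print(participation_balance(team))  # well below 1.0: an unbalanced team
```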