Thursday, 20 December 2018

Is Christmas really necessary?

Planning a relevant, thoughtful, challenging but accessible learning experience for HE students in UK Universities is a skilled pastime.  Carefully navigating the strictures of institutional "norms" on contact hours, timetabling, blended inputs, assessment types and weights has to be balanced with programme outcomes and intended learning goals.  To top it all, the material, the engagement and the enthusiasm have to be maintained at a high level to ensure "good" feedback and to avoid that awkward conversation with the Dean over a "sub-standard" score of 3.9 on the Richter scale of academic value.

Now add in the generosity of Church, State and tradition to engineer a four-week "break" at Christmas and another at Easter.  The former divorces the learning experience from the assessment in many Universities, whilst the latter splits the teaching into two unequal parts but, at least, presents assessment opportunities fairly close to the end of the teaching.

The answer?  Ban Christmas and ignore Easter?

No. Simply allow for flexibility in planning teaching and assessment so that the barriers and hurdles are seen as opportunities and so that the student journey is dictated more by good learning design than by religious festivals.

Happy Christmas.

Thursday, 13 December 2018

Nobody cares about apathy anymore

In much survey-based research in Social Sciences the value and quality of the research instrument (the questionnaire) is often eclipsed by the importance of getting a "good" response rate.

The Survey/Anyplace blog attempts to measure the "average" response rates that researchers can expect.  There is great celebration in the office when a survey actually gets above the "average".

In-person surveys are always likely to get above-"average" rates of response.  They are targeted and difficult to avoid when a respondent is accosted in the street (although Chuggers (Charity Muggers) have taught people to steer around folks brandishing clipboards), but they are also costly.

Mail surveys can be targeted, too, and will involve some cost, although FREEPOST can prevent costs from being incurred for non-responses.  But, like some in-person surveys, they will involve the researcher in manual input of data prior to analysis.  Does anyone actually do telephone surveys with random respondents any more?  Call barring, opting-out and GDPR will have shielded many from this.  Telephone polls using panels of respondents fare better, even if they consistently fail to forecast the result of a Referendum or Election.

We all know that surveys arriving by email can be captured by SPAM filters or simply ignored; it's even easier to ignore low-cost on-line or in-app surveys.

Bribery, personalisation, careful targeting and selection, persistence and even advertising a worthy cause can all help to increase response rates.  Ultimately, however, the sample used will be biased, self-selecting, obliging (what is the right answer?) and probably unrepresentative.
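The self-selection point is easy to demonstrate with a toy simulation.  The response probabilities below are invented purely for illustration (assuming the least satisfied are the keenest to respond), not drawn from any real survey:

```python
import random

random.seed(42)

# Toy population: "true" satisfaction scores from 1 (unhappy) to 5 (delighted).
population = [random.randint(1, 5) for _ in range(10_000)]

# Hypothetical self-selection: assume unhappier students are more motivated
# to respond.  These probabilities are illustrative assumptions, not data.
response_prob = {1: 0.40, 2: 0.30, 3: 0.15, 4: 0.10, 5: 0.10}

# Each student responds with the probability attached to their score.
respondents = [s for s in population if random.random() < response_prob[s]]

true_mean = sum(population) / len(population)
sample_mean = sum(respondents) / len(respondents)
response_rate = len(respondents) / len(population)

print(f"response rate:             {response_rate:.0%}")
print(f"true mean satisfaction:    {true_mean:.2f}")
print(f"sampled mean satisfaction: {sample_mean:.2f}")
```

Even with a respectable-looking response rate of around 20%, the sampled mean sits noticeably below the true mean, because who answers matters far more than how many answer.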

So why do we get so excited when a 20% response rate for on-line module feedback indicates "below average" performance from a lecturer and reveals that the lecturer had egg on his tie or was wearing mismatched earrings?

Thursday, 6 December 2018

Satisfied but Unemployable

I am indebted to an Australian colleague for coining the phrase "Satisfied but Unemployable" when discussing the focus held by many Universities on Student "Satisfaction".

We cannot blame Universities for responding so positively to the "market" and to the regulatory measures that focus on voter appeal, rather than quality.

But we can blame every individual who votes in satisfaction surveys without truly reflecting on whether their "satisfaction" is surface or deep.  Without reflecting that the uncomfortable, challenging module in which a low mark was awarded (and which, accordingly, earned the lecturer a low feedback score) was, in fact, one of the best learning events in the whole degree.

We can blame those institutions, newspapers, government ministers and agents and, of course, University marketing directors, who propagate the illusion that "satisfaction" is a meaningful measure in itself.

And we can blame ourselves for believing it so readily and then failing to make changes to practice that will add to efficacy - even if students (voters) struggle to see the immediate relevance of the rite of passage called a University Education.