Thursday 26 November 2015

"Second class" lecturers

Business Schools teach about successful (and sometimes unsuccessful) businesses where strategic missions are fully supported by an appropriate mix of resources.  Taking Google as a success story, we can clearly see the mix of technical, entrepreneurial, marketing, financial and people-management expertise that combines to make the company a good place to build a career (so we are told) and a successful place to invest the pension pot.
So, you would expect that a Business School would balance the mix of lecturers and professors in business education, as suggested in AACSB Standard 15?
In brilliant business-consultancy style, AACSB identifies a 2 x 2 matrix of academic staff, sorted by sustained activity (either scholarly/research or applied/practice) and point of entry (either the PhD route or a professional career).  This gives four "classes" of lecturer:

  • SA: entry via the PhD route; sustained scholarly / research activity
  • PA: entry via the PhD route; sustained applied / practice activity
  • SP: entry via a professional career; sustained scholarly / research activity
  • IP: entry via a professional career; sustained applied / practice activity

Key: SP = Scholarly Practitioner; IP = Instructional Practitioner; SA = Scholarly Academic and PA = Practice Academic.

Clearly, SAs are the traditional model of research-focused academics, able to command the title "Professor", be feted at international conferences, draw high salaries...  If they can communicate with mere earthlings in order to be effective teachers, then that's a bonus.  SPs can also gain this academic high ground by embracing the SA career path.  Often SPs have enough about them to communicate well with students and teach effectively - "real world" experience, it is called - as if Business Schools were not actually "the real world".

IPs and PAs, however, are the second-class lecturers.  These are often effective and popular teachers, charged with student satisfaction, programme management and delivery, recruitment and retention, and so on.  Typically they do not enjoy the title of "Professor" and do not draw high salaries, but they quietly ensure that the main source of revenue for the School (its student fees) is maintained.  And let's not forget the administrative and support staff who sweep up after all of the academics.

If a successful Business School needs a mix of resources, you would expect to see equal incentives and rewards for all of the AACSB categories - wouldn't you?

Thursday 19 November 2015

Quick and dirty beats slow but measured feedback

Let me warm to the theme started in my last blog - the need for lecturers and students to know where the "goalposts" are in relation to assessments in their HE studies.
Clearly, lecturers need to develop a clear, consistent, relevant, appropriate and well-communicated set of assessment criteria that allow their students to achieve the learning outcomes for the lesson, course or programme.  These can often be expanded into a range of descriptors that explain a particular level of achievement - such as the example from an FHEQ Level 6 taught module for the assessment criterion "Analysis / Discussion / Evaluation":

By selecting the descriptor for the chosen criterion, the marker can begin to indicate the level of performance of the individual student.  There is also good scope to add some "feed forward" - a suggestion or two about how the work could be improved.

So, for an assignment scoring 60-69%, the feedback could be:

A good attempt to analyse or prioritise issues and to draw conclusions.  The example of XYZ Corporation's new JIT system showed the key costs and benefits clearly.  To improve the work, an example of JIT failing in practice would give a more rounded picture.

So, well-considered and well-designed marking criteria, communicated at the outset, become the basis for clear feedback that can still be given an individual focus for each student - as the following continuum shows:

Contrast this with the in-line comments on a script, or the careful summary of points at the end of a piece of work.  Yes, criteria-based feedback is less focused on the individual, and yes, it can look like the tutor is marking in a mechanical way, but it has the advantages of a quicker turnaround for the student, efficiency for the marker, and a clear and consistent set of standards - whether the paper is marked on a Monday morning or on a Friday evening.
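
For illustration only, here is a minimal sketch of how band-based criteria and feed-forward might be assembled into feedback.  The band boundaries, the descriptor wording (apart from the 60-69% example above) and the function names are invented for the sketch, not taken from any real marking scheme.

# Illustrative sketch only: a hypothetical band-to-descriptor lookup for one
# criterion, plus a slot for an individual feed-forward note.  The bands and
# most of the wording are invented; only the 60-69% descriptor echoes the
# example given above.

BANDS = {
    (70, 100): "An excellent, well-evidenced analysis with clear prioritisation of issues.",
    (60, 69): "A good attempt to analyse or prioritise issues and to draw conclusions.",
    (50, 59): "Some analysis is present, but issues are described rather than prioritised.",
    (40, 49): "Largely descriptive, with little evidence of analysis or evaluation.",
    (0, 39): "No meaningful analysis, discussion or evaluation of the issues.",
}

def feedback(mark: int, feed_forward: str) -> str:
    """Return the shared band descriptor for a mark, plus an individual feed-forward note."""
    for (low, high), descriptor in BANDS.items():
        if low <= mark <= high:
            return f"{descriptor} To improve the work: {feed_forward}"
    raise ValueError(f"Mark {mark} is outside the 0-100 range.")

print(feedback(65, "an example of JIT failing in practice would give a more rounded picture."))

The point, of course, is not the code but the consistency: the same descriptor comes back whether the script is marked on a Monday morning or a Friday evening, while the individual touch is preserved in the feed-forward.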

Wednesday 11 November 2015

"Goal Line Technology" in Higher Education

So was Geoff Hurst's wonder goal in the 1966 World Cup final actually a goal?  English (and Russian) standards say "yes", but German standards say "no" - and all that had to be decided was whether a ball had crossed a line...

So how much trickier is it for the University teacher to decide whether a piece of work meets the standard, when there is so much more to it than a ball crossing a line on the grass?

Standards for Higher Education are set by articulating learning outcomes (intended learning outcomes (ILOs), that is) for particular courses of study.  Learning frameworks such as Bloom's Taxonomy are often used to show the progression of achievement and expectation as studies progress.  These outcomes can include the ability to describe or explain, to apply a concept, to analyse and evaluate data or information, and to synthesise information from a number of sources.
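
As a purely illustrative sketch, that progression might be written down as an ordered list of levels with typical assessment verbs.  The level names, verb groupings and the small function below are simplified inventions, not an official rendering of Bloom's Taxonomy or any module's actual ILOs.

# Illustrative sketch only: Bloom-style levels as an ordered progression, each
# with a few example verbs.  The groupings are a common simplification, not an
# official taxonomy.

LEVELS = [
    ("Describe / Explain", {"describe", "explain", "outline"}),
    ("Apply", {"apply", "use", "demonstrate"}),
    ("Analyse / Evaluate", {"analyse", "evaluate", "compare"}),
    ("Synthesise", {"synthesise", "design", "integrate"}),
]

def highest_level(ilo_verbs) -> str:
    """Return the highest point in the progression reached by a set of ILO verbs."""
    reached = "none"
    for name, verbs in LEVELS:
        if verbs & set(ilo_verbs):
            reached = name  # later entries sit higher up the progression
    return reached

# A final-year (FHEQ Level 6) module would be expected to reach the top of the progression.
print(highest_level({"describe", "analyse", "synthesise"}))  # prints "Synthesise"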

So, when ILOs are articulated, it becomes relatively simple to design assessment that tests a student's ability to describe, apply, analyse and so on.  It then becomes simple to set the "goalposts" by writing, and sharing with students, the clear criteria to be used in assessing their work.  Students know what to aim for and teachers know what they are looking for (this becomes very important when a large cohort's work is marked by a team of markers).  Further, ranges of achievement against specific criteria can be articulated.

Then, and here's the beauty of the scheme, feedback can be structured to respond to the student's performance against each particular criterion.  Feed-forward can show students how their marks can be improved in future.

If the 1966 World Cup happened today, we would (just) have goal-line technology to determine whether England's third goal was actually a goal.  So why do University teachers still resemble the Russian linesman in the way they make decisions?  Vague, uncertain, subjective - in fact, no help at all to the students who should be learning through the whole experience.


Thursday 5 November 2015

But will I ever have to write an essay after University?

Assessment in Higher Education has had a familiar look and feel for many years.  Whilst many disciplines focus on exams - although these have more to do with memory, and the examiner's sure knowledge that the answers are the unadulterated work of the individual student - others favour coursework.  But how often does that coursework take the form of a knowledge-based test, an academic essay masquerading as an assignment, or some steps in the preparation of a dissertation or research project?

Business programmes need to prepare students for business careers - where knowledge will not be remembered but "googled", because it goes out of date so swiftly, and where analysis is carried out in order to compile a report, briefing or presentation.

Clear parallels exist between the business report or presentation and an academic assignment:

  • Both must be focused on a key issue;
  • Both must communicate their key message clearly;
  • Both must provide evidence to support claims and contentions;
  • Both must analyse the data and information researched;
  • Both must provide evaluation of the evidence in the form of a conclusion or recommendation.
Authentic business mechanisms can readily be used to illustrate academic learning outcomes by teachers and lecturers familiar with "industry", "business practice" and "the real world".  It just needs a little thought, a little embedding of real business skills in the curriculum and a move away from the academic comfort zone of the quasi-thesis or mini-research article.

Authentic assessment is capable of measuring learning outcomes, but is business education really authentic?  Check out the Key Information Set (KIS) for any "top" UK Business School and ask whether a coursework weighting below about 50% really represents authenticity.