Why 70? Why not 68? Or 72?

Or… how does one quantify the human mind?

I recently (too recently for the memory to not be painful, so bear with me) filled in a long questionnaire to get my personality type assessed. When I say long, I mean the thingy had over 500 questions. Yes, ouch. Ouch is what I said, along with true and false, given my noble intention of being brutally honest about myself. And at the end, the assessor totted up the scores and pointed out those that were above 70 – which brings me back to why 70? What is to say that 70 is normal? – and therefore, er, the obvious aberrations (disorders?) in my personality.

I googled for this test and came across this phrase, among other eulogies: a “breakthrough in objective psychological assessment”. Now, here is the thing – how can psychological assessment be objective? And exactly how objective can you be in assessing another person? As it happens, the same article goes on to say that the test has been described as “the most successful failure in the history of psychological test construction”. Yet, for over fifty years, this test has been used extensively not just by clinical psychologists (as it was originally intended) but by potential employers, law enforcers, wannabe divorcees…

And what is alarming is not just that this and other such tests are administered freely (don’t even mention copyright – or ethics), but that they are often used as stand-alone measures of personality, and therefore of past and likely future behavior. This is what another article on such personality tests says: “These tests are scattershot in their attempts to target liars, cheats, and thieves. According to a review conducted by the federal government’s Office of Technology Assessment, 95.6 percent of people who fail integrity tests are incorrectly classified as dishonest – an error rate far worse than that of the notoriously unreliable polygraph machine” (emphasis mine).
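That 95.6 percent figure is less mysterious than it sounds: it is what base rates do to any screening test, however “objective”. Here is a small back-of-the-envelope sketch – the prevalence, sensitivity, and false-positive numbers below are my own hypothetical assumptions, not figures from the OTA review – showing how a test that catches most dishonest people can still be wrong about most of the people it flags, simply because genuinely dishonest test-takers are rare.

```python
# Hypothetical base-rate arithmetic. All three rates below are assumed
# for illustration; they are NOT the actual figures behind the 95.6%.
prevalence = 0.01            # assume 1 in 100 test-takers is actually dishonest
sensitivity = 0.90           # assume the test flags 90% of the dishonest
false_positive_rate = 0.20   # assume it also flags 20% of the honest

population = 100_000
dishonest = population * prevalence
honest = population - dishonest

true_flags = dishonest * sensitivity        # rightly flagged as dishonest
false_flags = honest * false_positive_rate  # honest people wrongly flagged

# Of everyone who "fails" the test, what share is actually honest?
share_wrong = false_flags / (true_flags + false_flags)
print(f"Share of flagged people who are honest: {share_wrong:.1%}")
# → Share of flagged people who are honest: 95.7%
```

Under these made-up but plausible-looking numbers, the result lands right next to the article’s 95.6 percent – which is the point: the scary error rate comes from the arithmetic of rare events, and no amount of “objective scoring” makes it go away.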

My own experience was that I came away with a list containing numerical values and codes for the overwhelming defects in my personality – and not a whiff of a solution. No further talking and probing (there, I have said it – I am a qualitative researcher, after all) – no “subjective” validation (if you think that is an oxymoron, think again; we do it all the time) of the “data”…

Here is where we stop and take a look at some of the questions – and then slowly shake our heads in disbelief and wonder at the thought of attempts to quantify and measure human emotions and states of mind… So, why is the survey method not always the best? Two things: 1. because the method is so blatantly unsuited for the kind of understanding required to look into a human mind, and 2. because, as always in such cases, the “data collection” instrument is designed in such a manner as to include all the cardinal sins of questionnaire design.

While I can go on and on about point 1, here is where I have actually gone on and on – about point 2. Read on if you are interested in these “cardinal sins”: Questionnaires that confuse and confound. And while I am at it, I quote Harini on what drives quantitative research: confusion on cause, causality, correlation and conjecture.


  1. if it wasn’t so dangerous, it would be quite funny.
    when i was heading brand on a leading marathi channel, the agency person gave me an insight. s/he said that data shows that maharashtra has a significant number of castes, and that caste loyalties were high. and that different castes probably behaved differently. And therefore, the marathi channel should do caste based programming. before we proceeded to slaughter him/her, you could hear our collective jaws hit the concrete and bounce back.
    in stuff like this i would rather consult the roadside astrologer with the parrot – the probability of them getting it right is marginally higher !!
