Students are sceptical of the computer-based CAT. Here, Prometric explains the process that is used to compute the scores to ensure that the scores are ‘fair’.
A few days after registration commenced for the second computer-based Common Admission Test (CAT), a yardstick for admissions to the elite Indian Institutes of Management, the organisers ran into a few glitches. The dreaded ‘malware' that reportedly crippled the CAT experience last year, and had students alleging that the entire testing process was ‘unfair,' resurfaced just as registration opened. Those who visited the online site found the portal infected with a virus, and it was flagged as a “harmful” site by several search engines. Given that all details of when to apply, how to go about it and the related protocols are hosted on this site, this did create some panic and generated considerable buzz online.
Since the official portal (www.catiim.in) is only an information site, this does not directly affect the registration process. Registration at the Axis Bank centres was generally a smooth affair, and the user interface was an improvement on last year's registration software. Prometric — the U.S.-based firm that is the IIMs' implementing partner for the computer-based CAT — has assured that CAT this year will be a better experience, considering the testing window has been increased from 10 to 20 days, and the testing centres have been chosen with more care and will be “sanitised” well in advance. The on-ground implementing partner, NIIT, too, was replaced, and the contract was given to MeriTTrac Services and Everonn, which have conducted several online examinations in India.
So, are CAT aspirants looking forward to a better testing experience? An opinion poll conducted by MBAUniverse.com asked candidates whether they thought CAT 2010 would be glitch-free; 47 per cent of respondents were pessimistic. This shows that many of them are still very sceptical, an announcement on the portal stated.
CAT 2009 spelt testing times for thousands of students who encountered technical glitches in the software while taking the examination, and for those whose exams were postponed due to server crashes. Scores of others, who were able to take their test, complained of smaller software issues, such as screens that would not load properly or sessions that timed out midway. These students were angry because they felt this put them at a disadvantage, and there were allegations that many were given extra time and therefore gained an undue advantage.
So, will the testing process be fair this time? As Shivkumar Mathan, a CAT aspirant, asks: “How will they ensure that 20 different question sets will be of the same difficulty level?”
In a document shared with The Hindu by Dave Meissner, Vice-President, Solution Services, Prometric, he explained that CAT is scored using scientifically proven and internationally accepted techniques which ensure accuracy and fairness.
Each submitted test must accurately reflect the performance of the candidate, and no external factors such as the date, time or location of the test, will impact the final score.
All candidates are compared against a common scale. In order to meet these requirements, Prometric creates multiple forms, or versions of the paper, and scores them using a multi-step process which ensures that even though each candidate took a slightly different paper, they are evaluated on equal terms.
But how can they ensure that different question sets (which may imply varying difficulty levels) will not give one candidate an advantage over another? Mr. Meissner explains that a raw score is calculated first, by awarding candidates three points for each correct answer and deducting one point for each incorrect answer. Then, because there are multiple versions of the paper, the second step is to ‘equate' the forms.
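The raw-score step described above can be sketched in a few lines of Python. This is only an illustration of the stated marking scheme (+3 for a correct answer, -1 for an incorrect one); the treatment of unattempted questions is an assumption here, taken to carry no penalty.

```python
def raw_score(responses):
    """Compute a raw score from a list of per-question outcomes.

    responses: list of 'correct', 'incorrect' or 'skipped' strings.
    Marking scheme per the article: +3 per correct, -1 per incorrect;
    skipped questions are assumed (not stated in the article) to score 0.
    """
    score = 0
    for outcome in responses:
        if outcome == "correct":
            score += 3
        elif outcome == "incorrect":
            score -= 1
    return score

print(raw_score(["correct", "correct", "incorrect", "skipped"]))  # 3 + 3 - 1 = 5
```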
“A small number of questions are present in more than one version of the paper. These questions allow us to measure how candidates taking different forms compare against each other when asked the same question. By using enough of these questions across all forms, we can adjust each candidate's raw score and provide each candidate with the score he/she would have earned if he/she had taken the same form, at the exact same time.”
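The article does not disclose Prometric's exact equating model, but a standard textbook technique that matches the description of shared “anchor” questions is mean equating: each form's scores are shifted by the difference between that form's average anchor performance and a reference form's. The sketch below is a simplified illustration of that idea, not Prometric's actual procedure.

```python
def mean_equate(anchor_means, raw_scores, reference_form):
    """Shift each form's raw scores onto the reference form's scale.

    anchor_means: dict mapping form name -> mean score on the shared
        anchor questions among candidates who took that form.
    raw_scores: dict mapping form name -> list of candidates' raw scores.
    Returns a dict of adjusted score lists, all on the reference scale.
    """
    ref = anchor_means[reference_form]
    adjusted = {}
    for form, scores in raw_scores.items():
        # A harder form yields a lower anchor mean, so its candidates
        # receive a positive shift to compensate.
        shift = ref - anchor_means[form]
        adjusted[form] = [s + shift for s in scores]
    return adjusted

anchor_means = {"A": 10, "B": 8}              # form B's anchors answered worse: harder form
raw_scores = {"A": [60, 70], "B": [58, 68]}
print(mean_equate(anchor_means, raw_scores, "A"))
# form B candidates get +2: {'A': [60, 70], 'B': [60, 70]}
```

Real operational equating typically uses more sophisticated models (linear or item-response-theory equating), but the principle is the same: common items reveal how form difficulty differs, and scores are adjusted accordingly.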
After ‘equating' the scores, these are placed on a common scale, creating a range of scores that can be used to create a percentile rank for the test as a whole, and for each section. Candidates scoring in the top percentile performed at the highest level when compared to all other candidates, Prometric clarifies.
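The final step can be sketched with one common definition of a percentile rank: the share of all candidates whose equated score falls strictly below a given candidate's. The article does not specify the exact formula Prometric uses, so this is an assumption for illustration.

```python
def percentile_rank(score, all_scores):
    """Percentage of candidates in all_scores scoring strictly below `score`.

    One common percentile-rank definition; the exact formula used for
    CAT is not stated in the article.
    """
    below = sum(1 for s in all_scores if s < score)
    return 100.0 * below / len(all_scores)

scores = [50, 60, 60, 70, 80]
print(percentile_rank(70, scores))  # 3 of 5 scores are lower -> 60.0
```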