Friday, September 24th, 2010
In this session we will explore the following:
1. Computer-based scoring of psychometric tests
2. Hand-scoring of psychometric tests
3. Norming of test results
4. The link between scoring of tests and reliability
Converting raw scores to standardised scores and using representative norms will be covered in a later session.
Once a psychometric test has been properly administered, it needs to be scored. Depending on the test chosen, you may have a few options.
a. You can opt for computer-based scoring.
This would work if you had administered the test using computer software or if you had asked your candidate to complete an online test. For online tests, this option is good because it is less likely to involve scoring errors. Your candidate completes the test online and the system immediately and automatically scores it. No additional input is required, hence less chance for error. This presupposes, of course, that the publisher has used the correct scoring algorithms. Whilst most reputable test publishers will have, we do know of one who had an error in a test battery that was not spotted until one of their distributors pointed out that his partner had done poorly on a test for which she was a subject matter expert!
If you administer the test to your candidate using desktop software, you should be able to automatically score it in the same way as above.
b. You can opt for hand-scoring or a bureau service or keyed input followed by computer-scoring. You are most likely to use this option if you administered the test to your candidate using hard-copy test booklets and answer sheets.
Firstly, you’ll need to double-check the answer sheets to ensure that there are no irregularities. Ensure that it’s obvious which answer the respondent selected. Be careful with any “blobs” that may have appeared from ink or pencil smudges etc. If a respondent has changed their mind after selecting a response and has crossed it out, ensure that you only use the most recent response in scoring.
For hand-scoring using a scoring key, you’ll next need to align the scoring key with the answer sheet. The exact requirements will vary based on the test you are using, so ensure that you read and fully understand the instructions provided by the test publisher.
Once you have scored the responses, double-check your scoring. You then need to record the score. The score you calculate at this point is called the RAW SCORE. On its own, a raw score means nothing. If I tell you that you scored 54 on a numerical reasoning test or 75 on the extraversion scale of a personality assessment, you’ll need to ask me more questions before you truly understand your score. The most important question to ask would be how your score compared to others. The comparison of your score with others is called norming.
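The mechanics of scoring against a key can be sketched in a few lines of code; the items and correct answers below are invented purely for illustration:

```python
# A minimal sketch of hand-scoring logic: compare each response against
# the publisher's scoring key and tally matches to obtain the raw score.

def raw_score(responses, scoring_key):
    """Count responses that match the key; blank items score zero."""
    return sum(
        1 for item, correct in scoring_key.items()
        if responses.get(item) == correct
    )

scoring_key = {1: "B", 2: "D", 3: "A", 4: "C"}
candidate = {1: "B", 2: "D", 3: "C"}   # item 4 left blank

print(raw_score(candidate, scoring_key))  # prints 2
```

Note that an unanswered item simply fails to match the key, which mirrors how a blank on an answer sheet earns no credit.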
It is called norming because we compare a candidate’s score to a group of others (called the norm group) who completed the test in the past. You can undertake this comparison by way of a simple calculation or through norm tables, either developed by yourself or, more usually, supplied by the test publisher.
Norm tables allow us to use a standard vocabulary for expressing a candidate’s score in relation to others who have taken the test, and it is for this reason that we call your new score a standardised score. A standardised score is simply your candidate’s raw score, compared with the norm group and expressed in terms of how the candidate scored in relation to others. We’ll consider standardised scores in more detail in a later lesson. You’ll see by now that your objective is to calculate the candidate’s standardised score, as this is the way to achieve maximum meaning. If you opt for paper-and-pencil tests and hand-scoring, the process can be lengthy. So are there other options?
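A norm-table lookup of the kind described above can be sketched as follows; the cut-offs and percentiles here are invented, not taken from any real test:

```python
from bisect import bisect_right

# Hypothetical norm table: (raw-score cut-off, percentile). A real table
# would come from the test publisher, based on a representative norm group.
NORM_TABLE = [(20, 10), (30, 25), (40, 50), (50, 75), (60, 90)]

def percentile(raw):
    """Return the percentile for the highest cut-off the raw score reaches."""
    cutoffs = [cutoff for cutoff, _ in NORM_TABLE]
    idx = bisect_right(cutoffs, raw) - 1
    return NORM_TABLE[idx][1] if idx >= 0 else 1  # below all cut-offs

# A raw score of 54 reaches the 50 cut-off, so it sits at the 75th percentile:
print(percentile(54))  # prints 75
```

The percentile is what gives the raw score its meaning: it tells you how the candidate compares with the norm group.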
We have already seen above that we can simply have the candidate complete an online test. However, you may not wish to do this if there are many candidates. This is because you will need as many computers as candidates if you are going to supervise them. If you are using an unsupervised test, the candidate can complete on their own PC, but you may be concerned about possible cheating and so on. This is why you may end up using paper and pencil tests (in a supervised environment). However, there is an alternative to arduous hand-scoring if you have used paper and pencil tests.
You can use the bureau service of your psychometric test distributor. You just need to check that the answer sheet is properly completed, clear and free from any irregularities, and then send it to the distributor by fax or as a scanned email attachment. The bureau service will then score the test for you and send you a report.
Furthermore, you may have yet another option. If you have access to a computer-based or online test system, you can probably also enter the candidate’s responses to each question into the system and have it produce the report. This is essentially what the bureau service above does for you, and doing it yourself should work out cheaper. Do be careful when you key in the responses, though: accuracy is far more important than speed unless you want to invalidate the whole process!
Self-scoring answer sheets: Some psychometric tests are supplied with self-scoring answer sheets. These are much easier to use than non-self-scoring answer sheets. In this case you usually need to open up the answer sheet by tearing off some perforated card. Inside the answer sheet, the candidate’s responses will have been duplicated via carbon or similar onto a scoring card. Usually, you add up the number of responses (often black circles) that appear inside a circle. Those outside of a circle represent incorrect answers so don’t get counted. Once you’ve added up correct responses, you have your raw score. Slightly different procedures obviously apply for personality assessments and fewer personality assessments provide self-scoring answer sheets due to their scoring complexity. When using self-scoring answer sheets you need to be especially careful to ensure that the candidate presses hard on the answer sheet when completing the test. If they are light-handed their responses may not come through onto the scoring card!
Finally, let’s consider the link between psychometric test scoring and reliability/validity. As you know, the test administrator can have a huge impact upon psychometric test reliability throughout the whole process. At the scoring stage you can affect reliability simply by scoring incorrectly. This might happen because you miss the fact that a candidate crossed out their answer and changed their mind. It may also happen because you try to score fast and just don’t add up correctly. Perhaps you use the scoring key incorrectly or perhaps the scoring is so arduous (often the case for personality assessments) that you simply get lost in the scoring or incorrectly use your calculator!
Ensure therefore that you fully understand how to score the test, use the scoring key as per the publisher’s instructions, score slowly and double check or have someone else double check your scoring. If possible, use computer based scoring or self-scoring answer sheets. Incorrect scoring reduces reliability and of course that means that a valid test can become invalid and a waste of time or money!
Interested in learning more about psychometric testing for HRM? Keep reading – your next free session is not far away! To ensure you don’t miss a single instalment, we suggest you follow us on Twitter, as each new post will be announced there. You may also like to join our face-to-face psychometric training courses in Singapore or Hong Kong – these range from simple introductory courses through to certification courses such as the BPS Level A and BPS Level B Certificates of Competence in Occupational Testing. Not in Singapore or Hong Kong? No problem – we also offer both recorded and live online training in psychometrics! For full details please see here or email us.
Monday, September 6th, 2010
Yesterday I was watching a program from the UK which fights for consumer rights. A segment of the program was reporting on a sofa that was not fit for purpose, and this led my mind back to psychometrics. We’re always looking for easy ways to define some of the more technical aspects of psychometrics, and this was a good example!
The sofa looked absolutely fine. In fact, it was beautiful leather and looked very expensive. To relate this back to psychometric testing we could say it had FACE VALIDITY. The sofa looked as if it would do the job it is supposed to do (on the face of it). Likewise, a test, be it personality or aptitude, which looks like it will do the job it is supposed to do is said to have face validity. We assess face validity simply by looking at the test. However, face validity is not very important in the grand scheme of things! It’s important for candidate buy-in of course. If you are given a test as part of a selection process and that test doesn’t seem relevant to the job you won’t be happy with the process and may not take it or the company too seriously!!
The sofa, despite looking great, had some major problems. The first time its owner sat on it, it fell apart. There were lots of flaws in the design and so on. Likewise, some of us may have experienced similar examples with second-hand cars. They may look excellent on the face of it, but then they break down on the way home! In other words, the sofa or the car are not FIT FOR PURPOSE. This is a major problem. You use psychometric tests to help discriminate between candidates and to help you select the best. If there is something fundamentally wrong with the design of the test that causes any problems, then the test will not be fit for purpose. It will not be valid, even if it has face validity.
It’s for this reason that it’s not a good idea to ask a test supplier for a free trial to “validate the test”, as some of our clients do! Often this is similar to a second-hand car buyer looking at the paintwork on the car and ignoring the mechanics because they know little about them.
If you are interested in learning how to evaluate the “mechanics” of the many psychometric tests out there and knowing how to choose good from bad based on critical information, please consider attending either our face-to-face psychometric training courses in Singapore and Hong Kong or joining our live online or distance learning in psychometrics. Full details here: http://www.psychometricassessment.com/psychometric_training_courses.php
Wednesday, August 11th, 2010
Psychologist Vincent Wong carried out an analysis of psychometric tests in use across Asia. More than 40 tests from no fewer than 20 test developers were reviewed. The analysis had several focuses, including practical information about the tests (such as price and practical design issues), the construct of each test, report design, technical details and training requirements.
There is a wide pricing range among tests from different developers. At the lower end of the continuum, one provider offers its entire product range free of charge, with only a section of the chargeable report produced. Obviously, to obtain useful information the user has to pay for the full report, and this is certainly a marketing strategy. From a psychometric perspective, however, this practice seriously harms the integrity of the test, as anybody can access the tests an unlimited number of times. These tests can therefore only be seen as suitable for people interested in trying tests out, rather than being usable in organizational settings. For more protected tests, prices range from US$10 to more than US$120, with some providers charging per use while others also charge a subscription fee (usually paid annually).
In this analysis, several design dimensions of the tests were considered: the split between ipsative and normative measures, the types of scales employed, and other practical issues such as the medium of test administration.
The majority of the personality assessment tools (over 80%) employed normative measures (tools that compare the respondent with a group of similar others, or the norm group), while the remaining ones employed an ipsative style (tools that determine the relative preference among different personality traits within the respondent). Two exceptional cases were identified which employed a mixed style, i.e. normative plus ipsative. The popularity of the normative style may come down to the fact that, for tests designed for selection purposes, the normative style is the better one to go with, as it actually compares respondents with others. On the other hand, ipsative measures can provide better knowledge about the preferences or strengths within the respondent. In line with this, we found that most of the ipsative tests were preference or value tests designed for coaching or counselling purposes, although some ipsative measures designed for selection were also identified. The tests that incorporated both normative and ipsative styles made use of the underlying difference between the two scale types, with the discrepancy between them representing the gap between the respondent’s real and ideal self.
The type of scale used by a test is largely a function of whether it is ipsative or normative. For normative tests the most popular scale type was the 5-point Likert scale (a Likert scale asks respondents to choose, from several options, the one that best represents their view). 7-point scales were also quite common, and there were a few occurrences of 3-point and 9-point scales. Besides Likert scales, a few normative tests employed a true/false scale. Ipsative tests employed forced-choice scales. One of the more popular versions asked respondents to pick the option that described them best (usually termed ‘most like me’) as well as the option that described them worst (usually termed ‘least like me’). Another version asked respondents to rank the available options in order, although this was very uncommon.
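The practical difference between normative Likert scoring and ipsative forced-choice scoring can be illustrated with a short sketch; the trait names and item formats below are invented, not taken from any real test:

```python
# Normative scoring: each trait is rated independently, so scores can
# all be high, all be low, or anything in between.
def score_likert(ratings):
    """Sum 1-5 Likert ratings per trait independently."""
    totals = {}
    for trait, rating in ratings:
        totals[trait] = totals.get(trait, 0) + rating
    return totals

# Ipsative scoring: every point gained on one trait is lost on another,
# so the scores describe preferences *within* the respondent only.
def score_forced_choice(blocks):
    """+1 for the 'most like me' trait, -1 for 'least like me'."""
    totals = {}
    for most, least in blocks:
        totals[most] = totals.get(most, 0) + 1
        totals[least] = totals.get(least, 0) - 1
    return totals

print(score_likert([("warmth", 4), ("warmth", 5), ("dominance", 2)]))
print(score_forced_choice([("warmth", "dominance"), ("order", "warmth")]))
```

Note that the ipsative totals always sum to zero, which is why ipsative scores compare traits within a respondent rather than between respondents.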
Most of the surveyed tests, if not all, were designed to be completed in a computerized environment. While some could be administered online in an unsupervised manner, quite a few required supervised administration. A few tests provided separate versions for supervised and unsupervised administration; having more than one version allowed results to be verified under supervision after a candidate had passed the unsupervised session. Paper-and-pencil versions were usually available at a similar price to the computerized version, although a few tests did not provide one.
None of the surveyed tests was designed to be completed within a set time, although a timer was identified in one test, where it served to check for random or careless responding.
Among the different attributes, personality was the most popular one measured. The majority of the personality measures were built on the Big Five model of personality identified by Costa and McCrae (1985). While some retained the original five factors, about half of the surveyed tests restructured the factor composition based on the results of factor analysis or other theoretical support; for example, one test split the factor of conscientiousness into ‘Industriousness’ and ‘Methodicalness’, while another developer combined the five-factor model with behavioural tendencies and came up with a seven-factor model. Another common observation was that, under each of the five factors, the primary factors (typically 3-5 per factor, also known as facets) were also measured, and these were actually used more commonly by test developers in report generation and interpretation. This is probably because the primary factors offer more detailed information and thus greater flexibility. Besides the Big Five model, another very popular personality model employed by test developers was Jung’s (1920) typology of personality. Two of the tests, for instance, used this theory as their entire theoretical foundation, but one employed the original categorical model while the other developed a continuum model. Rather than building on a single theory, many tests extracted personality factors from multiple personality theories, and some measured as many as 34 personality dimensions. Examples of the measured dimensions include ambition, initiative, concern for others, flexibility and energy. Nearly all of the surveyed personality tests served multiple functions, including selection, training/development needs analysis, counselling and related applications such as personal development, conflict management and team building.
Test developers further extended the applicability of personality tests to different situations by providing multiple versions of reports alongside a general personality profile.
Value, Motive and Preference
Other popular attributes measured were values, motives and preferences. Although these are three distinct attributes, we found it was common for test publishers to combine two or all three of them into one test. These tests were less commonly employed for selection and more widely used in counselling and developmental scenarios, although some were also designed for selection. For tests measuring values and motives, normative measures were found to be more common, while ipsative measures were more common among preference tests. Another related attribute measured was interest, and these tests were mainly designed as career development tools.
Other measured attributes included leadership style, team role, behavioural tendency, emotional intelligence, self-efficacy, work ethic, interpersonal communication, sales orientation, customer service orientation, learning style and even work effectiveness tendency.
Nearly all of the surveyed tests offered multiple reports, generally in narrative form alongside a graphical representation (usually bar charts) of the measured characteristics. However, one test did not employ a narrative style in its reports at all; graphical representations with a one-sentence description for each factor were used instead. Two-dimensional typology graphs and score matrices were also employed in some types of report. Some reports used different colours to represent the different dimensions measured, while others used colour to indicate extreme scores (for example, green representing high scores and red representing low scores). Colour was also frequently employed for matching test scores against a standard or an established profile, with green meaning a good match and red a poor match.
Generic Personality Profile
For all the surveyed tests, at least some form of generic personality profile was provided in the report, whether as narrative writing, a matrix of scores, two-dimensional typology graphs, bar charts or line graphs. Most commonly the personality profile consisted of a graphical representation of the test scores on the different dimensions, with a brief descriptive narrative alongside. In this generic profile the test scores were presented, usually in the form of sten scores or percentiles; raw scores were also found in some reports. About half of the surveyed tests also presented the variation of the test score in the report, and a few explained the meaning behind it. In all cases the primary dimensions measured by the test were reported in this section; secondary or higher-level composite dimensions were also frequently reported.
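As an aside, a sten (standard ten) score of the kind mentioned above can be derived from a raw score whenever the norm group’s mean and standard deviation are known. The formula (sten = 2z + 5.5, clamped to the 1-10 range) is standard, but the norm values below are invented for illustration:

```python
def to_sten(raw, norm_mean, norm_sd):
    """Convert a raw score to a sten (standard ten) score."""
    z = (raw - norm_mean) / norm_sd   # standardise against the norm group
    sten = round(z * 2 + 5.5)         # stens have mean 5.5 and SD 2
    return max(1, min(10, sten))      # clamp to the 1-10 range

# With an assumed norm-group mean of 50 and SD of 10, a raw score of 54
# gives z = 0.4 and a sten of 6:
print(to_sten(54, norm_mean=50, norm_sd=10))  # prints 6
```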
Strengths and Limitations
Strengths and limitations were other very commonly reported qualities, although we identified a few tests that did not report them. In reporting strengths and limitations, some tests referred to very specific behavioural terms, while others simply treated high or low scores on particular dimensions as strengths or limitations. A few tests that incorporated contextual factors into the reporting of strengths and limitations were also identified; these were more common in purpose-specific reports (for example, reports designed for leadership development or team building). Overall, tests tended to present information about the strengths and limitations of candidates.
Leadership, teamwork, interpersonal skills or orientation, and problem-solving orientation were found to be the most popular competencies tapped. Other competencies addressed by the surveyed tests included achievement orientation, customer service orientation, management style, decision making, planning and organization, influence and negotiation, delivery, creativity, analytic orientation, coping style and thinking style. Rather than being measured directly by the tests, these competencies were often generated from several primary dimensions of personality. They were typically written in a work context, with behavioural terms employed heavily to aid the comprehensibility of the report. Competency-based reports were also identified, with leadership-related reports appearing most often; competency-based reports for sales and managerial positions were also popular.
Interview prompts were found in some reports. These included general instructions on how to use the report to enhance the effectiveness of a follow-up interview, as well as specific suggested interview questions for a particular candidate. The number of interview prompts varied from three to more than ten suggested questions, and some reports even included the expected answers from the candidate. These interview prompts also served as a check on, or back-up for, the validity of the tests.
Training (Development) Needs
Several tests with a separate training needs or developmental report were identified. For tests without a designated report, it was surprising to find that a section outlining training needs was absent from the majority of the surveyed tests, given that most were designed to be used in training needs analysis. When present, the training needs outlined (referred to by some tests as ‘action plans’) were usually generated from the misfit aspects identified, or from areas not up to the normative standard. A simple description of the needs per se was common, and a few reports were found to provide concrete training suggestions.
Cultural fit information was identified in a few test reports. This information could cover the fit of the candidate with the organizational culture, the nature of the task and co-workers, and it existed in several forms. The more popular way to compute it was to compare the candidate’s score with the norm or an ideal profile. One test generated this information by comparing the candidate with the best performers. Yet another test presented the information from the candidate’s own perspective, stating what culture or environment would best fit the candidate.
Technical information about a test includes normative data, reliability and validity data, and the test’s development procedure. This is the most important information to make readily accessible to the public, yet unfortunately some of it was virtually absent for some of the surveyed tests. Normative data were the most frequently reported information, with reliability data following. However, evidence for validity, as well as the development procedure of the test, was absent for some tests despite the claim of being ‘scientifically validated’ in their marketing materials. For tests that provided none of the above information, their integrity is seriously in doubt.
Training requirements varied from no training at all in one extreme case (the free online test) to BPS Level B plus additional training (approximately 7 days in total). For most of the tests, 2-3 days of test-specific training was common, but this type of training would not be recognized by a different test provider. The BPS (British Psychological Society) Certificates of Competence in Occupational Testing were found to be the most widely accepted qualification among test providers. Most of the tests could be administered by a BPS Level B qualified user, but some required conversion training (1-2 days) in order to qualify as a user.
Friday, March 26th, 2010
PsychometricAssessment.com / PsyAsia International offer Free Psychometric Testing Course in Hong Kong & Singapore
Introduction to Psychometric Testing Course: Hong Kong, 4 May 2010; Singapore, 11 May 2010
PsyAsia International is Asia’s independent leader in psychometric test products and training. We choose to distribute only the world’s best, most validated psychometric assessments and offer locally relevant, world-class training in psychometrics. The Introduction to Psychometrics Workshop expands on PsyAsia’s expertise in psychometric training in Asia by offering a course geared to those with very little experience or understanding of psychometrics. Many first-time clients don’t understand why they need to be careful in their choice or use of psychometrics, and many do not understand why training is a necessity for competent test use.
This one-day course aims to provide experience-based training in an accessible and economical way. The course is easy to understand and yet covers many of the important issues to be aware of when choosing and using psychometric tests. Given our passion for Asia and our passion for the competent use of psychometric tests in Asia, PsyAsia makes no profit on this course. We charge delegates a small fee that reflects the cost of the hotel venue (including buffet lunch and refreshments) where the training is held, as well as the materials that we provide to delegates. What’s more, if you later decide to attend one of our accreditation courses in psychometrics, we will issue you with a discount code that reduces your course fee by the amount you paid for this course!
The history of psychometric testing
Comparison of psychometric tests with other modes of employee testing and assessment
The benefit of using psychometric tests in recruitment/selection, development and coaching
Reliability in psychometric testing
Validity in psychometric testing
Error in psychometric testing
Review of different aptitude, personality and values tests on the market
Questions to ask your test publisher or distributor
What next?
Note: During the workshop, delegates will create quasi-psychometric tests in groups to enable a hands-on exploration of issues such as reliability, error and validity in psychometric tests.
To view full course details and to register, please click here.
Saturday, March 20th, 2010
PsyAsia International is pleased to announce that until the end of March we will be offering free daily webinars to showcase our product range. There will be no set agenda; the agenda will be set by attendees. Please note, however, that product knowledge may differ depending on which of our consultants is running the webinar. Come along and chat with our consultants, see the Saville Consulting Wave, Identity Personality Assessment and the Apollo Profile in action, and ask questions about training and consulting options and so forth!
For times and to register, please click here…
Thursday, March 11th, 2010
PsyAsia International is pleased to once again be supporting Singapore’s Human Resource professionals as a sponsor of the Singapore Human Resources Institute’s Annual Human Resource Congress.
The Singapore HR Congress and Business-Connect Exposition 2010 will address the newly coined term HR Transmutation™ and explore the topic in deeper context. The current economic churn has made it explicitly clear that industry is not just facing another downturn: it is accompanied by impactful structural, demographic and mindset changes across industry, and top management cannot afford to respond with anything less than a complete overhaul of the system to survive and sustain. Renowned speakers and leaders from the HR fraternity will share their experiences and provide useful insights on managing paradoxes in a turbulent world.
PsyAsia’s clients are entitled to a 35% discount on the price of conference tickets. Please contact us in the first instance to avail of this special offer.
“A strong and capable HR community can be the catalyst and change agents to initiate and implement people development efforts in organisations, and help build stronger capabilities amongst our business leaders and managers.”
PM Lee Hsien Loong
11th World HR Congress 2006 organised by SHRI
PsyAsia International is Asia’s leading independent distributor of Psychometric Tests of Personality and Aptitude. From offices across Asia, including Singapore and Hong Kong, our psychologists assist the world’s top organisations and local governments to recruit, select, assess and retain the best employees. Our services are only offered by fully registered organisational psychologists with years of experience in their field. PsyAsia also offers world-class training in Psychometric Testing in Singapore, Malaysia, Hong Kong and Online.
Friday, November 20th, 2009
The Market for Psychometrics in Singapore
There are so many psychometric tests on the market in Singapore now that the task of choosing the right one is not easy. Choice is always a good thing; however, as humans we often look for easy or stereotypical ways of making those choices, and they are not always the best ones to make. For example, a client of ours was preparing for an upcoming team-building session. He approached us asking if we had a certain test that he could use in that session. Our answer was that we don’t supply that test, for various very good reasons. The client’s response was “but so many people use it”. This is a typical response. Another potential client had been looking around in Singapore for psychometric personality tests to use in his training sessions as an added benefit. He categorically advised us that he was not interested in validity and was looking for something simple and cheap! The reality here is that, at best, he is wasting his time and the time of those who will complete his tests. At worst, and most likely, his trainees will be led to believe things about themselves which frankly may not be true (reliable or valid!).
Science, Psychology, Psychometrics and the Real World of Business
As busy professionals we often assume that if lots of other people are using a test it must be a good one. This is a huge mistake. Our evolution has programmed us to be seduced by glossy advertising materials and confident, friendly salespeople. On the other hand, we have a tendency to be turned off by less glossy scientific figures, statistics and perhaps psychologists such as myself who speak about the science and real value behind a test, its validity! Ultimately then, both our clients and ourselves as psychologists have problems to overcome!!
Psychologists have to be able to explain the technical properties of a test in “glossier” terms, and our clients, usually in HR and aligned professions, are invited to turn their ears our way for a little while, just long enough to grasp that there is more to a psychometric test than meets the eye!
Technical Properties of Psychometric Tests
When we talk of the technical properties of a psychometric test, we are referring to things such as its reliability and validity, as well as how it was constructed. If a test is constructed well, it will take time. Not months, often years. The test will also evolve over time, such that more and more validity data will be added to its manuals. This process is costly, hence good tests cost money.
If you come across cheap tests, that should start to ring alarm bells. It’s possible to write a few questions on a napkin in a restaurant and call it psychometric and even try to sell it. If it looks good and the questions look relevant perhaps it will sell and gain a huge following. But how reliable is that test?
In other words, can it provide consistent measurement of your candidate? If your bathroom scales gave a different reading each time you weighed yourself, you would take them back and complain that they are not reliable. Likewise with a test: you need to ensure that it consistently assesses the constructs it purports to assess. We often come across new clients who are shocked when we tell them that good personality tests often contain around 200 questions. However, buyer beware of very short tests: the longer the test, the more reliable the results (as long as it is not so long that the candidate falls asleep!).
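The link between test length and reliability can be illustrated with the Spearman-Brown prophecy formula, a standard psychometric result. The figures below are purely illustrative and do not describe any particular test:

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when a test is lengthened by `length_factor`
    times (Spearman-Brown prophecy formula)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Illustrative: doubling a test whose reliability is 0.70
# lifts the predicted reliability to roughly 0.82.
print(round(spearman_brown(0.70, 2), 2))
```

This is why a well-built 200-question personality test can be more trustworthy than a quick 20-item alternative, all else being equal.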
An unreliable test cannot be a valid test; hence reliability is a precursor to validity. However, validity is arguably the most important aspect of a test. You choose to use tests because you want them to illustrate where a candidate stands in terms of their ability or personality, or in order to predict how your candidate will perform or behave in a job. The test’s ability to meet this need is referred to as validity.
Some tests on the market are simply more valid than others. In fact, one test in the past year has proven to be more valid than all other tests it was compared with on the market! How come users stay with their current test then? Perhaps because of preference, habit, price, mass-following and so on. However, do ask yourself and your test supplier how valid your test is – this is the single most important technical property of a psychometric test!
Sometimes tests which are more valid will be more expensive, but this makes sense. If a test took a long time to develop, was developed well by a reputable publisher, and is based on well-founded theories that have been researched internationally, then surely it is worth paying the extra, as such a test will provide an excellent return on investment through its strong validity.
Training to use Psychometric Tests in Singapore
Properly developed psychometric tests require proper training to be used competently. If your test supplier requires you to undergo little or no training, this reflects poorly on the test as well as on the supplier’s understanding of psychometrics. You need to understand the concepts referred to above, as well as error in testing and how to make decisions based on test results, not to mention how to feed back results properly to candidates and decision-makers. The type of questions (i.e., forced choice versus rating scales) will also dictate how you can use the results – you need to be trained to understand this! In some parts of the world (South Africa, for example), only psychologists may use psychometric tests. Whilst this is a strict rule, it has a logical basis in how easy it is for untrained professionals to misuse tests.
Purchasing Psychometric Tests in Singapore
You may also wish to consider where you purchase your tests from, particularly in Singapore. In recent years we have seen an influx of profiteers in the industry who seek to make money but lack any depth of understanding in psychometrics or psychology at work. This will change in time as psychology in Singapore develops. For now however, be wary of this and we suggest that you only purchase psychometric tests from fully registered organisational psychologists who have a firm grounding in personality, psychometrics and psychology at work and who are answerable to professional competence and ethics boards. Many of those selling psychometric tests in Singapore are simply not answerable to anybody in terms of their conduct or competence. You can therefore not be certain that any advice they provide is relevant, up-to-date or will work in your organisation.
There are many more things to be aware of when choosing psychometric tests in Singapore. We cannot cover them all here due to space constraints. You may wish to look out for training courses in Psychometric Assessment, such as our Psychometric Assessment at Work training, which leads to the internationally recognised British Psychological Society Level A and B Certificates of Competence in Occupational Testing. Such courses will prepare you further for choosing the right test and thereby avoiding costly selection and development mistakes. Look for courses run by experts in psychometrics who are based in Singapore and hence have a strong understanding of test use aligned with local culture, laws and practice.
Note: some Singapore firms will ship in overseas trainers to run psychometric training. We suggest you avoid this training reseller model given that the facilitator is based overseas and is thus likely to lack knowledge of the Singapore business/legal and cultural environment for Psychometric Testing.
This article is Copyright PsyAsia International Pte Ltd.
It was originally written for Human Resources Magazine in Singapore
A shorter version of the article appears in the magazine’s November 2009 issue
Tuesday, July 21st, 2009
One of the first things clients want to know when choosing whom to work with when ordering psychometric tests is: “why should I choose xyz company?”
As the field of psychometrics continues to grow, overseas publishers are working hard to make inroads into local markets. Clients should therefore be wary of the expertise (or lack of it) in organisations that are distributing tests.
We firmly believe (as do publishers of high level tests such as the Saville Consulting Wave), that those in the best place to distribute psychometric tests are those who have a background in personality psychology and/or organisational psychology. In fact this premise was shared by many reputable test publishers until relatively recently.
Greed and motivation to expand market share have taken over in many cases and some test publishers have delegated test distribution to non-psychologists or those with short-course qualifications in this area.
The downsides of this are tremendous. Not only does it threaten the very integrity of the test and the industry, but it also brings to the fore concerns regarding malpractice and the like.
Registered Organisational Psychologists are registered with government bodies. They therefore report to these bodies on issues involving competence. In addition to their 6-10 years of training in psychology (i.e. as much as a medical doctor!), they are bound to undergo continuous professional development and must submit proof of this on an annual basis. This means they need to attend high-level conferences, read peer-reviewed professional and academic journals and more.
Non-psychologists of course are not subject to any of the aforementioned. In fact, many clients who have come over to us from such distributors have entertained us with stories of gross negligence and incompetence of these “salespeople” who lack expertise and passion for the subject matter. A couple of examples follow:
1. A client told us that when they contacted “******** Assessments” in Hong Kong and asked for more information on how the test achieves 95% predictive accuracy (as published on their website), they were told that this related to two things. Firstly, that the test has a sophisticated lie-detection system and so is very accurate. A psychologist will tell you this has nothing to do with predictive accuracy! Predictive accuracy (or validity) is about using test scores to predict work performance or something similar.
This same client was then told:
“The second form of predictive accuracy is construct validation which relates to the job prediction score”.
Again, a psychologist would point out that this salesperson is confused: construct validity and predictive validity are two different forms of validity. Most importantly, though, no psychometric test is 95% predictive! Psychologists know this, and if they claimed any different they would be reported to their board and struck off!
Unfortunately, at no time was this client provided with hard-data or evidence that this test (which is based on a theory that has not been peer-reviewed and has not been independently tested in Asia or Australia) actually predicts meaningful workplace behaviours and performance.
2. Another client told us how they contacted a non-psychologist distributor of another test brand in Singapore. They asked for information about the impact of dyslexia on aptitude test scores, and also wanted a comparison between certain tests within that brand and those of the competition. The distributor had no idea there and then, and said he would need to go away and find out. A psychologist would not need to do this: unless the client is asking about an obscure test, psychologists are trained to have the answers.
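Returning to the “95% predictive” claim in the first example above, a quick calculation shows why no psychologist would make it. Predictive validity is reported as a correlation (r) between test scores and job performance, and squaring that correlation gives the share of performance variance the test actually explains. The figures below are illustrative only:

```python
def variance_explained(r: float) -> float:
    """Share of criterion (e.g. job performance) variance accounted for
    by a validity coefficient r -- simply r squared."""
    return r * r

# Illustrative validity coefficients, not taken from any real test:
# even an excellent r = 0.50 explains only a quarter of performance.
for r in (0.30, 0.50, 0.95):
    print(f"validity r = {r:.2f} -> variance explained = {variance_explained(r):.0%}")
```

A correlation of 0.95 between a test and job performance (the level a “95% predictive” claim implies) has simply never been demonstrated for any psychometric instrument.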
As we know, there are many things to consider when choosing the right psychometric test: reliability, validity, norm groups, standard error of measurement, cost versus validity (ROI), report options, online assessment options and so on. This short article has added to that list by suggesting that the background and currency of the people in the distributorship are also important.
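One item on that list, the standard error of measurement (SEM), is easy to sketch. It combines a test’s score spread and its reliability into the typical error band around an individual’s observed score. The scale and reliability figures below are illustrative only:

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """SEM = SD * sqrt(1 - reliability): the typical error band
    around a single candidate's observed score."""
    return sd * math.sqrt(1 - reliability)

# Illustrative: on a T-score scale (SD = 10), a test with
# reliability 0.90 still carries roughly +/- 3 points of error.
print(round(standard_error_of_measurement(10, 0.90), 2))
```

This is one reason trained users never treat a single score as an exact measurement, and why training covers error in testing before decision-making.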
To cast such doubts aside, it is best to work with distributors who have demonstrated their passion for psychology and psychometrics through years of training in the subject along with years of experience. Choose those holding full registration as psychologists with government/professional bodies, who must undergo professional development on a continual basis. Purchasing psychometric tests from non-psychologists may amount to asking a private pilot to fly a jumbo jet. They may be able to get it off the ground (“may”!), but what happens when they encounter problems, or when they try to land?!