
An Interview with Cecil Reynolds about the RIAS-2.

Dr. Reynolds is currently Editor-in-Chief of Archives of Scientific Psychology, Associate Editor of the Journal of Pediatric Neuropsychology, and Emeritus Professor of Educational Psychology, Professor of Neuroscience, and Distinguished Research Scholar at Texas A & M. The Reynolds Intellectual Assessment Scales (RIAS) were published in 2003 as a challenger to the traditional intelligence scales, such as the various Wechsler scales and the Binet scales. The RIAS model of brief intelligence subtests that produce highly reliable scores in key domains of intelligence in a fraction of the time required by traditional tests was foreign to many clinicians and educators. However, the RIAS model was gradually accepted and then even mimicked by some other popular tests in their subsequent revisions. In late 2014, the RIAS-2 was published, extending the original RIAS model into other domains of cognitive function.

Below are responses to questions about the RIAS and RIAS-2 and their applicability to modern assessment from Dr. Cecil R. Reynolds, author, along with Dr. Randy W. Kamphaus, of both editions of the RIAS.

NAJP: Dr. Reynolds, it seems like just yesterday that I reviewed the RIAS, and now it has been revised. What exactly was the length of time between the RIAS and the RIAS-2, and why did you decide to make these revisions or to re-standardize it?

CRR: It was approximately 12 years between the publication of RIAS (2003) and RIAS-2 (2015). Tests of different types are revised for a variety of reasons. Of the various forms of psychological and educational testing, intelligence tests and achievement tests require the most frequent revision. First, the population demographics of the USA are changing in dynamic ways, and if the standardization or reference sample to which we refer scores to obtain meaningful IQs is to remain relevant, it must be updated to reflect the population changes of the USA. This needs to happen every 10-15 years. Second, the Flynn Effect is operative on intelligence tests, and there is some debate over the degree of the current Flynn Effect on IQ results and the magnitude of any correction factor that should be applied based on the age of the norming or reference sample. Keeping the normative sample current greatly reduces the impact of making the wrong choice for the Flynn correction, or even the need to apply it at all.
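As a rough illustration of the Flynn correction issue raised here, the sketch below applies the commonly cited estimate of roughly 0.3 IQ points of norm inflation per year since the norming sample was collected; the function name and the example values are hypothetical and are not taken from the RIAS-2 manual.

# Illustrative Flynn-effect adjustment: obtained IQs are assumed to inflate
# by about 0.3 points per year since the norming sample was collected.
# The 0.3 figure is a commonly cited estimate, not a value from the RIAS-2.

def flynn_adjusted_iq(obtained_iq: float,
                      years_since_norming: float,
                      points_per_year: float = 0.3) -> float:
    """Return an obtained IQ corrected for estimated norm obsolescence."""
    return obtained_iq - points_per_year * years_since_norming

# Example: a score of 100 on a test normed 12 years ago
print(flynn_adjusted_iq(100, 12))  # -> approximately 96.4

A current normative sample makes an adjustment of this size, and the debate over whether to apply it, largely moot.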

Test items also become dated. Item content that worked well in a test 10 or more years ago may not work so well now. This issue has always been known, but the age of the internet, social media, rapid change in our knowledge base, and the immediacy of access to information have made it more salient. Item content has to be retested and validated, and it is not uncommon for more extensive item revision to be required over a 10-15 year period now than in the past.

The conceptual views of intelligence are also a moving target. One must be careful not to be faddish in making conceptual changes to a measure of intelligence, but one must also stay current with the mainstream science of intelligence and intelligence testing. This requires periodically evaluating the structure and content of IQ measures as well, and possibly making subtle and/or substantial changes.

Lastly, as an author, you always (not a word I use often) see elements of the test and its content and design that could have been done better. Such works are never perfect, and revision gives you the opportunity to improve on your initial work.

NAJP: The RIAS-2 has two supplemental speeded processing subtests (one verbal, one nonverbal) to create a Speeded Processing Index. Why is this Index needed in today's day and age?

CRR: The simple answer is because clinicians want it included in measures of intelligence. The single most frequent request we received from users of the RIAS was to add a measure of processing speed, but to add one in the context of the RIAS model, which to us meant having both verbal and nonverbal measures of processing speed and removing as much motor dependence as possible from the tasks so as to reduce confounding with fine motor and related motor issues when measuring mental functions. We feel we accomplished this by using the new item formats we devised for assessing speeded processing in both the verbal and nonverbal domains.

We do not believe simple processing speed should be included in the calculation of IQ, due to its simplicity and the very low g-loadings of such tasks on all intelligence tests, and recommend against it in our chapter on interpretation of the RIAS-2 in its professional manual (Reynolds and Kamphaus, 2015). That said, we also know there are legitimate arguments to include it, so we have a full set of reliability, validity, and normative tables to allow clinicians to do so if they view the data and concept of intelligence differently than we do.

There are several good reasons to look at processing speed when examining neuropsychological integrity. Neuropsychologists have known this for over half a century, and one of the most frequently administered of all neuropsychological tests is some version of a trail making task, where different levels of processing speed are measured, but in a motor-intense task. In the intelligence testing community, processing speed is useful when considering accommodations for students with a disability, especially when they are asking for extended time for tests or assignments. Having such an index on an intelligence test presents the best possible psychometric scenario for comparing problem-solving skill with the ability to respond accurately in a time-restricted environment.

NAJP: Decision speed and reaction time--how are they measured with the RIAS-2 and why are they important?

CRR: Decision speed is actually a component of reaction time (RT) as RT is typically measured. Most RT tasks have two components, decision time (DT) and movement time (MT). DT is the time required to evaluate the problem, solve it, and formulate a response; MT is the time required to deliver the response. Research has shown that DT is related in various ways to intelligence, but the relationship with MT is negligible in most circumstances. Most tests measure only RT, where RT = DT + MT, and so confound the intelligence-related component of the task with the motor component. In the RIAS-2 subtests looking at speed of processing, we too measure RT, but we designed the tasks to minimize MT and emphasize DT.
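A minimal sketch of the RT = DT + MT decomposition described above; the timings and task descriptions are hypothetical and are not RIAS-2 scoring values.

# Hypothetical illustration of the RT = DT + MT decomposition discussed above.
# Neither the timings nor the function comes from the RIAS-2; they simply show
# why a motor-reduced response format lets RT stand in for decision time.

def reaction_time(decision_time_ms: float, movement_time_ms: float) -> float:
    """Total reaction time is decision time plus movement time."""
    return decision_time_ms + movement_time_ms

# A motor-heavy task (e.g., drawing a line to the answer) inflates RT with MT:
print(reaction_time(decision_time_ms=650, movement_time_ms=400))  # 1050 ms

# A motor-reduced task (e.g., a single key press) leaves RT dominated by DT,
# which is the component related to intelligence:
print(reaction_time(decision_time_ms=650, movement_time_ms=80))   # 730 ms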

However, partialling DT and MT as components of RT can be done, but it is difficult, time consuming, and requires specialized software and carefully calibrated equipment. At this point it is unclear whether a more accurate assessment and separation of DT and MT from RT is incrementally useful in the context of clinical applications of intelligence tests, relative to the costs of doing so--clearly it is in the context of research work, but in practical assessment environments, not so much. However, it would not surprise me if this were to change as the technology becomes available to make the distinction easy, accurate, and inexpensive.

NAJP: Were harder and easier items added?

CRR: At the time the revision was initiated, the RIAS was being used heavily in gifted and talented identification programs and was also seeing increased application with more severely impaired populations (we often received feedback that the game-like format and rapid progression of the testing on the RIAS made it popular with these kids, who are often difficult to engage). To accommodate these populations and provide more reliable scores at the extremes of scoring (especially more than 3 SDs from the mean, where accurate assessment is difficult), we added items that were easier and also some items that were harder, thus extending the range and the reliability of these scores.

NAJP: How can the RIAS-2 be used in terms of an ability-achievement discrepancy, and with which achievement tests?

CRR: If using an aptitude-achievement discrepancy model, the RIAS-2 can certainly be used as the aptitude measure, and for reasons I will note elsewhere, we believe it is the best such measure to use. If you are using the simple difference model of a severe discrepancy (which I do not recommend), any achievement measure scaled to a common mean and having acceptable psychometric characteristics is fine for the comparison. However, if using a regression-based model, which is far more sound and sensible, a sound estimate of the correlation between the two measures is necessary. We do provide tables for use of the Academic Achievement Battery (AAB) in comparison with the RIAS-2, since they have overlapping norms, but any psychometrically sound achievement measure whose correlation with the RIAS-2 can be estimated with some accuracy will work.
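A minimal sketch of the regression-based comparison described above, assuming a hypothetical correlation and hypothetical scores; in practice the RIAS-2/AAB tables in the manual would be used rather than hand calculation.

# Sketch of a regression-based ability-achievement comparison, as opposed to a
# simple difference model. The correlation and scores below are hypothetical.

def predicted_achievement(iq: float, r_iq_ach: float,
                          mean: float = 100.0) -> float:
    """Achievement predicted from IQ via regression toward the mean."""
    return mean + r_iq_ach * (iq - mean)

iq, achievement, r = 120.0, 95.0, 0.60       # hypothetical values
predicted = predicted_achievement(iq, r)     # 100 + 0.60 * 20 = 112
discrepancy = achievement - predicted        # 95 - 112 = -17

# A simple-difference model would report 95 - 120 = -25 and overstate the
# discrepancy for above-average IQs; the regression model accounts for the
# imperfect correlation between the two measures.
print(predicted, discrepancy)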

NAJP: Let's talk about memory skills---what specific realms of memory are assessed and how are they assessed?

CRR: With the exception of adding harder and easier items and some other item revisions, the memory section of the RIAS-2 is unchanged from the RIAS. The RIAS-2 assesses short term verbal and nonverbal memory via memory for stories and visual memory, respectively. Short term memory has been a component of intelligence tests since the original French Binet early in the last century. It is useful in clinical assessment to have memory testing that is co-normed with problem-solving or intelligence tests in order to be able to compare these two areas accurately. The short term memory tasks on the RIAS and RIAS-2 were chosen for their predictive ability concerning other areas of memory as well as academic achievement. Memory for stories is a very good predictor not only of other short term verbal memory skills but also of reading comprehension scores, which it predicts about as well as most full scale intelligence measures do. As with processing speed, however, we do not recommend that short term memory be included in the calculation of the intelligence indexes, but rather that it be viewed separately. Nevertheless, as we do with processing speed, we provide full data on reliability and validity and full sets of norms tables for total battery scores that allow the user to incorporate the memory subtests accurately into the intelligence indexes should they disagree with our recommendation.

NAJP: Exactly how many subtests are there? And how many are timed?

CRR: The RIAS-2 consists of eight subtests. The two nonverbal intelligence subtests have time limits but are designed as power tests, such that 95% or more of the people in the standardization sample would not have earned a higher scaled score if given more time. On nonverbal tasks it is important, from the standpoint of efficacy, to have a time limit, since some examinees will persist to no avail on a task for a very lengthy time. Setting the time limits generously, however, and using the 95% rule brings us under the rubric of a power test, not a speeded test. The processing speed tasks, both verbal and nonverbal, are obviously speeded, timed tasks, as they are designed to assess the speed with which one can solve very simple problems that, given unlimited time, almost everyone would solve correctly.
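A minimal sketch of the 95% rule described above; the counts are hypothetical and are not drawn from the RIAS-2 standardization data.

# Illustrative check of the "95% rule": a generously timed task qualifies as a
# power test if at least 95% of the standardization sample would not have
# earned a higher score with unlimited time. The counts are hypothetical.

def is_power_test(n_not_improved: int, n_total: int,
                  threshold: float = 0.95) -> bool:
    """True if the share of examinees whose score would not improve with
    extra time meets or exceeds the threshold."""
    return n_not_improved / n_total >= threshold

print(is_power_test(n_not_improved=1920, n_total=2000))  # 0.96 >= 0.95 -> True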

NAJP: Obviously, there will be questions about validity and reliability. What general statements can you make in this regard?

CRR: The RIAS was used for many years, and the RIAS-2 is improved in many ways. It remains user friendly, but test score reliability and the validity of the recommended interpretations of the various RIAS-2 indexes were at the forefront of our thinking. We organized the validity work around the current conceptualizations of validity and validity evidence as articulated in the 2014 Standards for Educational and Psychological Testing. As you know, the RIAS was the first test of intelligence to use this modern organizational structure as articulated in the 1999 Standards. The evidence base for interpreting RIAS-2 scores, then, is modern and extensive, as expressed in the numerous research studies reported in the professional manual (Reynolds & Kamphaus, 2015). You will find in the manual, among other studies, correlations with other major intelligence scales such as recent versions of the WPPSI, WISC, and WAIS, multiple achievement measures and memory tests, contrasted clinical groups, and exploratory and confirmatory factor analytic modeling studies. Subtest and Index score internal consistency reliability coefficients meet or exceed recommended standards for tests used for individual diagnosis and decision-making.

Median subtest score reliability coefficients across age range from a low of .81 to a high of .99, and the composite scores all have median internal consistency reliability coefficients across age at or above .90, comparable to those of tests of significantly greater length.

The RIAS-2 provides co-normed verbal and nonverbal processing speed tasks that are greatly motor-reduced, providing assessment of mental control, attention, and simple processing speed. All data are collected quickly and in formats most children and adolescents find engaging and fast moving, enhancing rapport and cooperation. Given these characteristics, the RIAS-2 is highly compatible with PSW (Profile of Strengths and Weaknesses) models and very efficient in such models, since nothing goes to waste when applying the RIAS-2 in them.

NAJP: What correlations have been noted through the years, first with the RIAS and now with the RIAS-2, between intellectual abilities and success in the public-school system? Any correlations with success in the university setting?

CRR: It is interesting that you should frame this question in terms of "success." And, in that case, the answer is "no," not only for the versions of the RIAS, but for all intelligence tests. Intelligence predicts academic performance, job performance, and completion rates in many vocational training programs--but "success," no. Success is defined in a myriad of ways by different people. A recent lay paper criticized intelligence tests as lacking utility because they do not predict income as an adult, and equated income with life success.

Many of the best and brightest, most highly intelligent people in the USA do not seek to maximize income as a measure of their life success--they have other goals in life that I and many others consider superior. An example I often use is a board certified family practitioner I know well in a small Texas town. He is a former USAF flight surgeon, has been boarded in several medical specialties, and is clearly capable of earning far more money than he does now.

However, his dream since being a young boy was to become "the town doctor," and to heal people and make their lives better. He has been tremendously successful in this regard--he is highly respected, but more importantly loved by his community and his patients while he persists in practicing in one of the lowest paid medical specialties. He may be the most successful person I know but is far from the wealthiest. IQ tests will never predict such success, nor should they even attempt it. Does that render them useless--hardly. They have many other clinical applications as well as educational ones.

NAJP: Has the RIAS-2 been gaining popularity in the field relative to several of the more traditional formal cognitive assessment instruments?

CRR: Clearly, it has. While I have a contractual prohibition on revealing sales figures and usage numbers, which I see every 90 days, I can tell you that RIAS-2 sales in its first year were nearly 500% above the last year of RIAS sales, which is remarkable even for a revision of an instrument. Sales have settled in at around 200% or a little higher, and usage has increased in like numbers--and this is for a test that was far more successful in its first edition than we predicted it would be.

NAJP: How widespread do you predict the use of the RIAS-2 will become in the next several years?

CRR: Given the requests for training I receive and the number of in-service trainings I have been doing at various state and national conferences, along with individual school districts, I am certain it will continue to grow. People who use the RIAS-2 like it and find it helpful--that portends well for its future and for the model of test design Randy and I adopted. It is a very practical assessment device that does not sacrifice psychometrics or other quality features.

NAJP: What are some of the features that make the RIAS-2 more appealing to school, clinical, and related psychologists and other professionals in the field of cognitive assessment?

CRR: The RIAS-2 provides the practitioner with a practical measurement of intelligence in terms of efficacies of time, direct costs, and the information needed from a measure of intelligence, i.e., reliable and valid measurement of g and its two major components, Verbal and Nonverbal, with close correspondence to gc and gf (given there are no pure measures of gf).

The tasks go by quickly and are game-like and engaging, so children and adults tend to like taking the test rather than seeing it as arduous, as some intelligence tests tend to be, making rapport and cooperation easier to obtain. The RIAS-2 provides a measurement of intelligence that has fewer confounds than other intelligence tests, to wit: reduced or eliminated dependence upon speed of performance and motor skills; fewer cultural confounds with instructions (e.g., on the intelligence subtests we never tell the child to "work as quickly as you can," a command not interpreted the same way in every culture of relevance in the USA); no dependence upon reading for measurement of IQ, as occurs in some tests; and no recommendation to include simple tasks that require no manipulation of information or problem-solving in the calculation of the IQ.

RIAS-2 provides accurate prediction of academic achievement, at least comparable to that of intellectual batteries two to three times its length using tasks that employ familiar, common concepts that are clear and easy to interpret.

Native born ethnic minority populations in the USA also tend to score higher on the RIAS-2 than on traditional intelligence tests for a variety of reasons, though we did work extensively, using multiple statistical approaches, to eliminate items showing DIF (differential item functioning) as a function of gender or ethnicity, or that were offensive or ambiguous (expert minority reviews were also done at the earliest stages of item development).

The low cost of the test and the record forms, along with the reduced amount of time required for test administration, scoring, and interpretation, also leaves the practitioner with ample opportunity to do more direct follow-up testing in other areas of concern, as opposed to tests that require hours for administration, scoring, and interpretation.

NAJP: Given the significant number of brain injuries each year in the United States alone, what is your prediction for the clinical use of the RIAS-2 when assessing cognitive abilities of patients who will need rehabilitative therapies?

CRR: Given the lack of motor and speed confounds, as well as the rapidly moving tasks on the RIAS-2, which lower attentional demands, the RIAS-2 is a best choice for assessing the intelligence of brain injured patients--fewer confounds. This means that for the clinical neuropsychologist the RIAS-2 provides the best estimate of the intactness of intelligence in the patient without all of the other confounds. This in turn provides a better baseline for looking at areas of specific impairment relative to intelligence, as well as for making plans for rehabilitation and independent living. Many of my neuropsychology colleagues have recognized this and have moved to make the RIAS, and now the RIAS-2, their "go to" measure of intelligence.

NAJP: What have been some of the challenges when developing the RIAS-2 regarding culture-fair issues?

CRR: Intelligence is always expressed in a context. Culture-fair tests are highly desirable, but there will always be limits on the generalizability of test results across cultures. Most if not all modern test developers seek to develop measures that have strong generalizability, especially for native born ethnic minorities. We expended a great deal of effort to do so on the RIAS-2 development team, and employed a variety of both qualitative and quantitative methods to ensure we have done the best we can do. That said, there are simply too many micro-cultures within a country the size of the USA for us to be successful for everyone.

Lastly, culture-fairness is a moving target. Times change at an accelerating rate, and items that work well across cultures now may not 5 years from now, so we have to look for items that can also withstand the test of time. While we have rules for making such decisions, ultimately these final decisions become subjective after all of the other quantitative eliminations have been made.

NAJP: What are the challenges when developing a valid and reliable cognitive assessment for the English Language Learner (ELL) and English as a second language (ESL) examinees?

CRR: The challenges here really surround educating test users about appropriate interpretations of test scores with ELL and ESL populations. Tests simply answer questions about the examinee. An intelligence test developed and normed in the USA in English can be useful but really cannot answer questions about intelligence very well with such populations. For example, with ELL and ESL examinees, a very useful question RIAS-2 can answer is "how well does this examinee think and problem solve in English?" And, we can monitor their progress in the development of English language problem-solving skills by readministering the test periodically, but I would not interpret such scores as reflecting the examinee's intelligence.

Assessing the intelligence of such populations is very complex without instruments developed and normed for their native language and culture. Furthermore, a test that might be appropriate for making such an assessment for a native Spanish speaker from Spain versus Mexico could be quite different as would tests for measuring the intelligence of native speakers of Farsi, French, Tagalog, Afo, Hindi, Russian, or Bengali, all of which are spoken in various communities in the USA and which typically have specific acculturation factors associated with them as well.

NAJP: Something that some individuals may find of interest is the Reynolds Adaptable Intelligence Test (RAIT, 2012), which can be administered completely via computer or in an individual or group administration format using booklets and answer sheets. Can you tell us about this measure?

CRR: The RAIT is an intelligence test designed for group or individual administration via paper and pencil booklets or online. It contains subtests that assess:

1) Crystallized intelligence (Crystallized Intelligence Index, CII).

2) Fluid intelligence (Fluid Intelligence Index or FII).

3) Quantitative aptitude or intelligence (Quantitative Intelligence Index or QII).

The RAIT also yields a Total Intelligence Index (CII + FII = TII) and a Total Battery Intelligence Index (CII + FII + QII = TBII). Unlike many tests available in dual formats, the paper and pencil and online versions are fully equated. The full battery requires a total testing time of 50 minutes, but examiners can choose to administer fewer than the seven subtests.

The RAIT provides a comprehensive assessment of intelligence that is flexible and efficacious, with tasks formatted and equated for individual, group, paper and pencil, and computerized administration. It provides a rapid, reliable assessment of g along with crystallized and fluid intelligence, while offering the option of a quantitative intelligence component to add to comprehensiveness and academic relevance. The QII can be reported as a separate score or included as a component of the Total Battery Intelligence Index (TBII).

Its use has been growing in juvenile facilities and in the adult prison environment, but recently clinical practitioners have begun to pay more attention to the RAIT since it can be used to assess intelligence in a comprehensive manner without the use of the clinician's time for administration when the computer-administered option is chosen.

REFERENCES

Reynolds, C. R. (2012). Reynolds Adaptable Intelligence Test. Lutz, FL: PAR.

Reynolds, C. R., & Kamphaus, R. W. (2015). Reynolds Intellectual Assessment Scales (2nd ed.). Lutz, FL: PAR.

Cecil R. Reynolds

Texas A & M University

(Interviewed on behalf of NAJP by)

Michael F. Shaughnessy & Dan Greathouse

Eastern New Mexico University