So I was digging through my FTP space when I happened across this gem. Back when I was studying Computer Information Systems at DeVry University (go ahead, get your laugh/joke in, I’ll wait… ok, good, ready?) I was often amazed at the lack of forward thinking and the technological illiteracy of both students and faculty. I mean, I shouldn’t have been, but it was still kind of disheartening to be in a “technology school” and be surrounded by technologically inept people. In fact, I published an FAQ on setting up basic devry.edu accounts on this site that still gets a crazy number of hits every month (dropslash.com/devry).
Anywho, I ended up getting so frustrated that I converted a post I once made on this site in 2007 about the need to reassess how intelligence is quantified in the emerging networked world into an essay and submitted it as a class assignment. I used to publish my papers to my webspace and deliver them as links because I didn’t have a printer and, well, I think for the most part printing is an archaic practice. So below is that essay. It’s crazy how some of the numbers have changed in the past 4 years, especially global population.
And no, this essay didn’t fly well with the professor, but that was expected. I’m sure my not-so-subtle closing jabs didn’t help matters.
Why traditional assessment will have to change for the Information Age.
February 13, 2008
As the modern world evolves through technology, the need to redefine many accepted standards has arisen. Among these are the standards that define, quantify, and legitimize an individual’s “intelligence”. Before we reevaluate the standards that surround intelligence, though, we first need to explore some of the more common, accepted definitions of intelligence.
There are two widely accepted consensus definitions of intelligence and hundreds of individual definitions. The first of the two consensus definitions was set forth in 1995 by the American Psychological Association in their report entitled “Intelligence: Knowns and Unknowns”. It reads:
“Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person’s intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of “intelligence” are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen somewhat different definitions.”
The second consensus definition comes from an earlier report. In 1994 the Wall Street Journal published an opinion article written by psychology professor Linda Gottfredson entitled “Mainstream Science on Intelligence”. It was a list of 25 statements that claimed to uphold findings on the subject of intelligence research discussed in the 1994 book The Bell Curve by Harvard professor Richard J. Herrnstein and American Enterprise Institute political scientist Charles Murray. The article was signed by 52 professors (including Gottfredson) specializing in intelligence and related fields at the time. Its definition reads:
“A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—”catching on”, “making sense” of things, or “figuring out” what to do.”
Even though these definitions are both over ten years old, they are still the two most widely accepted consensus definitions. From them we can easily construct a simple definition of our own. In general, intelligence is an overarching term used to describe aspects of the mind that encompass a range of related abilities, such as reasoning, planning, problem solving, abstract thinking, comprehension, use of language, and learning.
While the general definitions of intelligence are broad enough to advance alongside the changes of society and culture, the means by which organizations assess intelligence have not kept pace. Technology has leapt forward by enormous, almost incomprehensible distances in just the past decade, and these advancements have had a profound effect on many accepted aspects of intelligence, especially the acquisition and retention of information, or data.
As previously mentioned, the advancement of technology has forced many accepted standards to adjust and adapt for modern times. One doesn’t have to look further than the ongoing battle between the music industry and file sharing technology advocates for a basic example of this. Copyright law has been thrust into the limelight over the past decade, with many changes made over the course of that time. The Digital Millennium Copyright Act, signed into law in 1998, is probably the most well known example of standards (in this case, law) being amended in an attempt to adapt to a digital age.
As these technologies advance, so must our definitions of ideas and standards for institutions that are drastically changed by them. Copyright law is just one example of how emerging technology has forced traditional standards to adapt and adjust to the modern, digital age. As we continue forward we will explore another established standard that has not adapted to address the role modern technology plays in our lives: the assessment of intelligence.
Attempting to recount all the ways technology has changed the lives of humans in the modern world could fill millions of pages, and even before you finished you’d have to write a million more. For the purpose of this paper we’ll take a look into modern communications technologies and how they’ve drastically changed how people access and utilize information every day.
It can be argued that modern communications technology has reduced the need for the individual to store large quantities of information in their brain. While it cannot substitute for experience, there is a vast wealth of information available to an individual at nearly any time, anywhere. As this trend continues, the need for a new standard is established; one which accounts for the individual’s ability to retrieve relevant information from a remote node, rather than just recite it from memory. This is, in computer terminology, networking or remote access. By storing information remotely and accessing it over a network when it’s needed, you save resources on the host machine. For humans, this means not having to store “trivial” information in the local brain, instead leaving it in a remote location and accessing it only when needed. The challenge here is the interface, the tools used to access the information, and it is in the past decade that we’ve seen huge advancements in this field of technology.
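The networking analogy above can be sketched in a few lines of code. This is a toy illustration only, not a description of any real system: the “remote node” is simulated with a dictionary, and every name in it is hypothetical.

```python
# The essay's analogy: rather than memorizing every fact ("local storage"),
# an individual only needs an interface for retrieving facts from a remote
# node when needed. The remote node is simulated here with a dictionary.

REMOTE_NODE = {
    "capital of mongolia": "Ulaanbaatar",
    "boiling point of water (C)": "100",
}

local_memory = {}  # the small set of facts actually committed to memory


def recall(question):
    """Answer from local memory if known; otherwise retrieve the answer
    from the remote node (the "network lookup"). Returns the answer and
    where it came from."""
    if question in local_memory:
        return local_memory[question], "local"
    return REMOTE_NODE.get(question), "remote"


print(recall("capital of mongolia"))  # found remotely, not memorized
```

The point of the sketch is the interface: `recall` behaves identically whether the fact was memorized or fetched, which is precisely why the distinction between human memory and machine memory is dissolving.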
In November 2007, global cellphone penetration reached a staggering fifty percent (Reuters, 2007). That is, half the people in the world, roughly 3.35 billion people as of February 2008 (US Census Bureau, 2008). Millions of these phones are capable of accessing the internet for information, but even the ones that are not are having a profound effect on the people that use them. According to a 2007 survey by Ian Robertson, professor of psychology at Trinity College, Dublin, two thirds of people surveyed relied on a mobile phone or electronic organizer to remember key dates and phone numbers. The same survey revealed that people below the age of 30 stored far fewer dates and numbers in their brain than those over the age of 50 (Reuters, 2007). I’m sure anyone who reads this can relate a similar instance. In a conversation about this topic with my father, a man I respect for his profound knowledge and intelligence, he revealed that his once encyclopedic knowledge of sports trivia has been rendered almost obsolete by the internet and people’s access to it. The separation of human memory and machine memory is seemingly dissolving further every day.
As humans continue to integrate these technologies into all aspects of their lives, we’re forced to adjust and re-write standards that now fail to address the new issues presented by that very technology. We’ve seen one example of standards struggling with technology in the form of copyright law. An even greater example of how standards are being rendered obsolete as humans merge with technology is the story of Oscar Pistorius. Pistorius, who had his legs amputated when he was one year old, has been training as an Olympic sprinter for most of his life. He was recently denied entry into the 2008 Olympic Games because his prosthetic legs were ruled by the International Association of Athletics Federations to be superior to natural human legs (IAAF, 2008); an appeal is expected. The prevailing opinion in our culture is that natural limbs are superior to artificial prosthetics, yet Pistorius has proven just the opposite. His “disability” is actually ability. This is precedent-setting in that an official ruling body has declared that artificial, prosthetic limbs are superior to natural ones.
This brings us back to the topic of assessment. Copyright infringement can be measured in downloads and dollars. Sprinting can be measured in meters and seconds. How is intelligence measured? With so many aspects present it seems almost impossible to accurately judge and quantify a person’s intelligence, yet we all seem to know what is “smart” and what is not.
There are many different ways to measure intelligence, but most methods are based around two approaches, each over 100 years old. The first method is based on the studies of Sir Francis Galton, an English scientist. He conducted studies from 1884 to 1890 based on psychophysical tasks, which he believed were the basis for intelligence. It’s the second method though that has become the basis for most modern intelligence tests. Developed as a child’s test in 1904 by Alfred Binet and Theodore Simon, it was brought to America from France and modified by Stanford University psychologist Lewis Terman. This became known as the Stanford-Binet test. This test produced a score called an Intelligence Quotient, or IQ. Many of today’s modern tests, such as the Wechsler Adult Intelligence Scale, score multiple IQs for different categories as well as an overall IQ.
The Intelligence Quotient, however, was originally formulated as the ratio of mental age to chronological age, multiplied by one hundred. It should be noted that few tests still utilize this method. Most of today’s IQ tests produce a result based on a statistical distribution, or bell curve. The third edition of the Wechsler Adult Intelligence Scale (WAIS III) is one of the most popular IQ tests today. It consists of fourteen categories broken into two subtests, Verbal and Performance, and the results are grouped into four indices: verbal comprehension, perceptual organization, working memory, and processing speed (Wechsler, 2008).
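The arithmetic behind the two scoring schemes just described can be shown in a few lines. The numbers below are purely illustrative, not drawn from any real test; the mean of 100 and standard deviation of 15 are the conventional values used by most modern deviation-IQ tests.

```python
# Two ways of producing an IQ score, as described above.

def ratio_iq(mental_age, chronological_age):
    """Original formulation: IQ = mental age / chronological age * 100."""
    return mental_age / chronological_age * 100


def deviation_iq(raw_score, sample_mean, sample_std_dev):
    """Modern bell-curve formulation: place the raw score on a normal
    distribution scaled to a mean of 100 and a standard deviation of 15."""
    z = (raw_score - sample_mean) / sample_std_dev
    return 100 + 15 * z


# A 10-year-old performing at a 12-year-old level scores 120.
print(ratio_iq(12, 10))            # -> 120.0
# A raw score 1.5 standard deviations above the mean scores 122.5.
print(deviation_iq(130, 100, 20))  # -> 122.5
```

The shift from the first function to the second is why the ratio formulation no longer makes sense for adults: mental age stops growing linearly, so placement on a distribution of one’s peers replaced the age ratio.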
The test is considered quite thorough, hence its acceptance as a standard means by which to measure IQ. Although the tested categories include many applicable skills as related to intelligence, none test a subject’s ability to acquire accurate information efficiently from a remote source. For example, the Verbal Information Subtest is based on the general information acquired from culture, a common example being “Who is the president of Russia?”. The test accounts for whether or not the subject knows the answer, but not his/her ability to retrieve the answer from a remote node in a satisfactory manner.
While IQ tests are not an exhaustive means of measuring intelligence, and many other styles of intelligence testing exist, they are the standards accepted by many official organizations.
With so many aspects of intelligence and so many definitions of those aspects, it’s amazing that any standards of evaluation exist at all. As mentioned, we all seem to instinctively know “smart” and “stupid”, but quantifying these aspects of humanity will never be exact. This is why it is important for standards to be malleable and adaptable. Unfortunately, it seems that once a standard is accepted by the majority of authoritative bodies it becomes very difficult to proactively alter unless the change is directly beneficial to the authoritative body. Typically an authoritative body will not actively pursue the process of standards alteration unless it perceives a direct threat, as with copyright law.
So where does this leave us? Many modern official institutions that are recognized and instituted to foster the growth of intelligence and knowledge operate on standards developed for, and from, a time long past. While the premises of many of these standards are still relevant today, their actual function in today’s modern world is sadly obsolete. Fortunately, some authoritative institutions are realizing this deficiency and attempting to modernize and adapt. In 1999 the National Research Council (NRC) published the results of a two-year study, requested by the National Science Foundation (NSF), on information technology literacy. The report, entitled “Being Fluent with Information Technology”, stressed that fluency in information technology (FIT) is a synthesis of knowledge rather than just a display of skills. In the IT journal Educause, Anne Moore (2007) writes about the findings of the report as well as the need to rethink the approach to teaching, learning, technology literacy, and performance assessment.
Another example of authoritative bodies attempting to update the standards by which they assess intelligence is California State University and the Educational Testing Service’s (ETS) Information and Communication Technology Literacy Assessment. An article in USA TODAY (2005) outlines how the test was instituted at Cal State and how it is designed to measure what they call “Internet IQ”. It includes many real world simulations, such as finding a correct answer on the internet and evaluating the legitimacy of online sources. The article also mentions the growing rift between teachers and students, noting that “Of course, some of those text-messaging students are still being taught by professors whose idea of a personal data assistant is a fresh pad of Post-Its.” This is one of the core problems with changing intelligence assessment for the modern age: many of those who would be responsible for giving the assessment are not well versed enough to understand its content, much less its application.
This disconnect between teacher and student forces the generation gap to become almost exaggeratedly visible. The younger generation, having grown up with these technological advancements, follows one set of cultural standards, while the older follows another completely. These tests generally have not been developed by people actually using and understanding the technology, people who have been immersed in its potential and application. The older generation, typically through positions of authority, continues to enforce standards that become more and more obsolete as technology advances, standards like copyright law and prosthetic inclusion. Many times the authoritative body cannot even explain why it clings to obsolete standards; it has forgotten the reasons behind the rules and maintains them out of tradition rather than face a change it does not understand. It is my experience that people of authority in this position do not enjoy being questioned about it. They do not seek to understand the “why” of a standard; its existence is enough to warrant adherence. This seems counterproductive to the learning experience, but I suppose no one likes being made aware of their shortcomings.
Technology has changed the face of the world we live in. Ideally, for every step mankind takes in technological development an equal step is taken in understanding. This, unfortunately, is not the case. As time and progress march unstoppably forward though, that responsibility falls squarely on the shoulders of this generation, and the next.
Gottfredson, L. (1994, December 13). Mainstream Science on Intelligence.
The Wall Street Journal, p. A18
Herrnstein, R., Murray, C. (1994). The Bell Curve.
New York, NY: Simon and Schuster
Kurzweil, R. (2005). The Singularity is near.
New York, NY: Viking Penguin.
Litman, J. (2001). Digital copyright.
Amherst, NY: Prometheus Books.
Logan, J. (2008). iGeneration: Shuffling toward the future.
New York, NY: Penguin Global.
McHugh, J. (2007, March). Blade Runner.
WIRED, 15-03, 136-141, 179
Neisser, U., Boodoo, G., Bouchard, T., Boykin, A., Brody, N., Ceci, S., et al.
(1996). Intelligence: Knowns and Unknowns.
American Psychologist, 51, 77-102
Reuters. (2007, November 29). Global cellphone penetration reaches 50 pct.
Reuters UK. Retrieved on February 15, 2008 from http://investing.reuters.co.uk/news/articleinvesting.aspx?type=media&storyID=nL29172095.