How ASWB Develops Its Licensing Exam Questions

This long post traces the path by which the Association of Social Work Boards (ASWB) develops the questions on its licensing exams. It is long mostly because ASWB conceals relevant information; I had to reconstruct the picture, piece by painstaking piece, from scattered and sometimes confusing sources.

For the reader who does not have time to read the entire text, I would recommend reading the Summary (immediately below) and the From Content Outlines to Exam Questions section. Time permitting, it might also be interesting to search this post for the word “criminal,” and to look at the final two sections.


ASWB Hides the 2010 Practice Analysis
Reconstructing the 2010 Practice Analysis
Scrutinizing the Content Outlines
How the Content Outlines Were Developed
From Content Outlines to Exam Questions
ASWB’s Item Writers
Item Writers’ Areas of Expertise
Pretest Questions
Examination Committee
Pearson VUE: The Gorilla in the Living Room
Culmination of the Process: A Sample Exam Question



There seem to be two major parts to ASWB’s licensing exam operation: the part that ASWB does itself, and the part that it farms out to Pearson VUE. ASWB’s part consists of a display of effort that does not really amount to much. Pearson’s part is to handle much of the business, doing most of what is needed to produce ASWB’s $1.6 million annual net profit and to pay its executive director’s inflated salary.

Pearson’s contribution — its sophisticated statistical analyses, its collection of test fees, its creation and operation of test centers nationwide, and so forth — is outlined in the last section of this post. I was not able to provide more detail because ASWB does not provide much detail. It appears that ASWB tries to downplay Pearson’s role while drawing attention to its own. One has to wonder whether this seeming distortion is driven by a concern that people might question ASWB’s profits and salaries if it became clear how much of the licensing exam work is done by Pearson.

There appear to be two principal reasons why ASWB’s contribution does not amount to much. One reason is disorganization. As discussed in the early sections of this post, every seven years or so, ASWB conducts a large survey of practicing social workers. The alleged purpose of this survey is to give ASWB an impression of what social workers actually do, so as to inform ASWB’s test construction activities. Unfortunately, the survey does not seem to achieve that objective. For one thing, the survey appears poorly conceived. Contrary to ASWB’s claims, it does not appear to yield an accurate picture of the social work profession. Moreover, based on available information, it is not clear that it would matter even if it were well-done, because its results are not shared with the social work profession, nor do the results seem to play an effective role in ASWB’s question-development processes.

One might ask whether the real purpose of ASWB’s survey, and of its unnecessary complexity and disorganization, is simply to create an impression that ASWB is adding value by conducting original research. Otherwise, a practical person might ask why ASWB would go to all this trouble in order to produce data that are inferior to the information freely provided and frequently updated by the O*NET database in cooperation with the U.S. Department of Labor.

Regardless of what is really happening in social work practice across America, it appears that ASWB’s Content Outlines, listing the subjects to be tested in ASWB’s exams, are mostly drawn from previously generated materials and judgments by a few key personnel. But then, in another flawed handoff, it appears that those outlines do not actually drive the selection of exam Item Writers and the subjects they address. The entire convoluted process, seeking to identify and test core topics in social work practice, appears to yield jumbled results inferior to what might be achieved by earnest collaboration among a small number of qualified professors and practitioners, steered by relevant literature.

The other principal reason why ASWB’s contribution does not amount to much is simply that what it produces is not very substantial. The output of the process is a set of about 1,650 raw (i.e., untested) exam questions per year. That amounts to an average output of about six or seven questions per weekday. It is perhaps what one might expect from one social work PhD working part-time; crowdsourcing would probably provide a superior quantity and quality of questions at a significantly lower price. About 70% of those 1,650 questions survive review by a subgroup of five or six reviewers, meeting four times a year. The surviving questions are further whittled down as Pearson goes to work: statistics demonstrate that numerous questions are unusable. Finally, many of the questions that ASWB approves, and on which test-takers are actually examined, may be of poor quality.
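The throughput arithmetic above can be checked in a few lines. The 1,650-questions-per-year and 70% survival figures come from the sources cited in this post; the 250-weekday working year is my own assumption, used only to produce the per-weekday average.

```python
# Back-of-envelope check of ASWB's question pipeline.
# Figures from this post: ~1,650 raw questions/year, ~70% surviving
# Examination Committee review. The 250-weekday year is an assumption
# (52 weeks x 5 days, less holidays), not an ASWB figure.

RAW_PER_YEAR = 1650
WEEKDAYS = 250
SURVIVAL_RATE = 0.70    # share surviving committee review

per_weekday = RAW_PER_YEAR / WEEKDAYS
survivors = round(RAW_PER_YEAR * SURVIVAL_RATE)

print(f"raw questions per weekday: {per_weekday:.1f}")  # 6.6
print(f"questions surviving review: {survivors}")       # 1155
```

That is, the entire national exam-development apparatus yields roughly six or seven raw questions per working day, of which only about 1,155 per year survive the committee stage.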


ASWB develops standardized exams that most North American jurisdictions require as a condition of becoming licensed to practice social work. In a separate post, I critique those exams. This post explores the process by which ASWB develops the questions that appear on those exams.

Among the several ASWB exams, the Masters and Clinical exams are by far the most frequently taken. This post focuses on the Masters exam, but much of what is said here applies to the Clinical and other ASWB exams as well. I chose to focus on the Masters exam even though clinical licensure is the ultimate goal of many who pursue master’s degrees in social work (MSW). For a variety of reasons, discussed elsewhere, many MSW graduates will never get to the point of taking the Clinical exam. The Masters exam tends to be the principal barrier between the career steps of finishing the MSW and becoming legally permitted to call oneself a social worker.

ASWB provides a description of the process by which it generates exam questions. As of this writing, that description reads as follows:

The process begins with the practice analysis, a major survey of thousands of practicing social workers. The results of this survey give ASWB a highly accurate picture of social work practice and help ASWB establish the categories of exams offered. The results of the practice analysis are content outlines — blueprints for the licensing exams. . . .

Questions on the exam are written by practicing social workers, a group of individuals who are selected to reflect diversity in practice setting, ethnicity, race, and geography. Every question is then reviewed by ASWB’s Examination Committee, a group of experienced social workers who approve all questions before they appear on the exam.

Every question starts out as a “pretest” question, included among the 170 questions on the exam but not counted toward the passing score. After the pretest questions [have] been answered often enough to provide statistically significant data, they are evaluated for difficulty as well as for signs of bias. Only after this statistical review is completed can a question become part of the bank of scored items on the exams.

The following sections of this post discuss various elements within that description, including the practice analysis, the content outlines, what ASWB calls the “Item Writers” (i.e., those who write the questions), the Examination Committee, and the statistical testing process.
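For readers curious what the “statistical review” of pretest questions might involve: ASWB does not disclose its methods (large testing vendors often use item response theory models), so the sketch below, built on invented toy data, is illustrative only and not ASWB’s confirmed procedure. It computes two standard item-analysis statistics from classical test theory: item difficulty (the share of examinees answering correctly) and the point-biserial correlation, which flags items that stronger examinees tend to miss.

```python
# Minimal item-analysis sketch (classical test theory). Illustrative only;
# ASWB does not publish its actual statistical procedures.
from statistics import mean, pstdev

def item_difficulty(responses):
    """Proportion of examinees answering the item correctly (1 = correct)."""
    return mean(responses)

def point_biserial(item, totals):
    """Correlation between success on this item and overall exam score.
    Low or negative values flag items that may be flawed or biased."""
    cov = mean(i * t for i, t in zip(item, totals)) - mean(item) * mean(totals)
    return cov / (pstdev(item) * pstdev(totals))

# Invented toy data: eight examinees, one pretest item.
item = [1, 1, 0, 1, 0, 1, 1, 0]                    # 1 = answered correctly
totals = [150, 140, 90, 135, 100, 145, 130, 95]    # overall exam scores

p = item_difficulty(item)         # 0.625: a moderately easy item
r = point_biserial(item, totals)  # strongly positive: good discrimination
```

In practice a vendor would run screens like these over hundreds of responses per pretest item before promoting it to the scored bank; the principle, though, is no more mysterious than this.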

ASWB Hides the 2010 Practice Analysis

In 2010, ASWB prepared a Practice Analysis of the type mentioned in the foregoing quote. Unfortunately, at this writing, details were difficult to find.

ASWB did offer a Summary Report on the 2010 Practice Analysis. The Summary Report (p. 3) stated that the full Practice Analysis was “an extensive report” — that, in fact, ASWB had published it “as a printed book and as an electronic document at,” both bearing the title, Analysis of the Practice of Social Work, 2010. It was apparently a fairly detailed and extensive document: ASWB stated that its Practice Analysis Task Force (2010, p. 1) had been at work for “about two years.”

Sometime in or after 2011, ASWB ceased to offer the printed version of its 2010 Practice Analysis on its website. At this writing in July 2014, searches on Amazon and elsewhere turned up no copies. In fact, a search found only two copies among WorldCat’s 72,000 member libraries. Surely, I thought, the schools of social work at places like Columbia would have obtained copies of such a fundamental work; but apparently not.

Well, there was always the electronic version. But as it turned out, ASWB had removed that from its website as well. Multiple searches (on Google, Yahoo!, Bing, and Dogpile) yielded not a single copy of the book anywhere. Many websites (e.g., Docs-Archive, FreeDocLib) did offer links to PDF copies of the book, but upon closer examination all reported that they were no longer able to find the electronic version at its original location on ASWB’s website.

This was puzzling. Why would ASWB go to the trouble of producing a long report, publish it in book form, advertise that book in the Summary Report and elsewhere, and then completely remove all forms of that book from sight? ASWB’s policy manual (2012, sec. 7, p. 70) required practice analysis reports to be preserved “indefinitely.” Apparently the book does still exist, in ASWB’s offices; apparently ASWB just does not want the public to see it anymore.

According to its Annual Report (2012, p. 10), ASWB aimed to conduct a new Practice Analysis every six to eight years. (According to its policy manual (2012, sec. 2, p. 5), it would be every five to seven years; ASWB’s (2011, p. 9) Exam Blue Book says it is every five to ten years.) The one before 2010 had been done in 2003. ASWB had handled the report of that 2003 Practice Analysis very differently. Marson et al. (2010, p. 99) indicated that the 2003 report was still available as of 2010. Indeed, at this writing, it still appeared possible to find a copy of the 2003 report.

ASWB does not seem to have intended, originally, to hide the 2010 Practice Analysis. Along with the reference to that Practice Analysis in the Summary Report (above), an ASWB press release dated September 1, 2010 confirmed that there was a new “practice analysis final report” and notified readers that this report “can be downloaded in its entirety at” (see also Blue Book p. 9). It may have remained available for a year or more. Campbell (2011) commented on it and cited the specific ASWB webpage from which she had retrieved it; but, again, at this writing there was no longer anything there.

The appearance and then disappearance of a book-length report would be unusual by itself, when the party having such an apparent change of heart is one of the profession’s major organizations. But when that book describes the preparation of a research study forming the basis for licensing exams at the very heart of the profession, the suppression of that information is not just odd; it is unethical. The research that went into this process would be of enormous value, assuming it was done well, for everything from the definition of social work to the discussion of what an MSW degree should try to achieve.

On the other hand, if the research was not done well, that fact too must be shared with the profession, in light of ASWB’s role and the purposes for which it continues to use that research, and steps must be taken to rectify the problem and minimize any damage that it may have caused. If the research resulted in the creation of licensing exams that do not measure what they purport to measure, state licensing authorities should be notified, alternate licensing arrangements should be made, and corrective steps should be taken on behalf of exam-takers who may have been wrongly flunked.

At this writing, ASWB has offered no explanation for its concealment of the 2010 Practice Analysis. Removing it from public availability seems to indicate that it was in some way flawed or embarrassing. Putting it out there and then removing it suggests that ASWB itself did not initially notice any errors, and thus incorporated those errors into its exam construction processes — but then, upon noticing problems, wanted to keep others from seeing them.

It may be appropriate to suspend the use of ASWB licensing exams while problems in the exams are rectified, if circumstances suggest that route. But continuing to make millions of dollars from those exams, while suppressing information suggesting that the exams had been sold to state licensing authorities on the basis of false information — well, if that is the situation, that would seem likely to entail criminal liability. There would presumably also be civil liability if people were being defrauded of hundreds of dollars each, and in many cases delayed or completely prevented from obtaining work in the field, due to an exam development scheme that proved to be deceptive or otherwise substantially bogus.

This is not the only instance in which ASWB has removed or withheld information; concealment of the 2010 Practice Analysis is simply another installment in a disturbing pattern. That pattern, and the foregoing remark about the unethicality of such behavior, are more fully developed in a separate post.

Reconstructing the 2010 Practice Analysis

Without the 2010 Practice Analysis, there seems to be no way to gain a clear understanding of how ASWB has developed its exams in their current form. Therefore, this post pieces together an imperfect sense of the situation, using alternate sources of information. The 2010 Summary Report mentioned above, briefly discussing the full Practice Analysis, is one alternate source; the following paragraphs cite others. Unfortunately, these sources leave many questions unanswered.

According to the Summary Report, the Practice Analysis was undertaken “for the purpose of finding out what social workers do in their jobs, how frequently they do it, the importance of each task, and whether it is necessary to be able to do the task at the time of licensure and a first, entry-level job.” In effect, ASWB claimed to use the Practice Analysis to define the social work profession in terms of actual practice.

Of course, no research is done in a vacuum. An attempt to figure out what hundreds of thousands of social workers do, like any other large research project, must be informed by knowledge of the scholarly literature — by awareness, among other things, of what was done right and wrong in previous research efforts. You don’t want to try to reinvent the wheel. The topics just mentioned (e.g., what an MSW degree should achieve) have been debated and studied at length, as demonstrated in the web links provided above. So there are preliminary questions, unanswerable as long as ASWB withholds information about its research: did ASWB conceive the issues intelligently, as distinct from launching forth on a mission that its top executives might find personally interesting or profitable? Did they take account of relevant research, as distinct from wasting time on inquiries that have already proved pointless or misleading?

And, really, the questions just go on from there. Who designed the research — were these experienced researchers, applying current learning in the study of social work and other professions, or did ASWB personnel just seek a substantial repeat of the approach used in 2003? Given ASWB’s focus on accumulating money, did the researchers have the budget they needed for such a massive undertaking, or did they have to cut corners? What protections were in place to ensure that people didn’t just fudge the numbers in order to get on with it? Stranger things have happened, when people have decided to cook up a study to convey the impression that they are doing scientific work.

ASWB’s 2010 Summary Report goes on to say that the 2010 Practice Analysis was “a combination” of “the judgment” of 21 “subject matter experts” (SMEs) and data produced by surveying thousands of practicing social workers. The “combination” of the two was brought about by “measurement scientists.” This appears to mean that ASWB began by choosing experts in as many as 21 different topical areas. But how did ASWB decide that those areas were important — was that not the purpose of the survey, to find out what areas of practice are actually significant in social work today? If you choose a subject-matter expert in skateboarding injuries, and it turns out that no social workers are actually practicing in that area, why would you “combine” the “judgment” of that expert into your social work practice exams?

What appears to have happened from the outset — what will crop up again below; what I have seen in other social work contexts — is that the traditional social work types wanted to give an impression of reaching out to the masses, but they also wanted to make sure the masses said and did what they were supposed to, in order to position the social worker in the desired light. That would be poor science. If we’re going to survey practitioners, let’s do that. If we’re going to follow the judgment of SMEs, let’s do that. But as soon as you bring them both into the mix, it becomes a question of who is arbitrating between them. Practitioners say X; SMEs say Y; and then some anonymous “measurement scientist” steps out of his/her role as researcher and becomes, instead, a driver of policy for the social work profession — for which s/he may be completely unqualified. Then it is a question of who chose the researcher (and the SMEs) for this project, and why — a question, that is, of whether the outcome was somewhat orchestrated by ASWB executives from the beginning.

One possibility would be that, well, social work is not presently active in the area of skateboarding injuries, but it should be. But that would be a different study. You’d get there, not by surveying thousands of social workers who are not working with skateboarders, but rather through a scholarly analysis of areas in which social workers appear to be needed but are not presently practicing. And this could be a good thing to investigate. It just might not have much to do with what MSWs need for entry into today’s profession.

The Summary Report does provide a fraction of an explanation of how the SMEs were selected. It says the group of SMEs “was carefully balanced for diversity in gender, race and ethnicity, practice setting, and geographic location.” We will have an opportunity (below) to review the credibility of a similar claim about the writers of ASWB’s exam questions. For now, notice several problems:

  • ASWB thus perpetuates the profession’s antiquated preoccupation with privileging certain categories of diversity while ignoring many others (e.g., disability, age, socioeconomic status).
  • It is not clear what “balance” in gender or race means, when analyzing a profession consisting overwhelmingly of white females.
  • ASWB seems to confuse the roles of subject-matter experts and cultural advisors. Expertise implies competence. It would disserve the profession to choose a non-expert “expert” merely to represent groups in which social work is perceived to have a deficit.
  • What you look like does not determine whether you will represent others who look like you (example: Clarence Thomas).
  • The notion of geographical diversity could entail all sorts of unfounded and contradictory beliefs — comparing, for instance, persons from Chicago and from rural Illinois: similar, because they are midwesterners? or different, because city and countryside are different?

The Summary Report goes on to say that the SMEs chose the list of “social work tasks” that the survey would inquire into. So not only were SMEs chosen for areas of expertise that ASWB assumed were matched to today’s profession, but now those alleged experts are relying on their own assumptions about what social workers do — as they develop a survey that, according to the Summary Report, “is for the purpose of finding out what social workers do.”

The available sources do not make clear what would count as a social work task, for purposes of the survey. To develop the list of tasks that would be addressed in the survey, the SMEs “began by reworking the list of social work tasks” from the 2003 Practice Analysis. They do not appear to have done terribly much reworking. ASWB’s (2011, p. 5) Exam Blue Book states that the resulting Content Outlines (below) ended up with “basically the same material” within their subheadings. The Summary Report (p. 4) concurs that the “actual content” of the Content Outlines did not change much: in the Masters exam, there was more attention to diversity, human development, and ethics, and reduced attention to research, evaluation, and service delivery. (It is no longer possible to view the older Content Outlines; ASWB has removed them from the location where scholars from the Institute of Medicine (2008) found them, and from its website altogether.)

Consistent with a profession that tends to be conservative in its ways, then, it appears that the process of designing the Practice Analysis was not very good at ejecting outdated ideas or at adding significant new material. Among other things, that process, commencing in the depths of the most severe recession since the 1930s, did not seem too sure that anything noteworthy might be taking place.

It is not quite clear what happened next. Somehow, according to the Summary Report, ASWB went from having a “list of social work tasks” to the step of circulating a survey to over 16,000 social workers. A person might assume that a survey would contain questions. How does a list of tasks become a set of questions? Based on remarks by Campbell (2011), who did have a chance to view the Final Report before ASWB yanked it from the website, it seems that, actually, there were no questions. Instead, each respondent was apparently just asked to rate 169 different social work tasks in three separate ways. The Exam Blue Book (2011, p. 9) says this:

The practice analysis survey lists a series of tasks common to social work, and then asks participants to rate how often they perform each task, how critical knowledge of the task is regardless of how often it’s performed, and whether the ability to perform this task is a necessary entry-level skill at their particular level of practice (performance, importance, and frequency).

Evidently the SMEs tried to reduce social work practice at all levels — Advanced Generalist, Clinical, Masters, Bachelors — to a set of 169 tasks. Or fewer than 169, considering that survey participants, on average, were probably going to rate some of those 169 as being relatively unimportant. This seems very strange. How could the work performed by social workers of all varieties — clinical, hospice, hospital, military, school, policy; working with children, adolescents, men, women, people with disabilities, people of different cultures, organizations, governmental agencies; providing assessment, planning, advocacy, research, therapy, supervision — be reduced to, say, 130 tasks worth examining in a licensing exam? Is there any scholar, anywhere, who has laid out a solid argument for such a notion? For that matter, did the SMEs understand all of those areas of practice — did that set of 21 individuals (credentials unknown) have expertise sufficient to formulate key tasks in each such area?

Here is a summary of what seems to have taken place. ASWB assembled a group of social workers who had made themselves available for the purpose. From this group, ASWB excluded a few white females and recruited a few males and minorities, for the sake of appearances. ASWB glorified these people with the title of “subject matter expert.” It gave these SMEs the list of social work tasks from the 2003 Practice Analysis. The SMEs had a meeting and, without engaging in any particular study of the matter, came up with a list of 169 tasks. ASWB printed that list in survey format asking for three reactions to each task (regarding “performance, importance, and frequency,” according to the foregoing quote), and mailed it out.

The Summary Report says that ASWB sent the survey to a random sample of people who had passed an ASWB licensing exam between 2006 and 2009. This was problematic on multiple levels. Current numbers suggest that roughly half of these people would have been newcomers to the profession, at either bachelor’s or master’s degree levels. When the survey was conducted, in 2009 or thereabouts, many of these 2006-2009 graduates would have had less than one year of experience. Some may not even have found positions in social work at that point; the social work experience informing their answers may have been limited to their part-time field placements in school.

ASWB did not provide age data; but even with the inclusion of those passing the Clinical exam (and the tiny number passing the Advanced Generalist exam), it seems that the large majority of these 16,000 survey recipients must have been fairly young. As such, they would be decidedly unrepresentative of the social work profession as a whole, at a time when roughly 80% of licensed social workers were over the age of 33. ASWB inaccurately characterizes this as a survey of “what social workers do in their jobs” (Summary Report) and as “a highly accurate profile of social work” (Exam Blue Book, p. 9). At best, it was only an impression of social work practice from the perspective of a generally young and inexperienced group.

Should this survey’s respondents determine the content of the exams that will decide who is qualified to become a social worker, or what a social worker should know? Surely they do have a perspective worth considering. But it is not plausible that schools of social work and licensing authorities should focus solely on what entry-level social workers consider important. That, however, is what ASWB is suggesting. A serious effort to figure out what a licensing exam should cover would be based, not only on those newcomers’ impressions, but also on the sometimes contrary views of their employing organizations, their immediate supervisors, their more experienced colleagues, their potentially better-trained competitors from other fields, and clients. There should be a logical connection with the elements of accredited social work education required by the Council on Social Work Education (CSWE). Exam development would entail an ongoing commitment to good research into social work practice, conducted by an impeccable and independent research entity without financial interests in the outcome. These elements were lacking.

There is another problem. A potentially large majority of these survey respondents, registering for their ASWB exams shortly after graduation, would have given ASWB their student addresses. Since then, many would have obtained jobs and relocated to different apartments and cities. In many cases, by 2009-2010 their postal forwarding orders (if any) would have expired; their surveys would have been lost in the mail, dropped in the trash by their old apartments’ new occupants, or returned as undeliverable. Despite follow-up efforts, many of the intended survey participants would not have received surveys. Right there, you have an example of how a random selection in principle becomes a skewed selection in practice, overrepresenting the views of one set of exam-takers (notably, in this case, the youngest ones, whose forwarding orders would still be effective) while understating the views of another set.

Tellingly, the Summary Report does not indicate what the response rate was. Response rates can be quite low, especially when you are asking people to take a difficult survey, and especially when you are the organization that charged them hundreds of dollars and subjected them to a difficult test that they might have disliked and even resented. Moreover, in long surveys, people can become bored and mildly confused, as they reconsider their earlier answers in light of later questions. The list of topics addressed by this survey was long and detailed — as noted above, it consisted of 169 items, each of which had to be rated on three separate scales, for a total of 507 responses. It would take pages to convey all that material. Only a fraction of those who received these bulky surveys would have bothered to open the letters and spend an hour or more to return complete responses.

Thus, for all we know, ASWB could have based its conclusions on a mere 500 to 1,000 complete responses, and those may have been skewed — tending to be submitted by people with atypically positive views of ASWB, an interest or belief in these sorts of questions, and a lot of free time on their hands. In that event, the views of even the largest categories of social workers could be misconstrued. The practice priorities of Mormon social workers, black male social workers in the South, rural social workers, and many other subgroups may have been represented by so few respondents as to be useless for purposes of providing a credible snapshot of this very scattered profession.
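The skew just described is easy to demonstrate numerically. In the sketch below, every population share and response rate is invented for illustration (ASWB published none of these figures); the point is only that a random mailing, combined with uneven response rates across subgroups, yields a decidedly non-random sample.

```python
# Hypothetical illustration of nonresponse bias. All shares and response
# rates below are invented; ASWB did not publish its response rate.

MAILED = 16000
groups = {
    # group: (share of mailing, assumed response rate)
    "recent graduates, address still current": (0.40, 0.12),
    "relocated, survey never received":        (0.40, 0.00),
    "more experienced, address current":       (0.20, 0.06),
}

returned = {g: MAILED * share * rate for g, (share, rate) in groups.items()}
total = sum(returned.values())  # about 960 completed surveys

for g, n in returned.items():
    print(f"{g}: {n:.0f} responses ({n / total:.0%} of sample)")
# Recent graduates are 40% of the mailing but 80% of the resulting sample.
```

Note that these invented numbers happen to yield about 960 completed surveys, squarely within the 500-to-1,000 range speculated about above, with one subgroup’s views doubly weighted purely through the accident of who could be reached.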

Despite ASWB’s efforts to remove so many of its materials from public view, it is still possible to obtain indirect confirmation for many of the foregoing impressions, by observing how ASWB conducted another survey. That is, at about the same time as ASWB was preparing the 2010 Practice Analysis, ASWB (2009) was also doing a study of supervision of practicing social workers. Its procedures there suggest that the Practice Analysis probably did incorporate some of the problems discussed above.

In that 2009 study of social work practice supervision, ASWB began (as in the 2010 Practice Analysis, above) by selecting a panel of “subject matter experts” (2009, p. 3). Two of this study’s 13 SMEs had no experience in supervising social workers; others had as little as two years’ experience. As a fallback, ASWB considered whether they at least had experience as social work clinical instructors; five did not, and others had as little as one year of experience in that capacity. There may or may not have been overlap, such that all 13 did have at least one year of experience in some relevant capacity; unfortunately, as in the 2010 Practice Analysis, ASWB did not provide specifics. There did not seem to be any rigorous process by which these particular individuals were selected as “experts.” All were at least 50 years old.

In that 2009 study, ASWB depended upon a single organizational psychologist, apparently playing a role similar to the “measurement scientists” mentioned above. That psychologist met with these 13 SMEs. In a focus group meeting of dubious quality, without the aid of research into the components of supervision (p. 4), the SMEs were encouraged to invent their own notion of what good supervision would involve (p. 5).

In these regards, then, that 2009 study supports the impression that ASWB probably did proceed in substantially the manner outlined above, when conducting the 2010 Practice Analysis.

To help SMEs in that 2009 study understand the kinds of knowledge that would be relevant to social work supervision, ASWB provided a definition of knowledge from a book written in 1997 (p. 6). That definition was as follows:

Knowledge refers to specific types of information people need in order to perform a job. Examples of the types of knowledge identified for performing the job of electrician are:

— Knowledge of National Electrical Code
— Knowledge of building specifications
— Knowledge of blueprint symbols

In an increasingly information-based society, and for purposes of the people-oriented practice of social work, it was rather sad that ASWB would offer an electrician as its illustration of what might count as knowledge. ASWB could instead have drawn upon, for instance, an article in which Cha et al. (2006, p. 120) presented their findings that younger social work participants were more likely than older participants to value information from researchers and scholars. Note that no option involving research and scholarship was included in the electrician example. Combined with the foregoing observation that the Masters exam Content Outline paid reduced attention to research in 2010, as compared to 2003, it appears that the SMEs may have brought practitioners’ reported tendency to disregard research — a tendency to “do it my way” or “figure it out for myself” — into both the construction and the content of ASWB’s profession-defining 2010 Practice Analysis.

To summarize, ASWB’s decision to conceal so much of its material from public view may have been driven by two distinct concerns. First, ASWB executives may have wanted to minimize the risk that people would become aware of the problems of excessive executive compensation and other abuses, as discussed in a separate post. Second, after initially putting the 2010 Final Report online, ASWB yanked it, apparently because of a belated realization that the contents of that Report might not pass the straight-face test. In both of these regards, there may have been a fear that such information might prompt important people to ask whether this organization should be deciding who was qualified to be licensed as a social worker.

Scrutinizing the Content Outlines

In the quote provided at the start of this post, ASWB said, “The results of the practice analysis are content outlines — blueprints for the licensing exams.” This section of this post discusses the process by which ASWB used the 2010 Practice Analysis, with its survey of new social workers, to produce Content Outlines. (As noted above, this post tends to focus on the Masters exam, though many of these remarks apply to other ASWB exams as well.)

ASWB says, “Each content outline is organized into content areas, competencies, and knowledge, skills, and abilities statements (KSAs).” To illustrate what those terms mean, here are a few lines from what ASWB calls (somewhat confusingly) the Content Outlines and KSAs for the Masters exam:

This section of the exam may include questions on the following topics:
• Developmental theories
• Systems theories
• Family theories
• [the list continues]

ASWB explains that it considers Human Development, Diversity, and Behavior in the Environment (HDDBE) to be a Content Area — a broad area of knowledge to be measured by (in this example) the Masters exam. ASWB does not explain why it decided to lump these particular topics together to form one of the four Content Areas on the Masters exam. Other authorities do not share ASWB’s view. For instance, the Code of Ethics of the National Association of Social Workers (NASW, 2008) considers diversity to be just one aspect of Social Justice, and it considers Social Justice to be one of six major Ethical Principles.

According to the Masters exam Content Outline, as shown here, HDDBE is worth 28% of the total grade. The Professional Relationships, Values, and Ethics Content Area is worth 27%. The two other Content Areas on the Masters exam are Assessment and Intervention Planning (24%) and Direct and Indirect Practice (21%). This selection of major topics yields odd inferences: that ASWB does not consider knowledge of human development, human behavior, or diversity to be a part of either direct or indirect practice, for instance, and that ASWB considers Professional Relationships, Values, and Ethics more important than all aspects of Direct and Indirect Practice.

Already at this top level of the Content Outline, then, it appears that ASWB entertains some strange ideas about how it should divide up the topics comprising social work practice, and what weight it should give to those topics. To prevent this long post from becoming even longer, I limit my remarks, here, to just a few of the many ways in which ASWB’s Content Outlines seem to vary from what social work authorities and practitioners alike might expect.

ASWB refers to subpoints within each of the four Content Areas as Competencies. The foregoing quotation from the Masters exam Content Outline indicates that point I.A., Theories and Models, is a Competency. The complete list of Masters exam Competencies is as follows:

I.A. Theories and Models
I.B. Abuse and Neglect
I.C. Diversity, Social/Economic Justice, and Oppression

II.A. Biopsychosocial History and Collateral Data
II.B. Use of Assessment Methods and Techniques
II.C. Intervention Planning

III.A. Direct/Micro
III.B. Indirect/Macro

IV.A. Professional Values and Ethical Issues
IV.B. Confidentiality
IV.C. Social Worker Roles and Relationships

At this secondary level, as at the top level, we have odd implications — that, for example, Social Worker Roles are not a part of direct or indirect practice, and that Confidentiality is a major aspect of practice. Certainly confidentiality is important. But in what sense can it be on a par with the whole realm of Direct/Micro practice? The implication is that, all other things being equal, a test-taker who knows nothing about direct practice is as competent as one who knows nothing about confidentiality. This seems nonsensical. Many students are interested in degrees in Direct Practice; very few are interested in degree programs in Confidentiality.

It is not clear why ASWB chose to elevate some topics to be highlighted as Competencies. Why, for example, does ASWB elevate Confidentiality but not the Intrinsic Worth and Value of the Individual? The latter is one of NASW’s major ethical principles, but to ASWB it is just one of eleven minor aspects of Professional Values and Ethical Issues.

There is another problem at this level: as in the case of HDDBE at the top level (above), ASWB invents general-purpose composite headings and places them on a par with specific competency headings. Practitioners might agree that Confidentiality or Intervention Planning are indeed areas of competency, even if they do not qualify to be highlighted among the profession’s eleven major Competencies. But in what sense is Theories and Models an area of competency? It seems more akin to the use of the English language: whatever you are doing, practicing social work competently in the U.S. requires that you speak English, and likewise that you comprehend and discuss relevant theories and models, along with the relevant professional literature.

Such remarks suggest that, in terms of competencies, ASWB’s outline would be better reconstructed with top-level Parts I and II: competencies required for virtually any form of social work practice, research, or other activity, and competencies specific to particular kinds of practice. Language skills, an ability to work with theory, and an understanding of the profession’s ethical principles might be among the general competencies included in Part I; intervention planning and confidentiality might be among the specific competencies in Part II. Such a division would highlight differences among kinds of master’s-level practice — raising, among other things, the question of whether it makes sense to require clinical and non-clinical master’s-level practitioners to display the same level of clinical competence. That question seems obvious enough, and it is a major one, dividing the largest category of ASWB exam-takers; that ASWB’s exam structure neglects it entirely is itself telling.

The foregoing discussion, within this section of this post, has dealt with the top two levels of ASWB’s exam Content Outlines. There is a third level, and its existence multiplies the complexities and problems already identified. This third level contains what ASWB considers the smallest subtopics. The foregoing excerpt from the Masters exam Content Outline offers several examples of these small subtopics: Developmental Theories, Systems Theories, and Family Theories.

The Content Outlines (2014) refer to these subtopics as “Knowledge, skills, and abilities statements” (KSAs). That appears to be a misnomer: a term such as “Developmental Theories” does not constitute a statement. Alternatively, the Outlines say that the KSAs break down the Content Areas and Competencies into “discrete knowledge components” upon which individual exam questions are based. Yet it quickly becomes apparent that this, too, is a mischaracterization: the very name suggests that a KSA may involve skill or ability as distinct from knowledge, and its contents may not be discrete at all.

Consider again an example just cited: the so-called Developmental Theories KSA. Developmental Theory is the subject of many textbooks, such as Austrian’s (2008) 440-page volume. It is not a discrete topic. It is on a completely different level of generality from much more limited KSAs (e.g., “Indicators of sexual dysfunction”; “Methods used to obtain/provide feedback”). A set of two or three questions might suffice to test at least superficial competence for some KSAs, but two or three questions would barely take you past the first twenty pages of a textbook on Developmental Theory. Unfortunately, Developmental Theory is not alone; the Masters exam Content Outline is full of comparably general terms, coexisting uneasily with other examples of relatively specific terms. It is as if one were reading an outline in which People Everywhere and Life and Truth were on the same list with My Left Index Finger and The Cockroach in the Kitchen Sink: it would seem to be a list composed by a scattered and/or amusing person or process — as if ASWB were joking with us, or did not really know what it was doing.

In addition to that problem of variant specificity, the Masters exam Content Outline exhibits another problem encountered above, in the Outline’s higher levels: some KSAs appear to be miscategorized. One pair of examples is Social Justice and Economic Justice, which ASWB (contradicting the NASW Code) says are not ethical issues. Another example: Concept of Empathy, which the Outline says is not an element of Direct/Micro practice. A third example: Culturally Competent Social Work Practice, which the Outline says is not a part of the Competency dealing with Diversity, Social/Economic Justice, and Oppression.

Among the KSAs, as at the higher levels (above), there is again the problem of composite headings. For instance, the Masters exam Content Outline oddly considers Professional Values and Ethics to be just one of 31 topics within the Professional Relationships, Values, and Ethics Content Area. It appears that the person(s) who composed this part of the Content Outline may not have been too clear on what professional ethics might entail: the list of KSAs appearing under the Professional Values and Ethical Issues Competency treats Professional Values and Ethics as if they were different from ethical dilemmas, ethics in practice, and professional boundaries.

The Theories and Models Competency, discussed above, comes up again, here, as another area with composite heading problems at the KSA level. It is not clear why anyone who knows what theories and models are would treat them as a separate competency, distinct from the specific areas in which they arise. If the explanation for this strange aspect of the Outline is that practitioners are coming out of their MSW programs with uncertainty as to what theories and models are, or how they relate to practice, why perpetuate the problem by segregating theories and models into their own little world, here in the Content Outline? The situation is more disturbing, of course, if the Content Outline is so arranged because the person(s) responsible for it were not too sure how theories and models might be relevant to direct practice, to ethics, or to other major or minor topics listed on the Outline.

A few specific examples within the Theories and Models Competency may help to clarify these remarks. One KSA under that Competency is Addiction Theories and Concepts. The argument here is that the Addiction Theories and Concepts KSA belongs in the Direct/Micro practice section of the Outline. Practitioners are not likely to be thinking about or working with Addiction Theories and Concepts in settings that would be completely divorced from real-life cases involving people with addictions. Addiction work is not something to which one adds a bit of theory and, say, a dash of ethics. To the contrary, it is not possible to engage in addiction work without explicit or implicit adoption of theoretical and ethical perspectives. To cite another example, the Psychosocial Approach listed as a KSA in the Outline’s Direct/Micro section “has always drawn on both psychological and social theories” (Goldstein, 2010); hence, psychosocial theories should not be arbitrarily separated from that Direct/Micro section.

Such comments suggest that the Masters exam might be better reconceived as something more similar to licensing exams in other professions. If you want to test people on the law of real estate, you give them a real estate scenario and ask them questions about the actual practice of real estate law. You don’t hit them with textbook-style questions about abstract principles. It seems, that is, that the Masters exam Content Outline might best be replaced with a document focused on specific kinds of practice.

(That, incidentally, would highlight the inordinate number of practice settings that this single exam purports to test, raising again the question of whether ASWB’s exams conform to what clients, the public, and other stakeholders need, as distinct from what seems convenient to social work professors and practitioners. If someone can pass the Clinical exam straight out of school, and if s/he is going into clinical practice, why require a separate Masters exam? Yes, s/he may need to know non-clinical things about the profession. But we are talking about someone who has already spent years earning a degree in the field. If you doubt that their school of social work has prepared them for the social work profession, then stop accepting their school’s degree as a valid credential.)

The foregoing discussion has offered a few examples of different kinds of problems arising among the KSAs and, above them, among the so-called Content Areas and Competencies. There are problems at an even higher level. While this discussion has focused on the Masters exam Content Outline, there are also distinct Content Outlines for the other ASWB exams. Each of those outlines has its own list of KSAs, and these lists may not be entirely coherent when viewed in light of one another. For example, ASWB indicates that, unlike the situation shown above for the Masters exam, Theories and Models are not a Competency within the Clinical exam; instead, theories (e.g., of personality, behavior, or addiction) just appear as KSAs here and there, under other Competencies. As another example of seeming incoherence among ASWB exam outlines, the Masters Content Outline summarizes Developmental Theories into a single KSA (above), whereas the Bachelors Content Outline breaks development out into a dozen different subtopics (e.g., child behavior and development). One might have expected that the Masters exam, not the Bachelors, would be the one requiring more detailed knowledge.

This section has focused on the contents of the Masters exam Content Outline. The discussion has pointed out a few among the many problems permeating that document. It does not appear that the Outline was prepared with great care by knowledgeable experts. If that impression is correct, ASWB is not justified in using that Outline as, in its words, a “blueprint” for a licensing exam that costs MSW graduates millions of dollars, collectively, and that deters, delays, or denies the career plans of MSW graduates.

How the Content Outlines Were Developed

Generally, it is unclear who developed the Masters exam Content Outline, or how they proceeded to develop that Outline. This section discusses some possible answers to those questions.

It does not appear that one or more real experts on social work practice were involved in the process, or that the author(s) of the Outline simply adopted the Table of Contents or some other outline from an established textbook on social work practice. For one thing, there is no mention of any such person or textbook. In addition, as discussed above, the Outline shows signs of being prepared by people who lacked a clear understanding of various terms and concepts incorporated into that Outline, or of how those terms and concepts would relate to practice.

It also appears very unlikely that the Outline simply emerged, as if by magic, from the results of the survey of practitioners. For one thing, it seems that the tasks listed on the survey were not equivalent to the KSAs — that ASWB’s survey did not simply confront participants with terms like “Developmental theories,” “Phases of intervention,” and “Limit setting,” to quote three relatively inscrutable KSAs from the Masters exam Content Outline. More likely, they were asked to rate something like “Counsel clients in grief” or “Advocate on behalf of disadvantaged persons.” So the items on the Content Outline were evidently not the tasks listed on the survey, and the survey does not seem to have asked participants to indicate how tasks or KSAs should be arranged and subdivided.

One possibility is that the Task Force created the Content Outline. The existence of a Task Force is disclosed in ASWB’s Summary Report about the 2010 Practice Analysis. The Task Force apparently included the 21 subject matter experts (SMEs) and perhaps some others; it is not clear. The identities and credentials of these individuals are not available. As discussed above, the SMEs were not necessarily experts in any deep sense of the word. Their activity appears to have been focused on creating a list of tasks, surveying social workers on those tasks, and reviewing the data obtained from the survey. None of this seems to be directly tied to the Content Outline.

ASWB generally seems willing, indeed eager, to emphasize contributions by its various volunteers and participants. If the Task Force or the SMEs had written the Content Outlines, it appears likely that ASWB would say so. ASWB does not say so. What I have found, instead, is vague wording in which ASWB seems to avoid stating precisely who wrote the Content Outline. For example, in its Summary Report about the 2010 Practice Analysis, ASWB says that conclusions reached by the Task Force “would be included in the exam content outlines” — but who would write the words achieving that inclusion? ASWB’s (2011, p. 5) Exam Blue Book refers to “content outlines from the practice analysis,” but again fails to say how those outlines came from that analysis. (See also ASWB’s 2010 Annual Report, p. 8.)

It appears, then, that the Task Force (i.e., the SMEs, accompanied or perhaps guided by others) reached conclusions about the survey data, but that someone else drew upon those conclusions to produce the Masters exam Content Outline. We apparently cannot know who that person, or those persons, might have been. But it may be possible to narrow the field.

Consider, in particular, what else those sources say about the process. As noted above, ASWB’s (2011, p. 5) Exam Blue Book states that the current Content Outlines contain “basically the same material” as the ones finalized in 2003, although major headings were rearranged. Likewise, the Summary Report (p. 4) says that changes to the Content Outlines in 2010 “were extensive, but the modifications are more in the organization of the content outlines than in the actual content to be measured.”

Let us pause to reflect upon those words. Headings of an outline were extensively rearranged, but this did not affect the contents. So, for example, if you were to rearrange the major headings of an outline of American history, that’s OK, as long as you leave the subpoints unchanged. George W. Bush is still listed under the American Presidents heading, but American Presidents has now been moved to the American Comedians section. No harm done: it’s just a change in “organization” rather than in “actual content.”

This way of looking at things may help to explain some of the bizarre aspects of the Content Outline (above). It seems that the people rearranging the Outline did not think that the relationships among the Outline’s headings and its KSAs really mattered much. Evidently this mindset seemed perfectly normal to the persons who wrote and approved the words of ASWB’s Exam Blue Book and its Summary Report. Given that ASWB claims to consider the 2010 Practice Analysis very important, it appears that this sort of outline surgery must be consistent with the thinking of ASWB’s executive director.

There remains the question of how ASWB made the step from the survey results to the KSAs. The following paragraphs discuss that aspect of ASWB’s process.

For one thing, there could not be a straightforward one-to-one relationship between survey items and KSAs; sheer numbers would prevent that. The survey listed 169 tasks (above), whereas there are (by my count) 191 KSAs on the Masters exam Content Outline. The Summary Report (p. 3) claims there are “more than 800” KSAs in total, although the foregoing comparison of Bachelors and Masters exams suggests that this number is arbitrary, depending upon how one collapses or expands various topics.

Moreover, there are no guarantees that every KSA is linked to at least one task on the survey, and vice versa. As we have just seen, the 2010 Content Outline was, for the most part, a rearranged version of the 2003 outline. The Task Force is said to have constructed the survey by consulting the list of tasks used in 2003, but there is no clear indication that anyone tried to reconcile the task list and the Content Outlines, in either 2003 or 2010. So there may be tasks whose components are not specifically reflected in KSAs, and there may be KSAs that were not included in any of the tasks on the survey.

So it appears that some KSAs may have stayed on the Content Outline, despite having no obvious link with anything in the survey, merely because they were already there; there does not appear to have been any rigorous effort to make sure that old items on the Content Outline were still relevant to practice. (For a brief history of ASWB content outlines back to 1987, see Blue Book p. 5.)

Reciprocally, a task on the survey could require forms of knowledge or skill that would not be specifically reflected in the Content Outline. According to the Summary Report (p. 1), “The Task Force reviewed the [survey] data and linked each task to a competency that would be included in the exam content outlines.” This statement suggests that there was not even an attempt to link survey tasks with discrete KSAs.

As ASWB’s Manual for New Board Members (2010, p. 18) admits, “Usually, there are several knowledge areas attached to any one task, reflecting the complexity of social work practice.” For instance, a social work task (or an exam question, with its four multiple choice answers) related to trauma could require knowledge about KSAs appearing under several different Masters exam Content Areas (e.g., Impact of Stress, Trauma, and Violence, in Content Area I; Indicators of Traumatic Stress and Violence, in Content Area II; Crisis Intervention Approach, in Content Area III). But ASWB’s process does not seem to have included a step in which real experts (in e.g., trauma) would attempt to identify all of the discrete topics (including those not yet appearing in the KSAs) arising from each survey task. The survey and the Content Outline seem to have been following somewhat separate tracks.
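The information lost by a one-link-per-task process can be sketched in a few lines. In this sketch, the three KSA names are the ones quoted just above, but the task wording, the data structure, and the choice of which link is kept are hypothetical illustrations, not ASWB's actual data or procedure:

```python
# Hypothetical sketch: one survey task can draw on KSAs in several
# Content Areas, but a one-link-per-task process keeps only one of them.

# A trauma-related task, plausibly linked to KSAs in three different
# Masters exam Content Areas (task wording is invented for illustration).
task_to_ksas = {
    "respond to a client after a traumatic event": [
        ("I", "Impact of Stress, Trauma, and Violence"),
        ("II", "Indicators of Traumatic Stress and Violence"),
        ("III", "Crisis Intervention Approach"),
    ],
}

def single_link(ksas):
    """Keep only one linked area per task, as the reported process did."""
    return ksas[0]  # which one is kept is unclear; the rest are discarded

for task, ksas in task_to_ksas.items():
    kept = single_link(ksas)
    dropped = [k for k in ksas if k != kept]
    print(f"Task '{task}' counted under Content Area {kept[0]} only;")
    print(f"  {len(dropped)} other linked KSA(s) play no role in the weighting.")
```

The point of the sketch is simply that two of the three linkages vanish: whatever the task contributes to Content Areas II and III never reaches the weighting.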

As just noted, ASWB linked survey tasks with Competencies, not with KSAs. Since most of ASWB’s eleven Competencies on the Masters exam Content Outline are extremely general, this linkage would not usually say anything meaningful. For instance, it would be obvious that a client advocacy task would be linked with Direct/Micro practice. Awareness of that link could have come from a textbook or simply from common sense.

It seems the only purpose of a link between survey tasks and Competencies was to yield the grading percentages cited on the Content Outline — the indication that, for example, HDDBE accounted for 28% of the Masters exam score. Despite the fact that a single task could involve several knowledge areas, it seems ASWB selected just one of those knowledge areas — just one Competency — as the link between that task and the Outline.

How, then, did ASWB arrive at the calculation that 28% of the Masters exam score should be attributed to HDDBE? A clue arises from the description of ASWB’s exam scoring process. According to the Exam Blue Book (p. 10), ASWB’s grading is quite simple. There is no attempt to divvy up the credit from a correct answer, on the exam, across the multiple areas of knowledge that one must possess to answer the question correctly. Each exam question is linked to just one Content Area. So if a Content Area is worth 28% of the total score, that means that 28% of the exam’s questions will be linked to that Content Area. It seems likely that similar logic prevailed at the phase of linking survey tasks to the Content Outline: the figure of 28% evidently came from the belief that 28% of the tasks on the survey were primarily linked to the several Competencies within the HDDBE Content Area.
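On that reading, the arithmetic is simple enough to sketch. The four Content Area weights below are the ones quoted above from the Masters exam Content Outline; the 150-question scored form is an illustrative assumption, not a figure taken from ASWB:

```python
# Sketch of the weight-to-question-count arithmetic described above.
# Content Area weights are from the Masters exam Content Outline;
# the 150 scored questions are an illustrative assumption.

weights_pct = {
    "Human Development, Diversity, and Behavior in the Environment": 28,
    "Assessment and Intervention Planning": 24,
    "Direct and Indirect Practice": 21,
    "Professional Relationships, Values, and Ethics": 27,
}
SCORED_QUESTIONS = 150

# Each question is linked to exactly one Content Area, so a 28% weight
# just means 28% of the scored questions fall under that area.
allocation = {
    area: round(pct * SCORED_QUESTIONS / 100)
    for area, pct in weights_pct.items()
}

# Python's round() is half-to-even, so 31.5 -> 32 and 40.5 -> 40;
# with these numbers the rounded counts happen to total exactly 150.
for area, n in allocation.items():
    print(f"{n:3d} questions ({weights_pct[area]}%)  {area}")
```

Whether the real question counts reconcile to the form length this neatly is unknown; the point is only that, under single-link grading, the elaborate weighting scheme reduces to counting questions per Content Area.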

In short, it looks like ASWB translated real-life tasks from the survey into the chaos of the Content Outline, and that ASWB then translates parts of the Content Outline back into real-life situations constituting exam questions. It appears, that is, that the Content Outline complicates the process without adding coherence or rigor. It would seemingly make more sense to dispense with the Outline and proceed directly from the tasks on a survey to questions on an exam — to design the survey, that is, to do a good job of identifying what social workers do, and to ask exam questions that arise directly from the survey’s results. For instance, examinees might be informed that a certain kind of scenario might appear on the exam, and that the factors that may vary within that scenario could include the client’s age, sex, and other characteristics, the school’s treatment of the client, and so forth.

Overall, the foregoing investigation suggests that the specific elements of ASWB’s Content Outlines — the so-called Content Areas, Competencies, and KSAs — may be worse than useless. There appears to be little if any justification for the claims surrounding these outline components, or for the uses made of them. The “Content Area” label is patently vacuous; if an outline is used at all, its top level probably should consist of major divisions that have some clear meaning and logic. The “Competencies” do not generally seem to be actual competencies, nor do they obviously identify the full set of the key competencies required for practice. The KSA concept does not seem to make any sense; it appears to be shorthand that means “an idea, a large or small area of knowledge, a specific ability, or anything else that should be included in the list of what we think social workers should know.” It seems, in short, that ASWB finds it convenient to have an outline, and in that case it is more impressive to use these glorified terms than to speak of mere points and subpoints on a list.

Clearing away the fog of the Content Outlines may permit certain beneficial developments. On one hand, there may be some advantages in the creation of a defensible list of general concepts or principles within social work practice — of, say, twenty areas in which MSWs should have specific abilities. On the other hand, there may be advantages in dispensing with the motley hundreds of KSAs and commencing development, instead, of many thousands of discrete situational rules leading to a computerized practice database comparable to those used in other professions.

To summarize this section, the Masters exam Content Outline appears to be confused, not directly related to the 2010 Practice Analysis, not useful for purposes of exam prep guidance, not essential for exam question development, and not consistent with or broadly supported by relevant literature. It appears that the primary effect of the survey was just to provide a justification for the allocation of points on ASWB’s exams — and yet, in net terms, that allocation appears to be done poorly. To a very casual observer, the process of developing and mailing out 16,000 surveys might look like legitimate research. There are grounds to wonder whether the generation of that impression was the survey’s primary objective.

From Content Outlines to Exam Questions

The preceding discussion has largely focused on the relationship between ASWB’s survey of social workers and its Content Outline. This section takes the Content Outline as a given and turns, instead, to the production of exam questions from that outline.

As discussed above, the Masters exam Content Outline consists of four primary headings, called Content Areas; eleven secondary headings, called Competencies; and numerous subsidiary points, called KSAs. According to the Summary Report on ASWB’s 2010 Practice Analysis, the “competency statements” (by which ASWB apparently just means the Competencies) are “the building blocks of the exams.”

In this regard, ASWB’s description does not seem to fit the situation. “Building blocks” are, for example, bricks: small, solid pieces that can be put together to make something big. These 11 so-called Competencies are obviously too vague to serve as the fundamental elements of an exam.

Nor are the KSAs listed on the Masters exam Content Outline useful as exam building blocks. For one thing, the preceding sections have noted that the KSAs vary in generality. It is unrealistic to expect people to display even “minimum competency,” as ASWB puts it, across those KSAs. Consider the KSA calling for minimum competency to solve real-world problems with the “use of expertise from other disciplines.” That demand is bottomless: expertise in other disciplines goes on forever, and in many regards greatly exceeds the expertise within this discipline.

Many of the 191 KSAs on the Masters exam are topics of great breadth and difficulty. Examples include psychopharmacology, human genetics, impact of social institutions on society, and current Diagnostic and Statistical Manual diagnostic framework and criteria. Minimum competency in even one of those areas can be the focus of an entire master’s degree program. The expectation that MSWs have minimum competency across 191 such areas is absurd. Nor is there any evidence that ASWB’s writers and reviewers of exam questions have remotely the depth of education and experience to develop competent questions across such endless horizons of human knowledge. In short, contrary to its claims, ASWB cannot possibly be testing Masters examinees for minimum competency across those 191 KSAs.

This is apparently why, when you begin to prepare for the Masters exam, you discover that ASWB does not offer to sell you a book containing the material you would need to learn for such an exam. They would have to sell you a bookmobile. Instead, their test preparation materials consist largely of sample questions. The idea seems to be that, if you buy all of their study materials, for a total price of about $210, you will have the best available (authorized) idea of the kinds of questions they are likely to ask. That seems to be all they can do, to help you figure out what might be on the exam.

ASWB’s claim of testing minimum competency is not credible. A more realistic goal would be to test people for minimum familiarity with those 191 KSAs. The exam prep mission, in that case, would be something like this: within textbooks on Developmental Theories and Human Genetics, just try to know what the chapter titles (e.g., Acetylcholine, DNA Structure) mean.

Why do Masters exam candidates not simply laugh when they encounter the expectation that they will be even minimally competent in so many complex subjects? There appear to be several reasons. One is that people are sheep. As shown in my own experience in the school of social work at Indiana University, survival in a homogenizing (in some ways oppressive) society, and in its major institutions (e.g., universities), requires learning not to ask awkward questions or otherwise make waves. The impossibility of minimal competence suggests that MSWs must simply fake it; they may feel that their only real option is to figure out how.

Another reason why Masters exam candidates do not laugh is that, for people who have endured MSW programs, such outlandish claims have become routine. This is hardly the first time that the syllabus tells us we will learn X and Y; we endure a semester of sitting around and reading (or, often, not reading) a bunch of assigned books and articles; at the end, we do not actually have much to show for it, in terms of real-world capability; and for this, they give us an award. So, sure, we can pass an exam designed to test social work “competence.”

There is another possible explanation. In the kind of education just described, people may simply not understand what competence is. They may think that, by standing up and spouting off, they are achieving something similar to what the engineer achieves when s/he builds a bridge. It may appear that, in this profession, to be practicing as expected, all you need is to say things that your professors would approve — and that doing so somehow makes you the professional equivalent of that engineer. Social work education does not generally test the ability to achieve real things with people; why should the licensing exam be any different?

With the aid of such remarks, let us reconsider what the ASWB Masters exam is testing. Not competence; that much is clear. Not familiarity across the entire contents of the many disciplines, theories, and subjects listed in the KSAs; our social work educations did not seek to achieve that either. We did not, in fact, have even a rudimentary introduction to subjects like Human Genetics and Psychopharmacology, though perhaps we did acquire fragmentary bits of learning about them in passing.

The significance of the Masters exam Content Outline, it seems, is not that we need to “know” all the subjects that it lists, in any realistic sense of the word; it is just that those are the subjects that could arise, in at least some tangential fashion, as we engage in the day-to-day practice of social work. There probably won’t be an exam question exploring Developmental Theories per se, but there could be a question about something else, where one of the four multiple-choice answers uses a concept from somebody’s developmental theory.

That could explain why there is not, and perhaps cannot be, an authoritative textbook that would contain the knowledge needed for the Masters exam. Textbooks are supposed to cover the waterfront. If they pay any attention to developmental theories at all, they are supposed to present the two or five leading developmental theories, and elaborate upon each of them. This requires a lot of verbiage. A textbook, providing textbook-style coverage of the myriad topics appearing on the Masters exam Content Outline, would have a million pages.

This leaves the problem of how a person can possibly prepare for the Masters exam. The first conclusion appears to be that you cannot study for it by simply launching forth into extended reading on the various topics on the Content Outline. You might get bogged down and lost, miles away from anything that would ever appear on the exam. As shown in another post’s discussion of sample exam questions, it is even possible that reading on your own would do more harm than good, for purposes of exam preparation. Social work education, and ASWB’s grasp of the subjects it tests, are not necessarily up to the level of the information that you might acquire from other sources.

So here is one way to see the situation. This perspective may apply more to some people and situations than to others. There may also be better ways to phrase this perspective, so as to apply to a broader set of people and situations. But as a rough suggestion, the following paragraphs may suffice.

The suggestion is, in effect, that social work education may always have been oriented toward studying and doing things that social work students want to study and do, as distinct from what they are good at, or what the world most needs from them. It may be that schools of social work have become more oriented toward such catering to students in recent decades, as the effort to profit from the marketing of social work education has become more established.

In that sort of environment, within the generally grade-inflated ambiance of American higher education, it often seems to be tacitly accepted that social work students will be more inclined to study or at least skim assigned readings that they agree with (see Knobloch-Westerwick & Meng, 2009) or otherwise find appealing. Materials featuring a relatively quick and superficial treatment of a subject, with some interesting tidbits from here and there, are likely to be more appealing than texts that require sustained attention, familiarity with prior literature, technical (e.g., statistical) sophistication, or other relatively advanced education or ability.

For the person preparing for an ASWB exam, what seems essential is not actual mastery of the topics listed in the Content Outline, but rather the experience of going through a few years in a school of social work. That experience can help to inculcate an understanding, not of Developmental Theories in the abstract, but of which developmental theory social workers find most appealing, along with a sense of what we believe about male and female roles, what we think about Borderline Personality Disorder, what forms of labeling and blaming are encouraged, and so forth.

Seen in this light, the purpose of preparation for an ASWB exam is clearly not to arrive at a nuanced and deeply informed understanding of various topics. It is, rather, to learn what the profession has adopted as the specific right answer to some issues and, beyond that, to develop a general-purpose sense of how a social worker is supposed to react to various terms, concepts, and situations. Here’s how I phrased the situation in a prior draft of this post:

If you see it this way, ASWB is not really lying to people. If this profession implicitly defines minimal competence as the ability to bullshit people across a broad variety of topics, most of us have indeed been educated, and ASWB is right on target in proposing to verify that. From this perspective, our educations have consisted, not so much of developing empirically supported knowledge through the thoughtful reading and intelligent discussion of scholarly works, but rather of acquiring buzzwords and speculations to fill chinks in our armor.

This seems to be, roughly speaking, the guiding philosophy that enables exam-takers to find a path through the endless piles of material that the KSAs point toward. It is another way of phrasing something that Collins said, as quoted in another post: “[O]ne study suggests that passing licensing exams is correlated more with the candidate’s personal therapy experience rather than fieldwork, work experience, graduate or undergraduate education, or clinical training.”

This seems to be approximately the nature of the Masters exam. It would seem to take some cynicism for the smart people at ASWB to participate in this kind of enterprise. Cynicism is often the price of high compensation. But a somewhat understandable way to view the situation is that our society does not presently accept the idea of good socialistic protections for the vulnerable; therefore, it becomes necessary to invent substitute protections that look capitalistic. So we build a social work profession, with licensing and exams and so forth, modeled on the legal profession’s protections for the wealthy. Society generally accepts the need for lawyers; therefore, a social work scheme that emulates the law’s licensing process might be able to fly under the radar. It seems to work: social work licensing boards have not been hugely controversial. Now we just need someone to administer something like the bar exam and — no sooner said than done — we have the ASWB exams.

Thus we find ourselves with a sort of circus — about which the famous man said, you can fool all of the people some of the time. That is, eventually there are consequences. For MSWs, we face a situation where, in a world of increasingly sophisticated mental health professions, clinical practice is at risk of becoming obsolete. In an era when admission to nursing school (never mind a psychiatric nursing program) tends to be highly competitive, and sophisticated technology is becoming a growing part of mental health treatment, almost anyone can get into an MSW program, and five out of six of those programs’ graduates who attempt the ASWB Masters exam will pass on the first try. You can decide for yourself whether my other post is on target, in its discussion of what those graduates are paid and how their careers tend to turn out.

The consequences for ASWB are not quite like that. ASWB is making good money from its exam fees and sales of related merchandise; it hopes to do more of the same through an expansion abroad (Annual Report, 2012, p. 18). From ASWB’s perspective, this is not the time to be shy about the Masters exam, or about social work education.

For many years now, ASWB has resisted the call (discussed in another post) to let researchers review its exam testing data, so as to verify that ASWB licensing exams really are testing competence. ASWB is not afraid that researchers will discover that it has not been doing rigorous testing of exam questions; it is afraid that researchers will deduce that it has been deliberately avoiding such rigorous testing. You can’t just flunk 95% of MSW program graduates on the grounds that neither they nor anybody else can display minimal competence across the topics in the Content Outline. There would be hell to pay. The only way to make the system work, in its present form, is to help most MSW program graduates identify the “right” answers to exam questions. This is why the research by Albright and Thyer, discussed in that other post, found that test-takers displayed a remarkable ability to select the right multiple-choice answers without even being able to see the questions: ASWB had written those answers to help the test-taker along.

A researcher, given access to ASWB’s exams from the past ten years, would probably be able to boil down the subjects covered in those exams to a manageable list of topics and recommended readings. It would be possible for ASWB to offer a compact set of exam prep books, or a pool of 3,000 questions from which the exam’s 150 scored questions will be drawn. ASWB has done nothing like that. Doing so would tend to imply that students do not really need to sit in MSW classrooms for two years.

It seems, in short, that the system is rigged. ASWB is not jeopardizing two-year MSW programs by disclosing the smallness of the areas tested, and ASWB is also not jeopardizing itself by coming clean on its art of making a potentially impossible exam easily passable. What is at stake here is not just the survival of ASWB and of those two-year MSW programs, however; it is the profession itself. What unifies this highly scattered quasi-profession is not a readily identifiable body of shared empirical knowledge. It is the experience of being steeped, during two or more years of graduate study, in an academic culture that teaches us what to think and how to react. This may explain why social work professors have sometimes been observed clinging tenaciously to things that make no sense: it may be all they have.

The suggestion here is that, in effect, ASWB helps to hold the profession together by conferring legitimacy upon the two-year MSW process of being marinated in approved ways of thinking. What makes social work a profession is not so much the wildly disparate things that social workers do, often with little training in how to do them. What makes it a profession is like what makes a person a lifelong fan of his/her college basketball team: it’s where you’re from; it’s part of who you see yourself as. It is a circle, and circles are simple: you are a social worker because you attended a social work program, and you attended a social work program in order to become a social worker. Who cares what the profession’s definition is?

It seems, in other words, that ASWB is in a position to capitalize upon, and must live up to, expectations that it will wave a magic wand and produce herds of test-takers who study for and pass a putatively rational exam at a fairly stable rate. This is what preserves prestigious and lucrative careers, in schools of social work and at places like ASWB. As long as the whole thing doesn’t absolutely stink, state licensing authorities will find it difficult to buck the tide; they will pretty much go along and hope that the market eventually helps consumers of mental health services, at least, to figure out which sorts of mental health professionals deserve the big bucks. The poor, often presented with no choice as to the quality of social workers impinging upon their lives, will have to fend for themselves. In this regard and others, social work has thus become more or less the opposite of what it claims to be.

For what may be clever reasons, ASWB has substantially failed to identify and communicate, to test-takers, a meaningful and realistic list of topical priorities for purposes of exam preparation. This state of affairs tends to favor the wealthy, who will be able to afford assistance from professional test prep organizations that may have data banks containing questions that have appeared on ASWB exams, along with useful insights on what to study and how to answer certain kinds of questions. In another reversal of the profession’s alleged priorities, then, it is no surprise if test-takers from relatively disadvantaged regions and schools have lower pass rates. This, in general, is the scheme by which unlicensed minority social workers of yore are supplanted by whiter and supposedly more educated social workers possessing that expensive license.

ASWB’s Item Writers

There remains just one problem: ASWB still has to write tests on topics discussed in thousands if not millions of pages of scholarly literature. Where to begin?

The quotation in the Introduction to this post supplies an answer. After mentioning the Practice Analysis and the Content Outline (above), that quotation says this:

Questions on the exam are written by practicing social workers, a group of individuals who are selected to reflect diversity in practice setting, ethnicity, race, and geography.

This quote says that the exam’s contents will be shaped by two factors: practicing social workers will write exam questions that seem appropriate to them; and what they write, and how they write it, will be shaped by their demographic characteristics.

Let us begin with the second part of that formulation. It seems to say that the people who write questions (what ASWB usually calls “items”) for ASWB exams exhibit diversity in four ways: practice setting, ethnicity, race, and geography. This claim is difficult to assess, because ASWB withholds so much information, as noted above; but there are some ways to get a sense of the situation.

At this writing, ASWB’s most recent Annual Report is the one for 2012. That report (p. 11) says, “A new class of examination question writers was trained in mid-summer.” Accompanying those words, the report provides a photo of this new class (consisting of 20 people) and a list of their names and the states from which they hail. Another document, called the ASWB Program Yearbook (2013), similarly provides a list of “Item Writers, class of ’13.” There are 19 names in that list. These two documents convey the impression that ASWB retains and trains about 20 Item Writers each year. This is confirmed on the Exam Writer Program webpage, which states that ASWB “accepts only 20-25 participants a year.”

That Exam Writer Program webpage also says that the program’s one-year contracts “are renewable indefinitely, depending on the performance of the writer.” Then again, ASWB’s Policy Manual (2012, sec. 2, p. 11) states that these contracts are renewable for a maximum of three years. The ASWB Association News issue of March/April 2014 says that the contract is renewable for only two additional years, which may mean the same thing. Writers apparently need not have passed the exams for which they are writing questions; the webpage indicates that, for a year after they leave the program, they must agree not to take any licensing exam to which they have contributed questions.

One cannot be sure how many Item Writers ASWB employs. While a single class in the past few years has consisted of about 20 people, the possibility of renewal for two or three years (or more) suggests that there could be 60 or more Item Writers altogether. One apparent constraint arises from the indication, on the Exam Writer Program webpage, that writers are expected to submit 30 items per year. The 2012 Annual Report (p. 10) says that Item Writers submitted a total of 1,634 new test questions in 2012. The March/April 2014 issue of Association News seems to say that Item Writers submitted 1,635 new questions in 2013. (This incredible coincidence is apparently not an error; the reported percentages of items approved vary between those two sources.) A set of 1,635 new questions, at a rate of 30 questions per writer, implies that there were exactly 54.5 writers in each of those two years. Apparently someone had worked out a half-time contract.
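The back-of-the-envelope arithmetic above can be laid out in a few lines. (The 30-items-per-writer quota and the 1,634/1,635 totals are ASWB’s published figures; the inference that total submissions divided by the quota approximates the number of writers is mine, and assumes every writer actually submitted the full quota.)

```python
# Estimate the size of ASWB's Item Writer pool from published figures.
# Assumption: each writer submits the stated quota of 30 items per year.

items_per_writer = 30                  # quota from the Exam Writer Program webpage
submitted = {2012: 1634, 2013: 1635}   # totals from the Annual Report / Association News

for year, total in submitted.items():
    writers = total / items_per_writer
    print(f"{year}: {total} items / {items_per_writer} per writer = {writers:.2f} writers")
# Both years work out to roughly 54.5 writers -- far more than a single
# incoming class of ~20, consistent with multi-year contract renewals.
```

If some writers fell short of the quota, the true head count would be higher than 54.5; the figure is a floor under that assumption, not a census.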

The situation seems to be that ASWB brings in about 20 new writers, tells them that they can renew two or three times if they perform well, and then allows a few really good ones to linger on beyond the three-year mark. The calculation of 54.5 writers suggests that most do renew for at least a year or two.

Thus, it seems that, while the two sets of names and photos mentioned above do not provide a complete list of Item Writers (and ASWB avoids making any such list available), they do provide a sense of what the majority composition of the team was like, as of 2013. Let us begin, then, with ASWB’s claim that these Item Writers are geographically diverse. Is that claim accurate?

The answer depends on what one means by geography. As noted above, a person from Chicago might be considered similar to one from New York or, alternately, from rural Illinois, depending on whether geographical similarity is intended to involve urban/rural or East Coast / Midwest distinctions. That said, the two lists (above) both name the writers’ states of origin, so apparently that is the basis of ASWB’s claim.

On a state basis, the estimated pool of 54.5 Item Writers in 2013 probably did not reflect geographical diversity. Of the 39 known members of the classes of 2012 and 2013, 18 (46%) came from the South (i.e., Virginia through Texas); 9 (23%) came from the East (i.e., Maryland through Maine); 4 (10%) came from states and provinces bordering on the Pacific Ocean (bearing in mind that California was not an ASWB state); and 8 (21%) came from the Midwest. In the East, there were none from New England. There were none from any Rocky Mountain states, from Arizona and New Mexico all the way up to Montana and Idaho. In the midwestern states west of the Mississippi, there were only four from Missouri and Kansas north to Canada. In terms of most populous states, there were five from New York and six from Texas, but only one each from Florida, Illinois, Pennsylvania, and Georgia, and none from Michigan, New Jersey, or Washington.

It is just as well that the quote at the start of this section did not claim to reflect diversity by sex; however, ASWB’s Candidate Handbook (2014, p. 2) does claim that Item Writers are selected to reflect gender diversity as well. Of the 39 members of the Item Writer classes of 2012 and 2013, only five were male. But then, if diversity is construed in terms of a group’s representation within the gender-skewed ranks of licensed social workers, perhaps that distribution would be about right.

The quote did claim diversity by ethnicity, race, and practice setting. The document listing the members of the class of 2013 provided details on these matters; the list for the class of 2012 did not. Judging from the 2013 document, practice setting seems to have been construed as involving Direct Practice, Academia, or Other Indirect Practice. None of the members of the class of 2013 were in Other Indirect Practice. Five were in Academia. It is odd that Academia was included as a practice area; there is no ASWB exam for that area. Among those in Direct Practice, there was no specification of particular areas, and thus no way of knowing how many were specifically clinical, or otherwise what their direct practice specialties might have been.

Diversity by ethnicity and race was another area in which ASWB’s claim seemed unwarranted. Of the 19 members of the class of 2013, eight (42%) were white, four (21%) were Asian, two each (11%) were black and Hispanic. Regardless of whether ASWB was aiming to reflect diversity in the U.S. population or just among social workers, percentages of 42% white and 21% Asian do not provide accurate reflections of diversity.

The point here is not that participants in ASWB’s list of Item Writers, or in most other groups or lists, should be selected based on their appearances rather than their competence. They shouldn’t. The point is, rather, that ASWB’s claims appear to have been false.

What ASWB seems to have meant was, in effect, “We are going to choose a bunch of minorities to create a group of people who look more diverse than the social work profession really is.” They achieved that. But looks are deceiving. Regardless of skin color, a group of superficially different middle-class suburbanites will not bring remotely the diversity, to the task of writing social work questions, that one would achieve by choosing an entirely white (or black, or Asian, or female) group to represent varieties of religious and political belief and socioeconomic status. Those are the kinds of areas where the real battles are being fought today — and, as usual, at ASWB and elsewhere, the social work profession is not there.

For purposes of writing exam questions, it is not clear that a preoccupation with racial, ethnic, or other forms of diversity even makes sense. ASWB’s statistical analyses (below) explicitly seek to remove bias from questions — to eliminate, among other things, the possibility that a question will be easy for some kinds of people and hard for others.

Possibly the idea would be that a member of a minority could notice a practice issue that would escape the attention of others. But that concept would have two problems. First, it would defeat the purpose of the survey (above), which was to obtain a broad and representative impression of social work practice today — not an impression shaped by the perspectives of targeted subgroups. Either approach could have its merits; the point here is just that you can’t have it both ways. Second, given the reality of white encroachment upon forms of social work practice in which minorities were previously more numerous, as discussed in another post, it sounds like ASWB’s goal is to assist in what that post characterizes as white dominance and cultural imperialism: to guide the profession’s predominantly white licensees in their efforts to replace unlicensed minority practitioners. In all events, as discussed above, engaging in racial or sexual discrimination does not guarantee diverse perspectives. You will probably be further ahead if you seek Item Writers who evince familiarity with ideas and perspectives conveyed in good literature about people who are old, black, unemployed, alienated, etc.

Instead of an unethical focus on skin color and the like, it would have been heartening if ASWB’s comments about its Item Writers had focused on their credentials and other signs of ability. The quote at the start of this section did claim that all ASWB Item Writers were “practicing social workers.” But as just noted, five of the 19 members of the class of 2013 were “practicing” only in the sense of being in academia. Being in academia could be good, if the academics were experts in their subject areas — but, again, that appears unlikely, as only two of the 19 had PhDs (fields unspecified). The other academics were MSWs who presumably taught clinical courses.

Practice experience uninformed by research arguably has its place, but the newcomers appeared mixed on that level as well. ASWB did not state how many years of experience any of them might have had. Their group photo provided a general impression that most were under 40 and some were under 30. While youth does not absolutely defeat the possibility of experience spanning a variety of topics and life circumstances, it does diminish it.

Guessing about people based on their appearance in a photo is obviously a fallback strategy. Needless to say, the better practice, by a nonprofit responsible for vetting thousands of would-be social workers, would involve presentation of Item Writers’ credentials and full disclosure generally.

Item Writers’ Areas of Expertise

For each member of the class of 2013, ASWB did provide a statement of the Item Writer’s areas of expertise. I compared those areas of expertise against the Masters exam Content Outline. There was very little direct correspondence between these two lists. For example, the areas of expertise named by some writers included Adolescents, Adolescent Behavior and Treatment, and Anxiety, but those terms did not appear in the Content Outline. In other words, somehow ASWB’s own question-writers named areas of specialization that were mostly not included in what ASWB described as the profession’s key areas of knowledge, skill, and ability.

In some cases, there were KSAs that would overlap to some extent with the Item Writers’ areas of claimed expertise. For example, the area of Adolescent Behavior and Treatment would be partly addressed by the Outline’s inclusion of Adolescent Development. But that was the only reference to adolescents in the entire Outline, and an understanding of development is certainly not the same as an understanding of treatment.

It was interesting enough that the set of roughly 200 items in ASWB’s Content Outline had so grossly failed to match up with, or even accommodate, the areas of expertise named by its own Item Writers. It was even more interesting that ASWB had not thought to ask these incoming Item Writers to express their areas of expertise in terms appearing in the Content Outline. That raised a question of whether ASWB took its own KSAs seriously, as terms that were useful for identifying actual areas of practice. Since the Masters exam was the most frequently administered ASWB exam, it certainly seemed that some of the Item Writers should have stated areas of expertise that would clearly correspond to topics tested by that exam.

If ASWB had asked these incoming Item Writers to phrase their areas of expertise in terms listed in the Content Outline, it is not clear what would have happened. Some of the Item Writers did not seem to have areas of expertise. For instance, one named Clinical Practice as her area of expertise. That would be the focus of the entire Clinical exam. On the Masters exam Content Outline, Clinical Practice would involve, among other things, the whole Direct/Micro Competency, comprising more than 60 KSAs. But there was also the opposite problem: other Item Writers named areas of expertise that were mere subsets of a single KSA. For instance, one said that her area of expertise was Substance Abuse Among Offenders, which would be a specialty within the Indicators of Substance Abuse and Other Addictions KSA, and at the same time would also implicate KSAs having to do with prisons, offenders, incarceration, and criminality. Simply put, the KSAs in the Content Outline did not seem to constitute areas of expertise, as understood by this group of practicing social workers.

Incidentally, the previous paragraph mentions KSAs having to do with prisons, offenders, incarceration, and criminality. That mention is fictional. As it turns out, none of those four terms appeared to be represented in any way within the Masters exam Content Outline. It was not clear what to make of this. It did not seem that the entire social work profession was completely out of touch with that large set of social issues; after all, we did have this Item Writer who claimed an expertise in Substance Abuse Among Offenders. Somehow, ASWB’s Practice Analysis had not just missed these Item Writers’ particular specialties; it had missed the whole criminality/incarceration topic, just as it had missed unemployment, which the NASW’s Code of Ethics specifically identifies as a form of social injustice.

My impression is that ASWB missed these topics of criminality and unemployment, and kept on missing them for years after scholars and commentators in other fields had devoted countless pages to them, because the narrowly educated white females who dominate the profession, numerically and ideologically, have historically tended to view those subjects as men’s issues, and have remained largely opposed to serious engagement with that sort of thing. As noted above, unemployment somehow remained off ASWB’s radar even in the midst of the Great Recession. It seems, in other words, that ASWB’s convoluted 2010 Practice Analysis, and its Item Writer selection process, both failed to identify major areas that practicing social workers encounter on a daily basis.

If nothing else, at least the Masters exam Content Outline should have been updated when these Item Writers arrived and began to claim areas of expertise missing from that Outline. There was, in other words, an apparent problem of rigidity and/or insularity, in which major flaws in the 2003 analysis appear to have been perpetuated, without serious reflection, in the 2010 Practice Analysis. It appeared that ASWB’s processes could have benefited greatly from fresh air — that is, from genuinely diverse perspectives on its efforts, provided by people from the outside world, with degrees in such fields as sociology, political science, law, and philosophy. Such people might have provided a salutary check on ASWB’s evidently inbred perceptions.

To the extent that ASWB remains wedded to its present approach going forward, there is a serious problem of verifiability. Given the organization’s refusal to share its testing data, stakeholders (including test-takers, licensing agencies, clients, and the public) have been forced to accept ASWB’s assurances that the concerns of diverse kinds of people have been taken into account throughout its test development processes. By this point, unfortunately, ASWB had little credibility. Continued trust without verification would seem inappropriate under the circumstances.

Pretest Questions

In the quotation provided at the start of this post, ASWB says this:

Every question [produced by the Item Writers is] reviewed by ASWB’s Examination Committee, a group of experienced social workers who approve all questions before they appear on the exam.

As indicated above, ASWB’s Item Writers submitted 1,634 test questions to the Exam Committee in 2012 and 1,635 in 2013. According to the sources cited there, the Exam Committee approved 1,127 (69%) and 1,234 (75%) of those submissions, respectively, for pretest. The quote provided at the start of this post explains what that means:

Every question starts out as a “pretest” question, included among the 170 questions on the exam but not counted toward the passing score. After the pretest questions [have] been answered often enough to provide statistically significant data, they are evaluated for difficulty as well as for signs of bias. Only after this statistical review is completed can a question become part of the bank of scored items on the exams.

In other words, ASWB says that each of its exams, of whatever kind, contains 20 pretest questions that are “included to measure their effectiveness as items on future examinations.” It is not clear when ASWB would begin testing the new pretest questions approved in 2013, so let’s take the average of the two years’ figures stated above: there appear to be about 1,180 (i.e., the average of 1,127 and 1,234) new pretest questions per year. At a rate of 20 pretest questions per exam, ASWB is apparently able to test all of a typical year’s new pretest questions in just 59 exam administrations (i.e., 1,180 / 20). So the first 59 people who take ASWB’s exams in a typical year would be capable, collectively, of giving ASWB one response to each of the year’s batch of experimental questions. Every time ASWB administered another 59 exams, there would be another sample response to each of its 1,180 pretest questions.

When you’re dealing with tens of thousands of test-takers, you can accumulate several hundred sample responses to each pretest question, even if you are testing a couple thousand pretest questions. ASWB says that, in 2013, it administered a total of 27,699 exams of all kinds (Clinical, Masters, etc.). So it could repeat that exercise about 469 times (i.e., 27,699 / 59). In other words, each pretested question would get answered by nearly 500 different test-takers. This would provide a good opportunity to see whether the average test-taker scored much better or worse on some pretest questions than others (in which case those questions might be too easy or difficult), and whether certain kinds of test-takers (e.g., young people, women, Hispanics) tended to score much higher or lower than average on a particular question (i.e., “bias”).
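For readers who want the arithmetic spelled out, the figures above reduce to three simple calculations. This is just a restatement of the numbers already cited, with one assumption made explicit: that each batch of 59 administrations carries distinct sets of 20 pretest items.

```python
# Back-of-the-envelope pretest arithmetic (figures from the ASWB sources cited above).
approved_2012 = 1127   # items approved for pretesting in 2012
approved_2013 = 1234   # items approved for pretesting in 2013
new_pretest_per_year = round((approved_2012 + approved_2013) / 2)  # about 1,180 per year

pretest_items_per_exam = 20      # unscored items embedded in each 170-question exam
exams_per_cycle = round(new_pretest_per_year / pretest_items_per_exam)  # 59 administrations

exams_administered_2013 = 27699  # all exam types combined, per ASWB
responses_per_item = exams_administered_2013 // exams_per_cycle  # roughly 469 per item

print(new_pretest_per_year, exams_per_cycle, responses_per_item)
```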

It may be helpful to point out something about testing. If you have enough people answering your questions, you can demonstrate differences among them with almost any kind of question. Some people won’t know what day it is; some won’t know where Austria is; some will not be sure how many pennies there are in a dollar. Ask enough questions and you’ll also be able to eliminate those who get tired or bored too quickly. Step it up a few notches, and you will be able to identify social work test-takers who have trouble understanding basic instructions, following a story, or doing a simple calculation. Do a lot of pretesting, and you can develop a sense of what level of difficulty you need. Too many people entering the profession? Make the clinical question tougher or the story longer. Over a period of years, as you settle into your role, you get a sense of how to tweak your exam to produce a desired outcome. So, for instance, when Masters exam pass rates jumped sharply higher (from 74% to 83%) between 2010 and 2011, the explanation is probably that somebody at ASWB decided to ask easier questions and/or to lower the number of correct answers needed to pass. Maybe there was pressure from schools of social work to let more social work graduates into the profession. Remarkably, ASWB’s Annual Report (2012, p. 10) ignored that jump, claiming that the rates had been stable when they clearly hadn’t. Perhaps even more remarkably, the Annual Report did admit that instability in the rates would cast doubt upon the claim that the test was reliable.

A supply of only 1,180 new pretest questions per year is not great, not when you’re dividing those questions among three different kinds of exams (i.e., primarily Bachelors, Masters, and Clinical; the Associates and Advanced Generalist exams were each taken by fewer than 200 people in the entire year of 2013). True, an average of about 400 new questions for each of those three kinds of exams (i.e., 1,180 / 3) is nearly three times the number of scored questions on an exam. But the test prep companies are at work, busily buying test-takers’ recollections to add to their databases of present or past ASWB exam questions. You need to keep ahead of them with new questions. Besides, you aren’t really getting 400 new questions per exam per year, because for reasons already noted (e.g., bias, difficulty), many of those questions will not pass the pretest.

Examination Committee

What does it mean — looking back, here, at the quotes provided at the start of the previous section — when ASWB says that its Exam Committee reviews the questions submitted by Item Writers and approves some for pretest? In terms of the numbers quoted above, it means that members of the Exam Committee tend to think that somewhere around 70-75% of submitted items have a good chance of making it through the statistical pretest. This is a judgment call, made by people with relevant experience: in the words of ASWB’s 2012 Annual Report (p. 10), “Exam Committee members are selected from the pool of Item Writers, which means they have been trained in good test writing practices and have gained experience as Item Writers themselves.”

An ASWB webpage, supplemented with an Examination Committee Report presented at the 2013 ASWB Annual Meeting (Hartley et al., 2013), outlined what Exam Committee members do. Apparently their primary duties are to review and edit new questions, or return them to the writer with rewrite suggestions; to decide whether to approve, delete, or revise and retest items that have been pretested; and to link test questions to KSAs. (That last task is left vague, in the ASWB materials I found: it is not clear what that linking involves, or what difference it makes.) Criteria for item approval, revision, or rejection include the concerns of difficulty level and bias cited above, along with whether the item is critical to practice, allows more than one correct answer, is applicable across all states and provinces, contains extraneous information, and so forth.

The ASWB Examination Program Yearbook (2013) provides a look at Exam Committee members. In that year, there were 18 members (17 in 2014). Exam Committee members were divided into three subgroups, one for each of the major exams. The 2013 Yearbook indicated that only five people on the Examination Committee had specific responsibility for the Masters exam. Those five consisted of four women, three of them with doctoral degrees, and one man. Except for one black female, all were white. The three who provided a number averaged over 20 years of experience in teaching and/or practice; pictures and writeups of the other two suggested that they, too, might be fairly experienced.

ASWB claimed that Exam Committee members, like Item Writers, were balanced for diversity, but the foregoing concerns about “diversity” among Item Writers seemed applicable here as well. If anything, a preoccupation with superficial variety would seem even less appropriate at this level. Higher education is, in many ways, a narrowing and homogenizing experience. A person who has progressed to an advanced level in social work education, especially, is at considerable risk of indulging parochial and unethical behaviors and beliefs like those illustrated in numerous posts in this blog and in my Indiana University blog. Being male or black at this level is not completely meaningless, but it is unlikely to indicate the presence of significantly heterodox perspectives. If anything, it is likely to serve a homogenizing function: complaints that the perspectives of men or blacks are ignored will likely be met with the reply: “Not at all! See, we have one of your type on our committee!” Frankly, I would much rather have a woman on the committee who understands male issues than a man who does not.

Here, as above, diversity seems to be a red herring, a distraction from the real issue of competence. It is reassuring that these people are educated and apparently experienced. Their job is not, ultimately, to represent male, black, or other factions in social work. It is to develop good questions for licensing exams. We are no longer in the 1960s: superficial differences that often would have implied deep divergence of outlook, back in the days of Women’s Liberation and Black Power, no longer do so. If you are one of only five Masters exam members on ASWB’s Exam Committee, and if you want to include diverse perspectives from our fragmented society in ASWB licensing exams, your sex and skin color are secondary; you need to be reading the literature, and understanding the perspectives, of groups with which you may never have direct personal experience.

For purposes of appraising the Exam Committee members, there is a second narrowing factor at work. ASWB makes clear that Exam Committee members are selected from the ranks of Item Writers. That does give them valuable direct experience. But it also reduces the odds that exam questions will be critiqued by anyone who is not in tune with the values of ASWB corporate culture. Every organization has its own way of seeing things; and sometimes, as in the case of ASWB, there are indications that the internal culture may be sick. A thorough investigation might well conclude that the Exam Committee would benefit from the addition of a gadfly, a position on the Committee (or perhaps within ASWB at large) held, perhaps on a rotating basis, by people who are decidedly not part of that internal culture. The existence of such a position could help to mitigate the kinds of problems with ASWB exam questions that I have identified in a separate post.

It appears that Exam Committee members are not full-time ASWB employees and do not generally work in ASWB’s offices. The Examination Program Yearbook (2013) states that the committee members meet only about four times per year. The Yearbook indicates that Exam Committee meetings are also attended by five Item Development Consultants who provide unspecified assistance to the Item Writers.

Hartley et al. (2013, p. 17) seem to indicate that, at their quarterly meetings, Exam Committee members also engage in Form Review. This activity entails review of draft exams — that is, of sets of 150 questions that may become the scored parts of real exams. This phase of the process is unclear. It appears that someone selects a set of 150 pretested questions for use in the draft exam. Again at this stage, Exam Committee members apparently have the option to decide that questions need to be revised. Apparently they exercise that option frequently. At a time when about 1,180 new questions were being approved for pretest each year (above), Hartley et al. indicate that 713 (about 60%) went through Problem Item Review. Of those, 471 were revised and thus had to be pretested again, and 235 were archived, apparently meaning that they were deemed unusable at present.

It appears that, at the Form Review stage, draft exams are often dismantled and reassembled. Hartley et al. appear to indicate, however, that the Exam Committee does manage to assemble a new final test form at each of its quarterly meetings. It is not clear to what extent these exams may incorporate questions from the previous year, whether previous years’ exams continue to be reused, or how many forms (e.g., versions of the Masters exam) are in use at any one time.

Pearson VUE:
The Gorilla in the Living Room

The foregoing sections more or less exhaust the ASWB materials that I have been able to find. That is, the materials that ASWB makes available on its website tend to be silent on the statistical part of the enterprise. ASWB does not make clear who does the statistical analysis, how that analysis interacts with the processes described above, or what data result from such analysis. For example, there is no statement that Statistician X or Statistics Consulting Company Y examined the results from pretesting 1,000 draft questions in 2012 and determined that 10% were biased or too difficult. Here, again, someone who has read my post discussing broad concealment of information at ASWB might suspect that, if ASWB doesn’t provide basic information on something, they may be trying to hide it.

Within the materials available on ASWB’s website, there are very limited references to Pearson VUE. Pearson is the company that ASWB’s Annual Report (2012, p. 10) describes as the “test vendor.” Across ASWB’s website generally, details about Pearson’s role are very scarce. For instance, in Hartley’s (2013, p. 14) presentation, there is only one reference to Pearson. It says, “Committee members review statistics and content of each item and make real-time changes onscreen in Pearson’s item bank.”

From that paucity of detail, one might infer that Pearson is a minor participant, a mere assistant in ASWB’s diligent efforts to develop questions, test them, combine them into exams, and put them into service. That inference becomes less plausible as more evidence emerges, however. Much to the contrary, it appears that the entire business is built around Pearson — that Pearson is integral to, and dominant in, key parts of ASWB’s testing service.

One bit of evidence appears in Hartley’s remark. Note that she says Exam Committee members do their work on each test question within Pearson’s item bank. It seems Pearson, not ASWB, is providing the online workspace in which Exam Committee members revise, test, monitor, aggregate, and archive test questions. As noted above, ASWB’s exam-writing personnel are part-timers in the project of developing the tests upon which this multimillion-dollar business depends; and when they do work, the infrastructure they use is Pearson’s.

Another bit of evidence arises from a visit to the website of Pearson VUE. According to that website, “Pearson VUE is a part of Pearson plc, a $9 billion corporation that is the largest commercial testing company and education publisher in the world.” Not exactly something you would try to hide from people, is it? And yet, oddly, ASWB’s IRS Form 990 (2012) seems to contain no reference to Pearson. In fact, it fails to list any independent contractors at all. That’s not how it used to be: ASWB’s Form 990 for 2010 admitted that it had paid $3.1 million to ACT Inc. in its capacity as ASWB’s previous “test administrator.” Why would ASWB not acknowledge Pearson VUE on its more recent Form 990? (For that matter, why are other pieces of information, such as the names of key employees, no longer provided on the Form 990?)

Much of the vagueness in ASWB’s descriptions of the foregoing steps in its process — especially in its descriptions of the Exam Committee — seems to result from deliberate attempts to avoid mentioning Pearson. Otherwise, it seems there would be explanations of how questions are submitted to Pearson for pretesting, what kinds of information Pearson hands back, and so forth. We would see an explanation of the judgment calls that Exam Committee members make when Pearson tells them that a certain pretest question has exhibited a certain degree of bias. But no, ASWB provides no information of that nature.

It is not as though ASWB was highly forthcoming in other regards discussed above. But compared to its silence about Pearson VUE, ASWB is an absolute fount of information about its Practice Analysis, its Content Outlines, its Item Writers, its Examination Committee, and so forth. The impression is that, to the extent that it volunteers information at all, ASWB tries to play up its part in the exam assembly process, and to minimize Pearson’s contribution.

Perhaps the logic is that, if influential people knew how much of ASWB’s work was farmed out to Pearson — if such people were to pause and reflect on the prospect that ASWB’s licensing exams are slapped together in quarterly conferences and then forwarded to Pearson for the real analysis — there would be a question of why it is, exactly, that ASWB deserves such high salaries and profits in the name of the public service of licensing.

ASWB’s 2012 Annual Report and Form 990 disclose unexplained Exam Costs of about $3.7 million. Judging from the similar but somewhat smaller amounts listed for ACT Inc. on the 2010 Form 990, it seems that this $3.7 million must consist substantially of payments to Pearson. This amount is about 13 times as much as ASWB spent on the Exam Committee. And that makes sense. Pearson is providing the exam centers, with their computers, personnel, and security; the statistical analysis; and God knows what else. In essence, if you are ASWB’s executive director, you pay $4 million for Pearson VUE to take your raw material, give you a system for revising and assembling it, and administer it to tens of thousands of people each year — and you wind up with nearly $2.5 million in net corporate profit and salaries and benefits for top executives, along with piles of money for other pursuits. As ASWB acknowledged in its Manual for New Board Members (2010, p. 15), nearly 40% of its budget was for matters other than exam development and maintenance circa 2010. It appears that a detailed analysis could yield a percentage well above that for 2013.

ASWB’s Manual for New Board Members illustrates some of the statistical problems that Pearson, apparently, must handle. According to that manual (pp. 18-20), statisticians must track and retain data on each pretest question, and on each question that finally becomes a scored test item, regarding its difficulty, bias, and other potential deficits (above). Differential Item Functioning calculations are performed to determine whether questions are uniquely easy or difficult for particular subgroups (Blue Book, 2011, p. 49). The resulting data are used to calculate an item-by-item difficulty rating. Those ratings are used in “a modified Angoff method” to identify “cut scores” separating a passing grade from a failing one. Then related calculations are done for each form of the exam, based upon the difficulty of its 150 questions (as determined by Item Response Theory — see Blue Book, 2011, p. 45), as compared to the relative difficulty of other forms of the exam, so as to preserve the same level of difficulty across all forms of the exam. It appears that Pearson also provides nonstatistical services, such as monitoring the Internet to detect sharing of exam questions (ASWB Annual Report, 2012, p. 10).
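ASWB does not disclose how its “modified Angoff method” works, but the unmodified textbook version is simple enough to sketch. In the sketch below, the judges’ ratings are invented for illustration: each judge estimates the probability that a minimally competent candidate would answer each item correctly, and the raw-score cut point is the sum of the items’ mean ratings. This is the standard calculation, not ASWB’s undisclosed variant.

```python
# Minimal sketch of a textbook Angoff cut-score calculation (illustrative only;
# not ASWB's undisclosed "modified" procedure). ratings[j][i] is judge j's
# estimate of the probability that a minimally competent candidate answers
# item i correctly.
ratings = [
    [0.7, 0.5, 0.9, 0.6],   # judge 1
    [0.8, 0.4, 0.8, 0.7],   # judge 2
    [0.6, 0.6, 0.9, 0.5],   # judge 3
]

n_judges = len(ratings)
n_items = len(ratings[0])

# Mean rating per item, then sum across items = raw-score cut point.
item_means = [sum(judge[i] for judge in ratings) / n_judges for i in range(n_items)]
cut_score = sum(item_means)
print(round(cut_score, 2))  # number of these 4 items a borderline candidate should get right
```

On a real 150-item form, the same calculation would presumably yield a raw cut score in the neighborhood of the 93-106 range that ASWB reports.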

Incidentally, these statistical undertakings can be confusing for nonexperts. For example, because of scaling, one test-taker might get 105 out of 150 right, and fail, while another in the same jurisdiction, taking a more difficult form of the test, might get 95 right and pass — and their scaled scores might be reported as 68 and 78, respectively. ASWB’s Exam Development webpage says, “In general, test takers need to answer between 93 and 106 questions correctly to pass the test. The actual number varies depending on the difficulty of the exact version, or form, of the exam.”

Scaling also means that the same number of correct answers can be reported as different scaled scores. Suppose you and I take exactly the same form of the exam, and we both answer exactly the minimum number of questions correctly, as calculated by Pearson, to achieve a passing grade on that form (let’s say we both got 105 out of 150). If you are in State A, where the minimum passing score is 70, and I am in State B, where the minimum passing score is 75, then your score report will say that you got a 70, and mine will say that I got a 75.

It appears, in short, that ASWB’s people decide which questions they want to ask, within the exam form that they are currently developing, and then Pearson VUE conducts numerous complex statistical procedures on each exam question, and for each exam form as a whole; handles exam registrations; administers the exam; makes the necessary calculations; and reports each test-taker’s scaled score for purposes of his/her jurisdiction. It does seem that, but for the services of part-time question writers, ASWB could be eliminated from the equation, at a potential savings of 60% off the present per-person cost of its exams. An exam fee of $100 per test-taker would seem vastly more appropriate for entrants into the profession of social work.

Culmination of the Process:
A Sample Exam Question

In this post’s section on the Content Outline, there was an indication that ASWB’s exams require exam-takers to deal with an indefinitely large number of dimensions of knowledge. As a capstone to illustrate the outcomes produced by ASWB’s entire process, it may be interesting to look at an exam question that ASWB itself offers as a quintessential example of what exam-takers may expect to encounter.

A key thing to notice about this exam question is that it is very different from the kinds of questions that appear on licensing exams in other professions. Those professions’ licensing exam questions tend to start by making clear what the question is about. On the bar exam that would-be attorneys must pass, it may be a question about courtroom procedure (e.g., National Conference of Bar Examiners, 2014, p. 18); on the medical board exam, it may be about the lungs (e.g., United States Medical Licensing Examination, 2014). Those exams don’t start by asking the test-taker to decide which profession s/he should be practicing. Yet that appears to be the implicit question in many ASWB exam questions. To illustrate, consider the sample Clinical exam question currently provided on ASWB’s website:

A six-year-old child lives with a foster family. His father is in prison and his mother is in residential treatment for alcohol dependence. The child is small for his age, often has temper outbursts, and has difficulty completing schoolwork. The social worker notes that his speech is immature. What should the social worker do FIRST?

A) Work with the foster parents on a behavior modification plan
B) Suggest that the child’s teacher refer him for special education placement
C) Refer the child for assessment for fetal alcohol syndrome
D) Work with the child’s biological mother toward reunification

Answer A implies that the test-taker must be knowledgeable regarding behavior modification plans. A quick search suggests that such plans may be examined especially by scholars in psychology, education, and law. Answer B refers to special education placements. In this case, the search leads to research involving such matters as race, poverty, and learning disabilities. Answer C refers to fetal alcohol syndrome, which is preeminently a psychiatric issue. Answer D, reunification with the biological mother, appears to be studied in journal articles in psychology, marriage and family therapy, and public administration.

Among those four choices, ASWB’s selection of answer C seems to be mistaken, for reasons elaborated in another post. But even if it were a good answer, the typical graduate of an MSW program would not know why. The question requires the test-taker to possess transdisciplinary knowledge of whether experts in a half-dozen different fields believe their preferred interventions should come first — and of why they may be wrong, if they do have such a belief. For instance, psychiatric referral in this sort of case could mean a wait of two months, before the psychiatrist has an opening. Meanwhile, experts in those other fields may have demonstrated that such referral often leads to dependence upon unaffordable and stigmatizing psychiatric interventions, and that the problem can be more quickly and affordably reduced or managed with interventions involving appropriate behavior modification plans, special education placements, or steps toward reunification. But is that what such experts have found, and should the social worker care?

In other words, an answer to this exam question seems to require (a) a statement of the objective(s) that the social worker should be pursuing in this case, which this ASWB exam question seems to assume or ignore, and (b) knowledge of the relative merits of experts’ arguments regarding the several proposed interventions. Remember, this is just one exam question. There will be another assortment of expert topics in the next question, and in the 168 others comprising the licensing exam.

The situation could be different if social work education were not such a grab-bag — if, that is, this profession had its own version of the Kentucky Revised Statutes or Gray’s Anatomy: a text, or a readily identifiable set of texts, that structures and dissects the elements of the profession’s knowledge. It doesn’t. This is, in effect, why Flexner concluded, a hundred years ago, that social work is not a profession. Rather than begin with a specific area of practice, these ASWB questions begin with an open-ended look at the assortment of things that might be affecting people in real life. That’s great, as a matter of the possibilities for genuine social work; it just doesn’t translate into workable standardized testing.

*  *  *  *  *

That concludes this detailed investigation of ASWB’s processes resulting in licensing exam questions. Clearly, change is needed.

For further reading, you might view the webpages to which links appear in the foregoing discussion. Many of those links lead to other posts in which I have provided more detailed discussion of related topics.

