U.S. News Rankings of MSW Programs

Among the college and university ranking services mentioned in another post, U.S. News and World Report is especially well known.  Its rankings have been the subject of considerable debate among scholars.  In social work (SW) especially, the graduate student who wants to be happy and get the most out of his/her education is well advised to learn much more about a school than its rank, to pay limited attention to that rank in many cases, and yet to learn what s/he can from rankings and related sources.

How U.S. News Ranks Schools of Social Work

In some areas of graduate education, U.S. News takes a number of criteria into account.  For instance, its business school ranking methodology depends upon peer assessment (i.e., ratings of MBA programs provided by deans and MBA program directors) (25%), recruiter assessment (15%), graduates’ starting salaries and bonuses (14%), employment outcomes (21%), mean GMAT scores of entering students (16.25%), mean undergraduate GPAs (7.5%), and acceptance rate (1.25%).  (Quite a few MBA programs accept the GRE in lieu of the GMAT.)  U.S. News ranks schools of education by similar criteria.

The situation is stunningly different for the U.S. News rankings of master of social work (MSW) programs.  U.S. News treats social work as a Health specialty.  Since at least 2000, these rankings have been done only once every four years.  Hence, at this writing, the most recent ranking of schools of SW (SSWs) was done in 2008.  According to Robert Morse, director of data research at U.S. News, the 2008 ranking of SSWs proceeded as follows:

All the health rankings are based solely on the results of peer assessment surveys sent to deans, other administrators, and/or faculty at accredited degree programs or schools in each discipline.  All schools surveyed in a discipline were sent the same number of surveys.  Respondents rated the academic quality of programs on a 5-point scale:  outstanding (5), strong (4), good (3), adequate (2), or marginal (1).

To emphasize, U.S. News ranks MSW programs entirely on the basis of opinions provided by SW deans, administrators, and professors.  Switzer and Volkwein (2009, pp. 832-834) are not especially enthusiastic about this approach:

It is difficult, perhaps impossible, to know how accurately the perception of deans and admissions directors matches real quality in graduate education, as distinct from the large amount of marketing and promotional material that schools produce and distribute to their peers, not coincidentally around the same time the USNWR surveys are mailed. It is likely that schools could better utilize their limited resources by focusing their efforts on their students and faculty, rather than on those who will be rating them in a magazine.

Such an approach raises many questions.  For one thing, there are serious limits on how much a SSW administrator or faculty member can know about other SSWs.  Consider the numbers.  For the 2008 rankings, U.S. News explains that its surveys were sent out in fall 2007 to 177 master of social work (MSW) programs accredited by the Council on Social Work Education (CSWE).  (CSWE does not accredit, and U.S. News does not rank, SW PhD programs.)  Unfortunately, only 56% of the recipients responded to the survey.  With a 56% response rate, who can say whether the busiest, most knowledgeable, and most competent educators are represented at all, or whether the results are dominated by the opinions of those with the least ability or the most free time?

There is a question of how much data U.S. News could have had, to provide a basis for its rankings.  No doubt many respondents had opinions about the quality of the MSW program at Berkeley or Columbia.  But how many of them were well acquainted with the MSW programs at places like Wheelock College or Northwest Nazarene University?

The average SW dean, director, or professor has had extensive personal exposure to perhaps a half-dozen SSWs.  Yet in the U.S. News approach, as noted above, raters are not instructed to vote only on those programs that they know well.  Their vote on a place where they spent 20 years gets exactly the same weight as their opinion of a program they discovered last week; and those raters who have spent years at a place will be greatly outnumbered by those who have not.  They could all choose to vote on only those places that they do know well; but in that event the U.S. News ranking of many programs could be decided by a very small number of votes, many of which may come from biased sources, such as a program’s own faculty or its direct competitors.

In numerous cases, hardly anyone outside of an MSW program will have any substantial knowledge of its quality, because so many programs are so new.  Bardill and Hurn (1988) indicate that there were only 95 accredited MSW programs in 1988.  At an average rate of more than four new programs a year since then, 40 of the 177 programs rated in 2008 would have been less than ten years old when U.S. News ranked them.  For many of those programs, few if any clinical graduates would even have achieved licensure.  On what basis would voters have known whether such programs were excellent, mediocre, or terrible?

That problem may be even more significant in the 2012 U.S. News rankings.  As of February 2011, CSWE had accredited a total of 208 MSW programs.  If U.S. News polls people at all those programs, there will be 31 (18%) more schools voting (and being voted on) in 2012 than in 2008 — plus any additional programs that were accredited during the remainder of 2011.  Under such circumstances, it would be interesting to see what would happen if entries for nonexistent MSW programs at Duke University or at the University of Central Kentucky were included in the surveys.  I suspect a number of voters would have fairly positive opinions of the Duke program, but perhaps not of Central Kentucky’s.

The U.S. News rankings are only as reliable as the SSW faculty and administrators being surveyed.  Views of students, community leaders, educational scholars, state licensing authorities, departments of education, accreditors — indeed, even the views of those faculty members who do not receive copies of the U.S. News survey — are ignored.  It is not even clear that the faculty who vote on schools are the most knowledgeable on the matter, within their respective SSWs.

In short, it makes very good sense to treat U.S. News as just one among many sources of guidance, for purposes of choosing an MSW program, and not to place much faith at all in small differences among schools’ U.S. News rankings.  This advice seems especially applicable to programs that are not among, say, the top 10 or so.   As one moves away from the leading schools that supply disproportionate numbers of SW professors and publications, the profession’s depth of knowledge about its MSW programs shrinks dramatically.

To the extent that one must rely upon U.S. News at all, it may be appropriate to focus on differences in scores, rather than differences in absolute rank, among MSW programs.  For instance, one might divide schools according to half-point increments, on the scale from marginal to outstanding (above).  In that approach, starting at the bottom of the 177 SSWs that were ranked in 2008, eleven schools (i.e., those ranked 167-177) had average peer reviews below 2.0 (adequate), and are listed as “Rank Not Published.”  The next 72 MSW programs (ranked 95-166) had average ratings between 2.0 and 2.4, inclusive (i.e., closer to “adequate” than to “good”).  The next 42 programs (ranked 53-94) were rated between 2.5 and 2.9 (i.e., better than “adequate”).  The next 31 programs (ranked 22-52) were rated between 3.0 and 3.4 (i.e., closer to “good” than to “strong”).  The next 14 programs (ranked 8-21) were rated between 3.5 and 3.9 (i.e., better than “good”).  The next five programs (ranked 3-7) were rated between 4.0 and 4.4 (i.e., closer to “strong” than to “outstanding”).  Only two were rated at 4.5 or above (i.e., better than “strong”).
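That half-point grouping can be sketched in a few lines of code.  The following is purely illustrative (Python; the band counts are the 2008 figures quoted above, and the 1.5 lower bound for the unpublished bottom band is my assumption, since U.S. News does not disclose those scores):

```python
# Bucket MSW programs by half-point bands of their mean peer rating.
# Counts are the 2008 figures quoted in the text; U.S. News does not
# publish its data in this tabular form.
def band(score):
    """Label the half-point band containing a 1-5 peer-assessment score."""
    lo = int(score * 2) / 2            # floor to the nearest 0.5
    return f"{lo:.1f}-{lo + 0.4:.1f}"

bands_2008 = {                         # band label -> number of programs
    "1.5-1.9": 11,   # "Rank Not Published" (assumed lower bound of 1.5)
    "2.0-2.4": 72,
    "2.5-2.9": 42,
    "3.0-3.4": 31,
    "3.5-3.9": 14,
    "4.0-4.4": 5,
    "4.5-4.9": 2,
}

assert sum(bands_2008.values()) == 177   # every ranked program accounted for
assert band(2.7) == "2.5-2.9"            # a 2.7 program is "better than adequate"
assert band(4.6) == "4.5-4.9"            # only two programs reached this band in 2008
```

The point of the exercise is that the bands, unlike the ranks, are wide enough to absorb small year-to-year noise in the survey results.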

That sort of grouping could have some usefulness.  It might help to identify programs that have climbed or fallen significantly.  For instance, a school’s rise or fall of 25 places between 2008 and 2012 could be virtually meaningless, if the school nonetheless remained solidly anchored in the large pool of SSWs ranked 2.0 to 2.4; but a change of 25 places would be incredible, if it involved a change from “good” to “outstanding” or vice versa.  As one moves closer to the top, extreme changes may be increasingly likely to highlight unreliability in the measuring process, rather than real and profound change in a program within the space of just a few years.  There certainly may be the occasional SSW whose rank changes dramatically because it has become deeply troubled or has recovered from deep trouble.  But otherwise, it would be very unlikely that an SSW would suddenly find itself blessed with such profound improvements as to propel it sharply upwards among truly competitive peers, and the same goes for sudden large drops.

A Comparison of U.S. News Rankings:  SW vs. Physics

Some may assume that the top 10 graduate programs in social work would be academically comparable to the top 10 programs in another field — say, physics.  This would not be a safe assumption.  Scores on the Graduate Record Exam general test (GRE) may help to explain why not.

According to the Educational Testing Service (ETS, 2010), combined GRE math and verbal scores in recent years were 895, for test-takers indicating an intention to enter graduate programs in SW (ETS code 5001), as compared to 1281, for those intending to major in physics (ETS code 0808).  More specifically, the mean scores were 539/742 (verbal/math) in physics and 429/466 in SW.  SW is not rocket science, so we might focus on the verbal rather than the math scores.  The SW verbal average of 429 was around the 42nd percentile (ETS, 2011, p. 20), putting the average SW grad school applicant firmly in the bottom half (especially, one might surmise, after allowing for the fact that many SW programs did not even require the GRE).  In other words, nearly 60% of GRE takers had higher verbal scores than did the average SW GRE-taker.  By contrast, the physics verbal average of 539 was around the 72nd percentile.  (By the way, of the universities housing the top 10 graduate programs in physics according to U.S. News — Caltech, Harvard, MIT, Stanford, Princeton, and so forth — only a few have decided to form SSWs.  They do, however, have recognized programs in related fields, such as psychology and public affairs.)
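To make the percentile arithmetic explicit, here is a minimal sketch (Python, for illustration only; the percentile figures are those cited from ETS above, and the conversion is just the definitional one, not an ETS table lookup):

```python
# Mean GRE verbal percentiles cited in the text (ETS, 2011).
# A score at the Nth percentile means roughly N% of test-takers scored
# at or below it, so about (100 - N)% scored higher.
percentiles = {"social work": 42, "physics": 72}

def share_outscoring(field):
    """Approximate percentage of all GRE-takers scoring above this field's mean."""
    return 100 - percentiles[field]

assert share_outscoring("social work") == 58   # the "nearly 60%" figure above
assert share_outscoring("physics") == 28
```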

The U.S. News methodology for ranking physics programs is the same as that used for SW programs:  it relies solely on surveys of deans, directors, and other professors in the field.  U.S. News provides the average scores that it calculates for each such program.  (I’m not sure whether this information is available to those who haven’t paid $20 for premium online access.)  In physics, in 2008, all of the U.S. News top 10 graduate programs had scores above 4.3 on a five-point scale; in SW, only two did.  (As noted above, 5 = outstanding, 4 = strong, 3 = good, 2 = adequate, and 1 = marginal.)  Six of the top 10 physics programs were rated above 4.6, while none of the SW programs were.

That is interesting.  Suppose, for purposes of discussion, that a score of 4.5 or above can be construed as an indication that faculty tend to consider a school outstanding.  Suppose, in addition, that the U.S. News survey results do accurately represent the general tendency of faculty opinion within these disciplines.  On that basis, it would appear that physics professors overall can name nine physics programs that they consider outstanding, whereas SW professors can name only two SSWs that they consider outstanding.  It could appear that, for the most part, SW professors are not extremely impressed with even the most highly ranked programs in their field.

Of course, there could be many caveats on such an impression.  Maybe SW professors think more critically about what counts as good education.  Maybe they are simply more negative, inadvertently disparaging their field more than physics professors are inclined to do.  Maybe they downplay the book-learning part of higher education because they are more oriented toward the practice side of their field than physicists tend to be.  Maybe SW faculty are relatively less enthusiastic about SW programs because they are constantly reminded of their relatively low status within the university (because of, e.g., GRE scores and funding).  Such possibilities tend to emphasize that a ranking of graduate programs based solely upon faculty opinions can have limited value for students’ purposes.

Adjusting SSW Rankings Downward

As some SW scholars have been pointing out for years (e.g., Gambrill, 2001; Sarnoff, 1999), the theories and practices of SW have a history of relying upon authority, ideology, and wishful thinking.  This has had terrible consequences for the profession, its clients, and the public generally.  For example — to refer back to the comparison with physics — a profession that links itself with abysmal housing projects and a flawed war on poverty is likely to be at a disadvantage, in public appraisal and perhaps in its own self-assessment, when compared with a discipline that can build a bomb to end a world war, or put a man on the Moon.  All disciplines have their boondoggles and their quacks.  It would just be nice if SW education were more rigorously dedicated to making the best use of the profession’s assets and opportunities.

The point of those remarks is that SW faculty, the rankers of MSW programs, do not have a good historical record of picking winners.  Regrettably, when they name only two of their own programs as outstanding, they may actually be overstating the case.  To put it in U.S. News terms, a SSW rated as 4.0 (strong) by SW faculty might be rated as only 3.0 (good) by non-SW academics (e.g., sociologists, psychologists) who would be competent to conduct such a rating.  As another contrast, a SW graduate student who gets top grades in the SSW at Washington University (U.S. News score:  4.6) might find it very difficult to get comparable grades in the law programs at Chicago or Columbia (same U.S. News score).

It seems, then, that adjustments to U.S. News scores, to take account of factors like student qualifications and graduates’ employment, could cause the scores of SW programs (especially top programs) to drop significantly.  Many SW applicants might rethink their perceptions of SW education if U.S. News gave its top-ranked SSW (presently Washington University) a score of around 3.7 to 3.9 rather than 4.6.  It would not be completely unreasonable to do so — after all, a score in that lower range would still be comparable to the scores that lawyers and judges give to such significant law schools as those at UCLA and the University of Wisconsin.

In fact, even a score of 3.7 would probably be far too high, considering that U.S. News says the acceptance rates at those law schools are in the 16%-22% range, whereas Wash. U’s MSW program (following a pattern found in most other MSW programs) apparently accepted about 86% of its applicants in the 1990-2004 period.  (U.S. News has ranked Wash. U. at or near the top for a number of years.)  To explain that essentially open admissions policy, Stoesz, Karger and Carrilio (2010, pp. 67-68) suggest that the glut of MSW programs has forced a rather desperate search for warm bodies to fill seats.

Looking at GRE Scores

It may be informative to consider one final area of quantitative comparison.  The mean GRE scores of students entering the SW PhD (not MSW) program at Washington University in 2005 were 617/601, for a combined score of 1218.  (For some reason, top SSWs seem to withhold such data.)  ETS (2004, pp. 18-20) indicates that 1218 was below the combined average of all GRE-takers (i.e., master’s and doctoral level, accepted or rejected) who intended to go into some fields (e.g., physics, math, philosophy).  The quantitative score of 601, in particular, was far below the means for all GRE-takers intending to enter the physical sciences (699) and engineering (721), and only moderately above the averages in agriculture (593), business (591), and religion (578).

Of course, SW does not purport to be rocket science, so it may make more sense to focus on the verbal side.  In comparison with that average WashU entering PhD student’s verbal GRE score of 617, ETS has also indicated that substantial numbers of recent GRE-takers in related fields have verbal scores of 600 or above:  about 25% of all GRE-takers who intend to apply to graduate programs in economics, 27% in political science, 18% in sociology, and 11% in public administration.  That’s a lot of people.  In political science alone, these numbers mean that nearly 6,000 GRE-takers had verbal scores that would have put them around, or above, the average for the WashU SW PhD program.

Overall, only about 5% of SW GRE-takers had verbal scores of 600 or above.  At that rate, SW may be less like a social science and more like vocational training.  For instance, the percentages of all GRE-takers having verbal scores of 600+ were about 8% in dental sciences (e.g., dental technicians), 6% in nursing, 5% in pre-medicine, 5% in elementary education, and 4% in home economics.

Again, the GRE is only one component of an admissions decision.  The WashU document cited above does not disclose the undergraduate GPAs of its entering PhD students, and other factors also go into an admissions decision.  Nonetheless, one might be concerned at these indications that SW’s highest-ranked PhD program does not appear to attract many intellectually elite students.  On that point, U.S. News regrettably fails to provide (and may not be able to obtain) GRE scores for many kinds of PhD programs, including SW programs; but what it does provide is illuminating.  The field of education seems to be especially forthcoming in this regard.  U.S. News indicates that, in 2008, combined GRE scores for entering PhD students in 11 of the top 16 graduate schools in education exceeded that WashU combined GRE score of 1218.  The combined GRE at education’s highest-ranked graduate school (Vanderbilt) was 1386, followed by Harvard at 1321.

The field of education is not generally considered to be among the most challenging areas of graduate study; and yet this discussion raises the question of whether the top-ranked SW graduate school is even on a par with most of the top 16 graduate schools in education.  Obviously, a great deal more research would be required to answer that question with confidence.  But it does seem that a quick and dirty attempt to compare across academic disciplines would require the U.S. News scores for top SW graduate schools to fall around, and possibly well below, the range of 3.5 to 3.7.  In other words, a highly qualified potential applicant, viewing the scores that U.S. News gives to, say, Harvard in economics (5.0), CalTech or MIT in physics (4.9), or Stanford in psychology (4.7), might employ educated guesswork to score the top SW PhD programs at somewhere in the range of 3.0 to 3.5.

These remarks have been relatively narrow and technical.  Hopefully they have provided some food for thought on interpretation of the U.S. News ranking of SW graduate programs.  Now it may be appropriate to return to a broader perspective on the rankings.

The Quality of the U.S. News Voting

One need not be a good scholar to be a good teacher.  But it may be difficult to teach well without good scholars working, somewhere, to provide high-quality materials that can be used for educational purposes.  SW academia has not been doing especially well in its scholarly pursuits.  According to Stoesz and Karger (2009, pp. 105-106), many SW deans and professors (including many of those who rate SSWs, teach doctoral students, and sit on the boards of the CSWE and of leading SW journals) have very poor academic credentials:

If the [CSWE] board members were up for promotion and tenure at a university requiring six SSCI articles, 80% of the CSWE board would have been terminated. . . . [A]lmost 70% of all social work deans/directors would not have achieved tenure (some would not have even passed a third-year review) at the associate level, much less a promotion to tenured full professor. . . . 61.8% of JSWE consulting editors [i.e., the people who evaluate articles submitted for publication to the Journal of Social Work Education, a leading SW journal] would not likely have been promoted or tenured based on their scholarship.

It’s not that SW theory and education are a complete joke.  Much of the material taught in SW programs comes from solid research conducted by researchers in other fields (e.g., psychology, psychiatry, public policy).  It can be legitimate to have a profession, like social work, that occupies a middle ground among other professions.  But anyone hoping for a genuine education that teaches good knowledge and valuable skills cannot be reassured to observe the conceptual hash that SW makes in its treatment of important concepts and research.  In the words of Holosko (2010, p. 666),

Social work authors have a rather peculiar knack for differentially defining important terms, which they routinely use in education and training.  This conceptual quirk may indeed sometimes produce useful transformational concepts (Holosko, IN PRESS; Ramsay, 2003), but more often than not, it has a rather deleterious and mitigating effect on our understanding of rather straightforward concepts. This frequently results in what Wakefield (2003) referred to as social work’s professional penchant for ‘‘running around circuitously in conceptual cul-de-sacs’’ (p. 297). . . . Often, in social work R&E education and training, this issue, in the words of renowned chef Emeril Legasse, ‘‘gets taken up another notch’’—as there becomes a conceptual mushing or blending together of two differentially defined terms, often used erroneously and interchangeably.

As suggested by these (and many other) critical remarks about SW research and education, the problem with the U.S. News approach to MSW program rankings — that is, the complete reliance upon votes cast by SW faculty and administrators — is that an apparently substantial number of those votes reflect the views of people who are not necessarily qualified to render an opinion on what makes a good SW education.  The views of such people should not be completely negated — after all, there’s more to life, and to social work practice, than what one can learn in journal articles — but it really doesn’t make much sense to rely on a ranking that completely ignores the kinds of things that do play a role in other U.S. News rankings (e.g., students’ GPAs and other credentials; graduates’ employment outcomes).

The Problem of Breadth

And yet, in fairness to U.S. News and to those who rely on its SSW rankings, there is a larger problem that not even the inclusion of those additional kinds of data (e.g., GPAs) can fully resolve.  If you want to become a lawyer, you have to go to law school.  If you want to study physics, you pretty much have to go to a physics department.  Even in the very broad area of business, there’s a limited number of places where you can go — mostly involved with business or public administration — to get appropriate training and employment.  By contrast, few if any other fields attempt to cover the breadth of subjects that can emerge in the study of SW.

In part, that’s a reflection of what Holosko (above) was saying:  SW mushes and mangles concepts in ways that can be uninformed and counterproductive.  In part, though, it’s a legitimate and probably unavoidable outcome in a field that is supposedly oriented toward the spectrum of needs and possibilities for disadvantaged people.  It can be very appropriate to try to know something about law, something about counseling, something about public health — and so on — when trying to help people for whom you may be the last and/or only hope, who aren’t going to be able to hire lawyers and lobbyists to protect their interests.

I say SW is “supposedly” oriented toward the needs of the disadvantaged.  SW tends to fall back on that rationale when doing so is convenient.  But as many (e.g., Specht & Courtney, 1995; Reid & Edwards, 2006, pp. 480-481) have pointed out, over its history, SW has often been embarrassingly out of touch with what disadvantaged people actually need.  In the 2000s — just as in the 1920s, shortly before the Great Depression — SW was preoccupied with the relatively good-paying and personally pleasant work of providing clinical mental health services to middle-class people.  That is, in these recent years, when economic injustice was rising, SW was almost completely clueless about subjects like unemployment.  At present, if you want to study unemployment, you’d probably better go into something like economics or sociology.  Similarly, as Healy (2008, p. 735) observes, SW “is not widely regarded as a leader within the larger global human rights movement.”

The problem of breadth is that almost anything a person might study in SW can be studied — in most cases, studied better — somewhere else in the university.  That’s a problem for three kinds of people.

First, as indicated in the Occupational Outlook Handbook, the MSW student who hopes to become a Licensed Clinical Social Worker (LCSW) (or whatever it’s called in one’s particular state) as a relatively fast and easy route to an eventual independent mental health practice will face significant competition from programs (e.g., occupational therapy, psychiatric nursing, psychology) whose graduates tend to have received better and more focused clinical training, and whose salaries reflect that differential.  What started out looking like a quick and dirty solution may deteriorate into an employment situation where you’re at the bottom of the barrel, in terms of your skills and credentials, and therefore have little alternative but to spend long hours at low pay in poorly managed, poorly funded, and ineffective programs and workplaces.  This is certainly not the case for everyone.  But it is an unpleasant surprise for some.

Second, the person who doesn’t yet know precisely what s/he wants to do, or where s/he excels, will probably find that SW opens up possibilities in multiple directions.  In that sense, SW is less like law or physics, and more like business or education.  To some extent, a relatively scatterbrained concept of graduate education is built into the requirements that the CSWE imposes upon MSW programs.  But without the rigor that would come from consistent exposure to people who know what they’re talking about, or who have a good grasp of what they would need to accomplish in order to make a significant difference in the real world, this rationale for entering a SW program can waste a lot of time and money, and can produce a graduate who has a rather skimpy and skewed grasp of relevant concepts and information.  In other words, a SW education can aggravate rather than alleviate a tendency to drift.

Third, the person who goes into an MSW program with a relatively focused interest in helping disadvantaged people may find both of the foregoing characteristics frustrating — may be unhappy, that is, with programs that have a clinical rather than social emphasis, and yet produce graduates who have spent too much time meeting random CSWE requirements and not enough time focusing on what they need to be competent clinicians.  A person of this type may benefit from further investigation of programs that are more directly targeted on the kind of help that s/he would propose to provide.  For some people, that may mean looking into some of the kinds of practice-oriented programs mentioned above (e.g., law, education, administration, public health, counseling) or learning the technical skills (in e.g., civil engineering, dental technology) needed to provide practical assistance.  For others, it may mean becoming familiar with the areas of difference and overlap among policy-oriented fields of study (e.g., political science, public policy, public affairs, sociology, economics).

The point of these remarks is that U.S. News is miles away from being able to rank MSW programs effectively.  Even if it did go beyond its exclusive reliance upon faculty ratings to include data on student inputs and outcomes, it would still not be comprehensively addressing the enormous variety of things that people can be hoping to find in a SW program.  It is not even clear what a single ranking of MSW programs would mean, given that a field of such tremendous breadth and vagueness inevitably incorporates numerous inconsistencies and contradictions in what it tries to do and how it tries to do it.

Recap and Suggestions

This post has criticized the murky and weak nature of the questions and responses involved in the U.S. News ranking of MSW programs.  The discussion has suggested that combining several measures (e.g., student GPAs, GRE scores, graduation rates, employment outcomes) could help to insulate the U.S. News rankings from problems inherent in the use of faculty surveys within the world of SW.  It has also appeared, however, that regardless of the method used to rank SW programs, none of them may stack up terribly well against programs in other fields.  This possibility derives from larger problems with SW academia.  Altogether, these various kinds of concerns still do not negate the general idea of one or more rankings within SW, nor do they suggest that there are no significant differences between bottom- and top-ranked SW programs.  But they do present some reasons for taking a very questioning and nuanced approach to the project of choosing among SW graduate programs, with limited faith in the accuracy or usefulness of the U.S. News rankings.
