Research on Social Work Practice Does Not Benefit from Blurry Theory: A Response to Tomi Gomory.

 The idea that casework needs a superordinate theory or theories is no
 longer tenable. The knowledge base of future practice will more likely
 consist of a variety of empirically demonstrated propositions from
 different perspectives, tied together by at least one major thread, i.e.,
 their utilization leads to success in helping clients.

 Joel Fischer


IT IS CERTAINLY AN HONOR for one's modest efforts to receive such concentrated attention from a scholar so gifted with a keen intellect and erudition as Professor Gomory. I was sorely tempted to form a simple one-sentence response to his critique, thanking him for his analysis and urging the reader to give his arguments the attention they deserve. But I have known and respected the journal's editor, Professor Eileen Gambrill, for too long to disregard her gracious invitation for a more detailed reply. It may help the reader for me to provide some background as to how my article and Gomory's have come to your attention. In 1998 I was invited to attend the annual Ohio State University College of Social Work National Symposium on Doctoral Research, serving as a commentator for several of the dissertations presented there. The dean of the College of Social Work, Dr. Tony Tripodi, and the director of the PhD Program, Dr. Denise Bronson, invited me to give the keynote address for the following year's symposium. I was greatly honored by this opportunity. In 1998 the keynote speaker was Dr. Enola Proctor, in 1997 Dr. Eileen Gambrill, and to be invited to follow in the footsteps of such illustrious predecessors was in some ways the nicest professional recognition that I have ever received.

I worked on my address, titled "The Role of Theory in Research on Social Work Practice," during the following 12 months and presented it at Ohio State on April 12, 1999. I made the manuscript available to selected individuals (e.g., social work doctoral program directors, members of various social work Internet Listservs), and in due course it appeared in the conference proceedings. I have presented it at two international conferences and one national conference and, based on the feedback from these dissemination efforts, revised it. I submitted it to the Journal of Social Work Education (JSWE), under the prior editorship of Dr. Paula Allen-Meares. It was critically reviewed and turned down, with a request that I revise and resubmit it. I responded to the reviewers' lengthy suggestions and submitted a revision which was eventually accepted for publication. While I was patiently awaiting its publication, I attended an invitational conference on the development of social work practice guidelines held in May 2000 at the George Warren Brown School of Social Work at Washington University. Professor Gambrill was among the speakers, as was Professor Gomory, to whom I was introduced. Professor Gambrill was Gomory's major professor when he completed his PhD in social welfare at Berkeley. We exchanged pleasantries and spent some hours together one evening--Professor Gomory, Professor Gambrill, Professor Stuart Kirk, and I. It was a very enjoyable night--an hour browsing in a wonderful used book store, then a few beverages and conversation in a tavern.

Two months ago I received a request from Professor Gambrill, in her role as the new editor of JSWE, asking for permission to delay the scheduled publication of my blindly peer-reviewed article while Professor Gomory developed his response to it, something she would invite him to do. I would in turn be offered the opportunity to respond to his paper, and then he would react to my response. This was to be part of a new and ongoing initiative Professor Gambrill was undertaking as the Editor of the journal, intended to promote the critical analysis of published papers. I agreed, with some curiosity, not quite knowing what to expect. In due course I received Gomory's paper, and I set about crafting my reply, which brings us to the present. On to some specifics!

Antitheory? Moi?

Yes, I am guilty of a prior essay appearing in this journal wherein I argued that social work education should not include theoretical content in the BSW and MSW curriculum (Thyer, 1994a). I stand by that paper and encourage interested readers to review its meticulously sound and irrefutable reasoning. But I will not use valuable space to reiterate that paper here. If you e-mail me, I will send you a reprint. But, contrary to Professor Gomory's first misrepresentation of my views, that paper did not extend the argument to educating doctoral students. I certainly support teaching them theory, particularly empirically supported theories. You see, the BSW and MSW are professional practice degrees, presumably based on scientific findings, but not themselves academic degrees as the PhD is at most universities. You can practice without recourse to theory (many social workers appear to do so every day!), but theory can be quite useful in designing research studies. I myself have written a considerable number of theoretical papers (e.g., Thyer, 1987, 1988, 1992, 1993, 1994a, 1994b, 1997, 1999a; Thyer & Myers, 1997a, 1997b), which should belie any implication that I am somehow antitheoretical. My position in the paper Professor Gomory reacted to is relatively simple: not all useful research is based on some theory, and we in the academy should recognize this and stop pretending to be engaged in theory-testing research when we are really not. Just as finding one black swan disproves the contention that all swans are white, finding some examples of useful types of social work research that do not rely on formal theory disproves the contention that all research must be theoretically based.

My Selective Definition of Theory? Or the Profession's Definition?

Gomory delicately suggested that I defined "theory" in a deliberately selective manner so as to buttress my position and that I misunderstand what theory is. Perhaps he is right. I certainly had difficulty understanding what Gomory means by theory, and I suspect that many readers do also. I cited widely used authorities in our field: the definition found in The Social Work Dictionary (Barker, 1995), published by the National Association of Social Workers, and the definitions found in several social work research texts, including the prestigious Rubin and Babbie (1997), among others. I did not cull the literature, blithely discarding points of view at variance with my own. Rather, the definition I address is precisely what most social workers really mean when they say "theory." Turn to textbooks on social work theory and you find the kind of grand, encompassing accounts I spoke about in my paper. Some examples are Roberts and Nee (1970), Greene (1994), Brandell (1997), and Payne (1997), each containing chapter-length treatments of grand approaches such as systems, psychodynamic, cognitive, behavioral, ecological, humanist, existential, radical, psychosocial, functional, problem-solving, client-centered, symbolic interactionist, role, constructivist, and Marxist theories. Do you, gentle reader, know what Gomory's definition of theory is? I certainly didn't after my first reading. I e-mailed him asking, "... would you be able to give me your sense of the term via email, as this may be an important factor in clarifying where you disagree with my position." His response was "I guess the quote of Popper's that `we must regard all laws and or theories as hypothetical or conjectural; that is as guesses' (Popper, 1979, p. 9) about what is, is as good as any." I am sorry, but this ambiguous definition is not as good as any, particularly when compared with the widely understood and accepted definitions I provided in my article. Payne (1997) summarizes this situation very nicely:
 There is also disagreement about what a social work "theory" is. Again, two
 positions exist, one broadly positivist, the other leaning towards
 postmodernism .... The positivist view is a strict application of
 scientific method. This argues that a "theory" is a general statement about
 the real world whose essential truth can be supported by evidence obtained
 through scientific method ... the process of theory development in
 positivist views comes from showing that an approach can work in particular
 cases, moving on to show that it does work in a series of cases, and then
 showing how it works.... In postmodernist views, the meaning of "theory" is
 looser. It is a generalisation which takes on three different
 possibilities: 1. Models describe what happens during practice in a general
 way, applying to a wide range of situations, in a structured form, so that
 they extract certain principles and patterns of activity which give the
 practice consistency. 2. Approaches to or perspectives on a complex human
 activity express values or views of the world which allow participants to
 order their minds sufficiently to be able to manage themselves while
 participating.... 3. Explanatory "theory" accounts for why an action
 results in particular consequences and the circumstances in which it does
 so. This is the positivist meaning of "theory." (pp. 34-35)


See also Turner (1979):
 We frequently apply the word `theory' to describe such things as: basic
 tenets of practice; systematic formulation of ideas; approaches to theory;
 schools of thought or systems of thought; accumulated practice wisdom; post
 factum explanations of basic values; rather than the rigorous definition of
 theory usually used. (p. 4)


So basically Professor Gomory (the postmodernist?) and I (the rigorous positivist) differ as to the meaning of theory. That seems to be, as near as I can figure out, the basis of his argument. You know my view, and it is very likely similar to your own understanding of the term. For Gomory, theory seems to imply all those things I specifically excluded from my use of the term--the underlying philosophy of science, models of practice, perspectives--and some others, including hypotheses, conjectures, and guesses. I submit that such an elastic use of the word by Gomory renders the entire concept devoid of scientific meaning and, oddly enough, places Gomory in a particularly difficult position given his advocacy of falsificationism as a preferred method of inquiry. If by theory you mean your underlying philosophy of science, model of practice, or perspectives, then how can you possibly falsify these via empirical testing? It seems as if Gomory is advocating for a definition of theory that cannot possibly be falsified. I concede that my own preferences for, say, "realism" as an epistemological assumption are axiomatic, not supported by irrefutable proofs. Axioms by definition are accepted givens and need no proof. If I accepted Gomory's view that the guesses one brings to a study, or conjectures, or hypotheses, or an underlying philosophy of science, or even the statistical assumptions of the t test all count as theory, then I would by definition have to agree with him that all research involves theory in some way. But I was employing my definition of theory, not Gomory's. He thinks my definition was parsed, selectively culled, and overly restrictive. I think it is a fair representation of the profession's understanding of the term. Judge for yourself. I can gladly live with that.

Who Said "Social Work Methods Are Atheoretical"?

Professor Gomory says that I endorse the statement that social work methods are atheoretical. Sorry, not true. I have said that some are atheoretical (given the sense of the term I used and documented in my paper). Here is another example: Take Habitat for Humanity (HFH), that wonderful program that pairs poor people up with community volunteers and helps them build their own home. I am quite sure that if we queried President Jimmy Carter or high officials within HFH about the behavioral science theory underlying this social welfare intervention, we would be met with bemused grins. These would develop into guffaws if we asked them about the HFH psychosocial theory regarding the causes of poverty or the theoretical mechanisms by which home ownership benefits people. They would most likely inform us that HFH is based upon certain religious beliefs (mostly) and perhaps common sense, practice wisdom, prior experience, and some other sources of knowledge. Behavioral science theory would be conspicuous by its absence.

Now picture the social work PhD student who conceives of undertaking an evaluation of the HFH program. Perhaps she has plenty of resources and time to conduct a randomized controlled trial. In the portion of the state where she lives, HFH can only provide homes to half the families otherwise qualified for the program, so to be fair the local HFH selection committee tosses a coin and randomly assigns equivalently needy families to receive a home or not. The PhD student assesses the economic well-being, quality of life, health, and life satisfaction of all families qualified by HFH at baseline and again five years later, during which time half moved into their own home (one they helped build, about four years earlier) and half did not. Assume two groups of about 50 families each. I contend that this randomized controlled trial would be a marvelous exercise in program evaluation of the HFH intervention, and I would certainly encourage any PhD student to undertake such a study. One could test the hypothesis that the families who received a home had significantly higher economic well-being than they did five years earlier, as well as the hypothesis that the families who received a home were better off than the families who did not receive an HFH home. This analysis could be repeated for other outcome measures (life satisfaction, quality of life, etc.).
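For readers who would like to see the core analysis of this hypothetical trial made concrete, the following is a minimal sketch in Python. Every number in it is simulated purely for illustration (no actual HFH evaluation data exist or are implied); the sketch simply runs the two hypothesis tests just described, a paired-samples t test for change within the home-receiving group and an independent-samples t test comparing the two groups at follow-up.

 # Illustrative only: simulated data for the hypothetical HFH evaluation described above.
 import numpy as np
 from scipy import stats

 rng = np.random.default_rng(42)
 n_per_group = 50  # two groups of about 50 families each

 # Hypothetical scores on an economic well-being index (higher = better).
 # The "home" group is simulated with a modest five-year gain, the comparison group with none.
 home_baseline = rng.normal(loc=40, scale=10, size=n_per_group)
 home_followup = home_baseline + rng.normal(loc=8, scale=6, size=n_per_group)
 no_home_baseline = rng.normal(loc=40, scale=10, size=n_per_group)
 no_home_followup = no_home_baseline + rng.normal(loc=0, scale=6, size=n_per_group)

 # Hypothesis 1: families who received a home are better off than they were five years earlier.
 t_within, p_within = stats.ttest_rel(home_followup, home_baseline)

 # Hypothesis 2: at follow-up, families who received a home are better off than those who did not.
 t_between, p_between = stats.ttest_ind(home_followup, no_home_followup)

 print(f"Within-group change (home group): t = {t_within:.2f}, p = {p_within:.4f}")
 print(f"Between-group difference at follow-up: t = {t_between:.2f}, p = {p_between:.4f}")
 # The same analysis could be repeated for quality of life, health, and life satisfaction.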

Given that the intervention is not explicitly based on some theory of poverty (e.g., a Marxist analysis), and does not enjoy a genuine theory of how it actually may work, I think it would be a dreadful mistake to make the student pretend that she is testing some formal theory. I can envision a faculty member enamored of psychodynamics asking the student to review the psychoanalytic significance of the "home" (symbolic of "mother"?), or perhaps to introduce elements of attachment theory or of transitional objects. Perhaps some elements of "self-efficacy" theory could be used to explain how home ownership enhances life satisfaction, and undoubtedly there are economic theories that could come into play. But why bother? The question being asked dealt with the possible effectiveness of the HFH program in helping poor people, not the validity or invalidity of any theory! Thus we have illustrated the crucial difference between the purposes of applied and basic research. The validity of the particular theory in question will not be proven based on the results of the program evaluation. There will always be too many exculpatory factors to permit solid conclusions regarding the validity of some underlying theory, but it is frequently the case that well-designed experimental studies can demonstrate the effectiveness and efficacy of an intervention. Why not simply let the poor student evaluate the outcomes without being forced to retrofit some spurious theory onto the project? Lord knows that HFH and similar (most!) social welfare programs would benefit from this type of systematic investigation. It would be exceedingly useful to learn that the results were positive. It might help HFH solicit additional funds and volunteers if it could show long-term positive benefits among its clients. And it would be useful to know if the results were negative--perhaps by isolating those families most likely not to do well, the program would be in a better position to select future recipients. Please note that this is not a perfect hypothetical study. It did not involve a random selection of clients from the universe of folks needing home ownership. The sample size of 100 can be faulted because it was not 1,000, and of course a five-year follow-up is not as stringent a test as a ten-year follow-up!

Falsificationism--A Recipe for Disaster for Research on Social Work Practice!

The above section should illustrate the idea that social workers need to move ahead with the selection of services in the absence of perfect evidence. We need to know what psychosocial interventions seem to offer the most promising benefits to clients. Cognitive-behavioral therapy (CBT) is a well-supported, evidence-based psychosocial treatment that provides clinically significant help to clients meeting the DSM criteria for obsessive-compulsive disorder (OCD). Dozens of well-controlled clinical trials and dozens of single-subject studies bear this out, many designed and conducted by social workers. Most of these studies have involved Caucasian clients; a few have included African Americans. But both groups seem to respond well, as do both males and females. Suppose a social worker has a new client from Mongolia with OCD. Being an evidence-based practitioner, she consults the empirical literature and finds to her dismay that there are no randomized controlled studies of CBT for OCD with obsessive Mongolians! What is she to do? Should she attempt to refer the client to a therapist trained in CBT, or perhaps acquire such skills herself? Or should this extremely large literature be ignored because CBT for OCD has not been evaluated with Mongolians? The answer is obvious. We need to seek out the best evidence and not wait for perfect evidence. A psychosocial intervention supported by several well-designed randomized controlled trials (RCTs) has stronger support than an intervention supported by only one. One supported by a single RCT would generally be preferred over one buttressed with a couple of quasi-experiments. One with a single quasi-experimental study to back it up is preferred over one with a couple of pre-experimental ones, and so forth. Practitioners in applied fields like social work need to make decisions about what to do now, and this by necessity involves relying on less than perfect evidence.

Falsificationism may well be the strongest approach to scientific inquiry regarding the validity of theories. I commend the lucid essay by Platt (1964) for an engaging account of this method. But you may have figured out from Gomory's essay that falsificationism can only tell you confidently what does not work (or what is not true), but not what does work. Consider how the evidence is portrayed. Should you use CBT with OCD? "Well," the stern falsificationist says, "it has stood up to a number of strict tests, and it has not failed them yet." "But does it work?" you enquire. "Can't say. But it has not failed yet." "Should I use it with my Mongolian client?" "Can't say, it's never been tested with Mongolians." "But what should I do with my Mongolian client?" "Can't say."

Contrast this with the approach Professor Gomory calls justificationist (which I think corresponds to conventional scientific reasoning). Should you use CBT with OCD? "Well," the naive justificationist says, "it has been subjected to a relatively large number of RCTs with generally positive results. It is widely cited as the psychosocial treatment of choice in handbooks and journal articles reviewing the scientific evidence regarding the effectiveness and efficacy of interventions (e.g., Cohen & Steketee, 1998; Franklin & Foa, 1998), and it has been endorsed by the OCD Foundation as a treatment of choice, based upon their independent review of the available evidence." "Is there any approach with stronger evidence to support its use, in lieu of CBT?" you ask. "No," the justificationist answers. "But it has not been tested with obsessive Mongolians," you correctly note. "True, unfortunately. Eventually researchers may get around to that. But it has been shown to be generally useful with the other client groups it has been used with, and since we have no particular reason to believe that OCD in Mongolians differs appreciably from OCD in other groups, CBT would seem to be the best place to start. You recognize, of course, that the designation of CBT as an evidence-based psychosocial treatment is a provisional one, always susceptible to revision as new data come in." "Why yes, of course," you reply, "I learned that from some of the many elementary readings I had in my research classes while an MSW student."

Turner makes this distinction nicely: "The clinician's principal interest is in the utility of theory: what can it tell me about this situation that will permit me to act effectively? It is, therefore, not knowledge for its own sake, but knowledge for use" (Turner, 1979, p. 3, italics added). Research findings viewed through the lens of the falsificationist cannot fulfill this function. It is a recipe for paralysis. It is also inherently self-contradictory, as are all radical skeptical arguments. Gomory is convinced that theories cannot be proved to be true. He asserts this as a truth. But where is the proof of this positive falsificationist assertion? If he is right, proof is impossible. If he has no proof, how can he claim to be right? Philosophical debates like this can be amusing, but they are hardly productive since most of the issues involved are incapable of scientific resolution, and it was in that limited sense that the logical positivists labeled them as "meaningless," at least as defined by their "verificationist principle."

The very term justificationist as used by Professor Gomory can be misleading, to the extent that it implies a commitment to finding positive outcomes--outcomes that can be used to support or "justify" one's a priori position. Good science consists of conducting a study and being committed to accepting (and publishing) the results, regardless of whether they corroborate or falsify one's personally favored theories or interventions.

On the Relative Value of Positive and Negative Knowledge

Professor Gomory asserts that positive knowledge, for example, knowledge that helps us choose social work interventions, is simply not possible. He writes, for instance, "A positive outcome of a test doesn't provide additional support" (p. 34), or "Nothing new is learned by positive results.... Positive results just confirm what you already believe and can have no further inductive benefit. Real help would have been negative results" (p. 43). Talk to a social work practitioner seeking guidance about how to provide effective care to clients, Professor Gomory, and ask them what they would find more helpful--a well-designed outcome study with positive results, indicating that clients receiving a particular psychosocial intervention were helped, or an equally well-designed outcome study with negative results, indicating that the intervention did not help clients. You will quickly discover the relative utility to the field of falsificationist versus so-called justificationist approaches to research.

The above is a social work recasting of a widely known critique of Popper. Here is one example (O'Hear, 1989):
 The Popperian will say that the Popperian method aims at the truth. The
 critic will reply that the method aims at the truth only in the sense of
 ruling out false theories, and that it does not give any positive reasons
 for believing in the theories which have survived severe tests. To which
 the Popperian will agree, adding that we may still act on such theories in
 the hope that they are true. And the critic will say that he had hoped for
 more than a hope in science. Once again we are back where we started. (p.
 41)


You see, again this is a distinction between basic science and applied studies. Although a positive outcome cannot be said to prove that the theory predicting that outcome is true, a positive outcome in the context of an intervention study can provide guidance about useful ways to practice. As Gambrill (1983) states, a characteristic of competency-based social work "is selection of assessment and intervention procedures based on the available evidence. We know more today than ever before about how to help people achieve their desired outcomes" (p. 3, italics added). Again, she notes that one of the considerations to be used in selecting intervention plans is whether "empirical literature supports selection in terms of efficiency, effectiveness, comfort, and feasibility" (p. 227). Clearly, positive knowledge, the selection of evidence-based treatments, is possible.

"One cannot generalize at all beyond the observed data" (p. 32) is a particularly telling example of Gomory's negative point of view. I spent no inconsiderable amount of time learning the rudiments of probability theory in order to have some legitimate grounds to make inferences from randomly chosen samples to larger groups. I also learned about the value of replicating findings in order to enhance our confidence in their generalizability. As a practical matter, social workers must generalize (cautiously) from the findings of flawed research studies in order to be guided in their practice. Professor Gambrill, for example, pioneered testing the use of a behaviorally based systematic case management program for the natural parents of children in foster care. The positive results from this admittedly flawed study lead her to explicitly recommend this program. It was justificationally concluded that "It is clear that the project was successful in accomplishing the objectives of moving a significant number of children out of foster home placement" (Stein & Gambrill, 1977, p. 509; see also Stein, Gambrill, & Wiltse, 1978). I note that this study did not involve a placebo control group, a failing it shares with much applied research involving real life clients in agency-based settings (e.g., Baker & Thyer, 2000; Vonk & Thyer, 1999).

Our field must rely on the best available evidence, not perfect evidence. No one has ever conducted a prospective randomized controlled clinical trial of the effects of smoking in relation to the development of lung cancer. Instead science relies on retrospective investigations, correlational studies, epidemiological data, surveys, public health records, and similarly flawed data to construct an inescapable conclusion: that yes, indeed, smoking causes lung cancer. Similarly imperfect evidence points to the causal linkages between alcohol use and domestic violence; the availability of handguns and violent crime; and the importance of education (e.g., high school graduation) to economic well-being in later life. The flawed nature of the evidence related to the latter issue has not prevented policymakers from taking steps to try to deter dropping out of high school. If I were to dare introduce the recent Report of the Surgeon General on Mental Health and cite Professor Satcher's frequent pronouncements of the availability of evidence-based treatments for a wide array of so-called mental disorders, I suspect that Professor Gomory would accuse me of relying on "authority" to buttress my claims. The reality is that this national report (edited by an MSW, by the way) is indeed based on a considerable amount of empirical research, and the conclusions contained therein are based on the overall quality of the data, not on the personal or professional status of Professor Satcher. Reference to the Report's findings is a shorthand version of citing a very large body of primary literature. This is not an appeal to authority, but to data. I say this recognizing the shortcomings of the report--it is not perfect, but it is an advance (like the latest edition of the DSM).

Applied disciplines like social work cannot wait for falsificationist research approaches to gradually weed out, in the fullness of time, all the false, imperfect, and ineffective interventions from our armamentarium. There is great value in less than perfect research because it provides the beginnings of an evidence-based foundation for practice in various areas. The American Psychiatric Association used the following types of evidence (ranging in credibility from strongest to weakest) in developing its practice guidelines: (a) a randomized clinical trial, prospectively designed with double-blind assessments and treatment and control groups; (b) a clinical trial, similarly prospective, but lacking blind assessments or control groups; (c) cohort or longitudinal studies; (d) case-control studies, retrospective studies of clients; (e) reviews with secondary data analysis, such as meta-analyses; (f) qualitative reviews of the literature lacking a quantitative synthesis of the data; and (g) other forms of evidence, such as textbooks, expert opinions, and case reports (Nathan & Gorman, 1998, p. 18). The American Psychological Association's Task Force on Psychological Interventions used the following standards to determine if a given treatment should be considered empirically well-established: Is it supported by at least two good group design studies conducted by different investigators? Was efficacy demonstrated by proving the experimental treatment superior to pill or psychological placebo, or to an already established treatment? Did the therapists use treatment manuals? Were the characteristics of the clients clearly described? (Nathan & Gorman, 1998, p. 15). Such standards, while admittedly imperfect and perhaps too lax, provide the field with a place from which to move forward. Critics of these standards are urged to suggest alternative scientific criteria for judging the evidentiary basis for providing a particular psychosocial intervention.

It is widely recognized that we must move forward within the constraints of imperfect evidence in making treatment decisions. Almost a quarter century ago Professor Gambrill authored a monumental handbook on behavior modification, containing over 1,000 pages packed with recommended interventions. She took a decidedly justificationist point of view: "If the future holds as rapid an output of empirical studies on helping people to change their behavior as has the recent past, we should be blessed with increasingly effective change methods" (Gambrill, 1977, p. 1080, italics added). And she goes on to laudably cite "the behavioral model of practice, which draws upon empirical literature to inform selection of assessment and intervention procedures" (p. 1082). Gambrill asked, "Why aren't programs that have shown themselves to be effective employed on a broader basis?" (p. 1089). I think that one culprit is the notion that one must demand perfect evidentiary standards before making a decision about how to proceed.

Professor Gomory finds fault with handbooks such as mine (Thyer & Wodarski, 1998) and Gambrill's for overstating the evidence with respect to the research support for various interventions. We are chastised for talking about anxiety, depression, drug abuse, autism, the mentally retarded, `psychotic', stress, and children's fears (all terms taken from Gambrill's (1977) Handbook), because this language may perpetuate the medicalization of these problems, and because the use of the term "disorder" may lead to the reification of these constructs. Yet these are indeed the very issues that clients bring to social workers. Given an awareness of the lack of evidence regarding the medical etiologies of many of these conditions, writers such as Gambrill and I are very careful not to reify such terms, noting that their use is merely convenient shorthand and carries no assumptions as to their `reality' as disease entities.

What Is Not Empirically Researchable?

Gomory oddly construes my statement in another publication (not the one published in this journal) that "empirical research has little to say about those aspects of the world that are wholly subjective, immaterial or supernatural" (Thyer & Wodarski, 1998, p. 4), claiming "that this seems to leave out such common topics of empirical research as subjective well-being, opinion polls about a multitude of topics, and cross-cultural studies on belief systems (including belief in magic) to just name a few that are not `empirically' researchable according to Thyer" (p. 35). What Gomory actually leaves out is the sentence immediately following the one he quoted from my book: "The study of these areas is the subject matter of other disciplines, such as philosophy or religion." Leaving out this following sentence is a curious omission, since it clearly limits my remarks to philosophical and religious issues, not to issues of science and social work. What is justice? What is the nature of God? Science can offer little assistance in resolving these matters. Of course we can do research on subjective matters like well-being, opinions, and cross-cultural beliefs. I have done so myself. What is essential for empirical research is that the phenomena in question be measured in some reliable and valid manner. When it comes to research on social work practice I cannot think of any topic that does not lend itself to empirical analysis using scientific methods. Even phenomena supposedly based on supernatural or metaphysical explanations, like magic or faith healing, are amenable to investigation, so long as they are hypothesized to have measurable effects in the material world. Popular journals like The Skeptical Inquirer and The Skeptic routinely report the results of scientific research on such topics. I have never contended that science could not study such things. Gomory's failure to include the sentence immediately following the one he quotes seems much more an example of contrived and selective citation than anything he suggested I was guilty of. That he knows he was misrepresenting my views is evident, since he cited another paper of mine containing the following related statement--and surely a scholar as careful as Professor Gomory would not miss such a crucial point:
 Many questions of great importance to our profession, such as the value
 base of social work, are simply outside the purview of scientific inquiry
 and other standards apply to the discussion of such topics apart from the
 conventional rules of scientific inference, standards such as those
 pertaining to religious beliefs, morality, and other philosophical
 convictions. Logical positivists are fully aware that many significant
 areas of our professional and personal lives should not be scrutinized
 through the lenses of science, but when the issues pertain to social work
 theory and the evaluation of our practice methods, then the role of
 controlled scientific investigations guided by the principles of logical
 positivism becomes a relevant factor. (Thyer, 1993, p. 6)


Scientism is the view that the investigational methods used in the natural sciences should be applied in all fields of inquiry and can answer all questions of importance to the profession. I know of no social worker who holds such a view, and I myself have explicitly rejected it, as in the above quote and, more recently, in another publication (Thyer, 2000, p. 10).

On Solipsism and Secondary Sources

Contrast Schopenhauer's view (1969, p. 3, as cited by Gomory) "The world is my representation ..." with a current definition of solipsism taken from Reber (1995, p. 737), "the philosophical position that holds that the only thing of which one can be certain is one's own personal experience and, by extension, that one's experiences represent all of reality--the outside world existing only as an object of one's consciousness." Linking Schopenhauer's views with the ancient doctrine of solipsism is not such a great stretch. Yes, I confess to relying on secondary sources here--Gomory and Reber. Moreover I admit to not having read Schopenhauer in the original German. I also confess to not understanding what this section of Gomory's paper had to do with my essay on the role of theory in research on social work practice.

On Atheoretical Interventions

I cited Shapiro's eye movement desensitization and reprocessing (EMDR) to illustrate the common finding that interventions can be devised that are based on some underlying theory, only to have that theory subsequently proven to be false. The efficacy of EMDR is irrelevant to the point (although it is likely a far weaker intervention than was originally claimed, e.g., Colosetti & Thyer, 2000). I would think that Gomory would be applauding my falsificationist approach, since I noted that EMDR theory has been subjected to severe tests and shown to be invalid. Why is he criticizing me for making that legitimate claim? See, for example, Montgomery's (1996) review:
 Shapiro's rationale for EMDR does not appear to hold water either
 philosophically or empirically, but any new treatment, if empirically
 supported, would not require a theoretical grounding to be of utility.
 However, to date, EMDR has neither a sound empirical nor theoretical base
 on which to stand. (p. 68, italics added)


Consider also MacCulloch and Feldman (1996, p. 571): "Thus far, EMDR is a purely empirical treatment, developed by Shapiro almost serendipitously, lacking a theoretical framework or explanation" (italics added). Precisely my point.

But if a treatment legitimately based on a particular theory is properly applied, and the positive outcomes predicted by the theory do not emerge, two conclusions may be drawn--(1) the treatment does not work and (2) the theory is invalid (false). It is possible to test the validity of a theory outside of the context of an outcome study of an intervention. For example, psychoanalytic theory asserts that the etiology of OCD has its origins in traumatic toilet training experiences. It also asserts that psychoanalysis as a treatment should help clients with OCD. These two predictions can be tested separately. One could do an outcome study on psychoanalysis for OCD. Or one could prospectively follow two groups of kids, those known to have experienced traumatic toilet training, and those known not to have had such experiences. If psychoanalytic theory were true, as these kids grew into adolescence OCD would emerge in the former group and not the latter. A failure to find this predicted result would be a step towards falsifying or disconfirming that particular theory--all outside of the context of an intervention study.
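To make the logic of this prospective test concrete, here is a minimal sketch in Python using invented counts (no real cohort data are implied). It compares how often OCD emerges in the two hypothetical groups with a simple chi-square test of independence; a failure to find the predicted difference would count against the theory.

 # Illustrative only: invented counts for the two hypothetical toilet-training cohorts.
 from scipy.stats import chi2_contingency

 # Rows: traumatic toilet training vs. none; columns: OCD by adolescence (yes, no).
 observed = [
     [9, 91],   # hypothetical: 9 of 100 children with traumatic training develop OCD
     [8, 92],   # hypothetical: 8 of 100 children without such histories develop OCD
 ]

 chi2, p_value, dof, expected = chi2_contingency(observed)
 print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")

 # A large p value (no detectable difference between the cohorts) is a failure to find
 # the predicted result--a step toward disconfirming the theory, entirely outside the
 # context of an intervention study.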

I was very pleased to find buried deep within the anatomy of Professor Gomory's paper his acknowledgement that the central theme of my paper is indeed correct! He agrees that it is silly for doctoral advisory committees to force students to include grand theory in their dissertation research if it is not a legitimate integration. But he makes another serious mistake in his subsequent discussion of the Baker and Thyer (2000) article mentioned in my paper, when he says that I have undermined "Professor Thyer's claim of no theoretical organization of the `intervention package'" (p. 42). If you the reader will review my discussion of this paper, you will find that nowhere did I state that this intervention was atheoretical. In fact, I specifically said that it was based on operant principles (which qualifies as a theory in my view). What I did point out in the discussion of the Baker and Thyer (2000) work was how the PhD student was forced by her advisory committee to include truly tangential and irrelevant theory, namely, the health belief model, in her dissertation. It would have been scientifically legitimate, in my view, if the committee had wanted more content on the theory behind case management and more elaboration on operant theory, since these were what the intervention was really based upon. But this is not what happened. She had to include entirely unrelated theoretical content, content which was not used in the development of the intervention and was not tested in her outcome study. This was the practice I railed against. I am pleased that Professor Gomory would be similarly upset to see students being coerced in this manner. But I did not use Baker and Thyer (2000) as an example of an atheoretical intervention.

Similarly, the reader can review what I said about the Vonk and Thyer (1999) study. I said that the practitioners used many diverse approaches to intervention. Because this "independent variable" (student counseling services) was such a mishmash of interventions (and of whatever theories those interventions were based upon), Vonk did not construe her quasi-experimental evaluation of the general outcomes of student counseling center services as a test of any particular theory. Gomory calls "eclectic short-term treatment" a theory of treatment. I do not, particularly when that short-term treatment is being provided by practitioners operating from various, different, and competing perspectives. Short-term treatment per se does not provide either an etiological or an interventive theory for college student problems. The only thing that defines short-term treatment is a focus on a small number of sessions provided within a specific time frame (Dziegielewski, Shields, & Thyer, 1998). Such pragmatic considerations do not constitute any kind of theory; short-term treatment may well be a model, but as I explained earlier, models are not explanations for the etiology of problems or of how interventions work (i.e., a theory). Some of Gomory's methodological criticisms of the Vonk and Thyer (1999) study may well be true, but then we never claimed to have conducted a perfect evaluation, only one that advances upon those previously published. This is not self-aggrandizement; it is a fact used to justify the publication of flawed studies (like most) and is characteristic of the progressive nature of scientific research. We clearly discussed the limitations of our design--lack of randomization to immediate versus delayed treatment, use of a nonprobability sample, the small proportion of clients seen, the small sample size, etc.--in the published article. Gomory's reiteration of the methodological flaws in our study, flaws we ourselves discussed, seems tangential to the possible role of theory in this project. If his point is to criticize quasi-experimental studies in general as incapable of providing useful findings, such a perspective fails to reflect the realities of field research and would exclude much valuable research from consideration by the profession (Cook & Campbell, 1979; Morgan, Gliner, & Harmon, 2000).

Some Black Swans?

Towards the end of his article Gomory contends that Thyer
 should be critiqued rigorously for his failure to fully and carefully
 engage with the essential scientific issues entailing philosophy and method
 as well as for suggesting that some members of the profession (i.e.,
 students and direct service workers) don't have to think too critically but
 should simply apply pragmatic knowledge leaving theory, if at all necessary
 to the academics. I argue that Professor Thyer, due to his justificationary
 approach to science has not been able to see that efforts at finding proof,
 support, and credibility for his atheoretical research are doomed to
 failure because no such proof is possible. He seems to be unaware of the
 fallibilistic alternative which I have presented (he never discusses it in
 any of his writings), although he argues his position by claiming to know
 philosophy of science. (p. 47)


Well, Gomory has certainly critiqued my work rigorously, if not always accurately. I may well have failed to engage fully and carefully with the philosophical and methodological issues that Gomory raises. Although I am not a professionally trained philosopher of science, I certainly have not ignored philosophical issues in my work and writings. In fact, one of my latest publications is an edited book on the philosophy of science (Thyer, 1999b). I have outlined a philosophical position with clarity in a number of articles and chapters, some of which Gomory himself cites! The philosophical positions I advocate can certainly be challenged--realism, materialism, determinism, positivism, empiricism, rationalism, operationism, parsimony, scientific skepticism--as can my rejection of metaphysics, nihilism, teleology, vitalism, mentalism, dualism, reification, circular reasoning, and scientism. Nor have I neglected methodological issues (e.g., Royse, Thyer, Padgett, & Logan, 2001), having long written about and practiced a range of research methods for the evaluation of social work practice, including single-system research designs, pre-experimental designs, quasi-experimental designs, and true experiments conducted in the nomothetic tradition. Gomory may claim that my positions are shallow (from the perspective of the professional philosopher) or simply wrong (they may well be), but they are defensible, and most certainly these issues have not been ignored by me, writing to the best of my abilities as a humble clinical social worker with evaluation interests.

I do not see how he can contend that I suggest that social work practitioners need not think critically about their work. Much of my career has been devoted to encouraging such critically reflective thought. And I have even worked at promoting theory development and testing. But one can think critically about issues other than theory--that is what evidence-based practice is all about. What Gomory calls "my" justificationary approach to science is not mine. I did not invent it. It has been and remains the mainstream approach to program evaluation. Far from being unaware of the fallibilistic alternative, as Gomory claims, I have indeed written about it in the very articles of mine that Gomory cites! Take, for example, the following (Thyer, 1993):
 The criterion of falsifiability has been recently summarized by Lett
 (1989): "It must be possible to conceive of evidence that would prove the
 claim or hypothesis false. It may sound paradoxical, but in order for any
 claim to be true, it must be falsifiable. The rule of falsifiability is a
 guarantee that if the claim is false, the evidence will prove it false; and
 if the claim is true, the evidence will not disprove it, in which case the
 claim can be tentatively accepted as true until such time as evidence is
 brought forth that does disprove it" (p. 154). Paradoxically, scientific
 methods of research are far better at demonstrating that a hypothesis (and
 ultimately the theory that the hypothesis is based upon) is incorrect than
 in proving that a particular theory is true.... At best in social science
 one may claim that the favorable results of a particular study (or clinical
 outcome) may be said to corroborate the theory the hypothesis was based
 upon. As the number of such corroborative studies grows, our confidence in
 the validity of a given theory may similarly increase. However, no matter
 how many such positive studies may accrue, the possibility always exists
 that tomorrow a more definitive (i.e. controlled) study based on another
 theory will be published which accounts for the findings of all the
 previous research, plus the results of the latest investigation. Logical
 positivists are thus modest in their claims, recognizing that all
 scientifically derived knowledge is tentative and susceptible to
 modification as further work is conducted. We can be relatively more
 certain about which theories are false, and only tentatively sure about
 which ones are true. Articles by Platt (1964) and Berkson (1989) provide
 excellent overviews of the role of falsifiability in this incremental
 approach to scientific progress. (pp. 14-16, italics in original)


Or how about Thyer (1997):
 The extent to which a theory is falsifiable has long been held to be an
 important characteristic of its potential usefulness. Indeed, Sir Karl
 Popper ... contended for quite some time that disproving theories is a more
 valuable (and feasible) contribution to science than is trying to confirm
 them as accurate (Popper, 1959, 1963) (p. 65, italics in original).... As
 accurate data accrue, it becomes increasingly evident that a given theory
 is largely consistent with the facts or is inconsistent with them. In the
 former, the theory should be retained pending further testing by gathering
 new data. In the latter the theory should be discarded. (p. 80)


Gomory says that I am unaware of the fallibilistic alternative and have never discussed it in any of my writings! He is incorrect in this statement. Absolutes (e.g., "never") are dangerous in science, Professor Gomory, since all it takes is one black swan to disprove the contention that all swans are white. But I will not make the mistake of discounting the balance of Gomory's analysis, his other published scholarship, or his personal capacities as a researcher because he makes a few misrepresentations or outright errors. I have learned that no one who ventures into print is immune from criticism, and some of it justly points out honest mistakes. This is how all of us learn, as I most certainly did from preparing this response. I found Gomory to be a charming and engaging colleague when I met him earlier this year, and I look forward to another round of beverages with him as we discuss these issues further.

Gomory's vision is of "`autonomous social workers' who can decide through rigorous and open debate and tests what are better and worse policies and interventions." I doubt that debate will help us decide. Proper tests might (I am immensely curious as to the types of tests that Gomory believes are capable of providing us with guidance). And when the empirical research summarizing the results of such tests is reviewed, we end up with the beginning steps towards evidence-based practice. Some psychosocial interventions have already been shown not to work. Others have been shown to be relatively effective. Social work education and practice should focus on the latter. This may, or may not, involve theoretically driven research. If theory is legitimately involved, great. If it is not, then doctoral students and other researchers should not be forced into pretending that it is. Both approaches are characteristic of science. Both have value. At present, theory-driven research exercises are much more highly valued within academic social work than are atheoretical studies evaluating outcomes. This should change. Professor Gambrill (1993) says it well:
 Social work practice requires making inferences about the causal
 relationship between or among variables.... Problems in causal analysis
 arise when social workers do not use theories when they would be helpful
 and when they do use theories in areas in which they are inappropriate. (p.
 235, italics added)


Some Challenges for Gomory

While I certainly do not wish to dictate the substance of Gomory's following rebuttal, I think that if he attended to some of the following questions, the merits of his position(s) would become much clearer to the reader.

1. Please provide a succinct definition of what you mean by "theory."

2. Please indicate the type of evidence (e.g., tests), if any, that you believe would justify labeling a psychosocial intervention as evidence-based.

3. Can you point to a single psychosocial intervention that you believe should be recommended to social workers as evidence-based with respect to a particular type of client problem/situation?

4. If you do not believe that any psychosocial treatment can be recommended at present as evidence-based, please indicate the alternative sources of knowledge that social workers should draw upon, apart from empirical outcome studies, in order to make necessary practice decisions now.

REFERENCES

Baker, L., & Thyer, B. A. (2000). Promoting parental compliance with home infant apnea monitor use. Behaviour Research and Therapy, 38, 285-296.

Barker, R. L. (1995). The social work dictionary (3rd ed.). Washington, DC: NASW Press.

Berkson, W. (1989). Testability in the social sciences. Philosophy of the Social Sciences, 19, 157-171.

Brandell, J. R. (Ed.) (1997). Theory and practice in clinical social work. New York: Free Press.

Cohen, I., & Steketee, G. (1998). Obsessive-compulsive disorder. In B. A. Thyer & J. S. Wodarski (Eds.), Handbook of empirical social work practice: Volume 1, mental disorders (pp. 343-363). New York: Wiley.

Colosetti, S. D., & Thyer, B. A. (2000). The relative effectiveness of EMDR versus relaxation training with battered women prisoners. Behavior Modification, 24, 719-739.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis for field settings. Chicago: Rand McNally.

Dziegielewski, S. F., Shields, J. P., & Thyer, B. A. (1998). Short-term treatment: Models, methods and research. In J. B. W. Williams & K. Ell (Eds.), Mental health research: Implications for practice (pp. 287-309). Washington, DC: NASW Press.

Fischer, J. (1972). A review of Theories of social casework. Social Work, 17, 105-108.

Franklin, M. E., & Foa, E. B. (1998). Cognitive-behavioral treatment for obsessive compulsive disorder. In P. Nathan & J. Gorman (Eds.), A guide to treatments that work (pp. 339-357). New York: Oxford University Press.

Gambrill, E. (1977). Behavior modification: Handbook of assessment, intervention, and evaluation. San Francisco: Jossey-Bass.

Gambrill, E. (1983). Casework: A competency approach. Englewood Cliffs, NJ: Prentice Hall.

Greene, R. R. (Ed.). (1994). Human behavior theory: A diversity framework. New York: Aldine de Gruyter.

Lett, J. (1989). A field guide to critical thinking. The Skeptical Inquirer, 14, 153-160.

MacCulloch, M. J., & Feldman, P. (1996). Eye movement desensitisation treatment utilises the positive visceral element of the investigatory reflex to inhibit the memories of post-traumatic stress disorder: A theoretical analysis. British Journal of Psychiatry, 169, 571-579.

Montgomery, R. W. (1996). A review of Eye movement desensitization and reprocessing: Basic principles, protocols and procedures. Journal of Behavior Therapy and Experimental Psychiatry, 27, 67-68.

Morgan, G. A., Gliner, J. A., & Harmon, R. J. (2000). Quasi-experimental designs. Journal of the American Academy of Child and Adolescent Psychiatry, 39, 794-796.

Nathan, P. E., & Gorman, J. M. (1998). Treatments that work--and what convinces us they do. In P. E. Nathan & J. M. Gorman (Eds.), A guide to treatments that work (pp. 3-25). New York: Oxford University Press.

O'Hear, A. (1989). An introduction to the philosophy of science. New York: Oxford University Press.

Payne, M. (1997). Modern social work theory (2nd ed.). Chicago: Lyceum.

Platt, J. R. (1964). Strong inference. Science, 146, 347-352.

Popper, K. R. (1959). The logic of scientific discovery. London: Hutchinson.

Popper, K. R. (1963). Conjectures and refutations. London: Routledge & Kegan Paul.

Reber, A. S. (1995). Dictionary of psychology (2nd ed.). New York: Penguin.

Roberts, R., & Nee, R. (Eds.) (1970). Theories of social casework. Chicago: University of Chicago Press.

Royse, D., Thyer, B. A., Padgett, D., & Logan, T. K. (2001). Program evaluation: An introduction (3rd ed.). Belmont, CA: Brooks/Cole.

Rubin, A., & Babbie, E. (1997). Research methods in social work (3rd ed.). Pacific Grove, CA: Brooks/Cole.

Stein, T. J., & Gambrill, E. D. (1977). Facilitating decision making in foster care: The Alameda project. Social Service Review, 51, 502-513.

Stein, T. J., Gambrill, E. D., & Wiltse, K. T. (1978). Children in foster care: Achieving continuity of care. New York: Praeger.

Thyer, B. A. (1987). Contingency analysis: Toward a unified theory for social work practice. Social Work, 32, 150-157.

Thyer, B. A. (1988). Radical behaviorism and clinical social work. In R. Dorfman (Ed.), Paradigms of clinical social work (pp. 123-148). New York: Guilford.

Thyer, B. A. (1992). A behavioral perspective on human development. In M. Bloom (Ed.), Changing lives: Studies in human development and professional helping (pp. 410-418). Columbia: University of South Carolina Press.

Thyer, B. A. (1993). Social work theory and practice research: The approach of logical positivism. Social Work and Social Sciences Review, 4(1), 5-26.

Thyer, B. A. (1994a). Are theories for practice necessary? Journal of Social Work Education, 30, 147-151.

Thyer, B. A. (1994b). Social learning theory: Empirical applications to culturally diverse practice. In R. R. Greene (Ed.), Human behavior theory (pp. 133-146). New York: Aldine de Gruyter.

Thyer, B. A. (1997). Is it possible to know when theories are obsolete? In M. Bloom & W. C. Klein (Eds.), Controversial issues in human behavior in the social environment (pp. 65-71, 79-80). Boston: Allyn & Bacon.

Thyer, B. A. (1999a). Clinical behavior analysis and clinical social work: A mutually reinforcing relationship. The Behavior Analyst, 22, 17-29.

Thyer, B. A. (Ed.) (1999b). The philosophical legacy of behaviorism. Dordrecht, The Netherlands: Kluwer.

Thyer, B. A. (2000). Introductory principles of social work research. In B. A. Thyer (Ed.), Handbook of social work research methods (pp. 1-24). Thousand Oaks, CA: Sage.

Thyer, B. A. (2001). What is the role of theory in research on social work practice? Journal of Social Work Education, 37, 9-25.

Thyer, B. A., & Myers, L. L. (1997a). Behavioral and cognitive theories for clinical social work. In J. Brandell (Ed.), Theory and practice in clinical social work (pp. 18-37). New York: Free Press.

Thyer, B. A., & Myers, L. L. (1997b). Social learning theory: An empirically-based approach to understanding human behavior in the social environment. Journal of Human Behavior in the Social Environment, 1, 33-52.

Thyer, B. A., & Wodarski, J. S. (Eds.) (1998). Handbook of empirical social work practice: Volume 1, mental disorders. New York: Wiley.

Turner, F. J. (1979). Theory in social work practice. In F. J. Turner (Ed.), Social work treatment: Interlocking theoretical approaches (2nd ed., pp. 1-12). New York: Free Press.

Vonk, E. M., & Thyer, B. A. (1999). Evaluating the effectiveness of short-term treatment at a university counseling center. Journal of Clinical Psychology, 55, 1095-1106.