

PART 4

Concluding thoughts

Chapter 12. Reflections on the project.

There is a call in some quarters for new approaches to inquiry. Rowland (1997), discussing action research, calls for " ‘joyfulness’, risk and even playfulness rather than a narrow concern for systematic method" (p.252). In educational research Thomas (1998) speaks of the "tyranny of method" (p.151) which traps researchers into "sclerosis" (p.152). Within my own profession, Burden (1997) bemoans the "heavy emphasis ... upon ... methodologies rooted within a positivist paradigm" (p.13). My desire for a new method came, however, from within, not from the literature, as with Anderson’s comments below on the triggers for changes in practice:

It is interesting that the feeling that I was participating in relationships that were uncomfortable for me has stimulated changes in my work, and that it was not theories or the reading of books or journals that made me change. (Anderson 1992 p.89)

I believe that I began this study with something of a "storybook" view of research, much as Mitroff (1983) describes a storybook view of science. The untidy realities of investigation I had experienced previously, solidly in a positivist tradition but full of mistakes and uncertainties, I suspect I saw as part of my "failings": I was just a bad researcher. Stenhouse’s view of research as "systematic self critical enquiry" which is made public (Stenhouse 1981 p.8) set me thinking along more flexible lines. However, I continued for a long time to have nagging doubts partly triggered by the term "systematic": my diaries provided a systematic core to the study, but surely doing "real" research was more like the image of methodical, objective men (sic) in white coats, producing incontrovertible knowledge (the image captured in, for example, Mitroff 1983).

A summary of what I have learned about this research, my vision of a messy method, is contained in chapter 11. The main argument of the thesis is that the method I uncovered is how I believe research works for me. I am confident of this now, although its nature changes as I write. In a very real sense, I am writing it into existence as I go along, as I learn to "sail skilfully on troubled waters" (Elliott 1996 quoted in Dadds 1998 p.49). It is a version of practitioner research, the hallmarks of which I extract below. These hallmarks are, I propose, associated with mess, values and communication.

A variation of the above argument is that I believe some other researchers may, in part, work the way I have described, some of the time, and that my description of this approach has resonance for them. In its final stages the inquiry has been about finding ways to explore this confirmation, which remains as yet undecided. A more general statement, that all people work this way at some time, I will not offer.

In concluding the project, however, I would like in this section to take a more reflective overview, to finish with celebration but also critique: to point up the shortcomings of the inquiry and, paradoxically, to offer these as part of its strength - an exercise in self-criticism. This continues the theme introduced in chapter 11 of self-criticism being one of the criteria for assessing research strength. I will return to some reflections on research later in the light of the current study but first, some reflections on practice.

(1) Reflections on practice.

The changes made (and those I anticipate), and the questioning attitude adopted have led to a more comfortable feel for my casework, not complacent but more defensible. I was also pleased to be able to help place reflection, and the idea of honesty about practice, on the agenda for the profession in a small way. While accepting all the potential drawbacks of a concept such as attention seeking, I am also happy that this is now more clearly set out and that it offers, at least potentially, an alternative understanding of some of the very pressing problems children and their families may face.

Given my understanding of the method which has developed over the last few years, I feel I could now turn to exploring my work with an approach I owned[1] and that I was confident with. It is galling, however, looking back over the project, to see how much of the effort was, in the end, directed at problems of inquiry rather than problems of practice. This is particularly so when I have highlighted meeting practice values as a cornerstone of research method. A multitude of practice-related questions remains to be addressed:

(A) My collected casework reflections have been of those families I became involved with. What about "the ones that got away"? Returning to my concerns about Mr. L. in appendix B, an ongoing issue for me will be decision making around involvement: how these decisions are made; whose interests they may serve; which routines of thought and practice appear to imprison me in my ways of deciding.

(B) Interviews of other EPs about their day to day work could have thrown light on my own approaches, perhaps re-assuring me in some areas, challenging me in others. The discussions in local interest groups which I mention, and which did not form part of the study, were not personally focused and detailed enough to achieve this in other than a very general way. The comments I sought on my tapes were useful, but very limited in scope. The tapes could well be examined, for example, for evidence of any number of "isms" and videotape would have been of great benefit in aiding the study of the to-and-fro of non-verbal communication in these interactions.

(C) Although the project was not a piece of action research, many changes to practice occurred during it. I judged the adequacy of these myself. A collaborative approach would have subjected these to more demanding scrutiny. My practice is the weaker for want of this.

(D) The excursion into reflection on practice, which became an issue to do with honesty, raised, but did not resolve, the problem of what kinds of knowledge we, as EPs, actually draw on. This seems to me an important area for the profession to address in its own right. It could also become a site of transformation, particularly for those with a fixed "scientific" view of our work (if such persons do in fact exist).

(E) The broader social and political aspects of work with children displaying "emotional and behavioural difficulties" are largely missing from this investigation. The structure for reflection I adopted does introduce such ideas; they are not, however, incorporated into the project proper. Todd (1998) takes me to task over this lack in reviewing my book on attention seeking. I hope to address such issues in later work.

(F) The concept of attention seeking which is central to the Eric Harvey work is a particularly loaded term. I am aware of this, but have chosen to promote its use on the pragmatic grounds that it seems to serve as a peg around which to hang ideas of change. My impression is, it works. Situations do seem to "improve". A different kind of study would be needed to assess its benefits (and that of the Eric Harvey method as a whole); its potentially negative, stigmatising influence; and its long term impact on the children and their families; quite apart from any questions of definition, incidence, causation and the selective perception in practitioners which it may promote.

I describe these omissions in research as galling. It is even more galling to discover one side effect of this research on practice. At the commencement of the project I was deeply worried that in some way inquiry would undermine my faith in my working style, make me more self conscious and less confident and thereby less effective. In the end, my fears were unfounded. Emphasis on guarding against this, however, possibly contributed to a situation where one insidious impact went unseen. Ironically, in view of my rejection of quantitative methods at the outset, this impact was revealed through some elementary quantification.

In September 1998, beginning my write up, I spent a short time casting my eye generally over cases for the past few years and those I knew about before the project proper began, in Autumn 1993. A clear pattern emerged. In my old patch I had been working at a pretty steady rate of twelve Eric Harvey cases per year over a number of years. I moved areas and in my new patch, in the second year of the project, this rate dropped dramatically. By the end, I was down to seven cases per year. The work I so valued, I was not doing. What had happened?

Part of the problem was possibly simply to do with starting off in a new set of schools, unused to my way of working and not passing on those referrals I could tackle. Educational Psychology is a peculiar job within the world of education, with an enormous potential range of involvement: children and young people 0-19 with any combination of learning and emotional difficulties and additional disabilities. Interventions can take place from the individual to the school to the LEA level, with many different methods available (from varieties of casework intervention to training and consultation). We tend to develop special interests. At referral stage there is quite a large element of discretion: as illustrated earlier, one colleague described her avoidance of counselling, another her focus on in-service training; and part of my current concern above is how such decisions are made, particularly in the stressed arena of school review meetings.

My recall is that I used to be "hungry" for EBD cases. When children came up at review meetings, for instance, I saw these as "my" cases, in "my" schools. I wanted to do the work, the Eric Harvey work, myself, not leave it to some outsider from social services or child psychiatry. There was an element of territorial protection. I perhaps took on cases I should really have referred elsewhere, those with multiple problems, beyond my ability to help, those like Mr. L. Part of my surprise at catching my demeaning view of Mr. L. was to do with my habit of usually deciding in the end to try, sometimes successfully, to intervene in such instances. As the research progressed, however, I began to find reasons not to take on cases. Whether it was simply the extra pressures of research overwhelming my sense of energy, or a challenging of practice habits resulting from research, or some other cause is, unfortunately, not recorded: the referral decisions were not my focus.

I am, I feel, more sensible now about referral, not so territorial, more willing to pass on to others (perhaps with a better rationale). I am more confident about the work generally - I feel I can better defend it. And, since noting the problem so starkly highlighted above, of declining involvement, I have been jolted into action. Since September 1998 I have been gradually building up my casework again. Working a three day week, with two days writing-up study leave, in the Autumn term I was up to six cases in four months.

(2) Reflections on practitioner research.

One question I would like to try to address in this final chapter is whether my understanding of "practitioner research", in the light of this study, differs in any significant way from simply "research", however defined. Does the addition of one extra term significantly affect the meaning? In chapter 11 I emphasised a "scholarly" component; for me, however, the distinguishing features of practitioner research have come to be associated with three issues: mess, values (in particular honesty) and communication.

A. Mess.

As much of the thesis concerns this area, I will restrict this section to a few brief rounding-up comments.

"Managers ... manage messes" (Ackoff 1979 p.99 quoted in Schön 1983 p.16). Lindblom (1959) outlines a process of policy makers "muddling through". Kupferberg (1996) claims that "[c]haos and lack of structure is the first surprise of project work" (p.237). Cook (1998) describes action researchers "bumbling, messing" (p.105). Spellman and Harper (1996) refer to the "[f]ailure, mistakes, regret and other subjugated stories in family therapy" (p.205). Pava (1986) explains how in planning there is an effective approach which displays:

A disjointed, undisciplined quality... Goals and procedures are left unclear. Leaders initiate the program with low support and little comprehension of its nature. Action precedes understanding ... [the approach eschews] initially detailed objectives, plans, projections, or evaluation. (p.631)

The individual analyses presented by many of these writers I will not address. My point is that some concept of mess may be necessary in exploring a widespread set of areas[2]. There may be mess in the problems which confront us; there may be mess in the way we tackle these. I would wish, however, to rescue the term mess from its negative baggage: to see messy approaches not as "sloppy", but as difficult, requiring a high level of skill; and to extract what structure I can from such approaches (see chapter 11). As Cook (1998) quotes one member of her focus group: "Mess is skilled - [a] very highly skilled process" (p.103).

Hart (1995), whose article I found very encouraging in the early stages of the project ("the universe provides - just what I was looking for", diary note 5.9.95), explores a very flexible way of researching. She describes techniques which "made no claim to use ‘methods’ other than the authors’ own eyes as experienced teachers" (p.213, emphasis in original). My realisation, however, in researching practice, then moving on to researching research, was of a need to un-learn old ways of seeing, and then to learn to see anew. The skills of practice, although helpful, did not carry me through.

Practitioner research, or at least, the only variety I can claim to have some knowledge of, seems to invite consideration of mess, both arising from the problems of practice and arising from the problems of research. (Other research stances may also benefit from some stronger appreciation of the importance of mess. I restrict my comments, however, to the field studied). It is from actively welcoming the uncertainty surrounding such mess, welcoming the ensuing confusion and the angst, that I believe some deeper understanding may arise. As Eliot (1944) demonstrates in his poem quoted in chapter 6, one way to knowledge may be through first embracing ignorance. However, while "[q]ualitative researchers only gain control of their projects by first allowing themselves to lose it" (Kleinman, Copp and Henderson 1992 p. 9 quoted in Kleinman and Copp 1993 p.3) this, at least in my case, required a great deal of emotional support.

B. Values.

Reed and Biott (1995) argue that "the things we value in research are inextricably connected to the things we value in practice" (p.194) and "the hallmarks of strong practitioner research [are in] research which embodies these values rather than the values of objectivity" (p.194). They point out that the researchers, whose accounts appear in the same volume, "were all motivated by practitioner values" (p.194) and they struggled with the evaluation of such work:

[they were] unable to identify appropriate ways of evaluating what they had done, or make decisions about what they should do, given that they had recognised that traditional research prescriptions did not fit their study (ibid p.194).

Reed and Biott take the position that "practitioner values should form part of the way in which practitioner research is evaluated" (p.194). In the current study, I considered a number of what I called ethical issues which I came to see as drawing on moral values. From the above discussion I feel Reed and Biott are using the term "values", in part [3], in a similar manner ("moral principles ... beliefs ... accepted standards", Collins dictionary). For me, a prime "principle" or "value", which only clarified over time, was that the needs of practice should have priority: as Dadds (1995) describes for one teacher "[t]he children’s welfare ... took priority over [the] search for truth" (p.47).

I had no permission to do other than my job (although I could probably have obtained this); indeed I came to realise I did not want to do anything other than my job, such as set up a controlled trial, an experimental comparison of the Eric Harvey method with some other. Anyway, what alternative method of working could I possibly adopt with the same level of commitment and expertise? Such a paradigm seemed irrelevant. There are alternative, single case experimental designs, but my perspective was shifting away from conventional notions. I wanted research to be the servant of practice. If that meant in some eyes doing "poor research" then I was comfortable with that from my value position. In any contest between the demands of research and the demands of practice, there was no contest. For me, practice had pride of place. As my confidence in messy methods grew, I became more convinced that I would not want to fall into the trap of allowing some idea of "good research" to undermine my commitment to the clients in any research into practice which I carried out. A few examples from other researchers will illustrate the dilemma (which, despite the position adopted above, I am sure will continue to tax me).

Calaam and Franchi (1987) describe how in their study of families of abused children they felt unable to give support to the mothers as that would have disrupted their investigative rationale. They describe "the commitment that the practitioner has to maintaining the best possible therapeutic relationship with each individual client" (p.183) but how the use of "standardised measurement procedures may seem to run counter to this" (ibid p.183). I wanted to circumvent the trap they faced:

Research traditionally requires restraint and restriction of behaviour on the behalf of the experimenter, in order to avoid bias or skew results. Hence, we were unable to play with the children ... for considerable periods of time when engaged in formal observation, and restricted our discussion with the mothers to the format of the structured interview, rather than dealing with the issues that the mothers themselves might have wanted to raise. (ibid p.183, emphasis added)

Pirie (1995) describes how in her research with children "I had to remind myself constantly that I must not stray outside the confines of the research design" (p.96) and "trying to comply with both clinical and research procedures ... was much more stressful than I had ever anticipated" (p.97). Stevenson (1995) explains the tensions she felt in her study of her family therapy sessions. She felt constrained by the need to provide her agency with "data in the language they could understand i.e. numbers" (p.102) but recognised that "there was a risk of losing the family by overburdening them with too many questionnaires. This was against my priority as a therapist" (p.105).

I am not trying to argue here that all "conventional" research is unethical or that I can claim the right simply to practice, because it seems to work, and not examine that practice. There may be any number of ways to combine study of practice with care for the client, such as in action research. It may even be possible to hold the view that it is on some scale unethical not to scrutinise one’s work (whether that be clinical practice in the field or, for academic colleagues for instance, research and teaching in the academy). My struggle to resolve the issue, leading to the account of messy method, is simply one position which others may wish to consider.

C. Honesty.

I separate this out from values generally as it seems to warrant special consideration. Honesty, for instance in describing the off-shoots and anxieties of research, and how I dealt with these, appears to me to be a stance I would not now want to avoid. It appears, in any case, a crucial element in my identity. Whether such honesty does indeed add to the quality of the study only time, and the reactions of others, will ultimately tell. What seems to me an additional important dimension in research, following the theme of honesty, is evidence of attempts to evaluate a piece of work, in particular to point out its remaining shortcomings (even on top of seeking the views of others constantly during the study and in addition to including accounts of the messiness of the processes). Standing back, near the end of the project, the following areas seem to present themselves with respect to the actual research (my practice-related concerns, and those questions raised by the discussion of making sense and identity, are recorded earlier):

* I propose that an honest account of "errors" enhances the believability of a report. This is a largely untested assumption.

* The study does not take account of feedback from the various publications, existing or in preparation, arising from it. I hoped, for instance, to generate some dialogue over my article "Notes from a Method". The time scale of the project has prevented this. My study leave is due to finish at the end of March 1999 and I cannot sensibly prolong the project beyond then. The study is weaker because of this lack.

* Being consistent with the interactive spirit of the messy method, an appropriate final stage would have been to submit the thesis as a whole to criticism by a series of practitioner colleagues, before fixing its final form and content. Again, I have imposed a time limit on the research which, much to my regret, has prevented more than very limited moves in this direction (so far one colleague has read the work, I refer to her comments later).

* Although not about science, science has been a constant presence in the inquiry. Borrowing from Lather (1993), its impact is "rhizome-like" for me: a mass of tangled ideas, constantly invading my efforts to clear-cut a portion of the field. Lather, rather ironically in the present context, actually uses the concept of the rhizome to describe her attempts at undermining scientific authority; in my case, science is another rhizome, constantly undermining the quest for my personal "authority".

It would, perhaps, take a complete investigation on its own to begin to tease out the interconnecting beliefs and practices surrounding such terms as "science", "objectivity" and "rationality" which, from my perspective, present such an impenetrable thicket in coming to know my own processes. Gallie (1957), for example, claims "the term science, in its widest sense, is a slippery one, used for the most part for dubious honorific purposes" (p.120) and Bridgman, a Nobel physicist, explains "There is no scientific method as such ... The most vital feature of the scientist’s procedure has been merely to do his (sic) utmost with his mind, no holds barred" (quoted in Denzin 1978 p.316, emphasis in original).

I return, albeit briefly and speculatively, to this topic in a postscript.

D. Communication.

I argue throughout the thesis that material unread, whatever its quality, is of little use (see in particular chapter 11). Practitioners appear to call only rarely on academic works in everyday practice. Referring to survey results and a number of articles, Polkinghorne reports "therapists learn about therapy overwhelmingly from practical experience with clients and only rarely consult therapy research" (Cohen et al 1986 p.198 cited in Polkinghorne 1992 p.154).

While a pithy scientific text may have a beauty of its own, to the initiate, I believe that practitioners, particularly those who wish to take steps towards being practitioner researchers, may need something different. Collaborative work with the academy, for instance in action research, may be one route to achieving engagement with the world of theory. An accessible text, such as I have tried to create, may be another. This is not to denigrate practitioners, but simply to face the reality of certain of the barriers which may have to be overcome in linking theory and practice; and that to overcome these, some element of conventional notions of academic rigour (at least from one image I carry round in my head) may need to be sacrificed. That my account is readable and yet sufficiently rigorous is, for me, however, simply a matter of faith at this point; as is my belief that I am communicating (see Dadds 1994 for example on the need for alternative texts).

It may be true that in one sense we can communicate without knowing the effects of our actions. It may also be true that in potentially interactive settings "we cannot not communicate" (Watzlawick et al 1967 p.48 emphasis in original) i.e. we may pass on messages whether we want to or not. There may be many routes to exploring the idea of communication. Within psychology, for instance, we might encounter a definition such as "[c]ommunication occurs when one animal (the signaller) signals to another (the receiver) in such a way that it changes the behaviour of the receiver in some way" (Cardwell et al 1996 p. 514), thus emphasising the need for an observable behaviour change resulting from the message. This focus on observable behaviour, however, seems an unnecessarily restrictive approach to such a potentially complex concept.

We may wish, for example, to examine conditions for effective communication such as sincerity, comprehensibility, truth and appropriateness (McNiff 1993, drawing on Habermas). Or consider levels of communication, from the baby’s cry upwards (Hoffman 1995). Or, remaining with family therapy authors, we might argue that families "follow poetic rules, not those of deductive logic" (Allman 1982 p.52), and suggest, as he does, that therapists should have "a twinkle in their eyes" (p.52) and attend to aesthetic aspects of communication in "search[ing] for non-linear metaphorical connections" (p.47). While interesting, however, such topics do not get to the heart of my present concerns.

Griffiths (1995a) considers a number of issues, for instance the effect of "large-scale inequalities" (p.67) and the need for the second person to know that the first was trying to communicate. One other aspect she raises, however, brings me close to the position I am developing. Citing Martin (1985) she illustrates how a good conversation "is neither a fight nor a contest ... it is an interchange of ideas" (Martin 1985 p.9 cited in Griffiths 1995a p. 171) and it is this element of interchange which interests me. Schön has a similar notion in his "reflective conversation with a unique and uncertain situation" (Schön 1983 p.130) where the practitioner must listen to the "back talk" from attempts to change events.

In chapter 11, two aspects of communication came to the fore: clarity and having effect on others, although, as I argue, true evidence that ideas "resonate" (or not) requires a time scale longer than I can maintain in this project. For me, however, the final, vital element of communication I wish to explore is embodied in the phrase "two-way communication". I need to know the impact of the point I hoped to make: "[t]he meaning of communication is the response it elicits" (Barker 1986 p. 59). In this sense, communication is not a one-way process. I require feedback.

Watson and Hill (1989), in their Dictionary of Communication and Media Studies, spend some time examining the complexity of the definition of communication. They identify a number of "fundamental factors", including "a receiver who ... decodes ... and interprets the message, returning a signal in some way that the message has or has not been understood" (p.40). They also, however, go on to emphasise the confusing range of normal usage of the word, from "‘transmit’, a one-way process [to] ‘share’ ... a common or mutual process" (ibid p.40). Adair (1997), to underline the non-passive nature of the recipients of messages, offers the alternative term "commoning" (perhaps highlighting the similar Latin roots of communication and common, Concise Oxford Dictionary).

Overviewing my practice and this research I can begin to see a connecting theme emerging. In parent interviews there is constant monitoring of the verbal and non-verbal feedback from the clients. Are they relaxed? Do they laugh at the stories? Are they one jump ahead? Are they safe enough to be honest? All these clues help to reinforce the appropriateness of the intervention and build confidence in the approach. The follow-up session’s back talk is the final, rigorous test: did it work? Counselling, which I discussed earlier, I would argue could hardly proceed without some kind of response from the client. In researching research, however, this evidence of effective communication, this feedback, for me remains tantalisingly out of reach, despite my efforts to uncover it, through discussion, papers and publication.

Lincoln and Guba (1985) argue, as mentioned in chapter 9, that good writing is essential to case study reporting, and that such work during its creation should be subjected "repeatedly to searching criticism" (ibid p.364). But in seeking "relatability" or "resonance" at the close of the inquiry, I need additional evidence that others have indeed embraced what I have to offer. Perhaps the best I can do is to strive for this confirmation, and show evidence of this striving.

As a final note, my colleague Rene E.[4], who looked at a draft of the thesis, gave what to me was powerful affirmation that I had indeed "communicated something to her" effectively. This response reinforced my belief in the importance of emphasising the two-way nature of the process. After reading, she said she felt "enhanced".

Postscript

xxxxxxxx (Eliot 1944 p. 48)

There is a real world out there; I have to believe this. As Sokal points out in the controversy around his spoof post-modernist article (see Sokal 1996a):

... anyone who believes that the laws of physics are mere social conventions is invited to try transgressing those conventions from the windows of my flat. (I live on the 21st floor). (Sokal 1996b)

Gouldner (1982), as mentioned earlier, discusses the "background assumptions" which we import into research. A little while before beginning the research, I guess that one of my assumptions would have been that the methods of the natural sciences were also applicable to the study of the social world; even, had I considered such an enterprise, to the study of the intrapersonal world. But along with these assumptions I also carried an un-formed, unsophisticated view of the nature of science. Constantly erupting into the manifest inquiry was a latent[5] inquiry, which I resisted, best summed up in Chalmers’ (1982) words "What is this thing called science?"

I began to discover in my reading what to me had the mystique of underground texts, although they possibly form part of a common core of science studies courses[6]. In their different ways, they all seemed to point to one idea: that the "scientific method" might be a much more complex creature than I had imagined.

In my mind I had in the past been a poor scientist; throughout the current project I gradually re-evaluated myself. My view of science transformed in parallel, particularly to do with the creative aspects of the messy inquiry and my attempts to subject these to external critique: the notion of "testing while protecting", described here and by implication in Chalmers (1982). Whether or not my new image of science as messy is true in general I cannot say; it did, however, help me to a resolution of these seemingly contradictory positions: seeing myself as a poor scientist yet seeing myself as a creative investigator, ready to test out developing ideas in public, thus mirroring (some) scientific procedures.

Eliot’s enigmatic but evocative phrase quoted above, sums up for me this reconciliation of apparent opposites. I note in my diary, near the end of the project a sudden feeling that, like Eliot again, "xxxxxxxxxxxxxx" (Eliot 1944 p.48); that I am, after all, perhaps some kind of scientist:

I have come home (31.10.98 2.00 p.m. in park).

Key points emerging from the chapter.

I began with a "storybook" view of research. The messy method I describe is how I now understand I work. It is also how, I believe, some other researchers may work, in part, some of the time. Its understanding and its application, however, while drawing on existing skills, required both un-learning and learning.

Focusing initially on the benefits of the project, following reflection on practice over the period, I feel my casework is now more defensible. The research also appears to have helped reinforce the place of honesty, reflection and attention seeking on the agenda of practitioner colleagues.

Many practice issues remain to be settled, however, such as decisions on initiating casework and what forms of knowledge we draw on (see also footnote 1). The research has had limited comment from practitioners and does not address wider social and political concerns. Many questions remain around attention seeking and its long term impact. Finally, practice appears to have suffered during the course of the research, although in an unexpected manner.

With regard to research, I draw out three hallmarks: mess, values and communication. The term messy applies both to the nature of the problems encountered and the way I set #[p238] about tackling them. I see this term in a positive light. The messy method is outlined in chapter 11.

Along with Reed and Biott, I argue that the things valued in research are those valued in practice and in daily life, and these appear to include, in particular, moral values. Thus, for example, a prime constraint on the research was that practice should have priority. Honesty is singled out for particular attention.

I add a number of remaining criticisms of the project, to those raised in earlier chapters: limited evidence of the assumed positive effect on readers of recounting errors; limited feedback from publications and from practitioners; limited attention to issues raised from discussion of the nature of science.

With regard to communication, I argue that, for practitioners, and practitioner researchers, there is a need to balance rigour with readability.

Of the many facets of communication, that of feedback, knowing the impact of one’s attempts to communicate, appears vital. I relate this to aspects of practice. Evidence of successful communication, in this sense, is, however, limited in the thesis.


[1] Looking to the future, the two mini-projects raise a dozen or so topics for further inquiry in addition to the points raised in this chapter, sections 1 and 2C, as well as, for instance, settling concerns over protocols for assessing research strength. My main interest, however, would be to return to practice: to test out the messy method in the field of practical action and, at the same time, to unpick some of my (largely unexamined, Enlightenment) assumptions around terms such as "values", "action" and "improvement" (see, for instance, McTaggart 1996, whose chapter, touching upon some of these issues, is contained in a volume in which several authors pose post-modern challenges for action research).

#[p239] [2] My view of research in the social sciences as a messy process is reinforced by other works cited earlier, such as Atkinson (1994), Atkinson et al. (1991), Bannister (1981), Cornett (1995), Fine and Deegan (1996), Frost (1995), Kleinman and Copp (1993), Minkin (1997), McGrath et al. (1981), Salmon (1992), Stanley and Wise (1993), Staw (1981), St. Pierre (1997) and Whyte (1955).

[3] McLaughlin (1994) explores many meanings of the term "values" in an educational context. The studies Reed and Biott refer to, I feel, emphasise the tensions arising from values with a "moral" dimension, such as choosing between patient care and academic concerns. This is not to say that I do not also value other "non-moral" aspects, such as clarity in writing, and Reed and Biott also use the term values to refer to these, for instance valuing "imaginative approaches" (p.194). Elliott (1994), however, refers to the "complex, messy, conflictual and provisional nature of values clarification" (p.421). I will thus use the term in a somewhat loose way to indicate moral values.

[4] See comments from Rene E. in appendix D.

[5] The term latent is borrowed from Biott (1996).

[6] I am referring here to works cited throughout the thesis, such as Chalmers (1982), Collins (1992), Crane (1972), Feyerabend (1978), Knorr-Cetina (1981), Kuhn (1970), Medawar (1963, 1968), Mitroff (1983) and Ravetz (1971).


Appendix A
