Dissemination of In-House Research in LASSDs (Local Authority Social Services Departments)

Marian Barnes

Development Officer (Research), Hounslow Social Services Department

Tom Wilson

Professor of Information Studies, Sheffield University

Abstract
After identifying the motivations underlying in-house research in social services departments and considering the possible target audiences for dissemination, the authors examine the 'scientific paradigm' as a possible source of problems. It is suggested that a more comprehensive typology of modes of research may offer solutions to some of these problems. The paper concludes with a review of the modes of dissemination appropriate to different audiences and with an examination of the barriers to dissemination.

Why research in LASSDs?

Like all good authors on any subject that has to do with social services departments, we have to make due recognition of the Seebohm Report (1968). This is not simply a token gesture: serious research activity in SSDs is a consequence of the recommendations in that Report and, in the first paragraph of the chapter on research, the Committee members noted the underlying motivation:

'The personal social services are large-scale experiments in ways of helping those in need. It is both wasteful and irresponsible to set experiments in motion and omit to record and analyse what happens. It makes no sense in terms of administrative efficiency and, however little intended, it indicates a careless attitude towards human welfare.' (para. 455)

The Report goes on to identify two kinds of research: basic data collection, and evaluation and analysis. There is recognition that these two kinds are related, in that basic data collection is needed if analysis and evaluation are to take place. Paragraphs on the machinery for collecting information and co-ordinating research follow, but nowhere is there any mention of the need for the dissemination of research, the target audiences for research, or the barriers to dissemination that may exist.

The basic typology offered by the Report is clearly unsatisfactory from the point of view of identifying the motives for undertaking research and the initial paragraph suggests only an evaluation motive (but this may be no more than a reflection of management's perception of the role of research). If we look for a more detailed typology then that offered by Murray (1982) has more possibilities:

'There are at least four ways knowledge may be relevant to the creation and implementation of policy. It may serve to:
1. Identify a problem (Discovery)
2. Propose and compare possible interventions (Intervention)
3. Aid in the effective management of an intervention (Management)
4. Evaluate the intervention (Evaluation).'

Although more detailed, this typology is concerned with policy-driven research and it is necessary to look further for other motivations, or for a more detailed examination of policy motivations. We would propose an alternative set of five purposes for undertaking research in SSDs:

  • to guide policy formulation and practice by assessing social and individual needs;
  • to discover the possible consequences of locally determined policy;
  • to discover the possible consequences of central government policy;
  • to evaluate the effectiveness of specific programmes and innovations;
  • to provide management with data on the general course of departmental activities for monitoring and problem identification.

Whether we consider the function of research under the rather simplistic headings of the Seebohm Report or under the more elaborate typologies above, it can be said that the underlying intention is to assist 'organisational learning' (Argyris and Schon, 1978) and the import of that first paragraph of the Report is that SSDs must be 'self-evaluating organisations' (Wildavsky, 1972).

We argue that, if organisational learning is to take place, the dissemination of in-house research is of crucial importance.

Who needs to know?

If, as we suggest, the dissemination of information derived from research is crucial to organisational learning, one of the key questions the researcher or research administrator must ask is, 'who needs to know?'. Given that the motive for undertaking research may come from a variety of sources, it is clear that those sources may be the targets for dissemination. Potentially the list of targets is almost endless, but the following, we think, will be agreed by all:

  • Departmental management: the main sponsors of research (for all of the reasons outlined above);
  • Local authority committees and Council: the final arbiters of policy;
  • LASSD staff and unions;
  • Clients individually and as represented by pressure groups and voluntary agencies;
  • Local citizens generally;
  • Central government departments involved in setting out policy guidelines;
  • Researchers in other SSDs, in academic research and teaching, and elsewhere;
  • Practitioners in other social services departments.

One fact above all others should be obvious from the relatively short list above: for any one piece of research it is virtually impossible to conceive of a single mode of dissemination which will be appropriate to all targets. The typical SSD research report (at least as reprinted in the Clearing House compilations) is clearly an inappropriate form of dissemination for many of the targets listed above. In our more pessimistic moods we might even conclude that it is inappropriate for any of them. This very fact is one of the principal barriers to dissemination and it is important to consider how it arises before going on to consider alternative modes of dissemination.

Research under the 'Scientific Paradigm'

It is only in recent years that the power of the scientific model of the research process has come to be challenged in the social sciences (and particularly in applied social research). However, the scientific model is still widely used and respectable and, under this model, the role of formal publication as the means of dissemination is strongly emphasised.

The effects of this from the point of view of dissemination of research can be serious. The reason is suggested in Figure 1: in effect, the audience for scientific research is science itself. Under the scientific paradigm the results of research activity are assumed to be taken into the body of scientific theory as a matter of course; new research on a given theme is undertaken as a consequence of prior research findings and there is a strong probability that attention will be given to particular problems of science over a period of years.

[Figure 1]

There are a number of things wrong with this model from the point of view of research undertaken in relation to organisational objectives. Figure 2 aims to draw attention to the defects. Briefly, for research results to have the effect intended, organisational management (and the associated policy makers) must be persuaded of the power of the argument presented by the research results - in other words there must be some degree of attitude change, or attitude reinforcement. If organisational learning is to take place and organisational change is to follow as a consequence of research, then other parties to the work of the organisation must be persuaded and, even if only in a very limited sense in some cases, organisational change must occur. The dashed lines are intended to convey the possibility that these changes cannot be taken for granted if the mode of dissemination is modelled upon that of science.

[Figure 2]

Of course, the scientific paradigm might be an effective mode for SSD research if planning and management were rationally conducted activities. Naturally, because they involve all the interpersonal and intergroup tensions and conflicts typical of all organisations, they are not totally rational. SSDs also have to work within the framework of law laid down by government and are subject to political and financial pressures. These factors mean that, even when management's plans are rationally based, they may be dismissed for political or financial reasons.

Work carried on under the scientific paradigm has a number of significant characteristics which are intended to give substance and validity to the work. Canons of 'good science' are applied such as proper attention to the definition of the problem, proper sampling of populations (or at least recognition of the need for proper sampling), observance of proper ethical considerations, and an attempt at objective presentation of results. Some of these have problems of their own which we cannot go into here. This is not intended to imply that research carried out in SSDs does not, or should not, conform to good practice in research methods to produce reliable and valid results. In this general sense it can most certainly be described as 'scientific'. However, organisational research, almost invariably, is applied social research which is carried on under constraints which are, in large part, foreign to the ethos of pure, scientific research and which include explicit recognition of the ethical dimension which, according to Ratcliffe (1983), ought to be a feature of all modes of inquiry. We now wish to consider some of the modes of applied social research before examining further appropriate modes of dissemination.

Applied social research

If we regard the role of the SSD researcher as being to engage in applied social research we must then ask, 'what kind of applied social research?', because clearly there are alternative modes of research which are related to the functions to be performed by the results of the research (or by the anticipated results) rather than by methodological canons.

Murray's categorisation of the functions of knowledge for policy, referred to above, is useful here but needs a little elaboration. We can relate it to another categorisation of our own which distinguishes three aims of applied research - management data collection, system evaluation and policy research - and three broad types of method - survey research, qualitative research and action research.

The different main modes of research are associated with different constraints which have an effect upon the way the research is carried out and which make for significant differences between any of the modes of applied social research and basic research. For example, in policy research one of the major constraints is time - always a constraint in organisational research, but particularly so in relation to policy research. There is little use in having the data for policy determination available after the policy decision has already been taken. In one case, an SSD study of day-care for under-fives was intended to produce policy guidelines, but at least one major service development decision was made while data-collection was still taking place and only the researcher's as-yet-subjective evaluation of the situation could be called upon.

This illustrates a common problem which results from the different time-scales of crisis intervention and the research function. Not only may research results be too late for associated policy development, they may also be too late because new problems have become more pressing than the old.

Evaluation research has a similar kind of time constraint, particularly when associated with innovations. The activity must be continuously monitored with warning notices being posted if the innovation looks like failing or getting into difficulties. Researchers are then in the position of having to give judgements of a situation before all the data are available. They are under continuous pressure to provide information and, because of the prevailing research paradigm, may be unwilling to give the information or make the judgement. There is also the slightly different problem that, whatever demands have been made during the lifetime of an innovation, the demand to know the results of the evaluation will come very shortly after the project has ended. Indeed, it is not unknown for a policy decision to be taken to continue an activity (for example, to make a claim on next year's budget) only to discover a few months later that the project had not succeeded.

Action research is really a composite activity designed to ensure better definition of problems, participative decision making on potentially useful interventions, data collection, and ongoing dissemination of the data to the collaborative team which has designed the intervention (see Foster, 1972; Lees and Lees, 1975). Action research is much publicised in organisation development circles and seems to have much to recommend it when the focus of interest is an organisational problem.

It too has its problems, however. It is expensive in staff time, because of the collaborative nature of the process; it adds to the roles of the researcher, who must perform an educational function so far as the choice of research strategy and interpretation of data are concerned; and because, by the very nature of the process, members of the team are likely to be well-informed on the problem and the chosen strategy, the research is likely to be subject to rather more searching analysis than usual. Also, because most research is commissioned by senior management of the Department, the involvement of other categories of staff in a collaborative research project may be suspect where labour relations are tense. The question, 'which constituency does the researcher serve?', is a thorny one and it could be said that action research might simply exacerbate the issue.

The big advantage of action research, from the point of view of dissemination, is of course that dissemination is built in, at least to a certain degree - the members of the team will have been the focus of dissemination/education from the beginning of the research process. There may be problems in disseminating the results of action research outside the organisation, however: those inside have the benefit of knowing the context, the course of events and the rationale behind the action. Conveying these aspects of the whole process to an outside audience may be very difficult indeed.

Modes of dissemination

We can now turn to the main focus of this paper - dissemination. The scientific paradigm, as we have presented it, assumes that paper publication of results is the most important mode of dissemination in science. Of course, this is not the case: as in all other fields of human activity the primary mode of disseminating information is by word of mouth - person talking to person, or person talking to group. Long before a scientific paper reaches a journal (let alone being published in it) the results of research will have been disseminated in a variety of ways. If the research is supported by government funds, for example, there will have been an official report to the funding agency. It is likely that working papers on the subject of the research will have been sent to colleagues in other places for their comments. The researchers will have given papers at conferences, workshops, and internal seminars. Letters setting out the main results will have been sent to colleagues in other places, particularly if the results are known to be of importance in another area of work. In other words, there are many ways of disseminating information and all of them may be useful for different kinds of research and different kinds of target audience.

We should not be led by the word 'research' into thinking that the only, or even the best, form of dissemination is a printed document. Given the size of some of those documents and the known shortage of time available for reading them, one might hazard the guess that little research gets the audience it deserves - or does it?

Let us consider some of the chief modes of dissemination and their uses. First a simple listing:

  • internal publication
    • progress report
    • newsletter
    • final report
    • abstracts and summaries
  • external publication
    • press release
    • progress report
    • journal publication
  • internal training activities
  • organisation development
  • talks to staff, working parties, etc.
  • external talks and lectures
  • publicity/public relations
  • lobbying

The very words used above suggest that different modes of dissemination may be suited to different target audiences and this certainly ought to be the case, although one suspects that it rarely is the case. So, let us go back to the audiences and see how they match up.

Departmental management: we suspect that most SSD research reports are written on the assumption that at least some members of the senior management team will read them. How much evidence there is to suggest that they are read we do not know; however, when one considers the time pressures under which senior managers work and the known pattern of their behaviour (e.g., Wilson and Streatfield, 1977), we would be surprised if this audience came up to expectations. It is interesting to compare SSD research reports with reports issued by government departments - the latter usually assume that the Minister is not going to have time to read the whole document and so an Executive Summary is presented, often on page 1. This is also the case with consultants' reports to business and industry. Management has essentially four questions about any piece of research it has commissioned: what did you find out? what are the policy/action consequences? what recommendations do you make for action or policy? what difficulties might we run into if we follow the recommendations? Both the responsibility and the opportunity available to the researcher to take an active part in the process by which policy and action options arising out of research findings are pursued will depend on the position the researcher occupies in the departmental hierarchy and the extent to which research and development functions are linked. If no link is possible, departments may not be getting the most out of the research which has been commissioned and researchers may well become frustrated by their inability to see any outcome from their work. Researchers may have to convince management that they have a legitimate role to play in making recommendations as a result of research and can usefully be involved in communicating those recommendations to staff responsible for implementing them.

Because of the time pressures under which policy is developed, the publication of progress reports is highly desirable. In any event, management will ask for progress reports and it is better that these be written rather than oral, to prevent misunderstandings and, possibly, subsequent claims of misinformation.

Local authority committees: committees are not the recipients of research results as a matter of course but, when they are, the need for effective summaries is all the greater for the majority of lay members. The main points of a research project need to be clearly expressed and any data reported should be in a readily understandable form (sadly not the case with most statistical reporting to committees). Furthermore, if an element of persuasion is thought necessary in presenting the results of research to committee members, then even more care ought to be given to these points. In some cases it might be thought useful for the researcher to present the results in person and to be available to answer questions on the work. As in the case of other aspects of dissemination, however, this will depend on how researchers are viewed by senior management and on their position in the organisation. It also presumes they will have skills in effective public speaking.

LASSD staff and unions: the line between labour relations activities and research dissemination may be a fine one to draw in the case of this target group. For the researcher it can involve delicate negotiations to avoid a fairly common perception of researchers as 'management tools'. If research findings are always filtered and communicated to staff by management, it is inevitable that researchers will be viewed as providing a service to management only, and this may create resistance to the acceptance of research findings. Whilst the majority of research projects are commissioned by middle or senior managers, staff at all levels are likely to have been involved in contributing to the planning and data-collection phases and possibly to the analysis phase. We consider, therefore, that the researcher has a responsibility to disseminate findings not only to those who commission the project, but also to those who have been involved in it as subjects or contributors. Once again the researcher may have to be prepared to convince managers of the importance of this.

The mode of dissemination ought to vary according to the nature of the research and its purpose. Research which is large-scale in terms of the number of people involved in it or affected by it, or which continues for a long time, may benefit from having a steering group with appropriate staff and union members. This would provide a forum for feedback and staff input during the course of the project and would assist in the de-mystification of the research process and hence encourage a more positive response to the outcome. In the case of evaluative research which focuses on a particular section of the service the prime responsibility is to communicate the results to the staff directly involved and this may be done best by talks to staff meetings. Dissemination to other staff in the department can be achieved by means of a summary in a newsletter and by making the full report available to those who ask for it.

It is the staff category, of course, for which the training and development uses of research findings are most appropriate. To take a very simple example: a good deal of work goes on in SSDs to discover the extent of a client group in the authority; for example, the numbers of mentally handicapped children, or single parents, or housebound elderly persons. Data such as these have a clear use in induction programmes for new social workers and ought to be made available by researchers to their colleagues in training sections in a form suitable for immediate use. Again, specific projects might have relevance for other kinds of in-service training programmes: for example, a study of the effectiveness of different modes of intervention for, say, people with mental disturbance. If an in-service training course is to deal with wider issues in such a field then researchers again ought to make relevant information from their work available to the trainers or, better, participate in the training programmes themselves, thereby attaining greater visibility in the organisation. No doubt the examples could be multiplied again and again in all SSDs.

The issue of effective dissemination ought not to arise in the case of action research, at least for those persons associated with the innovation. But it is easy to forget that the members of staff not involved in a programme will usually outnumber those who are. The same process of dissemination ought therefore to take place in respect of such research as with other research. One advantage is that more people are in a position to disseminate information than is the case with other forms of research, and will do so by word of mouth. Again, a great deal of information may emerge out of action research which is relevant to training programmes, and action research itself is advocated by practitioners of organisation development as a way of fully involving staff in innovative programmes.

Clients: how often are clients and representative organisations, such as NAYPIC, actually considered to be an audience for SSD research? It seems to us that there is a case for treating them as such on a number of grounds:

  • they are citizens who ought to be made aware of issues and projected solutions to problems;
  • they are receiving services (or may be if new policies are implemented) and, therefore, ought to be made aware of the reasons for decisions that affect their lives, and involved in reaching those decisions;
  • they may have been the subjects of a piece of research and it is, in our opinion, ethically desirable to inform subjects of results.

The work of the 'In and Out of Care' team from the University of Sheffield provides a good example not only of ensuring that clients are informed of research results, but also of involving them in the research process in a way which can make a useful contribution to the outcome. Parents, and children received into care, were research subjects as were social services staff involved in the process. The research team took steps to ensure that the analysis of results made sense to staff by discussing the analysis proposed, throughout the course of the project, with a group made up of practitioners and a researcher from the Department in which the research was conducted. Similar discussions were held with some of the client research subjects to determine whether the explanations and interpretations arrived at by the researchers made sense in their terms.

In general, dissemination of the more traditional kind seems quite straightforward - information needs to be presented in an assimilable form. Here liaison with the publicity sections of the council, use of press releases, etc. are obviously of more use than report publication, although summaries may be useful for voluntary agencies, and the same might be said for external talks and lectures.

Local citizens: if, as a consequence of a piece of research, it is felt necessary to inform the local electorate of the results, the most appropriate form of dissemination is more likely to involve publicity and public relations, with preparation of press releases and articles, than it is to involve scholarly presentation. Talks to citizens' groups, neighbourhood councils, and voluntary agencies etc. are also likely to be useful. Clearly, the researcher may not be the best person to address these groups and may need to be able to persuade others to take on the task.

MPs: when an issue arises out of actual or projected legislation there is a clear need to inform local MPs in particular, and MPs in general who are known to have an interest in the issue: for example, known to have spoken in Parliament on the issue or known to be involved with pressure groups or voluntary agencies. Summaries of results with effective statistical presentations where appropriate, together with lobbying, and talks to groups of MPs are the most suitable means of dissemination.

Central government departments: the circumstances under which it is necessary or desirable to inform central government departments of the results of research are fairly clear:

  • when the Department has commissioned a piece of work in a social services department;
  • when researchers within the Department are known to be working in the same area, or when the Department is known to have commissioned academic research in the field;
  • when the subject of the research is known to be of interest to a Department from the point of view of recently implemented legislation or likely future legislation;
  • when the SSD (or, perhaps better, a group of SSDs) wish to challenge government's thinking or policy on an issue.

The appropriate means of dissemination also seems fairly clear - a copy of the final report (with its Executive Summary, one hopes) together with a brief additional paper on the potential relevance of the work from the point of view of the Department.

Other researchers, teachers and practitioners: a number of organs exist for publicising research which is going on in departments, from the research registers produced by branches of the SSRG to the Clearing House for Local Authority Social Services Research. The research registers are really the first step in the dissemination process, since they enable SSDs to get information about projects in progress rather than simply at the end of a project. The Clearing House (in addition to its own register of research) serves a different function, seeking, as it does, to publish the results of research viewed as useful to SSDs.

In addition to what might be called these formal means of dissemination there are other modes: publication of scholarly papers based on the research, presentations to meetings of the SSRG and other interested parties, and professional journalism. All one needs is the time, the inclination, and the specialised skills and abilities.

Barriers to dissemination

We can consider the barriers to the dissemination of in-house research under several headings (and we are grateful to Hooper (1983) and Stapleton (1984) for their preliminary forays into this area):

Intrinsic barriers: that is, barriers which are a function of the research itself. Some research is less easy to disseminate than other work because it relates to a conceptually difficult area. Possibly the researcher has not reached a full understanding of the problem and, hence, will find it difficult to convey the meaning of the results or conclusions. In the case of, say, evaluation research or other research with very locally-specific implications, it may be necessary for the researcher to try to draw out more general lessons and this, too, may not be easy.

Extrinsic barriers: these may be subdivided into a number of categories:

  • practical barriers:
    • costs of dissemination, e.g. reproduction of reports, etc.;
    • time available to the researcher for the preparation of additional materials for dissemination;
  • organisational barriers:
    • problems of the legitimacy of dissemination activities in the organisational definition of the researcher's role;
    • confidentiality;
    • priority accorded to research and to dissemination;
    • perceived constituency of the research section, or other staff members' perception of the role of the researcher;
  • psychological barriers:
    • researchers' self-perceptions of the value of their work, e.g. viewing it as somehow less rigorous or scientific than academic research;
    • publication or dissemination not viewed by the researcher as tasks upon which it is valid to spend time;
    • others' perceptions of the value of research and dissemination functions.

These barriers are very real from the point of view of the SSD researcher, however curious they may look to the outsider and, particularly, to the academic researcher with little experience of the practical constraints of work in local authorities. The issue for the researcher, however, is to understand the nature of these barriers and to have some idea of the circumstances under which it is desirable to overcome them. How they might be overcome, at least in part, is the subject of our conclusion.

Conclusions

We have tried to show that the nature of the research and the potential audience are factors that affect the way in which the results of SSD in-house research should be disseminated and we would like to conclude by arguing that the researchers should ask themselves the same questions as the statistician is urged to ask by Clark (1983).

First, let us look at the question 'dissemination of what?'. Research activities in SSDs might be said to result in a number of different outputs (at least potentially):

  • they may result in research results in the sense of data which may be stored in some departmental filing system, or reported to others;
  • they may result in conclusions that are not reducible to numbers, but which convey qualitative judgements about states of affairs;
  • they may result in more or less well-founded opinions which the researcher is impelled to convey to the group or person 'commissioning' the research;
  • they may result in ideas for action which the researcher conveys as recommendations for future action on the part of the organisation.

The important thing to note about each of these is that they differ one from another, but overlap. In disseminating research results, however, researchers must be quite clear about what they are trying to convey. Unless they are clear, confusion will result. The first imperative, therefore, is: 'what are you trying to say?'

The next question is one we have discussed already: 'Dissemination to whom?' Here, the instruction to the researcher is, 'Define your audience and seek assistance in reaching it where appropriate'.

Finally, if statistics/research reports are intended for communication/dissemination, are they intended for analysis by others or for presentation to others? If they are intended for presentation, is the aim to stimulate, to persuade, or to inform? These three possibilities lead to crucially different modes of presentation for statistics, argues Clark. If statistics are intended to stimulate they need 'pure graphics' that follow the 'rules of aesthetics'; if to persuade, they need 'persuasive graphics' that follow the 'rules of rhetoric'; and if to inform, they need 'information graphics' that follow the 'rules of exposition'.

We argue that the same questions must be asked of presentations of research results, conclusions, recommendations, or opinions (including any statistical information) as Clark suggests need to be asked of statistics alone. The exact parallels for text are not altogether clear, but we have already made the distinction between presentations that are intended to inform (e.g., by the use of summaries of various kinds) and those that are intended to persuade (e.g., by using publicity and public relations techniques). To a certain extent the distinction depends upon the way in which statistical material contained in reports is presented; to some extent it depends upon the nature of the language used.

There is no easy answer to the problems of the effective use of data and ideas generated in the research process. The researcher is under the hard obligation to view dissemination as equally important as the research task itself. There is no way that all research can be disseminated equally and the researcher must set priorities for dissemination. Some research is intended only to result in data that can be stored and manipulated and used at a later date. Some research has local political implications that must be explained to the elected representatives of the citizenry. Some research has national implications that other SSDs will wish to pick up and of which they want to consider the local implications (the evaluation of patch-based fieldwork systems is one such issue, e.g., Hadley and McGrath, 1981). In other words, different pieces of research do not have the same qualitative value - some pieces of work are more important or more significant than others and the task of the researcher, in this respect, is to form judgements about which research to spend time on in the dissemination phase and which to neglect. Only he or she can make that judgement, aided by the views of management and professional colleagues.

Perhaps we should end on a positive note by considering some of the opportunities available to in-house researchers for the dissemination of their work which may not be equally available to, for example, the academic researcher. The first point is a very practical one: researchers' salaries are a cost to a department and there should be motivation to gain some benefit from that cost by being open to, and making use of, the results of research. Second, in-house researchers are in more frequent day-to-day contact with the people who are the primary target for dissemination, and thus there is an opportunity for informal dissemination which is not available to external researchers who may only be in a department for the duration of a project. Third, researchers within the department ought to be more aware of the issues that have preceded the research project and those issues that are of current concern. This should assist in the presentation of results in a way which is relevant to other staff.

We have tried to set out some of the problems and possibilities relating to dissemination within a broad context. One thing that emerges from this is the way in which dissemination cannot be separated from other qualitative issues relating to in-house research. These include the role and hierarchical position of researchers; researchers' own definition of their functions; values relating to the role of the research subject; and the ethical accountability of researchers.

Note

Dr. Marian Barnes is now Professor of Social Research at the University of Birmingham. Professor Wilson is now Professor Emeritus, University of Sheffield.

References


This paper was originally published in Research, Policy and Planning, 2 (1984) 19-24
