21 October 2013
Matthew Thomas and Luke Buckmaster
Social Policy Section
- In developing policy and assessing program effectiveness, policy makers are required to make decisions on complex issues in areas that involve significant public risks.
- In this context, policy makers are becoming more reliant on the advice of experts and the institution of expertise. Expert knowledge and advice in fields as diverse as science, engineering, the law and economics is required to assist policy makers in their deliberations on complex matters of public policy and to provide them with an authoritative basis for legitimate decision making.
- However, at the same time that reliance on expertise and the demands made of it are increasing, expert claims have never been subject to greater levels of questioning and criticism. This problem is compounded by the growing public demand that non-experts should be able to participate in debates over issues that impact on their lives, even though their capacity to understand and contribute to the technical aspects of these debates may be limited or non-existent.
- This paper provides a guide to assessing who is and who is not an expert in the technical aspects of public policy debates, by providing a framework of levels of expertise. It also notes the importance of identifying the specific fields of expertise relevant to the issue in question. The main focus is on scientific and technical areas, but the issues raised also apply in other domains.
- It then examines the problem of how non-experts can evaluate expert claims in complex, technical domains. The paper argues that, in the absence of the necessary technical expertise, the only way that non-experts are able to appraise expertise and expert claims is through the use of social expertise. This is expertise based on everyday social judgements, which enables non-experts to determine who to believe when they are not in a position to judge what to believe.
- In this context, the paper suggests policy makers ask a series of questions:
– can I make sense of the arguments?
– which expert seems the more credible?
– who has the numbers on their side?
– are there any relevant interests or biases? and
– what are the experts’ track records?
- By identifying the strengths and limitations of each of these strategies, the paper provides guidance on how each might best be used. It also argues that using them in combination improves their strength and reliability.
- The role of those who can act as intermediaries between technical experts and non-experts is also examined.
- The paper makes clear that none of these strategies are without problems, but it postulates that a more systematic approach to how non-experts use social expertise might enhance their ability to become active rather than passive consumers of technical expertise.
The need for expertise
The political problem of expertise
A classification of expertise
Defining expertise—realist vs. relational definitions
Table of expertises
Lower level specialist expertise
Specialist tacit knowledge-based expertise
Ubiquitous discrimination (social expertise)
Criteria for social discrimination
The application of expertises
Using social expertise
Which expert seems the more credible?
Who has the numbers?
Are there any interests or biases?
What are the experts’ track records?
Refining the use of social expertise
Attachment A: The use of expertise in the courts
The authors would like to thank Emeritus Professor Sheila Shaver, Associate Professor Adrian Kay, colleagues Carol Ey, Brenton Holmes and Roger Beckmann, as well as participants in Parliamentary Library and Brotherhood of St Laurence seminars for their insightful comments and assistance in the preparation of this paper.
Policy makers are often required to make decisions on complex issues in areas where they have limited or no understanding of the technical aspects of the issue under consideration. Politicians and bureaucrats are routinely required to develop policy responses to identified issues, evaluate the effectiveness of programs or proposed solutions, or even make a judgement about whether an issue requires government intervention. The frequency with which such issues arise has increased in modern times, in part due to the increasing influence science and technology have over everyday life, and in part due to pressure on governments to legislate across a wider spectrum of areas.
In this context, policy makers and societies more generally are becoming more reliant on the advice of experts and the institution of expertise. Expert knowledge and advice is required to provide policy makers with an authoritative basis for legitimate decision making.
If societies are to be dependent on expertise, they must be able to place a significant degree of trust in it. However, while trust in expertise is relatively assured in some areas, it can by no means be guaranteed in others.
At the same time that our reliance on expertise and the demands made of it are increasing, expert claims and the institution of expertise have never been subject to greater levels of questioning and criticism. This dilemma is exacerbated by the fact that the public has become more aware that experts frequently disagree. To a large extent this disagreement is to be expected. Complex problems are typically characterised by uncertainty as a result of incomplete knowledge, and this uncertainty tends to generate expert dispute. The public has also become more aware that expertise is fallible. Once again, to a large degree this is understandable because experts are often called upon to provide policy advice in conditions of uncertainty or where scientific consensus has not been reached. Undoubtedly, an increasingly educated public with independent access to vast amounts of information feels more and more capable of questioning and challenging expertise.
In this situation policy makers are, as consumers of expertise, obliged to determine which expert claims they should and should not believe. This presupposes some understanding of who the experts are in the context of a given debate and the relative weight that should be placed on their claims—an understanding that cannot be assumed. In some cases it also requires the ability to critically assess third party representations of experts and their claims, such as those provided in the media or by lobby groups, many of which are likely to be partial and heavily politicised, or are themselves provided by non-experts with a limited understanding of the issue.
This paper seeks to provide a guide to better understand:
- what expertise is
- how to determine who the relevant experts are when it comes to the technical aspects of public policy debates and
- how to go about choosing between competing expert claims.
It draws primarily on recent scholarship on expertise in sociology and philosophy, as it is writers in these fields who have developed frameworks of expertise that are applicable across different subject areas and applications.
The major focus of the paper is on areas involving scientific or technological debates, as these are where non-experts are particularly reliant on experts, but the principles can be applied to other fields of expertise such as economics, law and the social sciences.
The paper argues that in the absence of the necessary technical expertise, the only way that non-experts are able to appraise expertise and expert claims is through the use of social expertise. This is expertise based on everyday social judgements, which enables non-experts to distinguish who rather than what ought to be believed. As such, the paper focuses on identifying better practice in using social expertise for this purpose.
The need for expertise is not new. Even in primitive societies ‘experts’ were consulted on matters such as the best time to plant crops, or cures for health problems. In some cases these ‘experts’ did not possess specialist knowledge of the area, but were designated authorities due to rank or position.
However, the need for policy makers to seek expert advice has grown dramatically over the last century, driven by growth in the volume, range and specialisation of subjects in the public policy domain. For example, in its first five years of existence the House of Representatives considered an average of fewer than 30 pieces of legislation each year. For the period 2008 to 2012 the average had grown to 220. The 17 Acts made in 1901 covered taxation, post and telegraph services, immigration, revenue and administrative matters. All these issues were also covered by Acts made in 2012 (with telecommunications replacing postal services), but in addition legislation also covered issues such as offshore petroleum and greenhouse gas storage, nuclear terrorism, the prohibition of tobacco advertising, health insurance, road safety and higher education support, to name but a few.
While the 1901 legislation did include some technical aspects (for example the Distillation Act 1901 describes the distillation process in some detail) it was at a level where most parliamentarians would have been able to understand the concepts involved. While this is still true of some legislation in the contemporary environment, in areas covering technical and scientific aspects, policy makers who do not have the relevant expertise themselves may be reliant on experts to identify issues, develop solutions and evaluate the effectiveness of measures.
At the same time, there is also an increasing questioning of authority, including the authority of experts. In part this is an extension of the increasing secularisation of modern society, but it also reflects a recognition that some of the risks and hazards faced by society are themselves the product of the modernisation process. That is, these risks are a result of human activities that have been deliberately undertaken in the pursuit of potential benefits. Reflection on human-produced or ‘manufactured risks’ (some of which are a result of our increasing knowledge about the world) allows for social intervention in the planned activities that are responsible for generating risks. Sociologists Anthony Giddens and Ulrich Beck describe this circular process in which society critically assesses the risks being entered into and attempts to decrease levels of risk as ‘reflexive modernisation’.
The sciences are not immune from the broader process of reflexive modernisation. The sciences have contributed to many of the positive developments associated with modern society; that is, by enabling technological innovations they have helped to improve many people’s standards of living. But they have also, relatedly, contributed to the development and knowledge of increasingly globalised risks:
… the sciences are now being confronted with their own objectivised past and present—with themselves as product and producer of reality and of problems which they are to analyse and overcome. In that way, they are targeted not only as a source of solutions to problems, but also as a cause of problems. In practice and in the public sphere, the sciences increasingly face not just the balance of their defeats, but also that of their victories, that is to say, the reflection of their unkept promises.
In short, in the context of reflexive modernity, the superiority of scientific rationality and methods of thought is no longer (if it ever was) taken for granted. While the sciences are more and more necessary as the source of solutions to risks, they have at the same time become increasingly demystified and contested—both by scientists themselves and by non-experts. In large part, this is by virtue of the sciences’ own nature. The sciences are characterised by their institutionalised self-scepticism and as scientific arguments have become more socially available, scepticism about scientific claims and the sciences more generally has become increasingly widespread.
The increased complexity of modern society, combined with the need to reduce uncertainty and enable decision making has led to an exponential increase in the demand for expertise (both on the part of the state and individuals), and in particular, specialised technical expertise.
Because there is more to be known than any one person can know by him or herself, this demands that there be (ever greater levels of) intellectual specialisation. The finite capacity of individuals to master knowledge in all areas—due to lack of competence, specific aptitudes and skills to gather and evaluate the evidence, as well as the necessary time—means that some people must invest their time and energy in mastering particular areas. This is the source of the institution of (professional) expertise, which may, at its most basic, be defined as special skill, knowledge or judgement in a particular area.
People may have varying levels of expertise in a wide range of different areas, and these types and levels of expertise are briefly considered in the first part of this paper. However, this paper is primarily concerned with technical expertise in the sciences, as these are generally the areas least comprehensible to non-experts. More specifically, we are mostly concerned with identifying just what non-experts are able to do in the face of technical expertise which they cannot understand, or cannot understand fully.
The institution of expertise (along with our necessary reliance on experts) poses a number of problems, some of which are particularly thorny. For the purposes of this paper, we focus on the three problems that are arguably of most relevance in public policy terms. The first of these may be described as the political problem of expertise; the second is the problem of identifying who the relevant experts are when it comes to technical decision making in the public domain and what forms of expertise are available to non-experts; and the third is the problem of how non-experts can evaluate expert claims.
As some expertise scholars see it, the institution of expertise poses an inescapable dilemma in a democratic political system. Zoltan Majdik and William Keith observe that ‘expertise is a kind of authority, and so stands in contrast to liberal democratic values; at its core, a democratic polity depends on its ability to keep a check on authority’.
Majdik and Keith are by no means the first scholars to grapple with the political problem of expertise. In the early 20th century, American pragmatist philosopher, John Dewey wrote extensively on the tensions between democracy and expertise. In his debate with American political commentator Walter Lippmann on the role of citizens in modern democracies (among other things), Dewey defended the potential of the public for participatory democracy. He reasoned that an organised and educated public could and should participate in deliberation on public policy, and rejected Lippmann’s argument that government should be delegated entirely to political representatives and their more capable expert policy advisors.
At its most basic, democracy refers to government by the people. This demands that the people should participate in debates over technologies that impact on them, and that the people have rights in decision-making on issues that affect them (such as questions to do with climate change mitigation, nuclear power, genetically modified foods and genetic engineering). The people have such rights not simply because public technologies such as those described above inevitably impact upon their lives, but also because questions to do with the use of such technologies are not solely technical, but also involve values and interests. If democratic consent is to be given by the people, then they should (ideally) be able to participate in political discussion over the relevant issues, and their consent should be informed. If this is not the case, for whatever reason, then arguably democratic consent ‘is essentially an illusion’. At the same time, without the people’s contribution to debates over technological developments, these developments are likely to be mistrusted and even resisted.
Stephen Turner is perhaps the scholar who has considered this issue in the closest detail in recent years. The problem, as Turner sees it, is that for meaningful political discussion to take place a basic prerequisite is some degree of comprehension of the subject matter being discussed. In the case of expert knowledge, Turner maintains ‘there is very often no such comprehension [by the masses] and no corresponding ability to judge what is being said and who is saying it’. This last point is an important one for, in Turner’s view, non-experts are frequently not in a position to assess the claims of experts (including their defining of risks—the monitoring and controlling role they play in society); in addition, people are typically unable to determine whether experts are justified in making their claims—that is, whether they are speaking beyond the bounds of their knowledge. This signals a dual reliance on expertise, with both expert knowledge and the ability to identify who is an expert and can speak on a given matter being potentially inaccessible or incomprehensible to the public.
On this reading, not only do non-experts not understand expert knowledge itself but frequently they also ‘cannot understand, or understand sufficiently, the experts whose claims we must discriminate between’. In short, non-experts rely on experts to determine whether or not to believe experts. And, where experts’ claims conflict, as they frequently do under conditions of scientific uncertainty, the problem is compounded.
A range of different methods is either already being used or has been proposed to solve (or to ease) the political problem of expertise—that is, addressing the general power imbalance between experts and non-experts.
Typically, these methods take one of two different (but frequently overlapping) forms. The first, 'quintessentially liberal' and perhaps most frequently employed, form is public education, with scientists, economists and others seeking to increase public understanding in order to help the public keep up with the demands of the modern world. The second form is that of democratic controls on expertise. In some jurisdictions, for example, increasing use is being made of forums such as citizens' juries, consensus conferences and public hearings, in which citizens are able to question experts on a given issue and to report on their conclusions. The degree to which these conclusions then influence policy and decision making varies. Broadly speaking, these controls all fall within the practice of deliberative democracy.
Another proposed mechanism for tackling the political problem of expertise is the institutionalisation of a process of expert contestation. In the context of modernity and increasingly complex risks, ordinary citizens are ever more aware of scientific uncertainties and conflicting empirical claims. And, given that scientists ‘do not possess any extraordinary expertise when it comes to discussing normative or evaluative issues’—that is, whether it is right or just that a particular course of action should be taken—it has been suggested that there should be greater opportunities for experts to publicly contest each other’s claims. The main potential benefits of a process for free expert debate would be threefold. Firstly, institutionalised expert dispute could ‘function as a sieve, filtering out unfounded scientific presuppositions and empirical claims’. In doing so it could serve as a check on expert group-think—on the illegitimate formation of consensus. Secondly, it could help to educate the public and to increase the mutual understanding of experts and non-experts. Thirdly, it could help to make experts more accountable and responsive to the wider society by forcing them to take responsibility for their claims.
Another more novel means for dealing with the problem of expertise is to shift the focus of expertise to argumentation and problem-solving and away from specialised subject matter. This would be to treat expertise as though it were essentially oriented toward making an argument for a given solution to a particular public (as opposed to specifically scientific) problem. An expert’s expertise would be manifested in and defined by their ability to make a case for a particular definition of a problem or solution, rather than simply their possession of knowledge. Expertise would thus become located within democratic practice, with experts involved in a process of debate with non-experts and other experts over what course should be taken in response to a given problem. Such an approach would allow for maximum participation and for the consideration of the full range of norms, values, interests and preferences that might bear on a situation, including those of experts themselves. In doing so, it would, according to Majdik and Keith, expand the possibilities for the best possible democratic resolution of a particular problem.
To illustrate their argument and defend its rationale, Majdik and Keith draw upon the example of the doctor-patient interaction. As they see it, this is a situation of multi-dimensional expertise, in which the patient is an expert when it comes to the question of how to weigh the benefits and risks of various forms of treatment in relation to their own life circumstances. In the process of interaction, the patient's interests, preferences and values necessarily become a part of the expertise required to solve the problem of which treatment to adopt. Importantly, the patient's expertise is independent of how much knowledge they do or do not have of their particular illness. Majdik and Keith argue that this scenario is a microcosm of the broader situation when it comes to public use technologies. In this broader context, the expert has access to specialised knowledge, but must, if a problem is to be optimally solved, account not only for the factual aspects but also the normative aspects.
As noted above, a problem-centric approach to expertise, such as that proposed by Majdik and Keith, expands what can count as expertise and 'evidence' and thereby enhances democratic expectations about the way in which choices related to technical issues should be made. At the same time, such an approach seeks to equalise different forms of expertise in an argumentative form.
However, few expertise scholars appear to place much faith in the ability of any of the above methods to completely tackle the political problem of expertise. For example, while Turner would undoubtedly agree that educating the public about technological issues is a worthy goal, he is nevertheless sceptical about the capacity of public education to address the political problem of expertise. Like a number of other scholars of expertise, including sociologists Harry Collins and Robert Evans, he insists that it is unrealistic to assume that expertise could become the property of all. And, because expert knowledge may be incompletely intelligible to non-experts, Turner sees no real institutional solutions to the problem of democracy in relation to expertise.
In any case, it is important to note at this point that the democratisation of expertise has its limits, and that this is as it should be. While science and technology have become progressively more familiar and demystified (with unprecedented access to more or less valid and useful information and the diffusion of science and technology findings and conclusions), it is simply not feasible that everybody can or should be able to participate equally in technical debates. As Collins and Evans stress, technical judgements (such as judgements to do with the estimation of risk) should be reserved for ‘those who know what they are talking about’. If this is not the case, then we run the risk of technological populism, with the boundary between the knowledge of the expert and the non-expert disappearing to the detriment of society as a whole. In such a situation of unchecked pluralism, expertise becomes entirely relativistic, with its potential reduced to individual subjective opinion. To put it starkly, where everybody is an expert, nobody is. Indeed, Collins and Evans have expressed fears that our loss of confidence in experts and expertise courts the emergence of just such a situation.
If technological populism is to be avoided while at the same time respecting the public’s rights to participate in and contribute to technical decision making, then this demands that we should be able to define those who know what they are talking about, and in which areas. Limits need to be set to public participation in technical decision making. As Collins and Evans argue, boundaries need to be set around the legitimate contribution of the general public to the technical part of technical debates. An appropriate balance needs to be drawn between expertise and public opinion.
Majdik and Keith’s attempt to solve the political problem of expertise by reinterpreting it as argumentation and locating it in a democratic setting holds some promise. In theory it broadens the potential for public participation in debate while not, on the face of it, fundamentally undermining expertise. However, it is not clear that Majdik and Keith have confronted the political problem of expertise, as they claim. This is because, for one thing, they have not dealt with the issue of whether experts can or should necessarily be able to explain their problem-solving rationale, or whether non-experts could or should necessarily be able to understand experts’ reasoning.
In any event, two threshold-level questions remain unanswered by Majdik and Keith. The first of these is the question of who actually qualifies as a technical expert for the purposes of the problem-solving process they outline. Secondly, while non-experts may, and arguably should, be involved in the problem-solving process, the basis on which this should occur, and in which particular areas, is not made clear. Therefore we need to turn to the work of others to explore who is an expert and how non-experts can contribute to the problem-solving process.
To assist in the necessary setting of boundaries to non-experts’ contributions to the technical part of debates Collins and Evans have developed a classification of expertise that is intended to ‘help put citizens’ expertise in proper perspective alongside scientists’ expertise’. But before considering this classification (summarised in the table below), it is first necessary to briefly outline just what they understand expertise to be.
Collins and Evans defend a realist approach to expertise. That is, they argue that expertise is the real and substantive possession of groups and that individuals acquire real and substantive expertise through their membership of these groups. While expertise is acquired through a social process—it is a result of socialisation in a particular field of study—individuals may nevertheless possess expertise independently of whether or not others think they possess expertise. Hence, for Collins and Evans, expertise is an objective and tangible phenomenon. It enables people to understand and do things that they could not understand and do before they gained their expertise. And, unlike people who might pretend to be experts or who may sound like or present themselves as though they were experts, people with expertise are able to do things that most other people cannot do.
This position may be contrasted with a relational (or social constructivist) understanding of expertise, which regards expertise as being simply attributed or socially constructed. While Collins and Evans acknowledge that certification and credentials play some role in identifying experts, for them, the key criterion for identifying experts is relevant experience. As they see it, in any given field there may be certified and experience-based experts, the latter of whom, although they are not accredited as such, are nevertheless experts. So long as someone has relevant experience in a particular field—irrespective of whether or not they have had formalised training and accreditation—they may potentially contribute to a technical debate in this field.
In short, for Collins and Evans, expertise is not only attributed or socially constructed, but also demonstrably real. Indeed, they argue that it is only by treating expertise as real—and experts as knowing what they are talking about—that we will be in a position to resolve the dilemmas associated with expertise that have been touched on above.
Adapted from H Collins and R Evans, Rethinking expertise, University of Chicago Press, Chicago, 2007.
As discussed above, where people do not have specialist expertise in a particular field, they are not in a position to contribute to the technical aspects of debates relating to that field. However, this necessarily raises the question: just what expertise do non-experts possess when it comes to technical areas—for example, areas with science and technology content? And, relatedly, what expertise is required for someone to be able to contribute to the technical part of debates? It is only through answering these questions that it is possible to begin to sketch the necessary boundaries of expertise.
Typically, non-experts’ specialist expertise is limited. Based on Collins and Evans’ classification of specialist expertises, most people would fall somewhere on a scale that ranges from what they term ‘beer-mat knowledge’ to popular understanding to primary source knowledge. These lower levels of specialist expertise might better be described as levels of knowledge, or even as information. This is because they may be acquired using ubiquitous tacit knowledge alone (such as natural language speaking and knowing what books and written sources are and how they might be obtained and used) and because they primarily involve learning facts or fact-like relationships.
Beer-mat knowledge, for example, is the sort of knowledge that would enable a person to answer a Trivial Pursuit question on a given technical subject. However, it is of relatively little further utility in that it ‘does not enable one to do anything much that one would not be able to do if one did not know it’. As its name implies, popular understanding is knowledge that can be gained through ‘gathering information about a scientific field from the mass media and popular books’. It is a slight advance on beer-mat knowledge in that it involves a deeper grasp of the meaning of information. This enables a person to at least make some use of the information. For example, Collins and Evans argue that based on a popular understanding of science a person might be able to draw an inference such as ‘antibiotics will not cure viral diseases, influenza is a viral disease, antibiotics won’t cure influenza’. Similarly, a person in possession of popular understanding is likely to be able to communicate this knowledge as a set of ideas, rather than simply a formulaic statement—as is the case for beer-mat knowledge. Primary source knowledge is based on a reading of primary or quasi-primary literature (such as journal articles), rather than that produced for a popular audience.
Where a person has a popular understanding of science, this is based on a simplified description of events and phenomena that is specifically designed for a popular audience. Such a description ‘hides detail, has no access to the tacit, and washes over scientists’ doubts’. Because this detail and qualification is absent, people who hold a popular understanding of an area of technical specialisation may believe their grasp of a given area to be far more extensive and sound than it is. In the case of primary source knowledge, a similar misapprehension is often evident, albeit for the opposite reason. Collins and Evans argue that the primary literature in areas of specialist expertise is very difficult and frequently so technical that just being able to read it ‘gives the impression that real technical mastery is being achieved’. The person with primary source knowledge may be doubly deluded into thinking that they understand a particular area, in that without contact with the scientists who actually carry out research in that area they do not know whether or not they are actually reading the relevant literature.
Each of the above forms of expertise has shortcomings, which Collins and Evans discuss in some detail. Perhaps the most relevant of these, in the context of this paper, and one that is evident in many people who possess either popular understanding or primary source knowledge, is that they may be deluded into thinking that they know more than they actually do.
As already noted, the forms of expertise considered above are almost exclusively based on the assimilation of knowledge but not necessarily its comprehension. They are founded on a person’s learning of facts or fact-like relationships, primarily from books or published papers and independent of the research itself. While this knowledge may be personally gratifying and provide its holder with a basic grasp of certain phenomena, it has potentially serious limitations and cannot be put to much meaningful use. It certainly does not equip its holders, who are ‘buffs’ at best, to contribute to the technical part of debates. For this, higher forms of specialist expertise—what Collins and Evans term interactional or contributory expertise, described below—are required.
Specialist expertise is, in its higher forms, something that is practical, something that is based in what can be done as a result of being a participant in a domain of expertise rather than, for example, as an isolated individual learning something through reading a book. It is based primarily on the tacit knowledge that is located in specialists’ activities and practices rather than in the research literature—in books and published papers. Indeed, because this specialist knowledge is tacit it is inextricably embedded in the activities and practices of experts themselves and cannot be codified or reduced to a set of written rules.
As a result, the only way to master, to a high level of expertise, a specialist field that is rich in tacit knowledge is to be immersed in the culture of that field. It is only through interaction with specialists and through common practice—enculturation in the field—that a person is able to learn the unwritten rules and skills that mastery of the domain demands.
Collins and Evans describe the first form of specialist tacit knowledge-based expertise as interactional expertise. This form of expertise does not involve the actual practice or performance of expertise. Instead, it entails mastery of the language of a specialist domain in the absence of practical competence. Somebody with interactional expertise has been steeped in the language and culture of a specialist domain. They are as a result not only familiar with the formal rules and facts associated with that domain—those aspects that can be written down—but also its tacit and informal rules, which are difficult or impossible to write down or describe verbally. While their immersion in the domain stops short of full-blown practical involvement—they are not able to ‘do’ the practical activity—they are nevertheless completely fluent in the language of the domain and with how the practical activity works.
An important characteristic of interactional expertise is that, as its title indicates, it straddles the ground between the formal or propositional knowledge that is generated by specialists in a domain and the informal or tacit knowledge that is the preserve of these specialists alone. This means that, potentially, it can form ‘the medium of interchange’ between expertise in the practice of a specialism and non-experts, or members of the public.
To illustrate, a music teacher may be able to teach a student even though they cannot necessarily play the particular instrument themselves. Through their immersion in the domain of activity they have mastered the language of the domain and may be able to teach some of that activity to students. What they are able to teach will be limited to those aspects that can be expressed through formal or propositional knowledge. This is because, as stressed above, the prerequisite for communicating informal or tacit knowledge is, by definition, immersion in the domain of activity. In short, this knowledge is only mutually understood by people with interactional expertise in the domain and those who are experts in the practice of the domain. A person with interactional expertise will thus be able to understand the particular domain of expertise; to discuss all aspects of it with fellow experts in the domain; and to communicate the knowledge of the domain more or less effectively through propositional statements; but will not be able to ‘do’ the practical activity of the domain. It is those experts who have full-blown physical immersion in the domain through their practical activity who make up the next category of expertise.
As suggested above, the ability to perform a skilled practice constitutes the highest form of specialist expertise. This form of expertise has been labelled ‘contributory expertise’ by Collins and Evans, in recognition of the fact that it enables its holders to ‘contribute to the domain to which the expertise pertains’. A contributory expert has ‘the ability to do things within the domain of expertise’.
Contributory expertise is acquired through a process in which the physical skills are internalised. A number of researchers in the field of education have attempted to map this sort of process, with one of the better-known and influential schemas being the five-stage model that is drawn upon by Collins and Evans. Essentially, this schema involves a move from novice to expert status as a person’s performance becomes progressively less rule-based and mechanical and more intuitive. When expert status has been achieved, skills and contexts are entirely internalised and non-conscious. The expert is able to unselfconsciously recognise complete contexts; their performance is related to these contexts in a fluid and seamless fashion and on the basis of cues that it is not possible for the expert to articulate. For example, an expert driver is able to drive a route entirely unselfconsciously, without having to think about their performance, just as they would do when, say, chewing. Indeed, when the expert’s performance is disrupted, for whatever reason, and becomes self-conscious, the task being undertaken is likely to be done less well.
It is important to reiterate at this point that a person may possess contributory expertise in a particular field, but not hold formal qualifications.
Perhaps the most well-known and influential illustration of this is elaborated in the work of UK sociologist, Brian Wynne. In the late 1980s, Wynne studied the interactions between Cumbrian sheep farmers and scientists from the UK Ministry of Agriculture, Fisheries and Food following the Chernobyl nuclear accident in 1986. Briefly, the scientists attributed radioactive contamination of the Cumbrian fells to fallout from Chernobyl. However, the sheep farmers were sceptical of the scientists’ assertions, based on their specialist farming knowledge and their knowledge of the Cumbrian fells, as well as their past experience of the relevant expert authorities and their behaviours (more on this below). Their observations and expertise led them to believe, correctly, that the source of contamination was not Chernobyl, but rather emissions from the closer-to-home Sellafield nuclear reprocessing complex.
The point is that while the Ministry scientists might have been dismissive of it, the Cumbrian sheep farmers did possess esoteric and highly localised expertise in sheep farming. Arguably, this expertise should have been recognised as such and should have enabled the farmers to contribute to the technical debate and decision making over how to deal with the nuclear contamination issue.
We may infer from the above discussion of specialist expertises that very few people are in a position to contribute to the technical aspects of any particular policy debate. Using Collins and Evans’ schema, it is possible to draw a general boundary between lower and higher level expertises. So, where does this leave people such as politicians and bureaucrats who need to participate in decision making processes on public policy matters that may have profound impacts? In the absence of higher forms of expertise that would allow them to understand or contribute to the technical aspect of debates, are there any forms of expertise available to them that could help to address the political problem of expertise?
In their classification, Collins and Evans distinguish between two main forms of expertise. The first form, dealt with above, is specialist technical expertise. The second form of expertise identified is meta-expertise. As this title implies, meta-expertises are forms of expertise that are used to judge other expertises. As such, these forms are of some interest to us. While non-experts might not be able to understand the content of technical debates, were they to possess some level of meta-expertise then this might at least enable them to make more-or-less reasoned judgements about the credibility and relevance of experts and their statements.
Collins and Evans identify two different types of meta-expertises—external and internal meta-expertises. External meta-expertises do not rely on acquisition of the expertise that is being judged; rather, they involve judgements of expertise based on an understanding of the individual experts themselves. By contrast, internal meta-expertises do involve an acquaintance with the substance of the expertise being judged. They ‘depend on a degree of technical expertise within the domain’. Given that we are largely concerned in this paper with what non-experts are potentially able to do in the face of the problem of expertise, our focus below is on external rather than internal meta-expertises.
According to Collins and Evans, external meta-expertises are made up of two forms of discrimination—ubiquitous discrimination and local discrimination. As is the case for ubiquitous knowledge, ubiquitous discrimination is something that is acquired as ‘a part-and-parcel’ of living in a society. Essentially it amounts to the social judgement of experts in a fashion similar to the regular judgements about ‘friends, acquaintances, neighbours, relations, politicians, salespersons and strangers’. As such, ubiquitous discrimination might also be referred to as ‘social expertise’.
For example, simply by being a member of a western society and exposed to mainstream media, people are able to distinguish—in general terms—which groups of experts should and should not contribute to the technical part of a debate. Based on their social understanding alone most people would recognise, for instance, that expert astrologers should not be contributing to the scientific element in a technical debate. Similarly, based on their social judgement, many members of society would be able to determine that claims that the moon landings were a hoax are simply not credible. Non-experts may not be in a position to make a technical judgement on the question of whether or not the moon landing footage was genuine. Nevertheless, they would be sufficiently aware of the difficulties associated with sustaining such a fiction to conclude that the claims strain credulity.
Ubiquitous discrimination thus turns on the knowledge of people:
on whether the author of a scientific claim appears to have the appropriate scientific demeanour and/or the appropriate location within the social networks of scientists and/or not too much in the way of a political and financial interest in the claim.
As Collins and Evans emphasise, ubiquitous discrimination enables people to make ‘social judgements about who ought to be agreed with, not scientific judgements about what ought to be believed’.
Like ubiquitous discrimination, local discrimination is based on a social assessment of the experts in a given field and not on an understanding of the expertise itself. However, whereas ubiquitous discrimination is founded on a generalised capacity for discernment that is shared by more-or-less all members of a society, local discrimination is based on experience that is closer to home.
A useful illustration of local discrimination is provided by the example of the Cumbrian sheep farmers, mentioned above. These farmers had in the past, along with other residents of the area surrounding the Sellafield nuclear processing plant, heard or read a number of pronouncements from nuclear industry representatives concerning radioactive contamination at the plant. Drawing on their local experience of such pronouncements and their misleading or obfuscatory nature, the farmers and residents knew to be wary about these assertions. Moreover, based on their local experience, they were able to skilfully analyse and judge the substance of the statements made. As Collins and Evans emphasise, an outsider to the area who had not been exposed to the discussions concerning radioactive contamination in the particular social and geographical location would not have been able to exercise such discrimination. Any assessments made by these non-locals would have been made based on a more generalised discriminatory ability. They would have been ubiquitous rather than local discrimination.
Both ubiquitous and local discrimination rely on social judgement. They are based on judgements of things like experts’ conduct, past track record, the coherence of their statements and whether or not they occupy an appropriate social location to qualify as an expert in a given area. Because ubiquitous and local discrimination both rely on social judgement to produce technical discrimination, Collins and Evans describe them as transmuted expertises.
If non-experts are to judge experts on a reasonably sound basis, that is, without recourse to stereotypical appearances and/or behaviours, then this demands the use of more-or-less reliable criteria. Collins and Evans identify three externally measurable criteria that may be used by non-experts to judge between experts—namely, credentials, track record and experience. These criteria may also help to define the essential boundary of expertise.
Credentials are perhaps the most frequently employed means of measuring expertise. Credentials such as diplomas, academic degrees and certifications provide some indication of a person’s competence, learning and skills in a given domain. They are, in effect, the institutional recognition of a person’s past achievement of proficiency. An expert’s track record—evidence of their actual performance or accomplishment in an area—is, Collins and Evans maintain, a better criterion for establishing their expertise than are credentials. Where an expert has a demonstrable record of making sound judgements in their field, this provides lay people with a reasonably useful means of sorting out who knows what they are talking about.
That said, as is the case for the criterion of credentials, track record excludes too many forms of expertise, for which a past record of success is unlikely to be available. For example, there is no track record for the ubiquitous or local discrimination of the public. And, even in areas where the track record of experts is typically available, such as that of qualified scientists and technologists, where there are new fields or fields in which there are disputes or ambiguity, there are often no clear-cut track records. For example, in the early stages of the climate change debate there were no ‘climate scientists’, and the debate cuts across multiple established areas of expertise. The fact that track records are more likely to be available in areas where disputes are shallow rather than deep means that the criterion of track record is likely to be of little utility for sorting between experts in the complex areas with which we are most concerned.
The third method for criterion-based judgement of expertise is an expert’s experience in a domain.
Collins and Evans argue that it is the criterion of experience that is of most use in setting the boundary between the knowledge of the layperson—public opinion—and that of the expert. This is because experience in a domain ‘nicely includes [all of the categories of expertise identified above] while excluding the general public from technical domains [since] without experience within a technical domain, or experience at judging the products of a technical domain, there is no specialist expertise’. And, without specialist expertise, ‘the minimal standards for making judgements in [technical] areas have not been met’. While experience in an area does not guarantee that an expert is competent, it is nevertheless a prerequisite for inclusion in the class of experts.
Collins and Evans’ classification of expertises is ‘intended to help us to make decisions about who counts as an expert and who does not in respect of the specifically technological aspects of technological disputes in the public domain’. What the classification does not do is demarcate the various fields of expertise, ‘all of which have their own experts with their own expertises ranging from beer-mat, to contributory, to meta-level’. This is why in the case of any public policy dispute with a technical element it is important to establish, to the extent that this is possible, just what the problem is, and which fields of expertise are relevant to its solution. It is only then that we are in a position to determine who might be expected to be the relevant experts within those fields.
Identifying the policy problem to be solved and the relevant fields of expertise are by no means straightforward tasks.
As noted above, policy makers are increasingly confronted with the need to tackle complex policy problems. Due to their complexity and their seemingly intractable nature, some of these problems have been dubbed ‘wicked’ problems. Wicked problems have several characteristics, including being exceedingly difficult to define and having multiple causal factors. As a result of these characteristics, wicked problems can be defined in many different ways, and there is likely to be competition among any number of experts (and other stakeholders) over these alternative definitions of the problem. In such a situation, experts, along with other stakeholders, are likely to be involved either directly or indirectly in the actual framing of the problem. In doing so, they will be drawing, to a greater or lesser degree, on their own values and ‘their interpretation/expectation of what decision makers ought to value’. Policy makers are then obliged to weigh the experts’ assessments with their own values in arriving at a definition of the problem. In turn, this is likely to influence their selection of expertise, in what is essentially a political decision.
In addition, the expansion of knowledge in almost all disciplines and the constant creation of new technologies means that there is always a great deal more to be known. This demands ever increasing levels of intellectual specialisation and the development of more fields of knowledge and expertise. Increasingly, experts are only able to focus on relatively small areas of knowledge and practice if they are to master them. This, combined with the fact that many fields of expertise can be highly abstract and esoteric, can complicate the task of isolating the relevant field of expertise.
On the face of it, in some areas it is relatively easy to establish the relevant field or fields of expertise required for solving a given public policy problem. This is because in these fields expertise has been institutionalised by being turned into a profession. That is, a group of experts has claimed jurisdiction over the skills required to be qualified to practise in the field, with access to practice in this field restricted to those people who have undertaken the relevant training and gained formal credentials. An obvious example of such specialisation is the field of medicine, which is itself made up of non-specialists (general practitioners) and specialists.
It must be stressed, however, that as discussed above, expertise is not necessarily confined to those people who hold formal credentials in a field. Professional credentials can provide a useful starting point for identifying relevant fields of expertise, since in theory they clearly delineate those fields. That said, they should certainly not be seen as the end of the matter—as defining the boundary or the limits of expertise in that field. As emphasised above, so long as people have relevant experience in a given technical area, then they may be seen as experts for the purposes of contributing to the technical aspects of a public policy debate. This is irrespective of whether or not they hold credentials and are thus officially members of a field of expertise. The key issue is the extent to which their expertise is relevant to the particular aspect under debate.
The categories of expertise defined above are not mutually exclusive. A person is likely to hold varying levels of expertise in a range of different fields, and their capacity to contribute to the technical aspects of a debate in one field should not be assumed for other fields. For example, an economist might take an interest in the debate over whether or not wind turbines have adverse health impacts on residents who live close to wind farms. While they could be classified as having contributory expertise (according to Collins and Evans’ schema) in, say, the field of micro-economics, they might only possess primary source knowledge in a field such as medicine or acoustics. In the absence of referred expertise or relevant experience, clearly, such a person’s expertise would not qualify them to contribute to the technical aspects of such a debate. On the other hand, they may be well-placed to contribute to a consideration of the cost-benefit analysis of the issue.
As Collins and Evans themselves stress, the boundaries of their classificatory scheme are ill-defined. More ‘boundary work’ is required in order to determine who is a legitimate commentator in relation to the technical aspects of any particular dispute. Similarly, decisions would need to be made about how lower level expertises should be balanced with higher level expertises in the context of any given decision. That said, as a general rule, we are able to say that the closer a person is to the matter in question, the more likely they are to make better judgements about experts and their claims. This rule applies to all of the forms of expertise described above.
This is not to say that lay people should not or cannot have a role in technical debates. Indeed, as decision makers, policy makers are often obliged to play such a role. However, that role should relate to the political aspects of the debate rather than the specifically technical aspects.
To illustrate, in a debate about whether or not coal seam gas exploration should be expanded, technical questions relating to the potential risks associated with such an activity are the preserve of the relevant experts. These are most likely to be accredited scientists and engineers. However, the appropriate experts could also include people with contributory or interactional expertise in the area but without formal qualifications. To the extent that there might be expert disagreement, the non-expert may use their social expertise (ubiquitous and/or local discrimination) to make a judgement about who to believe. Crucially, though, they are not in a position to judge what to believe. But when it comes to the question of whether or not the risks identified as being related to coal seam gas exploration are acceptable, the public has a legitimate role.
In the above section, we have drawn on the work of Collins and Evans to provide a brief overview of various forms and levels of expertise. This is intended to furnish something of a guide to who is and who is not in a position to legitimately contribute to the technical part of debates over aspects of public policy. The focus of the paper now shifts from identifying who are the relevant experts, to establishing how non-experts may use social expertise to evaluate claims made by technical experts.
For social expertise to be of any practical value there need to be methods for putting it to use. As with specialist tacit forms of expertise, there needs to be some way of deciding which of these methods is the best to use in a given situation. There also needs to be some way of understanding what it looks like to use these methods well and not so well.
As such, this section of the paper examines some of the main options for putting social expertise into practice. In doing so, the objective is to provide a framework for using social expertise and, just as importantly, for using it well. The focus here is on ubiquitous discrimination, rather than local discrimination, given that the overwhelming majority of non-experts will generally only have access to the former in a given discussion of technical expertise.
While Collins and Evans’ work identifies social expertise as a distinct and legitimate form of expertise, it does not clarify in any great depth how it might be used. The most detailed work in this area is in the epistemology of testimony, that is, the area of philosophy concerned with how we should evaluate what others tell us; when we are justified in believing what others tell us; and when this can be considered to amount to knowledge.
The 18th century philosopher David Hume provided an early and influential attempt to develop a framework for evaluating testimony in his An enquiry concerning human understanding. In this work, he suggested the following evidence as relevant to considerations of whether a given piece of testimony should be believed:
… we entertain a suspicion concerning any matter of fact, when the witnesses contradict each other; when they are but few, or of doubtful character; when they have an interest in what they affirm; when they deliver their testimony with hesitation, or on the contrary, with too violent asseverations.
More recently this has been developed further by epistemologists in work on how non-experts decide who to believe in cases of expert disagreement. According to the epistemologist, Alvin Goldman, in such cases:
The task for the layperson who is consulting putative experts, and who hopes thereby to learn a true answer to the target question, is to decide who has the superior expertise, or who has better deployed his expertise to the question at hand. The novice/2-experts problem is whether a layperson can justifiably choose one putative expert as more credible or trustworthy than the other with respect to the question at hand, and what might be the epistemic basis for such a choice?
Goldman proposes a number of different types of evidence that a non-expert might consider in order to establish that the word of one expert is more credible than that of their rival. These can be summarised as:
- can I make sense of the arguments?
- which expert seems the more credible?
- who has the numbers on their side?
- are there any relevant interests or biases? and
- what are the experts’ track records?
This list forms the basis of the framework for using social expertise examined in the remainder of this paper. While Goldman applies this evidence to the problem of deciding between experts who disagree (the ‘novice/2-experts problem’), it can be equally applied to situations in which there is no expert dispute as such but in which a non-expert is simply seeking to evaluate a single piece of expert testimony (the novice/expert problem). There is notable overlap between Goldman’s considerations and those proposed earlier by Hume.
The one consideration from the above list not included in the framework is whether non-experts can make sense of the arguments put by experts in favour of their respective positions. There are two reasons for this. First, while not all statements made by an expert will be esoteric (that is, inaccessible to a non-expert), such statements will by definition loom large in any expert discussion. In the words of epistemologist, David Matheson, when it comes to novices acquiring good reasons for giving greater credence to one expert over another:
The worry, in a nutshell, is this: it seems that in order to acquire such reasons, you would need to in effect have become something of an expert in the relevant domain of expertise. But you can’t do that while remaining a layperson.
Secondly, any attempt to directly grapple with the arguments made by experts takes us out of the domain of social expertise.
As highlighted from the outset of this paper, it is important to note that non-experts will rarely be in a position to judge experts and expertise directly. While this is less true of policy makers, who typically have greater direct access to expertise, in some cases assessments will be based on media or other third party representations of experts and expert claims and these accounts are likely to be more or less analytical and politicised. Non-experts are thus required to use their social expertise to assess experts and their claims directly where possible, and where this is not possible, to also assess those sources that present experts and their claims.
The discussion below makes particular reference to the work of Goldman, other epistemologists working in this area such as David Coady and David Matheson, as well as some contributions from argumentation theory. The attempt to draw on both fields is important because of the focus of each on different but equally significant aspects of the problem.
For epistemologists, the focus is on when to accept expert ‘testimony’—that is, more or less formal conclusions provided by experts in the form of debates, appearances as expert witnesses, written publications and so forth. Argumentation theorists, on the other hand, are sceptical of reliance on formal testimony and focus on the communicative aspects of expertise. In other words, they are interested in contextual matters such as how people argue and reach conclusions. Overall, the following section draws more heavily on the epistemological approach (how belief in one expert over another can be justified). However, work in argumentation theory has been particularly useful in clarifying issues arising from a number of the strategies discussed below.
A common approach to evaluating the credibility of experts is what Goldman calls ‘indirect argumentative justification’. By this he is referring to one expert being able to demonstrate ‘dialectical superiority’ over the other in, for example, a debate between experts in which each puts arguments in favour of their case and against that of their opponent. He argues that this could serve as a ‘plausible indicator’ of one expert’s superior expertise, thereby providing the novice with at least some justification in believing the superior speaker’s conclusion. There are three kinds of indirect argumentative justification identified in the literature: rhetorical, moral and explanatory.
Rhetorical argumentative performance has been discussed by Goldman, who proposes that evidence of superior rhetorical performance can be taken as (non-conclusive) evidence that an expert has ‘a superior fund of information in the domain’. He suggests, as an example, a situation in which one expert is able to provide an ostensible rebuttal or defeater to the other’s evidence, while the other never manages to do so.
Goldman suggests, as a further (albeit ‘far more tenuous’) example, a situation in which an expert responds with greater ‘quickness and smoothness’ than the other. This may suggest that the former has a ‘prior mastery of the relevant information’ that exceeds that of the latter.
Goldman suggests that at the very least, indirect argumentative evidence can provide a novice with some reason to believe that ‘one expert has better reasons for believing their conclusion than her opponent has for hers’. While it may not be easy for a novice to acquire such justification, he argues that it does seem to be possible.
Moral and explanatory aspects of argumentative performance have been subsequently proposed by Matheson as additional markers of superior expertise. The first of these is ‘greater receptivity to new relevant evidence’. Matheson proposes this as an indicator of greater credibility, particularly where such evidence threatens to call into question an expert’s previous opinion. He argues that the idea of greater receptivity ‘corresponds to a cognitive trait long recognised as a virtue in the pursuit of higher knowledge: open-mindedness’. Such receptivity, he argues, would be characterised by such things as:
- providing more charitable answers to a conflicting expert’s opinions
- affording more fair opportunity to the conflicting expert to express her opinions
- expressing greater interest in the conflicting expert’s evidence
For Matheson, greater receptivity to new evidence is best described as ‘moral’ rather than rhetorical. The emphasis here is on ‘moral superiority’ in the expert’s performance; their capacity to adopt the moral ‘high ground’ in the argument by being receptive to new or conflicting evidence is equated with greater overall knowledge of the area under discussion.
Matheson’s second proposal in relation to moral argumentative performance is that ‘greater sensitivity to misleading evidence’ should be seen as an indicator of greater credibility. That is, ‘more readily admitting the falsity (or otherwise epistemically problematic nature) of one’s previous opinions’ on the question at hand ‘upon recognition of this falsity (or otherwise epistemically problematic nature)’. Ready admission of past failures is again taken to indicate greater knowledge of the area under discussion.
Explanatory superiority, Matheson’s other form of argumentative performance, is based on the notion that experts with a ‘greater ability to manage relevant evidence’ should be afforded greater credibility. By this he means an ability to relate distinct pieces of evidence ‘both to each other and to the question concerning which they are relevant’ or, alternatively, an ability to see how the various pieces of evidence in one’s possession ‘fit together into one coherent whole, relevant to the question at hand’. This is a relevant consideration, he argues, because this ability makes it more likely that one possesses the explanatory knowledge of why these pieces of evidence are relevant to the issue at hand, or of why they count as good pieces of evidence. Explanatory superiority might be characterised by such things as:
- explaining more successfully the nature and relevance of the evidence presented—whether presented by the expert herself or the conflicting expert—to the layperson
The focus is on the competence of the expert in managing evidence, rather than their apparent capacity to provide rebuttals or the ‘quickness and smoothness’ of their responses.
There are obvious advantages for the non-expert in using the indirect criteria proposed above in evaluating expert claims. First, it allows non-experts to apply the kinds of social judgement they use in everyday life in assessing credibility claims (for example, as consumers of information, goods and services). Second, expert testimony is a relatively accessible form of evidence, especially given the increasing appearance of experts debating their conclusions in broadcast media, access to such debates on university websites, blogs, YouTube and so forth, as well as experts directly providing evidence to committees and public inquiries.
There have, however, been a number of serious objections raised in relation to the use (or at least unsophisticated use) of indirect argumentative justification. One objection, raised by Coady, is that indirect evidence of this kind can be ‘quite unreliable’. A situation in which an expert is able to provide a greater number of rebuttals than their opponent could be a sign of superior grasp of the subject matter, or it could mean the opposite:
Given the imperfect nature of our understanding of the world, we should expect that even the greatest experts will often be unable to offer effective rebuttals to all apparent counterevidence to their views, and we may reasonably be suspicious of experts who think they can. We may reasonably construe their attitude as evidence that they are fixated on a view to the point that they are unwilling to admit the existence of any evidence against it.
Goldman himself has acknowledged this problem, noting that superior performance may simply be the result of better debating skills or ‘stylistic polish’, which makes ‘the proper use of indirect argumentative justification a very delicate matter’. Similarly, Coady cautions that an expert who is able to provide quick, smooth responses in a debate may simply be an expert ‘in the dark art of rhetoric, an art which consists, to a large extent, in knowing how to appear to be an expert’. As Brewer argues (in the context of expert scientific testimony in the legal system):
Demeanour is an especially untrustworthy guide where there is what we might call a lucrative ‘market’ for demeanour itself—demeanour has “traded” at high prices since the days of the sophists and finds exceptionally robust business in adversarial legal systems. When judges and juries use demeanour as a test for the credibility of expert evidence, they face this severe difficulty: Epistemic warrant and persuasiveness diverge, especially when the “persuadee” has too limited an epistemic capacity to be able to assess competently the epistemic warrant of testimony independently of the criteria that make an expert seem persuasive.
Coady also suggests that quickness and smoothness of response may be a sign that the expert is not giving counterarguments the consideration they deserve.
Matheson’s proposal for addressing the problems with Goldman’s rhetorical approach is to bolster it with some additional forms of argumentative performance (as discussed above, moral and explanatory) from which novices might gain evidence. However, he suggests that there are still likely to be ‘significant conflicts’ arising from the layperson’s attempt to utilise them. One would expect the rhetorical/moral/explanatory approaches to ‘pull against each other’ and to ‘render conflicting verdicts in a great many particular cases of conflicting expert testimony’. However, this is no different, he argues, from the conflicts that arise within, for example, science over which hypothesis ‘is the best among available competitors’. As such, he suggests that while ‘layperson adjudication’ based on argumentative performance is likely to be messy, the corresponding gain may well be worth the price. That is, similarly to Goldman, Matheson finds that it is a strategy with potential pitfalls but one worth developing in the interests of bridging the gap between experts and non-experts.
Coady’s critique of Goldman’s position, however, suggests that indirect argumentative justification may be even messier than Goldman and Matheson believe. According to Coady, context is all. It is doubtful that any marks of dialectical superiority are universal:
It seems much more likely that marks of dialectical superiority are dependent on subject matter and the context in which the arguments are presented. If there were universal marks of dialectical superiority, it is reasonable to suppose that practitioners of rhetoric would be familiar with them by now, and would have learnt how to acquire them or successfully imitate them. This would have the effect of undermining their status as marks of superior expertise. Attempts to identify marks of expert argumentative performance are reminiscent of attempts by psychologists to identify marks of honesty. Sometimes it is possible to know whether someone is being honest. Sometimes it may even be easy. But we have reason to be suspicious of the usefulness of rules for identifying honest speech, which purport to apply independently of the subject matter of the speech or the context in which it is delivered.
It seems reasonable that such suspicion would be as relevant to Matheson’s markers of ‘moral’ and ‘explanatory’ superiority as it is to Goldman’s ‘rhetorical’ superiority. Markers of argumentative openness or explanatory ability may tell a novice something about an expert’s superior understanding of a domain, but equally they may not. An expert may simply be skilled at creating the appearance of such markers, in the same way that someone can be an expert in the art of rhetoric. The point is that, to be reliable, indicators of credibility must be robust; they must be difficult to fake.
This suggests that non-experts should use great caution in relying on indirect argumentative justification in choosing to believe one expert over another. Those using this strategy would need to be alert to the possibility of alternative interpretations of aspects of argumentative performance such as those outlined above. Another important point is that opportunities for non-experts to directly observe experts in the process of argumentation are relatively rare. As such, non-experts would be wise to use indirect argumentative justification in combination with other strategies, rather than on its own.
One of the strategies most commonly used in public debates for justifying acceptance of one expert’s conclusions over a rival’s is that more experts agree with the former than with the latter. The most familiar current example of this is the argument that there is an overwhelming consensus among climate scientists that human activity is having an impact on the climate. In Coady’s words:
I speak for many people when I admit that when asked to defend my belief in anthropogenic climate change, I can do little more than point to the fact that the overwhelming majority of climatologists (presumably the experts on the topic) believe in it.
Goldman identifies two versions of the ‘going by the numbers’ approach. The first looks to how many experts in the same field agree with either of the experts in dispute. Presumably Goldman is referring here to contributory experts but, using the expanded definition of specialist tacit knowledge discussed earlier, it could possibly also include interactional experts (those completely fluent in the language of an area of expertise, while not actually able to ‘do’ the relevant practical activity done by contributory experts). In relation to the question of the impact of human activity on the climate, this would involve evidence about the numbers of climate scientists (or those with interactional expertise in climate science) on either side of the debate.
The second version looks not to numbers of experts in the specific field under discussion but ‘meta-experts’ who can provide evidence about the rivals’ relative levels of expertise. In this version, a non-expert would be justified in believing the conclusions of an expert over that of a rival if evidence existed that meta-experts regarded the expertise of the former more highly than that of the latter.
Harry Collins and Martin Weinel have suggested that lay people are relatively well placed (even compared with some scientists) to employ the strategy of going by the numbers—understood as being able to ‘read the scientific consensus’ on a particular controversy. Using the example of cold fusion, they argue that:
There was a time when cold fusion was continuous with science but there were now enough clues in the mass media to indicate that its cognitive and social networks had drawn apart from ordinary scientific society. Crucially, to make this judgement it was essential to ignore scientific credentials or track records. Thus Martin Fleischman [sic], the co-founder of the cold fusion field, had an enviable track record for success in the sciences, was immensely well-qualified, honoured as a Fellow of the Royal Society, yet still believed in the effect, contrary to the scientific consensus. To expect the citizen to be sufficiently educated in science as to be able to make a technical judgement that went against Fleischman was, of course, ridiculous but to rely on qualifications or track record was just as bad. The crucial judgement, however, concerned whether the mainstream community of scientists has reached a level of social consensus that, for all practical purposes, could not be gainsaid in spite of the determined opposition of a group of experienced scientists who know far more about the science than the person making the judgement. Note that this is not the sort of judgement that we would expect even an immaculately qualified scientist from “another planet” to be able to make. A scientist from another planet, reading published papers for and against cold fusion, would have difficulty working out who was right; the scientifically ignorant citizens of this planet, in contrast, had a relatively easy decision to make.
For Collins and Weinel, going by the numbers constitutes a legitimate form of social expertise for use by non-experts. However, they acknowledge that one practical problem with this strategy is that a scientific consensus may not be available:
Policy decisions often have to be taken immediately and cannot wait for the complete closure of a scientific controversy which might take a generation. Thus, when a science is characterised by controversy, policy-makers are faced with the problem of making decisions that turn on science without scientific consensus to fall back on.
The absence of a consensus might, in some cases, be a result of scientific disagreement and, in others, of judgement having been suspended while scientists wait to see whether the particular finding can be reproduced by other scientists.
Goldman provides two further examples that raise doubts about going by the numbers as a strategy. First, he suggests the case of a ‘doctrinal community whose members devoutly and uncritically agree with the opinions of some single leader or leadership cabal’. Should, he asks, the size of such a community make their opinion more credible than that of a less numerous group of experts? Second, he raises the issue of rumours, noting that rumours are not lent further credibility through repetition by further sources.
However, Goldman asks, if we are speaking of experts (people with ‘positive initial credibility’) rather than mere ‘rumour spreaders’ shouldn’t greater numbers add further credibility? Surely the additional experts can be relied upon to exercise their expert judgement and be assumed to have some credibility/reliability? The problem with this, argues Goldman, is that the non-expert can never automatically count on this being the case. In Goldman’s words:
The same point applies no matter how many additional putative experts share an initial expert’s opinion. If they are all non-discriminating reflectors of someone whose opinion has already been taken into account, they add no further weight to the novice’s evidence.
This view is shared by Kelly, who argues that ‘numbers mean little in the absence of independence’. Further, Almassi applies this principle to the case of global warming, arguing that:
… were one to discover that all climatologists believe in global warming entirely on the basis of a single scientist’s research, while global warming skeptics believe on mutually independent grounds, the novice bystander ought not to be swayed by the numbers favouring global warming.
Nevertheless, Goldman does allow that, if the non-expert had reason to believe that the second expert had used an independent route to their belief, rather than a route that guaranteed agreement with the first expert, there might be grounds to give consideration to the numbers in a debate:
Certainly in the case of concurring scientists, where a novice might have reason to expect them to be critical of one another’s viewpoints, a presumption of partial independence [between experts] might well be in order. If so, a novice might well be warranted in giving greater evidential weight to larger numbers of concurring opinion holders.
Goldman qualifies this by adding that such a warrant should really only apply in cases where ‘all or almost all supplementary experts agree with one of the two initial rivals’. The overwhelming consensus that human activity is leading to climate change would be an example of this phenomenon.
However, Goldman suggests that such cases are rare and that ‘vastly more common are scenarios in which the numbers are more evenly balanced, though not exactly equal’. In these more common scenarios, going by the relative numbers would be a questionable approach. The non-expert would need to weigh up which of the members of the opposed sets of concurring opinions are (a) more reliable and (b) more independent of one another. However, in doing so, they might encounter the situation in which the members of the smaller group are more reliable/independent than the larger one, possibly implying that the numbers of the smaller group should be given greater weight. Given that such matters are generally opaque to the novice, ‘there will be many situations in which he has no distinct or robust justification for going by the relative numbers of like-minded opinion holders’.
Coady, on the other hand, takes a far more optimistic view of the going by the numbers strategy, suggesting that the problem of ‘non-discriminating reflectors’ has been overstated by Goldman and others. In relation to Almassi’s position on climate science, quoted above, he argues that:
… if I were to discover that the climatologists who believe in global warming do so entirely on the basis of a single scientist’s research, while those on the other side of the debate reach their conclusions on mutually independent grounds, I may still be rationally be [sic] swayed by the numbers favoring [sic] global warming. Whether, and in what way, such a discovery should affect my confidence in global warming cannot be decided in the abstract. Such a discovery could rationally reduce my confidence in global warming, but equally it could rationally increase my confidence, or leave it unaffected. It would all depend on the details.
Why does Coady think that the existence of non-discriminating reflectors could be used as evidence in favour of a proposition? The argument turns on the matter of meta-expertise. In a situation where an expert (Y) was a non-discriminating reflector of another (X) because they believed them to be an expert in a particular field, the novice’s confidence in X’s expertise would be rationally increased by their confidence in Y’s meta-expertise. This meta-expertise ‘consists in Y’s knowledge of (or justified belief about) the scope and extent of X’s expertise’.
Coady goes further and argues that confidence in X’s expertise may be increased even where it is discovered that all but one of those who agree with X had taken a partially autonomous causal route to their belief:
Suppose, for example, that the only even partially autonomous causal routes to belief available to the nondiscriminating reflectors are intuitive inductions from their own personal experiences. They may be [for example] poor meteorologists, but good judges of meteorologists, and I, as a novice, may rationally judge that this is so.
The point is that non-discriminating reflection in the form of meta-expertise may be the more justified response in certain circumstances.
Coady also highlights the role of collaboration in the development of expertise, suggesting that this may be a case in which non-independence should be taken as a virtue, rather than as evidence that one should have less confidence in going by the numbers. For example, he argues that:
… the scientists who believe in anthropogenic climate change have not reached their conclusions entirely independently of one another. The Goldman/Elga/Kelly view implies that recognition of this (probable) fact should reduce the collective authority of these scientists in the eyes of laypeople, and that we should be less confident of expert consensus (or near consensus) in a proposition to the extent that it is the product of teamwork. But this seems to be clearly wrong. The fact that a scientific consensus (or near consensus) is not the result of scientists independently arriving at the same conclusion should not undermine its significance from a novice’s perspective. If anything, it should strengthen it.
Such teamwork should strengthen a novice’s confidence because:
Research on global warming and its causes clearly requires people to take measurements in a wide range of places over long periods of time, and it is obvious that these measurements could not be performed adequately by a single scientist. If we were to discover that the majority of climatologists were irrational enough to think otherwise, our opinions of their intellectual capabilities and hence our faith in their conclusions, would rationally be reduced.
Coady’s point is not that non-independence should always strengthen a novice’s confidence in the ‘numbers’ but rather that it should not be ruled out. As noted above, such evidence could reduce a person’s confidence in the numbers, increase it, or leave it unaffected depending on the details.
Even allowing for the validity of Coady’s more optimistic view of going by the numbers, it is an approach that should be used cautiously. Arguably, it is most justifiably used in cases where the numbers are substantially in favour of one side. In Hume’s words:
… a hundred instances or experiments on one side, and fifty on another, afford a doubtful expectation of any event; though a hundred uniform experiments, with only one contradictory, reasonably beget a pretty strong degree of assurance.
As with other approaches discussed in this section, the numbers approach is probably best used in combination with other approaches (particularly where the numbers are close). There is also the practical problem for most non-experts of knowing how they might feasibly go about determining which side of the argument has the numbers. A related problem is that lay people may not be in a position to tell if the large numbers of experts in agreement are indeed basing their agreement on one piece of research (that is, are acting as non-discriminating reflectors) or not. It may appear that there are many independent strands of evidence when in fact there may not be.
Another commonly used strategy in deciding between conflicting expert claims is to look for evidence of distorting interests and biases. Goldman appears to look more favourably on such evidence than on either of the two preceding strategies, arguing that it comes ‘directly from common sense and experience’. He argues that, if a non-expert has ‘excellent evidence’ of bias in one expert and no evidence of bias in the rival expert (and has no other basis for ‘preferential trust’), then they are ‘justified in placing greater trust in the unbiased expert’.
Goldman cites a number of examples of how interests and biases can reduce an expert’s trustworthiness. These include outright lying, but also a number of more subtle ways in which interests and biases can have a distorting influence on expert opinion, such as:
- a bias that might infect a whole discipline, sub-discipline or research group
- exclusion or underrepresentation of certain viewpoints or standpoints within a discipline or expert community (often emphasised by feminist critics)
While some of the above are not easily detected or understood by novices, Goldman argues that interests are ‘often one of the more accessible pieces of information that a novice can glean about an expert’. In some cases, both members of a pair of conflicting experts may have potentially distorting biases but ‘where non-negligible differences exist on this dimension, it is certainly legitimate information for a novice to employ’.
However, the argumentation theorist Frank Zenker has questioned the legitimacy of the interest-based approach to evaluating expert arguments. Zenker has two main objections. First, he argues that the motivational role of self-interest in human behaviour is unclear. Second, he suggests that, because the only way to demonstrate that an expert’s bias has had a distorting impact on their conclusions is through an assessment of the merits of their arguments, one does not need to speculate about distorting biases—’the conflict of interest-objection does not, as it were, yield additional mileage’.
In relation to his first objection, Zenker begins by noting that the interest-based approach, when used by competing experts, is a circumstantial variant of the ad hominem form of argument. That is, it raises suspicion about the motives that lie behind the standpoint of an opponent: they have an interest in the matter, and their position is therefore presumed to be biased. While, as noted above, Goldman suggests that evidence of bias can be legitimate information for a novice to employ, Zenker suggests that there are ‘diverging positions on the role that self-interest plays (or should play) when it is identified as bias’.
For example, he highlights recent literature suggesting that, rather than a distorting factor, biases are integral to processes of argumentation. ‘To explain the effect of bias on argument’, he suggests ‘one might postulate an interest in maintaining one’s personality (“one’s acquired system of beliefs”), come what may’. Zenker connects this with the views of 18th and 19th century writers like Hume and Tocqueville who, for example, viewed self-interest as congruous with the common good. He also highlights recent work in the social sciences (most notably, behavioural economics) indicating that, rather than simply being motivated by maximisation of self-interest (as postulated in classical economic models), human actions are characterised by more complex processes involving acceptance of suboptimal solutions. According to this work, humans are ‘satisficers’—that is, faced with the problem of inadequate knowledge about what might be the best available option, they will choose among a more limited range of (acceptable) options. On this basis, Zenker postulates that ‘humans are equally satisficing when it comes to their self-interests, and twice over too: first, when discerning them, and secondly when acting upon them’. This, he suggests:
… poses some difficulty for the view that, in particular cases, self-interest motivates (“drives”) human action: The self-interest may … not merely be ill-identified, also the ensuing action may not fully serve that interest but, again, satisfice it.
Zenker also highlights the difficulty involved in defining what self-interest is, how it is to be identified and how its effect might be measured. While one can develop categories such as narrow versus wide self-interest, this does not assist matters greatly, given that:
Notably, narrow interests (e.g. “my prosperity and perceived dignity”) may conflict with wider ones (e.g. “our sustainable development”).
Finally, in relation to the role of self-interest, he takes a position close to that of the sociologist and philosopher Jürgen Habermas: that ‘completely dis-interested standpoints are beyond human capacity’. As such, the issue becomes not one of the presence of interests but of conflicts of interest—that is, circumstances in which decisions regarding a primary interest are at risk of being influenced by a secondary interest.
Zenker suggests that novices will inevitably raise questions about such conflicts in the process of deciding which experts to trust. In response, he suggests, experts might pre-emptively ‘reveal their conflicts of interest before they engage in the public context’. Munnichs has in mind something similar, though more formalised, in his proposal that the ‘expectation of expert impartiality’ be abandoned and replaced by structured processes of ‘expert contestation’ accessible to both experts and counter-experts. According to Munnichs, such a process would form a prerequisite for public trust.
However, Zenker notes that such disclosures are not without their problems. He highlights evidence that voluntary disclosure of conflicts of interest information can have adverse effects—for example, causing experts to overstate their conclusions to ‘correct adverse perceptions caused by disclosed biases’. Nevertheless, Zenker suggests that this could be overcome given ‘sufficient experience and reputation of the expert’.
The bigger problem for Zenker, though, is that interest objections add little to attempts to evaluate the validity of an expert argument. For fellow experts, the ability to demonstrate the distorting impact of a conflict of interest will be on the basis of the quality of the arguments raised (their ‘content merits’). However, having done so, one does not need the interest-objection—’it should suffice to demonstrate that the expert’s claim is unsupported, or not supported with the purported argumentative strength’.
Rehg illustrates this problem with reference to claims by Judith Curry, an established and active climate scientist and prominent critic of the Intergovernmental Panel on Climate Change (IPCC), that the IPCC is subject to a kind of ‘groupthink bias’. He notes Curry’s charge that certain IPCC report chapters ‘exhibit specific technical flaws in their content’, such as a failure to consider relevant literature. This, in Curry’s view, results from the defensiveness of the IPCC authors, which in turn is said to cut off fruitful dialogue with critics. However, Rehg notes that ‘it is unclear how much Curry’s critique simply turns on differing judgements of content merits and the proper expression of uncertainty’.
Curry also finds bias in the IPCC’s transactional strategies (interaction with critics), arguing that failure by IPCC scientists to engage with their critics (so as not to give them legitimacy) has resulted in a loss of the moral high ground by the scientists. Rehg’s response to this is to argue that it fails to prove genuine bad bias but rather should be seen as ‘a genuine dispute over transactional merits’. This highlights an important difficulty associated with allegations of bias in disputes among experts. In Rehg’s words:
… judgements regarding performance and content can presuppose potentially debatable conceptions of effective [in this case] collaborative expertise, on the one hand, and the evaluation of content on the other.
For non-experts, the use of the interest-objection is an even less sound move than it is for experts in the given domain because they are not in a position to demonstrate the validity of this objection on the basis of the argument’s content-merits. As such, ‘one is left with making either unsound use of this objection … or no use at all’. Thus, while it can be ‘effective, or persuasive’, given the widespread view that interests drive argumentative behaviour, Zenker argues that the conflict of interest-objection is ‘an unsound direct personal attack aimed at discrediting the authority of an expert’. At best, it might most justifiably be used as a prompt or intuition for closer examination of an expert’s claim (that is, using other strategies discussed in this section).
According to Goldman, experts’ track records ‘may provide the best source of evidence for making credibility choices’. However, in making this case he works through several theoretical problems.
First, harking back to the earlier discussion about the capacity of non-experts to assess the arguments of experts, he asks ‘how can a novice … have any opinions at all about the past track records of candidate experts’? Goldman’s answer involves returning to the distinction between esoteric and exoteric statements. As discussed previously, by definition, non-experts are not well placed to make assessments of esoteric statements. However, he argues, it needs to be understood that a given statement is only esoteric or exoteric relative to a particular standpoint or position. For example what counts as esoteric now may become exoteric at some future time:
For example, consider the statement, “There will be an eclipse of the sun on April 22, 2130, in Santa Fe, New Mexico”. Relative to the present epistemic standpoint, i.e. the ordinary people living in the year 2000, this is an esoteric statement. Ordinary people in the year 2000 will not be able to answer this question correctly, except by guessing. On the other hand, on the very day in question, April 22, 2130, ordinary people on the street in Santa Fe, New Mexico will easily be able to answer the question correctly.
In this way, the question concerning an eclipse of the sun becomes an exoteric one, rather than an esoteric one. The point is that, in just such a way, a non-expert ‘might easily be able to determine the truth-value of a statement once it has become exoteric’. The non-expert will not need to know how the expert reached their conclusions but may infer that the expert ‘must possess some special manner of knowing—some distinctive expertise—that is not available to them’. The non-expert can verify the expert’s track record without needing to be transformed into an expert.
While lucky guesses are possible, the kinds of questions to which experts are asked to provide answers are generally more complex than yes/no questions. They generally ‘admit of innumerable possible answers, sometimes indefinitely more answers’. For example:
… when rocket scientists were first trying to land a spaceship on the moon, there were indefinitely [sic] many possible answers to the question, “Which series of steps will succeed in landing this (or some) spaceship on the moon” Choosing a correct answer from among the infinite list of possible answers is unlikely to be a lucky guess.
One problem with this approach, acknowledged by Goldman, is that ‘only occasionally will a novice know, or be able to determine, the track records of the putative experts that dispute an issue before him’. As Goldman notes, a juror in a civil trial is not in a position to be able to run out and seek information about the track records of expert witnesses appearing before the court. Further, expert disputes are frequently over matters that are far more complex than simply determining whether a solar eclipse or moon landing took place. In some cases, the ‘true’ outcome of an expert dispute can itself be highly contested, leaving novices none the wiser in terms of the relative track records of the experts in question. A further problem is that an expert’s track record tells us about their performance in the past but is no guarantee that they will continue to be deserving of trust into the future.
Nevertheless, argues Goldman, the ‘track records’ approach at least provides non-experts with a plausible strategy for identifying evidence that can be used to test a candidate’s claims to expertise (albeit one that the non-expert is not always in a position to utilise). Goldman also suggests that this strategy could be used to confer credibility on still further experts, including those trained by an expert deemed to have a verified track record. Further, he suggests that when such experts are consulted as meta-experts about the expertise of others (even if they have not trained them or provided them with credentials) ‘the latter can be inferred to have comparable expertise’.
Goldman seems to envisage here a system of verification and credentialing that utilises meta-experts to assist novices in meeting the significant challenge posed by the novice/2-expert problem. He argues that ‘some of the earlier scepticism engendered by the novice/2-expert problem might be mitigated once the foundation of expert verification in this section has been established’, and that this could also lay ‘the foundation for legitimate use of numbers when trying to choose between experts’.
As with most of the strategies discussed in this section, the use of track records has strengths and obvious limitations. To be used well, it would appear to require the assistance of meta-expertise, given that, as Goldman acknowledges, most non-experts are probably not well positioned to make judgements about track records on technical matters. Further, a non-expert would be unwise to assume that an expert with a good track record could automatically be trusted on all matters into the future. At best, a good track record should be taken as evidence that an expert’s current claim is worthy of further investigation using a range of (social) evidentiary sources.
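The track-record strategy can be made concrete with a toy calculation. Everything in the sketch below is illustrative only: Goldman does not propose this (or any) formula, and the Laplace-smoothed success rate is an assumption introduced simply to show why a short but perfect track record warrants more caution than a long, strong one.

```python
# Toy model (not Goldman's): score an expert's credibility from past
# exoteric statements whose truth-value a non-expert could later verify.

def credibility(verified_correct: int, verified_total: int) -> float:
    """Laplace-smoothed success rate over verifiable past claims.

    The +1/+2 smoothing keeps a short track record from yielding
    extreme scores: 2 correct out of 2 should not imply certainty.
    """
    return (verified_correct + 1) / (verified_total + 2)

# Two hypothetical experts disputing the same question:
expert_a = credibility(verified_correct=18, verified_total=20)  # long, strong record
expert_b = credibility(verified_correct=2, verified_total=2)    # short, 'perfect' record

# expert_a scores higher than expert_b, despite b's unblemished record.
```

On this (assumed) scoring, a long record with a few misses outweighs a brief flawless one, which echoes the caution above that a good track record is evidence inviting further investigation rather than a guarantee of future reliability.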
The above examination of the use of social expertise is not to suggest that it is not already being used more or less effectively in some institutional settings. The parliamentary committee system, for example, is one in which social expertise is employed routinely. Most parliamentary committee members and their staff are non-experts in many of the areas that they deal with in the course of conducting inquiries. As a result, they are inevitably reliant on the use of social expertise.
Unlike the court system, which has some codified rules to assist judges and juries in making decisions based on technical evidence that they have limited ability to comprehend or assess (see Attachment A), the parliamentary committee system does not have a formal, systematic approach to the use of expert evidence. Nevertheless, in the process of obtaining and evaluating expert advice, committees do employ some of the strategies outlined above, and hence what could be considered to be ‘good practice’ in the use of social expertise.
For example, in selecting expert witnesses to make submissions to an inquiry and to appear before them, committees are likely to be making such decisions on the basis of these experts’ past experience or track records. And, where committees are not in the best position to make such assessments, they sometimes draw on the services of other meta-experts (such as Parliamentary Library researchers) to assist them. Parliamentary committees are also likely to make use of the ‘going by the numbers’ strategy. This strategy could be employed in assessing the weight of expert submissions falling on one side or the other of a given debate.
In the context of formal hearings, committees generally only question one expert (or related group) at a time. However, when conducting round table discussions on certain issues, committees are able to manage a direct debate between experts (where they disagree). This forum thus enables committees to draw on the strategy of judging which expert seems the most credible on the strength of their arguments.
Given non-experts’ (including parliamentary committees’ and the courts’) necessary reliance on social expertise, it is important that this form of expertise be improved upon as much as possible. The above analysis has examined a number of methods for putting social expertise into practice. In doing so, it has highlighted the strengths and weaknesses of each approach; suggested how they might best be used; and provided a basis from which to critically evaluate social expertise when it is used by others.
The above discussion makes clear that none of the available methods for using social expertise is without problems. More broadly, it is clear that the area of social expertise is in need of further, more systematic and considered discussion at both the scholarly and public level. A key question is how the methods discussed above might be developed further as means by which non-experts can engage more actively with experts and expertise. How might the tools of social expertise be sharpened, and how might policy makers improve their use of them?
As Goldman asks:
What kinds of education, for example, could substantially improve the ability of novices to appraise expertise, and what kinds of communicational intermediaries might help make the novice-expert relationship more one of justified credence than blind trust?
In the area of education, argumentation theorist Mary Goodwin suggests that ‘the expansion of higher education can ensure that citizens gain the experiences in one field that can give them a referred expertise ... useful to assess results in another’. This would mean that those without expertise in a specific domain would become less reliant on social judgements about the experts and more capable of making judgements about the conclusions of experts. A further development is (as highlighted throughout this paper) the recent rise of expertise as an area of study in itself. Greater understanding of the sociological and philosophical aspects of expertise suggests at least the possibility of non-experts being in a better position to participate more productively in debates over expertise and to engage with experts.
There are also increasing examples of systems of what Goldman calls ‘communicational intermediaries’ (meta-experts) that might be used by policy makers to ‘level the playing field’ in the lay-expert relationship. Parliamentary libraries and research services and other similar (independent, non-partisan) expert bodies established to provide advice to policy makers and (in some cases) the general public on technical matters (particularly in increasingly complex areas like science, law and economics) are important examples of the institutionalisation of meta-expertise. Some observers have recently highlighted ‘boundary organisations’ as potentially playing a similar role. Another, less formal, example is the rise of policy blogs, many of which are increasingly focused on communicating technical policy areas to a lay audience and are generally highly interactive.
Certainly, the reliance on such intermediaries is not without further problems: to many non-experts, the difference between experts and meta-experts would be practically irrelevant. Also, the depth of expertise of the meta-experts may not be clear. As Goodwin has suggested, ‘from the citizens’ point of view, the friendly meta-expert is yet another apparent egghead demanding their regard’. There is also the potential problem of infinite regress: experts on experts on experts (and so on).
The point is that, while far from offering a perfect solution, such intermediaries present options for lay people to address the increasing cognitive gap between themselves and experts in policy decision-making. As Collins and Weinel have argued:
… judgements made using transmuted expertise can never be more than the “best possible” judgements in the circumstances. It is only science that can afford the luxury of being right in the long term and that is because science is directed toward truth rather than policy. The policy maker has, in general, to make decisions long before scientific truth is established. The crucial point is that because a decision is only the best possible, it does not mean that it is not much better than a decision based on no kind of expertise at all.
Importantly, assuming that there is some foundation for social expertise, there is also a concomitant increased responsibility on non-experts to become more active in their use of experts and meta-experts, to better understand expertise and sharpen their tools for evaluating the claims of experts. Indeed, it has been argued that non-experts ‘have social obligations to shoulder their share of the burdens required to maintain public good. Failure to do so … is a form of epistemological free-riding’. As John Hardwig has suggested in an article on the ethics of expertise:
The ethics of expertise is not a one-way street. A layperson is usually not simply a passive recipient of expert activity. Even if he cannot very well evaluate the testimony of an expert, a layperson remains an agent and an important part of the ethics of expertise is the ethics of one who appeals to experts.
Hardwig’s ethics for those appealing to experts is specifically concerned with such matters as how non-experts can better use experts so as to improve the likelihood that experts will provide reliable testimony, and how best to use expertise once it has been provided (particularly where uncertainty remains over the conclusions and the policy implications are not clear). However, the insight that non-experts are ‘agents’, rather than ‘passive recipients of expert activity’, is also particularly relevant to the question of which experts to trust and should underpin the efforts of non-experts in evaluating expert judgement.
This paper has illustrated the difficulties facing non-experts in seeking to understand and evaluate claims made by experts. Rather than dwell on what non-experts cannot do, however, the paper has focused on what they can do, principally by drawing attention to social expertise as a particular form of expertise that is already used by non-experts. The paper notes that there are difficulties associated with the main methods for using social expertise but emphasises that these may not be insurmountable. A more systematic and considered approach to social expertise could conceivably lead to its more sophisticated use, resulting in at least some bridging of the daunting epistemic gap between experts and non-experts. This could, in turn, go some way towards ameliorating the political problem of expertise. One key means of improving the ways in which social expertise is used could be the development of an ethic of expertise. Such an ethic would commit non-experts to become more active in their use of experts and meta-experts, and to strive to better understand expertise and sharpen their tools for evaluating the claims made by experts.
Section 79 of the Commonwealth Evidence Act 1995 provides an exemption from the ‘opinion rule’, which would otherwise prevent opinions from being admitted as evidence:
If a person has specialised knowledge based on the person’s training, study or experience, the opinion rule does not apply to evidence of an opinion of that person that is wholly or substantially based on that knowledge.
This requires the court to determine (on the balance of probabilities) that:
‘(1) the person has “specialised knowledge”;
(2) that specialised knowledge is based on the person’s training, study or experience; and
(3) the opinion is “wholly or substantially” based on that specialised knowledge.’
Most jurisdictions have a code of conduct to guide expert witnesses in their role. For example, the ACT Civil and Administrative Tribunal Expert Witness Code of Conduct states that an expert witness must specify (among other things) their qualifications as an expert and, if applicable, that a particular question or issue falls outside his or her field of expertise. In addition, where an expert witness considers that his or her opinion is not a concluded opinion because of insufficient research or insufficient data, or for any other reason, this must be stated when the opinion is expressed.
In addition, expert witnesses can be challenged by the opposing counsel, and judges can determine the admissibility of their evidence and guide juries as to the weight to be placed on their testimony.
However, despite such safeguards, problems still exist. Gary Edmond in his lecture ‘Impartiality, efficiency or reliability? A critical response to expert evidence law and procedure in Australia’ quotes a number of examples where courts have relied on very questionable ‘expert’ evidence. As he notes:
‘Confidence in cross-examination and the restorative potential of defence experts assumes that defence lawyers are conversant with the technical detail and limitations with identification expertise, and capable of effectively conveying them to a lay jury (and judge)’.
‘…one revealing development, in the wake of the implementation of codes of conduct and other reforms, is the failure formally to identify, let alone sanction, partisan or incompetent experts. Experts are not disciplined even when they testify in very confident terms based on techniques that have never been tested, simply invent levels of confidence and rates of error, and fail to disclose inconsistent or critical bodies of research’.
. Here, we use the term institution in its sociological sense, that is, to refer to ‘social practices that are regularly and continuously repeated, are sanctioned and maintained by social norms, and have a major significance in the social structure’. N Abercrombie, S Hill and B Turner, The Penguin Dictionary of Sociology, fifth edition, Penguin Books, London, 2006, p. 200. Essentially, institutions are established patterns of behaviour. The term is more frequently used to describe particular examples of institutions—economic, political, educational and cultural institutions such as schools, universities and the Parliament.
. There are various terms used in the literature on expertise to connote people who are not experts, including ‘the public’, ‘citizens’, ‘lay people’, ‘novices’, and ‘non-experts’. For consistency’s sake, this paper will mainly use the term non-experts.
. U Beck, Risk society: towards a new modernity, Sage, London, 1992, pp. 19–20; U Beck, ‘The reinvention of politics: towards a theory of reflexive modernisation’ in U Beck, A Giddens and S Lash, Reflexive modernisation: politics, tradition and aesthetics in the modern social order, Cambridge, Polity Press, 1994, pp. 1–55; A Giddens, ‘Living in a post-traditional society’ in U Beck, A Giddens and S Lash, op. cit., pp. 56–109. It should be emphasised that Giddens and Beck are describing a general social phenomenon. People are more reflexive about and inclined to respond to some manufactured risks than others.
. As Turner and others have pointed out, to some extent the demand is artificial, in that claims of expertise and expert claims themselves produce the demand for expertise. See S Turner, Liberal democracy 3.0: civil society in an age of experts, Sage, London, 2003.
. See J Hardwig, ‘Toward an ethics of expertise’ in D Wueste, ed., Professional ethics and social responsibility, Rowman and Littlefield, Maryland, 1994, p. 84.
. Z Majdik and W Keith, ‘Expertise as argument: authority, democracy, and problem solving’, Argumentation, 25(3), 2011, pp. 371–2.
. Dewey dealt with the problem more or less extensively in a number of his works, including J Dewey, The public and its problems, Ohio University Press, Ohio, 1988; J Dewey, Experience and nature, Open Court Publishing, Illinois, 1997; J Dewey, The quest for certainty: a study of the relation of knowledge and action, Minton, Balch and Company, New York, 1929; and J Dewey, Democracy and education: an introduction to the philosophy of education, Collier-MacMillan, London, 1955.
. For a brief overview of the Dewey-Lippmann debate, see C Pearce, A note on the Dewey-Lippmann debate, Social Science Research Network, 7 August 2009, accessed 21 May 2013. Lippmann’s position is most clearly expounded in W Lippmann, Public opinion, Transaction Publishers, New Jersey, 1991 and W Lippmann, The phantom public, Transaction Publishers, New Jersey, 2009.
. S Turner, op. cit., p. 10.
. Harry Collins and Robert Evans describe this dilemma as the ‘problem of legitimacy’. This problem has to do with ‘how we can continue to introduce new technologies in the face of the widespread and growing distrust of certain areas of science and technology’, through the greater involvement of the public. H Collins and R Evans, Rethinking expertise, University of Chicago Press, Chicago, 2007, p. 113.
. S Turner, op. cit., p. 12.
. It is important to note that reliance on experts and expert opinion is not confined to non-experts. Experts themselves rely on information and advice given to them by other experts, both outside their own areas of expertise and within their own disciplines. This is because, as highlighted above, it is impossible for a person to become expert in everything. See J Hardwig, op. cit.
. S Turner, op. cit., p. 46.
. One controversial method used to redress the imbalance between experts and the public is the imposition of penalties for expert ‘failures’. A recent and prominent example of this is the sentencing of Italian seismologists to prison terms for not adequately warning residents of the Italian city of L’Aquila about the risk of an impending earthquake that killed more than 300 people in 2009. See ‘Italian seismologists ordered to prison for not warning of quake risk‘, Los Angeles Times, 22 October 2012, accessed 21 May 2013. Of course, the risk of penalising scientists—irrespective of whether or not the penalties are warranted—is that they may become reluctant to speak out in the future, for fear of being censured. Although it is not, strictly speaking, a democratic control, one method that has been offered as a means to improve ordinary citizens’ trust in experts is an ethics of expertise. Such an ethics would be sensitive to the fact that expertise entails an imbalance in power relations between ordinary citizens and experts. It would attempt to ease the vulnerability of ordinary citizens who are inevitably reliant on experts by promoting the principled practice of experts. See J Hardwig, op. cit.
. Briefly, deliberative democracy is a form of democracy that prioritises deliberation in decision making, rather than simply the aggregation of more or less considered preferences through voting.
. G Munnichs, ‘Whom to trust? Public concerns, late modern risks, and expert trustworthiness’, Journal of agricultural and environmental ethics, 17(2), 2004, p. 122.
. Ibid., p. 125. Philosopher of science Paul Feyerabend has observed that expertise is simultaneously an enabling and disabling phenomenon. It is enabling in that it allows experts to make decisions rapidly and even automatically, but disabling in that experts can become ‘habituated in a fixed mode of thinking’ and find it increasingly difficult to re-examine or question the foundations of their beliefs. See also E Selinger, ‘Feyerabend’s Democratic Critique of Expertise’, Critical Review, 15(3–4), 2003, pp. 359–373.
. S Turner, op. cit., p. 122.
. It could be argued that this is often an unrealistic demand of experts. For one thing, they might not be in a position to give a public opinion on a scientific or technical problem, given that the science might be disputed. Such a demand could also result in experts being more cautious about exerting their authority, or deterred entirely from doing so. This would not necessarily be a good thing. The urge to get experts ‘out of their labs and into public life’ does not account for the fact that there is something of a division of labour within expert ranks, and that this is not necessarily a bad thing. For a discussion of the different roles that scientists, and experts more generally, can play in the context of policy and politics, see R Pielke, The honest broker: making sense of science in policy and politics, Cambridge University Press, Cambridge, UK, 2007.
. Z Majdik and W Keith, ‘Expertise as argument: authority, democracy, and problem-solving’, Argumentation, 25(3), 2011, pp. 371–384. Majdik and Keith’s proposal may be seen as part of a broader ‘argumentative turn’ in the social sciences. This approach stems from an assessment that in the context of public policy debates, experts are not solely concerned with empirical findings but also with interpreting these findings and conclusions, a process that necessarily involves normative concerns and assumptions.
. S Turner, op. cit., p. 140. As Turner sees it, ‘expertise poses a problem that goes to the heart of liberalism. But it also goes to the heart of every ‘participatory’ alternative to liberalism, and particularly to the normative ideas of ‘civil society’ and democratic participation’. Ibid., p. 12.
. H Collins and R Evans, Rethinking expertise, University of Chicago Press, Chicago, 2007, p. 113. This is not to suggest that experts always get it right. Indeed, in areas such as long-range weather forecasting experts are prone to making incorrect judgements and often ‘disagree markedly’. Nevertheless, in Collins and Evans’ view, even in these more difficult areas, where experts are most likely to be fallible, we should support their efforts and prioritise their advice. This is because, to use the example of meteorologists, firstly, they know more about how the weather works than anyone else. And, secondly, while weather prediction may currently ‘miss the mark’ on many occasions, if this form of expertise is nurtured then in the future it may be possible to make more accurate predictions.
. Majdik and Keith, op. cit., p. 376.
. Collins and Evans, op. cit., pp. 1–2. As Collins and Evans see it, the main contributory factors to the public’s loss of confidence are political movements associated with environmentalism and animal rights, which have helped to bolster distrust in science and technology, and postmodernism.
. This holds not just for the general public, but also for experts themselves. Collins and Evans stress that experts in one area may have ‘nothing special to offer toward technical decision-making in the public domain where the specialisms are not their own’. H Collins and R Evans, ‘The third wave of science studies: studies of expertise and experience’, Social studies of science, 32(2), 2002, p. 54. Thus, it needs to be established just what contributions can be made by experts outside their areas of specialisation.
. Collins and Evans, 2002, op. cit., p. 55. Collins and Evans describe their project as falling within the Third Wave of Science Studies, which they have termed Studies of Expertise and Experience (SEE). In the First Wave of Science Studies, science was generally understood to be a special domain of intellectual activity that guaranteed or offered the best prospect of discovering scientific truths. The Second Wave of Science Studies questioned the rational basis of science—the notion that science is a rational process that leads to the disinterested discovery of truth. It showed that scientific truth was very much socially constructed, and inescapably so. The Third Wave of Science Studies comes out of criticism of the Second Wave and its inherent relativism. The Third Wave argues that if, as Second Wave theorists argue, scientists do not offer any rational support for their position, then no-one’s scientific opinion is better than anybody else’s. This results in a situation in which we do not know whom to trust on scientific matters and in relation to technical decision-making. The Second Wave, it is argued, does not allow us to distinguish between experts and non-experts.
. Note that the term ‘groups’ in this context does not necessarily mean formal groups such as a recognised profession or an academic society.
. For example, according to a realist approach to expertise, the degree of expertise a person possesses in speaking a language remains the same in whichever country the language is spoken by them. The process of acquiring expertise is explained in further detail later in the paper.
. Collins and Evans, 2007, op. cit., pp. 113–114.
. Every member of a society possesses (to a greater or lesser extent) various forms of expertise that are essential if they are to function in that society. These include natural language speaking, native intelligence—that is, a general understanding of the way the society ‘works’ and ability to navigate it—moral sensibility and political discrimination. These forms of expertise are based on tacit knowledge—that is, people are able to do them without necessarily being able to explain the rules that underpin them—hence, they are frequently not understood as such. They are also typically not regarded as forms of expertise because everybody possesses them, more or less. These forms of ‘ubiquitous expertise’ underpin all other forms of expertise. Ibid., p. 16.
. This dilemma is captured in the saying ‘a little knowledge is a dangerous thing’.
. Collins and Evans use the example of learning a natural language to illustrate how expertises that involve specialist tacit knowledge work. While it is possible to learn a language through the study of dictionaries and grammars, it is only through immersion in a community of language speakers that it is possible to grasp the tacit knowledge of that language’s use that would make one able to speak it as though one were a native (or an expert). As Collins and Evans point out, any form of language consists of more than propositional knowledge. There are always informal or tacit rules that ‘cannot be explicated’ and are only known ‘through their expression in action’. Collins and Evans, 2007, op. cit., p. 28.
. It should be noted that interactional expertise cannot always be attained, or be attained in all areas of a given domain. Some areas may be beyond the interactional expert’s capacity to grasp, or beyond their capacity to grasp at a particular point in time. The mastery of interactional expertise is one of steady acquisition as the person gradually masters the language of a domain. Collins and Evans point out that it progresses from ‘interview’ to ‘discussion’ to ‘conversation’ as more and more of the domain is understood.
. Another example is provided by a network of parents of autistic children who, sociologists Gil Eyal and Brendan Hart argue, were obliged to challenge the control of autism by psychiatrists in order to forge a new alternative type of expertise. While these parents did not become institutionally recognised as ‘experts on their own children’, the inclusion of their experience and knowledge helped to ‘modify, extend and strengthen’ the existing network of medical expertise on autism. See G Eyal and B Hart, ‘How parents of autistic children became “experts on their own children”: notes toward a sociology of expertise’, Berkeley Journal of Sociology, 54, 2010, pp. 3–17.
. These included mistakes made by them in the past; a failure to qualify their claims with any sense of uncertainty; and their out-of-hand dismissal of alternative explanations or possibilities.
. Collins and Evans, 2007, op. cit., p. 69. Collins and Evans isolate three levels of internal meta-expertise: technical connoisseurship, downward discrimination and referred expertise.
. Of course, this exercise might itself fall within the purview of experts.
. For a description of wicked problems’ other characteristics, see H Rittel and M Webber, ‘Dilemmas in a general theory of planning’, Policy Sciences, 4, 1973, pp. 155–169. For an examination of wicked problems in an Australian public policy context see: Australian Government, Tackling wicked problems: a public policy perspective, Australian Public Service Commission, 2007, accessed 21 May 2013.
. R Pielke, op. cit., p. 5.
. As its name implies, referred expertise is ‘expertise taken from one field and indirectly applied to another’. Collins and Evans, 2007, op. cit., p. 64. While a person with referred expertise does not possess contributory expertise in the different field, they may nevertheless be able to draw upon their experience and knowledge of their own field to participate (more or less successfully) in technical debates and decision making in the new field.
. This is especially the case where mainstream or socially recognised expertise is challenged or contested. For example, Gil Eyal has drawn attention to the struggle of parents of autistic children to have their alternative form of expertise recognised and incorporated in the treatment of autistic children. This is but one of many instances in which the boundaries between lay and expert have been tested. See G Eyal and B Hart, op. cit. The advantage of Collins and Evans’ classificatory scheme is that it provides a means of identifying the nature of the parents’ expertise and the areas in which it is and is not relevant. It allows for the drawing of such necessary boundaries.
. Collins and Evans, 2007, op. cit., p. 142.
. D Fallis, ‘On verifying the accuracy of information: philosophical perspectives’, Library trends, 52(3), 2004, pp. 466–7. See D Hume, An enquiry concerning human understanding: and selections from a treatise of human nature: with Hume’s autobiography and a letter from Adam Smith, J McCormack and M Whiton Calkins (eds), Leipzig, F. Meiner, 1913.
. A Goldman, ‘Experts: which ones should you trust?’, Philosophy and Phenomenological Research, 63(1), July 2001, p. 92. Goldman’s focus here is on cognitive or intellectual experts—‘people who have (or claim to have) a superior quantity or level of knowledge in some domain and an ability to generate new knowledge in answer to questions within the domain’. Ibid., p. 91.
. D Matheson, ‘Conflicting experts and dialectical performance: adjudicating heuristics for the layperson’, Argumentation, 19(2), 2005, p. 146. This suggests that it would be particularly difficult for a novice to directly adjudicate between the arguments made by competing experts. Further, as Brewer has argued, expecting that they might be able to do so ‘can seem especially puzzling in that it may look like we are expecting greater ability to discern the scientific truth from the nonexpert than we are from the expert’. S Brewer, ‘Scientific expert testimony and intellectual due process’, Yale Law Journal, 107(6), 1998, p. 1595.
. A Gelfert, ‘Expertise, argumentation and the end of inquiry’, Argumentation, 25(3), 2011, p. 298.
. Axel Gelfert has argued for an approach that brings together insights from epistemology and argumentation theory on the question of expertise. Ibid.
. A Goldman, op. cit., p. 95.
. D Matheson, op. cit., p. 152.
. D Coady, What to believe now: applying epistemology to contemporary issues, Chichester, West Sussex, Wiley-Blackwell, 2012, p. 48.
. A Goldman, op. cit., pp. 95–6.
. D Coady, op. cit., p. 48.
. S Brewer, op. cit., p. 1622.
. D Coady, op. cit., p. 48.
. D Matheson, op. cit., p. 155.
. D Coady, op. cit., p. 49.
. D Fallis, op. cit., p. 474.
. D Coady, op. cit., p. 39.
. A Goldman, op. cit., p. 97.
. H Collins and M Weinel, ‘Transmuted expertise: how technical non-experts can assess experts and expertise’, Argumentation, 25(3), 2011, p. 406.
. Ibid., p. 409. They argue that the ‘transmuted expertise of Sociological Discrimination’ may be useful in such cases.
. A Goldman, op. cit., pp. 98–9.
. T Kelly, ‘Peer disagreement and higher-order evidence’, in R Feldman and T Warfield (eds), Disagreement, Oxford University Press, Oxford, 2010, p. 148. Similarly, Elga argues that having the majority of experts on one side of the argument ‘should move one only to the extent that one counts it as independent from opinions one has already taken into account’. A Elga, ‘How to disagree about how to disagree’, in R Feldman and T Warfield (eds), op. cit., p. 177.
. B Almassi, ‘Review of the philosophy of expertise’, Ethics: an international journal of social, political and legal philosophy, 117(2), 2007, p. 378.
. A Goldman, op. cit., p. 102.
. D Coady, op. cit., p. 40.
. D Hume, op. cit., p. 116.
. A Goldman, op. cit., p. 104.
. It is worth making a couple of brief points with regard to financial interests and their potential for inducing bias. Firstly, with funding for research increasingly being provided by the private rather than the public sector, it is likely that more and more experts will be exposed to such claims of bias in the future. Secondly, many institutions require that potential financial conflicts (along with other factors that may compromise independence) be disclosed by experts, as a means of ensuring public trust in these institutions.
. A Goldman, op. cit., p. 105.
. F Zenker, ‘Experts and bias: when is the interest-based objection to expert argumentation sound?’, Argumentation, 25(3), 2011, p. 355.
. G Munnichs, op. cit., p. 127.
. F Zenker, op. cit., p. 366.
. W Rehg, ‘Evaluating complex collaborative expertise: the case of climate change’, Argumentation, 25(3), 2011, pp. 397–8.
. F Zenker, op. cit., p. 367.
. A Goldman, op. cit., p. 106.
. Though they may, of course, possess or gain over time lower levels of specialist expertise in certain areas. Nor is this to downplay many committee members’ experience in interrogating witnesses, teasing out issues and challenging expert assumptions.
. J Goodwin, ‘Accounting for the appeal to the authority of experts’, Argumentation, 25(3), 2011, p. 290.
. Boundary organisations can be defined as ‘organisations that cross the boundary between science and politics and draw on the interests and knowledge of agencies on both sides to facilitate evidence-based and socially beneficial policies and programmes’. S Drimie and T Quinlan, ‘Playing the role of a “boundary organisation”: getting smarter with networking’, Health Research and Policy Systems, 9(1), 2011. See also D Guston, ‘Boundary organizations in environmental policy and science’, Science, Technology and Human Values, 26(4), pp. 399–408; and J Goodwin, op. cit., p. 290.
. H Collins and M Weinel, op. cit., pp. 411–2.
. C Tindale, ‘Character and knowledge: learning from the speech of experts’, Argumentation, 25(3), 2011, p. 343. See also: S John, ‘Expert testimony and epistemological free-riding: the MMR controversy’, The Philosophical Quarterly, 61, 2010, pp. 496–517.
. J Hardwig, op. cit., p. 94.
. S Odgers, Uniform Evidence Law: Tenth edition, Thomson Reuters, Pyrmont, New South Wales, 2012, p. 352.
. Ibid., p. 89.
. Ibid., p. 93.
For copyright reasons some linked items are only available to members of Parliament.
© Commonwealth of Australia
In essence, you are free to copy and communicate this work in its current form for all non-commercial purposes, as long as you attribute the work to the author and abide by the other licence terms. The work cannot be adapted or modified in any way. Content from this publication should be attributed in the following way: Author(s), Title of publication, Series Name and No, Publisher, Date.
To the extent that copyright subsists in third party quotes it remains with the original owner and permission may be required to reuse the material.
Inquiries regarding the licence and any use of the publication are welcome to email@example.com.
This work has been prepared to support the work of the Australian Parliament using information available at the time of production. The views expressed do not reflect an official position of the Parliamentary Library, nor do they constitute professional legal opinion.
Feedback is welcome and may be provided to: firstname.lastname@example.org. Any concerns or complaints should be directed to the Parliamentary Librarian. Parliamentary Library staff are available to discuss the contents of publications with Senators and Members and their staff. To access this service, clients may contact the author or the Library’s Central Entry Point for referral.