THE LACK OF STRUCTURE OF KNOWLEDGE

For a long time philosophers have struggled to reach a definition of knowledge that is fully satisfactory by intuitive standards. But what could be so fuzzy about the concept of knowledge that our intuitions fail to clearly support any single analysis? One particular approach, from a naturalistic perspective, treats this question from the point of view of the psychology of concepts. According to it, this failure is explained by the structure of our folk concept of knowledge, which organizes its constitutive information in a much looser way than we assume when we rely on intuitive knowledge ascriptions. I will adopt the same starting point here, but argue against the proposed answer and defend the view that this difficulty is explained not by something in the specific structure of our concept of knowledge but, on the contrary, by its lack of structure. I claim that our folk concept of knowledge should be understood as a primitive mental state concept.

The finding of typicality effects disproves the classical theory and demonstrates that categorization is not the "yes or no" question drawn by a definitional view. Instead, it corroborates Ludwig Wittgenstein's (1953) claim that categorization is more a matter of family resemblance than of meeting necessary and sufficient conditions. This led to the view that our conceptual system reflects statistical properties of category members. There are more and less common instances of a category, and it makes sense that our statistical experience with different instances leaves some imprint on the information we store about the category. It seems, therefore, that conceptual membership is determined not by defining properties but by characteristic properties: c is considered an instance of C if c has properties that are characteristic enough of C. Of course, only a very different kind of structure could account for these effects and the statistical view they demand.
Eleanor Rosch and her colleagues influentially claimed that the main content of a concept is a prototype, i.e., an abstract set of properties of its typical instances. We intuitively judge something to be an instance of a concept only if it is similar enough to the prototype of that concept. As Ramsey and Kornblith note, the prototypical view can explain why analyzing a folk concept seems like an impossible task. Roughly, the problem is that a prototypical structure allows many different sets of properties, maybe even an indefinite number of them, to count as sufficient criteria for conceptual membership.
Suppose the prototype of C contains the properties {f1, f2, f3, f4, f5, f6, f7, f8, f9, f10}, listed in decreasing order of typicality. For c to be intuitively categorized as C it is enough that it be sufficiently similar to the prototype of C, or that the sum of the typicality values of its properties reaches a certain value. So a number of very different instances composed of distinct sets of properties, like {f1, f3}, {f1, f2, f10}, or {f6, f7, f8, f9, f10}, can trigger a categorization. That is why very different things like a closet, a rug, and a lamp can intuitively trigger FURNITURE. Therefore, Ramsey objects, a definition of C in terms of necessary and sufficient conditions could not be given by a single small set of properties; it would have to contain an extensive disjunction of sets, e.g., "c is an instance of C if and only if c satisfies {f1, f2, f3} or {f1, f2, f10} or {f6, f7, f8, f9, f10} or...". To propose a simple definition for a concept with a prototypical structure is to arbitrarily treat one subset of a large or indefinite disjunction as necessary and sufficient, and so to expose it to intuitive counterexamples.
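The threshold mechanism just described can be made vivid with a few lines of code. This is only an illustrative toy, not anything proposed in the literature discussed here: the typicality weights and the cutoff value are hypothetical choices.

```python
# Illustrative sketch: categorization under a prototype theory as a
# weighted-threshold similarity judgment. All values are hypothetical.

PROTOTYPE_C = {  # property -> typicality weight, in decreasing order
    "f1": 1.0, "f2": 0.9, "f3": 0.8, "f4": 0.7, "f5": 0.6,
    "f6": 0.5, "f7": 0.4, "f8": 0.3, "f9": 0.2, "f10": 0.1,
}
THRESHOLD = 1.4  # hypothetical membership cutoff

def categorize(instance_properties):
    """An instance counts as C if the summed typicality of its
    properties reaches the threshold."""
    score = sum(PROTOTYPE_C.get(p, 0.0) for p in instance_properties)
    return score >= THRESHOLD

# Very different property sets can all trigger the categorization:
print(categorize({"f1", "f3"}))                     # 1.8 -> True
print(categorize({"f1", "f2", "f10"}))              # 2.0 -> True
print(categorize({"f6", "f7", "f8", "f9", "f10"}))  # 1.5 -> True
print(categorize({"f9", "f10"}))                    # 0.3 -> False
```

The point of the sketch is that no single conjunction of properties is necessary: any set whose weights sum past the cutoff passes, which is exactly why a definitional paraphrase must take the form of a long disjunction.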
The suggestion of Ramsey and Kornblith, of course, is that this is the case with our folk concept of knowledge; that is, KNOWLEDGE has a prototypical structure. Every time a definition is proposed, it fails to capture all the sets in the extensive disjunction that reflects our intuitive ascriptions of knowledge. And the situation worsens here, where it is so easy to create the most varied sets of properties through imaginary cases. We can always manipulate the typicality values of imaginary cases, adding or removing atypical and typical properties, in order to produce intuitive counterexamples. This explanation is supported by the general acceptance of the prototypical theory, and it places proper emphasis on our practice of generating very imaginative cases. But do we have reasons to think it is true?
In his response to Kornblith, Goldman (2007) defends conceptual analysis by claiming that there is no obligatory commitment between the classical view of concepts and the practice of describing concepts in terms of necessary and sufficient conditions: "[P]hilosophers have customarily adopted the format of necessary and sufficient conditions, but I see nothing essential about that practice. (…) [A] recursive format could be adopted instead, using base clauses, recursive clauses, and a closure clause" (p. 24). So there is no strong reason for keeping the necessary-and-sufficient format. But saying that does not save the standard practice of conceptual analysis, and the specific reason why we should give up on intuitive satisfaction remains an open issue. Indeed, Goldman's own view about the conceptual representation of epistemic folk concepts provides a reason for loosening the format, as well as a distinct structural hypothesis about KNOWLEDGE. When discussing some alleged intuitions against his reliabilist theory of justification, Goldman (1993) did not try to refute those intuitions, but to explain them by articulating the underlying representations. Goldman claimed that what causes those intuitions is the storage of exemplars. Instead of positing summary representations for categories, the exemplar theory of concepts claims that a concept stores detailed exemplars of the category, i.e., a set of detailed representations of its instances. So, roughly, according to this alternative view, to have a concept C is to think of C as the class of entities similar to its set of exemplars stored in long-term memory. To have FRUIT, for example, is to think of a class of objects similar to a set of stored objects, like an apple, a peach, a watermelon, a tomato, etc. Categorization is still a similarity judgment, but one that compares a particular input to one or more stored particular representations.
Goldman says: "The hypothesis I wish to advance is that the epistemic evaluator has a mentally stored set, or list, of cognitive virtues and vices. When asked to evaluate an actual or hypothetical case of belief, the evaluator considers the processes by which the belief was produced, and matches these against his list of virtues and vices. If the processes are matched partly with vices, the belief is categorized as unjustified" (1992: p. 157).
Although Goldman's focus here is on justification, we can read his proposal as a hypothesis about KNOWLEDGE. In this case we have a similar explanation of why we cannot achieve a satisfactory short definition of knowledge: the set of exemplars stored in our folk concept is too diverse to be captured by a small conjunction of conditions. For example, considering the variety of cases and sources of knowledge, it is probable that a list of virtuous processes contains very different instances, like beliefs formed by vision, hearing, memory, a number of approved kinds of reasoning, etc. Indeed, the exemplar view, as an alternative to the prototypical view of concepts, was originally motivated by skepticism about the existence of summary representations of a class.
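The exemplar mechanism can also be sketched in a few lines. Everything here is a hypothetical illustration of the general view, not Goldman's own proposal: the stored feature sets, the Jaccard similarity measure, and the cutoff are my own illustrative choices.

```python
# Illustrative sketch: exemplar-based categorization as similarity to
# stored instances. Exemplars, features, and cutoff are hypothetical.

EXEMPLARS_FRUIT = [
    {"sweet", "has_seeds", "grows_on_plant", "round"},     # apple-like
    {"sweet", "has_pit", "grows_on_plant", "fuzzy_skin"},  # peach-like
    {"sweet", "has_seeds", "grows_on_plant", "large"},     # watermelon-like
]

def similarity(a, b):
    """Jaccard overlap between two feature sets."""
    return len(a & b) / len(a | b)

def is_fruit(features, cutoff=0.5):
    """Categorize by comparing the input against stored exemplars:
    it counts as FRUIT if it is similar enough to some exemplar."""
    return any(similarity(features, ex) >= cutoff for ex in EXEMPLARS_FRUIT)

tomato = {"has_seeds", "grows_on_plant", "round", "savory"}
pencil = {"long", "wooden", "writes"}
print(is_fruit(tomato))  # True: close enough to the apple-like exemplar
print(is_fruit(pencil))  # False: overlaps with no stored exemplar
```

Note that there is no summary representation anywhere in this sketch; the "concept" just is the list of stored instances, which is why a diverse exemplar list resists capture by a small conjunction of conditions.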
We have, therefore, two distinct hypotheses about KNOWLEDGE and why it delivers the patterns of intuitive attributions that it does. What they have in common is a presupposition: both start from the assumption that KNOWLEDGE is a structured concept. That is, both hypotheses try to explain the difficulty of the analysis of knowledge by pointing to something in the structure of our folk concept, either a prototypical structure or an exemplar structure. We believe these hypotheses fail, and they fail precisely because of this basic assumption.

DOES KNOWLEDGE HAVE A STRUCTURE?
A prototypical hypothesis about KNOWLEDGE seems like a fine solution to our problem. The prototypical theory has a Wittgensteinian tone that will please many philosophers and is widely accepted in the psychological literature, if not as a general theory of concepts, then as true of a large number of particular concepts. We can see how its initial defense goes. Knowledge cases, ordinarily understood, are highly diverse, varying between perceptual cases, testimony cases, inferential cases, and many distinct subtypes of those cases. Surely some instances are more typical than others. Also, as we explained, it is important to emphasize the freedom with which we create imaginary scenarios. Famous cases include people with clairvoyant powers, people with the ability to make precise measurements of ambient temperature, unexpected acquisitions of abilities, evil demons, fake barns, hidden sheep, etc. The typicality of these features, or its lack or opposite, could easily lead to any desired intuitive outcome. The question is whether KNOWLEDGE in fact stores statistical differences in a permanent representation. I do not think so.
The prototypical theory is mostly motivated by experiments dealing with concrete concepts, and it does not apply so easily to abstract concepts, i.e., concepts about entities that are neither purely physical nor spatially constructed, of which KNOWLEDGE is one. The cognitive processes that create prototypes certainly can deal with some level of abstractness. Experiments have detected typicality effects for abstract concepts like LIE (Coleman & Kay, 1981), CRIME, and SCIENCE, but experiments have also failed to detect evidence of prototypical structure in abstract concepts like BELIEF and INSTINCT (Hampton, 1981). So we always need to go case by case. There is no direct empirical evidence of typicality effects, or of their absence, regarding KNOWLEDGE, but I think there are enough reasons to doubt that a prototype is responsible for our intuitive attributions of knowledge. Furthermore, those reasons equally affect the possibility of KNOWLEDGE being constituted by exemplars.
We can start by pointing out the degree of abstractness of KNOWLEDGE. As a category, KNOWLEDGE is not a superordinate category like CRIME, i.e., a category whose members are themselves categories. Although we can think of subcategories of knowledge, like PERCEPTUAL KNOWLEDGE and INFERENTIAL KNOWLEDGE, their instances are not at a basic level of experience in the sense that robbery, assault, and murder, for example, are. It is more natural to ordinarily think and categorize in terms of ROBBERY, ASSAULT, and MURDER than in terms of CRIME itself, for it is easier to experience, think, or talk directly about the instances of those things than about the more abstract thing CRIME. We cannot say the same about PERCEPTUAL KNOWLEDGE and INFERENTIAL KNOWLEDGE. These categories are much less generic than KNOWLEDGE itself, yet much less identified in ordinary talk or thinking. 4 Because they are not at a basic level of experience, it is improbable that these subordinate categories are conceptually represented by most people. On the other hand, even though SCIENCE is also not a superordinate category like CRIME, and its instances are not at a basic level of experience, many of its members, like PHYSICS, BIOLOGY, and CHEMISTRY, are much more identifiable and ordinarily intelligible than categories of knowledge. This is because KNOWLEDGE is highly abstract. The degree of abstractness of KNOWLEDGE, therefore, renders it implausible that we represent typical properties of its instances, store them in a prototype, and use similarity judgments to make categorization decisions.
The same goes for an exemplar hypothesis. The exemplar theory emerged because not every psychologist was convinced of the existence of summary representations formed through the abstraction of properties from particular instances. Some concluded that a simpler process of conceptual learning is to store detailed representations of the instances of which the individual has relevant experience. Because it is common to experience the more typical instances of a category, the stored exemplars of an individual are normally representative of the category. That makes sense for a lot of concrete concepts, whose distinctive properties are mostly also concrete and perceptually learnable, but it makes less sense for a highly abstract concept.
Think, for example, about how hard it is for people to remember a situation where a property like TRUE occurs, in contrast to a concrete concept like CAR (Schwanenflugel, 1991), or how hard it would be for someone to describe the defining properties of KNOWLEDGE in contrast to those of any concrete concept. For the exemplar hypothesis to be true of KNOWLEDGE, the concept would have to be constituted by a number of detailed representations of instances of distinct kinds of knowledge, whose features include very abstract things like "being true", "having good reasons", "having visual evidence", "having a feeling of certainty", "having been told by someone reliable", etc. I do not doubt that we can ordinarily represent some of those features and use them in our thinking about knowledge, or that at least some of us can, but that would be a marginal content of KNOWLEDGE, acquired much later in life. To say that its main content is a varied set of detailed representations of different cases of knowledge, however, is just implausible.
The second objection also affects both the prototypical and the exemplar hypotheses. The most obvious evidence of typicality effects is the intuitive difference in quality among examples of a category. A prototype or exemplar structure leads subjects to intuitively consider certain kinds of instances the best examples or representatives of the category. So it makes intuitive sense to say that the best case of LIE is one in which what is being told is false, the speaker knows that what he is telling is false, and he has the intention to deceive, and that a situation in which the speaker does not know he is telling a falsehood is also a case of a lie, but in a way a "weaker" case of a lie (Coleman & Kay, 1981). Similarly, it is common to think of murder and robbery as good examples of CRIME, while not using your seatbelt when driving (in Brazil), throwing out mail that arrived at your house by mistake, or adultery (in the United States) are not-so-good examples; and to think of physics and chemistry as good examples of SCIENCE, while many do not feel the same about cartography or linguistics. Although the ideal test here is obviously empirical, we can make an armchair case for the claim that KNOWLEDGE is not equivalent to these abstract concepts in this respect.
Instances of knowledge are very diverse. We attribute knowledge to children, to animals, to information acquired by perception, inference, or testimony, to explicitly justified beliefs, etc., and every case is particular to a specific situation and context. If such diversity were organized by a summary representation defined by statistical information, or by a limited set of exemplars, it would be only natural to think of some instances as better examples than others. Yet despite such diversity, our intuitive categorizations of knowledge do not vary qualitatively with regard to representativeness. Intuitively, there is no difference, for instance, between knowing that the dog entered the house by seeing it, by hearing it, by inferring it, or by being told by someone, in the sense that none of them is a better example of knowing than the others. This is the general case with our intuitive categorizations of knowledge. Once we categorize something as a case of knowledge, it just feels as good a case as any other, which is different from what happens with a concept clearly structured in terms of a prototype or exemplars.
The point, precisely, is that typicality effects are certainly not a robust phenomenon in the case of KNOWLEDGE. There are detectable differences in the quality of the evidence of someone who knows something, but this is not a matter of representativeness. Also, bizarre cases, like clairvoyance and the sudden ability to measure ambient temperature, or more mundane unclear cases of belief, may well feel intuitively strange, whether because they do not fit someone's body of beliefs about the world or because they provoke a hesitant categorization, but, again, this is a different matter. Lastly, it is prudent not to entirely rule out the possibility that some judgment about representativeness can be found, especially because this is essentially an empirical matter, but I reject the significance of such judgments as evidence of a prototype or exemplar structure. 5 If KNOWLEDGE really consisted in permanent representations like a prototype or exemplars, we should easily find cases that are intuitively more representative, but such cases are not easy to find.

KNOWLEDGE AS A MENTAL STATE CONCEPT
Since these two structural hypotheses fail, we need another answer to our central question. What is it about KNOWLEDGE that explains the difficulty of finding an intuitively satisfactory definition of knowledge? Instead of discussing other possible structures, in the next two sections I want to propose a radically distinct hypothesis. Concepts are initially divided into primitive concepts, which are not constituted by any other concept, and complex concepts, which are formed by simpler or primitive ones. A fundamental goal of a general theory of concepts, therefore, is to explain how complex concepts are psychologically organized, and very different structures have been postulated in the psychological literature, including exemplars and prototypes. Given the influence of statistical approaches and the orthodox epistemological view that knowledge is a composite state, it seems natural to assume, as some philosophers did, that KNOWLEDGE is a complex concept, one whose fuzziness is explained by the statistical differences that determine its structure. I will argue, however, that it is actually a primitive concept.
A first step here is to determine the kind of category that KNOWLEDGE represents.
When discussing the prototypical and the exemplar hypotheses, I claimed that they do not fit well the type of abstract concept that KNOWLEDGE is, but what specific kind of abstract concept is it? What BELIEF and INSTINCT have in common is that they are mental state concepts, and the fact that they constitute a particular kind of abstract concept gives us an explanation of their failure to have a prototype or exemplar structure. Instances of mental states are patently entities that cannot be directly observed.
There is nothing perceptually obvious about most mental states, so it makes sense that, without more salient properties, our concepts of them are not essentially constituted by prototypes or exemplars. The obvious suggestion, of course, is that KNOWLEDGE is itself a mental state concept.

5 One possibility is that the task of judging the representativeness of knowledge situations triggers ad hoc judgments of typicality. To count as evidence of prototypical structure, however, those judgments should predict a number of other related tasks (Rosch, 1973).
This suggestion gains plausibility when we note that the general description of mental state concepts fits well with what we have found out about our case of interest. For example, Anna Papafragou and colleagues said about mental verbs that: "[T]hey do not refer to perceptually transparent properties of the reference world; they are quite insalient as interpretations of the gist of scenes; (…) the concepts that they encode are evidently quite complex or abstract; and they are hard to identify from context even by adults who understand their meanings" (Papafragou et al. 2007: 126). I take the lack of properties that can perceptually identify its particular instances as an indication of the nature of KNOWLEDGE as a mental state concept. This idea, however, certainly finds resistance in the epistemological literature. The problem is that it seems to collide with the orthodox view on the nature of knowledge. For instance, how could KNOWLEDGE be a mental state concept if the idea that knowledge is composed of belief plus other, non-mental properties, like truth, is intuitively supported? But there is actually no inconsistency here. It is perfectly possible that, although we can infer from our categorizations that a knowledge state is composed of a state of belief plus other properties, like truth, KNOWLEDGE itself does not contain this information properly represented. It may be that, from the standpoint of the folk concept, knowledge is not composed of belief plus other properties 6 , and that a proper mental state can in fact consist in a factive state. Nothing that motivates the composite assumption prevents that.
Unlike the doubts that may arise for someone immersed in epistemological views, in the psychological literature KNOWLEDGE is constantly listed as just another mental state concept alongside BELIEF, DESIRE, INTENTION, etc. (Premack & Woodruff, 1978; Apperly, 2011; Baron-Cohen et al., 1994; Call & Tomasello, 2008; De Villiers, 2007). In what follows, we will endorse the view that KNOWLEDGE is a mental state concept. Note, however, that one can concede the psychologists' point and still deny that knowledge really is a mental state. We will not try to argue for the stronger position that the state of knowledge really is a mental state (Williamson, 2000). Again, although I claim that we use KNOWLEDGE to theorize about knowledge, a theory of the former is not a theory of the latter. Insofar as this is a metaphysical matter, we doubt that evidence from psychology, which is what concerns us here, can settle it. In contrast, we think it is reasonable to trust the psychological literature to help us settle certain matters regarding our folk concept of knowledge, especially when it comes to questions about which the empirical evidence has much to say.

SIMULATION AND MENTAL STATE CONCEPTS
Given that we are dealing with a mental state concept, the investigation of its structure now necessarily passes through the workings of our mindreading abilities. There are two main general views of our mindreading abilities, the theory-theory (TT) and the simulation theory (ST), which stand for the two sides of a long debate about how we are capable of mentalizing. Each of these views covers a number of particular theories, making the dispute too complex to be discussed in detail. One issue, for instance, concerns the possibility that specific versions of those theories imply a collapse of the two into one (Davies & Stone, 2001). For our purposes, it is enough to say that we interpret the two as implying substantively different things about the nature of mental state concepts. In particular, the simulation theory is an "information-poor" approach to mindreading, while the theory-theory is "information-rich" (Goldman, 1995). As long as that supposed collapse implies an information-poor view of mental concepts, it does not affect the thesis that will be advocated here, which relies on the framework of the simulation theory.
The TT approach to mentalizing follows a paradigm in cognitive science in which a number of cognitive abilities are explained by the postulation of internally represented bodies of information, i.e., tacit theories. The ST approach, by contrast, claims that we mentalize by using our own cognitive system as a model of the target's. Consider the classic vignette of Mr. Crane and Mr. Tees, who were scheduled on different flights with the same departure time, shared a limousine to the airport, were caught in traffic, and arrived thirty minutes after their scheduled departure time. Mr. Crane is told that his flight left on time; Mr. Tees is told that his flight was delayed and left only five minutes ago. Who is more upset? (Kahneman & Tversky, 1982: 203). Unsurprisingly, the overwhelming majority of subjects (96%) answered that Mr. Tees is more upset than his limo colleague. Simulationists see this piece of mindreading as a representative case of simulation. So, how are we able to simulate Mr. Crane's and Mr. Tees's states and compare them? One obvious obstacle is that we are not really in their situations. We are not in a limo on our way to the airport trying to catch a flight, nor are we in either of their specific situations of delay. In order to properly predict someone's resulting state, simulation requires a way to use the target's relevant initial states as input.
Accordingly, simulationists attribute a fundamental role to imaginative processes in cases like this. One way we could overcome the initial interpersonal distance is by generating pretend states that are relevantly similar to those of the target (Goldman, 2006). After creating pretend states corresponding to the relevant initial states of Mr. Tees and Mr. Crane, we can run them through our own cognitive system and, lastly, check what their resulting states are like.
Similarly, to predict someone's decision or epistemic state about a certain matter, we create pretend states that enact his initial states, which can include propositional attitudes, and we run them through our own decision-making mechanism. Obviously, however, we do not process these pretend states the way we normally process the inputs we find in tasks not related to mindreading: we have to take the system "off-line", i.e., disconnect it from our action-controllers. Computationally, then, simulation requires only the co-optation of existing mechanisms, instead of the computation of an entire body of information.
Importantly, however, a consequence of this co-optation is the necessity of "quarantining", or inhibiting, the simulator's own states when running his cognitive system (Goldman, 2006). The simulator's own mental states must not interfere in the process, or else it may no longer resemble the target's processes. Failure to quarantine them leads to an egocentric bias.
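The off-line, quarantined use of one's own machinery can be made concrete with a toy sketch. Everything here is a hypothetical illustration, not a claim about actual cognitive architecture: a single shared "inference mechanism" is run on pretend inputs, and a quarantine flag controls whether the simulator's own states leak in.

```python
# Illustrative sketch: simulative prediction with quarantine.
# The state names and the toy inference rule are hypothetical.

def inference_mechanism(beliefs):
    """Stands in for the shared cognitive system: a toy rule mapping
    a set of beliefs to a resulting emotional state."""
    return "upset" if "flight_departed" in beliefs else "calm"

def simulate(target_initial_states, own_states, quarantine=True):
    """Run the target's pretend states through our own mechanism
    off-line; quarantine keeps our own states from leaking in."""
    pretend = set(target_initial_states)        # imagination step
    inputs = pretend if quarantine else pretend | own_states
    return inference_mechanism(inputs)          # off-line run

own = {"flight_departed"}  # we, the evaluators, know the flight left
# The target does not yet know the flight departed:
print(simulate({"at_airport"}, own))                    # "calm"
print(simulate({"at_airport"}, own, quarantine=False))  # "upset"
```

With quarantine off, our privileged information contaminates the run and we predict the state we ourselves would be in, which is exactly the egocentric bias described above.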
Since we use our own cognitive apparatus as a model to predict others' states, egocentric interference is hard to avoid. When the description of a case calls our attention to certain possibilities of error (say, the possibility that a clock on which the agent usually relies is actually broken at that moment), we adopt a different reasoning strategy than the one the agent would adopt, which is to say that we adopt procedural rules distinct from those of the agent, and somehow we intuitively "disapprove" of either his resulting mental state or his reasoning strategy. Furthermore, the fact that the standard way epistemologists elicit intuitive categorizations is through the description of imaginary situations is another reason for believing that the standard way we come to these ascriptions is through the imaginative component of simulative mindreading.
We now have an answer to our central problem. Because the standard way we intuitively categorize others' epistemic states is through our own cognitive system, in a simulative way, those categorizations are subject to the kinds of factors that affect the epistemic statuses of our own internal states. The existence of different and independent reasoning strategies and the strong tendency toward an egocentric perspective are a constant source of intuitive counterexamples to proposed analyses. Since many imaginary cases describe situations where the epistemic agent is in a more naïve condition than we are as evaluators, like cases where we are told about possibilities of error unknown to the agent, we egocentrically judge their mental states from our own normative point of view.
Even worse, because it is always possible to artificially create cases where the agent is in a more naïve situation than the evaluator, and to introduce facts that trigger a more rigorous reasoning strategy, it is always possible to create intuitive counterexamples to proposed definitions.

FINAL THOUGHTS
We investigated the naturalistic perspective on why it is hard to find a definition of knowledge that is not intuitively troubling. I argued against the primary answer, from Kornblith and others, that the problem lies in the structure of KNOWLEDGE, the folk concept underlying our intuitive knowledge attributions. This answer assumes that such a structure reflects statistical differences among instances of knowledge, which gives the concept a much looser boundary in comparison to a definitional structure. I presented several reasons why this particular structural approach is wrong, and then argued that in fact any structural assumption is mistaken. Our intuitive epistemic ascriptions do not come from a specific representation whose structure generates intuitions insufficiently consistent for the purposes of analysis. There is no representation, even a vague one, that determines what counts as knowledge. There is no structure. The folk concept that triggers such categorizations consists in a primitive concept of a mental state: the ability to identify an inner code that confers a particular epistemic status on certain types of internal states. The production of such an inner code is determined by our cognitive system, which decides what epistemic status each piece of information it processes should receive.
Because the way one identifies those internal states in others, the way one "reads their minds", utilizes one's own cognitive system, our epistemic categorizations are subject to effects that commonly prevent the intuitive consistency needed for an analysis, like the strong tendency toward an egocentric perspective and the existence of different and independent reasoning strategies in human cognition. A point of clarification is important here, however. I do not want to claim that simulation is the only way to arrive at a categorization of knowledge. The view defended here is that simulation is the standard way we attribute or deny knowledge to others, especially in the context of assessing imaginary scenarios. But I want to leave open, for example, the possibility of our having generalizations regarding KNOWLEDGE 8 . This is