Paper Review: Hidden Qualia
Is it possible to not know you're in pain?
A month ago at AI, Animals, & Digital Minds, I had a chat with Derek Shiller about his paper “Hidden Qualia.” Derek thinks it’s probable that we have qualitative, phenomenal states that we cannot introspect upon: states we don’t self-report as having, don’t believe we have, and so on, but which nonetheless possess the same phenomenal character as the states we’re in when we stub our toe or see the color red.
In this post, I’ll be covering Shiller’s arguments that hidden qualia are possible. In the future, I might break down the rest of the paper.
1. Deflating belief
Shiller’s paper begins with two premises:
(Premise 1) Some mental states have phenomenal characters: properties of mental states that together constitute “what it feels like to be us.” We’ll use the term ‘qualia’ to refer to these phenomenal characters.
(Premise 2) Qualia are not constituted by beliefs about our own experiences. That is, believing that you are in pain is not the same thing as actually being in pain, even if there are necessary connections between the two.
I’m going to take #1 as a given; if you don’t think phenomenal characters exist at all, the central question of the paper is kind of vacuous.
#2 is trickier, and depends a little on your definition of belief.
Imagine some limited organism with detailed processing of nociception, the sense (responsive to mechanical, thermal, and chemical damage) associated with the feeling of pain. But the organism has no capacity to represent linguistic statements, no logical reasoning, no communication; very limited cognition, period. You flip the nociceptors on. The organism shifts into a state with the classic phenomenal character of pain—it really hurts. First, is this possible? Second, does the organism have a belief that it is in pain?
I think this is possible. Pain seems separate from having the linguistic thought “ow, I am in pain now,” or taking any external action to avoid the pain. It’s merely a sensation. Similarly, I do not think the organism has a belief in the sense that philosophers would think about it, as internally representing propositional content somehow. But what if you, like Daniel Dennett, have a deflationary account of belief—that there is no “neuralese” language in which propositions are encoded, that belief is nothing but a high-level description of internal and behavioral patterns?
I think most people who believe qualia cannot be deflated should have no problem being deflationary about belief. I know by direct acquaintance that there is something it feels like to think “ow, it hurts a lot,” and I am not aware of any deception or self-delusion going on. But that is simply to say that the occurrent act of thinking “the sky is blue” feels like something; presumably I do not stop believing the sky is blue when I stop thinking about it. To say “Jones believes the sky is blue” seems to me like a statement about how Jones will react when asked about the sky, how Jones makes judgments, which mental states Jones will likely enter or not enter—a description of a high-level pattern that might include qualia, but not a description of a bare phenomenon.
To avoid this fuzziness, let’s pick something concrete, which is still aligned with Shiller’s intent and still would imply something radical if his argument is true.
(Revised Premise) Qualia are not constituted by the capacity to faithfully report on our own experiences, via language or inner monologue. That is, being able to think some version of “I am in pain” without being aware that one is lying is not the same thing as actually being in pain, even if there are necessary connections between the two.
This is less controversial. I think it’s logically possible for some animal to feel intense pain without having the machinery to explicitly report on that feeling.
Let’s proceed with this premise for now… and see if it fails us later.
2. Inductive arguments
Partially hidden qualia are possible, and the distinction is only a matter of degree.
Partially hidden qualia are “qualia that we can tell we’re having only with some difficulty,” like a droning air conditioner that fades into the background, but which become introspectable if we direct our attention to them.
If y is a partially hidden quale, and it takes x units of effort to introspect on it, surely a candidate y' which takes x + 0.00…01 units of effort is still a quale.
But then, by induction, qualia which take arbitrarily high degrees of effort to introspect would still be qualia.
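Spelled out in my own notation (not Shiller’s), the induction has a classic sorites shape. Let $Q(x)$ say that a state which takes $x$ units of effort to introspect on is still a quale:

$$
\begin{aligned}
&\text{Base case: } Q(x_0) \text{ for some easily introspectable } x_0\\
&\text{Tolerance: } \forall x\, \big(Q(x) \rightarrow Q(x + \varepsilon)\big) \text{ for any tiny } \varepsilon > 0\\
&\text{Conclusion: } Q(x) \text{ for arbitrarily large } x
\end{aligned}
$$

As with any sorites, the natural places to push back are the tolerance premise and what actually happens in the limit, which is where the two explanations below come in.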
The problem here is that partially hidden qualia are almost as controversial as fully hidden qualia. Consider two explanations of the droning air conditioner phenomenon:
A. You are in a mental state which has the fully fleshed-out phenomenal character you’d get from being fully aware of a loud, droning air conditioner. However, you are not attending to this mental state, so you cannot introspect on it.
B. Since you are barely attending to this air conditioner, you are in a mental state with a very weak phenomenal character associated with a very dim awareness of something noise-like. The process of directing attention to it alters your brain state, bringing more sensory data into your awareness and altering the phenomenal character of your experience.
B seems like a much more natural description of what’s happening to me. A seems to suggest a double-layered homunculus: you are somehow having the experience of hearing an air conditioner loud and clear, but you aren’t “looking at” that part of the experience, so you don’t “see it.” But that just seems to mistake what an experience is! The experience is that which you see. Furthermore, B is not subject to the same inductive paradox. The effort of introspection increases in proportion to how dim the quale is, so by the time the required effort is at its maximum, the state doesn’t feel like anything at all. That’s very different from suggesting there is a fully real phenomenal character which is hanging around but isn’t doing anything.
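To put my reading of B into the same notation (again my sketch, not Shiller’s): let $i(x)$ be the phenomenal intensity of a state that takes $x$ units of effort to introspect on. On reading B, $i$ is decreasing and

$$
i(x) \to 0 \quad \text{as} \quad x \to x_{\max},
$$

so the inductive chain only ever delivers states of vanishing intensity. By the time a state would take maximal effort to introspect on, there is no phenomenal character left for it to hide.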
Qualia that only partially affect our higher-order faculties are possible, and the distinction is only a matter of degree.
Suppose we see something rose red, and introspect about it. What belief (i.e., internal self-report) will we find: “I am seeing rose red,” “I am seeing red,” “I am seeing a color,” or “I am seeing something”? How precise is it?
Similarly, it is possible to be more or less confident about what experiences we’re having.
Surely if y is a quale which, upon introspection, leads to a belief of specificity x, then y’ which leads to specificity x – 0.00…01 is also a quale.
But then, by induction, qualia which are arbitrarily unspecified and uncertain would still be qualia.
And again, I think there are two possible answers. Either A, we are in a mental state with a fully fleshed-out and specific phenomenal character, but by introspection we only imperfectly grasp it. Or B, we are in a mental state which may have any degree of imprecision or uncertainty associated with it, and introspection may alter the mental state. As the specificity fades into nothing, so too the quale fades into nothing, no hidden qualia necessary.
3. Animal argument
Non-human animals demonstrate that hidden qualia are possible.
(a) Non-human animals can’t form beliefs (internal self-reports) about their conscious states.
(b) It’s plausible that non-human animals have qualia.
(c) ⇒ If non-human animals do have qualia, they would be hidden. (from a, b)
(d) ⇒ It’s plausible that hidden qualia exist. (from c, b)
Sure, dogs probably can’t represent their qualia as explicit linguistic statements.
But how are we justified in believing that non-human animals have qualia? Well, they seem to act like they do. A dog with a cut yelps in pain and avoids the pain source. So while the dog may not be able to express itself in language or self-reports, it seems to have a form of direct access to the qualia which informs its behavior. Otherwise, we wouldn’t have any reason to believe it felt those things at all! Therefore, the “hidden qualia” we are justified in believing in on animal grounds are those qualia which cannot be expressed as linguistic higher-order statements but which do manifest in behavior and dispositions.
If qualia like that existed in humans, we would expect to catch ourselves acting in ways we couldn’t explain introspectively, driven by feelings we couldn’t describe. We’d be flinching away from some internal, hidden pain and thinking, “why did I do that? I don’t feel any pain.” I don’t see that happening very often. The fact that we don’t notice this is evidence that this kind of hidden quale probably isn’t occurring in humans. Still, this is an argument about possibility, not probability, so it’s not out of the question.
4. Humean argument
Hume’s dictum, according to Jessica Wilson, is “there are no metaphysically necessary connections between distinct intrinsically typed entities.” After fact-checking with Gemini 3, here’s my best translation of that into plain English.
An intrinsic property is something like mass. A baseball has mass just by virtue of being a baseball; it does not matter what the universe around it is like, or whether it exists alone in an empty world. Contrast this with an extrinsic property like weight, which depends on the presence of a gravitational field.
An intrinsically typed entity is an entity which is defined only by a set of intrinsic properties. You could define a flame as “that thing which happens when you strike a match” (extrinsic type) or “gas & plasma which emits light, heat, etc.” (intrinsic type). Only the latter definition of “flame” would count for Hume’s dictum.
Distinct entities are entities where no part of one is part of the other.1 A car and its engine are not distinct. Matches and flames are distinct: while one causes the other, they are separate entities.
Putting it all together, when you have two distinct intrinsically typed entities, it is metaphysically possible to have one without the other. If you define a flame as “gas & plasma which emits light, heat…” and a match as “a stick of wood with phosphorous, red, small…”, defining neither in relation to the other, there is nothing contradictory about imagining a world in which struck matches do not emit flames. Yep, it’s a zombie thing!
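In symbols (my own rendering of Wilson’s formulation, not notation from either paper), the dictum says that for any entities $a$ and $b$ that are mereologically distinct and intrinsically typed,

$$
\mathrm{Dist}(a,b) \wedge \mathrm{Intr}(a) \wedge \mathrm{Intr}(b) \;\Rightarrow\; \Diamond\big(E(a) \wedge \neg E(b)\big) \wedge \Diamond\big(E(b) \wedge \neg E(a)\big),
$$

where $\Diamond$ is metaphysical possibility and $E$ is existence. With $a$ an intrinsically typed struck match and $b$ an intrinsically typed flame, the right-hand side is exactly the claim that a world of flameless struck matches is metaphysically possible.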
Qualia and beliefs are separate by Hume’s dictum.
(a) Hume’s dictum: “there are no metaphysically necessary connections between distinct intrinsically typed entities.”
(b) Qualia and beliefs are distinct.
(c) Qualia are intrinsically typed.
(d) There is at least some component of belief that is intrinsically typed.
⇒ There are no metaphysically necessary connections between qualia and beliefs.
The reason that (d) is different from (c) is that some definitions of belief depend on the history of, or the way in which, a mental state was formed, which is path-dependent and therefore extrinsic. But, Shiller reasons, so long as the necessary conditions for belief also include something intrinsically typed, Hume’s dictum still applies.
However, (b) requires that qualia are not a necessary component of belief. This means that philosophical zombies would have “beliefs,” in the sense that Shiller is talking about. And this is restrictive in an important way.
Let’s say I am experiencing a hidden quale of pain.
Hume’s dictum says it is still metaphysically possible for me to introspect, then say the words “nope, I don’t feel any pain,” and walk around, act, and have the brain state of a zombie who isn’t feeling any pain. That is all it says.
But what if beliefs have a phenomenal component? If the state “I believe that X” is partly constituted by the very feeling that X, Hume’s dictum does not apply to that component, since it is not distinct from the underlying quale. So then the underlying question is, are qualia constituent parts of beliefs about them?
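In the mereological terms of footnote 1 (my rendering, not Shiller’s): if a tokened quale $q$ is literally a part of the belief state $b$, i.e. $q \sqsubseteq b$, then $q$ and $b$ are not distinct, the dictum’s antecedent fails, and it is silent on whether $\Diamond\big(E(q) \wedge \neg E(b)\big)$.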
And in a later section, Shiller says they might be! He describes Chalmers’ direct phenomenal concept framework as the most promising justification of how we can know we aren’t zombies, while still not knowing we have hidden qualia. Shiller writes:
A direct phenomenal concept includes the very experience of redness to which it refers … such beliefs are incorrigible because the tokened quale determines the concept’s reference in a way that precludes a direct phenomenal belief from ever being wrong.
So Shiller either has to find another epistemological framework, accept that we don’t know whether we’re zombies, or ditch Hume’s dictum. Maybe it is metaphysically possible to have a pain quale while subjectively feeling that you aren’t feeling anything. But Hume’s dictum doesn’t tell us that.2
Conclusion
I really doubt hidden qualia are possible, and I am even more doubtful that they exist. Then there’s the further question of whether hidden qualia are practically or morally relevant. But, if hidden qualia do exist and they’re morally relevant and they’re strongly valenced… that could be quite good, or quite bad, and we might be able to do something about it. So far, all I’ve done is poke holes in arguments for hidden qualia; I haven’t demonstrated that they aren’t possible, and if I’m right there really ought to be a proof of that. I expect to return to this subject soon!
Wilson has multiple definitions of distinctness, and Shiller does not disambiguate. The definition I give here is mereological distinctness, but Wilson also describes numerical distinctness (not being identical) and strong modal distinctness (it being possible for either entity to exist without the other). Numerical distinctness is so weak as to make Hume’s dictum patently false, and strong modal distinctness is so strong as to make it tautological, so I go with mereological distinctness here.
Shiller tries to address this in a footnote, stating “One strain of the phenomenal concept strategy … holds that qualia are constituents in some of our beliefs about them… [But] it is exceedingly plausible that beliefs have a part that goes beyond whatever qualia they are about, and that Hume’s dictum will tell us that at the very least, qualia could exist while whatever else would be required for a further belief does not.” I agree, but if the phenomenal concept strategy is true, “whatever else would be required for a further belief” is precisely the unimportant stuff: etiology, behavior, cognitive role, etc., which does not separate zombies from humans.


