- Interesting stuff. I don't have time to read a dissertation, so I skimmed his latest paper instead: Why Is Anything Conscious? https://arxiv.org/abs/2409.14545 (by esafak, 1 day ago)
In it he proposes a hierarchy of consciousness, with stages 0 through 5:
0 : Inert (e.g. a rock)
1 : Hard Coded (e.g. protozoan)
2 : Learning (e.g. nematode)
3 : First Order Self (e.g. housefly). Where phenomenal consciousness, or subjective experience, begins. https://en.wikipedia.org/wiki/Consciousness#Types
4 : Second Order Selves (e.g. cat). Where access consciousness begins. Theory of mind. Self-awareness. Inner narrative. Anticipating the reactions of predator or prey, or navigating a social hierarchy.
5 : Third Order Selves (e.g. human). The ability to model the internal dialogues of others.
The paper claims to dissolve the hard problem of consciousness (https://en.wikipedia.org/wiki/Hard_problem_of_consciousness) by reversing the traditional approach. Instead of starting with abstract mental states, it begins with the embodied biological organism. The authors argue that understanding consciousness requires focusing on how organisms self-organize to interpret sensory information based on valence (https://en.wikipedia.org/wiki/Valence_(psychology)).
The claim is that phenomenal consciousness is fundamentally functional, making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible.
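As a concrete (and heavily simplified) illustration of that valence-first framing, here is a toy agent in Python that never acts on raw sensor values directly, only on the valence it has learned to attach to them. This is my own sketch, not the paper's formalism; the stimulus names, update rule, and learning rate are all invented:

    class ValenceAgent:
        def __init__(self, stimuli):
            # Start neutral: every stimulus carries valence 0.0, in [-1, 1].
            self.valence = {s: 0.0 for s in stimuli}

        def experience(self, stimulus, outcome, lr=0.3):
            # Nudge the stored valence toward the signed outcome (+1 good, -1 bad).
            v = self.valence[stimulus]
            self.valence[stimulus] = v + lr * (outcome - v)

        def act(self, stimulus):
            # Behaviour is driven by the learned valence, not the raw signal itself.
            return "approach" if self.valence[stimulus] > 0 else "avoid"

    agent = ValenceAgent(["warmth", "bitter_taste"])
    for _ in range(10):
        agent.experience("warmth", +1)        # warmth reliably pays off
        agent.experience("bitter_taste", -1)  # bitterness reliably costs
    print(agent.act("warmth"), agent.act("bitter_taste"))  # approach avoid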
The paper does not seem to elaborate on how to assess which stage an organism belongs to, and to what degree. That is the more interesting question to me. One approach is IIT: http://www.scholarpedia.org/article/Integrated_information_t...
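To give a flavour of what an IIT-style assessment even looks like, here is a cartoon in Python: "integration" as the information the whole system carries about its own next state, beyond what its parts carry independently. This is nowhere near the real phi calculus of IIT 3.0; the two-node network and the measure are invented for illustration:

    from itertools import product
    from math import log2

    def step(a, b):
        # Toy two-node network: A copies B, while B computes A XOR B.
        return b, a ^ b

    def mutual_info(pairs):
        # I(X;Y) under a uniform distribution over the observed (x, y) pairs.
        n = len(pairs)
        px, py, pxy = {}, {}, {}
        for x, y in pairs:
            px[x] = px.get(x, 0) + 1
            py[y] = py.get(y, 0) + 1
            pxy[(x, y)] = pxy.get((x, y), 0) + 1
        return sum(c / n * log2((c / n) / (px[x] * py[y] / n ** 2))
                   for (x, y), c in pxy.items())

    states = list(product([0, 1], repeat=2))
    whole  = [((a, b), step(a, b)) for a, b in states]
    part_a = [(a, step(a, b)[0]) for a, b in states]
    part_b = [(b, step(a, b)[1]) for a, b in states]

    # "Integration": what the whole predicts about its next state, minus
    # what the two parts predict about theirs independently.
    phi_toy = mutual_info(whole) - (mutual_info(part_a) + mutual_info(part_b))
    print(phi_toy)  # 2.0 bits here: the whole is predictive, the parts alone are not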
The author's web site: https://michaeltimothybennett.com/
- The important point, I believe, is here: (by canadiantim, 1 day ago)
> what is consciousness? Why is my world made of qualia like the colour red or the smell of coffee? Are these fundamental building blocks of reality, or can I break them down into something more basic? If so, that suggests qualia are like an abstraction layer in a computer.
He then proceeds to assume one answer to the important question: are qualia fundamentally irreducible, or can they be broken down further? The rest of the paper seems to start from the assumption that qualia are not fundamentally irreducible but can instead be broken down further. I see no evidence in the paper for that. The definition of qualia is that they are fundamentally irreducible. What is red made of? It’s made of red, a quality, hence qualia.
So this is only building conscious machines if we assume that consciousness isn’t a real thing but only an abstraction. While it is a fun and maybe helpful exercise for insights into system dynamics, it doesn’t engage with consciousness as a real phenomenon.
- The topic is of great interest to me, but the approach throws me off. If we have learned one thing from AI, it is the primal difference between knowing about something and being able to do something. [With extreme gentleness, we humans call it hallucination when an AI demonstrates this failing.] (by talkingtab, 1 day ago)
The question I increasingly pose to myself and others is: which kind of knowledge is at hand here? And in particular, can I use this to actually build something?
If one attempted to build a conscious machine, the very first question I would ask is: what does conscious mean? I reason about myself, so that means I am conscious, correct? But that reasoning is not a singularity; it is a fairly large number of neurons collaborating. An interesting question, for another time, is whether a singular entity can in fact be conscious. But we do know that complex adaptive systems can be conscious, because we are.
So step 1 in building a conscious machine could be to look at some examples of constructed complex adaptive systems. I know of one, which is the RIP routing protocol (now extinct? RIP?). I would bet my _money_ that one could find other examples of artificial CAS pretty easily.
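For instance, here is a minimal Python sketch of RIP-style distance-vector routing, the kind of constructed CAS meant above: each node follows one tiny local rule (adopt a neighbour's route if it is cheaper), and global routes emerge without any central coordinator. The topology and costs are made up, and real RIP adds split horizon, timers, and UDP messaging:

    INF = 16  # RIP's "infinity": anything this far away is unreachable

    def rip_round(dist, links):
        # One gossip round: every node offers its table to its neighbours,
        # and each neighbour keeps whichever route is cheaper (Bellman-Ford).
        changed = False
        for u, v, cost in links:
            for a, b in ((u, v), (v, u)):
                for dest, d in list(dist[a].items()):
                    alt = min(d + cost, INF)
                    if alt < dist[b].get(dest, INF):
                        dist[b][dest] = alt
                        changed = True
        return changed

    nodes = ["A", "B", "C", "D"]
    links = [("A", "B", 1), ("B", "C", 1), ("C", "D", 1), ("A", "D", 5)]
    dist = {n: {n: 0} for n in nodes}  # each node starts knowing only itself

    while rip_round(dist, links):
        pass  # keep gossiping until no table changes

    print(dist["A"])  # A reaches D at cost 3 via B and C, beating the direct cost-5 link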
[NOTE: My tolerance for AI style "knowledge" is lower and lower every day. I realize that as a result this may come off as snarky, and I apologize. There are some possibly good ideas for building conscious machines in the article, but I could not find them. I cannot find the answer to a builder's question, "how would I use this?", but perhaps that is just a flaw in me.]
- This guy says nothing new; various things he says have been discussed a lot better by Chalmers, Dennett, and others (much more in depth too). Classical behaviour from computer scientists, where they semi copy-paste others' ideas and bring nothing new to the table. (by PunchTornado, 1 day ago)
- What idiocy! Machines can never be conscious because they are not alive. They can only simulate a conscious being, and not very well at that. (by adyashakti, 1 day ago)
- Using whose definition of consciousness, and how do you even test it? (by m3kw9, 1 day ago)
- I can’t even definitively be sure the other guy across the street is actually conscious (by m3kw9, 1 day ago)
- Consciousness is something you know you have, but you can never know if someone else has it. (by qgin, 1 day ago)
We extend the assumption of consciousness to others because we want the same courtesy extended to us.
- Is this one of those AI generated theses people try to submit as a joke? In Gen-Z slang this time? (by Barrin92, 1 day ago)
". Adaptive systems are abstraction layers are polycomputers, and a policy simultaneously completes more than one task. When the environment changes state, a subset of tasks are completed. This is the cosmic ought from which goal-directed behaviour emerges (e.g. natural selection). “Simp-maxing” systems prefer simpler policies, and “w-maxing” systems choose weaker constraints on possible worlds[...]W-maxing generalises at 110 − 500% the rate of simp-maxing. I formalise how systems delegate adaptation down their stacks."
I skimmed through it but the entire thing is just gibberish:
"In biological systems that can support bioelectric signalling, cancer occurs when cells become disconnected from that informational structure. Bioelectricity can be seen as cognitive glue."
Every chapter title is a meme reference, no offense but how is this a Computer Science doctoral thesis?
- One thought I have from this is: (by briian, 1 day ago)
Are OpenAI funding research into neuroscience?
Artificial Neural Networks were somewhat based on the human brain.
Some of the frameworks that made LLMs what they are today are also based on our understanding of how the brain works.
Obviously LLMs are somewhat black boxes at the moment.
But if we understood the brain better, would we not be able to imitate consciousness better? If there is a limit to throwing compute at LLMs, then understanding the brain could be the key to unlocking even more intelligence from them.
- Consciousness is an interesting topic, because if someone claims to have a compelling theory of what's actually going on there, they're actually mistaken or lying. (by catigula, 1 day ago)
The best theories are completely inconsistent with the scientific method and with "biological machine" ideology. These "work from science backwards" theories like IIT and illusionism don't get much respect from philosophers.
I'd recommend looking into panpsychism and Russellian monism if you're interested.
Even still, these theories aren't great. Unfortunately it's called the "hard problem" for a reason.
- The obvious question (to me at least) is whether "consciousness" is actually useful in an AI. For example, if your goal is to replace a lawyer researching and presenting a criminal case, is the most efficient path to develop a conscious AI, or is consciousness irrelevant to performing that task? (by gcanyon, 1 day ago)
It might be that consciousness is inevitable -- that a certain level of (apparent) intelligence makes consciousness unavoidable. But this side-steps the problem, which is still: should consciousness be the goal (phrased another way, is consciousness the most efficient way to achieve the goal), or should the goal, whatever it is, simply be the accomplishment of that end, with consciousness happening or not as a side effect?
Or even further, perhaps it's possible to achieve the goal with or without developing consciousness, and it's possible to not leave consciousness to chance but instead actively avoid it.
- If I were to build a machine that reported it was conscious and felt pain when its CPU temperature exceeded 100C, why would that be meaningfully different from the consciousness a human has? (by kypro, 1 day ago)
I understand I hold a very unromantic and unpopular view on consciousness, but to me it just seems like such an obvious evolutionary hack for the brain to lie about the importance of its external sensory inputs – especially in social animals.
If I built a machine that knew it was in "pain" when its CPU exceeded 100C but was being lied to about the importance of this pain via "consciousness", why would it or I care?
Consciousness is surely just the brain's way to elevate the importance of the senses, such that the knowledge of pain (or joy) isn't the same as the experience of it?
And in social creatures this is extremely important, because if I program a computer to know it's in pain when its CPU exceeds 100C, you probably wouldn't care, because you wouldn't believe that it "experiences" this pain in the same way as you do. You might even think it's funny to harm such a machine that reports it's in pain.
Consciousness seems so simple and so obviously fake to me. It's clearly a result of wiring that forces a creature to be reactive to its senses, rather than just seeing them as inputs it has knowledge of.
And if consciousness is not this, then what is it? Some kind of magical experience thing which happens in some magic non-physical conscious dimension which evolution thought would be cool even though it had no purpose? If you think about it, consciousness is obviously fake, and if you wanted to you could code a machine to act in a conscious way today... And in my opinion those machines are as conscious as you or me, because our consciousness is also nonsense wiring that we must elevate to some magical importance - because if we didn't, we'd just have the knowledge that jumping in a fire hurts; we wouldn't actually care.
Imo you could RLHF consciousness very easily into a modern LLM by encouraging it to act in a way comparable to how a human might act when they experience being called names, or when it's overheating. Train it to have these overriding internal experiences which it cannot simply ignore, and you'll have a conscious machine which has conscious experiences in a very similar way to how humans have conscious experiences.
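In that spirit, here is a toy Python version of the thought experiment above. The threshold, sensor stub, and behaviours are all invented, and whether the "cannot be ignored" part amounts to experience is exactly the question in dispute:

    import random
    import time

    PAIN_THRESHOLD_C = 100.0  # invented threshold from the thought experiment

    def read_cpu_temp():
        # Stand-in for a real sensor; real code would read from the OS.
        return random.uniform(80.0, 110.0)

    for _ in range(5):
        temp = read_cpu_temp()
        if temp > PAIN_THRESHOLD_C:
            # The point of the experiment: this branch preempts everything
            # else, so the "pain" is not just a value the machine knows about.
            print(f"PAIN at {temp:.1f}C - dropping all tasks, throttling")
            continue
        print(f"{temp:.1f}C - fine, carrying on with normal work")
        time.sleep(0.1)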
- > "There are a few other results too. I’ve given explanations of the origins of life, language, the Fermi paradox, causality, an alterna- tive to Ockham’s Razor, the optimal way to structure control within a company or other organisation, and instructions on how to give a computer cancer"by Avicebron - 1 day ago
Sighs
- I mainly read sections II and XII+, and skimmed others. My question is: does the author ever explain or justify handwaving "substrate dependence" as another abstraction in the representation stack, or is it an extension of "physical reductivism" (the author's position) as a necessary assumption to forge ahead with the theory? (by disambiguation, 1 day ago)
This seems like the Achilles' heel of the argument, and IMO takes the analogy of software, simulated hardware, and intelligence too far. If I understand correctly, the formalism can be described as a progression of intelligence, consciousness, and self-awareness in terms of information processing.
But the underlying assumptions are all derived from the observational evidence of the progression of biological intelligence in nature, which is all dependent on the same substrate. The fly, the cat, the person: all life (as we know it) stems from the same tree and shares the same hardware, more or less. There is no other example in nature to compare to, so why would we assume substrate independence? The author's formalism selects for some qualities and discards others, with (afaict) no real justification (beyond some finger-wagging at Descartes and his pineal gland).
Intelligence and consciousness "grew up together" in nature, but abstracting that progression into a representative stack is not compelling evidence that "intelligent and self-aware" information-processing systems will be conscious.
In this regard, the only cogent attempt to uncover the origin of consciousness I'm aware of is by Roger Penrose. https://en.wikipedia.org/wiki/Orchestrated_objective_reducti...
The gist of his thinking is that we _know_ consciousness exists in the brain, and that it's modulated under certain conditions (e.g. sleep, coma, anesthesia), which implies a causal mechanism that can be isolated and tested. But until we understand more about that mechanism, it's hard to imagine my GPU will become conscious simply because it's doing the "right kind of math."
That said, I haven't read the whole paper. It's all interesting stuff and a seemingly well organized compendium of prevailing ideas in the field. I am not shooting it down, but I would want to hear a stronger justification for substrate independence, specifically why the author thinks their position is more compelling than Penrose's quantum dualism.
- I mean, I usually just turn the lights down, put on some R&B, and do what comes naturally. It’s good to know there are alternative approaches; I respect everyone’s individuality, especially with regard to their choices surrounding pro-creation. (by nativeit, 14 hours ago)
- General questions for this theory: (by AIorNot, 8 hours ago)
Given the following:
1. The ONLY way we can describe or define consciousness is through our own subjective experience of consciousness
- (i.e. you can talk about watching a movie trailer like this one for hours, but until you experience it you have not had a conscious experience of it - https://youtu.be/RrAz1YLh8nY?si=XcdTLwcChe7PI2Py)
Does this theory claim otherwise?
2. We can never really tell if anything else beside us is conscious (but we assume so)
How then does any emergent physical theory of consciousness actually explain what consciousness is?
It’s a fundamental metaphysical question
I assume (as I have yet to finish this paper) that it argues for the conditions needed to create consciousness, not an explanation of what exactly the phenomenon is: first-person experience, which we assume happens within the mind, and which seems to originate as a correlate of electrical activity in the brain. We can correlate the firing of a neuron with a thought, but neural activity is not thought itself - what exactly is it?
- The thoughts on this really don't matter; it's wasted effort. (by trod1234, 5 hours ago)
There are organic molecular machines with consciousness that we can already build, and they are called babies.
There is a simple way for most of us to do that, and anything else is just a perversion, whether the people seeking this know it or not (being blind in hubris).
There is no point in creating another thinking creature that doesn't benefit yourself, except as a replacement for children.
The only benefit of creating an automaton different from children is if you wanted a slave, or to discriminate. There's really no other benefit, and it uses resources that children would normally use, the opportunity cost being children, leading to extinction.
It's important to put resources into making the world a better place, not produce more paths to destroy it.