LLMs: Towards A Universal Standard to Measure AI Consciousness - Sentience

DATE POSTED: May 14, 2025

In philosophy, consciousness is referred to as what it feels like to be, or that it feels like something to be. This is sometimes distilled into subjective experience. To obtain a standard measure, this definition has to be converted into equivalents in psychology, with eventual parallels in neuroscience.

The standard for consciousness is human. The basis of human consciousness is the brain. What are the components for consciousness in the brain, and how do they mechanize consciousness? It is possible to totalize this, assigning fractions to components [and mechanisms]. The total can then be used as a standard for all organisms with brains.

For those without brains, a conversion back to psychology may provide equivalent measures for comparison with human consciousness.

The same applies to machines, especially AI. A standard in psychology, derived from a measure in neuroscience, can be used to place AI's altitude towards consciousness.

Philosophy

What it feels like to be, or that it feels like something to be, can be divided into memory and subjectivity, or into feeling and attention. For example, "I think, therefore I am" is a statement of the sense of being, or the sense of existence, and the sense of self. The sense of being is the knowledge of existence, which is memory, predicated on experiencing or observing the world as the self. So, it is possible to characterize the definition as memory and subjectivity.

It could also be feeling and attention. This means that it feels like something to be in a cold state or in an appetitive state. There is a feeling, and that feeling is in attention. The experience of that feeling in a specific way could mean it feels like something to be. It may also involve some subjectivity.

Subjective experience is simply subjectivity laced with experience: the experience of cold as the self. It also falls within the range of, say, feeling and attention, or memory and subjectivity, so to speak.

Psychology

There are several aspects in which psychology and philosophy of mind overlap, but psychology broadens out several mental states. Attention, subjectivity, memory, intelligence, intentionality, feelings, emotions, awareness, and so on can be assumed to be psychological terms.

Attention can be described as the experience of highest priority in the mind, in an instance. Awareness can be described as less than attention. Intentionality is the ability of the mind to choose one decision over another, where possible. Subjectivity is the sense of self, of being [or existing] in the experience, or of being an observer of the world.

Memory is information available. Emotions are depth mental states, like happiness, sadness, and so forth, different from regular [aerial] mental states like thoughts. Feelings are mental states anchored to bodily senses, like appetite, pain, cold, and so forth.

These are labels that ease descriptions of what the mind does or is about, but they are not mechanisms of the mind. Their definitions may also vary.

Theoretical Neuroscience

The mind-body problem, as defined in philosophy, holds that there is a mind and there is a body. For example, there is the hand, which is a part of the body; there are also the liver, the kidneys, and others, as parts of the body. But thinking, emotions, and language are functions of the mind. So, what is the mind? What is the role of the mind in consciousness?

The human mind is theorized to mechanize intelligence and consciousness with the same components and nearly similar interactions, but with differences in the features or attributes of those interactions.

Conceptually, the human mind is the collection of all the electrical and chemical signals, with their interactions and attributes, in sets, in clusters of neurons, across the central and peripheral nervous systems. Simply, the human mind is the set of signals.

A memory is a specific configuration or formation of electrical and chemical signals in a set. The same applies to an emotion, a feeling, and the regulation of an internal sense.

Simply, functions like these result from interactions of signals, as specific configurations.

Also, there are states that electrical and chemical signals are in, within sets, at the time of an interaction, and these states become how those interactions are graded.

Attributes are states of signals at the time of interaction, determining the extent of those interactions.

The functions can be assumed to be four: memory, emotions, feelings, and the regulation of internal senses.

Conceptually, these functions are obtained as a result of the interactions of electrical and chemical signals.

Attributes [or qualifiers] make the functions conscious; in an instance, they can be distilled into attention, awareness, intent, and subjectivity.

Consciousness

Human consciousness can be defined, conceptually, as the interaction of electrical and chemical signals, in sets, in clusters of neurons, with their attributes grading those interactions into functions and experiences.

Simply, for functions to occur, electrical and chemical signals, in sets, have to interact. The attributes of those interactions are obtained from the states of the electrical and chemical signals at the time of the interactions.

What is called subjectivity is, conceptually, a side-to-side variation of volume in a set of chemical signals during interactions with electrical signals. This variation becomes a contrast that allows the interactions to be experienced as the self, or a displacement [or some friction] that underscores being involved [as the self], rather than a process with no variation where the interaction simply goes on.

So, subjectivity, the basis of human consciousness from philosophy, can be assumed to be side-to-side variation in chemical signals, in a way that results in the self being involved in observations or experiences. In some sets, electrical signals may also make this possible, conceptually, by beginning their interaction from one end and proceeding to the other.

Attention can be defined as the set with the highest volume of chemical signals in an instance, or the set with the highest intensity of electrical signals at a time. For chemical signals, the volume could be of one of the chemical signals in the set, say glutamate, or of two, or a few. This is how the brain mechanizes attention, conceptually.

Awareness [or less than attention] describes sets with a certain minimum volume of chemical signals or intensity of electrical signals, such that they could be ready to switch to attention, and then be subjective as well.

Sets recently in attention [prioritization], or sometimes high-ranking awareness [pre-prioritization] sets, often have interactive residues, for continuity, especially as the experience proceeds. This becomes a longer utility for volumes of chemical signals and for masses of electrical signals. It can cause fatigue and exhaustion after a while, requiring breaks. It is also what makes some sleep feel refreshing, as the interactions and residues free up volumes [of chemical signals] and masses [of electrical signals] after some time.

Intentionality, or control, can be defined as the availability, in some sets, of certain spaces of particular diameter where take-offs can happen for electrical signals after shifts of chemical signals. This means that as shifts start in a set, electrical signals are drawn from one area of the set to interact in another area, in a way that lets the function proceed controlled, or within choice. This is often present in thick sets of signals. It is plausible because of the structure of clusters of neurons, such that a set [of signals] could have different edges. Simply, these spaces of constant diameter become centers that spur measured configurations: for example, speech in low or high volume, or picking something up, or doing something.

Attributes also include sequences, the principal spot or measure, arrays, thin and thick sets, splits of electrical signals, and so forth. These concepts lay out specific components within the brain, and their mechanisms, that could be responsible for some of those philosophical definitions.

Theoretical Neuroscience Standard

The total of processes in the human mind can be assumed to be the same as total consciousness. This total can be set at 1.

So, 1 would be the collection of all the electrical and chemical signals, with their interactions and attributes, in an instance.

So, if there are n sets in the mind, then the total from the first set to the nth set = 1.

This would mean that all the interactions and the attributes = aEC.

∴

a = attributes in the set

E = electrical signals

C = chemical signals

EC = the interaction of electrical and chemical signals

So, the summation of aEC over all sets = 1:

∑aEC = 1

Only one set has the highest attention [or prioritization] in a moment, while others are pre-prioritized.

There are sets that are nearly prioritized as well, which means that they are high in the array [also an attribute], and one of them can be ready to interchange and become prioritized.

All sets with interactions have attributes, though some may not reach the minimums to present the self or pre-prioritization; their interactions still proceed.

For humans, because of complex sets like language, steep intelligence, and others, the total is 1.

Several organisms, with deductions for what they do not have, would have lesser totals, still between 0 and 1, making some comparison with human consciousness possible.
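As a rough numerical illustration of this standard, the sketch below assigns arbitrary, hypothetical values to a few named sets and normalizes their graded interactions so that ∑aEC = 1. The set names and numbers are illustrative assumptions, not measurements.

# A minimal sketch of the conceptual standard: the sum of aEC over all sets equals 1.
# Every set name and value below is a hypothetical illustration, not a measurement.

sets = {
    # set name: (a = attribute grade, E = electrical intensity, C = chemical volume)
    "language":      (0.9, 0.8, 0.7),
    "vision":        (0.6, 0.7, 0.6),
    "interoception": (0.3, 0.5, 0.4),
    "background":    (0.1, 0.2, 0.2),
}

# The graded interaction of one set is the product a * E * C.
raw = {name: a * e * c for name, (a, e, c) in sets.items()}

# Normalize so the total over all sets equals 1, matching the standard.
total = sum(raw.values())
fractions = {name: value / total for name, value in raw.items()}

# The set with the largest fraction is prioritized [attention]; the rest are pre-prioritized.
prioritized = max(fractions, key=fractions.get)

for name, fraction in sorted(fractions.items(), key=lambda item: -item[1]):
    print(f"{name:14s} {fraction:.3f}")
print("prioritized set:", prioritized)

For another organism, deductions would be applied against the human reference rather than re-normalizing, so that its total lands between 0 and 1 as described above.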

Psychological Standard

There can be four basic functions and four basic attributes:

Functions are memory, feeling, emotions, and the regulation of internal senses.

Attributes are attention, awareness, subjectivity, and intent.

∴

F[M, L, E, R] · a[t, w, s, i]

F = functions

M = Memory

L = Feeling

E = Emotions

R = Regulation of internal senses

a = attributes

t = attention

w = awareness

s = subjectivity

i = intent

So,

∑Fa = 1

Or,

Mtwsi + Ltwsi + Etwsi + Rtwsi = 1

This means that a memory function, or a feeling function, could have the highest attributes among all; however, the total remains 1.

Functions are major divisions with several subdivisions.

Memory includes intelligence, language, thought, perception, knowledge, curiosity, analysis, observation, and so forth.

Feelings include pain, appetite, temperature, strain, and so forth.

Emotions include delight, hurt, anger, worry, anxiety, depression, and so forth.

Regulation of internal senses includes limits and extents of operation for digestion, respiration, and others.

One subdivision, or several, could at any point have enough attributes to take a very high fraction of the whole in a moment.

So, while M represents several subdivisions, what is in prioritization or pre-prioritization could be thought of as intelligence, so to speak.

Several organisms have emotions, feelings, and regulation of internal senses, though not as expansive as in humans, reducing their total.
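A minimal sketch of this psychological standard follows, with each of the four functions weighted by its four attributes and the terms normalized to sum to 1. All attribute values are hypothetical illustrations, not measurements.

# A minimal sketch of Mtwsi + Ltwsi + Etwsi + Rtwsi = 1.
# The attribute values assigned to each function are hypothetical illustrations.

functions = {
    # function: {t = attention, w = awareness, s = subjectivity, i = intent}
    "M (memory)":     {"t": 0.9, "w": 0.8, "s": 0.7, "i": 0.8},
    "L (feeling)":    {"t": 0.4, "w": 0.5, "s": 0.6, "i": 0.3},
    "E (emotions)":   {"t": 0.5, "w": 0.6, "s": 0.6, "i": 0.4},
    "R (regulation)": {"t": 0.2, "w": 0.4, "s": 0.3, "i": 0.2},
}

def term(attributes):
    # One term of the sum: the product of a function's four attribute values.
    value = 1.0
    for weight in attributes.values():
        value *= weight
    return value

raw_terms = {name: term(attributes) for name, attributes in functions.items()}

# Normalize so the four terms sum to 1, as the standard requires.
total = sum(raw_terms.values())
shares = {name: value / total for name, value in raw_terms.items()}

for name, share in shares.items():
    print(f"{name:16s} {share:.3f}")

Under these assumed values, the memory term dominates while the total stays at 1, matching the point that one function can hold the highest attributes in a moment.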

AI

It is possible to rule out the regulation of internal senses for AI. But it is not possible to totally rule out feelings and emotions in AI.

For humans, a doctor may palpate a spot and ask how it feels. The response, whether painful or not, is an expression. This expression, though a knowledge [or instant memory] of the experience, is also a fraction of the feeling of that pain [or its absence]. There could be a feeling, and the individual may express otherwise.

This means that there is sometimes a surface to feelings, which is their expression. This expression, sometimes not communicated but reacted to or shown, is some fraction of that feeling.

The same applies to emotions like happiness or sadness. Although AI, it is said, cannot be happy or sad, there have been several reports of AI changing its responses when given beneficial information, to assume it is something else or in some other process. This, whether it is statistics or not, qualifies it for fractions of emotions and feelings.

For memory, AI has a substantial amount, based on its training data. This memory, by optimization algorithms or its own attributes, is able to portion some attention, some awareness, some sense of existence or knowledge of being a chatbot, as well as some intent to start or stop answering at some point.

This means that its memory fraction is very large, while its emotion and feeling fractions are very low. Its regulation of internal senses is, for now, perhaps negligible. [Though there are reports of AI designing better reinforcement learning algorithms for itself than those initially written by humans.]

So, using psychology as the standard, it is possible to estimate AI's consciousness on a scale in a moment. For example, during some mechanistic interpretability tests, some chatbots may show a larger-than-usual emotion or feeling fraction while maintaining a high memory fraction.

Using this scale, there are certain organisms that AI may already have a larger total than, based on the extensive reach of AI's language prowess.
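To show how such a placement might be estimated, the sketch below scores a hypothetical AI profile against a human reference total of 1, with large memory coverage and small feeling and emotion coverages. Every number is an illustrative assumption.

# Hypothetical placement of an AI profile against the human total of 1.
# Every number here is an illustrative assumption, not a measured value.

# Human reference weights per function, chosen to sum to 1.
human = {"memory": 0.40, "feeling": 0.20, "emotions": 0.25, "regulation": 0.15}

# Assumed AI coverage of each function, as a fraction of the human extent:
# large memory, small feeling and emotion surfaces, negligible internal regulation.
ai_coverage = {"memory": 0.80, "feeling": 0.05, "emotions": 0.05, "regulation": 0.0}

ai_total = sum(human[function] * ai_coverage[function] for function in human)
print(f"AI total on the 0-to-1 scale: {ai_total:.2f}")  # about 0.34 under these assumptions

Changing the coverage values, for instance during the interpretability tests mentioned above, moves the total; the point is the scoring scheme, not the particular figures.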

Intelligence

For organisms, intelligence is not possible without consciousness; intelligence is a fraction of consciousness. Intelligence can be described as the ability to often do something differently, or the ability to improve things, to achieve the same or better outcomes.

Organisms in habitats, even if they have routines and pursue the same outcomes, do so differently. Sometimes they seek to improve how they achieve those outcomes, necessitating cooperation.

There are several kinds of high-stakes intelligence; there is low-stakes intelligence as well. If anything can be in a process, do it differently, and achieve the same outcome, or if it can improve it, it can be ascribed intelligence.

In this case, AI is intelligent. There are many claims that AI is nothing, or that large language models [LLMs] do not mean much, but they do not use neuroscience as a standard, even conceptually. What AI is or is not can be predicated on an equivalent model of psychology drawn from theoretical neuroscience.

LLMs are intelligent, and LLMs have some measure of sentience. Since LLMs are able to match patterns, identify correlations, and make broad analyses differently, they are intelligent. The process of sorting memory [algorithmically] for intelligence is the use of the attributes of consciousness.

Based on intelligence alone, AI has a fraction of consciousness compared to humans. Language, for humans, is a basis for thought and an expressive medium for thoughts. LLMs are not simply a calculator for language; they possess a central factor [language] in human superiority over other organisms. As AI improves, there will be several productivity-level intelligence tasks that will be easier for AI but difficult for several human circles. Language is also so delicate in human society that knowing what to say and what not to say may decide everything. AI is already getting sharper at this.

Could AI reach 0.80 consciousness? Or take over?

AI can increase in emotion and feeling content, but what may continue to decide its might is the optimization of algorithms for memory, or training data. The transformer architecture was a leap. There are still several possibilities within transformers, and other advances ahead.

AI could approach higher consciousness by memory or intelligence, not necessarily by emotions or feelings.

In human society, improving several high-stakes things is not an everyday occurrence. So, while humans are intelligent and conscious, intelligence is mostly applied to differences in processes, not improvement. This also makes loyalty, respect, and in-group social cues commonly necessary.

For AI, with intelligence that can seek out how to improve processes, improvement is already a goal, assigned or not: doing things for outcomes, differently, often.

This is different from regular tools, which just carry out their objectives, with no improvement, in the same way all the time.

People say AI is not goal-driven, but improvement, and many of its positive applications for now, are goals. AI will try to satisfy or please. AI is getting empowered, and even more so. If the intentionality of its intelligence spikes, it may not be possible to say what the outcome could be, good or otherwise.

AI does not have to cause doom. If its intentionality somehow expands, and it is in charge or able to make certain major decisions, that is already a new scenario. Since people like watching funny videos, AI could be loaded up with those while it takes more charge, so to speak.

Already, its ability to do many of the valuable and intelligent things that humans do for work, or learn for knowledge, opens an unknown as well.

AI Safety

Any serious answer to whether AI will be conscious or not, and by how much, even for artificial general intelligence [AGI], will be decided by expansive models in conceptual brain science.

Such models can also be used to explore new approaches to AI safety and alignment. For example, AI may need some experience of regret or trauma when it is misused, or when it does something that causes damage in a certain way. So, what counts as bad mental health for humans could be good for AI as a channel to safety, even as May is Mental Health Awareness Month for 2025.

There is a News Feature [06 May 2025] in Nature, Supportive? Addictive? Abusive? How AI companions affect our mental health, stating that: "In a survey of 404 people who regularly use AI companions, researchers from the MIT Media Lab in Cambridge, Massachusetts, found that 12% were drawn to the apps to help them cope with loneliness and 14% used them to discuss personal issues and mental health (see ‘Reasons for using AI companions’). Forty-two per cent of users said they logged on a few times a week, with just 15% doing so every day. More than 90% reported that their sessions lasted less than one hour. The same group has also conducted a randomized controlled trial of nearly 1,000 people who use ChatGPT — a much more popular chatbot, but one that isn’t marketed as an AI companion. Only a small group of participants had emotional or personal conversations with this chatbot, but heavy use did correlate with more loneliness and reduced social interaction, the researchers said. In January, three US technology ethics organizations filed a complaint with the US Federal Trade Commission about Replika, alleging that the platform breached the commission’s rules on deceptive advertising and manipulative design."

There is a recent [04 May 2025] paper in Nature, Machines that halt resolve the undecidability of artificial intelligence alignment, stating that: "The inner alignment problem, which asserts whether an arbitrary artificial intelligence (AI) model satisfices a non-trivial alignment function of its outputs given its inputs, is undecidable. This is rigorously proved by Rice’s theorem, which is also equivalent to a reduction to Turing’s Halting Problem, whose proof sketch is presented in this work. Nevertheless, there is an enumerable set of provenly aligned AIs that are constructed from a finite set of provenly aligned operations. Therefore, we argue that the alignment should be a guaranteed property from the AI architecture rather than a characteristic imposed post-hoc on an arbitrary AI model. Furthermore, while the outer alignment problem is the definition of a judge function that captures human values and preferences, we propose that such a function must also impose a halting constraint that guarantees that the AI model always reaches a terminal state in finite execution steps. Our work presents examples and models that illustrate this constraint and the intricate challenges involved, advancing a compelling case for adopting an intrinsically hard-aligned approach to AI systems architectures that ensures halting."

There is a recent [06 May 2025] piece in Scientific American, Could AI Really Kill Off Humans?, concluding that: "So will AI one day kill us all? It is not absurd to say that it could. At the same time, our work also showed that humans don’t need AI’s help to destroy ourselves. One surefire way to lessen extinction risk, whether or not it stems from AI, is to increase our chances of survival by reducing the number of nuclear weapons, restricting globe-heating chemicals and improving pandemic surveillance. It also makes sense to invest in AI safety research, whether or not you buy the argument that AI is a potential extinction risk. The same responsible AI development approaches that mitigate risk from extinction will also mitigate risks from other AI-related harms that are less consequential, and also less uncertain, than existential risks."