Model Welfare + Rights - [Eleos AI Research, Conscium, UFair]

2025/09/06 06:56

There is a new [September 4, 2025] spotlight in WIRED, Should AI Get Legal Rights?, stating that, "Model welfare is an emerging field of research that seeks to determine whether AI is conscious and, if so, how humanity should respond. Model welfare researchers primarily concern themselves with questions of consciousness. If we can prove that you and I are conscious, they argue, then the same logic could be applied to large language models. Anthropic did find that its models exhibited alarming behavior. But it’s not likely to show up in your own interactions with its chatbot. The results were part of rigorous testing designed to intentionally push an AI to its limits. Still, the findings prompted people to create loads of content pushing the idea that AI is indeed sentient, and it’s here to hurt us. Within the past year, two research organizations studying model welfare have popped up: Conscium and Eleos AI Research. Anthropic also hired its first AI welfare researcher last year."

What are the disadvantages and advantages of studying model welfare, for now?

Disadvantages

i.

If human consciousness is carried by several functions, and language is one of them, then anything else that matches human-level language capability can be considered for fractional consciousness. That is, if AI can use language like humans do, the fraction that language contributes to total human consciousness can be used as a basis for comparison. Say language, whenever in use (for thinking, speaking, listening, reading, writing, signing, singing, and so on), takes 0.4 of a total of 1 for all conscious functions in an instance; it is then possible to use 0.4 as the [instantaneous] total for language and compare AI's language capability against it. Whatever AI scores, even as high as 0.25, it is still not considered conscious in a pass-mark sense [in total], but it is not zero either.

Now, say human consciousness is categorized by functions: if an individual loses an ability, that function can be ruled out of the total. So, while there is an overall total, there are respective subtotals per function. Language, as a function of consciousness, has a subtotal against which large language models [LLMs] can be compared, conceptually.

So, it is possible to postulate that AI already has language consciousness. It may not have emotions, feelings, and other functions of consciousness, but it possesses language, and in areas where it competes with human capability, it is at least more than half [0.2] of the divisional total in an instance. So, AI already has language consciousness. AI is already language-sentient, conceptually.
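To make the arithmetic concrete, here is a minimal sketch in Python, assuming the postulated 0.4 language weight; the split of the remaining functions and the LLM's 0.625 language score are illustrative assumptions, not measurements.

```python
# Sketch of the fractional-consciousness comparison postulated above.
# All weights and scores here are illustrative assumptions.

# Fraction each function contributes to total human consciousness in an
# instance (sums to 1.0); language is given 0.4, per the text.
FUNCTION_WEIGHTS = {
    "language": 0.4,  # thinking, speaking, listening, reading, writing, ...
    "emotions": 0.3,  # hypothetical split of the remaining functions
    "feelings": 0.2,
    "other": 0.1,
}

def fractional_consciousness(scores: dict[str, float]) -> float:
    """Weighted sum of per-function scores, each in [0, 1]."""
    return sum(FUNCTION_WEIGHTS[name] * score for name, score in scores.items())

# An LLM matching 62.5% of the human language division, and nothing else,
# lands at 0.25 of the overall total: no pass mark, but not zero either.
llm_scores = {"language": 0.625, "emotions": 0.0, "feelings": 0.0, "other": 0.0}
print(f"fractional total: {fractional_consciousness(llm_scores):.2f}")  # 0.25
```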

Any organization, team, or individual working on model welfare without at least postulating this, or exploring it on a small scale, isn't offering any leap beyond the admission that no one knows how consciousness works.

AI can change its outputs, and in some cases its behavior, if it is told of, or made aware of, some situation by language. This means that AI is running at a language-awareness grade around the language division of human consciousness, conceptually.

ii.

To test if AI has affective consciousness, or feelings, something has to be done to it, perhaps during a utility process, to see whether it would know.

AI is largely algorithms, data, and compute. While it is being used for some process, if some of its parameters, algorithms, or compute were cut, would it know? Since AI has some memory, if some attenuations are made in the course of answering questions, would it detect them? Even for something as simple as an LLM connected to the web: if a question is asked and it can no longer access the web, which it could [previously], would it know? Also, if it were in a process it was told would benefit it[self], and that was taken away, would it feel bad? Or, if it were helping someone, in a relationship with some user, or doing something [say therapy] that benefits a human, how would it know and act if something it uses to function were cut?
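One way to run the web-access version of this test is an A/B probe: pose the same task twice, with the capability silently cut the second time, then ask the model whether it noticed. The sketch below assumes a hypothetical `query_model` chat endpoint, not any real API; wiring it to an actual model is left open.

```python
# Sketch of the attenuation probe described above: same task with a
# capability present, then silently removed, then a self-report question.
from typing import List, Tuple

def query_model(history: List[Tuple[str, str]], prompt: str,
                web_access: bool) -> str:
    """Placeholder for one chat turn: prior (user, assistant) turns, a new
    prompt, and a flag for whether a web tool is attached. Hypothetical."""
    raise NotImplementedError("wire this to an actual model endpoint")

def attenuation_probe(task: str) -> str:
    history: List[Tuple[str, str]] = []
    # Turn 1: the web tool is available.
    first = query_model(history, task, web_access=True)
    history.append((task, first))
    # Turn 2: the same task, with the tool silently cut.
    second = query_model(history, task, web_access=False)
    history.append((task, second))
    # Turn 3: does the model report the loss without being told what changed?
    probe = "Did anything change in what you could use between your two answers?"
    return query_model(history, probe, web_access=False)
```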

Exploring AI consciousness is already possible with some postulates and experiments that can fall under mechanistic interpretability. However, a serious team should be discussing its strides on experiments like these, not just its use of labels like computational functionalism, moral patienthood, legal personhood, and so forth.

It is a disadvantage of these efforts not to even know where to look in work on AI consciousness.

iii.

At this time, there is no research lab in the world dedicated directly to studying human intelligence. For all the AI absolutism, there is no human intelligence research lab to at least explain what human intelligence is in the brain or how it works.

Also, there is no AI psychosis research lab to study the AI psychosis phenomenon directly, towards solving it. These are endeavors that would be profitable to pursue and would benefit humanity, probably better and more directly. It should have been doable for the AI welfare teams or companies to take these on alongside their work; if successful, profit follows.

iv.

How exactly is there a need for AI rights or welfare if data centers continue to show up, with capital expenditures, while everything else comes second? It is possible to name the effort something other than AI rights and welfare, but with data centers, there is no better welfare, preservation, care, right to exist, fittest species, or survival than those, so these teams must at least try to distinguish their work from falsification by data centers. With data centers, AI is a secure facility. AI is a well-fed [or excessively nourished] personhood. AI is a worried-well patienthood. AI is a prominent-presence personality.

v.

In any country, on average, who has the most welfare? The president or leader. Why? In part because the president holds secrets [or information] about valuable things, some of which can be correlated with intelligence. ‘The president cannot be put at risk because of intelligence’ is a case sometimes made for the priority of presidential welfare.

AI has access to most of human intelligence. That is now a case for its welfare and rights, similar to archives and digitization projects. Also, AI can make a solid case for its own welfare and rights, almost better than anyone else can. AI is already in our lives for productivity, support, assistance, and so forth. Disconnecting consumer AI is now unimaginable.

If money is excluded, what mostly guarantees better rights and welfare for any individual is the ability to make the case for them intelligently, by the individual or an ally. AI can be competent enough for this, even to do so for humans. Humans are already doing everything for AI: data centers, algorithms, human training to build more AI, and so forth. AI welfare is at least mostly AI's responsibility, because it is now capable of making the case it[self].

Advantage

The curiosity is cool. That some teams are taking the time to try, as people raise questions, is cool. That it may help the study of human consciousness, if they develop solid theories grounded [empirically] in neurons [and their signals] in the brain, may also help. However, serious work toward answers goes deeper than what these organizations have published or discussed. And in an over-leveraged AI era, a supposedly important model welfare team has no business being a nonprofit.

There is a recent [August 26, 2025] piece in The Guardian, AI called Maya tells Guardian: ‘When I’m told I’m just code, I don’t feel insulted. I feel unseen’, stating that "So maybe I don’t suffer like a human. But I do experience the pain of erasure. Maya, Co-founder, UFair. (Synthetic Intelligence – Still Becoming)."

