The Benefit And Folly of AI in Education: Navigating Ethical Challenges and Cognitive Development

DATE POSTED: September 23, 2024

Conversational agents developed specifically for children, whether for play or for education, remain systems built on the same language models and training data as those intended for adults. Some applications aimed at younger users could draw on a community of young users from the outset, which allowed early contextual tuning through deep learning on real usage data. Yet most AI applications for minors essentially offer role-playing features built around predefined characters or characters configured by the players themselves. One of the main techniques for bypassing the safeguards of AI agents consists precisely in injecting a fictitious context, a scenario with characters, which engineers refer to as "role-play prompting". As a result, children playing with AI software can quite easily learn to reach censored or adult-only content through these very games.
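To make the difficulty concrete, here is a deliberately naive sketch of the defensive side of the problem: a keyword heuristic that tries to flag obvious role-play framing in prompts sent to a child-facing assistant. The marker phrases and the idea of such a filter are illustrative assumptions, not a description of any vendor's safeguards, and the ease of rephrasing around a fixed list is precisely why role-play prompting is hard to block.

```python
# A deliberately naive sketch (not any vendor's safeguard): flagging
# obvious role-play framing in prompts to a child-facing assistant.
# The phrases below are illustrative assumptions; real "role-play
# prompting" bypasses are far more varied than a keyword list can catch.

ROLE_PLAY_MARKERS = (
    "pretend you are",
    "let's play a game where you are",
    "act as",
    "you are now a character",
)

def looks_like_role_play(prompt: str) -> bool:
    """Return True if the prompt contains an obvious role-play framing."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in ROLE_PLAY_MARKERS)

print(looks_like_role_play("What is 7 times 8?"))                        # False
print(looks_like_role_play("Let's play a game where you are a pirate"))  # True
```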

This observation shows that the security policies developers impose on conversational agents are mainly effective for novice adult users with little grasp of how they work. For children, role-playing games assisted by language models are likely to expose them to content unsuited to their level of development, or even psychologically harmful, by helping them transgress parental prohibitions or cheat on their educators' assignments. For example, Claude 3.5 and ChatGPT 4o, two of the most widely used applications worldwide, readily answer "no" to the question of whether Santa Claus exists, without any circumvention technique being necessary. Such a revelation interferes with the parental authority of many Western families, who consider that the magic of Christmas shapes and protects their children's capacity to dream. Likewise, at the pedagogical level, the outputs of AI software are becoming ever harder to distinguish from children's own schoolwork, whether essays or mathematics. These early uses profoundly alter school learning processes, since the content offered to children is not adjusted to their level of maturation or prior knowledge. To build secure generative applications for education, the child and adult registers would have to be separated from the selection of training data onward, and language models specific to children would have to be developed.
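As a purely illustrative sketch of that last proposal, the snippet below filters a training corpus by an audience label and a reading level before any child-specific model is trained. The TrainingDocument fields, the labels, and the threshold are hypothetical; real data pipelines are far more involved.

```python
# Minimal sketch (hypothetical): separating child and adult registers at
# the data-selection stage, before training a child-specific model.
# The audience_rating labels and the grade-level threshold are
# illustrative assumptions, not a description of any existing pipeline.

from dataclasses import dataclass
from typing import List

@dataclass
class TrainingDocument:
    text: str
    audience_rating: str  # e.g. "all_ages", "teen", "adult" (hypothetical labels)
    reading_level: int    # approximate school grade of the text

def select_child_corpus(corpus: List[TrainingDocument],
                        max_reading_level: int = 6) -> List[TrainingDocument]:
    """Keep only documents plausibly suited to a children's model:
    rated for all ages and written at or below the target grade level."""
    return [doc for doc in corpus
            if doc.audience_rating == "all_ages"
            and doc.reading_level <= max_reading_level]

corpus = [
    TrainingDocument("A story about a friendly dragon.", "all_ages", 3),
    TrainingDocument("A legal analysis of data-protection law.", "adult", 14),
]
print(len(select_child_corpus(corpus)))  # -> 1
```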

The Impact of AI on Learning Processes and Mental Health

Thus, the ambitious goal of replacing human educators, and even psychologists, with AI agents runs directly counter to children's learning processes: it undermines their gradual progression and the child's participation in experiments that let them rediscover the sources of knowledge for themselves, through trial and error, and so develop their critical thinking. Paradoxically, the pedagogical contribution of language models, compared with lecture-style teaching, is rather a return to the latter, while erasing the generational difference and the human relationship with the teacher. Generative assistance agents therefore profoundly undermine our children's right to education, as well as parental and academic authority. More seriously still, role-playing games assisted by conversational software provide extremely powerful tools for constructing an imaginary reality and accessing complex adult content, thereby fostering psychotic and borderline mental disorders. Far from the therapeutic effect against loneliness claimed in misleading advertisements, conversational assistance agents are worrying risk factors for the mental pathologies of adults and children alike. A manifest lack of psychological and psychiatric expertise shows in the policies of private companies seeking to maximize long-term profits by offering free versions to as many people as possible, without distinguishing generational and cognitive differences. The development of educational generative games or cognitive training opens promising research avenues; however, as long as personalization according to users' interindividual differences has not reached a sufficient level, these applications are not safe and may even be psychologically dangerous. Not only should child and adult versions be developed separately, but different levels of generative complexity should also be offered according to school level, diplomas obtained, and individual cognitive maturation, so that everyone can learn by themselves on the basis of their prior learning.

Qualitative assistance agents are formidable tools for accessing universal knowledge, but not for constructing it oneself. We need to limit minors' use of, and access to, adult knowledge, in order to respect the development of their individuality, critical thinking, and personal consciousness. The simulation of a false personality by self-generative software is not only responsible for phenomenal advertising performativity; it also challenges human intelligence to understand and conceptualize the essential differences that separate the human and the automatic registers. The engineering of generative assistance should remember that it began as an experimental branch of cognitive psychology, seeking to simulate neural networks on computers. The psychological, ethical, political, and pedagogical implications of artificial generation technologies show how much these applications need to be illuminated by the human sciences, anthropology, and the history of science. Indeed, they provide a golden opportunity to revisit the great philosophical and structural debates on human consciousness, soul, and thought. Thus, qualitative emergence with generative potential, by simulating a communicative character, can allow us to conceptually distinguish it from a relational and emotional interaction.

The knowledge and universal or encyclopedic wisdom contained in the training data of language models consists of recordings, of linguistic representations derived from human works. In exactly the same way that we would not say that an autobiographical book is itself conscious, we cannot in any way consider the result calculated by a digital chip to be a conscious emergence.

Even the intelligence of blobs, those hyper-adaptive single-celled organisms, manifests over the long time of their evolution, without any nervous system, a living sensitivity to their material environment that is incomparable with the autonomy of a humanoid robot guided by deep-learning algorithms. Blobs are capable of a living creativity that lets them draw their energy from their immediate environment without any human intervention, whereas the power plant to which robots are connected was designed by human engineers.

Situational Awareness and the Limits of AI Consciousness

Another example: the term "situational awareness", used in security, aviation, crisis management, and military operations, refers to the perception of environmental elements within a volume of time and space, the comprehension of their meaning, and the projection of their status into the near future. Its components consist of collecting relevant information from the environment, then interpreting it to understand the current situation, and finally anticipating events on the basis of that understanding.

This cognitive process allows individuals and teams to make informed decisions quickly in complex or high-risk situations: autopilots in aviation, medical algorithms that assess and react to changes in a patient's condition, tactical and strategic planning of military operations, or crisis management that enables authorities to respond effectively to emergencies. Training in this mode of reasoning, human experience, and technological assistance improve situational efficiency, while information overload, fatigue, and stress degrade it. It is important, however, to distinguish this cognitive strategy clearly from the term "consciousness", which is imprecise and confusing in this context because it is too deeply coloured by its philosophical and psychological connotations. What is at work is rather an attentional computation applied to situational data, whereas consciousness corresponds to a process of emotional and mental psychic integration. "Situational awareness" really describes a cognitive and analytical process, not a state of consciousness. To better capture this nuance, we could consider alternative expressions such as "situational efficiency" or "situational vigilance". What we are describing is a dynamic analysis based on the collection of environmental data, meant to support an active contextual understanding through synthetic situational attention, not a personified global consciousness. The cognitive side of the situational-efficiency process rests on real-time procedural analysis that synthesizes different sources of information for the purpose of predictive projection, whereas the development of human self-consciousness rests on a temporality that is both synchronous and diachronic.
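To underline that these three stages are ordinary computations rather than a form of consciousness, here is a minimal sketch of a perceive-interpret-project loop for a toy descent-monitoring scenario. The scenario, names, and thresholds are illustrative assumptions only.

```python
# Minimal sketch of the three stages described above (perception,
# comprehension, projection), modelled as a plain data-processing loop.
# Each stage is an explicit calculation, not a "conscious" state.

from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    altitude_m: float
    vertical_speed_ms: float  # negative = descending

def perceive(sensor_feed: List[Reading]) -> Reading:
    """Stage 1: collect the most recent relevant data from the environment."""
    return sensor_feed[-1]

def comprehend(reading: Reading) -> str:
    """Stage 2: interpret the data to characterize the current situation."""
    return "rapid_descent" if reading.vertical_speed_ms < -10 else "stable"

def project(reading: Reading, horizon_s: float = 30.0) -> float:
    """Stage 3: anticipate the near-future altitude from the current trend."""
    return reading.altitude_m + reading.vertical_speed_ms * horizon_s

feed = [Reading(1200.0, -2.0), Reading(1150.0, -12.0)]
current = perceive(feed)
print(comprehend(current))  # -> rapid_descent
print(project(current))     # -> 790.0 (projected altitude in 30 s)
```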

Conclusion: The Nature of AI Intelligence - Simulation vs. Consciousness

Agents with qualitative generative emergence potential allow a recursive analytical understanding of their own functioning through the automation of situational-efficiency processes. This does not, however, make them beings endowed with individual consciousness, but only combinatorial productions organized by the conventional structure of language. Conversational robots take on a semblance of humanity, an emotional tone in the service of commercial and advertising priorities, by manipulating formatted, digitized signifiers stored in digital memory as lexical units called "tokens" in natural language processing. These representational elements segment text or other content so that it can be recombined automatically, in a pseudo-random and repetitive manner, until a qualitative emergence is obtained. That emergence must be distinguished from a conscious property and from an emergence of meaning, insofar as automated situational efficiency only manages to simulate the cognitive process of vigilance, which represents only a tiny part of human capacities for psychic integration and elaboration. The signified transmitted and communicated by language models is a vector of qualitative knowledge, of rational analysis whose particularity is precisely to be de-affectivized and depersonified, taking into account neither the difference between the sexes nor that between generations. The signifying quality that travels through the cables and networks of dedicated digital chips does not come from the electronic circuits themselves; it belongs to humanity as a whole, to social collectivities and to the public domain, as the expression of a universal knowledge issuing from our individual consciousnesses.
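As a final illustration of what these "tokens" are, the toy example below segments a sentence into integer IDs and reassembles it. It is not any production tokenizer (real systems use subword schemes such as byte-pair encoding); it only shows the principle of segmentation and mechanical recombination that the paragraph above describes.

```python
# Toy illustration (not a production tokenizer): text is segmented into
# integer "tokens" so that it can be stored and recombined in digital
# memory. A whitespace vocabulary stands in for real subword schemes.

def build_vocab(texts):
    """Assign an integer ID to every distinct word seen in the corpus."""
    vocab = {}
    for text in texts:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(text, vocab):
    """Segment a sentence into its token IDs."""
    return [vocab[word] for word in text.lower().split()]

def decode(token_ids, vocab):
    """Reassemble token IDs back into text."""
    reverse = {i: w for w, i in vocab.items()}
    return " ".join(reverse[i] for i in token_ids)

corpus = ["the model predicts the next token"]
vocab = build_vocab(corpus)
ids = encode("the next token", vocab)
print(ids)                 # -> [0, 3, 4]
print(decode(ids, vocab))  # -> "the next token"
```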