Criticism of ‘AI-Sounding’ Writing Overlooks Deeper Cultural Biases

DATE POSTED: May 28, 2025

At Our AI, we never use AI tools to directly generate articles. Instead, we use AI tools (ChatGPT Deep Research, for example) to search for sources and assist us in our process of data synthesis. I’ve been writing for several years now, and I have developed a particular fondness for the em-dash, a powerful tool for connecting two complete ideas within one fluid, extended sentence. Unfortunately for me, this particular punctuation mark, once a symbol of literary learnedness and stylistic flair, has become increasingly associated with Large Language Models like ChatGPT. Despite the potential accusations of using AI to skimp on my writing, and the risk of sounding like ChatGPT whenever I write, I will never stop using the em-dash—and here’s why.

Indeed, there are extensive examples of the content of human expression being stifled by external pressures: whether through the censorship policies of autocratic regimes or the enforcement of social norms and taboos, many cultures have specific rules dictating what you can or cannot say. However, I contend that none of these restrictions, in practice, regulates the way that an idea is expressed. Unable to speak for the majority of the world’s languages, I must draw on a specific example from English and Chinese, the two languages I speak fluently. One could argue that style of speech (variations in punctuation, vocabulary, and phrasing) exists in the form of dialects. Both languages have, in numerous cases, been modified by regional acculturation, producing a plethora of dialects (African American Vernacular English and Southern American English in the case of American English; Guangdonghua and Beijinghua in the case of Chinese) which, among other things, contribute to the cultural diversity that many of their speakers celebrate.

In order for my argument of parallelism to be effective, we must first answer whether a single person’s stylistic preference can be compared to one shared by an extensive group of individuals. Answering impartially, it is important to acknowledge that speakers of dialects have occasionally been persecuted for their deviation from the commonly accepted grammatical standard, as was the case in the late 1900s, when many considered AAVE “broken English.” This was most likely because the vernacular itself was associated with negative perceptions of the Black stereotype, not because of any inherent weakness in the dialect itself; a white man using AAVE in that period would have been shamed not for the grammatical incorrectness of his language, but for his voluntary association with archetypal ideas of crime and poor education. Thus, we see that cultural bias against dialects is in fact generalizable to all manners of speech, so long as they are associated with a negative social perception.

In turn, I contend that the verbal characteristics of a stylistic choice in language ought not to be judged without accompanying context as to whether the choice constitutes a direct consequence or application of traits deemed morally reprehensible, since it is morally unjust to associate the well-meaning yet ill-spoken words of an individual with depravity unless the choices themselves perpetuate an immoral belief. By this logic, we might condemn an individual’s use of racial language because the speaker’s words ostensibly convey his prejudice, while similar language used without prejudice, as is often the case when “reclaimed” slurs are used internally by members of the corresponding racial groups, is usually morally acceptable.

Returning to the question of why it is wrong to sound like an AI model, these conclusions apply saliently. One potential explanation lies in our society’s inherent tendency to value individual achievement through excellence (the students taking the hardest math classes or earning the highest exam scores often receive the most commendation), along with the association of these values with individualist ideals of creativity and self-reliance. Despite my perhaps overly optimistic belief that AI models like ChatGPT should remain, at most, helpful tools for addressing some of humanity’s fundamental problems, many regard ChatGPT as a simple escape from the burdens that come with simply being human, creativity among them. It follows that writing grammatically similar to text produced by one of these models may be construed as a bold departure from the aforementioned ideals of an excellent human; setting aside the complex discussion of the true role that AI plays in human lives, the perception of this role alone (in our case, the idea that AI is a lazy way to complete assigned work) is the main determinant of how linguistic styles associated with AI are interpreted.

One obvious criticism of the popular stance that it is undesirable to write like an AI model stems from the fundamental concept of an AI itself. The system cards of the largest AI companies (OpenAI, Anthropic, DeepMind) direct their models to be helpful, informative, and professional. Granting the generalization that LLMs are trained exclusively on human data, and thus find, connect, and reproduce language patterns practiced by humans themselves, it is not a far stretch to say that many of the patterns common to supposedly informative AI models were, at least as dictated by humans before the AI age, signs of the very intellectualism that LLMs attempt to emulate. It is then paradoxical to claim that writing like that produced by AI indicates an unwillingness to demonstrate values of human excellence, since the accuracy and rigor with which these LLMs were trained suggest that the text they produce is indeed aligned with the instructions in the system card, which contain, in contradiction with the original claim, paragons of human reasoning.


This article is brought to you by Our AI, a student-founded and student-led AI Ethics organization seeking to diversify perspectives in AI beyond what is typically discussed in modern media. If you enjoyed this article, please check out our monthly publications at https://www.our-ai.org/ai-nexus/read!

Another, less remarked-upon counterpoint lies in the inherently human nature of language. It is as commonplace and natural to us as eating or sleeping, yet I am appalled by the apparent willingness of some individuals to allow something as lifeless as AI to appropriate it from us. Although I will leave this point of speculation as an exercise for the reader, I must stress my own view that, no matter what, we must prioritize the preservation of our humanity against the backdrop of rapid AI development. Even if AI detectors and peers see my writing and jump to the conclusion that the text was AI-generated, my humanity compels me to continue using the em-dash.
