News

  1. https://forbes.es/_newspack_tech/350351/cinco-indicaciones-para-que-chatgpt-cree-un-contenido-que-suena-como-tu/

Opinion

A few days ago, we reflected on the risks and opportunities posed by GPT-5’s web crawler in terms of privacy and documentary coverage; today, we confront an equally fascinating—and in some ways more intimate—question: What happens when artificial intelligence not only accesses information but also learns to express itself as we do? The article published by Forbes last November, which presents Dr. Jeremy Nguyen’s approach, opens a debate that transcends the purely technical to delve into the very essence of authorship and identity within the digital ecosystem.

Dr. Jeremy Nguyen explains how ChatGPT can be used to write in a style similar to, even mimetic of, its user’s. The key lies in training and machine learning. The article emphasizes the importance of giving an artificial intelligence (AI) such as ChatGPT a unique, personal voice when creating valuable content on the Internet: generic content lacks value, it argues, and the key to standing out is writing that reflects the author’s own voice.

I cannot help but see in this proposal a parallel phenomenon to the one that, for centuries, has occupied scholars of literature, rhetoric, and authorship: the search for one’s own voice. Yet while in the human realm that voice is the outcome of an organic process of learning, experience, and style, here we find that such a voice can be “distilled,” parameterized, and ultimately replicated by a language model.

Nguyen proposes a four-step approach to enable AI to write in your style: 1) distill your unique voice through analysis of your writing, 2) ask ChatGPT to write in that voice, 3) distill your unique perspectives so they can be incorporated into the content, and 4) share your relevant personal experiences with the AI. He suggests training the model through a fictional conversation inspired by the ideas of prominent writers on the dialogue’s topic, and conveying personal stories and experiences related to the matter. The AI then draws on this combination of style, perspective, and experience in its writing.
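As an illustration only—this is not Nguyen’s own code, and the function and field names below are invented—the four steps can be sketched as a simple prompt-assembly routine whose output would be passed to a chat model:

```python
# Hypothetical sketch of the four-step approach: assemble a single
# instruction prompt from writing samples (voice), viewpoints, and
# personal experiences, for a given topic.

def build_voice_prompt(writing_samples, perspectives, experiences, topic):
    """Assemble a prompt asking the model to adopt the user's voice,
    perspectives, and lived experience when writing about a topic."""
    samples = "\n---\n".join(writing_samples)
    views = "\n".join(f"- {p}" for p in perspectives)
    stories = "\n".join(f"- {e}" for e in experiences)
    return (
        "Step 1 - Voice: analyze these writing samples and distill "
        f"the author's style:\n{samples}\n\n"
        "Step 2 - Task: write about the topic below in that voice.\n"
        f"Topic: {topic}\n\n"
        f"Step 3 - Perspectives to incorporate:\n{views}\n\n"
        f"Step 4 - Personal experiences to draw on:\n{stories}\n"
    )

prompt = build_voice_prompt(
    writing_samples=["My last blog post...", "An older essay..."],
    perspectives=["Generic content has little value online."],
    experiences=["Years of teaching Information Technologies."],
    topic="AI and personal voice",
)
print(prompt)
```

The sketch only builds the instruction text; whether the resulting style actually resembles the author’s is, of course, the open question the rest of this column examines.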

From a technical perspective, this process represents a significant advancement in the personalization of language models. But from the standpoint of Documentation Sciences and information ethics, it raises three fundamental questions that warrant careful reflection:

  1. The illusion of authorship. When a system is capable of emulating our voice, our perspectives, and even our personal experiences, where does the authenticity of the message reside? Nguyen himself argues that AI can make writing “more authentic and personal.” However, it is important to remember that authenticity is not merely a matter of form, but of intentionality and responsibility. An AI-generated text, no matter how well it imitates our style, lacks the communicative intentionality that characterizes human authorship. In academic and professional contexts, this opens the door to dilemmas concerning attribution, originality, and intellectual integrity.
  2. The ownership of voice. If my “unique voice” can be distilled into a prompt and used to generate massive amounts of content, do I still own my expressive identity? In the digital documentation landscape, we already face challenges regarding copyright management and source attribution. The possibility of cloning writing styles adds a new layer of complexity. It is not far-fetched to imagine that, in the near future, we will need to develop “authorship metadata” standards to certify whether a text has been generated by a human, assisted by AI, or entirely produced by a model trained on our voice.
  3. The bias of experience. The fourth step of Nguyen’s method consists in sharing relevant personal experiences with the AI. Here arises a fundamental epistemological question: Can a machine, no matter how well trained, “understand” human experience in a way that allows it to channel it meaningfully? My experience as an educator leads me to believe that it cannot. The AI can process narratives of experiences and generate texts that reference them, but the lived dimension, the emotional context, and the practical wisdom derived from them remain within the realm of the human.
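The “authorship metadata” standard imagined in point 2 does not yet exist; purely as a hypothetical illustration (every field name here is invented), such a provenance record might look like this:

```python
import json

# Purely hypothetical authorship-provenance record: no such standard
# exists today, and all field names are invented for illustration.
record = {
    "work_id": "example-article-001",
    "declared_author": "Jane Doe",
    "generation_mode": "ai_assisted",  # e.g. "human", "ai_assisted", "ai_generated"
    "voice_model": "style cloned from author's writing samples",
    "human_review": True,
}
print(json.dumps(record, indent=2))
```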

In conclusion, we are faced with a tool of enormous potential for content creators, professionals, and communicators. The ability to train ChatGPT to write in our voice may serve as a valuable ally in overcoming creative blocks or scaling editorial production. However, as a professor and researcher in Information Technologies, I feel compelled to remind us that technology must serve humanity—not the other way around. The “unique voice” we so highly value on the Internet is not merely a matter of syntactic or lexical style; it is the reflection of a life trajectory, a cultivated judgment, and a personal ethics. That an AI can imitate it is a technical marvel. That we ourselves know how to preserve the distinction between imitation and genuine creation is a human challenge.