AI Won't Replace Me Yet, But It Might Prove I Was Never That Original | HackerNoon
Briefly

The article delves into the unsettling parallel between Large Language Models (LLMs) and human cognition. While LLMs generate text by predicting patterns, this raises profound questions about the nature of human writing itself. Are we merely assembling words based on learned patterns, similar to the mechanics of LLMs? The author posits that our writing processes might be more akin to those of machines, challenging the romantic notion of creativity and intuition. Ultimately, this leads to a reflection on the implications of recognizing ourselves as potentially machine-like in our thought processes.
The real conundrum I’m racking my brains over, dear HackerNoon reader, is more unsettling. It’s like the feeling you get when you realize you’ve been on autopilot for the last two hours doing 80 on I-5.
This raises a disquieting question: How much of human writing was already just...this? How often are we not writing but predictively assembling, our choice of words a game of Tetris played with borrowed patterns, phrases, and unconscious mimicry of established rhetorical forms?
What if the real heartburn-inducing revelation here is not that Large Language Models can imitate us but that what we call 'us' was machine-like all along?
Strangely enough, if you break down the writer's process, or at least this writer's process, it starts to look a lot like what Large Language Models do.
Read at HackerNoon