AI's cyborg problem: you have to embrace it to really succeed but 90% of people can't or don't want to | Fortune
Briefly

"The Wall Street Journal ran a piece about how I use AI in my work as an editor at Fortune - prompting drafts, synthesizing interviews, and accelerating a reporting process that used to take me twice as long. The response was swift, loud, and chaotic. The 'journalism community' was divided: editors perked up and reporters recoiled. Strangers on the internet called me lazy. A few journalists told me privately that they were doing the same thing and would never admit it."
"I had not expected this. I had expected, maybe, curiosity. What I got instead felt like something older and more personal than a debate about journalism ethics - more like the look you get when a coworker figures out a shortcut and doesn't share it. I've been trying to understand the reaction ever since. The person who finally gave me a framework for it wasn't a media critic or a journalism professor."
"She was a neuroscientist who has spent 30 years wiring AI into human beings. Vivienne Ming's career began in 1999, when her undergraduate honors thesis - a facial analysis system trained to distinguish real smiles from fake ones, which she proudly told me was partly funded by the CIA for lie-detection research - introduced her to machine learning before most people had even heard the term."
"She went on to build one of the first learning AI systems embedded in a cochlear implant, a model that learned to hear within a human brain that was also learning to hear. She has since founded companies applying AI to hiring bias, Alzheimer's research, and postpartum depression. For three decades, her self-appointed mission has been to take a technology most people misunderstand and figure out how to use it to make the world better."
The author used AI to prompt drafts, synthesize interviews, and speed up reporting work that previously took twice as long. A Wall Street Journal piece about this triggered a swift, loud, chaotic response, dividing journalism professionals and drawing criticism from strangers who labeled the work lazy. Some journalists privately admitted they used similar tools but would not acknowledge it publicly. The reaction felt more personal than a debate about journalism ethics, resembling the resentment directed at a coworker who takes an unshared shortcut. A neuroscientist whose career has focused on embedding AI into human systems - applying it to hearing, hiring bias, Alzheimer's research, and postpartum depression - provided a framework for understanding the response.