
"Artificial intelligence is the most exhaustively covered technology since the dawn of the internet. As any tech editor will tell you, it can be challenging to find stories about AI that are not merely new but big. So when our editorial director, Jill Bernstein, forwarded me a pitch from journalist John Pavlus, who wanted to write about a "mad scientist" attempting to "stomp out hallucinations and other gen-AI nonsense from Amazon's cloud security/ chatbots/robots/agents," I said yes in seconds."
"(He actually used a more pungent term than "nonsense," but for decorum's sake, I'm keeping that to myself.) And then I braced myself. The pitch promised to explain the "abstruse formal mathematics" behind "neuro-symbolic AI," a totally different kind of AI that is not based on the kind of large language models that power ChatGPT and just about every other AI product that has infiltrated our lives over the past three years. The mad scientist was Byron Cook, who heads up Amazon's automated reasoning group."