GitHub is readying a new feature to automate some of the most expensive work in DevOps: the invisible housekeeping no one wants to own. Developers would rather be building features than debugging flaky continuous integration (CI) pipelines, triaging low-quality issues, updating outdated documentation, or closing persistent gaps in test coverage. To help developers and enterprises manage the operational drag of maintaining repositories, GitHub is previewing Agentic Workflows, a new feature that uses AI to automate most of the routine tasks associated with repository hygiene.
We began GitHub Agentic Workflows as an investigation into a simple question: what does repository automation with strong guardrails look like in the era of AI coding agents? A natural place to start was GitHub Actions, the heart of scalable repository automation on GitHub. GitHub Agentic Workflows leverage LLMs' natural language understanding to let developers define automation goals in simple Markdown files describing the desired outcome.
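The excerpt doesn't include a sample file, but a goal-oriented workflow of this kind might look something like the sketch below. The frontmatter fields, trigger name, and permissions value here are illustrative assumptions for the sake of the example, not confirmed GitHub Agentic Workflows syntax.

```markdown
---
# Hypothetical frontmatter: when to run and what the agent may touch
on:
  schedule: weekly
permissions: read-all
---

# Weekly issue triage

Review issues opened in the past week. Label any that lack
reproduction steps, flag likely duplicates, and draft a short
summary of recurring themes for the maintainers to review.
```

The body is plain natural language describing the desired outcome, which is the point: the guardrails live in the structured frontmatter, while the goal itself reads like documentation.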
These pressures extend across the software development community, where AI coding assistants are now nearly ubiquitous as teams push to produce more output in less time. But while the efficiency gains are real, teams too often fail to build adequate safety controls and practices into their AI deployments. The resulting risks leave their organizations exposed, and developers struggle to backtrack and identify where - and how - a security gap occurred.
In fact, I didn't even think to ask ChatGPT what might work in my favor if I just stayed the course. I was a "LLeMming": a term Lila Shroff uses in The Atlantic to describe compulsive AI users. Shroff argues that just as the adoption of writing reduced our memory and calculators devalued basic arithmetic skills, AI could be atrophying our critical thinking skills.
Reddit has published its latest performance update, with the platform adding another 5 million daily active users, while it's also posted a strong revenue result for Q4. And as it continues to optimize for AI search, both on and off platform, its engaged, expert communities could help to boost the platform as a key source of information. Unless, that is, Reddit ends up pricing too many LLM projects out of the market.
OpenClaw's claim to fame is that it can take real-world actions on your behalf. Instead of living purely in the cloud, the agent runs on a user's own hardware, often on Mac minis, but you can run it with Windows, Linux, or what have you. Under the hood, it connects to one or more large language models (LLMs) via an application programming interface (API).
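To make the "connects to LLMs via API" part concrete, here is a minimal sketch of the kind of request a local agent might assemble for an OpenAI-compatible chat-completions endpoint, advertising one local tool the model can ask it to run. The endpoint URL, model name, and `run_shell` tool are illustrative assumptions, not OpenClaw's actual configuration.

```python
import json

# Hypothetical local gateway an agent like OpenClaw might talk to.
API_URL = "http://localhost:8000/v1/chat/completions"

def build_request(user_message: str) -> dict:
    """Assemble a chat-completions payload that advertises one
    local 'tool' the model may ask the agent to execute."""
    return {
        "model": "example-model",  # placeholder model id
        "messages": [
            {"role": "system", "content": "You are a desktop automation agent."},
            {"role": "user", "content": user_message},
        ],
        # Tool calling is how the model requests real-world actions:
        # the agent runs the call locally and feeds the result back.
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "run_shell",
                    "description": "Run a shell command on the host machine.",
                    "parameters": {
                        "type": "object",
                        "properties": {"command": {"type": "string"}},
                        "required": ["command"],
                    },
                },
            }
        ],
    }

payload = build_request("List the files on my desktop.")
print(json.dumps(payload, indent=2))
```

The division of labor is the key design point: the model only ever proposes a structured tool call; the agent process on the user's hardware is what actually executes it.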
One of my ongoing fixations in AI is what it's doing to cybersecurity. This week, I spoke with Gal Nagli, head of threat exposure at $32 billion cloud security startup Wiz, and Omer Nevo, cofounder and CTO at Irregular, a Sequoia-backed AI security lab that works with OpenAI, Anthropic, and Google DeepMind. Wiz and Irregular recently completed a joint study on the true economics of AI-driven cyberattacks.
Unverified, low-quality data generated by artificial intelligence (AI) models - often known as AI slop - is forcing more security leaders to look to zero-trust models for data governance, with 50% of organisations likely to start adopting such policies by 2028, according to Gartner's seers. Currently, large language models (LLMs) are typically trained on data scraped - with or without permission - from the world wide web and other sources including books, research papers, and code repositories.
OpenAI's new LLM has revolutionized AI and opened up new possibilities for marketers. Here's a look at how three big-name brands have embraced the technology. In March, the AI lab OpenAI released GPT-4, the latest version of the large language model (LLM) behind the viral chatbot ChatGPT. Since then, a small number of brands have been stepping forward to integrate the new-and-improved chatbot into their product development or marketing efforts. To a certain extent, this has required some courage.
Artificial intelligence can lower the barrier to self-reflection and be genuinely empowering for some, she explains. For people who feel stuck, overwhelmed, or unsure of where to begin, prompts can act as a scaffold for expressing and understanding your ideas, says Iftikhar. If the AI has access to information you've either shared or asked it to generate, it's also an efficient tool at synthesizing that information, explains Ziang Xiao, an assistant professor of computer science at Johns Hopkins University.
Akhil Savani has joined the company as vice president of publisher development, while longtime OpenX leader Rebecca Bonell has been elevated to regional vice president of publisher development, The Americas. Together, they will shape OpenX's next phase of publisher growth, working with publishers to innovate and increase monetisation while driving fair value exchanges. As publishers face critical challenges, including zero-click search and advancements in large language models (LLMs), OpenX is innovating to support publisher revenue and data strategies.
If you don't know it, Ecclesiastes is an Old Testament book in which the eponymous speaker discourses on the apparent meaninglessness of pleasure, accomplishment, wealth, politics, and life itself in the face of the infinitude of the universe and the absolute perfection of God. It is the source of many of our most clichéd phrases, such as "there is a time for everything" and "there is nothing new under the sun."
In a 2024 study by Apollo Research, scientists deployed GPT-4 as an autonomous stock trading agent. The AI managed investments and received communications from management. Then researchers applied pressure: poor company performance, desperate demands for better results, failed attempts at legitimate trades, and gloomy market forecasts. Into this environment, they introduced an insider trading tip - information the AI explicitly recognized as violating company policy.
There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase AI psychosis has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times
AI search builds on the same signals that support traditional SEO, but layers on additional ones, especially around satisfying intent. Many LLMs rely on data grounded in the Bing index or other search indexes, and they evaluate not only how content is indexed but how clearly each page satisfies the intent behind a query. When several pages repeat the same information, those intent signals become harder for AI systems to interpret, reducing the likelihood that the correct version will be selected or summarized.
This comes after a banner year for IP lawsuits against AI companies brought by rights holders. Just about every type of entity that deals in protected content has gone to court against AI companies this year, from movie studios like Disney and Warner Bros. to papers like the . Some of these cases have led to settlements in the form of partnerships, such as the licensing deal between Disney and OpenAI.
As AI chat interfaces become more popular, users increasingly rely on AI outputs to make decisions. Without explanations, AI systems are black boxes. Explaining to people how an AI system has reached a particular output helps users form accurate mental models, prevents the spread of misinformation, and helps users decide whether to trust an AI output. However, the explanations currently offered by large language models (LLMs) are often inaccurate, hidden, or confusing.
Tiiny AI, a US-based deep-tech startup, has unveiled the Pocket Lab, officially verified as the "world's smallest personal AI supercomputer." This palm-sized device, no larger than a typical power bank, is capable of running large language models (LLMs) with up to 120 billion parameters entirely on-device, without relying on cloud servers or external GPUs. At its core, the Pocket Lab aims to make advanced artificial intelligence both personal and private.
As each holding company and agency rushes to assemble its own version of an AI-driven platform, Horizon Media is pitching its version, Blu, as a kind of anti-black-box consultancy: one that helps clients not only build campaigns but also find pools of customers they may otherwise have missed. Overseen by Bob Lord, Horizon Media's president, and run by Domenic Venuto, Horizon's chief product and data officer, Blu is essentially a content marketing platform on steroids, using a variety of LLMs to help the independent agency's clients determine broader business goals through the prism of media (creative inputs will come later - and more on that below).
Harvey on Thursday confirmed it closed a round of funding, led by Andreessen Horowitz, that values the legal AI startup at $8 billion after reports of the funding leaked in October. The startup raised $160 million in the round. This latest capital infusion came just months after it raised $300 million in a Series E round at a $5 billion valuation in June. And that was just months after raising a Sequoia-led $300 million Series D at a $3 billion valuation in February.
"A press release really lends itself to AI, because if you think about it, if you're talking about your company or your you're putting out expert knowledge," Jeppsen explained during a recent Tech Talk at Ragan's Future of Communications Conference. "You are the domain expert. You are factual. You've got a framework ... that resonates, and not only humans read it that way, but then AI tries to read it like a human."