HackerOne updates AI policy after researchers raise concerns
"HackerOne does not train generative AI models, internally or through third-party providers, on researcher submissions or customer confidential data. Neither, she continued, are researcher submissions used to "train, fine-tune, or otherwise improve generative AI models." And third-party model providers are not permitted to "retain or use researcher or customer data for their own model training." Sprague assured researchers: "You are not inputs to our models... Hai is designed to complement your work, not replace it.""
"A storm erupted on X after the bug bounty platform launched its Agentic PTaaS last month, which it said "delivers continuous security validation by combining autonomous agent execution with elite human expertise." It said the agents "are trained and refined using proprietary exploit intelligence informed by years of testing real enterprise systems." However, this prompted researchers to ask exactly where the data used to train the agents came from."
In short: HackerOne launched Agentic PTaaS to deliver continuous security validation by combining autonomous agent execution with elite human expertise, describing the agents as trained and refined using proprietary exploit intelligence informed by years of testing real enterprise systems. Researchers voiced concern on X that vulnerability reports and submissions might have been used to train the agents. HackerOne clarified that its generative AI models are not trained on researcher submissions or customer confidential data, that submissions are not used to train or fine-tune models, and that third-party model providers are prohibited from retaining researcher or customer data for model training. Hai is positioned to complement researcher work, not replace it.
Read at The Register