AI is entering one of the most human domains: healthcare. It helps people track sleep, manage chronic conditions, monitor mental health, and navigate loneliness. It listens, advises, and sometimes comforts. Yet despite these advances, hesitation remains: not because the algorithms are weak, but because the experience does not always feel reliable. In this context, trust is not just an emotional response. It is a matter of system reliability: the confidence that an AI assistant will behave predictably, communicate clearly, and acknowledge uncertainty responsibly.
A pair of recent reports shows that artificial intelligence (AI) use and adoption are growing in broadband and other industries. Protiviti projects that 68% of organizations will have integrated autonomous or semi-autonomous AI agents into their core operations by 2026. Nearly one in four respondents (23%) told the firm in August 2025 that they were within six months of integrating AI agents that can operate semi-autonomously or with defined guardrails under human supervision.
The second report, from IDC, found that only 40% of organizations invest in "trustworthy AI," or AI with guardrails. Yet those investing the least view generative AI (genAI) as 200% more trustworthy than traditional, proven machine learning, despite the latter's greater maturity, reliability, and explainability. "Our research shows a contradiction: that forms of AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy," said Kathy Lange, research director of the AI and Automation Practice at IDC.