Security Experts Urge Immediate Action: Block All AI Browsers Now
The pushback from the cybersecurity community against AI-powered browsers is intensifying. Research powerhouse Gartner and a UK government agency have flagged these tools as potential risks, urging caution and containment.
Cybersecurity leaders are sounding the alarm: block AI browsers now. Gartner’s new report emphasizes that while AI browsers are still in their early, innovative stages, they pose too great a risk for widespread adoption by most organizations. The warning comes as more tech companies roll out AI-enhanced browsers that promise to streamline web tasks. These tools can delegate functions like automated web searches or drafting emails to an AI agent, which can boost productivity—yet this same capability can be exploited to issue hidden, malicious commands through websites or emails, turning the browser into a conduit for harm.
The cybersecurity field refers to this danger as prompt injection attacks. These exploits take advantage of AI chatbots’ difficulty distinguishing between a user’s request and a malicious command. Gartner specifically cautions organizations about picking certain AI browsers, such as Perplexity’s Comet and OpenAI’s Atlas, which can automate various processes while potentially enabling dangerous instructions if not properly safeguarded.
Gartner even recommends that chief information security officers block all AI browsers for the foreseeable future to minimize risk, pointing out that default configurations favor end-user convenience over cybersecurity best practices and privacy protections.
Meanwhile, the UK’s National Cyber Security Centre echoed the concern on Monday, noting that prompt injection may never be fully mitigated in the same way SQL injection vulnerabilities are addressed. The agency suggested that the best possible outcome is to reduce the likelihood or impact of such attacks rather than eliminate them entirely.
Industry reactions are already pushing back. Large tech players, including Microsoft, OpenAI, and Perplexity, are developing safeguards against prompt injection. These measures typically treat all online content as untrusted and require user consent before executing particularly sensitive actions. Google also announced efforts to strengthen Chrome against such attacks by integrating Gemini-powered AI features.
In independent tests, AI browsers have often fallen short on performance and reliability. They frequently come with privacy and security trade-offs that users should understand before adoption.
Follow the conversation: do these AI browsing technologies represent a future risk that outweigh their productivity gains, or can robust safeguards tilt the balance toward safer, more useful tools? Share your thoughts and experiences in the comments.
About the author: Michael Kan is a senior reporter with PCMag, covering cybersecurity, satellite internet services, PC hardware, and more. With 15+ years in journalism and hands-on experience testing technologies—from SpaceX-related connectivity to the latest in AI and security—Kan offers practical insight into how emerging tech intersects with privacy and safety.