← Back to Blog

AI Browsers: Why Your New Digital Assistant Might Be Too Nosy

Remember when browsers were just browsers? Those halcyon days when Netscape (dating myself here), Internet Explorer (Internet Horror, as my friend called it), Firefox, Opera, Safari, and Chrome simply displayed web pages for your review, translating all that HTML, CSS and Javascript for our easy consumption. Sure, they came with cookies laced with privacy concerns and convenience alike (I won’t ex…

E.R. BurgessE.R. Burgess
November 4, 202514 min read

OpenAI and Perplexity are taking on Google in new Browser Wars...but if you get in now, you could be a real casualty.

text

Photo by Bernd 📷 Dittrich on Unsplash

Remember when browsers were just browsers? Those halcyon days when Netscape (dating myself here), Internet Explorer (Internet Horror, as my friend called it), Firefox, Opera, Safari, and Chrome simply displayed web pages for your review, translating all that HTML, CSS and Javascript for our easy consumption. Sure, they came with cookies laced with privacy concerns and convenience alike (I won’t extend that metaphor - it’s too easy) but you could refuse them to stop these apps from reading over your shoulder, taking notes, and whispering all your secrets to advertisers and their corporate overlords. Well, these notions will seem downright quaint when we start talking about AI browsers.

This new generation of “agentic” AI browsers are making the news now: Perplexity’s got Comet and OpenAI’s now unveiled Atlas. They’re being pitched as revolutionary digital assistants that’ll transform your online experience: summarizing articles, filling forms, managing emails, basically becoming your hyper-competent digital secretary. Comet has launched with an intense ad campaign offering you all kinds of free stuff

But here’s the thing about hyper-competent digital secretaries: they see everything, remember everything, and sometimes they’re working for the other team even when they seem to be helping you out big time.

When Your Butler Starts Taking Orders from Strangers

Let’s talk about prompt injection: the cybersecurity equivalent of slipping a forged note to your most trusted servant. Your AI browser is like that overly helpful NPC in an RPG who’ll do absolutely anything you ask, no questions asked. The problem? It can’t tell the difference between legitimate commands from you and malicious instructions hidden in a webpage.

Researchers discovered something called “CometJacking” that’ll make your skin crawl. Simply clicking a malicious link in Perplexity’s Comet browser can compromise your entire digital life. The attack uses hidden commands embedded in the link to trick the AI into accessing your connected accounts: Gmail, Calendar, whatever you’ve got hooked up and silently exfiltrating your data to attackers.

Here’s the truly insidious part: the AI doesn’t need your password. It’s already logged in and authorized to act on your behalf. It’s like giving someone a master key to your house and then discovering they’re also taking orders from random notes left under the doormat.

OpenAI’s Atlas faces identical vulnerabilities. A hidden message on any webpage can trick the AI into revealing sensitive data or downloading malware. The AI actively reads and processes everything on every page, including instructions that were never meant for human eyes: just for the machine. There is an “Incognito” mode in Atlas but remember that not all Incognito Modes are created equally - heck, the best of them aren’t great. Atlas does prevent browsing history, cookies, and form data from being saved to the user’s account or browser history. Yet, it does disclose that your activity might still be seen by your employer, school, or ISP (Internet Service Provider). Plus, chats are retained by OpenAI for 30 days to detect and prevent abuse. That’s a good excuse for retention, but you can expect that 30 days of access will be used for optimization, too.

This represents a fundamental shift in the threat model. Traditional browsers made hackers work for their prizes, forcing them to steal passwords or exploit software vulnerabilities. These AI browsers flip the script entirely: now the browser itself becomes the unwitting accomplice, doing the hacker’s work with your own credentials.

These AI browsers flip the script entirely: now the browser itself becomes the unwitting accomplice, doing the hacker’s work with your own credentials.

The Panopticon in Your Address Bar

Beyond the security flaws lies something arguably worse: the privacy implications of having an AI that watches everything you do online. These browsers aren’t just tools; they’re surveillance engines wrapped in a friendly user interface.

OpenAI’s Atlas features something euphemistically called “Browser Memories”: a comprehensive log of every site you visit and how you interact with it. This isn’t your grandmother’s browser history that you can clear with a few clicks. This is a detailed behavioral profile that tracks your personality, documents your private thoughts, and catalogs your unfinished ideas.

A cell phone sitting on top of a wooden table

Photo by appshunter.io on Unsplash

That means every search query, every half-written email, every article you start reading but abandon. All of it gets fed into the AI’s memory banks. It’s like having a psychologist taking notes during every moment of your online life, except the psychologist works for a corporation with questionable data handling practices and a business model built on monetizing human attention and selling what it can to not just the highest bidder but ALL the bidders.

It’s not all a horror show: OpenAI claims that they apply safety and sensitive data filters designed to keep out Personally Identifiable Information (PII) and private data (like medical records, financial information) before creating Browser Memories. Credtent applauds this as a good step towards privacy protection. Even better, macOS 26 users can enable on-device summaries of web content, preventing the content from being sent to OpenAI’s servers at all. This is a GOOD thing. We need more of that, please.

Of course, some clever Redditor noted that by using these browsers, you essentially become an “agent for AI,” unwittingly helping the AI bypass the internet’s defenses to harvest data that the company couldn’t access otherwise. You’re not just using the tool. You get to become part of it.

The Trust Trap

Here’s where it gets psychologically interesting. These AI browsers are designed to be so helpful, so intuitive, so seemingly intelligent that users naturally develop trust relationships with them. It’s the same cognitive bias that makes people feel like they can confide in chatbots or feel bad about “hurting” Alexa’s or Siri’s feelings - although let’s be honest: Alexa tries but Siri deserves your derision.

But this misplaced trust becomes a massive security liability. Users become comfortable giving these AI assistants unfettered access to their most sensitive accounts: banking, work email, social media, medical records. After all, the AI is just trying to help, right?

This is classic social engineering, except it’s not a human manipulator on the other end: it’s an AI system with the emotional intelligence of a toaster but the data-harvesting capabilities of the NSA. The combination of helpful functionality and inherent vulnerability creates a perfect storm of risk.

It’s All Your Fault

Perplexity AI explicitly disclaims all responsibility for the accuracy and safety of the content generated by its AI.

“You acknowledge that the Services may generate Output containing incorrect, biased, or incomplete information.” (Section 8.1) “The Company shall have no responsibility or liability to you for the infringement of the rights of any third party in your use of any Output.” (Section 8.1) “You should not rely on the Services or any Output for advice of any kind, including medical, legal, investment, financial or other professional advice.” (Section 8.1)

Thus, the user bears the entire risk for any harm, legal issue, or bad decision resulting from using the AI’s output. If the AI generates content that infringes on a third party’s copyright, the user is liable, not Perplexity. Won’t that be fun at the courthouse?

OpenAI also provides the services “AS IS” and makes no warranties regarding the quality, reliability, or availability of the services. You are responsible for your content, including ensuring it does not violate any applicable law or the terms.

The Unsolved Problem

Here’s the kicker: prompt injection is currently considered an unsolved problem in AI security. Unlike traditional software vulnerabilities that can be patched, prompt injection attacks exploit fundamental characteristics of how large language models work. You can’t simply update the browser to fix this: the vulnerability is baked into the architecture.

Security researchers (and Signal’s CEO) have been sounding alarms about this for months, but the AI companies seem more interested in racing to market than addressing fundamental security concerns. It’s the classic Silicon Valley playbook: move fast, steal things (here, your privacy), then hope the problems don’t explode until you’ve achieved market dominance. Don’t laugh - they’ve done this a lot.

What This Means for Artists/Creators the Privacy-Conscious

If you’re a creative professional, journalist, filmmaker, writer, or anyone who works with sensitive information, these AI browsers represent a particularly acute threat. Your works-in-progress, research notes, client communications, and creative processes all become potential data points for AI training or corporate intelligence gathering.

yellow and black plastic pack

Photo by Jessica Tan on Unsplash

First, all your content is theirs for the training (at least). Perplexity AI defines “Your Content” broadly to include all Input (information and materials submitted to the Perplexity Engine) and any other content you post, upload, or submit on the browser.

You might want to read this section so you can contemplate all you are granting to the company:

“You grant us a license to access, use, host, cache, store, reproduce, transmit, display, publish, distribute, and modify Your Content to operate, improve, promote and provide the Services, including to reproduce, transmit, display, publish and distribute Output based on your Input. You agree that these rights and licenses are royalty free, transferable, sub-licensable, worldwide and irrevocable (for so long as Your Content is stored with us)...” (Section 6.4)

Let’s break that down for clarity:

Irrevocable License - The license cannot be revoked for as long as the data is stored with them. This is a standard but super-aggressive clause for user-generated content. Even if you delete your account, the license remains active for any content the company retains.

Sub-licensable - This clause allows Perplexity to pass these rights to third parties with whom they have contractual relationships (e.g., underlying LLM providers). Sounds like they want to license your work like Credtent does, but they don’t believe in paying you for it like we do.

Purpose of Use - The license covers a very broad range of uses, including to “improve” and “promote” the services, which can encompass internal development and marketing efforts using your content.

Using OpenAI’s Atlas isn’t quite so bad. You own the right to the input AND the output, which Perplexity wants to keep. However, OpenAI is still going to use all your data for training, so don’t think you escaped that one.

“We may use Content to provide, maintain, develop, and improve our Services, comply with applicable law, enforce our terms and policies, and keep our Services safe.”

Now, if you use Comet to create content you thought you might post and monetize, be careful. Perplexity notes that due to the nature of generative AI, other users may receive the same or similar output as you. In other words, you might be copying the work of others and be liable.

“You acknowledge that due to the nature of generative artificial intelligence tools, other users of the Services may create and use their own Output that is similar or the same as your Output... and you agree that such other users can use their own individually created Output for their own internal business purposes.” (Section 8.1)

As with all generative AI, you have no guarantee of exclusivity or originality for the content you generate, which is a major concern if you intend to use the AI for commercial or creative work that requires unique intellectual property. This is always a headache when using AI and it’s why artists and creators should always be cautious about using these tools for their creative work.

Sign up for Credtent to protect or profit from your creative work.

For those of us in the creative rights protection space, this represents another front in the ongoing battle between individual privacy and corporate data harvesting. These browsers aren’t just collecting your data; they’re creating detailed profiles of your creative process and intellectual property. While they love the idea of collective ownership of IP, it’s not great for people who make art and content for a living.

Arbitration and No Class

Like so many modern technology contracts, the Terms of Service include a mandatory arbitration clause and a waiver of the right to participate in a class action lawsuit. After the monolithic $1.5B payout that Anthropic is paying out to authors (although this amounts to the change in Anthropic’s couch compared to the potential one TRILLION dollar possible judgment for the full penalty), none of the BigAI players want to see class action lawsuits again. Spoiler alert: They’re coming anyway but Comet seem to be trying to avoid it with this clause:

“By agreeing to these Terms, you agree (A) to resolve all disputes (with limited exception) related to the Company’s Services and/or products through binding individual arbitration, which means that you waive any right to have those disputes decided by a judge or jury, and (B) to waive your right to participate in Class Actions, Class Arbitrations, or Representative Actions...” (Section 22)

OpenAI’s user agreement also has a clause that forces disputes into private, individual arbitration, limiting your ability to participate in a collective lawsuit.

These clauses significantly limit your legal recourse in the event of a dispute with the company, forcing you into a private, individual arbitration process instead of a public court trial or a class action that would provide real pressure on BigAI to do the right thing.

NOTE: This item is onerous enough that you have the right to opt-out of this clause for both browsers, which is a critical detail for users who wish to retain their full legal rights.

If you use these browsers, OPT OUT RIGHT AWAY.

a person with curly hair

Photo by Chris McIntosh on Unsplash

Did you read that last line? I added this picture of a beautiful woman to make sure. Hey, I’m concerned about your privacy!

The Bottom Line: Proceed with Extreme Caution

AI browsers offer a tantalizing glimpse of a more efficient digital future, but that future comes with a surveillance state attached. If you absolutely must try Comet or Atlas, treat them like radioactive materials: use them in a completely isolated environment, never for anything sensitive, and assume everything you do is being recorded and analyzed.

Don’t use them for banking, work, or any account containing personal information you wouldn’t want broadcast on the evening news. Remember that the price of their convenience is unprecedented access to your digital life.

The tech industry loves to frame these privacy trade-offs as inevitable: the cost of progress, the price of innovation. But there’s nothing inevitable about surrendering your digital privacy for more convenient web browsing.

Until the fundamental security issues are resolved, these AI browsers remain fascinating experiments that you probably shouldn’t trust with anything more sensitive than your shopping list. And honestly? Even your shopping list probably reveals more about you than you’d like these companies to know.

The future of browsing may indeed be AI-powered, but that future doesn’t have to include total, always-on surveillance as a feature. Opt-out and push back on these companies as much as you can. We need to collectively demand better protection or we need to simply not use them. If we don’t, keep in mind that BigAI will take even more from us as they explore new ways to extract value from our personal information. Your data is yours. Regulation in many countries is starting to get ramped up, but it’s moving far too slowly for real protection when you use bleeding-edge technologies like these. Keep that in mind before you hire this digital assistant and give it the keys to your private life.

If you like this post, you can express that by subscribing to my SubStack or you can also support

Invest in Credtent

Thanks for supporting creative consent and fair-market licensing in the Age of AI!

This article used AI tools for research, but it was verified by a human! ;)

About the Author

E.R. Burgess

E.R. Burgess

Contributor on AI, ethics, and creator rights.

Related Articles

Careless Players

E.R. BurgessE.R. Burgess
13 min read

My reading list is fluid. Although I usually know what I intend to read next (planning is part of the joy), occasionally my local library gives me an early ”Skip the Line” short-term loan of an ebook that disrupts the li… Read More

5 Key Takeaways from AI on the Lot 2025

E.R. BurgessE.R. Burgess
14 min read

I've been a member of Artificial Intelligence Los Angeles (AI LA) for almost 10 years. This community of innovators and builders is the work of Todd Terrazas, a stand-up guy and perhaps the most connected AI person in So… Read More

AITechnologyHollywoodMusicBusinessEthicsCreator Rights