“It’s safe to say that the people who volunteered to “shape” the initiative want it dead and buried. Of the 52 responses at the time of writing, all rejected the idea and asked Mozilla to stop shoving AI features into Firefox.”
Hey all, just a reminder to keep the community rules in mind when commenting on this thread. Criticism in any direction is fine, but please maintain your civility and don’t stoop to ad-hominem etc. Thanks.
Ad-hominem
in a way that is directed against a person rather than the position they are maintaining.
in a way that relates to or is associated with a particular person.
It’s a good thing LLM models are not people because…
If it can be proven that an LLM bot account is present on the instance masquerading as a human user, I would recommend you report the account for that reason/spam so that it can be investigated and removed per instance rule 4 after evidence is found.
Since they aren’t people, I’d say it’s pointless to reply to them with ad-hominem in the first place since it means nothing to them, and therefore reporting it would be the more effective action to take in any event.
don’t stoop to ad-hominem
At this point “ad-hominem” is practically the polite name for the business model of “enshittification”.
Why not just distribute a separate build and call it “Firefox AI Edition” or something? Making this available in the base binary is a big mistake, at least doing so immediately and without testing the waters.
There is a Firefox Developer Edition, so I don’t see why not? I personally don’t care to see them waste the time on AI features.
If you don’t like AI window, don’t use AI window. I don’t see the problem.
Except if AI Window turns out to be another resident subprocess that eats 4GB of ram in the background
Is that what it’s doing?
I haven’t heard about it anywhere. But considering Firefox’s history with memory handling, and considering the feature in question, I’m calling it: I’m reasonably sure that’s how it’s going to turn out.
I don’t see any technical background about it. Just a marketing spiel
Opt-in
??
What does being opt-in or opt-out have to do with resource consumption? It will still consume resources even if you’re not using it. It might even reserve memory while inactive for faster startup by default.
Even if you completely disable it, just having it built into the default binary already consumes resources.
So much whining… Please get to know the things you attack, before you embarrass yourself.
Do you know what a “build in AI” is? It’s just some algorithms that makes your day easier. And no one steals your data… sigh
It’s a local collection of algorithms, and you are quite safe. You probably already use it in so many ways you can’t count them - it’s just not called “AI” which I guess is the trigger word for you…
So much wtf. Please name one way I use AI without knowing it, and ways that enrage me don’t count.
Well, then give me a report of everything you use IT for… Your phone, apps, tablets, computers, smart TVs and so forth?
Um, nah… ? I actively avoid half those things and yet use computers constantly. The burden would be on you. So perhaps you can actually give examples that would be true of a lot of people. I feel like you think lemmy users are like most people and just along with what’s popular, but that’s not the case, particularly with computing concerns.
Damn you are not really well, are you? You want me to tell you exactly where you use build in “AI” that helps you make decisions, but you won’t tell me what you use?!? I don’t have to prove anything. My claim stands. I can see that you are using some kind of computer, and the internet. You are using algorithms that act the way AI does… But I guess you are too blind to see… That’s on you. Keep living in your little illusion of a world. 🙃
Damn you are not really well, are you?
Says the person who wants a full inventory of my computing hardware. What a fucking bizarre human.
Well, you wanted me to tell you what AI-like algorithms you were using - and you expect me to do that from no information? You really are not well…
Using a search engine would be one.
Nope. The “AI summary” is something I disable on Google and I try to actively avoid it because it enrages me. It’s total dogshit.
The algorithms to determine which pages are most relevant to your query are traditionally seen as AI too. They’re just not labelled as such (which was the point), and predate the current AI hype.
I hadn’t even thought about the “AI modes” that search engines are incorporating nowadays though, so I get the confusion.
That’s a fair point. But search algorithms fall under the “recommender system” umbrella, which are a very different family from the agentic AI we’re discussing here.
Both are AIs, but with very different use cases. In the same way, you could technically classify both your gaming PC and phone as computers, and you’d be correct.
Edit: links
It was my impression that the thread starter wasn’t just talking about agentic AI. And I think a lot of the “anti AI folks” here are also angry about recent non-agentic AI additions that Mozilla added, such as e.g. tab recommendations for tab groups.
The post is about the new AI Window feature, and here’s its description from Mozilla’s own announcement:
Now, we’re excited to invite you to help shape the work on our next innovation: an AI Window. It’s a new, intelligent and user-controlled space we’re building in Firefox that lets you chat with an AI assistant and get help while you browse, all on your terms. Completely opt-in, you have full control, and if you try it and find it’s not for you, you can choose to switch it off.
So, definitely agentic.
And I think a lot of the “anti AI folks” here are also angry about recent non-agentic AI additions that Mozilla added, such as e.g. tab recommendations for tab groups.
I’ll speak honestly: I’m not happy with that either. But that’s incidental to this topic. The agentic component is my biggest concern.
I exclusively use https://noai.duckduckgo.com/
So no.
Shut up, clankerfucker
So much whining… Please get to know the things you attack, before you embarrass yourself.
I’m a senior full-stack engineer at a recruitment startup. I’m the one responsible for the AI side of things. We do document analysis of résumés and all kinds of identification documents and photos. We also run various assistant chatbots that interact with users and accept file uploads, do job interviews and much more. I think I know enough not to embarrass myself.
Do you know what a “build in AI” is? It’s just some algorithms that makes your day easier.
No. A “built-in AI” is not a collection of “magic algorithms” that suddenly make life better. Working with AI has given me some of the worst hours of my life. Writing increasingly convoluted prompts to make it follow explicitly stated instructions for the N-th time, so it doesn’t misuse tools and outputs the correct structured data, is not magic. It’s an exercise in futility most of the time.
And no one steals your data… sigh
It’s a local collection of algorithms, and you are quite safe.
So… You’re introducing an autonomous agent between you and the web you’re browsing. With the sole purpose of making decisions and taking actions on your behalf.
Aside from all the privacy concerns and assuming that the models actually run locally; all code can have bugs, and worse: this “magic tool” is an amorphous collection of floating point numbers and you can’t debug a neural net’s weights. And this thing is already proven to HALLUCINATE.
If this thing has permission to freely interact with web pages, then sorry, your data will never be safe.
You probably already use it in so many ways you can’t count them - it’s just not called “AI” which I guess is the trigger word for you…
I think you’re the one confusing the terminology.
Oh, you work for Mozilla Firefox, do you?
Being childish and trying to turn what I say into “magic algorithms”
GZ with your made up title, which must be made up, given the crazy ramblings you are making.
The “AI” that Firefox is implementing is not “autonomous” - do look up the definitions, please! It is not sentient, it can’t hallucinate, since it is not sentient. What the big tech are making (their LLM) can’t hallucinate either - they just do something that looks like it, and we put silly terms on it, like hallucinating. You are NOT getting a neural network on your own computer when you install Firefox - now you are just being daft!
You are confused, paranoid and really prove, that you should get to know what you are talking about. sigh
Are you purposefully trolling or just that level of ignorant? I’m done responding here regardless, just curious.
I’ll take that as a no, you are not working for Mozilla. You are being childish by calling it “magic algorithms”. You don’t even know what kind of “AI” FF are talking about…
I would run too, if I were caught in the lies you have spun here.
Well if they do I’ll just switch to whatever browser that doesn’t.
I think Mozilla’s base is privacy-focused individuals, a lot of them appreciating Firefox’s open-source nature and the privacy-hardened Firefox forks. From a PR perspective, Firefox would gain users by adamantly going against AI tech.
Maybe their thought process is they’ll gain more users by adopting AI while knowing they’re still the most privacy focused of the major browsers. Where have I seen this mentality before?
Spoiler
The American Democratic party often believes it can get more votes by shifting conservative, believing the more progressive voters will still pick them because they’re still more progressive than not.
Everyone’s entitled to their opinion, but how can you be aware of this fact
I don’t know whether the negative reactions reflect the majority of Firefox users or are just a noisy minority. Mozilla, after all, likely has a clearer view of the whole user base.
and then still assume that nobody wants something based on a non-representative sample of 52 comments?
That bit was odd for me too. I think the author means “nobody” based on the online reactions and discussions about Firefox, and you shouldn’t take the word too literally. So I can understand the view at least, because that is what I am getting too. The problem is that mostly negative opinions are discussed, and mostly from tech people, which I personally assume most Firefox users might be.
The more AI is being pushed into my face, the more it pisses me off.
Mozilla could have made an extension and promoted it on their extension store. Rather than adding cruft to their browser and turning it on by default.
The list of things to turn off to get a pleasant experience in Firefox is getting longer by the day. Not as bad as Chrome, but still.
Oh, this triggers me. There have been multiple good suggestions for Firefox in the past that were closed as no-fix because “this can be provided by the community as an add-on”. Yet they shove the crappiest crap into the main browser now.
Rather than adding cruft to their browser and turning it on by default.
The second paragraph of the article:
The post stresses the feature will be opt-in and that the user “is in control.”
That being said, I agree with you that they should have made it an extension if they really wanted to make sure the user “is in control.”
I am actually curious whether the Gentoo community sees a noticeable increase in update/install times with all these new AI features in everything.
I noticed that too! I will never use it.
Hear me out.
This could actually be cool:
- If I could, say, mash in “get rid of the junk in this page” or “turn the page this color” or “navigate this form for me”
- If it could block SEO and AI slop from search/pages, including images.
- If I can pick my own API (including local) and sampling parameters
- If it doesn’t preload any model in RAM.
…That’d be neat.
What I don’t want is a chatbot or summarizer or deep researcher because there are 7000 bajillion of those, and there is literally no advantage to FF baking it in like every other service on the planet.
And… Honestly, PCs are not ready for local LLMs. Not even the most hyper optimized trellis quantization of Qwen3 30B is ‘good enough’ to be reliable for the average person, and it still takes too much CPU RAM. Much less the suboptimal version Mozilla would ship.
That would be awesome. Like a Greasemonkey/advanced uBlock for those of us who don’t know how to code. So many times I wanted to customise a website but I don’t know how, or it’s not worth the effort.
But only if it was local, and especially on mobile, where I need it the most, that will be impossible for years…
I mean, you can run small models on mobile now, but they’re mostly good as a cog in an automation pipeline, not at (say) interpreting english instructions on how to alter a webpage.
…Honestly, open-weight model APIs for one-off calls like this are not a bad stopgap. It costs basically nothing, you can use any provider you want, it’s power-efficient, and if you’re on the web, you have internet.
You mean to use online LLM?
No. That’s what I don’t want. If it was a company I trusted I would, but good luck with that. Mozilla is not that company anymore, even if they had the resources to host their own.
But locally, or on a server I trust? That would be awesome. AI is awesome, but not the people who run it.
If I can pick my own API (including local) and sampling parameters
You can do this now:
- selfhost ollama.
- selfhost open-webui and point it to ollama
- enable local models in about:config
- select “local” instead of ChatGPT or w/e.
Hardest part is hosting open-webui because AFAIK it only ships as a docker image.
Edit: s/openai/open-webui
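A minimal sketch of the steps above, assuming ollama is installed natively and Docker is available for Open WebUI. The container flags are the commonly documented ones, and `browser.ml.chat.hideLocalhost` is my recollection of the relevant about:config switch - verify both against current docs before relying on them:

```shell
# 1. Start ollama and pull a model (assumes ollama is installed)
ollama serve &
ollama pull llama3.2

# 2. Run Open WebUI from its Docker image, pointed at the host's ollama
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# 3. In Firefox's about:config, allow localhost chat providers
#    (pref name as I understand it; check your Firefox version):
#      browser.ml.chat.hideLocalhost = false
#    then pick "local" instead of a hosted provider in the AI sidebar.
```

This is a setup fragment, not something you can run blind: step 2 only works if Docker is present, and step 3 depends on which Firefox release you’re on.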
Open WebUI isn’t very ‘open’ and kinda problematic last I saw. Same with ollama; you should absolutely avoid either.
…And actually, why is Open WebUI even needed? For an embeddings model or something? All the browser should need is an OpenAI-compatible endpoint.
The firefox AI sidebar embeds an external open-webui. It doesn’t roll its own ui for chat. Everything with AI is done in the quickest laziest way.
What exactly isn’t very open about open-webui or ollama? Are there some binary blobs or weird copyright licensing? What alternatives are you suggesting?
https://old.reddit.com/r/opensource/comments/1kfhkal/open_webui_is_no_longer_open_source/
https://old.reddit.com/r/LocalLLaMA/comments/1mncrqp/ollama/
Basically, they’re both using their popularity to push proprietary bits, which their development is shifting to. They’re enshittifying.
In addition, ollama is just a demanding leech on llama.cpp that contributes nothing back, while hiding the connection to the underlying library at every opportunity. They do scummy things like:
- Rename models for SEO, like “DeepSeek R1” which is really the 7B distill.
- It has really bad default settings (like a 2K default context limit, and default imatrix-free quants) which give local LLM runners bad impressions of the whole ecosystem.
- They mess with chat templates, and on top of that, create other bugs that don’t exist in base llama.cpp.
- Sometimes, they lag behind on GGUF support.
- And other times, they make their own sloppy implementations for ‘day 1’ support of trending models. These often work poorly; the support’s just there for SEO. But this also leads to some public GGUFs not working with the underlying llama.cpp library, or working inexplicably badly, polluting llama.cpp’s issue tracker.
I could go on and on with examples of their drama, but needless to say most everyone in localllama hates them. The base llama.cpp maintainers hate them, and they’re nice devs.
You should use llama.cpp’s llama-server as an API endpoint. Or, alternatively, the ik_llama.cpp fork, kobold.cpp, or croco.cpp. Or TabbyAPI as an ‘alternate’ GPU-focused quantized runtime. Or SGLang if you just batch small models. llama-cpp-python, LM Studio; literally anything but ollama.
As for the UI, that’s a muddier answer and totally depends on what you use LLMs for. I use mikupad for its ‘raw’ notebook mode and logit displays, but there are many options. Llama.cpp has a pretty nice built-in one now.
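For comparison, a llama.cpp-only setup needs no separate UI container at all: llama-server exposes an OpenAI-compatible API directly. The model path below is a placeholder, and this assumes a recent llama.cpp build:

```shell
# Serve a local GGUF model over an OpenAI-compatible API
# (model path is a placeholder; use your own GGUF file)
llama-server -m ./models/some-model.gguf -c 8192 --port 8080

# Any OpenAI-style client can then talk to it, e.g.:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

This is a setup fragment rather than a turnkey script; the curl call only succeeds once the server has finished loading the model.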
Honestly, PCs are not ready for local LLMs
The auto-translation LLM runs locally and works fine. Not quite as good as DeepL, but perfectly competent. That’s the one “AI” feature which is largely uncontroversial, because it’s actually useful, unobtrusive, and privacy-enhancing.
Local LLMs (and related transformer-based models) can work, they just need a narrow focus. Unfortunately they’re not getting much love because cloud chatbots can generate a lot of incoherent bullshit really quickly and that’s a party trick that’s got all the CEOs creaming their pants at the ungrounded fantasy of being just another trillion dollars away from AGI.
Yeah that’s really awesome.
…But it’s also something the anti-AI crowd, which is a large part of FF’s userbase, would hate once they realize it’s an “LLM” doing the translation. The well has been poisoned by said CEOs.
I don’t think that’s really fair. There are cranky contrarians everywhere, but in my experience that feature has been well received even in the AI-skeptic tech circles that are well educated on the matter.
Besides, the technical “concerns” are only the tip of the iceberg. The reality is that people complaining about AI often fall back to those concerns because they can’t articulate how most AI fucking sucks to use. It’s an eldritch version of Clippy. It’s inhuman and creepy in an uncanny-valley kind of way; half the time it doesn’t even fucking work right, and even when it does, it’s less efficient than having a competent person (usually me) do the work.
Auto translation or live transcription tools are narrowly-focused tools that just work, don’t get in the way, and don’t try to get me to talk to them like they are a person. Who cares whether it’s an LLM. What matters is that it’s a completely different vibe. It’s useful, out of my way when I don’t need it, and isn’t pretending to have a first name. That’s what I want from my computer. And I haven’t seen significant backlash to that sentiment even in very left-wing tech circles.
You know what would be really cool? If I could just ask AI to turn off the AI in my browser. Now that would be cool.
I can’t fucking believe I’m agreeing with a .world chud.
Well fortunately for you, I don’t know what that means.
Your server has not a monopoly on, but a majority of, the worst shitlibs and other chuds. To the point that I’m genuinely surprised by agreeing with someone there, and am worried that when I examine it closely you’ll be agreeing with me for some unthinkably horrible reason.
The problem is I fundamentally do not understand how Lemmy works, so I just picked what seemed obvious. Like, why wouldn’t I want the world?
Also I thought from just reading sub-Lemmies? that .ml was the crap hole.
Also, I looked up Chud and that’s really mean.
You’re on the shitlib chud server; shit happens.
I considered using AI to summarize news articles that don’t seem worth the time to read in full (the attention industrial complex is really complicating my existence). But I turned it off and couldn’t find the button to turn it back on.
AI is very much not good at summarising news accurately.
https://pivot-to-ai.com/2025/11/05/ai-gets-45-of-news-wrong-but-readers-still-trust-it/
You have to be REALLY careful when asking an LLM to summarize news, otherwise it will hallucinate whatever it believes sounds logical and correct. You have to point it directly to the article, ensure that it reads it, and then summarize. And honestly, at that point… you might as well read it yourself.
And this goes beyond just summarizing articles: you NEED to provide an LLM a source for just about everything now. Even if you tell it to research the solution to a problem online, many times now it’ll search for non-relevant links and use them for its solution because, again, to the LLM that makes the most sense, when in reality it has nothing to do with your problem.
At this point it’s an absolute waste of time using any LLM, because within the last few months all models have noticeably gotten worse. Claude.ai is an absolute waste of time, as 8 times out of 10 all solutions are hallucinations, and recently GPT-5 has started “fluffing” solutions with non-relevant information, or it info-dumps things that have nothing to do with your original prompt.
If you need to summarize the news, which is already a summary of an event containing the important points and nothing else, then AI is the wrong tool. A better journalist is what you actually need. The whole point of good journalism is that it already did that work for you.
That should be the point, but there is barely any good journalism left.
How does AI mis-summarizing the (allegedly bad) journalism improve it?
It SLAMS away all the fluff.
I have a real journalist, but this is more on the “did you know this was important” side. Like how it’s fine to rinse your mouth out after brushing your teeth, but if your water isn’t fluoridated then you probably shouldn’t (which I got from skimming the article for the actionable information).
The post stresses the feature will be opt-in and that the user “is in control.”
Nothingburger
I’ve actually flipped on this position - but before you pull out your pitchforks and torches, please listen to what I have to say.
Do we want mass surveillance through SaaS? No. Do we want mass breach of copyright just because it’s a small rights holder and not some giant publisher, i.e. a “rules for thee” type vibe? Hell no. But do we throw the baby out with the bathwater? Also: heck no. But let me underline a few facts.
- AI currently requires power-greedy chips that also don’t utilize memory effectively enough
- Because of this, it’s relegated to massive, globe-heating infrastructure
- SaaS will always, always track you and harvest your data
- Said data will be used in marketing and psy-ops to manipulate you, your children and your community
- The more they track, the better their models become, which they’ll keep under lock and key
- More and more devices are coming with NPUs and TPUs on-chip
- That is, the hardware has not caught up to the software yet
See where I’m going with this?
Add to that the fact that people like their chatbots and can even learn to use them responsibly, but as long as they’re feeding the corpos, it’ll be used against them. Not only that, but in true Silicon Valley fashion, it’ll be monopolized.
The libre movement exists to bring power back to the user by fighting these conditions. It’s also a very good idea to standardize things so that it’s not hidden behind a proprietary API or service.
That’s why, if Mozilla seeks to standardize locally run AI models by way of the browser, then that’s a good thing! Again: not if they’re feeding some SaaS.
But if their goal and their implementation is to bring models to the general consumer so that they can seize the means of computing, then that’s a good thing!
Again, if you’d rather just kick up dust and bemoan the idiocy and narcissistic nature of Silicon Valley, then you’ve already given them what they want: that they, and they alone, get to be the sole proprietors of standardized AI. That’s like handing the average user over to a historically predatory ilk who’d rather build an autocracy than actually innovate.
Mozilla can be the hero we need. They can actually focus on consumer hardware, to give people what they want WITHOUT mass tracking and data harvesting.
That is, if they want to. I’m not saying they’re not going to bend over, but they need the right kind of pushback. They need to be told “local AI only - no SaaS”, and then they can focus on creating web standards for local AI, effectively becoming the David to Silicon Valley’s Goliath.
I know this is an unpopular opinion, and I know the Silicon Valley barons are a bunch of sociopaths with way too much money, but we can’t give them a monopoly over this. That would be bad!! We need to give the power to the user, and that means standardization!
Take it from an old curmudgeon. I’ve shaken my fist at the cloud, I’ve read a ton of EULAs and I’ve opposed many predatory practices. But we need to understand that the user wants what the user wants. We can’t stick our heads in the sand and just repeat “AI bad” ad nauseam. We need to mobilize against the central giants.
We need a local AI movement, and Mozilla could be at the forefront of it, if it weren’t for the pushback and outright cynicism people generally (and justifiably) have - but we can’t let these cretinous bastards hold all the AI cards.
We need libre AI, and we need it now!
Thank you for your consideration.
I agree that: SaaS = UaaP (User as a Product). Most importantly, AI is powerful and here to stay and if it’s completely controlled by the rich and powerful, then the rest of us are majorly screwed.
Small models, local models, models that anybody can deploy and control the way they see fit, PUBLIC models not controlled by the rich and powerful - these will be crucial if we’re going to avoid the worst case situation.
IMHO it’s better to start downloading and playing with local quantized LLMs (I barely know what I’m talking about here, I admit, but bear with me - I’m just trying to add something useful to the discussion). It’s better to start taking hold of the tech and tinkering, like we did with cars when they were new, and planes, and computers, and the internet… so that hopefully there will be alternatives to the privately controlled rich-and-powerful corpo models.
I am not really liking AI. Sure, it’s good for some things, but in the last 2 weeks I’ve seen some very negative and destructive outcomes from AI. I am so tired of everything being AI. It can have good potential, but what are the risks to the user experience?
What things is it good for?
Genuinely good for?
Basically everything it’s used for that isn’t being shoved in your face 24/7.
- speech to text
- image recognition
- image to text (includes OCR)
- language translation
- text to speech
- protein folding
- lots of other bio/chem problems
Lots of these existed before the AI hype, to the point that they’re taken for granted, but they are as much AI as an LLM or image generator. All the consumer-level AI services range from annoying to dangerous.
Is it actually both good and efficient for that crap, though? Or is it just capable of doing it?
Is it efficient at simulating protein folding, or does it constantly hallucinate impossible bullshit that has to be filtered out, burning a mountain and a lake for what a supercomputer circa 2010 would have just crunched through?
Does the speech to text actually work efficiently? On a variety of accents and voices? Compared to the same resources without the bullshit machine?
I feel like I need to ask all these questions because there are so many cultists out there contriving places to put this shit. I’m not opposed to a huge chunky ‘nuclear option’ for computing existing; I just think we need to actually think before we burn the kinds of resources this shit takes on something my phone could have done in 2017.
All of the AI uses I’ve listed have been around for almost a decade or more and are the only computational solutions to those problems. If you’ve ever used speech-to-text that wasn’t a Speak & Spell, you were using a very basic AI model. If you ever scanned a document and had the text recognized, that was an AI model.
The catch here is I’m not talking about ChatGPT or anything trying to be very “general”. These are all highly specialized AI models that serve a very specific function.