

The few times I’ve used LLMs for coding help, usually because I’m curious whether they’ve gotten better, they let me down. Last time, it insisted that its solution would work as expected. When I gave it an example that wouldn’t work, it even broke down each step of the function, giving me the value of each variable at every step to demonstrate that it worked… but at the step where it had fucked up, it swapped the value in the variable for one that would make the final answer correct. It made me wonder how much water and energy it cost me to be gaslit into a bad solution.
How do people vibe code with this shit?





I’m not at all a fan of being recorded in public but all of your examples…
These are situations in which the camera in the glasses is technically being accessed, which in software terms means something is analyzing the feed from the camera. If it is generating any output anywhere, even just visually for the user, it is recording in my mind. It may not be storing video, but it might face-match and store a list of every recognized face it saw on the subway. There is no way for the OS to reasonably know what the feed is being used for unless it has exclusive control over the camera feed… and I sure as fuck am not going to trust the smart glasses manufacturer to be honest about what it is doing with the camera feed…
So basically, if the camera is in use at all, an indicator light should be on.