• 0 Posts
  • 32 Comments
Joined 1 month ago
Cake day: September 14th, 2025

  • The tech could represent the end of visual fact — the idea that video could serve as an objective record of reality — as we know it.

    We already declared that with the advent of photoshop.

    I think that this is “video” as in “moving images”. Photoshop isn’t a fantastic tool for fabricating video (though, given enough time and expense, I suppose that it’d be theoretically possible to do it, frame-by-frame). In the past, the limitations of software made it much harder — not impossible, as Hollywood creates imaginary worlds, but much harder, more expensive, and demanding more expertise — to falsify a video of someone than a single still image of them.

    I don’t think that this is the “end of truth”. There was a world before photography and audio recordings. We had ways of dealing with that. Like, we’d have reputable organizations whose role it was to send someone to various events to attest to them, and place their reputation at stake. We can, if need be, return to that.

    And it may very well be that we can create new forms of recording that are more-difficult to falsify. A while back, to help deal with widespread printing technology making counterfeiting easier, we rolled out holographic images, for example.

    I can imagine an Internet-connected camera — as on a cell phone — that sends a hash of the image to a trusted server and obtains a timestamped, cryptographic signature. That doesn’t stop before-the-fact forgeries, but it does deal with things that are fabricated after-the-fact, stuff like this:

    https://en.wikipedia.org/wiki/Tourist_guy
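
    The camera-attestation idea is easy to sketch. Here’s a toy version in Python — the key, names, and use of a shared HMAC secret are all my own stand-ins; a real service would sign with a server-held asymmetric keypair rather than a shared secret:

    ```python
    import hashlib
    import hmac

    # Hypothetical server-side signing key. A real service would use an
    # asymmetric keypair (e.g. Ed25519) so anyone can verify signatures
    # with the public key, without holding the signing key.
    SERVER_KEY = b"demo-secret"

    def timestamp_image(image_bytes: bytes) -> dict:
        """Client hashes the image; server signs (hash, timestamp)."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        issued_at = 1700000000  # fixed timestamp for a reproducible demo
        signature = hmac.new(SERVER_KEY, f"{digest}|{issued_at}".encode(),
                             hashlib.sha256).hexdigest()
        return {"sha256": digest, "timestamp": issued_at, "sig": signature}

    def verify(image_bytes: bytes, record: dict) -> bool:
        """Check that this exact image was seen at the recorded time."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        expected = hmac.new(SERVER_KEY,
                            f"{digest}|{record['timestamp']}".encode(),
                            hashlib.sha256).hexdigest()
        return digest == record["sha256"] and \
            hmac.compare_digest(expected, record["sig"])
    ```

    Any after-the-fact edit changes the hash, so the old signature no longer verifies — which is exactly the “fabricated after-the-fact” case; it does nothing about images staged before submission.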


  • That depends on how you define the web

    Wikipedia:

    https://en.wikipedia.org/wiki/Gopher_(protocol)

    The Gopher protocol (/ˈɡoʊfər/) is a communication protocol designed for distributing, searching, and retrieving documents in Internet Protocol networks. The design of the Gopher protocol and user interface is menu-driven, and presented an alternative to the World Wide Web in its early stages, but ultimately fell into disfavor, yielding to Hypertext Transfer Protocol (HTTP). The Gopher ecosystem is often regarded as the effective predecessor of the World Wide Web.[1]

    gopher.floodgap.com is one of the last running Gopher servers, and was the one that I usually used as a starting point when firing up a gopher client. It has a Web gateway up:

    https://gopher.floodgap.com/gopher/

    Gopher is a well-known information access protocol that predates the World Wide Web, developed at the University of Minnesota during the early 1990s. What is Gopher? (Gopher-hosted, via the Public Proxy)

    This proxy is for Gopher resources only – using it to access websites won’t work and is logged!



  • How many of you out there are browsing the web using Gofer?

    Gopher predated the Web.

    I do agree that there have been pretty major changes in the way websites worked, though. I’m not hand-coding pages using a very light, Markdown-like syntax with <em></em>, <a href=""></a>, and <h1></h1> anymore, for example.





  • I don’t see why it would need to be affected.

    The constraint to require a valid signature isn’t something imposed by the license on the Android code. If you want to distribute a version of Android that doesn’t check for a registered signature, that should work fine.

    I mean, the Graphene guys could impose that constraint. But they don’t have to do so.

    I think that there’s a larger issue of practicality, though. Stuff like F-Droid works in part because you don’t need to install an alternative firmware on your phone — it’s not hard to install an alternate app store with the stock firmware. If suddenly using a package from a developer that isn’t registered with Google requires installing an alternate firmware, that’s going to severely limit the potential userbase for that package.

    Even if you can handle installing the alternate firmware, a lot of developers probably just aren’t going to bother trying to develop software without being registered.







  • You can get inkjet printers that don’t have restrictions on the ink. They cost more, though.

    The reason printer manufacturers are so hell-bent on being a pain in the ass with the ink is that they’re using a razor-and-blades model. They’re selling you the printer at a lower price than they would if the price reflected their costs, with the expectation that they’ll make the money back when you buy ink at a higher price than you otherwise would, because people pay more attention to the initial price of the printer than to the consumable costs.

    Same way you can get unlocked cell phones instead of network-locked cell phones with a plan. Gaming PCs instead of consoles. It’s not that they’re unavailable, but you’re gonna have to accept a higher up-front cost, because you’re not getting a subsidy from the manufacturer.

    Canon sells a line of inkjet printers that just take ink from a bottle. No hassles with restrictions on ink supply there. The ink is cheap, and even-cheaper third-party options are readily available…but you’re going to pay full price for the printer.

    https://www.usa.canon.com/shop/printers/megatank-printers

    Their lowest-end “MegaTank” printer is $230:

    https://www.usa.canon.com/shop/p/megatank-pixma-g3290

    A pack of third-party ink refill bottles is $15, and will print (using Canon’s metrics), about 7,700 color pages and 9,000 black-and-white pages:

    https://www.amazon.com/Refill-Compatible-Bottles-MegaTank-4-Pack/dp/B0DSPSS5W7

    Compatible GI-21 Black Ink Bottle Up to 9,000 pages, GI-21 Cyan/Magenta/Yellow Ink Bottles Up to 7,700 pages

    On the other hand, Canon’s lowest-end “cartridge” printer, where they use the razor-and-blades model, is $55.

    https://www.usa.canon.com/shop/p/pixma-ts3720-wireless-home-all-in-one-printer

    But you rapidly pay for it with the ink: it looks like they presently sell a set of replacement cartridges for $91, and that set will print a tiny fraction of the number of pages that the above ink bottles will.

    page yield of 400 Black / 400 Color pages per ink cartridge set and cost of $90.99 for a value pack of PG-285(XL) and CL-286(XL) ink cartridges (using Canon Online Store prices as of June 2025).
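
    Plugging the quoted figures into a quick back-of-the-envelope comparison (numbers are the ones above; the break-even point is my own rough estimate):

    ```python
    # Figures quoted above (approximate):
    #   MegaTank:  $230 printer, $15 third-party bottle set, ~7,700 color pages
    #   Cartridge: $55 printer,  $91 cartridge set,          ~400 pages
    megatank_cpp = 15 / 7700    # ~$0.002 per page
    cartridge_cpp = 91 / 400    # ~$0.23 per page

    def total_cost(printer_price, cost_per_page, pages):
        """Total cost of ownership after printing `pages` pages."""
        return printer_price + cost_per_page * pages

    # Break-even works out to roughly 800 pages; past that, the MegaTank
    # is cheaper overall despite the higher up-front price.
    breakeven = (230 - 55) / (cartridge_cpp - megatank_cpp)
    ```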

    So if you really do want to do photo prints with an inkjet without dealing with all the DRM-on-ink stuff, you can do it today. But…you’re going to pay more for the printer.

    All that being said, I do think that lasers are awfully nice in that you don’t need to deal with nozzles clogging. You can leave a laser printer for years and it’ll just work when you start it up. If you don’t need photo output, just less hassle.



  • Sacks, the Trump administration’s AI czar and co-host of the conference, stopped Musk mid-answer. “Well, Elon, by the way, could you just publish that?” he asked. “Wikipedia is so biased, it’s a constant war.” He suggested that Musk create what he called “Grokipedia.”

    This past week, as the Wikipedia controversy reignited, Musk announced xAI would, in fact, offer up Grokipedia. Soon after, the Wikipedia page for Musk’s Grok was updated. The entry included a brief comparison to an effort almost 20 years earlier to create another Wikipedia alternative called Conservapedia.

    Yeah, my initial take is “Conservapedia was pretty much a disaster, and there’s a reason that people don’t use it”.

    Like, go to Conservapedia’s “evolution” article.

    https://www.conservapedia.com/Evolution

    Like, you’re going to have to create an entire alternate reality for people who have weird views on X, Y, or Z. And making it worse, there isn’t overlap among all those groups. Like, maybe you’re a young earth creationist, and you like that evolution article. But then maybe you don’t buy into chemtrails. It looks like Conservapedia doesn’t like chemtrails. So that’s gonna piss off the chemtrail people.

    There are lots of people on the right who are going to disagree with scientific consensus on something, but they don’t all have the same set of views. They might all complain that Wikipedia doesn’t fit with their views on particular point X, but that doesn’t mean that they’re all going to happily accept the fringe views of some other group. And some views are just going to outright contradict each other. You could have a conservative Mormon, a conservative Amish person, and a conservative Catholic, but they’re going to have some seriously clashing views on religion. In broader society, the way we normally deal with that is to just let people make up their own minds on particular issues. But if you’re trying to create a single “alternate reality” that all of them subscribe to, then you have to get them all on one page, which is going to be a real problem.

    Maybe Musk could make Grok try to assess which fringe group that someone is in and automatically provide a version of truth in Grok’s responses tailored to their preferences. But…that’s not a Grokipedia, because the latter requires a unified view.


  • I don’t know if there’s a term for them, but Bacula (and I think AMANDA might fall into this camp, but I haven’t looked at it in ages) is oriented more towards…“institutional” backup: there’s a dedicated backup server, maybe dedicated offline media like tapes, the backup server needs to drive the backup, etc.

    There are some things that rsnapshot, rdiff-backup, duplicity, and so forth won’t do.

    • At least some of them (rdiff-backup, for one) won’t dedup files with different names. If a file is unchanged, it won’t use extra storage, but it won’t identify identical files at different locations. This usually isn’t all that important for a single host, other than maybe if you rename files, but if you’re backing up many different hosts, as in an institutional setting, they likely have files in common. They aren’t intended to back up multiple hosts to a single, shared repository.

    • Pull-only. I think that it might be possible to run some of the above three in “pull” mode, where the backup server connects and gets the backup, but where they don’t have the ability to write to the backup server. This may be desirable if you’re concerned about a host being compromised, but not the backup server, since it means that an attacker can’t go dick with your backups. Think of those cybercriminals who encrypt data at a company and wipe other copies and then demand a ransom for an unlock key. But the “institutional” backup systems are going to be aimed at having the backup server drive all this, and have the backup server have access to log into the individual hosts and pull the backups over.

    • Dedup for non-identical files. Note that restic can do this. While files might not be identical, they might share some common elements, and one might want to try to take advantage of that in backup storage.

    • rdiff-backup and rsnapshot don’t do encryption (though duplicity does). If one intends to use storage not under one’s physical control (e.g. “cloud backup”), this might be a concern.

    • No “full” backups. Some backup programs follow a scheme where one periodically does a backup that stores a full copy of the data, and then stores “incremental” backups from the last full backup. rsnapshot, rdiff-backup, and duplicity are all always-incremental, and are aimed at storing their backups on a single destination filesystem. A split between “full” and “incremental” backups is probably something you want if you’re using, say, tape storage with backups that span multiple tapes, since it controls how many pieces of media you have to dig up to perform a restore.

    • I don’t know how Bacula or AMANDA handle it, if at all, but if you have a DBMS like PostgreSQL or MySQL or the like, it may be constantly receiving writes. This means that you can’t get an atomic snapshot of the database, which is critical if you want to be reliably backing up the storage. I don’t know what the convention is here, but I’d guess either using filesystem-level atomic snapshot support (e.g. btrfs) or requiring the backup system to be aware of the DBMS and instructing it to suspend modification while it does the backup. rsnapshot, rdiff-backup, and duplicity aren’t going to do anything like that.

    I’d agree that using the more-heavyweight, “institutional” backup programs can make sense for some use cases, like if you’re backing up many workstations or something.
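
    The chunk-level dedup that restic-style tools do can be sketched in a few lines — this is a toy with fixed-size chunks and an in-memory store (real tools use content-defined chunking, chunks closer to 1 MiB, and an on-disk repository; all names here are mine):

    ```python
    import hashlib

    CHUNK = 4  # tiny chunk size for the demo

    store: dict[str, bytes] = {}  # chunk hash -> chunk data

    def backup(data: bytes) -> list[str]:
        """Split into chunks, store each distinct chunk once, return hash refs."""
        refs = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            h = hashlib.sha256(chunk).hexdigest()
            store.setdefault(h, chunk)  # identical chunks stored only once
            refs.append(h)
        return refs

    def restore(refs: list[str]) -> bytes:
        """Reassemble a file from its chunk references."""
        return b"".join(store[h] for h in refs)
    ```

    Two mostly-identical files share their common chunks, so the second backup only adds the chunks that actually differ.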


  • Because every “file” in the snapshot is either a file or a hard link to an identical version of that file in another snapshot. So this can be a problem if you store many snapshots of many files.

    I think that you may be thinking of rsnapshot rather than rdiff-backup — rsnapshot is the one with that hard-link behavior; both use rsync.

    But I’m not sure why you’d be concerned about this behavior.

    Are you worried about inode exhaustion on the destination filesystem?


  • looks

    For Linux, my off-the-cuff take is that I’m not that excited about it. It means that if you can launch a Unity game and pass it command-line arguments, then you can cause it to take actions that you want. Okay, but usually the security context of someone who can do that and the game that’s running should probably be the same. If you can launch a game with specified parameters to do something bad, you can probably also just do something bad and cut the game out of the picture.

    This is why you have few suid binaries on a Linux system (and should never make something large and complex, like a Unity game, suid) — because then the binary does have a different security context than the launching process.

    Now, it’s possible that there are scenarios where you could make this badly exploitable. Say games have chosen to trust command-line arguments from a remote system, and that game has community servers. Like, maybe they have a lobby app that launches a Unity binary with remotely-specified command line arguments. But in that case, I think that the developer is already asking for trouble.

    Most games are just not going to be sufficiently hardened to avoid problems if an attacker can pass arbitrary command lines anyway. And as the bug points out, on Linux, you can achieve something similar to this for many binaries via using LD_PRELOAD anyway — you can use that route to make fixes for closed-source Linux games. Windows has similar routes, stuff like DLL injection.

    It’s possible that this is more serious on Android. I don’t know if there’s a way to pass command line parameters there, and doubt it, but part of the Android security model is that apps run in isolation, and so if that’s exploitable by any local app, that could cause that model to break down.

    But on Linux — GNU/Linux — I’d think that if someone malicious can already launch games with arbitrary command line parameters on your system, you’re probably not really in much worse trouble due to this bug than you already are.


  • slow

    rsync is pretty fast, frankly. If you pass -a or -t, it’ll synchronize mtimes, and once it has run once, the quick check kicks in: if a file’s modification time and size match, by default rsync won’t look at that file further, so subsequent runs are pretty fast. You can’t really beat that for speed unless you have some sort of monitoring system in place (like, filesystem-level support for identifying modifications).
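
    That quick check is simple enough to sketch — roughly what rsync does by default (i.e. without --checksum), in Python:

    ```python
    import os

    def needs_transfer(src: str, dst: str) -> bool:
        """rsync-style quick check: skip when size and mtime both match.

        Mirrors rsync's default behavior (no --checksum): a file whose
        size and modification time match the destination copy is assumed
        unchanged and its contents are never read.
        """
        if not os.path.exists(dst):
            return True
        s, d = os.stat(src), os.stat(dst)
        return s.st_size != d.st_size or int(s.st_mtime) != int(d.st_mtime)
    ```

    That’s why the first run is the expensive one: after that, most files fail the “needs transfer” test on two stat calls alone.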