  • Weirdly, Chris does have a GitHub repo for it, which would be way more secure for serving downloads, but it’s not mentioned on the download page. We’re also assuming that Chris is trustworthy and hasn’t included a malicious payload in this file - if he had, it wouldn’t matter whether the file is authentic, it would still be malicious.

    My advice is don’t run things you don’t understand or that people you trust haven’t vouched for. If you use Linux, you inherently trust your repository maintainers not to serve you malicious code and to audit the packages they maintain, so you can get safe software from secure repositories without needing to understand every line of code. This isn’t 100% bulletproof, as we saw with the XZ Utils backdoor, but it’s a hell of a lot safer than piping random scripts straight into your shell.
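
    For what it’s worth, when a developer does publish a checksum (or you can pull one from a more trustworthy channel, like that GitHub repo), verifying the download before running it is trivial. A minimal sketch in Python - the file path and expected hash on the command line are placeholders for whatever the project publishes:

```python
import hashlib
import sys

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Usage: python verify.py <downloaded-file> <expected-sha256>
    path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256sum(path)
    if actual == expected:
        print("OK: checksum matches")
    else:
        print(f"MISMATCH:\n expected {expected}\n got      {actual}")
        sys.exit(1)
```

    Keep in mind this only proves you received the file the author intended, not that the file itself is benign - which is exactly the point above.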


  • Worth noting that when What died, ~4 new sites popped up immediately and invited all the old members, and everyone raced to re-upload everything from What onto them, which was pretty effective. At this point, RED and OPS have greatly surpassed What in many ways, aside from some releases that never made it back (you can actually find out which releases used to exist, because What’s database was made available after its death). Users and staff are also a lot more prepared if it happens again, e.g. by keeping track of all metadata via “gazelle-origin”.

    If by “in” you mean how to get into them, generally you’re supposed to have a friend invite you. If you don’t know anyone on private trackers, you’ve gotta get in from scratch. Luckily, RED and OPS both run interviews to test your knowledge of the technicals of music formats, though I’ve heard RED’s interview queues are long and OPS’s interviews often just aren’t happening: https://interviewfor.red/en/index.html and https://interview.orpheus.network/

    Alternatively, you can interview for MAM, which is IMO the best ebook/audiobook tracker. They’re super chill and have a very simple interview, e.g. “what is a tracker”: https://www.myanonamouse.net/inviteapp.php. After that, you can hang around there for a while until you can get into their recruitment forums for invites to other entry-level trackers, then on those entry-level trackers get recruited into slightly higher-level ones, and so on - eventually RED/OPS should be recruiting from somewhere along the chain.

    This can feel a little silly and convoluted, but I just appreciate that these sites put in the effort to interview new people at all, since the alternative is never getting into anything without a friend. Reddit’s /r/trackers wiki is unfortunately one of the better places for information about private trackers if you want to do further reading.




  • Yes, it’s allowed and encouraged between RED<->OPS. There are a few tools on the RED and OPS forums to automate most of the process (e.g. Transplant, REDCurry, Takeout, Orpheus-Populator, etc.). Cross-posting torrents on many sites is allowed and fine, you just have to be aware of the source site’s rules, e.g. some places don’t want their internals shared, and some have a literal countdown timer before cross-posting is allowed. On the other hand, most sites are not going to enforce other sites’ exclusivity demands (PTP explicitly has a note about this). If an exclusive file is cross-posted onto PTP, PTP isn’t going to take it down on anyone’s behalf.

    I’ll note that private tracker culture has warmed up quite a bit in the past decade and a half that I’ve been on them. Trackers (and their users) don’t usually see other trackers as rivals/competitors anymore, release groups are respectful of each other, there are a ton of tutorials and help forums around to help low-skill members learn how to do the advanced stuff, and so on. There are recognizable usernames everywhere, and the general vibe is to cross-upload as much as possible and help build everyone’s trackers together. Cross-seed (the program) has helped a lot with this, and seedbases have become very strong even on smaller trackers as a result.





  • I don’t think it will be a big deal to transcode MP3 to Opus as long as you’re okay with audio files that are, at least in theory, slightly scuffed up. Every generation of lossy encoding (especially across different encoders) leaves little artifacts all over the waveforms, typically seen as little “blocks”. Are they audible? Doubtful. If you want to keep a neat and high-quality library, I’d recommend collecting FLAC next time around.

    Also, this won’t work on Win11, and I don’t think you can make it transcode MP3, but if anyone happens to have slightly different requirements I’ll plug https://gitlab.com/beep_street/mkopuslibrary, which I use to keep my FLAC library in sync with a parallel Opus library for mobile use.
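
    If anyone wants a rough cross-platform version of that idea, the core loop is tiny. A minimal Python sketch (not how mkopuslibrary itself works): it assumes ffmpeg built with libopus is on your PATH, the paths and bitrate are placeholders, and it doesn’t handle deletions, cover art, or non-FLAC sources:

```python
import subprocess
from pathlib import Path

# Placeholder paths - point these at your own library layout.
SRC = Path("/music/flac")
DST = Path("/music/opus")
BITRATE = "128k"  # a common "good enough for mobile" Opus target

for flac in sorted(SRC.rglob("*.flac")):
    out = DST / flac.relative_to(SRC).with_suffix(".opus")
    if out.exists():
        continue  # already synced on a previous run
    out.parent.mkdir(parents=True, exist_ok=True)
    # Requires an ffmpeg build with libopus (most distro builds have it).
    subprocess.run(
        ["ffmpeg", "-n", "-i", str(flac),
         "-c:a", "libopus", "-b:a", BITRATE, str(out)],
        check=True,
    )
```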



  • No, and in fact they have fought to unseal and publish the subpoenas they’ve received. The point is that if you read those subpoenas, they request a lot of data from Signal, and Signal can only ever return the phone number, account creation date, and last-connected timestamp. So either Signal is consistently lying to various governments or they genuinely don’t have any of that data. Signal’s client is also open-source and has been audited, and they have published many blog posts about how the technology works.

    I’d strongly recommend digging deeper into this and trusting the auditors and experts instead of dismissing it based on lazy and cynical guesses. If you don’t trust anyone you’re welcome to read the source code of the client yourself. Soatok recently posted an 8-part series going through Signal’s encryption that you can read as a primer: https://soatok.blog/2025/02/18/reviewing-the-cryptography-used-by-signal/.




  • There’s nothing worrying about Signal’s centralization model. The server acts only as a clueless message relay, and it has near-zero information on any of its users, even as it delivers messages from person to person. The only things Signal knows are whether a phone number is registered and the last time it connected to the server. Great care is taken to make sure everything else is completely end-to-end encrypted and unknowable, even by subpoena (there’s a toy sketch of why that works at the bottom of this comment).

    The only real issue with Signal’s centralization is that if Signal the company goes down, all clients stop working until someone stands up a new server to act as a relay. Signal isn’t the endgame of privacy, but it’s the best we have right now for a lot of use cases, and it’s the only one I’ve had any luck converting normies to, as it’s very polished and has a lot of features. IMO, by the time the central Signal server turns into an actual problem, we’ll hopefully have excellent options to migrate to.

    Also TMK, the only reason you still need a phone number for Signal is to combat spam. You can stop your phone number being shown to anyone else in the app and use only temporary invite codes to connect with people, so I don’t count the phone number as a huge problem, though the requirement does still annoy me, as it makes having multiple accounts more difficult and presumes a certain level of privilege.
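
    For the curious: Signal’s real design (the Double Ratchet) is far more involved, but the basic reason a relay can’t read end-to-end encrypted traffic is plain old public-key cryptography. A toy sketch using Python’s cryptography package - an illustration of the concept, not Signal’s actual handshake:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Alice and Bob each hold a private key; only public keys ever cross the wire.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

def derive_key(my_priv, their_pub) -> bytes:
    """Both sides derive the same 32-byte key via Diffie-Hellman + HKDF."""
    shared = my_priv.exchange(their_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"toy-e2e-demo").derive(shared)

key = derive_key(alice_priv, bob_priv.public_key())
assert key == derive_key(bob_priv, alice_priv.public_key())

# Alice encrypts; the relay in the middle only ever sees opaque ciphertext.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"meet at noon", None)

# Bob decrypts on his device; the server held no key material at any point.
print(ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None))
```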


  • If you’re only at 10 Mbps upload you’ll have to be very careful about selecting microsized 1080p (~4-9 Mbps) or quality 720p (~6-9 Mbps) encodes, and even then I really wouldn’t bother (see the back-of-the-envelope math at the bottom of this comment). If you can’t get any more upload speed from your plan, you’ll either have to shelve the idea or host everything from a VPS.

    You can go with a VPS and maybe make people chip in for the storage space, but in that case I’d still lean towards either microsized 1080p encodes or 1080p WEB-DLs (which are inherently efficient for the size) if you want a big content base without breaking the bank. E.g., these prices look pretty doable if you’ve got people who can chip in: https://hostingby.design/app-hosting/. I’m not very familiar with which VPS options are available or reputable, so you’ll have to shop around. Anything with a big hard drive should pretty much work, though I’d recommend at least a few gigs of RAM just for Jellyfin (my long-running local instance is taking 1.3GB at the moment; no idea what the usual range might be). Also, you likely won’t be able to transcode video, so you’ll have to be a little careful about what everyone’s playback devices support.

    Edit: Also, if you’re not familiar with microsized encodes, look for groups like BHDStudio, NAN0, hallowed, TAoE, QxR, HONE, PxHD, and such. I know at least BHDStudio, NAN0, and hallowed are well-regarded, but intentionally microsizing for streaming is a relatively new concept, and it’s hard to sleuth out who’s doing a good job and who’s just crushing the hell out of the source and making a mess - especially because a lot of these groups don’t even post source<->encode comparisons (I can guess why). You can find a lot of them on TL, ATH, and HUNO, if those acronyms mean anything to you. Otherwise, a lot of these groups post completely publicly as well, since most private trackers do not allow microsizing.
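
    And here’s the back-of-the-envelope math on the upload budget from the top of this comment. The bitrates are midpoints of the rough ranges above, and the 20% headroom factor is just my own rule of thumb:

```python
# How many simultaneous streams fit in a given upload pipe?
UPLOAD_MBPS = 10
HEADROOM = 0.8  # keep ~20% spare for protocol overhead and bitrate spikes

encodes = {
    "microsized 1080p": 6.5,       # midpoint of the ~4-9 Mbps range above
    "quality 720p": 7.5,           # midpoint of the ~6-9 Mbps range above
    "typical 1080p WEB-DL": 8.0,   # rough assumption; varies a lot by title
}

budget = UPLOAD_MBPS * HEADROOM
for name, mbps in encodes.items():
    print(f"{name:>22}: {int(budget // mbps)} concurrent stream(s) at ~{mbps} Mbps")
```

    That works out to a single stream at best, which is why I really wouldn’t bother on a 10 Mbps plan.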




  • Screen-sharing is part of chat apps nowadays. You’re fully within your rights to stay on IRC and pretend that featureful chat is not the norm these days, but that doesn’t mean society is going to move to IRC with you. Like it or not, encrypted chat apps have to become even more usable for the average person for adoption to go up. This reminds me of how all the old Linux-heads insisted that gaming was for children and that Linux didn’t need gaming. Suddenly now that Linux has gaming, adoption is going way up - what a coincidence.

    Edit: Also for the record, I have a tech-savvy friend who refuses to move to Signal until there are custom emoji reactions, of all things. You can definitely direct your ire towards these people, but the reality is some people have a certain comfort target, and convincing them to settle for less is often harder than improving the app itself.


  • The nice thing is that Linux is always improving and Windows is always in retrograde. The more users Linux has, the faster it will improve. If the current state of Linux is acceptable enough for you as a user, then it should be possible to get your foot in the door and ride the wave upwards. If not, wait for the wave to reach your comfort level. People always say <CURRENT_YEAR> is the year of the Linux desktop but IMO the real year of the Linux desktop was like 4 or 5 years ago now, and hopefully that captured momentum will keep going until critical mass is achieved (optimistically, I think we’re basically already there).


  • To be fair, it’s also basically impossible to make extremely high quality AV1 video, which is what a lot of P2P groups strive for. A lot of effort has gone into trying, and the results weren’t good enough compared to x264, so AV1 has been ignored. It’s great at compression efficiency, but it can’t make fully transparent encodes (i.e., indistinguishable from the source). It might be different with AV2, though even if that becomes possible, AV2 may still be ignored over compatibility: to this day groups use DTS-HD MA for surround sound over the objectively superior FLAC codec because of hardware compatibility (for 1.0/2.0 audio they use FLAC, since players usually support that).

    As for HEVC/x265, it too is not as good as x264 at very-high-quality encoding, so it’s also ignored when possible. Basically the breakdown is that 4k encoding uses x265 in order to store HDR, and because the big-block efficiency of x265 is good enough to compress further than the source material. x264 wouldn’t be used for 4k encoding even if it could store HDR: its compression efficiency is so bad at higher resolutions that any decent-quality encode would end up bigger than the source material. Many people don’t even bother with 4k x265 encodes and just collect the full discs/remuxes instead, because they dislike x265’s encoder quality and don’t deem the size efficiency worth its picture quality impact (pretty picky people here, and I’m not really in that camp).

    For 1080p, x265 is only used when you want HDR in a 1080p package, because again x265’s picture quality can’t match x264, but most people deem HDR the bigger advantage. x264 is still the tool of choice for non-HDR 1080p encodes, and that’s not a culture thing, that’s just a quality thing. Once you get down into public P2P or random encoding groups it’s anything-goes, and x265 1080p encodes get a lot more common because x265’s efficiency is great compared to x264. But the very top-end quality just can’t match x264 in the hands of an experienced encoder, so the top encoding groups only use x265 when they have to.

    Edit: All that to say, we can’t entirely blame old-head culture or hardware compatibility for the unpopularity of newer formats. I think the home media collector use case is a complete outlier in terms of what these formats are actually being developed for. WEB-DL content favors HEVC and AV1 because they’re very efficient and display a “good enough” picture for streaming viewers. Physical Blu-rays don’t have to worry about HDD space or bandwidth, and just pump the bitrate sky-high on HEVC so that the picture quality looks great. For the record, VVC/x266 is already on the shortlist for being junk for the use cases described above (though x266 is too new to fully judge), so I wouldn’t hold my breath for AV2 either. If you’re okay with non-transparency, I’d just stick with HEVC WEB-DLs or try to find good encoding groups that target a more opinionated quality:size ratio (some do actually use AV1!). Rules of thumb for WEB-DL quality are here, though it will always vary on a title-by-title basis.
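
    Since rules of thumb came up: a common heuristic for eyeballing WEB-DL quality is bits per pixel, i.e. bitrate divided by width x height x framerate. A quick sketch - the example bitrates are made up for illustration, and how much bpp is “enough” depends heavily on the codec (HEVC/AV1 get away with noticeably less than H.264) and on the title itself:

```python
def bits_per_pixel(bitrate_kbps: float, width: int, height: int, fps: float) -> float:
    """Average bits spent per pixel per frame - a rough quality heuristic."""
    return (bitrate_kbps * 1000) / (width * height * fps)

# Illustrative examples with made-up bitrates:
examples = {
    "1080p WEB-DL @ 8000 kbps": (8000, 1920, 1080, 23.976),
    "microsized 1080p @ 5000 kbps": (5000, 1920, 1080, 23.976),
    "4k WEB-DL @ 16000 kbps": (16000, 3840, 2160, 23.976),
}
for label, args in examples.items():
    print(f"{label}: {bits_per_pixel(*args):.3f} bpp")
```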