

OMG, we as a society need to go back to that instead of filling a 24-hour news cycle with pundits and endless opinions.
I’m surprisingly level-headed for being a walking knot of anxiety.
Ask me anything.
Special skills include: Knowing all the “na na na nah nah nah na” parts of the Three’s Company theme.
I also develop Tesseract UI for Lemmy/Sublinks
Avatar by @SatyrSack@feddit.org




So, I set this up recently and agree with all of your points about the actual integration being glossed over.
I already had bot detection set up in my Nginx config, so adding Nepenthes was just a matter of changing the behavior of that. Previously, I had just returned either 404 or 444 to those requests, but now it redirects them to Nepenthes.
Rather than trying to do rewrites and pretend the Nepenthes content is under my app’s URL namespace, I just do a redirect which the bot crawlers tend to follow just fine.
There are several parts to this, each in its own include file, to keep my config sane.
An include file that looks at the user agent, compares it to a list of bot UA regexes, and sets a variable to either 0 or 1. By itself, that include file doesn’t do anything more than set that variable. This allows me to have it as a global config without having it apply to every virtual host.
An include file that performs the action if a variable is set to true. This has to be included in the server portion of each virtual host where I want the bot traffic to go to Nepenthes. If this isn’t included in a virtual host’s server block, then bot traffic is allowed.
A virtual host where the Nepenthes content is presented. I run a subdomain (content.mydomain.xyz). You could also do this as a path off of your protected domain, but this works for me and keeps my already complex config from getting any worse. Plus, it was easier to integrate into my existing bot config. Had I not already had that, I would have run it off of a path (and may go back and do that when I have time to mess with it again).
The map-bot-user-agents.conf is included in the http section of Nginx and applies to all virtual hosts. You can either include this in the main nginx.conf or at the top (above the server section) in your individual virtual host config file(s).
The deny-disallowed.conf is included individually in each virtual host's server section. Even though the bot detection is global, if the virtual host's server section does not include the action file, then nothing is done.
Note that I’m treating Google’s crawler the same as an AI bot because…well, it is. They’re abusing their search position by double-dipping on the crawler so you can’t opt out of being crawled for AI training without also preventing it from crawling you for search engine indexing. Depending on your needs, you may need to comment that out. I’ve also commented out the Python requests user agent. And forgive the mess at the bottom of the file. I inherited the seed list of user agents and haven’t cleaned up that massive regex one-liner.
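For reference, the include wiring described above might look something like this (file paths, the app hostname, and the Nepenthes backend address are illustrative assumptions; adjust to your layout):

```nginx
# /etc/nginx/nginx.conf (http context) -- the UA map applies globally
http {
    include /etc/nginx/conf.d/map-bot-user-agents.conf;
    # ... other global config and vhost includes ...
}

# Each protected virtual host opts in to the redirect action
server {
    server_name app.mydomain.xyz;
    include /etc/nginx/snippets/deny-disallowed.conf;
    # ... normal app config ...
}

# The tarpit virtual host where flagged bots get sent
server {
    server_name content.mydomain.xyz;
    location / {
        proxy_pass http://127.0.0.1:8893;  # Nepenthes listen address (assumed)
    }
}
```

A virtual host without the deny-disallowed.conf include still gets the `$ua_disallowed` variable set, but nothing acts on it, which is what makes the global map safe to load everywhere.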
# Map bot user agents
## Sets the $ua_disallowed variable to 0 or 1 depending on the user agent. Non-bot UAs are 0, bots are 1
map $http_user_agent $ua_disallowed {
    default 0;
    "~PerplexityBot" 1;
    "~PetalBot" 1;
    "~applebot" 1;
    "~compatible; zot" 1;
    "~Meta" 1;
    "~SurdotlyBot" 1;
    "~zgrab" 1;
    "~OAI-SearchBot" 1;
    "~Protopage" 1;
    "~Google-Test" 1;
    "~BacklinksExtendedBot" 1;
    "~microsoft-for-startups" 1;
    "~CCBot" 1;
    "~ClaudeBot" 1;
    "~VelenPublicWebCrawler" 1;
    "~WellKnownBot" 1;
    #"~python-requests" 1;
    "~bitdiscovery" 1;
    "~bingbot" 1;
    "~SemrushBot" 1;
    "~Bytespider" 1;
    "~AhrefsBot" 1;
    "~AwarioBot" 1;
    # "~Poduptime" 1;
    "~GPTBot" 1;
    "~DotBot" 1;
    "~ImagesiftBot" 1;
    "~Amazonbot" 1;
    "~GuzzleHttp" 1;
    "~DataForSeoBot" 1;
    "~StractBot" 1;
    "~Googlebot" 1;
    "~Barkrowler" 1;
    "~SeznamBot" 1;
    "~FriendlyCrawler" 1;
    "~facebookexternalhit" 1;
    "~*(?i)(80legs|360Spider|Aboundex|Abonti|Acunetix|^AIBOT|^Alexibot|Alligator|AllSubmitter|Apexoo|^asterias|^attach|^BackDoorBot|^BackStreet|^BackWeb|Badass|Bandit|Baid|Baiduspider|^BatchFTP|^Bigfoot|^Black.Hole|^BlackWidow|BlackWidow|^BlowFish|Blow|^BotALot|Buddy|^BuiltBotTough|^Bullseye|^BunnySlippers|BBBike|^Cegbfeieh|^CheeseBot|^CherryPicker|^ChinaClaw|^Cogentbot|CPython|Collector|cognitiveseo|Copier|^CopyRightCheck|^cosmos|^Crescent|CSHttp|^Custo|^Demon|^Devil|^DISCo|^DIIbot|discobot|^DittoSpyder|Download.Demon|Download.Devil|Download.Wonder|^dragonfly|^Drip|^eCatch|^EasyDL|^ebingbong|^EirGrabber|^EmailCollector|^EmailSiphon|^EmailWolf|^EroCrawler|^Exabot|^Express|Extractor|^EyeNetIE|FHscan|^FHscan|^flunky|^Foobot|^FrontPage|GalaxyBot|^gotit|Grabber|^GrabNet|^Grafula|^Harvest|^HEADMasterSEO|^hloader|^HMView|^HTTrack|httrack|HTTrack|htmlparser|^humanlinks|^IlseBot|Image.Stripper|Image.Sucker|imagefetch|^InfoNaviRobot|^InfoTekies|^Intelliseek|^InterGET|^Iria|^Jakarta|^JennyBot|^JetCar|JikeSpider|^JOC|^JustView|^Jyxobot|^Kenjin.Spider|^Keyword.Density|libwww|^larbin|LeechFTP|LeechGet|^LexiBot|^lftp|^libWeb|^likse|^LinkextractorPro|^LinkScan|^LNSpiderguy|^LinkWalker|msnbot|MSIECrawler|MJ12bot|MegaIndex|^Magnet|^Mag-Net|^MarkWatch|Mass.Downloader|masscan|^Mata.Hari|^Memo|^MIIxpc|^NAMEPROTECT|^Navroad|^NearSite|^NetAnts|^Netcraft|^NetMechanic|^NetSpider|^NetZIP|^NextGenSearchBot|^NICErsPRO|^niki-bot|^NimbleCrawler|^Nimbostratus-Bot|^Ninja|^Nmap|nmap|^NPbot|Offline.Explorer|Offline.Navigator|OpenLinkProfiler|^Octopus|^Openfind|^OutfoxBot|Pixray|probethenet|proximic|^PageGrabber|^pavuk|^pcBrowser|^Pockey|^ProPowerBot|^ProWebWalker|^psbot|^Pump|python-requests\/|^QueryN.Metasearch|^RealDownload|Reaper|^Reaper|^Ripper|Ripper|Recorder|^ReGet|^RepoMonkey|^RMA|scanbot|SEOkicks-Robot|seoscanners|^Stripper|^Sucker|Siphon|Siteimprove|^SiteSnagger|SiteSucker|^SlySearch|^SmartDownload|^Snake|^Snapbot|^Snoopy|Sosospider|^sogou|spbot|^SpaceBison|^spanner|^SpankBot|Spinn4r|^Sqworm|Sqworm|Stripper|Sucker|^SuperBot|SuperHTTP|^SuperHTTP|^Surfbot|^suzuran|^Szukacz|^tAkeOut|^Teleport|^Telesoft|^TurnitinBot|^The.Intraformant|^TheNomad|^TightTwatBot|^Titan|^True_Robot|^turingos|^TurnitinBot|^URLy.Warning|^Vacuum|^VCI|VidibleScraper|^VoidEYE|^WebAuto|^WebBandit|^WebCopier|^WebEnhancer|^WebFetch|^Web.Image.Collector|^WebLeacher|^WebmasterWorldForumBot|WebPix|^WebReaper|^WebSauger|Website.eXtractor|^Webster|WebShag|^WebStripper|WebSucker|^WebWhacker|^WebZIP|Whack|Whacker|^Widow|Widow|WinHTTrack|^WISENutbot|WWWOFFLE|^WWWOFFLE|^WWW-Collector-E|^Xaldon|^Xenu|^Zade|^Zeus|ZmEu|^Zyborg|SemrushBot|^WebFuck|^MJ12bot|^majestic12|^WallpapersHD)" 1;
}
# Deny disallowed user agents
if ($ua_disallowed) {
    # This redirects them to the Nepenthes domain. So far, pretty much all the bot crawlers have been happy to accept the redirect and crawl the tarpit continuously
    return 301 https://content.mydomain.xyz/;
}
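If you want to sanity-check which user agents the map would flag before deploying, here's a rough Python approximation of the map's behavior (nginx's quoted `~pattern` entries are case-sensitive regexes matched anywhere in the User-Agent string; the pattern list below is a small illustrative subset of the full config, not the whole thing):

```python
import re

# Illustrative subset of the "~pattern" entries from the nginx map above.
# Each is a case-sensitive regex matched anywhere in the User-Agent.
BOT_PATTERNS = [
    "PerplexityBot", "GPTBot", "ClaudeBot", "CCBot",
    "Googlebot", "Bytespider", "facebookexternalhit",
]

def is_disallowed(user_agent: str) -> bool:
    """Return True if any pattern matches, mimicking $ua_disallowed = 1."""
    return any(re.search(p, user_agent) for p in BOT_PATTERNS)

print(is_disallowed("Mozilla/5.0 (compatible; GPTBot/1.0)"))    # True
print(is_disallowed("Mozilla/5.0 (X11; Linux x86_64) Firefox"))  # False
```

This won't catch differences in how nginx evaluates `map` (e.g., the `~*` case-insensitive variant or the `default` value), but it's handy for quickly testing a UA string against the list.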


With automatic updates enabled, they seem to install without confirmation. I usually wake up every few days to a notification that a few apps updated overnight.


Is the bootloader unlockable?


Not all of my devices have smart lock, and the two that do only support on-body detection, GPS-based location, or trusted bluetooth devices. Those two are also the ones that aren’t fully de-Googled, so I’m guessing the smart lock is dependent on that?
I also have a block house with a metal roof, and with LineageOS not having A-GPS support, that makes location-based smart lock untenable.
Bah. Probably just going to disable the lock entirely.


Honestly, I prefer A11 and agree it’s gotten worse since, thinking mostly along the lines of security patches and the like.
The GSI ROM I’m running on my secondary one is LineageOS on top of the stock system (basically how GSI images work). There’s no official LineageOS support, but I’m honestly not sure what the limitation is; it may just be that no one is willing to maintain it.


Unfortunately they don’t make them anymore, and they’re stuck on Android 11. I have two, and one of them is running a GSI ROM for Android 13 but it’s got issues with VoLTE so it’s not my daily driver.


I don’t say this lightly, but completely on par with a classic Nokia with the exception of the touch screen. Not that I’ve damaged it, but touch screens are the weak point in all modern phones.


Yeah, I dunno how accurate, let alone comprehensive, this repo of shame is.
From: https://github.com/zenfyrdev/bootloader-unlock-wall-of-shame/blob/main/brands/cat/README.md
Cat’s phones have the OEM Unlock option in the settings app, but the typical fastboot flashing unlock/fastboot oem unlock just returns an unknown command error.
They didn’t bother to list what model they tested before deeming the entire brand “Terrible”. I have two CAT S22 Flips (still daily driving one of them), and both of them unlocked immediately with the fastboot command and without requiring an unlock key.
Mostly moot, though. You can still get CAT phones on the secondhand market, but Bullitt Group (the manufacturer of their branded phones) went out of business in April 2024.
I’m honestly not sure why you’d buy a phone from a tractor company anyways.
Because their phones are rugged AF and nearly indestructible.


Gonna start a side hustle of forking open source projects and removing the cringey anime girls.


I had to google “Ozempic face” because I’ve apparently been living under a rock.
In case anyone else was under a rock…
From Cleveland Clinic:



So what if Lemmy, Piefed, Mbin, and NodeBB made it so that only the first matching community gets the post?
Not sure about the others, but doesn’t Lemmy do that already (only apply the first matching community)? I’ve been out of the loop for several months, so maybe it changed, but I thought that was already the behavior.


It’s strange how we’ve moved from mall shopping to online shopping to now AI shopping for us
Well, “we” only did the first move because it was more convenient. The latter is being forced on us.


Google’s methods are shitty and exploitative, yes, but this is far from “censoring”. And “censor” is not used just for a clickbait title - the author claims “censorship” multiple times in the article before I stopped reading for health reasons (the doctor says if I keep rolling my eyes, my ocular muscles will spasm and eject my eyeballs).
Really wish people would stop wielding powerful words irresponsibly.
Yeah, maybe add “[Satire]” to the title because it reads too much like every other AI-bro press release lol.


Yeah, A/C is a power sink; I’m not arguing that. But people are increasingly in need of it for survival. No one needs AI (the biggest datacenter power suck to date).


If you need a printer now, I’d just make the best of it while saving for a good laser printer. The vertical alignment may be fixable with a calibration (check the manual to see if there’s a procedure). Not sure about the jams, but usually a good cleaning will fix that.
Ink-jet printers are money pits. Laser printers are superior in just about every way. They cost a little more upfront, but the long-term costs are much, much less than any inkjet. If you only need black and white, you can get a decent laser printer for $100-150. I bought mine in 2014, use it infrequently, and am still on the toner that came with it. Paid $100 for it, and it’s printed every time I’ve needed it for over a decade and going.
If you need to print color (or photos), color lasers are a bit more expensive but not by much (maybe double the cost of a B&W one). Honestly, though, unless you print pictures all the time, it’s cheaper and easier to just take them somewhere to be printed (or use an online service and have the prints mailed to you).


Quickly send files, paste images/text snippets between devices.
I’m using the older Snapdrop (which PD was forked from) with some patches I made to:
It has 100% replaced emailing things to myself or shuffling files to/from Nextcloud. I probably use it to send text (URLs, clipboard contents, etc) to/from my phone as much as I use it for sending files back and forth.


Right? It sounds delicious. Not sure how that would fly with modern health and safety rules, though. The Wikipedia entry says a New York restaurant did one for ~8 months, so it must be possible somehow.
You joke, but there are news “articles” that are just a bunch of quotes of what people are saying about a topic on Reddit or Twitter that read very similarly to what you commented.