I think you might have missed my point. I wasn’t listing stuff I had trouble understanding. I was listing stuff that didn’t make much sense. The distinction is relevant. The end result, even if you manage to excuse why it isn’t as bad as it looks, still isn’t anything useful or informative.
I’m also not using fancy words. The only fancy thing that stands out is the “Bloom filter”, which isn’t a fancy word. It’s just a thing, specifically a data structure. I referenced it because it’s an indication of an LLM behaving like the stochastic parrot that it is. LLMs don’t know anything, and no transformer-based approach will ever know anything. The “filter” part of “Bloom filter” will have associations to other “filters”. That’s why you see “creator filter” in the same context as “Bloom filter”, even though a Bloom filter is something no human expert would put there.
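Since “Bloom filter” keeps coming up, here’s roughly what one actually is, for anyone reading along: a probabilistic set-membership test that answers “definitely not present” or “might be present”, nothing to do with filtering content. A minimal Python sketch (the names and parameters here are mine, not anything from the diagram):

```python
# Minimal Bloom filter sketch: a bit array plus k hash functions.
# "might_contain" can give false positives, never false negatives.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k independent positions by salting a single hash.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely never added; True means probably added.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("hello")
print(bf.might_contain("hello"))  # True
print(bf.might_contain("absent"))  # almost certainly False (false positives are possible)
```

The point is that it’s a space/accuracy trade-off for membership queries (dedup, caches, “have I seen this before”), which is why it sticks out so badly in a context where no such query exists.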
The most amusing and annoying thing about AI slop is that it’s loved by people who don’t understand the subject. They confuse an observation of slop (made by people who do know the subject) with “ah, you just don’t get it” (made by people who don’t).
I design and implement systems and “algorithms” like this as part of my job. Communicating them efficiently is also part of that job. If anyone had come to me with this diagram pre-2022, I’d be genuinely concerned that they weren’t OK, or had had some kind of stroke. After 2022, my LLM-slop radar is pretty spot on.
But hey, you do you. I needed to take a shit earlier and made the mistake of answering. Now I’m being an idiot who should know better. Look up Brandolini’s law, if you need an explanation for what I mean.
Let me ask you this tho. When you say “do in fact make sense”, are you basing that on the context of what you think this diagram is saying? Or do you mean “do in fact make sense” in the context of knowing how such an algorithm would actually be constructed?
You still keep missing my points, and they aren’t difficult points either. The fancy jargon words were a basic-ass description of what a Bloom filter does. So you’re kinda making my argument for me, which is funny, because you don’t get the argument either, and you also won’t understand why it is funny.
I’m also not tangentially an expert, for fuck’s sake. I’m the kind whose day job is to design simpler things than what this diagram is trying to “explain”, and I’m telling you that it comes across as if made with a toddler’s understanding. I also didn’t say this was 100% guaranteed to be LLM output; I said it smelled like it. I have suggested other possible explanations: stupidity, incompetence, and even a stroke.
Your take on being tangentially an expert might be a whoosh moment.
I’m also out of shits to give at this point. Literally.
You still think that’s relevant? I think you might want to read again what I’ve written, from the start. Also, since you’re already learning so much: read about how LLMs and transformers work. Then maybe read what I wrote a couple more times to make sure. Fingers crossed you figure it out. I don’t like being a dick to people, but I genuinely had good intentions to begin with.
Nah. You mistook my “these are the parts that really don’t make sense for a human to make” for “I don’t understand the subject, or what this complex concept could mean”.
If you don’t see the difference, you’re just going in a loop of arguing the wrong point. I was hoping to save you the trouble of the “you don’t get it” line by saying “trust me, I do get it, I’m a goddamn expert”.
I’m happy to indulge in explaining things to people who want to learn something. I happily fuck with people who seem disingenuous about that goal. If I was wrong and you genuinely meant to ask “why doesn’t this make sense?”, then I’m sorry. I misread your intentions, and I’ll keep it in mind.
Wouldn’t want to be accused of using big words, now, would I?