• 0 Posts
  • 40 Comments
Joined 2 years ago
Cake day: June 22nd, 2023




  • To demonstrate the efficacy of the tiny screen, the researchers reproduced The Kiss, a famous artwork painted by Gustav Klimt. The image was shown in perfect resolution on the screen, which at approximately 1.4 x 1.9 mm was 1/4000th that of a standard smartphone.

    This makes me doubt the article author’s credibility. What exactly is the “perfect resolution” of a hand-painted piece of art?

    The underlying paper is published in Nature, which lends the result some credibility, but an article that presents none of the limitations, drawbacks, or broader industry context that might hold something like this back isn’t adding much. What was the colour depth? The refresh rate? Does it get thrown off if the external light shifts and changes? How many children have to be sacrificed to the machine gods to produce it? Etc., etc.


  • And your point is wrong because you keep boiling it down to simple black and white.

    The Nobel prize is not purely political and is not devoid of merit.

    The world is not full of binary systems. It’s made of multi-variable systems where multiple influences can be true at the same time.

    If you want to make a point about why accurately predicting the structure of hundreds of thousands of proteins doesn’t deserve the Nobel in chemistry, then I’m all ears. Please tell us all exactly why you think their prize was political rather than meritocratic, and why automatically predicting protein structures is not important.

    Because if you can’t answer that very specific question, then you weren’t making a point relevant to the conversation, you were making a snide generalization to hear yourself speak.



  • Lmao, it’s binary cause you say it’s binary.

    Bro grow up. The world is not black and white. Literally not a single award on the planet is meritocratic if you insist on dealing in absolutes. Every award is awarded by some committee and there is some room left for human judgement, which leaves room for human bias, which makes it not perfectly meritocratic.

    If you want to go on an unhinged rant that no one wants to listen to, then email the Nobel Foundation directly; don’t waste federated server time.






  • I mean, I agree that it’s probably vastly overvalued as a whole; the leap between current LLM capabilities and an actual trusted engineer is pretty big, and it seems like a lot of people are valuing them at the level of engineer capabilities.

    But one caveat is that simulated neural networks are a technological avenue that theoretically could get there eventually. There are still a lot of unknowns about cognition, but large AI models are starting to approach the scale of neurons in the human brain, and as far as we can tell there’s no quantum magic involved in cognition, just synapses firing, which neural networks can simulate.
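    The “scale” comparison above is worth pinning down with rough numbers. The figures below are approximate public estimates I’m supplying for illustration (not from the comment itself), and the parameter count for frontier models is a rumored order of magnitude, not a confirmed spec:

    ```python
    # Back-of-envelope comparison of model size to brain scale.
    # All numbers are approximate public estimates (assumptions, not sourced facts).
    human_neurons = 86e9        # ~86 billion neurons, a commonly cited estimate
    human_synapses = 1e14       # ~100 trillion synaptic connections
    large_model_params = 1e12   # frontier LLMs rumored to be around a trillion parameters

    # Parameters are loosely analogous to synapses (connection weights),
    # not neurons, so the honest gap is larger than the neuron count suggests.
    print(f"params vs neurons:  {large_model_params / human_neurons:.1f}x")
    print(f"params vs synapses: {large_model_params / human_synapses:.0%}")
    ```

    On these rough numbers, models exceed the brain’s neuron count but still sit around a percent of its synapse count, which is why “approaching the scale of the brain” needs the hedging it gets above.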

    And the other caveat is the bear-proof trash can analogy: the whole park ranger story where they said it’s impossible to make a bear-proof trash can because there’s significant overlap between the smartest bears and the dumbest humans.

    Now, I don’t think AI is even that close to bear level in terms of general intelligence, but a lot of current jobs don’t require that much intelligence. We just have people doing them because there’s some inherent step in the process that’s semantic or fuzzy-pattern-matching based, and computers / traditional software just previously couldn’t do it. So we have humans doing things like processing applications, where they’re just mindlessly reading, looking for a few keywords, and stamping. There are a lot of industries where AI could literally be the key algorithm needed to fully automate the industry, or to radically minimize the number of human workers needed.
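    That “read, look for keywords, stamp” loop is simple enough to sketch; the names and keywords below are made up for illustration. The point is that the rigid version is trivial software, and the fuzzy version (synonyms, paraphrases, context) is exactly the part that previously needed a human:

    ```python
    # Minimal sketch of the rote screening step described above: approve an
    # application if every required keyword appears, otherwise reject it.
    # REQUIRED_KEYWORDS and the sample texts are hypothetical examples.

    REQUIRED_KEYWORDS = {"python", "sql"}

    def stamp(application_text: str) -> str:
        """Mindlessly 'read' the text, look for the keywords, and stamp it."""
        words = set(application_text.lower().split())
        return "APPROVED" if REQUIRED_KEYWORDS <= words else "REJECTED"

    print(stamp("Five years of Python and SQL experience"))  # APPROVED
    print(stamp("Expert in spreadsheets"))                   # REJECTED
    ```

    A candidate who writes “Postgres and pandas” gets rejected by this rigid matcher even though a human reader would pass them, and that semantic gap is the slot the comment argues language models can fill.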

    Crypto was like, “Hey, that decentralized database implementation is pretty cool; in what situations would that be useful?” And the answer was basically just “laundering money”.

    Neural network algorithms on the other hand present possible optimizations for a truly massive number of tasks in society that were otherwise unautomatable.








  • I’d argue, that it sometimes adds complexity to an already fragile system.

    You don’t have to argue that; I think that’s inarguably true. But more complexity doesn’t inherently mean worse.

    Automatic braking and collision-avoidance systems in cars add complexity, but they also objectively make cars safer. Same with controls on the steering wheel: they add complexity, because you now often have two places where things can be controlled and you increasingly have to rely on drive-by-wire systems, but HOTAS interfaces (Hands On Throttle And Stick) help keep you focused on the road and make the overall system of driving safer. While semantic modelling and control systems absolutely can make things less safe, done well they can also let a robot or machine act in more human ways (like detecting that it’s injuring someone and stopping, for instance).

    Direct control over systems without unreliable interfaces, semantic translation layer, computer vision dependancy etc serves the same tasks without additional risks and computational overheads.

    But in this case, Waymo is still having to do that. They’re still running their sensor data through incredibly complex machine learning models that are somewhat black boxes, producing semantic understandings of the world around the car, and then acting on those models. The primary difference between Waymo and Tesla isn’t complexity or direct control of systems; it’s that Tesla relies on camera data, which is significantly worse than the human eye / brain, whereas Waymo and everyone else supplement their limited camera data with sensors like Lidar and Sonar that can see in ways and situations humans can’t, which lets them compensate.
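    The “supplementing limited camera data” point can be illustrated with a textbook fusion rule, inverse-variance weighting, where the less noisy sensor automatically dominates the combined estimate. This is a generic sketch, not Waymo’s actual pipeline, and the noise figures are invented for illustration:

    ```python
    # Toy sensor fusion: combine two independent, noisy distance estimates
    # by inverse-variance weighting. The more precise sensor gets more weight,
    # and the fused estimate is less uncertain than either input alone.

    def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
        """Return (fused_estimate, fused_variance) for two independent readings."""
        w_a = 1.0 / var_a
        w_b = 1.0 / var_b
        fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
        fused_var = 1.0 / (w_a + w_b)
        return fused, fused_var

    # Hypothetical numbers: a camera depth estimate of 21 m with high noise
    # (variance 4.0) vs. a Lidar return of 20 m with low noise (variance 0.04).
    dist, var = fuse(21.0, 4.0, 20.0, 0.04)
    print(f"fused distance: {dist:.2f} m, variance: {var:.3f}")
    ```

    With those made-up noise levels the fused answer lands almost exactly on the Lidar reading (~20.01 m), which is the mechanism behind the argument: adding a precise sensor doesn’t just add data, it dominates and corrects the weak one.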

    That, and Waymo is actually a serious engineering company that takes responsibility seriously, takes far fewer risks, and is far more thorough about failure analysis, redundancy, etc.


  • I don’t misunderstand how they work at all.

    Quite frankly, what you’re saying doesn’t matter in the context of my point. It literally does not matter whatsoever that they are language based rather than logic based, as long as they produce helpful results, and they inarguably do. You’re making the same kind of point my middle school librarians made about Wikipedia. You’re getting hung up on how it works, and since that’s different from how previous information sources worked, you’re declaring that they cannot be trusted, ignoring the fact that regardless of how they work, they’re still right most of the time.

    As I said, it is far faster to ask Copilot a question about Salesforce and verify its answers than it is to try to manually search through their nightmarish docs. The same goes for numerous other things.

    Everyone seems so caught up in the idea that it’s just a fancy text-prediction machine, and fails to consider what it says about our intelligence that those text-prediction machines are so often reasonably correct. Anthropological research has long suggested that language is a core part of why humans are so intelligent, yet everyone clowns on a language-based collection of simulated neurons like it couldn’t have anything remotely to do with intelligence.