• 1 Post
  • 16 Comments
Joined 8 months ago
Cake day: June 23rd, 2025



  • You’re definitely right that image-processing AI doesn’t work in a linear manner the way text processing does, but the training and inference are similarly fuzzy and prone to false positives and negatives. (An early model famously misclassified dogs as wolves because it keyed on the white, snowy backgrounds rather than the animals themselves; a toy sketch of that kind of shortcut learning is below.) And unless the model starts and stays perfect, you need well-trained doctors to correct it, which is exactly what the model apparently discourages.
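
    (For what it’s worth, here’s a minimal, entirely made-up sketch of that kind of shortcut learning: a toy classifier that ends up keying on a hypothetical “background brightness” feature instead of the animal, simply because the training photos happen to correlate wolves with snow. Nothing here is a real model or real data.)

```python
# Toy illustration of "shortcut learning": the classifier keys on the
# background instead of the animal. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Training set: every "wolf" photo happens to have a bright (snowy) background.
background = np.concatenate([rng.normal(0.9, 0.05, n),   # wolves: snow
                             rng.normal(0.2, 0.05, n)])  # dogs: grass/indoors
animal_shape = rng.normal(0.5, 0.2, 2 * n)               # deliberately uninformative
X = np.column_stack([background, animal_shape])
y = np.array([1] * n + [0] * n)                          # 1 = wolf, 0 = dog

# Tiny logistic regression fit by gradient descent (no external ML library).
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * np.mean(p - y)

print("learned weights:", w)   # the background weight dominates

# A dog photographed in snow now gets called a wolf:
dog_in_snow = np.array([0.9, 0.5])
print("P(wolf | dog in snow) =", 1 / (1 + np.exp(-(dog_in_snow @ w + b))))
```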




  • It’s nice to see articles that push back against the myth of AI superintelligence. A lot of people who brand themselves as “AI safety experts” preach this ideology as if it were a guaranteed fact. I’ve never seen any of them talk about real, present-day issues with AI, though.

    (The superintelligence myth is a promotion strategy; OpenAI and Anthropic both lean into it because they know it boosts their valuations.)

    In the case of Moldbook or FaceClaw or whatever they’re calling it, a lot of the AGI talk is sillier than ever, frankly. Many people who register their bots have become entirely lost in their own sauce, convinced that because their bots speak in the first person, they’ve somehow come alive.

    It’s embarrassing, really. People promoting the industry have every incentive to exaggerate their claims on Twitter for the revenue, but some of them are starting to buy into it.




  • Calculators are programmed to respond deterministically to math questions. You don’t have to feed them a library of math questions and answers for them to function. You don’t have to worry about wrong answers poisoning that data.

    LLMs, by contrast, are just word predictors, and as such you can poison them with bad data, such as accidental or intentional bias or errors. In other words, that study points to the first step in a vicious cycle that we don’t want to occur. (A toy sketch of the contrast is below.)
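
    To make that contrast concrete, here’s a toy sketch, with made-up sentences and a bigram counter standing in for an LLM (a real model is vastly more complex, but the poisoning mechanism is the same in spirit):

```python
# Hypothetical toy contrast between a fixed procedure and a data-driven predictor.
from collections import Counter, defaultdict

# A calculator is a fixed procedure: same input, same correct output, no training data.
def calculator_add(a: int, b: int) -> int:
    return a + b

# A toy "word predictor": it just counts which word followed which in its corpus.
def train_bigrams(corpus: list[str]) -> dict:
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model: dict, word: str) -> str:
    return model[word].most_common(1)[0][0]

clean_corpus = ["two plus two equals four"] * 10
poisoned_corpus = clean_corpus + ["two plus two equals five"] * 20  # bad data outweighs good

print(calculator_add(2, 2))                                    # always 4
print(predict_next(train_bigrams(clean_corpus), "equals"))     # "four"
print(predict_next(train_bigrams(poisoned_corpus), "equals"))  # "five" -- the poison wins
```

    The calculator’s answer can’t be moved by bad examples, because there are no examples; the predictor’s answer is whatever its corpus says most often.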


  • Even if we narrowed the scope of the training data exclusively to professionals, we would still have issues with, for example, racial bias. Doctors underprescribe pain medication to Black patients because of the prevalent myth that they have a higher tolerance for pain. If you feed that kind of data into an AI, it will absorb the doctors’ unconscious racism.

    And that’s a best-case scenario that isn’t even technically possible. To get an AI to produce readable text at all, we have to feed it so much data that the people pumping it in can’t screen it. (AI “art” has the same problem: when people say they trained an AI only on their own images, you can bet they just fine-tuned a thin layer on top of a base model built from other people’s work.) So yeah, we do get the extra biases regardless. A toy illustration of how bias in the records carries straight into a model follows below.
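
    Here’s a deliberately crude illustration of that point, with invented numbers and a “model” that does nothing but reproduce historical prescribing rates; real clinical data and real models are far messier, but the mechanism is the same:

```python
# Hypothetical sketch of how a model absorbs bias that is already in the records.
# All numbers are invented; the "model" only learns historical prescribing rates.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Made-up records: identical reported pain, but historically unequal prescribing.
pain = rng.normal(7.0, 1.0, n)                   # same pain distribution for everyone
group = rng.integers(0, 2, n)                    # 0 and 1 are two patient groups
prescribed = np.where(group == 0,
                      rng.random(n) < 0.70,      # group 0 prescribed 70% of the time
                      rng.random(n) < 0.40)      # group 1 prescribed 40% of the time

# "Training" = estimating P(prescribe | group) straight from the records.
for g in (0, 1):
    print(f"group {g}: mean pain {pain[group == g].mean():.1f}, "
          f"learned prescribing rate {prescribed[group == g].mean():.2f}")
# The disparity in the records becomes the disparity in the model's predictions.
```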


  • 1/2: You still haven’t accounted for bias.

    First and foremost: if you think you’ve solved the bias problem, please demonstrate it. This is your golden opportunity to shine where multi-billion dollar tech companies have failed.

    And no, “don’t use Reddit” isn’t sufficient.

    3. You seem to be very selectively knowledgeable about AI, for example:

    If [doctors] were really worse, fuck them for relying on AI

    We know AI can trick people into thinking they’re more efficient when they’re actually less efficient, and that it erodes critical-thinking skills.

    And that’s without touching on AI psychosis.

    You can’t dismiss results just because you don’t like them.

    4. We both know the medical field is for-profit. It’s a wild leap to assume AI will magically not be, even if you grant every other assumption you’ve made up to this point and ignore every issue I’ve raised.