• TubularTittyFrog@lemmy.world · 16 hours ago

    having the ability to do so doesn’t necessarily mean they will do so.

    There are plenty of terrible therapists, priests, family members, and friends out there. Personally, I gave up on asking people for ‘advice’ 20 years ago.

    • Passerby6497@lemmy.world · 16 hours ago

      having the ability to do so doesn’t necessarily mean they will do so.

      No, but they have a profit motive to do so. And I’d rather assume the worst and be wrong than deal with another 23andMe situation in a decade. Because it will happen eventually. VC money isn’t endless, and they’re pissing away money like a pro athlete in a club.

      You can trust them if you want, but I’m not naive enough to do that myself.

      There are plenty of terrible therapists, priests, family, and friends out there.

      Preaching to the choir; I’ve dumped people from all the categories you listed for being shitty. I gave up on therapy about 15 years ago, but my partner convinced me to go back. I looked for someone who fit my specific needs, and found someone who is rebuilding my trust in therapists.

      I trust my therapist not to randomly decide to give out my info because their job relies on that. AI chat bots flat out tell you they will use what you give them for their ‘training’ purposes, which means they have access to it and can use it or sell it as they please.

      • NewDayRocks@lemmy.dbzer0.com · 10 hours ago

        For some people, paying with their data is a lot cheaper than paying for therapy or religion. I do not fault them for this, especially if they are getting similar results.

        • Passerby6497@lemmy.world · 10 hours ago

          if they are getting similar results.

          That ‘if’ is doing a Herculean amount of work, given the reports of ChatGPT psychosis, because, again, you’re dealing with a stochastic parrot, not a real person giving you actual advice.

          • NewDayRocks@lemmy.dbzer0.com · 7 hours ago

            Believe it or not, AI is producing fine results, which is why people use it.

            Yes, they will produce some funny/tragic results that are both memeable and newsworthy, but by and large they do what they are asked.

            If the results were poor, you wouldn’t have adoption, and your AI problem would be solved.

            We have had chat bots since the late 90s. No one used them for therapy.

            • Passerby6497@lemmy.world · 3 hours ago

              If the results were poor you wouldn’t have adoption

              But the argument is that people are using them because they can’t afford a real therapist, so conflating desperation with efficacy isn’t a good argument, given that it’s that or nothing.

              And we all know tons of people accept a turd product because they don’t think they have a better option.

              We have had chat bots since the late 90s. No one used them for therapy.

              But they are now, which is the problem.