Lol, saying you are “beginning a process designed to delete your data” is a very different thing to actually deleting your data.

  • djmikeale@feddit.dk · 2 days ago

    As someone working in a field close to data engineering, this sounds like they're actually being honest about the process.

    Tl;dr: it's not possible to "just delete" everything at once, even though we'd love to be able to.

    There are so many layers where information is stored, and such insane amounts of data in their data platform, that running a clean-up job to delete a single person's data across OLTP databases, data lakes, DWHs, backups, etc. would be both expensive and inefficient. Instead, they do it in stages: flip a flag somewhere (is_deleted = true), which removes it from view immediately, and then run periodic clean-up jobs that purge it for real.
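    A minimal sketch of that two-stage flow, using an in-memory SQLite table. All names here (`user_data`, `is_deleted`, `deleted_at`, the retention window) are illustrative assumptions, not any real company's schema:

    ```python
    # Sketch of soft-delete + periodic purge. Table/column names are hypothetical.
    import sqlite3
    from datetime import datetime, timedelta, timezone

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE user_data (
            user_id    INTEGER,
            payload    TEXT,
            is_deleted INTEGER DEFAULT 0,
            deleted_at TEXT
        )
    """)
    conn.execute("INSERT INTO user_data VALUES (1, 'photo.jpg', 0, NULL)")
    conn.execute("INSERT INTO user_data VALUES (2, 'doc.txt', 0, NULL)")

    def request_deletion(user_id: int) -> None:
        # Stage 1: flip the flag so the data disappears from view immediately.
        conn.execute(
            "UPDATE user_data SET is_deleted = 1, deleted_at = ? WHERE user_id = ?",
            (datetime.now(timezone.utc).isoformat(), user_id),
        )

    def visible_rows():
        # Application queries filter on the flag, so the data already looks gone.
        return conn.execute(
            "SELECT user_id, payload FROM user_data WHERE is_deleted = 0"
        ).fetchall()

    def purge_job(retention: timedelta) -> int:
        # Stage 2: a periodic job hard-deletes rows flagged longer than the
        # retention window; returns how many rows it removed.
        cutoff = (datetime.now(timezone.utc) - retention).isoformat()
        cur = conn.execute(
            "DELETE FROM user_data WHERE is_deleted = 1 AND deleted_at < ?",
            (cutoff,),
        )
        return cur.rowcount

    request_deletion(1)
    # User 1 is already invisible to queries, even though the row still exists.
    # A later run of purge_job() (retention of zero here, for the demo) removes it.
    purge_job(timedelta(0))
    ```

    In a real platform the same pattern repeats per storage layer (warehouse, lake, backups), each with its own purge schedule, which is why full deletion is a process rather than a single operation.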

    • kadu@lemmy.world · 1 day ago

      A photo I deleted 10 years ago resurfaced on my Google Drive account recently.

      I'm sure it was deleted, and it had never reappeared until now.

      But sure, they’re being honest!

      • djmikeale@feddit.dk · 2 days ago

        This is any company, government, or other organisation with 80+ employees. The two alternatives are

        1. Keep all data in Excel, with no data governance, robust procedures, or trust in data as the organisation grows
        2. Use only external tools (which in turn are owned by organisations that work like I described in my parent comment)

        I'd love to hear if there are other ways of doing this stuff that actually work, but so far I just haven't experienced it in my career.

        • dropped_packet@lemmy.zip · 2 days ago

          I’m not disputing the technical aspect. But due to these realities I prefer to drastically limit the services I interact with.

          • djmikeale@feddit.dk · 2 days ago

            Aha I misunderstood, thanks for clarifying.

            Actually, for this specific context there's an easy solution: for LLMs I reckon self-hosting would be the way to go, if your hardware supports it. I've heard a lot of the smaller models have become much more capable over the last year.

            • dropped_packet@lemmy.zip · 2 days ago

              Small, fine-tuned models seem to be where the market as a whole is headed. Even the big players like OpenAI/Google/Meta are doing this as a means to optimize infrastructure. The Qwen3 models have been really interesting to work with.