• 0 Posts
  • 8 Comments
Joined 2 months ago
Cake day: June 24th, 2025



  • djmikeale@feddit.dk to Privacy@lemmy.ml · Trust us bro
    +9 / −1 · 2 days ago

    Aha I misunderstood, thanks for clarifying.

    Actually, for this specific context there’s an easy solution: for LLMs, I reckon self-hosting would be the way to go, if your hardware supports it. I’ve heard a lot of the smaller models have become much more capable over the last year.


  • djmikeale@feddit.dk to Privacy@lemmy.ml · Trust us bro
    +34 / −1 · 2 days ago

    This is how any company, government, or other organisation with 80+ employees works. The two alternatives are

    1. Keep all data in Excel, with no data governance, robust procedures, or trust in the data as the organisation grows in size
    2. Use only external tools (which in turn are owned by organisations that work like I described in my parent comment)

    I’d love to hear if there are other ways of doing this stuff that actually work, but so far I just haven’t experienced it in my career.


  • djmikeale@feddit.dk to Privacy@lemmy.ml · Trust us bro
    +70 / −3 · 2 days ago

    As a person working in a field close to data engineering, this sounds like they’re actually being honest about the process.

    TL;DR: it’s not possible to “just delete” everything at once, even though we’d love to be able to.

    There are so many layers where information is stored, and such insane amounts of data in their data platform, that running a clean-up job to delete a single person’s data across OLTP databases, data lakes, DWHs, backups, etc. would be both expensive and inefficient. Instead, they do it in stages: flip a flag somewhere (is_deleted = true), which removes the data from view immediately, and then run periodic clean-up jobs.
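
    The flag-then-purge pattern above can be sketched in a few lines. This is a minimal illustration using an in-memory SQLite table; the table and column names (users, is_deleted) are my own, not from any specific platform.

    ```python
    import sqlite3

    # Toy table standing in for one of the many stores mentioned above.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, "
        "is_deleted INTEGER DEFAULT 0)"
    )
    conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

    # Stage 1: a deletion request just flips a flag. The row vanishes from
    # normal queries immediately, without touching every downstream system.
    conn.execute("UPDATE users SET is_deleted = 1 WHERE name = ?", ("alice",))

    # Application queries always filter on the flag.
    visible = conn.execute("SELECT name FROM users WHERE is_deleted = 0").fetchall()
    print(visible)  # [('bob',)]

    # Stage 2: a periodic clean-up job physically removes flagged rows in
    # bulk, which is far cheaper than per-request deletes across many stores.
    conn.execute("DELETE FROM users WHERE is_deleted = 1")
    remaining = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    print(remaining)  # 1
    ```

    In a real platform the same two stages repeat per store (OLTP, lake, warehouse), with the clean-up jobs scheduled rather than run inline.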