• 0 Posts
  • 23 Comments
Joined 2 years ago
Cake day: July 24th, 2023

  • So unreliable boilerplate generator, you need to debug?

    Right, I’ve seen that it’s somewhat nice for quickly generating bash scripts etc.

    It can certainly generate quick-and-dirty scripts as a starting point. But the code quality is often subpar (and often outright incorrect), which triggers my perfectionism to make it better, at which point I should’ve written it myself…

    But I agree that it can often serve well for exploration, and sometimes you learn new stuff (at least if you weren’t already an expert in the topic, and you should always validate whether it’s correct).

    But actual programming in e.g. Rust is a catastrophe with LLMs (more common languages like JS work better, though); see the sketch below for a typical failure mode.
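
    To make the Rust point concrete, here is a minimal sketch (a hypothetical example of mine, not actual model output) of the kind of borrow-checker pitfall that LLM-generated Rust regularly trips over, together with a version that compiles:

    ```rust
    // Hypothetical illustration: LLM-generated Rust frequently tries to mutate a
    // collection while iterating over it, which the borrow checker rejects:
    //
    //     for item in &items {
    //         if *item < 0 {
    //             items.push(-item); // E0502: cannot borrow `items` as mutable,
    //         }                      // it is already borrowed as immutable
    //     }
    //
    // A compiling, idiomatic version collects first, then extends:
    fn mirror_negatives(items: &mut Vec<i32>) {
        let mirrored: Vec<i32> = items.iter().filter(|&&x| x < 0).map(|&x| -x).collect();
        items.extend(mirrored);
    }

    fn main() {
        let mut items = vec![1, -2, 3, -4];
        mirror_negatives(&mut items);
        println!("{:?}", items); // [1, -2, 3, -4, 2, 4]
    }
    ```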


  • Have you actually read my text wall?

    Even o1 (which AFAIK is roughly on par with R1-671B) wasn’t really helpful for me. I often (actually, all the time) need correct answers to complex problems, and LLMs just aren’t capable of delivering that.

    I still need to try out whether it’s possible to train one on my/our codebase, such that it’s at least usable as something like GitHub Copilot (which I also don’t use, because it just isn’t reliable enough and too often generates bugs). Also, I’m a fast typist: by the time the answer is there and I’ve parsed/read/understood the code, I’d already have written a better version myself.




  • confidently so in the face of overwhelming evidence

    That I’d really like to see. And I mean more than the marketing bullshit that AI companies are putting out…

    For the record, I was one of the first to jump on the AI hype train (as a programmer and computer scientist with a machine-learning background), following the development of GPT-1 through GPT-4 and being excited about writing less boilerplate code, getting help with rough ideas, etc. GPT-4 came close to actually being helpful (similar with o1 etc. or Anthropic’s models).

    Still, I seldom use AI these days (and I’m observing the same with colleagues and other people I know of), because it actually slows me down or gives me wrong ideas, and I end up arguing with it just to watch it saturate yet again at a local minimum (i.e. it doesn’t get better, no matter what input I try). Just so that I have to do it myself… (which I should’ve done in the first place…).

    The same is true on the image-generation side (i.e. first with GANs, now with diffusion-based models).

    I can get into more detail about transformer/attention-based models and their current plateau phase (i.e. more hardware doesn’t actually make things significantly better; it gets exponentially more expensive to make things only slightly better; see the sketch below) if you really want…
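
    To put rough numbers behind the plateau claim: the published scaling-law fits (e.g. Kaplan et al. 2020) have language-model loss falling only as a small power of training compute, so a fixed improvement in loss costs a huge multiplicative factor in compute. A minimal sketch, with the exponent taken from those empirical fits:

    ```latex
    % Empirical power-law scaling of LM test loss L with training compute C
    % (Kaplan et al. 2020 fit the compute exponent at roughly alpha ~ 0.05):
    \[
      L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}
      \quad\Longrightarrow\quad
      \frac{C'}{C} = k^{1/\alpha} \;\text{to reduce the loss by a factor of } k
    \]
    % With alpha = 0.05, halving the loss (k = 2) costs 2^{1/0.05} = 2^{20},
    % i.e. about a million times the compute.
    ```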

    I hope we get a breakthrough of course, a model that actually learns to reason, but I fear that will take time, and it might even mean that we need a different type of hardware.






  • all our brains are pretty dumb and easy to fool.

    Absolutely, but I think that when we’re talking to actually smart people in person, we are at least subconsciously more likely to believe the person who actually has something to say (i.e. who really knows something we don’t). With social media, a lot of these communication factors are missing, so if the text merely sounds smart, we may believe it. Sure, you can fake and lie, etc., but I think (going back in time) we have a good instinct for people who may help us in some way, e.g. through knowledge of where to find food, secure shelter, etc., stuff that aids our survival. In the end, for humans that is basically good factual knowledge that helps the survival of the species as a whole.

    Today our attention spans are reduced to basically nothing, in large part because of social media promoting short, emotional messages (unfortunately mostly negative: anxiety, anger) and ads, of course, that reinforce whatever we already believe, which likely strengthens bad connections in the brain.

    Also, the sheer mass of information is very likely not good for us, i.e. mostly non-factual information, because, well, there are far more people who “have heard about something” than people who have actually researched it and dug down to the truth (or at least a good model of it).

    All of this mixed together doesn’t give me a positive outlook, unfortunately…


  • Yeah, which actually underlines my point even more. We weren’t “designed” to connect with everyone around the world. Evolutionarily, we lived in smaller groups that sometimes had contact with other groups.

    Today we can just connect with our bubbles (like here on Lemmy), get validated, and reinforce our beliefs regardless of whether they are (mostly factually) right or wrong. As we can see, this doesn’t seem to be healthy for most people. In smaller circles (like the scientific community) this helps, but in general… well, I don’t think I have to explain the current situation in the world (and especially in the USA)…





  • For whom?

    For their insatiable thirst for power?

    I mean, they only get power because people give it to them, but I think it’s still the fascists who spark this process… It’s a bit of a chicken-and-egg situation…

    Do you really think our wannabe-Nazi Elon is already on the same level as Hitler? (It could develop into that, but that’s a big if.)

    Then you understand neither.

    Oh, I do, as an anarchist… As I said, the world isn’t as black and white in reality. The goals of both are similar: freedom, albeit with different definitions of it and different ways to achieve it.

    But it’s definitely different from fascism, which is mostly centralized oppression… i.e. oppression of freedom.

    I’d say it’s basically a wild mix/spectrum of different ideologies in reality.

    I’d agree with (I think) at least one thing: the generally accepted form of liberalism (which is basically capitalism) definitely leans more towards fascism than, say, anarchism, but that probably also has to do with the fact that anarchism has not been an achievable model in reality, at least so far, similar to real communism.