I think the difference is that an LLM can read all the context of your project and figure out what will work. If you want to add a feature, it will do it in a way that won’t break other things, or it will offer you options if you can’t make that change without breaking something.
Also, LLMs are so much faster than humans that even when the output is slightly wrong, it can be fixed with another prompt. People act as if the LLM doing something wrong makes using LLMs pointless, but they’re ignoring that the LLM can always take another prompt and keep working until it gets it right, which usually happens immediately once the issue is pointed out.
You can even automate the feedback loop by describing the test scenarios and then having it run those tests, see the failures, and fix the code all by itself.
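To make that concrete, here’s a rough sketch of the kind of loop I mean, in Python. The `ask_llm_for_patch` helper is hypothetical; it stands in for whichever model or coding agent you actually use to propose and apply a fix:

```python
import subprocess

MAX_ATTEMPTS = 5

def run_tests() -> subprocess.CompletedProcess:
    # Run the project's test suite and capture the output.
    return subprocess.run(
        ["pytest", "-x", "--tb=short"],
        capture_output=True,
        text=True,
    )

def ask_llm_for_patch(failure_output: str) -> None:
    # Hypothetical placeholder: call whatever LLM client or coding agent
    # you use, passing the failing test output as context, and apply the
    # suggested edits to the working tree.
    raise NotImplementedError

for attempt in range(MAX_ATTEMPTS):
    result = run_tests()
    if result.returncode == 0:
        print("All tests pass.")
        break
    # Feed the failures back to the model and let it try another fix.
    ask_llm_for_patch(result.stdout + result.stderr)
else:
    print("Gave up after", MAX_ATTEMPTS, "attempts.")
```

The point isn’t this exact script; it’s that the “run tests, read failures, try again” cycle is mechanical enough to hand over to the model.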
I get LLMs might not work as well for law at this point, but they do work for coding.
I’ll have to take your word for it! “Figuring out” sounds like a higher-order process than a large language model is capable of, at least to me, but if what they actually do is just as good, then great.
I think I’m just skeptical because of how horrendously bad LLM output is in my field of expertise (despite looking fine to a lay person), so I immediately analogize that to other areas. The output of law and of coding is really about language, and the process by which a lawyer or a coder creates that output is really about language, so I can see how one might think LLMs would be able to recreate what lawyers and coders do. But boy, it doesn’t strike me as remotely plausible that LLMs will ever get there, at least for law. I have no doubt some yet-unimagined technology could get us there, but “next word prediction” just isn’t gonna be it.