Without a large training corpus, agents and autocomplete assistants will struggle to know what to do (or will just confidently do the wrong thing), so people will hesitate to adopt a new language even if it has great features.
Or maybe we're going to see an explosion of new languages, one per project, because it's one way to stay ahead of the machines and make sure that humans are writing your code?
Language dev requires precise semantics. If a user decides that types and other academic nonsense are overrated, they can get away with it. You can be cavalier and just let your application crash, as long as there's a precise formulation underneath of what 'crashing' means: executing `finally` blocks, unwinding the stack and capturing a stack trace, following an exceptional continuation, etc.
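To make that concrete, here's a minimal Python sketch (the `risky` function is a made-up example, not anyone's API). Even when you "just let it crash", the language spec pins down exactly what happens on the way down:

```python
import traceback

def risky():
    try:
        print("doing work")
        raise RuntimeError("boom")  # the "crash"
    finally:
        # Even in a "just let it crash" style, the language guarantees
        # this block runs while the stack unwinds.
        print("cleanup ran during unwinding")

try:
    risky()
except RuntimeError:
    # ...and the runtime captured a precise stack trace along the way.
    traceback.print_exc()
```

Each of those guarantees, the `finally` block firing mid-unwind and the traceback being captured, is something a language designer had to specify exactly.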
You can't YOLO Algorithm W, an ANF transformation, a memory model, or register allocation. Language design has to be just right. I don't think language design selects for people who look at a "95% correct" LLM benchmark and think "wow!". A program generated by a 95%-correct compiler will probably run catastrophically wrong in about 100% of executions.
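As an illustration of how little slack there is, here's a toy A-normal form transformation sketched in Python. The tuple AST and the `to_anf` helper are invented for this example, not taken from any real compiler. Get the binding order or the atomicity invariant slightly wrong and every program that hits that path miscompiles:

```python
import itertools

fresh = (f"t{i}" for i in itertools.count())  # supply of temporary names

# Toy AST (invented for illustration):
#   ("var", name) | ("const", n) | ("add", e1, e2)
def to_anf(expr, bindings):
    """Return an atom for expr, appending let-bindings to `bindings`."""
    tag = expr[0]
    if tag in ("var", "const"):
        return expr  # already atomic
    if tag == "add":
        left = to_anf(expr[1], bindings)   # atomize subterms first...
        right = to_anf(expr[2], bindings)
        name = next(fresh)
        bindings.append((name, ("add", left, right)))  # ...then name this node
        return ("var", name)
    raise ValueError(f"unknown node: {tag}")

binds = []
result = to_anf(("add", ("add", ("const", 1), ("const", 2)), ("const", 3)), binds)
print(binds)   # [('t0', ('add', ('const', 1), ('const', 2))),
               #  ('t1', ('add', ('var', 't0'), ('const', 3)))]
print(result)  # ('var', 't1')
```

The whole point of ANF is the invariant that operators only ever see atomic arguments; a transformation that preserves it 95% of the time isn't 95% of a compiler, it's a broken one.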
I don't see any reason to think that reality has changed, or that LLMs will change it.
But in the case of a young language, I worry that LLMs will induce a sort of convergent evolution, either in syntax or in idiomatic best practices.