

LLMs are - by the nature of how they work - only able to achieve 90-95% accuracy. That's the theoretical best they can do, according to the people behind OpenAI. And worse, that output is presented as if it were 100% accurate, with the model even going so far as to make up sources out of whole cloth.
That’s an insane and completely unacceptable error rate for any system even pretending to be mission critical.
Can you imagine sending people to space with a system that has a 1 in 20 chance of just being completely unfit for service?

But when your boss tells you that you have to keep doing it this way, you don't have much choice in the matter. You either keep asking the AI for new code and hope it gets it right, or you actually delve into the code yourself and spend your time correcting it.
Maintaining 1 million lines of that code is just untenable, assuming they want code that actually works.
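To put rough numbers on that, here's a back-of-envelope sketch, not a claim about any specific model: if you assume each generated chunk of code is independently correct with the 90-95% probability cited above, the odds of a fully correct large codebase collapse fast. The chunk size and the independence assumption are purely illustrative.

```python
# Back-of-envelope: probability an entire codebase is correct when each
# generated unit is independently right with probability p (illustrative).
def p_all_correct(p_per_unit: float, n_units: int) -> float:
    return p_per_unit ** n_units

# Treat every 100-line chunk as one generated unit of a 1,000,000-line codebase
# (an arbitrary assumption, just to make the arithmetic concrete).
n_chunks = 1_000_000 // 100  # 10,000 chunks

for p in (0.90, 0.95, 0.99):
    print(f"p={p:.2f}: chance all {n_chunks:,} chunks are correct = "
          f"{p_all_correct(p, n_chunks):.3e}")
# 0.90 and 0.95 underflow to exactly 0.0 in double precision;
# even 0.99 per chunk comes out around 2e-44.
```

Even granting a wildly optimistic 99% per chunk, the chance of a fully working million-line codebase is effectively zero. That's the sense in which the error rate compounds.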