Replace? Hopefully not until I've retired! Given the current crop of AI tools, though, I'm not worried. At best they're a time-saving convenience for boilerplate, or a supercharged knowledge base, and at worst they're sloppy, uninformed, and a massive drag on the quality of a codebase.
I think where they shine is as a knowledge base. A couple of examples: one of my devs asked about the difference between interfaces and structs as type definitions in Go, and ChatGPT had an immediate answer that satisfied all of us; a PM looked at some code, asked ChatGPT what the params for a function were, and found a bug that had been missed. It can get you to your answer so much faster than trawling Google or Stack Overflow, or going back through the docs... if you ask the right question. (For the curious, a minimal sketch of that interface/struct distinction follows below.)
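As a rough sketch of what that Go answer boils down to (my own minimal example, not the actual conversation): a struct is a concrete type that defines data, while an interface is an abstract type that defines behaviour, satisfied implicitly by any type with the right methods.

```go
package main

import (
	"fmt"
	"math"
)

// Circle is a struct: a concrete type that defines data layout.
type Circle struct {
	Radius float64
}

// Area gives Circle a behaviour.
func (c Circle) Area() float64 {
	return math.Pi * c.Radius * c.Radius
}

// Shape is an interface: an abstract type that defines behaviour only.
// Any type with an Area() float64 method satisfies it implicitly; there
// is no "implements" keyword in Go.
type Shape interface {
	Area() float64
}

// describe accepts anything that satisfies Shape.
func describe(s Shape) {
	fmt.Printf("area: %.2f\n", s.Area())
}

func main() {
	describe(Circle{Radius: 2}) // Circle is usable wherever a Shape is expected.
}
```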
As a time-saving convenience, if you can train the model to look for common patterns in your codebase, then it's useful for replacing common boilerplate - "AI, generate me the files for creating a new endpoint on my API, and the infra definitions for deploying it". However, I've yet to be convinced that this is faster than any of the templating tools that have been around for donkeys' years (a minimal sketch of the templating approach follows below). You could argue that with a templating tool you have to define the templates, which can take some time depending on the complexity - to handle "generate x with a, b, c features, called m", you need a lot of logic in your templates. On the flip side, how long is it going to take to train the model and refine it until it's as predictable as the template? A nice medium might be defining the template, feeding it to the model, and using some inline AI tool that previews the output as you give it instructions.
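To make the templating comparison concrete, here's a minimal sketch using Go's standard text/template package; the endpoint name, fields, and template body are hypothetical, purely to show how little machinery basic boilerplate generation needs.

```go
package main

import (
	"os"
	"text/template"
)

// handlerTmpl is a trivial boilerplate template for a new HTTP endpoint.
// Real-world templates (plus the logic for optional features) grow far
// more complex, which is the trade-off discussed above.
const handlerTmpl = `package api

import "net/http"

// {{.Name}}Handler handles {{.Method}} {{.Path}}.
func {{.Name}}Handler(w http.ResponseWriter, r *http.Request) {
	// TODO: implement
	w.WriteHeader(http.StatusNotImplemented)
}
`

type endpoint struct {
	Name   string
	Method string
	Path   string
}

func main() {
	t := template.Must(template.New("handler").Parse(handlerTmpl))
	// Hypothetical endpoint definition; in practice this would come
	// from a CLI flag or a config file.
	if err := t.Execute(os.Stdout, endpoint{Name: "ListUsers", Method: "GET", Path: "/users"}); err != nil {
		panic(err)
	}
}
```

Writing the output to a file per endpoint is only a few more lines; the real cost, as noted above, is that every optional feature means more logic in the template.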
Both uses above are specialised AI tools. What we currently have with ChatGPT and Bard are general tools that view everything as a nail and apply the same hammer. So what about GitHub Copilot, or tools like Tabnine, that are billed as predictive autocomplete for programming?
The training data is public repositories, which, let's face it, are not the highest quality. We've tried these tools, we've seen the output, and we've yet to see anything of more than basic complexity that's fit for purpose; even single-line output can need additional work, e.g. expecting variables or functions that don't exist because the code it was trained on happened to use them (a hedged illustration follows below). You've also got to look beyond the suggested code itself: the time it takes the dev to verify it and refactor it to fit, or the loss of confidence when it comes to review. If a senior dev writes the code, you have a good level of confidence it works; with a junior's code, you check everything. AI-generated code removes that signal entirely, and every PR has to be treated as if a junior wrote it. There's also the danger that people accept the output simply because the model suggested it: "oh, it's been trained on millions of lines of code, it must know better than me".
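To illustrate that phantom-identifier failure mode, here's a hypothetical example (the helper names are invented, not taken from a real Copilot session); the plausible-looking completion is kept in comments so the snippet still compiles.

```go
package main

import "fmt"

// Suppose the project only defines this helper:
func normalizeEmail(s string) string {
	// ...lowercasing, trimming, etc.
	return s
}

func main() {
	email := "User@Example.com"

	// A typical AI completion might suggest:
	//
	//     if !validateEmail(email) {  // compile error: validateEmail is undefined
	//         return
	//     }
	//
	// It reads plausibly because similar helpers exist in the training
	// data, but this codebase has no validateEmail, so it won't build.
	fmt.Println(normalizeEmail(email))
}
```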
I hope I'm just being a snob, and that I'll be proven wrong about lowering the bar even further degrading the overall quality of developers, but I can't discount the possibility. Some sectors of software development are already very easy to get into - anything web frontend is ridiculously easy to pick up, for example. I love how permissive JavaScript is as a language, but it's also dangerous. The number of devs I've seen over the years who don't understand the why behind what they're doing, or how to apply a concept in a different way, let alone apply their knowledge to a new language, is staggering. Sure, fair play to the industry for being open to anyone, and for having the communities and resources to take someone from zero to 'developer' in a short time, but it's no substitute for being taught how to think properly. AI tools, I feel, will compound that: a generation of developers who ask the tool to build the thing, or to fix the error, without ever learning the why. It's not all doom and gloom, mind. Where the tools will absolutely shine is in the hands of knowledgeable devs with the experience to ask the right questions and verify the output. The glimmer of hope is that those devs teach everyone else.
Finally, behind it all lie the murky waters of licensing, fair use, and data anonymisation. If you're doing anything commercial, you absolutely cannot blindly include code without knowing its origin, or use a tool that reports back to a parent company (run it locally with no outside contact, or not at all, please).
I don't want to discount AI tools for use in specific applications (data manipulation, analytics, automation, ...). I think they can absolutely bring huge benefits to the industry, if used correctly, but the current crop are poor, and it's absolute nonsense that they're being hyped as some magic bullet that will revolutionise software development today. We're at day zero, if that. There's a long way to go.