How long before AI replaces programmers?

First, look at the video.


Now some caveats:
- There are tons of examples of social media "apps" on GitHub. It is code that GPT-4 has already memorized and can easily generate.
- It looks unedited, but I am suspicious because the whole app was generated in one go, without any need to move, copy, or paste files... so the demo could be staged.
- I haven't tried GPT-4, so I don't know whether its generation capabilities are as good as portrayed in the video (it could be completely fake).

That said, it does seem like the general direction AI could take in the near future.
 
The challenge in creating software is not in translating a well-articulated idea into a computer language; it is in determining the appropriate description of the problem to be addressed. Also, machine learning, the current approach to most AI, freezes approaches at the time of training. Machine learning does not have the ability to identify a new approach to a common problem; it can only replicate older solutions. Use of machine learning at times in the past would have locked technology into place as it was known and understood at the time.
 
True, models are frozen: most of the time they are trained in batches and frozen for stability reasons.
This freezing is inconsequential if the model is updated every three months; it will catch up faster than 99% of programmers.
The input is still a problem. Currently the context window is limited to 32,000 tokens, roughly 6,400 lines of code at an assumed average of five tokens per line: the model can't read or correct any block of code longer than that.
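For a feel for that arithmetic, here's a minimal Go sketch of the back-of-the-envelope check (the five-tokens-per-line figure is an assumption, real tokenizers vary with the code, and the file name is just an example):

```go
// Rough check of whether a source file fits a model's context window.
package main

import (
	"bufio"
	"fmt"
	"os"
)

const (
	contextWindow = 32000 // tokens, per the GPT-4 32k variant
	tokensPerLine = 5     // assumed average for source code, not a measured constant
)

func main() {
	f, err := os.Open("main.go") // illustrative file name
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Count lines and estimate tokens from the assumed average.
	lines := 0
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		lines++
	}

	estTokens := lines * tokensPerLine
	fmt.Printf("%d lines ~ %d tokens (window: %d)\n", lines, estTokens, contextWindow)
	if estTokens > contextWindow {
		fmt.Println("won't fit in a single prompt; the code would need chunking")
	}
}
```

Anything over the limit has to be split into chunks, which is where cross-file context starts to get lost.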
 
But where would those new things to learn come from? That's the problem with a frozen model: there is no advancement.
 
From other AIs. GPT is only ONE of many ML models; there are many different kinds (adversarial models, reinforcement learning), and not all of them use the same technique as GPT (transformers) to learn and produce output.


 
We may just have to agree to disagree. For example, if we applied the current suite of AI tools to studying command-line, ASCII-text-based programs, I doubt they would have come up with mouse-driven graphical interfaces. Even with significant increases in size and speed, I do not see these tools making the leap from efficiently using known techniques to coming up with new and novel ones.
 
Replace? Hopefully not until I've retired! Given the current crop of AI tools, though, I'm not worried. At best they're a time-saving convenience for boilerplate, or a supercharged knowledge base; at worst they're sloppy and uninformed, and massively reduce the quality of a codebase.

I think where they shine is the knowledge base. A couple of examples: one of my devs asked about the difference between interfaces and structs as type definitions in Go, and ChatGPT had an immediate answer that satisfied all of us; a PM looking at some code asked ChatGPT what the params for a function were, and found a bug that had been missed. It can get you to your answer so much faster than trawling Google or Stack Overflow, or going back through docs... if you ask the right question.
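For anyone wondering what that first question is actually about, a minimal Go sketch (the File/Describer names are invented for illustration): a struct defines data, an interface defines behaviour, and a type satisfies an interface implicitly.

```go
package main

import "fmt"

// Struct: a concrete data layout; says what a value *is*.
type File struct {
	Name string
}

// Interface: a behavioural contract; says what a value can *do*.
type Describer interface {
	Describe() string
}

// File satisfies Describer implicitly; there is no "implements" keyword.
func (f File) Describe() string {
	return "file: " + f.Name
}

func main() {
	var d Describer = File{Name: "report.txt"}
	fmt.Println(d.Describe()) // file: report.txt
}
```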

As a time-saving convenience, if you can train the model to look for common patterns in your codebase, then it is useful for replacing common boilerplate: "AI, generate me the files for creating a new endpoint on my API, and the infra definitions for deploying it". However, I'm yet to be convinced that this is faster than any of the templating tools that have been around for donkeys' years. You could argue that with a templating tool you have to define the templates, which can take some time depending on the complexity; to handle "generate x with a, b, c features, called m", you've got a lot of logic in your templates to manage. On the flip side, how long is it going to take to train the model and refine it until it is as predictable as the template? A nice medium might be defining the template, feeding it to the model, and using some inline AI tool that previews as you give it the instructions.
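To make the comparison concrete, here's a minimal sketch of that kind of templating tool using Go's text/template (the Endpoint fields and the generated handler stub are invented for illustration):

```go
package main

import (
	"os"
	"text/template"
)

// The template a developer would define once, up front.
const endpointTmpl = `func handle{{.Name}}(w http.ResponseWriter, r *http.Request) {
	// TODO: implement {{.Method}} {{.Path}}
}
`

type Endpoint struct {
	Name, Method, Path string
}

func main() {
	t := template.Must(template.New("endpoint").Parse(endpointTmpl))
	// "Generate an endpoint called ListUsers" becomes explicit data:
	if err := t.Execute(os.Stdout, Endpoint{Name: "ListUsers", Method: "GET", Path: "/api/users"}); err != nil {
		panic(err)
	}
}
```

The output is deterministic every time, which is exactly the predictability the model would have to be trained and refined to match.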

Both uses above are specialised AI tools. What we currently have with ChatGPT and Bard are general tools that view everything as a nail and apply the same hammer. So what about GitHub Copilot, or tools like Tabnine, that are billed as predictive autocomplete tools for programming?

The training data is public repositories, which, let's face it, are not the highest quality. We've tried it, we've seen the output, and we've yet to see anything of more than basic complexity that is fit for purpose; even single-line output can need additional work (e.g. expecting variables or functions that don't exist, because the code it was trained on happens to use them). You've got to look beyond the suggested code at things like the time it takes the dev to verify and refactor it to fit, or the loss of confidence when it comes to reviewing the code (if a senior dev writes the code you have a good level of confidence it works, compared to a junior's code where you will check everything; AI code removes that confidence entirely, and every PR has to be treated as if it were a junior's). There's also a danger that people accept the output simply because the model suggests it: "oh, it's been trained on millions of lines of code, it must know better than me".
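As a contrived illustration of that "expects things that don't exist" failure mode (the User type and the suggested FullName helper are both invented):

```go
package main

import "fmt"

type User struct {
	First, Last string
}

func main() {
	u := User{First: "Ada", Last: "Lovelace"}

	// A plausible autocomplete suggestion, learned from codebases where
	// such a helper happens to exist:
	//
	//     fmt.Println(u.FullName()) // compile error: u.FullName undefined
	//
	// The working version still has to be written (and reviewed) by hand:
	fmt.Println(u.First + " " + u.Last)
}
```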

I hope I'm just being a snob, and that I'll be proven wrong about lowering the bar even further reducing the quality of the developer market, but I can't discount the possibility. Some sectors of software development are already very easy to get into; anything web frontend is ridiculously easy to pick up, for example. I love how permissive JavaScript is as a language, but it's also dangerous. The number of devs I have seen over the years who don't understand the why behind what they're doing, or how to apply some concept in a different way, let alone apply their knowledge to a new language, is staggering. Sure, fair play to the industry for being open to anyone, and for having the communities and resources to take someone from zero to 'developer' within a short time, but it's no substitute for being taught how to think properly. AI tools, I feel, will compound that: a generation of developers who ask the tool to build the thing, or to solve the error, without being taught the why.

It's not all doom and gloom, mind. Where the tools will absolutely shine is in the hands of knowledgeable devs with the experience to ask the right questions and verify any output. The glimmer of hope is that these devs teach everyone else.

Finally, behind it all lie the current murky waters of licensing, fair use, and anonymising data. If you're doing anything commercial, you absolutely cannot blindly include code without knowing its origin, or use some tool that reports back to a parent (run it locally with no outside contact, or not at all, please).

I don't want to discount AI tools for use in specific applications (data manipulation, analytics, automation, ...). I think they can absolutely bring huge benefits to the industry, if used correctly, but the current crop are poor, and it's absolute nonsense that they're being hyped as some magic bullet that will revolutionise software development today. We're at day zero, if that. There's a long way to go.
 

How long before AI replaces programmers?


Insert joke about how AI is ideally suited to not having a girlfriend and living in its parents' basement.

[Sorry, I hate stereotypes but couldn't resist]
 
If I were to project a best-case model for AI support of computer programming, it would have the AI responsible for generating source code in available programming languages. Software developers would develop a set of declarative statements (apologies for the geek speak), similar to current TDD (Test-Driven Development) practices, and those statements would be what is kept under version control. Just as software developers currently recompile source code and are unconcerned about the machine code that is produced, in this approach the AI would recreate the entire source code set on each pass. Developers would not look at the source code (its readability is no longer a concern), but would incrementally add to and improve the declarative statements.
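As a sketch of what that might look like, assuming the declarative statements take the shape of Go tests: TestSlugify is the human-maintained artifact kept under version control, and slugify (a hypothetical function) stands in for the code the AI would regenerate in full on each pass, which nobody would read.

```go
package spec

import (
	"strings"
	"testing"
)

// The human-maintained artifact: behaviour, not implementation.
func TestSlugify(t *testing.T) {
	cases := map[string]string{
		"Hello World": "hello-world",
		"  spaced  ": "spaced",
	}
	for in, want := range cases {
		if got := slugify(in); got != want {
			t.Errorf("slugify(%q) = %q, want %q", in, got, want)
		}
	}
}

// The machine-maintained artifact, regenerated wholesale on every pass.
func slugify(s string) string {
	s = strings.TrimSpace(strings.ToLower(s))
	return strings.Join(strings.Fields(s), "-")
}
```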

One of my concerns is whether creating this set of declarative statements is actually any simpler than directly writing the code. I am also concerned that software systems would be locked in time and show no advancement in either usage or baseline characteristics: AI systems would merely replicate the current state of the art and rely on the current generation of components. The latter locks in the level of detail that must be included in the declarative statements, limiting advancement to ever more complex feature sets. I also wonder about the availability of AI engines as compared to human programmers.
 
That could be interesting, a new generation of even higher level languages. Maybe also the rise of a much cheaper workforce, and how about a resurgence of the so-far disappointing no-code movement?

We might also see huge strides in formal language models, maybe also semantic recognition.

How about, also, an angry movement in direct response to the AI tools - agencies advertising their 100% organic development capabilities, with a team of free-range artisanal developers!
 
Many projects are already built from a specification document containing screen designs and business rules.
What I think will happen is that people will write the design of the program, and the AI will be in charge of writing the code that fits the design document, writing the unit tests, and running them.
Not now, but maybe in five or six years.
 
