ChatGPT: should we fear it?

Hi guys,

I haven't seen a similar question asked here, so I want to hear everyone's thoughts. It may be slightly naive and rather premature to say, but I believe ChatGPT could have a knock-on effect on developers.

Already ChatGPT can write better code than many currently employed programmers. There'll always be a need for programmers to formulate questions and to parse and deploy said code, i.e. putting it together, compiling, etc. But considering that ChatGPT and future similar AI projects may (and already can) write programs in a few seconds as opposed to hours or even days, I feel that professional jobs will dwindle, as there will not be a need for as many programmers working on projects.

Do you think this is a fair or realistic assessment?
Lucky I'm of an age when I could retire!

I also remember 'The Last One', which was, at the time, advertised as the last program that would ever need to be written - and hence programmers would no longer be needed. That was back in 1981...
https://en.wikipedia.org/wiki/The_Last_One_(software)
Think of it in terms of construction. If you had a brick-laying machine, would that mean that its owner could build a building without involving any other person at all? No. Someone would still need to instruct the machine where to put walls according to a design someone came up with. People would still need to survey the terrain, inspect the soil, etc. And ultimately someone has to take the instruction "build an apartment building here" and understand not to build a slaughterhouse.

I think most developers lay bricks from time to time, so a brick-laying program would be useful, in the same way that a compiler is useful. If all you can do is lay bricks, perhaps you should be worried. I don't see too many people who can only write in Assembly and don't know how to do anything else.

Perhaps, given Jevons paradox, what will happen is not that we will continue to develop the same amount of software that we do now, but instead that we will harness this technology to write software faster than ever before, using about the same amount of manpower.
https://en.wikipedia.org/wiki/Jevons_paradox
Now whether that software will be of higher, lower, or comparable quality remains to be seen. The latest models have a knack for occasionally producing output that is false yet coherent and believable. I think it would be pretty funny if all this effort just lets us produce untrustworthy code that we then need to review, negating the productivity savings.
Adapt or die! It is too useful to ignore, so programmers should incorporate it into their workflow. It's especially helpful when doing one-off tasks using tools you aren't familiar with.

Still, ChatGPT is no better than the average Stack Overflow answer in the context of your current project. That's not great, and the results almost always need adjustment to account for the particulars of your problem.

Do you think this is a fair or realistic assessment?

No current model can understand the local context needed to guide development. These are things like existing design constraints, the skills of involved humans, the degree of correctness & performance required, the data model that is most suitable for future plans, the ability to understand and address the customer's requirements with a software design, the social aspects of development in general, technical specifics of the current design, and the intuition needed to guide development forward.

It seems to me that no model can feasibly incorporate the holistic knowledge needed to make a quality product. And even if someone found a capable model, you'd have to constantly re-train it to reflect its changing environment. Software development is more than typing in code. Programmers won't be replaced any time soon.
For all its power and usefulness, it still does not have any 'understanding' and still has no idea what is going on around it. These things are just the newest and shiniest GIGO machines.

AI can beat just about everyone, with only a few exceptions, at something like chess. Why? Because chess can be turned into computations: if I move here, and they move there, the 'score' of the board position evaluates higher for me... but if I go to this other place, and they do whatever, it's lower for me... that is how the chess engines work, at a crude high level. And that is effective.
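
To make that concrete, here is a minimal C++ sketch of the minimax idea being described. Node, evaluate() and legal_moves() are placeholder stubs I made up, not any real engine's code; an actual chess engine would fill them in with a board representation, material counting and move generation, and would use far cleverer search (alpha-beta pruning and so on).

#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

// Placeholder position; a real engine stores pieces, side to move, etc.
struct Node {};

// Hypothetical static evaluation: positive favors us, negative favors them.
// Real engines score material, king safety, mobility, and so on.
int evaluate(const Node&) { return 0; }

// Hypothetical move generator: every position reachable in one legal move.
std::vector<Node> legal_moves(const Node&) { return {}; }

// "If I move here and they move there..." - recursively score each line of
// play, assuming each side picks the move that is best for itself.
int minimax(const Node& pos, int depth, bool our_turn)
{
    std::vector<Node> moves = legal_moves(pos);
    if (depth == 0 || moves.empty())
        return evaluate(pos);              // leaf: just score the board

    int best = our_turn ? std::numeric_limits<int>::min()
                        : std::numeric_limits<int>::max();
    for (const Node& next : moves)
    {
        int score = minimax(next, depth - 1, !our_turn);
        best = our_turn ? std::max(best, score)  // we take the highest score
                        : std::min(best, score); // they take the lowest
    }
    return best;
}

int main()
{
    Node start;
    std::cout << "score at depth 4: " << minimax(start, 4, true) << '\n';
}

With the stubs it just prints 0, of course; the point is only the shape of the search.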

And then... take ANY computer strategy game, where you have a randomly generated map, a variety of ways to win, a number of approaches, your 'faction' has bonuses unique to it that you need to incorporate, you may have close hostile neighbors or none... and there is NO AI out there for these games that can beat a competent 10 year old without cheating (these games notoriously give the computer tons of handicaps to make up for its inability).

For every useful and amazing feat like chess, there are hundreds of epic failures (we have seen many of them this year with the chatbots).

I get calls from chatbots. It is immediately obvious; a single response reveals that they are not human. They generally respond like a person if you say expected things like hello or yes/no, but if you tell them to go lick your toilet bowl clean they say stuff like "I don't understand, can you repeat that" or "Yes, well, the reason I am calling is to sell you stuff..."
There are too many possibilities, and AI, even this new stuff, can't handle nonsense responses, unexpected responses, or much of anything beyond very simple inputs. They can't help you if they are trying to run tech support; they are just running a decision tree, and when it comes up short they have no fallback. There is no knowing there, nothing to fear, and there won't be for a long time.
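
As a caricature of that decision tree, picture something like the following, which is entirely made up and not any real bot's code: a fixed script keyed on the handful of responses the bot expects, plus one canned line for everything else.

#include <iostream>
#include <map>
#include <string>

int main()
{
    // Each expected utterance maps to a scripted reply; nothing else does.
    const std::map<std::string, std::string> script = {
        { "hello", "Hi! I'm calling about an exciting offer." },
        { "yes",   "Great, let me transfer you to an agent." },
        { "no",    "Are you sure? This offer expires today." },
    };

    std::string said;
    while (std::getline(std::cin, said))
    {
        auto it = script.find(said);
        if (it != script.end())
            std::cout << it->second << '\n';
        else
            // Off-script input: no understanding, just the fallback line.
            std::cout << "I don't understand, can you repeat that?\n";
    }
}

Tell it to go lick the toilet bowl and you land in the else branch every single time.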
Jonnin, you're right that it's a GIGO engine, a text predictor, but you should still try it out. The tech knocked my socks off, which doesn't happen often; it's beyond every other system I've used. Realistically it's useful enough to warrant attention, despite its limitations.

AI can beat just about everyone, with only a few exceptions, at something like chess. Why? Because chess can be turned into computations: if I move here, and they move there, the 'score' of the board position evaluates higher for me... but if I go to this other place, and they do whatever, it's lower for me... that is how the chess engines work, at a crude high level. And that is effective.
Chess engines started beating the best human players in 1997 (Garry Kasparov v. Deep Blue). Today, no human has a chance.

About strategy games: back in 2019 there was a machine-learning-based approach for StarCraft II that apparently beat 99.8% of players. See video:
https://www.youtube.com/watch?v=cUTMhmVh1qs
I didn't watch more than a few minutes of it, but at 24:54 they talk about the computer's reaction time and how it perceives the game. It is less "unfair" than you might expect. It has a ~350 ms reaction time and can't see through the fog of war (i.e., it can only see within the line of sight of its own units). I didn't hear them confirm this, but I suspect it doesn't get extra bonuses either.

It's certainly not worthless. I didn't mean to imply that, just that I don't think it's anything to fear either. I am following it off and on, but chatbots specifically bore me. I get the practical/commercial desire for them, but they are not ready; they have been in play for years and are still not ready, and the more of them that call me, the less inclined I am to panic that the Terminator is coming or that anyone with an IQ over about 40 is in any danger of unemployment.

I think the key with StarCraft is that it is a real-time game. I was thinking turn-based (which usually has more convoluted rules, like Humankind) when I said it. Real-time, yeah, humans are likely to struggle there. As for chess, I know, but I leave it open that some superhuman player may still eke out either draws or a few wins even today. I don't think Magnus has it in him... but I want to leave that door cracked for now.
I don't see too many people who can only write in Assembly and don't know how to do anything else.


IMO one of the hallmarks of a 'good IT' person is their ability to adapt, overcome and improvise when faced with a challenge. Also their ability to self-learn, improve and advance.

The 'challenge' presented by ChatGPT can be viewed as an opportunity to adapt, learn and grow. How many students are now taught COBOL (a major language back when I did my degree)? Universities now have courses on AI, Machine Learning and Natural Language Processing as part of their degree syllabus. Programming languages as a topic seem to be diminishing in favour of these other topics.