ChatGPT is too smart

At this point, the AI training I'm doing is, "Ha, ChatGPT couldn't even produce PhD-level research in one shot!"

Not even exaggerating. Work that would take a true expert hours of math/research/etc. is the baseline of what we're expecting ChatGPT to produce in a single output when training the AI right now.

Just creating the prompts takes hours.



Now, now. I know what some of you are thinking - "PhD? I can't get this thing to solve some programming problems I have."

The catch here is that, when training, the prompts must include everything the AI needs to solve the problem - or, missing that, enough that the AI is able to research/web-search and find the missing information it requires. Outside of code requests, it is highly unlikely that such a prompt will overwhelm the AI in terms of sheer tokens.

For coding in particular, there are usually AI versions trained for coding-CLI kinds of interfaces with larger context windows, so they don't get lost in the vastness of a codebase.

Using a general AI like ChatGPT for specialized programming usually isn't great, because it's not optimized for it.


And having trained some of those coding-specific models as well - they've been mostly terrific.

Even though I still end up just code-dumping onto regular ChatGPT with thinking on, rather than going through a CLI AI model. It's still pretty good, but it gets lost in the tokens after 2-4 prompts and I have to start a new chat.



But damn, the level they're at now from just a year ago is astounding. Training them a year ago looked nothing like it does today in terms of the expectations we have for what they can accomplish.

I don't know if I'll have a job doing this in another year. What could I possibly teach this thing in a year's time?



Also - I'm talking about the best of the best AI here, so none of these lighter weight models.
Have you seen Dave Plummer's new video re AI writing C code? He uses AI to produce a Notepad application.
https://www.youtube.com/watch?v=bmBd39OwvWg
That's nothing. There are many people out there who have made it a goal to see how much they can get from a single prompt ("one shot" as they call it).

A single prompt has produced entire games, full sites with 3D animations, etc.


Example: https://www.instantdb.com/essays/gpt_5_vs_opus_4

If you give a deck of cards a good shuffle right now, no cheating or bad techniques, it is all but guaranteed to be the first time that combination of cards has ever existed on Earth.
Even if a computer could create a new shuffle a billion times per second, it still would not be able to log every permutation before the heat death of the universe.
Just because a computer can shuffle a deck of cards faster than you, does that mean that your shuffle is less meaningful?
If you use a machine to shuffle your deck of cards, is it no longer your deck?
Your imagination is infinitely more complex than a deck of cards.
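A quick back-of-the-envelope check of the shuffle claim, in Python. The billion-shuffles-per-second rate is the one from the post; the deck size and the arithmetic are the only inputs:

```python
import math

# Number of distinct orderings of a standard 52-card deck.
permutations = math.factorial(52)  # ~8.07e67

# Suppose a computer logs one new shuffle every nanosecond (1e9 per second).
shuffles_per_year = 1_000_000_000 * 60 * 60 * 24 * 365

# Years required to enumerate every possible ordering at that rate.
years_needed = permutations / shuffles_per_year

print(f"{permutations:.2e} permutations")        # 8.07e+67
print(f"{years_needed:.2e} years to log them all")  # 2.56e+51
```

For comparison, the universe is currently around 1.4e10 years old, so even the most generous heat-death timelines don't make a dent in 52!.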

All of civilization is built upon the backs of giants. We are constantly getting a glimpse of where science meets magic.

Case in point:
- I have a cube that eats plastic and spits out horrible cheap objects after I place a sliver of silicon into a slot in the side and push a couple of buttons.
- I avoid talking to my friends and family by "accidentally" forgetting my hand-held computer-phone in my car that once replaced horse and carriage, which in its own time changed the face of warfare forever.

Practice your craft.
Whatever tool you choose, choose to master it.