AI is something special

I feel like AI is getting overlooked by many. I feel like I have a real professional in any field I want to learn about just waiting for me to ask things.

Yes, they're wrong sometimes, but so can a real professional.

Through one of my AI training gigs, I have access to several different AIs, including GPT4o, GPT o1, and Claude 3.5 Sonnet/Haiku.

Having them all is an insane learning and productivity boost. Topics that are difficult to Google (because results come back for something else, you don't know the perfect wording for the subject, it's niche, etc.) are answered directly by these models.

If I'm suspicious of one model's answer, I go to another.

Of course I don't trust these AIs with any important information or something that could identify me, but that's not a difficult issue to get around.


Their context windows and token limits are huge; I've been able to send them huge pieces of code to go through. This was especially helpful for my encryption program, since it's nearly a thousand lines long just for the encryption logic, and I had completely forgotten pretty much all of it by the time I wanted to make some tweaks 🥲.



If you're a true expert in your field, then yes, the AI may not be at your level, though it's definitely getting there, and the training being paid for nowadays targets very specialized areas of expertise to close these gaps.

However, no one is an expert in everything... except these AIs. They have weaknesses in certain fields (which fields is model-dependent), but they make up for it in sheer vastness of knowledge.


They're growing in knowledge at a rapid pace, and I don't see any reason why the shortcomings of these AIs won't keep being addressed and improved until they're nonexistent.


Eventually, we might have something advanced enough to make us wonder at what point consciousness is simulated so well that it simply becomes consciousness itself.
AI will excel at some things and may never be trusted with others, or not for many decades to come. I just saw where a team set up a race car with AI, and the first thing it did was slide off the track ass end forward and crunch up the platform -- something humans are equally good at, for sure, but the AI was supposed to know better. And a couple of years back I saw where an AI generated some new materials with some breakthroughs in chemistry, saving who knows how much R&D time.

Probably about 2005 I programmed several that did simple things perfectly. One was a throttle control on a boat that adjusted to match the wanted speed even if you had a tailwind, current, headwind, or whatever other external factors -- not exactly rocket science, but it did its job better than a person would for the same task, since its reaction times were just off the charts.
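For what it's worth, the core idea behind that kind of controller fits in a few lines. This is a minimal sketch, assuming a simple proportional feedback loop with an invented toy dynamics model -- the actual boat controller's details aren't given here:

```python
def throttle_step(current_speed, target_speed, throttle, gain=0.05):
    """One control-loop iteration: nudge the throttle toward the target.

    Wind and current show up as a speed error, so the loop corrects
    for them automatically without having to model them explicitly.
    """
    error = target_speed - current_speed
    # Proportional correction, clamped to the valid throttle range [0, 1].
    return min(1.0, max(0.0, throttle + gain * error))

# Toy simulation (invented dynamics): speed responds to throttle,
# minus a constant headwind drag the controller knows nothing about.
speed, throttle = 0.0, 0.0
for _ in range(200):
    throttle = throttle_step(speed, 10.0, throttle)
    speed = 30.0 * throttle - 2.0
print(round(speed, 3))  # → 10.0 (settles at the target speed)
```

Because each iteration keeps nudging the accumulated throttle setting, the loop cancels out the constant headwind without ever measuring it.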

Even if you are teaching it nonsense, it's a very important field. I despise the uses that are going to dominate the field in the near future (generating ads, tracking people, cheating on homework is already a big problem, etc.), but even those things will generate useful tools for other things eventually.

I don't care for the internet-expert-at-everything quick (and often wrong) answer generation stuff (again, it will lead to something, but right now it's a mess). But I believe that a deeply trained specialist AI for one or more related fields, built to solve specific types of problems, will really tear things up in some areas... and SOON.

Anyway, you are doing something important. I honestly thought AI was going to stagnate (say around 2000) and just be a slightly better GIGO engine, advanced lookup table, or approximation function (depending on the training, type, etc.), but it has really taken off in new ways. Keep up the good work!
"I don't care for the internet expert at everything quick (and often wrong) answer generation stuff"

It just isn't wrong that often anymore. Though this does depend on which model you use. For example, Windows 11 "Co-Pilot" comes with a just-ok AI that can often be wrong.

But Claude 3.5 Sonnet has really impressed me. I've been using it for a little over a month now, and it's been wrong maybe 2-3 times. I've been asking it for information on lots of topics, so this is very impressive.

There are definitely things it struggles with, like problems that require multiple complex steps to solve (it'll get some of the steps right, but it only needs to fail on one of those difficult steps to get a wrong answer).
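That failure mode compounds in a predictable way: if each step independently succeeds with probability p, the whole n-step chain succeeds only with probability p^n. A quick illustration with made-up numbers:

```python
def chain_success(p, n):
    """Probability that all n independent steps succeed."""
    return p ** n

# A model that's right 95% of the time per step still gets a
# 10-step problem fully right only about 60% of the time.
print(round(chain_success(0.95, 10), 3))  # → 0.599
```

Real reasoning steps aren't truly independent, so treat this as a rough intuition rather than an exact model.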

But in general, if you use the AI to supplement your work instead of just having it do your work for you, it's unlikely to lead you astray.


The biggest issues I see are with the lighter models. They make little mistakes all the time. I gave GPT4o-mini code output that was formatted and just told it to give me the numbers back unformatted and comma-separated. It gave it back to me with two wrong numbers. The non-light models would almost certainly never make a mistake like that.
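A task like that is exactly where plain code beats a lightweight model, since it's fully deterministic. A minimal sketch -- the exact format of the original output isn't shown, so the input here is made up:

```python
import re

def numbers_to_csv(formatted_text):
    """Extract every number from formatted text and return them
    comma-separated -- with zero transcription errors."""
    nums = re.findall(r"-?\d+(?:\.\d+)?", formatted_text)
    return ", ".join(nums)

print(numbers_to_csv("x = 3.14 | y = -7\ntotal: 42"))  # → 3.14, -7, 42
```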

Keep up the good work!

Training them to replace me lmao