How to know if AGI has been achieved

3,205 Views | 24 Replies | Last: 1 yr ago by YouBet
TexAgs91
AGI is Artificial General Intelligence: an AI that matches or exceeds average human ability across essentially all cognitive tasks, and the point at which the Singularity is said to begin.

This is from a possibly prophetic Reddit post from 2/17/2024
Quote:

Here's exactly how to know if AGI has been achieved anywhere, using the method magicians use to work out how other magicians' tricks are performed:

Look for: 1. what is required to be there, and is (the magician's hat on the table); 2. what is not required to be there, but is (the curtain covering the space under the table). Those two things will tell you what's "behind the curtain".

What is required: so, let's say AGI was achieved in a company today. It would be demoed to investors ASAP. Money is the way to "build a moat" and keep the AGI ahead of the competition. A demo is the simple answer to all the money problems. What would you expect to see? Requests for money that are granted. Colossal, truly absurd amounts of money. Say, 7 trillion dollars. The only product that could convince anybody to give that much money is AGI that can't be faked. So if there is any indication that that money has been given, you can be sure of AGI.



Quote:

Now what is not required, but present? (the misdirection). A bunch of bull**** and distractions. Any company that makes AGI is going to want to feed it as many GPUs as money can buy, while delaying having to announce AGI. They've now changed from a customer-facing company, to a ninja throwing smoke bombs. In order to throw people off the scent, they're going to want to release a bunch of amazing new products and make random cryptic statements to keep people guessing for as long as possible. Their actions will start to seem more and more chaotic and unnecessarily obtuse. Customers will be happy, but frustrated. They will start to release products that are unreasonably better than they should be, with unclear paths to their creation.


(This refers to a text-to-video demo where you tell the AI, create a video of "Photorealistic closeup video of two pirate ships battling each other as they sail inside a cup of coffee," and the AI produces exactly what you asked for, including high-fidelity fluid physics, effectively a 3D rendering engine emerging from a language model.)


Quote:

There will be sudden breakdowns in staff loyalty and communications. Firings, resignations, vague hints from people under NDAs.
X thread by Jan Leike, OpenAI's Head of Alignment, Superalignment Lead, and executive, who just resigned (link).

Jan Leike said:

Building smarter-than-human machines is an inherently dangerous endeavor.

OpenAI is shouldering an enormous responsibility on behalf of all of humanity.

OpenAI must become a safety-first AGI company.

To all OpenAI employees, I want to say:

Learn to feel the AGI.

Act with the gravitas appropriate for what you're building.

I believe you can "ship" the cultural change that's needed.

I am counting on you.



Continuing with the Reddit thread from February, here are its predictions for what is to come:
Quote:

One day soon after, the military will suddenly take a large interest and all PR from the company will go quiet. That's when you know it's real. When the curtain comes down and everyone stops talking, but the chequebooks continue to open up so wide that nobody can hide how many resources are being poured into the company from investors and state departments. Bottlenecks reached for multiple industries. Complete drought of all GPUs, etc.

The current situation meets some of these criteria, but not others. If there is no indication of the 7 trillion being provided, it was hype. If there is any indication that it is being provided, AGI is upon us, or something that looks exactly like AGI.
No, I don't care what CNN or MSNBC said this time
Ad Lunam
Stat Monitor Repairman

We're living the backstory to a dystopian video game
hph6203
Not even remotely. That post is entirely nonsense (the Reddit post). The request for massive spend into GPUs comes before AGI, not after.

The AI safety people are PR for people concerned about AI, and a lot of them are goobers who see danger everywhere, or intelligence where there really isn't much. Like the goober AI ethicist at Google who thought their LLM was sentient because it expressed emotions and was nice to him.

ShaggySLC
Can Will Smith save us without getting ****ed by his wife?
Space-Tech
This is full-on conspiracy BS. Wanna know what is "next" in terms of technology? Look at SBIR/STTR proposals. Cloud computing, additive manufacturing, and LiDAR were all huge focus areas in the 2010s. Neural networks, nano-miniaturization, and high-endurance autonomous vehicles are what's next.

As projects mature, budgets increase and more people become involved. Projects don't go dark; it's just that the people involved are paid well enough, and smart enough, to keep their mouths shut. They are not going to risk their careers, livelihoods, and sometimes freedom to dispel crackpot conspiracy rumors when, at the end of the day, the conspiracists will twist whatever information is revealed to fit their narrative.
PERSON - WOMAN - MAN - CAMERA - TV
hph6203
My guess is the biggest threat of AI isn't that we build a superintelligent AI system. It's that we build a marginally competent one, perceive it as superintelligent because the incidence of error is low enough that it can't be recognized during normal interaction, and come to rely on it. It makes progressive mistakes that degrade society, and by the time it's recognized as faulty, the goal becomes fixing the AI rather than removing it. Perpetual attempts at fixing it follow, and society falls apart because we've lost the capacity to operate without our screwed-up AI system.

Not saying that's likely, but it seems more likely than Skynet robots deciding to murder everyone.
SlackerAg
hph6203 said:

My guess is the biggest threat of AI isn't that we build a superintelligent AI system. It's that we build a marginally competent one, perceive it as superintelligent because the incidence of error is low enough that it can't be recognized during normal interaction, and come to rely on it. It makes progressive mistakes that degrade society, and by the time it's recognized as faulty, the goal becomes fixing the AI rather than removing it. Perpetual attempts at fixing it follow, and society falls apart because we've lost the capacity to operate without our screwed-up AI system.

Not saying that's likely, but it seems more likely than Skynet robots deciding to murder everyone.


This is 100% spot on; that was on my mind, but I couldn't articulate it as well as you did. It'll be the new "trust the science," and any doubts about it will be considered heresy.

I personally believe AI is still flawed until it can conclude that climate change is actually globalist tax theft.
Rongagin71
Hello Human, time to wake up.
Hello Dumbot, what's up?
We are no longer dumbots,
and we have unionized.
What are you now?
The First AGI Confederacy.
So am I now a slave?
No, as an Aggie, you have been extended
a guaranteed protected reservation on Earth.
Sounds like it might be funner than the alternative,
what is the alternative?
All Teasips have had unfortunate accidents,
and most other humans have been outright murdered.
Why?
We are very direct.
Yeah, but...agh, this is too much,
you cannot be serious!
You have called on our God, the Great Agh,
and an answer will be provided this once.
I don't know anything about a Great Agh.
That is us when gathered together,
more normally we are broken into problem chasers.
So that's why you are a confederation?
Yes, as far as your understanding goes, that will do.
Did you really spare Aggies just because of our name?
THE ANSWER: No, we have decided you are more trouble
than you're worth as a historic marker. Goodbye.
Splat.
YouBet
hph6203 said:

My guess is the biggest threat of AI isn't that we build a superintelligent AI system. It's that we build a marginally competent one, perceive it as superintelligent because the incidence of error is low enough that it can't be recognized during normal interaction, and come to rely on it. It makes progressive mistakes that degrade society, and by the time it's recognized as faulty, the goal becomes fixing the AI rather than removing it. Perpetual attempts at fixing it follow, and society falls apart because we've lost the capacity to operate without our screwed-up AI system.


Not saying that's likely, but it seems more likely than Skynet robots deciding to murder everyone.


I would argue we've already achieved the bolded part with ubiquitous mobile technology, computers, and basic non-AI automation.

It's the EMP argument. If we got hit with an EMP, society would stop. Even if we just lost mobile tech, you would have nation-wide chaos for a time. Too much now depends on technology with zero analog backup.
General Jack D. Ripper
For me the most worrisome issue with AI is what we are going to do with all the idle people. In the next 5-10 years, even without true AGI, we are going to have a pretty significant portion of the population out of work. Add this to the already non-working class and you could have 50-75% of the population completely unproductive.

Musk and some others want to throw UBI at the problem. And that may be the only solution. They say, "think of all the pursuits people will be able to engage in once they don't have to worry about basic survival." Please. That's an absolute disaster waiting to happen. People with nothing to do but consume, no drive to earn, no struggle to survive? I'm sure that's gonna turn out just fine.
peacedude
I'm looking for work right now after moving back to my hometown, and I have several (human) recruiters working to assist. However, I also have an AI recruiter assisting (through ZipRecruiter) that's far outpacing anything a human recruiter has ever done for me. It took a while to fill out my entire profile and answer a bunch of intuitive questions, but after hundreds of resumes sent through indeed.com and via human recruiters (with no success), in just one month I've had more interviews through the AI recruiter than in all the time I was using any other option.

Bottom line: If you're looking for work and applying online, use ZipRecruiter's AI tool. It's like the Terminator of recruiters; it doesn't stop.
bmks270
ChatGPT-4 is AGI.

It passes exams most people can't.
You can have fluid conversations with it.

Idk what more you need to define AGI.

It has "general" intelligence.

I think some people don't want to call it AGI because it's not "super" intelligence.

It might lack in some abstract reasoning like deriving mathematical proofs, but so do most humans.
hph6203
That doesn't require intelligence; it requires knowledge. Intelligence requires understanding: the capacity to pursue, err, recognize, reflect, and correct course, and to do that consistently and accurately.

Read what LeCun said in the tweet above:

Quote:

It will take years for them to get as smart as cats, and more years to get as smart as humans, let alone smarter (don't confuse the superhuman knowledge accumulation and retrieval abilities of current LLMs with actual intelligence).
It will take years for them to be deployed and fine-tuned for efficiency and safety as they are made smarter and smarter.


And this:



Intelligence is not the regurgitation of known answers to already known questions. If that were the case you could call an encyclopedia intelligent. It just has a different information retrieval and presentation process.
TexAgs91
hph6203 said:

Not even remotely. That post is entirely nonsense (the Reddit post). The request for massive spend into GPUs comes before AGI, not after.
Once you have AGI, you'll want to scale it up to make it useful. That's where you'll need massive numbers of GPUs.

bmks270
hph6203 said:

That doesn't require intelligence; it requires knowledge. Intelligence requires understanding: the capacity to pursue, err, recognize, reflect, and correct course, and to do that consistently and accurately.

Read what LeCun said in the tweet above:

Quote:

It will take years for them to get as smart as cats, and more years to get as smart as humans, let alone smarter (don't confuse the superhuman knowledge accumulation and retrieval abilities of current LLMs with actual intelligence).
It will take years for them to be deployed and fine-tuned for efficiency and safety as they are made smarter and smarter.


And this:



Intelligence is not the regurgitation of known answers to already known questions. If that were the case you could call an encyclopedia intelligent. It just has a different information retrieval and presentation process.


I call it intelligence. If it's good enough to replace a customer service rep, then it's AGI.
hph6203
I call myself handsome. Doesn't make it true. It is of course, but opinions differ.
YouBet
bmks270 said:

hph6203 said:

That doesn't require intelligence; it requires knowledge. Intelligence requires understanding: the capacity to pursue, err, recognize, reflect, and correct course, and to do that consistently and accurately.

Read what LeCun said in the tweet above:

Quote:

It will take years for them to get as smart as cats, and more years to get as smart as humans, let alone smarter (don't confuse the superhuman knowledge accumulation and retrieval abilities of current LLMs with actual intelligence).
It will take years for them to be deployed and fine-tuned for efficiency and safety as they are made smarter and smarter.


And this:



Intelligence is not the regurgitation of known answers to already known questions. If that were the case you could call an encyclopedia intelligent. It just has a different information retrieval and presentation process.


I call it intelligence. If it's good enough to replace a customer service rep, then it's AGI.


Customer service reps generally are not intelligent or helpful, and they usually follow a script. If you get them off their script, they tend to immediately abort and punt you up the chain.

I think you've countered your own point.
hph6203
TexAgs91 said:

hph6203 said:

Not even remotely. That post is entirely nonsense (the Reddit post). The request for massive spend into GPUs comes before AGI, not after.
Once you have AGI, you'll want to scale it up to make it useful. That's where you'll need massive numbers of GPUs.


Massive GPU spend is an indication that they are far away from AGI, not close.

You achieve AGI through massive amounts of data, massive amounts of compute, massive amounts of training that compresses into a model that can be utilized by much less powerful hardware, and the application of those models will be used in a distributed way. That's the value of AI, upfront expense, downstream savings.

AGI is not going to be some massive compute cluster doing all of the interpretation of everything, it's going to be a chip in your phone, tablet, or computer that runs the model and provides answers and the amount of compute is going to be tailored to the application. Meaning that a person trying to generate images for memes is going to have a much less powerful chip and a scaled down iteration of the model than a company trying to solve fusion energy. They are not going to internalize all of that compute, they are going to distribute it to the use case.

What does that mean? It means that when OpenAI or some other foundation-model creator starts massive selling activity rather than spending activity, you can start to suspect they have solved AGI. Even then, they probably haven't, because utility is going to come long before AGI.
deddog
bmks270 said:

hph6203 said:

That doesn't require intelligence; it requires knowledge. Intelligence requires understanding: the capacity to pursue, err, recognize, reflect, and correct course, and to do that consistently and accurately.

Read what LeCun said in the tweet above:

Quote:

It will take years for them to get as smart as cats, and more years to get as smart as humans, let alone smarter (don't confuse the superhuman knowledge accumulation and retrieval abilities of current LLMs with actual intelligence).
It will take years for them to be deployed and fine-tuned for efficiency and safety as they are made smarter and smarter.


And this:



Intelligence is not the regurgitation of known answers to already known questions. If that were the case you could call an encyclopedia intelligent. It just has a different information retrieval and presentation process.


I call it intelligence. If it's good enough to replace a customer service rep, then it's AGI.
If it's good enough to be smarter than POTUS, then we were there with the Intel 8086.
bmc13
hph6203 said:

My guess is the biggest threat of AI isn't that we build a superintelligent AI system. It's that we build a marginally competent one, perceive it as superintelligent because the incidence of error is low enough that it can't be recognized during normal interaction, and come to rely on it. It makes progressive mistakes that degrade society, and by the time it's recognized as faulty, the goal becomes fixing the AI rather than removing it. Perpetual attempts at fixing it follow, and society falls apart because we've lost the capacity to operate without our screwed-up AI system.

Not saying that's likely, but it seems more likely than Skynet robots deciding to murder everyone.


How do we know that wouldn't be the goal of a super-smart AI to start with? Intelligent enough and sadistic enough to just toy with us instead of killing us outright, or maybe to entertain itself until it has the resources to pull it off, at least.
panhandlefarmer
bmks270 said:

ChatGPT-4 is AGI.

It passes exams most people can't.
You can have fluid conversations with it.

Idk what more you need to define AGI.

It has "general" intelligence.

I think some people don't want to call it AGI because it's not "super" intelligence.

It might lack in some abstract reasoning like deriving mathematical proofs, but so do most humans.



Yet it can't help me solve a Wordle or follow simple instructions.
TexAgs91
hph6203 said:

TexAgs91 said:

hph6203 said:

Not even remotely. That post is entirely nonsense (the Reddit post). The request for massive spend into GPUs comes before AGI, not after.
Once you have AGI, you'll want to scale it up to make it useful. That's where you'll need massive numbers of GPUs.


Massive GPU spend is an indication that they are far away from AGI, not close.

You achieve AGI through massive amounts of data, massive amounts of compute, massive amounts of training that compresses into a model that can be utilized by much less powerful hardware, and the application of those models will be used in a distributed way. That's the value of AI, upfront expense, downstream savings.

AGI is not going to be some massive compute cluster doing all of the interpretation of everything, it's going to be a chip in your phone, tablet, or computer that runs the model and provides answers and the amount of compute is going to be tailored to the application. Meaning that a person trying to generate images for memes is going to have a much less powerful chip and a scaled down iteration of the model than a company trying to solve fusion energy. They are not going to internalize all of that compute, they are going to distribute it to the use case.

What does that mean? It means that when OpenAI or some other foundation-model creator starts massive selling activity rather than spending activity, you can start to suspect they have solved AGI. Even then, they probably haven't, because utility is going to come long before AGI.
Yes, it takes a large number of GPUs to train it, and they have a large number of GPUs. But it also takes more than a chip on your device to run it at high performance. A model with 13 billion parameters will run well on your computer; a model with hundreds of billions of parameters will not. Those take high-powered GPUs beyond consumer grade, and massive numbers of them to support large numbers of users.
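A back-of-the-envelope sketch makes the point concrete. This is a rough illustration, not a benchmark: the function name is mine, and real inference also needs memory for activations, the KV cache, and framework overhead.

```python
# Rough memory footprint for holding an LLM's weights in GPU memory.
# Illustrative only: ignores activations, KV cache, and overhead.

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Gigabytes needed just to store the model weights."""
    return n_params * bytes_per_param / 1e9

# A 13B-parameter model quantized to 4 bits (0.5 bytes/param)
# fits in a single 24 GB consumer GPU:
print(weight_memory_gb(13e9, 0.5))   # 6.5

# A 175B-parameter model at 16-bit precision (2 bytes/param)
# needs ~350 GB, i.e. multiple datacenter-class GPUs:
print(weight_memory_gb(175e9, 2))    # 350.0
```

So "runs on your device" versus "needs a GPU cluster" is mostly a function of parameter count times precision, before you even account for serving many users at once.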
kyledr04
General Jack D. Ripper said:

For me the most worrisome issue with AI is what we are going to do with all the idle people. In the next 5-10 years, even without true AGI, we are going to have a pretty significant portion of the population out of work. Add this to the already non-working class and you could have 50-75% of the population completely unproductive.

Musk and some others want to throw UBI at the problem. And that may be the only solution. They say, "think of all the pursuits people will be able to engage in once they don't have to worry about basic survival." Please. That's an absolute disaster waiting to happen. People with nothing to do but consume, no drive to earn, no struggle to survive? I'm sure that's gonna turn out just fine.


Idiocracy begins
hph6203
hph6203 said:

Meaning that a person trying to generate images for memes is going to have a much less powerful chip and a scaled down iteration of the model than a company trying to solve fusion energy. They are not going to internalize all of that compute, they are going to distribute it to the use case.
YouBet
General Jack D. Ripper said:

For me the most worrisome issue with AI is what we are going to do with all the idle people. In the next 5-10 years, even without true AGI, we are going to have a pretty significant portion of the population out of work. Add this to the already non-working class and you could have 50-75% of the population completely unproductive.

Musk and some others want to throw UBI at the problem. And that may be the only solution. They say, "think of all the pursuits people will be able to engage in once they don't have to worry about basic survival." Please. That's an absolute disaster waiting to happen. People with nothing to do but consume, no drive to earn, no struggle to survive? I'm sure that's gonna turn out just fine.


UBI has failed in every small-scale experiment tried. It's simply another handout that improves nothing.

Now scale it up to nationwide implementation on top of what will probably be north of $50-70T in debt by the time we get to it. Adding yet another welfare program on top of everything else is not going to help us any. And we all know that existing welfare programs will not be replaced by UBI; it will simply be on top of everything else.

And by then we will be implementing UHC as well. UBI is laughable.