***** Elon Musk sues OpenAI *****

bmks270
YouTube summary of events. [embedded video]

OpenAI post: https://openai.com/blog/openai-elon-musk

OpenAI publishes emails from Elon.
Elon wanted OpenAI to become part of Tesla. He also wanted full control and to be CEO of OpenAI under a for-profit structure.

Seems he walked away from OpenAI when he couldn't get full control under a for-profit structure.

Now he sues them.

Elon is really coming off as power-hungry.
MouthBQ98
Something data scientists have noticed is that even the best AIs seem to be averaging engines. As they become the source of more of their own training data (AI-generated reference information stacking), they become iteratively less creative. You can see it by asking an AI to generate a set of 100 images from a set of parameters, then going back and doing the same a few weeks or months later. It very obviously gets stuck in a rut: the second time, it clearly generates a set of results that are much more alike to one another. Less original. Less creative.

It will be a challenging algorithmic problem.
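
There is a simple mechanism that would produce exactly that rut. A minimal sketch, under toy assumptions of my own (a Zipf-like distribution of outputs standing in for a real model): each generation is re-fit to a finite sample of the previous generation's output, and rare outputs that happen to draw zero samples vanish for good.

```python
# Toy illustration (assumptions mine): recursive training on generated data
# loses the tails of the distribution. Each "generation" re-estimates token
# probabilities from a finite sample of the previous generation's output;
# a token that draws zero samples gets probability zero and never returns.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 100
probs = 1.0 / np.arange(1, VOCAB + 1)   # Zipf-like: a long tail of rare tokens
probs /= probs.sum()

for generation in range(15):
    sample = rng.choice(VOCAB, size=500, p=probs)  # "train" on 500 generated tokens
    counts = np.bincount(sample, minlength=VOCAB)
    probs = counts / counts.sum()                  # next model = empirical re-fit
    alive = int((probs > 0).sum())
    print(f"gen {generation:2d}: {alive} of {VOCAB} token types still produced")
```

The surviving-token count can only fall, never recover, so each generation's output concentrates a little more on the most common cases than the last.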
bmks270
MouthBQ98 said:

Something data scientists have noticed is that even the best AIs seem to be averaging engines. As they become the source of more of their own training data (AI-generated reference information stacking), they become iteratively less creative. You can see it by asking an AI to generate a set of 100 images from a set of parameters, then going back and doing the same a few weeks or months later. It very obviously gets stuck in a rut: the second time, it clearly generates a set of results that are much more alike to one another. Less original. Less creative.

It will be a challenging algorithmic problem.


As the training data becomes increasingly contaminated with AI-generated data, this trend toward an average does seem inevitable. I wonder if training it on its own generated data will make it progressively dumber.

It might end up being a self-arresting phenomenon. Maybe we won't have enough data to make a superintelligent AI.


Can the intelligence of an AI exceed the intelligence of its training data? If we base our answer on current AI performance, I would say no; the AI political bias has proven that. Maybe models built with more compute and data will prove otherwise.
MouthBQ98
Wouldn't it be funny if you had to have it intentionally generate random errors or junk source choices, then test the viability of those sources, or attempt to incorporate them and evaluate the viability of the results, almost like biological mutation, in order to help it grow past that phenomenon?

We really could end up with an AI that, in Buddhist fashion, ponders the purpose of existence, decides the end result is oblivion no matter what we do in the interim, and simply shuts itself off, leaving us in the lurch.
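
That "random errors plus viability testing" loop is essentially a (1+1) evolution strategy. A toy sketch, with a made-up fitness function standing in for whatever "viability" would actually mean:

```python
# Toy (1+1) evolution strategy: mutate a candidate at random, keep the
# mutant only if it scores at least as well. The fitness function is a
# stand-in; scoring the "viability" of real sources would be the hard part.
import random

random.seed(42)

def fitness(x: float) -> float:
    return -(x - 3.0) ** 2                        # hypothetical objective, peak at x = 3

candidate = 0.0
for _ in range(200):
    mutant = candidate + random.gauss(0.0, 0.5)   # random "error" / mutation
    if fitness(mutant) >= fitness(candidate):     # test viability of the result
        candidate = mutant                        # incorporate it if it survives

print(f"candidate after 200 mutations: {candidate:.3f}")   # converges near 3.0
```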
Mr President Elect
MouthBQ98 said:

Wouldn't it be funny if you had to have it intentionally generate random errors or junk source choices, then test the viability of those sources, or attempt to incorporate them and evaluate the viability of the results, almost like biological mutation, in order to help it grow past that phenomenon?

We really could end up with an AI that, in Buddhist fashion, ponders the purpose of existence, decides the end result is oblivion no matter what we do in the interim, and simply shuts itself off, leaving us in the lurch.
I believe they are working on something similar to this. I'm not sure exactly how they plan to incorporate it, but they want it to have a more RL approach, similar to what AlphaGo was able to do. During training you have an "exploratory" parameter that has it intentionally choose something other than what it thinks is the best answer, to explore the possibilities down that rabbit hole.
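
The standard form of that "exploratory" parameter in reinforcement learning is epsilon-greedy action selection. This sketch is the textbook rule, not anything OpenAI has confirmed using:

```python
# Epsilon-greedy: with probability epsilon, deliberately ignore the current
# best estimate and pick an action at random, so that other branches of the
# search keep getting explored during training.
import random

def epsilon_greedy(q_values: list[float], epsilon: float = 0.1) -> int:
    if random.random() < epsilon:
        return random.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploit

# Three actions; the estimates favor action 1, but ~20% of picks explore anyway.
picks = [epsilon_greedy([0.2, 0.9, 0.4], epsilon=0.2) for _ in range(1000)]
print({action: picks.count(action) for action in range(3)})
```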
Mr President Elect
bmks270 said:

As the training data becomes increasingly contaminated with AI-generated data, this trend toward an average does seem inevitable. I wonder if training it on its own generated data will make it progressively dumber.

It might end up being a self-arresting phenomenon. Maybe we won't have enough data to make a superintelligent AI.


Can the intelligence of an AI exceed the intelligence of its training data? If we base our answer on current AI performance, I would say no; the AI political bias has proven that. Maybe models built with more compute and data will prove otherwise.


Sam has been asked this and thinks the answer is that we do have enough data (well, specifically for true AGI, not ASI).

They are getting away from training it on all data and instead having it train on quality data. Why does it need 50 similar biology textbooks to get better at biology? Why not just one?

As far as reaching and exceeding the intelligence of the training data, I think LLMs can get there, sort of, through emergent properties. That knowledge is actually in the training data, just not discernible by humans.

I think it's obvious LLMs are just a stepping stone in the AI arc; I wouldn't be surprised to see their successor emerge soon.
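
On the "one textbook instead of 50" point, the simplest version of that kind of data-quality filtering is near-duplicate removal. A rough sketch, my construction rather than anyone's actual pipeline, using word-level Jaccard similarity:

```python
# Near-duplicate filtering: keep a document only if it isn't too similar
# (by word-set Jaccard overlap) to anything already kept. Real pipelines
# use shingling and MinHash to scale; this shows the idea at toy size.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def dedupe(docs: list[str], threshold: float = 0.8) -> list[str]:
    kept, word_sets = [], []
    for doc in docs:
        words = set(doc.lower().split())
        if all(jaccard(words, seen) < threshold for seen in word_sets):
            kept.append(doc)
            word_sets.append(words)
    return kept

corpus = [
    "the cell is the basic unit of life",
    "the cell is the basic unit of all life",      # near-duplicate: dropped
    "mitochondria produce most of the cell's ATP",
]
print(dedupe(corpus))   # two documents survive
```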
Mr President Elect
bmks270 said:

YouTube summary of events. [embedded video]

OpenAI post: https://openai.com/blog/openai-elon-musk

OpenAI publishes emails from Elon.
Elon wanted OpenAI to become part of Tesla. He also wanted full control and to be CEO of OpenAI under a for-profit structure.

Seems he walked away from OpenAI when he couldn't get full control under a for-profit structure.

Now he sues them.

Elon is really coming off as power-hungry.

I haven't watched the video, as I think I have seen all the info it will probably touch on. I agree, it doesn't make Elon look good, but it is also just a few cherry-picked emails. Also, I don't think Elon wants anything but the discovery information to go public.
richardag
This quote from the article is quite disturbing:
  • This is especially troubling when one potential donor is the national security advisor of the United Arab Emirates, and US officials are concerned due to the UAE's ties to China. Moreover, Mr. Altman has been quoted discussing the possibility of making the UAE a "regulatory sandbox" where AI technologies are tested.
Time to finish that EMP development.
“…Among the latter, under pretence of governing they have divided their nations into two classes, wolves and sheep.”
Thomas Jefferson, Letter to Edward Carrington, January 16, 1787
TexAgs91
bmks270 said:

MouthBQ98 said:

Something data scientists have noticed is that even the best AIs seem to be averaging engines. As they become the source of more of their own training data (AI-generated reference information stacking), they become iteratively less creative. You can see it by asking an AI to generate a set of 100 images from a set of parameters, then going back and doing the same a few weeks or months later. It very obviously gets stuck in a rut: the second time, it clearly generates a set of results that are much more alike to one another. Less original. Less creative.

It will be a challenging algorithmic problem.


As the training data becomes increasingly contaminated with AI-generated data, this trend toward an average does seem inevitable. I wonder if training it on its own generated data will make it progressively dumber.

It might end up being a self-arresting phenomenon. Maybe we won't have enough data to make a superintelligent AI.


That has to do with how AI is implemented. Do humans need to read the entire internet to become intelligent? No.

Quote:

Can the intelligence of an AI exceed the intelligence of its training data? If we base our answer on current AI performance, I would say no; the AI political bias has proven that. Maybe models built with more compute and data will prove otherwise.

The political bias is partially a "feature" that they've ingrained into their models. But tests have shown that, at least in many areas, an AI can train the next version of an AI to get better results.
"Freedom is never more than one election away from extinction"
Fight! Fight! Fight!
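
On that last point, the basic shape of one AI training the next is teacher-student distillation (or self-training). A toy version under assumptions of my own, with a linear "teacher" labeling data for a fresh "student"; actually beating the teacher takes an extra ingredient such as more data or a verifier, but this label-and-refit loop is the starting point:

```python
# Toy distillation / self-training: a "teacher" model labels unlabeled data
# and a "student" is fit to those labels. Both models here are just linear
# classifiers; real setups distill large networks into new ones.
import numpy as np

rng = np.random.default_rng(1)

teacher_w = np.array([2.0, -1.0])           # hypothetical already-trained teacher
X = rng.normal(size=(1000, 2))              # unlabeled data
teacher_labels = np.sign(X @ teacher_w)     # teacher's judgments (+1 / -1)

# Student: least-squares fit to the teacher's outputs.
student_w, *_ = np.linalg.lstsq(X, teacher_labels, rcond=None)
agreement = (np.sign(X @ student_w) == teacher_labels).mean()
print(f"student matches teacher on {agreement:.1%} of the data")
```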