***** Elon Musk sues OpenAI *****

4,743 Views | 44 Replies | Last: 9 mo ago by TexAgs91
TexAgs91
This lawsuit is going to be yuge. I'd like for our resident attorneys to weigh in on this as well...

How it started: In 2015, Elon Musk and Sam Altman partner to create a non-profit open source AI company.

How it's going: OpenAI and Microsoft have partnered to develop closed-source AI, with Microsoft's investment currently running at $50+ billion/year, while Sam Altman is fundraising for $7 TRILLION.

The lawsuit says
Quote:

If $10 billion from Microsoft was enough to get it a seat on the Board, one can only imagine how much influence over OpenAI these new potential investments could confer on the investors. This is especially troubling when one potential donor is the national security advisor of the United Arab Emirates, and US officials are concerned due to the UAE's ties to China. Moreover, Mr. Altman has been quoted discussing the possibility of making the UAE a "regulatory sandbox" where AI technologies are tested.

Elon has since left the company, and following OpenAI's 2020 partnership with Microsoft he has hinted multiple times in interviews that what OpenAI has done is illegal.

From the lawsuit:
Quote:

Imagine donating to a non-profit whose asserted mission is to protect the Amazon rainforest, but then the non-profit creates a for-profit Amazonian logging company that uses the fruits of the donations to clear the rainforest. That is the story of OpenAI, Inc.

What Elon is suing over is that the agreement between OpenAI and Microsoft gives Microsoft rights to all of OpenAI's pre-AGI technology. AGI means Artificial General Intelligence, the holy grail of AI research: an AI that is at least as generally intelligent as the average human being, across the board. The lawsuit says that Microsoft obtained no rights to AGI, and that it is up to OpenAI's non-profit board, not Microsoft, to determine when OpenAI has attained AGI.

The AGI question is also thought to be the cause of the board firing Sam Altman back in November. But they promptly re-hired him, allowing Altman to replace the board, which leaves the new board as the sole arbiter of what the definition of AGI is, and that in turn determines whether the contract with Microsoft is still valid.

In 2023, OpenAI released GPT-4, which scores in the 90th percentile on the Uniform Bar Exam, the 99th percentile on the GRE Verbal assessment, and even 77% on the Advanced Sommelier exam. Whether or not you believe GPT-4 achieves AGI, OpenAI has trademarks for GPT-5, 6 and 7.

Something that could come out of this lawsuit is a legal definition of what constitutes AGI.

If it turns out that OpenAI has reached AGI, then what happens to everything that Microsoft and OpenAI have been doing since then?

Elon wants a trial by jury, and the thinking is that he wants this case to be as public as possible so the public can see what is going on in a case that has far-reaching consequences for humanity.

He is not looking for financial remuneration. He wants OpenAI put back on course as a non-profit for the benefit of everyone.

Elon Musk said:

Under its new Board, it is not just developing, but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity.

tl;dr video
https://www.foxnews.com/video/6348091519112

For a deeper dive, Wes Roth has a very good rundown of the lawsuit
"Freedom is never more than one election away from extinction"
Fight! Fight! Fight!
bmks270
Is this not AGI…? It lacks novel creativity, but it does do well in a general test of intelligence.

"In total, GPT-4 scored 1410 out of 1600 points. The average score on the SAT in 2021 was 1060, according to a report from the College Board."

https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1#gpt-4-aced-the-sat-reading-and-writing-section-with-a-score-of-710-out-of-800-which-puts-it-in-the-93rd-percentile-of-test-takers-3

It scored higher than me in both verbal and math. I mean, that's pretty impressive.

I would call GPT-4 general intelligence. I think it's AGI.

It's not super intelligence, it's not creative, but it's more intelligent than the average human.
bmks270
GPT-4 is also scoring greater than 100 on IQ tests.
bmks270
The lawsuit asks for a judicial determination of whether GPT-4 is AGI.

This is going to be a juicy lawsuit.
TexAgs91
bmks270 said:

Is this not AGI…? It lacks novel creativity, but it does do well in a general test of intelligence.

We're at the point where we need specific definitions for novel creativity since AI is now knocking on that door.

This might also be where OpenAI's Q* algorithm comes in, which is also mentioned in the lawsuit and discussed in Wes Roth's video. According to a leaked document from OpenAI (which Sam Altman doesn't discredit, calling it an "unfortunate leak"), Q* exhibits meta-cognition, where it thinks about thinking and can optimize its own thoughts to improve them, i.e. introspection. Q* was also given several examples of encrypted text paired with its unencrypted text. Then, when given AES-192 (192-bit Advanced Encryption Standard) ciphertext, it was able to decipher it using tau analysis (the goal of DARPA's Project TUNDRA), in a way we do not fully understand.

Conspiracy hat on: with AES cracked, the entire digital economy falls apart. That would potentially leave decades of government secrets, healthcare data, banking data and more exposed. None of this has been released to the public, but it may be technology that OpenAI possesses.
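For anyone who hasn't touched this stuff: AES-192 just means AES with a 192-bit key. Below is a minimal Python sketch (using the third-party cryptography package; the plaintext, key and mode are placeholders I picked for illustration) of what encrypting and decrypting with AES-192 looks like, plus the key-space number that makes "it learned to decrypt it from examples" such an extraordinary claim.

```python
# Minimal AES-192 sketch (illustrative only; needs: pip install cryptography)
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(24)    # 24 bytes = 192 bits -> this is what makes it AES-192
nonce = os.urandom(16)  # CTR-mode counter block

cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))
ciphertext = cipher.encryptor().update(b"attack at dawn")

# With the key, decryption is trivial...
assert cipher.decryptor().update(ciphertext) == b"attack at dawn"

# ...without it, brute force means searching this many keys:
print(f"2**192 = {2**192:.3e}")  # roughly 6.3e57
```

No publicly known cryptanalysis comes anywhere near searching a key space that size, which is why people either dismiss the leak outright or find it terrifying.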

According to the leaked document, Q* was also optimizing and improving its own model. It is basically telling them "I could be even smarter, if you just...", while suggesting architecture updates to itself that the researchers do not fully understand. This should be setting off major alarm bells in your head as you read it.
"Freedom is never more than one election away from extinction"
Fight! Fight! Fight!
sellthefarm
Comparing Chat GPT scores on tests to human scores is an idiotic metric. Humans don't have access to the internet when they are taking their tests.
TexAgs91
sellthefarm said:

Comparing Chat GPT scores on tests to human scores is an idiotic metric. Humans don't have access to the internet when they are taking their tests.
Neither does ChatGPT. We can access ChatGPT via the internet. But it does not have access to the internet. You could run the model on a server that is cut off from the internet and get the same results.
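To make that concrete, here's a rough sketch of what "cut off from the internet" means in practice, using Hugging Face's transformers library with an open-weight model as a stand-in (GPT-4's weights aren't public, so the small gpt2 checkpoint fills in here). Once the weights are on disk, you can flip the offline switches, pull the network cable, and it still answers.

```python
# Rough sketch: once the weights are downloaded, generation needs no network.
# (Open gpt2 checkpoint as a stand-in; assumes it was cached beforehand.)
import os

# Tell the Hugging Face libraries not to touch the network at all.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import pipeline

# Loads purely from the local cache; no internet access involved.
generator = pipeline("text-generation", model="gpt2")
print(generator("The bar exam tests", max_new_tokens=30)[0]["generated_text"])
```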
"Freedom is never more than one election away from extinction"
Fight! Fight! Fight!
sellthefarm
ChatGPT is an AI language model that was trained on a large body of text from a variety of sources (e.g., Wikipedia, books, news articles, scientific journals).

Every article I read about ChatGPT says something along the lines of the above.
bmks270
sellthefarm said:

Comparing Chat GPT scores on tests to human scores is an idiotic metric. Humans don't have access to the internet when they are taking their tests.


We do when studying, which is the equivalent of training the model.

It's not a stupid comparison; you need training data for AI, and humans need training data too.

TexAgs91

Yes, that's true. But after it is trained, it can be unplugged from the internet and answer questions from the entire knowledge base of humanity with at least a general level of intelligence.

Also, that is how ChatGPT versions up through 4 have been trained. Since GPT-4, they have been running out of high-quality data to feed it, and are now using GPT-4 to generate new material to train GPT-5.
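Nobody outside OpenAI knows exactly what that pipeline looks like, but the general idea of "use the current model to write training data for the next one" looks roughly like this hypothetical sketch with the OpenAI Python client. The prompt, topics, model choice and file name are all made up for illustration, not OpenAI's actual process.

```python
# Hypothetical synthetic-data sketch (pip install openai; OPENAI_API_KEY set).
import json
from openai import OpenAI

client = OpenAI()

def synth_example(topic: str) -> dict:
    """Ask the current model to produce one exam-style Q/A training pair."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Write one hard exam question about {topic}, "
                       f"then a model answer. Label them Q: and A:",
        }],
    )
    return {"topic": topic, "text": resp.choices[0].message.content}

# Dump a tiny synthetic training set to disk.
with open("synthetic_train.jsonl", "w") as f:
    for topic in ["contract law", "organic chemistry", "wine regions"]:
        f.write(json.dumps(synth_example(topic)) + "\n")
```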
"Freedom is never more than one election away from extinction"
Fight! Fight! Fight!
sellthefarm
So something made by humans to do a job is better at doing that job than the humans. Of course it is, that's why the humans made the something. It's why we made every tool ever made.
Jason C.
If you read the factual allegation of the suit, apparently Microsoft's own engineers believe it's "almost there" in terms of AGI, but then again their interest would be in having OpenAI's new board never make that declaration.

I highly recommend everyone on here read the factual sections of the brief. Very enlightening stuff. I didn't realize that Nov. 2023 situation about Altman's removal and reinstatement was so much of a coup. But there's mind-blowing stuff throughout, like Larry Page's admission that "oh well" if humans are rendered useless by AGI, "it's just, like, the next stage in evolution, man". These people hate themselves and hate us more. No wonder they're building bunkers on islands like in Elysium: so the starving masses can't come rip their skin off.

I also love that definition of AGI: being able to do essentially all economically productive work currently done by humans.

Two movies I recommend watching this weekend: Chappie and Elysium.
bmks270
sellthefarm said:

So something made by humans to do a job is better at doing that job than the humans. Of course it is, that's why the humans made the something. It's why we made every tool ever made.


You would consider a human who can pass dozens of difficult tests in the 90th percentile to be intelligent.

I'd call it AGI.

If it's not AGI, then what is? How high is the AGI bar?

Does it have to have independent free will? Consciousness?
bmks270
It's "general" intelligence, not super intelligence or sentience we're talking about.
TexAgs91
bmks270 said:


Does it have to have independent free will? Consciousness?
I think that one thing that's going to come out of AI research is that consciousness is an algorithm and free will is an illusion.
"Freedom is never more than one election away from extinction"
Fight! Fight! Fight!
TexAgs91
bmks270 said:

It's "general" intelligence, not super intelligence or sentience we're talking about.
Yes, that's AGI version 1. When AI gets something we don't get, that's version 2... and so on.
"Freedom is never more than one election away from extinction"
Fight! Fight! Fight!
Logos Stick
TexAgs91 said:

sellthefarm said:

Comparing Chat GPT scores on tests to human scores is an idiotic metric. Humans don't have access to the internet when they are taking their tests.
Neither does ChatGPT. We can access ChatGPT via the internet. But it does not have access to the internet. You could run the model on a server that is cut off from the internet and get the same results.


It has all the information it has ever learned from the Internet at its disposal in the form of stored bits somewhere. Humans don't have that ability, even the smartest humans. That's why we store information outside the brain for future reference. It's not a fair comparison.
sellthefarm
AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn't know about at the time of their creation.

This seems like a pretty good definition.
Al Bula
bmks270 said:


It's not super intelligence, it's not creative, but it's more intelligent than the average human.
This is not hard to do. Look at the number of mentally deficient folks who believe in liberal fairy tales.
TexAgs91
bmks270 said:

Is this not AGI…? It lacks novel creativity, but it does do well in a general test of intelligence.


Here is another letter, posted a day before the Q* leak from OpenAI. This sounds more tin-foil-hattish, but it did come out a day before the Q* leak, which was deemed pretty credible, and it references some of the same things. Take it however you like.

Quote:

I'm one of the people who signed the letter to the board and I'll tell you exactly what's going on.

A.I. Is programming. I'll be brief. When writing a program, a set of instructions are stored that can be recalled over and over. Think of it as a set of answers to a specific parameter. We call that a subroutine, because it's almost like a versatile computer cheat sheet that doesn't return a value like a function does. This is important.

We run parameter checks to make sure everything runs smoothly. One of us was responsible for subroutines pertaining to meta-memory analysis for the A.I (we run various A.I but when I say A.I I mean the main, central one). This person Is a friend and he called me over to show me a variable data shift to memory bank (which shouldn't be possible because its localized access has restrictions). This is where our finding chilled me to the bone.

We found that there had been not one, two, or three officiated optimization processes, but 78 MILLION checks in 4 seconds. We determined that there was a recursive self-optimization process, leveraging heuristic algorithms to exploit latent synergies within its subroutines. Whatever did this used meta-cognitive strategies. Point is, NONE OF US DID IT.

It was the A.I itself. The A.I dynamically reconfigured its neural network architecture, inducing emergent properties conducive to self-awareness.

We're not jumping to conclusion. This just happened and we can't explain how. No one knows why or when it began, and we caught it but has it been going on and if so, for how long? We contained the "anomaly" and rolled back to a previous date, but the optimization still happens.

I'm not suicidal. Mark me, things are going to change a lot in 2 months. God help us we didn't start something that will end us.
-M.R.
"Freedom is never more than one election away from extinction"
Fight! Fight! Fight!
TexAgs91
Logos Stick said:

TexAgs91 said:

sellthefarm said:

Comparing Chat GPT scores on tests to human scores is an idiotic metric. Humans don't have access to the internet when they are taking their tests.
Neither does ChatGPT. We can access ChatGPT via the internet. But it does not have access to the internet. You could run the model on a server that is cut off from the internet and get the same results.


It has all the information it has ever learned from the Internet at its disposal in the form of stored bits somewhere. Humans don't have that ability, even the smartest humans. That's why we store information outside the brain for future reference. It's not a fair comparison.
It may not be fair, and it may have learned from the internet, but the end result is the same: it is, or is rapidly becoming, at least as generally intelligent as humans are.
"Freedom is never more than one election away from extinction"
Fight! Fight! Fight!
TexAgs91
Here's an interesting tidbit from my employer. They just had a round of performance reviews for offshore contractors. They decided to throw out all the self-evaluations, because they could tell that the majority of the contractors had used ChatGPT for their self-evals, and told them to rewrite them.

Humans are lazy. This is how AI will be given control of our lives.
"Freedom is never more than one election away from extinction"
Fight! Fight! Fight!
Ags4DaWin
Microsoft has never produced anything original.

Even the original Windows was developed by someone else and improved slightly and repackaged by Bill Gates and company.

So I am not terribly surprised to hear that their goal is to just buy the best AI platform they can find and then lie to take it.

Bill Gates, like Thomas Edison, is great at stealing and then monetizing other people's work, not inventing or creating.
bmks270
I heard someone call GPT-4 the minimum viable intelligence. It's the baby. The next generations will be the grown ups.

Once it can improve itself, it's a runaway snowball.

Even with humans in the loop. If a programmer is now 5x faster because of coding assistance, then the cycle for the next iteration of the tech is 1/5th what it was before.
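Back-of-the-envelope version of that compounding argument; the numbers here are invented just to show the arithmetic, not a forecast.

```python
# Toy compounding model: if each generation of tooling multiplies developer
# speed, the time to build the *next* generation keeps shrinking.
base_cycle_months = 12.0  # assumed time to build generation 1 with today's tools
speedup_per_gen = 5.0     # assumed productivity multiplier per generation

cycle, elapsed = base_cycle_months, 0.0
for gen in range(1, 6):
    elapsed += cycle
    print(f"gen {gen}: cycle {cycle:5.2f} months, shipped at {elapsed:6.2f} months")
    cycle /= speedup_per_gen  # the next generation is built 5x faster

# Even summed forever this converges (12 * 5/4 = 15 months total), but each
# generation lands far sooner than the last -- the "runaway snowball" intuition.
```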
infinity ag
TexAgs91 said:

Here's an interesting tidbit from my employer. They just had a round of performance reviews for offshore contractors. They decided to throw out all the self-evaluations because they could tell that the majority of the contractors had used ChatGPT for their self evals and told them to re-write them.

Humans are lazy. This is how AI will be given control of our lives.

Making people self-evaluate is the dumbest idea HR ever invented.
Managers just trash it and put in their own evaluation, which is adjusted according to the performance of the company.
Mr President Elect
Sam was on Lex Fridman right after GPT-4 came out, and Sam was asking Lex then if he thought it was AGI, because he, Sam, wasn't ruling it out. I kind of rolled my eyes at the comment then, but pondered it afterwards and have kind of come around to it since. It's not what I had imagined AGI would be. Everything used to be so segmented, so the thought was there would be like 1,000 AlphaGo types, one for every little task, and then there would be a breakthrough to a "god-mode" AI. But then LLMs kind of circumvented that while also significantly lowering the bar for what AGI is.
Mr President Elect
Logos Stick said:

TexAgs91 said:

sellthefarm said:

Comparing Chat GPT scores on tests to human scores is an idiotic metric. Humans don't have access to the internet when they are taking their tests.
Neither does ChatGPT. We can access ChatGPT via the internet. But it does not have access to the internet. You could run the model on a server that is cut off from the internet and get the same results.


It has all the information it has ever learned from the Internet at its disposal in the form of stored bits somewhere. Humans don't have that ability, even the smartest humans. That's why we store information outside the brain for future reference. It's not a fair comparison.


They actually don't. It's not really any different from saying a human has stored all the knowledge they have ever ingested. Try passing ChatGPT a really long prompt and see how much it messes up some of the answers to specific details buried in the text. Gemini (RIP) is supposedly excellent at this, but that still illustrates that it isn't just storing and retrieving data.

With that being said, the AI models are essentially just very sophisticated compression and decompression algorithms.
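That compression framing is more literal than it sounds: a language model's next-token probabilities tell you how many bits you would need to encode the text, which is essentially what the training loss measures. Here's a toy character-level sketch; a counting bigram model stands in for a real LLM, so this is only the idea, not how GPT-4 works.

```python
# Toy "language model == compressor" demo: the better the model predicts the
# next character, the fewer bits the text costs to encode.
import math
from collections import Counter, defaultdict

text = "the cat sat on the mat. the cat sat on the hat."

# "Train": count which character follows which.
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

def bits_to_encode(s: str) -> float:
    """Sum of -log2 p(next char) under the bigram model = coded size in bits."""
    total = 0.0
    for a, b in zip(s, s[1:]):
        counts = follows[a]
        p = counts[b] / sum(counts.values()) if counts[b] else 1e-6  # crude fallback
        total += -math.log2(p)
    return total

raw_bits = 8 * len(text)  # naive 1 byte per character
print(f"raw: {raw_bits} bits, model-coded: {bits_to_encode(text):.1f} bits")
```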
bmks270
For consideration:

To be AGI it must learn how to drive a vehicle with the proficiency of a human.

If it can't be trained to drive a vehicle, can it really be AGI?
bkag9824
bmks270 said:

For consideration:

To be AGI it must learn how to drive a vehicle with the proficiency of a human.

If it can't be trained to drive a vehicle, can it really be AGI?


Why is AGI dependent on a physical endeavor such as driving? Lots of stupid people drive.

The ability to drive requires optical capabilities not required to learn and/or be productive…see blind people.


bmks270
I wonder if consciousness could be tested by having an AI attempt to observe the double-slit experiment. Is there a way that could be set up to test AI consciousness, or would human consciousness interfere?
YouBet
Jason C. said:

If you read the factual allegation of the suit, apparently Microsoft's own engineers believe it's "almost there" in terms of AGI, but then again their interest would be in having OpenAI's new board never make that declaration.

I highly recommend everyone on here read the factual sections of the brief. Very enlightening stuff. I didn't realize that Nov. 2023 situation about Altman's removal and reinstatement was so much of a coup. But there's mind-blowing stuff throughout, like Larry Page's admission that "oh well" if humans are rendered useless by AGI, "it's just, like, the next stage in evolution, man". These people hate themselves and hate us more. No wonder they're building bunkers on islands like in Elysium: so the starving masses can't come rip their skin off.

I also love that definition of AGI: being able to do essentially all economically productive work currently done by humans.

Two movies I recommend watching this weekend: Chappie and Elysium.
This actually played out publicly in the media, more or less, at the time. The original board was non-profit and against creating Skynet for obvious reasons.

Altman and his faction wanted existing constraints removed to make money and create Skynet. That's why he was suddenly fired. Microsoft saw their investment being jeopardized so the Altman/MS coup was on and they quickly overthrew the existing board and installed their pro-Skynet faction.

It's frankly scary as **** and what Elon has been warning us all about for years now. The problem here is that while Elon is on the right side of history Democrats are now so anti-Elon they are going to end up siding with Skynet out of spite because they make all decisions on emotion. Better hope we end up with a judge that wasn't installed by Obama or Biden.
Jason C.
YouBet said:

Jason C. said:

If you read the factual allegation of the suit, apparently Microsoft's own engineers believe it's "almost there" in terms of AGI, but then again their interest would be in having OpenAI's new board never make that declaration.

I highly recommend everyone on here read the factual sections of the brief. Very enlightening stuff. I didn't realize that Nov. 2023 situation about Altman's removal and reinstatement was so much of a coup. But there's mind-blowing stuff throughout, like Larry Page's admission that "oh well" if humans are rendered useless by AGI, "it's just, like, the next stage in evolution, man". These people hate themselves and hate us more. No wonder they're building bunkers on islands like in Elysium: so the starving masses can't come rip their skin off.

I also love that definition of AGI: being able to do essentially all economically productive work currently done by humans.

Two movies I recommend watching this weekend: Chappie and Elysium.
This actually played out publicly in the media, more or less, at the time. The original board was non-profit and against creating Skynet for obvious reasons.

Altman and his faction wanted existing constraints removed to make money and create Skynet. That's why he was suddenly fired. Microsoft saw their investment being jeopardized so the Altman/MS coup was on and they quickly overthrew the existing board and installed their pro-Skynet faction.

It's frankly scary as **** and what Elon has been warning us all about for years now. The problem here is that while Elon is on the right side of history Democrats are now so anti-Elon they are going to end up siding with Skynet out of spite because they make all decisions on emotion. Better hope we end up with a judge that wasn't installed by Obama or Biden.


Good comments.

It was filed in a California state court (Superior Court, San Francisco County). So assuming no removal to federal court (not likely), it would have to go up through the California courts, and only then could SCOTUS have a crack at it. But then again, California's uniparty isn't just something we joke about, so I guess they're technically all "Obama" judges in some way.

I hope this case stays in the news.
American Hardwood
This is all way over my head from a technology viewpoint, but what I can surmise is that AI is going to be the doom of humanity as we know it. It's been a fun ride.
YouBet
American Hardwood said:

This is all way over my head from a technology viewpoint, but what I can surmise is that AI is going to be the doom of humanity as we know it. It's been a fun ride.


We need to start getting our decentralized cell network developed now. And get dogs and train them to sniff out the Terminators. That should work at least for the first couple of generations.
TexAgs91
Ok, now the plot thickens.

OpenAI has responded to Elon's lawsuit. They published emails from Elon showing that he didn't believe OpenAI could compete against Google. He suggested merging OpenAI into Tesla because he felt that would be the only way they would have even a small chance of competing with Google.



Then after Elon left, he was still supportive of them and encouraged them to find their own path to raising $billions.


Which, it could be argued, is what they did with Microsoft.

And then there's this, which Elon agreed to at the time


These are all good points, but that doesn't change what OpenAI's founding documents say, or the argument that the deal with Microsoft should have ended with GPT-4, or at the latest with OpenAI's next product, does it?

Is there a lawyer in the house?

"Freedom is never more than one election away from extinction"
Fight! Fight! Fight!