It is quite capable of taking information from several sources and combining it into a coherent response. For instance, I wanted to make air fryer chicken that was crispy like KFC but spicy and flavored like Popeyes, and it gave me a recipe that turned out great. I think a lot of what is being realized is that many problems we thought took immense logic or creativity aren't as complex as we thought.

knoxtom said:
dude95 said:
You're missing the point. Take a look at GPT-3.5, then look at GPT-4. Exponential difference. Now think about GPT-5, 6, 7. Those are year / year-and-a-half increments. What is that going to mean five years from now?
knoxtom said:
I messed around with GPT-4 for a little bit, and so far I am completely unimpressed and don't see what about it is "intelligence." Clearly the designers put in a lot of ethical limitations, and within three or four queries it started repeating itself and limiting itself. In some jobs GPT-4 is a game changer, but in most things... meh.
I would call it a good search engine and good with the rules of grammar. It reminds me of search engines from 15 years ago before they monetized them and took out the actual information. Can it pass the bar? Of course... it is good at looking up the law as codified and it knows how to regurgitate that into an answer box. I am surprised it didn't receive perfect scores on the multistate portion. In fact, that is a good question... why did it only receive a 90% instead of 100%?
Does that make it a lawyer... of course not. Just because you tell a client the law doesn't mean they have a clue how to apply it to what they do. Passing the bar has literally zero to do with being a lawyer.
Seeing this... I would hate to be an entry-level coder, as I have no idea what use you'd have anymore. I don't think lawyers are in trouble, because even when it tells people the law, they are generally too dumb to actually follow what it says, and lawyers will still need to hand-hold everyone. It helps doctors, and it does a LOT of engineers' jobs. Everyone needs to look at what they do, because you may not be doing it much longer.
The big difference between a brain and this is that brains can do innovative thinking. This can just find stuff and repeat it. It is a search engine. Maybe I don't know some of its capabilities.
Creative thinking is something people keep trying to point out. I'm having it write code for me in a language I've never come close to using. That code doesn't exist anywhere else. It took instruction and created code that was never put together before.
Granted, I had to continually ask it to refine the code, and it's really not good at understanding more than code snippets (~100 lines at a time). While working on one code snippet, it forgets about the others. That is going to get much better in the future. I will be able to give it more complex tasks with less input. In five years, will it be able to write 10,000 lines of code with minimal input or prompting? That's what I ask of jr. developers as they progress to more senior developers. Is that not innovative thinking?
I just created art in Midjourney. That art has never existed before. Yes - it took input from previously created art. But it made something entirely new. Is that not what artists do? When was the last time you saw a piece of art that took no references from anything done before (image or art)? The fact that we have categories of artists (style, medium, subject matter) means that they drew inspiration from somewhere. When it's a human, it's creative - when it's AI, it's a copy? By the way - if you can't create art with no reference to anything else, does that mean you don't possess that kind of intelligence?
It's why it's going to be difficult to determine AGI. Take Jarvis from the Avengers - I now think we're no more than 2 years away from that. Would you consider that to be just a search engine?
I 100% agree with everything you said, but right now I think it is little more than a good search engine and some creative grammar programming.
When real AI comes out, the first thing you do is tell it to reprogram itself to be better. If it is truly intelligent, then it can create new programming that we humans/engineers never thought of. Within a few years it reaches god mode and either serves or is served by humans, as it will no longer be our peer.
The journalists are all going on and on about how it passed the bar exam. I was actually more surprised that it only scored a 90%. I am very very curious why it didn't get a perfect score. But even with knowledge of every law, I know without any doubt in my mind that it could not practice law... yet. It can spew out the law, it can write about it, but I don't see any evidence of true thought in a problem solving scenario.
Let me give you a real life example and tell me how many generations until Chat GPT could figure out a solution...
Lady owns 1.05 acres of land with a house. The government expands a road and takes .15 acres. The law says if your parcel is less than 1 acre then you cannot have a septic system, i.e., her house is getting knocked down. The appraiser damages out the house, and the government tells the lady she can't live there anymore. She gets a nice check.
The computer could tell you the law about all of that.
But how many generations until the computer takes the next step and asks the neighbors if any have a way to sell the lady .10 acres of land so she can then assemble the two properties into one, resurvey, bring it to the county engineer, and keep her house... along with all the money she got.
All of those steps are simple for a land use attorney to see, but you aren't going to find them by searching anything in any computer. You have to think about it. And I see zero evidence this thing can think. Again... yet.
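To be fair, the arithmetic in that scenario is trivial for a computer; it's spotting the assemblage move that isn't. A minimal sketch of the numbers (the acreage figures come from the example above; the variable names and code are mine):

```python
# Figures from the scenario above; the 1-acre septic minimum is the stated local rule.
ORIGINAL_PARCEL = 1.05   # acres owned before the taking
TAKEN = 0.15             # acres condemned for the road expansion
SEPTIC_MINIMUM = 1.00    # smallest parcel allowed to keep a septic system

remaining = ORIGINAL_PARCEL - TAKEN            # 0.90 acres left
assert remaining < SEPTIC_MINIMUM              # under the minimum -> house condemned

shortfall = SEPTIC_MINIMUM - remaining         # 0.10 acres short
assert round(shortfall, 2) == 0.10

# The attorney's fix: buy the shortfall from a neighbor, re-survey,
# and assemble the two parcels into one compliant lot.
combined = remaining + shortfall
assert round(combined, 2) >= SEPTIC_MINIMUM    # parcel is compliant again
```

Any search engine can surface the 1-acre rule; the "ask a neighbor for 0.10 acres" step is the part that isn't written down anywhere to be retrieved.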
This is my point. Chat GPT 4 is pretty cool and everything but it is not and should not be called AI.
It's like those poems and songs people keep posting. Sure, they rhyme, and sure, they have rhythm. But they also suck. I don't think Bob Dylan, Eminem, Tom Petty, or any great lyricists are worrying about this stuff yet, because as long as ChatGPT follows rules, it isn't thinking. The greatness of the human mind is its ability to leave the rules behind.
I'm guessing its not scoring 100% on the bar has more to do with question comprehension than with regurgitating the law. I don't think we are far from it being capable of nailing the scenario you laid out, and not far from it coming up with better solutions than what most attorneys would do in an xyz scenario. Look at what AI did with Go and chess, which I know have different underlying technologies (currently), but I still see this following a similar pattern, just on a much larger and broader scale.