Cross-posted on F16, an example implementation of AI for higher-level research

1,106 Views | 25 Replies | Last: 2 days ago by DaShi
BusterAg
POL post is here: https://texags.com/forums/16/topics/3528585

This is a really cool example of how ChatGPT is going to eliminate a lot of lower-level research employees in knowledge-based firms. The results are imperfect, but terrifying for knowledge workers.

The topic is "My fascinating ChatGPT conversation on how to balance the budget w/ SARBOX for gvmt"

*************************

So, you have to feed it some updated assumptions, because ChatGPT relies on an OMB estimate that improper payments are only $200B per year, and that SARBOX controls would only eliminate 20% of those improper payments. Even at those unrealistically conservative estimates, ChatGPT recommends implementing SARBOX for the federal government (FEDBOX).

I fed it some more reasonable assumptions:
1) Federal fraud is about 20% of the total budget. I think that this is a conservative estimate, and arguing against it at this point would be difficult.
2) SARBOX for the federal government would cut that fraud by 80%. This assumption is a little more aggressive, but, with AI, I think it is reasonable. The trick is to use Musk's playbook of bottom-up, going with the payments first and working back. All other attempts at cutting fraud have been top-down, going with allocations first.
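A quick back-of-envelope check of what those two assumptions imply (a sketch only; the ~$6.8T budget figure is approximate, and all the percentages are the post's assumptions, not audited numbers):

```python
# Back-of-envelope FEDBOX savings under the thread's assumptions.
# The budget figure is approximate; the percentages are the post's
# assumptions, not official estimates.
federal_budget = 6.8e12   # rough annual federal outlays, ~$6.8T
fraud_share    = 0.20     # assumption 1: fraud is ~20% of the budget
cut_rate       = 0.80     # assumption 2: FEDBOX controls cut 80% of it

annual_fraud  = federal_budget * fraud_share   # ~$1.36T of fraud per year
gross_savings = annual_fraud * cut_rate        # ~$1.09T saved per year

# For comparison, the OMB-style baseline ChatGPT started from:
omb_baseline = 200e9 * 0.20                    # $200B improper x 20% = $40B/yr
```

The gap between the two scenarios (roughly $40B vs. roughly $1T per year) is why the choice of assumptions, not the model, drives the conclusion.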

From ChatGPT, I was able to get:
1) An estimate of the net savings of FEDBOX after implementation costs.
2) A timeline for a phased-in implementation: what those phases should be, how long each phase may take, the challenges of implementation, and how to overcome them.
3) A detailed cost-benefit analysis of each phase.
4) The break-even point of implementation (year 1 or 2).
5) Policy recommendations for implementation for Congress and the White House.
6) The policy makers and federal agencies that might lead the charge.
7) A draft legislative proposal to get things rolling.

The resulting research is fascinating for three reasons:
1) It gives an easy-to-understand answer for a very complex issue. It would have taken me at least two weeks to put this together, maybe a month. My ChatGPT session was about 5 minutes.
2) It uses second-level reasoning. This isn't just data; it is applying data to a problem and providing a good answer.
3) The free ChatGPT is very weak compared to the best engines out there. Pro is way better.

This is a lot of reading, but hopefully this is enlightening. And, if you are in the knowledge business, this is also likely terrifying.

Here is the link to the chat:
https://chatgpt.com/share/67b49d4d-03f4-8003-a26b-d75653ee59a9
Here is the draft legislation:
https://chatgpt.com/canvas/shared/67b4a1f07c34819197bf481c31f71dbc

The one thing that ChatGPT didn't do well was creating an executive summary of this research for a blog post. It took me 10x longer to write this post than it took to do all of the research above.

BTW, my plan is to refine this proposal, making sure there are no factual inaccuracies, and email it to every senator or congressman that I think might find it interesting.
Diggity
BusterAg said:

BTW, my plan is to refine this proposal, making sure there are no factual inaccuracies, and email it to every senator or congressman that I think might find it interesting.
that's going to be the rub. LLMs love to make up crap when they can't find the right data
IrishAg
Like any new disruptive, paradigm-shifting tech/framework, you'll see a ton of people "adopting" it because they get told to by higher-ups who think it will "increase their operating margins/EPS," and you'll get some who do due diligence and understand how to apply it properly. Like most things, the public-vs-private distinction applies to LLMs as much as any other technology. For a business, LLMs are only as good as the training/tuning for their specific use case. I think people get hung up on the public/marketed ChatGPT for reasons (political, privacy, and/or cultural) and don't realize that good companies aren't going to use the general one (on any generative AI LLM platform), but instead will train it with specific data so that it can understand the exact use case that companies need it for.

But at the end of the day, this isn't that different from digital transformation, flex work, SASE, etc in that it's a disruption to the way business is done, will cause personnel turnover, and will be implemented horribly by the majority of companies that attempt to adopt it.

This has happened before and it will happen again in the tech industry, the good or bad impact on business will only be defined after the fact.
BusterAg
Diggity said:


that's going to be the rub. LLMs love to make up crap when they can't find the right data
I mean, this is true.

But, eventually, we are going to teach LLMs to fact-check each other. If you get four LLMs working together with different training sets and different algorithms, it will be much easier to detect hallucinations.

But, at the end of the day, you do need one guy with experience to review everything for obvious problems, and a bunch of middle managers to do a thorough review. Still, your analyst-level research staff is going to shrink by 60% to 90%.
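The "four LLMs cross-checking each other" idea can be sketched as a simple consensus vote. Everything here is hypothetical: `models` stands in for whatever API wrappers you would actually use, and the 75% threshold is an arbitrary choice for illustration.

```python
from collections import Counter

def cross_check(question, models, threshold=0.75):
    """Ask several independent models the same question and flag the
    answer as a possible hallucination when consensus is weak.
    `models` is a list of callables (hypothetical API wrappers)."""
    answers = [ask(question) for ask in models]
    best, votes = Counter(answers).most_common(1)[0]
    agreement = votes / len(answers)
    return {"answer": best,
            "agreement": agreement,
            "needs_human_review": agreement < threshold}

# Toy stand-ins for four differently trained models:
models = [lambda q: "yes", lambda q: "yes", lambda q: "yes", lambda q: "no"]
result = cross_check("Does FEDBOX break even by year 2?", models)
```

With real models you would also vary the training data and vendor, as the post suggests, so that errors are less correlated and disagreement actually signals a problem.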
IrishAg
BusterAg said:

But, eventually, we are going to teach LLM's to fact check eachother. If you get four LLMs working together with different training sets and different algorithms, it will be much easier to detect hallucinations.

But, at the end of the day, you do need one guy with experience to review everything for obvious problems, and a bunch of middle managers to do a thorough review. But, your analyst level research staff is going to shrink by 60% to 90%.
Yeah, those analyst level research staff will be reduced and refocused on training the AI on the same data to better improve the responses. Those will be long term careers but will have fewer opportunities available. But such is life in the world of tech
Diggity
the previous poster was correct that using internal LLMs is going to be a lot more useful.

relying on public LLMs is like using Wikipedia to source your term paper (without the fact checking).
IrishAg
Diggity said:

the previous poster was correct that using internal LLM's is going to be a lot more useful.

relying on public LLM's is like using Wikipedia to source your term paper (without the fact checking).
Very much, and I think all the different press makes it tough for people not in the industry to understand what the actual uses and implementations of AI are. I work in infosec, and I can tell you most companies have already implemented LLMs into their technology stacks. But while they may be using ChatGPT or something else SaaS-based or locally deployed, they use their own data to train the model. So instead of a tech needing to understand how the data normally flows, compare it to a spike, and then make a decision based on the parameters of that spike in traffic, they can just ask in plain English "is this spike an attack" and the LLM will give them probability-based answers built off of the data the model was trained with. That speeds up accuracy in identifying and responding to an attack. And while that is very good for the business, it's probably bad for the techs, as the need for them will be reduced.
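The traffic-spike example boils down to turning raw numbers into a question a tuned model can answer. A minimal sketch (the baseline window, the sample figures, and the idea of handing the summary to a local model are all assumptions for illustration):

```python
import statistics

def spike_question(history, current):
    """Summarize a traffic spike against its baseline window so it can
    be handed to a tuned model as a plain-English question.
    `history` is requests/min over a normal period (hypothetical data)."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    z = (current - mean) / stdev if stdev else 0.0
    return (f"Traffic is {current} req/min against a baseline of "
            f"{mean:.0f}+/-{stdev:.0f} req/min (z-score {z:.1f}). "
            f"Is this spike an attack?")

q = spike_question([100, 110, 90, 105, 95], 500)
```

The statistical summary is the cheap, deterministic part; the value of the trained model is mapping that summary onto what "normal" looks like for that specific company's traffic.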
fig96
A lot of people misunderstand what AI is good at.

Sure, you can ask it anything which is when you get hallucinations and made up responses, it doesn't know what it doesn't know. And as a creative I have my own thoughts on AI art that's been trained on the work of people with actual talent.

But analysis of a contained data set is where AI shines. I think there's more complexity to it than the plan in the OP accounts for (the people management of something like that is a huge aspect not really addressed), but being able to rapidly run analysis and simulations is a best-case use case, and it's definitely going to reduce the role of a lot of analysts who aren't coming up with true insights.

The product I work on now leverages AI to analyze sales and related data to show trends and insights of where changes can affect profitability.
BusterAg
IrishAg said:

Yeah, those analyst level research staff will be reduced and refocused on training the AI on the same data to better improve the responses. Those will be long term careers but will have fewer opportunities available. But such is life in the world of tech
Before the invention of the spreadsheet, accounting departments were packed with lower-trained bookkeepers armed with adding machines. Getting rid of all those positions made accounting more efficient, making way for the increase in the size of corporations due to better availability of accounting data and the lowered agency cost of getting more complex data.

If there is any analogy of AI that explains what is coming, I think that this is the best one. It's just the next logical leap in technology of business management.

If you are raising young men, turn them on to AI development, medicine, or trade school / labor management. Too much risk in knowledge-based careers.
BusterAg
Diggity said:

the previous poster was correct that using internal LLM's is going to be a lot more useful.

relying on public LLM's is like using Wikipedia to source your term paper (without the fact checking).
It's more like a Wikipedia that will write your term paper, and then you have to document all the facts that it came up with. That burden will decrease as hallucinations decrease. Hallucinations are decreasing about as fast as Moore's law, and will likely surpass that pace.
Lathspell
They are only as good as the information being fed into them, and that information is controlled by people with an agenda. Therefore, I would not trust these LLMs too far. They are great as a tool but should always be checked. ChatGPT seems to make stuff up at least once every three conversations I have with it.

Great resource to speed up the mundane data assessment and such, but you always need to know what you're talking about to some extent to really use it to its fullest.

Hell, I asked it how one would integrate a specific UCaaS solution with another specific CRM. It spat out this long list of instructions. When I responded that most of what it's directing me to is not present in my version of CRM or UCaaS solution, it basically said it made up the whole thing as a hypothetical instead of simply telling me it didn't have the data to answer my question directly.
fig96
BusterAg said:

Before the invention of the spreadsheet, accounting departments were packed with lower-trained bookkeeppers armed with adding machines. Getting rid of all those positions made accounting more efficient, making way for the increase in size of corporations due to better availability of accounting data and the lowered agency cost of getting more complex data.

If there is any analogy of AI that explains what is coming, I think that this is the best one. It's just the next logical leap in technology of business management.

If you are raising young men, turn them on to AI development, medicine, or tradeschool / labor management. Too much risk in knowledge based careers.
What's funny is one of my products is literally trying to get people to take this data management out of excel and into solutions that can much more effectively link together and analyze data
IrishAg
BusterAg said:

Before the invention of the spreadsheet, accounting departments were packed with lower-trained bookkeeppers armed with adding machines. Getting rid of all those positions made accounting more efficient, making way for the increase in size of corporations due to better availability of accounting data and the lowered agency cost of getting more complex data.

If there is any analogy of AI that explains what is coming, I think that this is the best one. It's just the next logical leap in technology of business management.

If you are raising young men, turn them on to AI development, medicine, or tradeschool / labor management. Too much risk in knowledge based careers.
But that's the same in most fields, evolution of a technology and/or saturation of qualified resources will cause dynamic shifts in pay structure and/or job opportunities. AI isn't any different in that, which is what I was trying to convey in my first response.

When cloud hit and digital transformation became the hot buzz word that was going to revolutionize the face of business, everyone scrambled to just shove everything they could into "the cloud". Just like AI, a lot of people didn't take the time to understand the actual needs and requirements to do that, they just did it, and kept trying to do it for years. It was only when real defined technical frameworks came out that everyone stopped to realize that not everything should be in the cloud. And most companies built out a hybrid system with some core things in a data center and other things in cloud infrastructure.

I wouldn't narrow down anything when raising young people, as I'm doing with my daughter right now, who's 8. Knowledge-based careers will change significantly, but they'll still be there, and the foundational requirements will be too.

Again, AI has become such a talking point thanks to the state of our political system right now, but from a technology standpoint, we've all done this many times before. It's disruptive, and it will displace jobs, but most companies won't be able to implement it properly which will lead to a back slide in 5 to 10 years that brings back a lot of similar positions with a slightly different focus.
BusterAg
Lathspell said:

They are only as good as the information being fed into them, and that information is controlled by people with an agenda. Therefore, I would not trust these LLM's too far. They are great as a tool to be used but should always be checked. ChatGPT seems to make stuff up at least every 3 conversations I have with it.

Great resource to speed up the mundane data assessment and such, but you always need to know what you're talking about to some extent to really use it to its fullest.

Hell, I asked it how one would integrate a specific UCaaS solution with another specific CRM. It spat out this long list of instructions. When I responded that most of what it's directing me to is not present in my version of CRM or UCaaS solution, it basically said it made up the whole thing as a hypothetical instead of simply telling me it didn't have the data to answer my question directly.
Yeah, hallucinations are a thing. Especially right now while AI is still in the delivery room having just been born.

What is your estimate of how long it will be before hallucinations go down from about 1% of all words created now to about 0.00000001%? I'm bullish that it won't be that long.
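The rate question answers itself with a little arithmetic, if you grant the (optimistic, unproven) assumption that hallucination rates halve on a Moore's-law-style schedule of every two years:

```python
import math

start_rate  = 1e-2    # ~1% of generated words, the post's current estimate
target_rate = 1e-10   # 0.00000001%, the post's target rate
halving_years = 2     # assumed Moore's-law-style pace (not a measured figure)

halvings = math.log2(start_rate / target_rate)   # ~26.6 halvings needed
years    = halvings * halving_years              # ~53 years at that pace
```

So "not that long" depends entirely on the pace: at a two-year halving it is decades, not years, and only faster-than-Moore progress shortens that materially.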

Funny story about the legal industry. Some associate wrote a brief using ChatGPT, it was reviewed by the partner, and filed. It was completely made up. The brief cited cases that did not exist, and even court districts that did not exist. There is no Western Circuit District Court of Rhode Island.

For now, AI has to be very closely watched. That will be reduced as time goes on.
BusterAg
fig96 said:

What's funny is one of my products is literally trying to get people to take this data management out of excel and into solutions that can much more effectively link together and analyze data
That is because Excel is not a database. It is an accounting tool. It should be used to build complex models that you feed data into.

But, it is easier to understand than a database and useful when you only have 1,000 or so rows of data. Any more than that, and using Excel for data is like using a racecar in a tractor pull, or a tractor in a NASCAR race.
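The "models you feed data into" split is easy to demo: keep the data in a real database and let the spreadsheet (or anything else) consume query results. A sketch with SQLite and made-up inventory rows:

```python
import csv
import io
import sqlite3

# Hypothetical CSV export of a sheet that outgrew Excel.
raw = io.StringIO("sku,qty,price\nA1,12,9.99\nA2,3,14.50\nA3,40,2.25\n")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inventory (sku TEXT, qty INTEGER, price REAL)")
con.executemany(
    "INSERT INTO inventory VALUES (?, ?, ?)",
    [(r["sku"], int(r["qty"]), float(r["price"])) for r in csv.DictReader(raw)],
)

# The kind of aggregate you'd otherwise build with SUMPRODUCT:
(total_value,) = con.execute(
    "SELECT SUM(qty * price) FROM inventory").fetchone()
```

The database handles the row count and the joins; the spreadsheet stays what it is good at, which is modeling on top of small result sets.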
BusterAg
IrishAg said:


I wouldn't narrow down anything when raising young people,
For the record, I did not recommend that. "Young men" definitely rules out 8-year-old girls. But I think that making those fields an attractive option, and discussing the risks and rewards of each when young people are old enough to understand, is not a bad idea. I specified men because labor / trades / AI is usually more favored by young men than young women, for biological reasons.

Too many kids go to college because that is what is expected and traditional. Probably not the best bang for your buck right now.
BusterAg
IrishAg said:


Again, AI has become such a talking point thanks to the state of our political system right now, but from a technology standpoint, we've all done this many times before. It's disruptive, and it will displace jobs, but most companies won't be able to implement it properly which will lead to a back slide in 5 to 10 years that brings back a lot of similar positions with a slightly different focus.
In 1970, you probably had 20 adding machines being manned for every real accountant doing journal entries.

Now the accountants do all that math themselves in excel.

Accounting didn't go away; bookkeeping did (largely).

Same with every knowledge-based profession, IMO. Lawyers, accountants, engineers, database managers, etc. are not going away. It's just that entry-level analyst jobs will become 10X more competitive.

Also, I think that AI is going to be far more disruptive than cloud computing. But, I guess we will see.
pocketrockets06
I find the basic premise of this whole thread really hilarious: it starts with the idea that the human knows better than the AI (the fraud assumptions and SARBOX effectiveness are both wrong), but the AI is going to replace all the people because it will do their jobs better than them.

I mean, Elon's already demonstrating the flaws in this - DOGE posted its first round of savings today, and the biggest single line item of savings is an $8 billion contract canceled (hilariously, it's for logistics support of ICE, which I thought he supported) ... except it's actually an $8 MILLION contract that their AI parsed incorrectly.
IrishAg
BusterAg said:

IrishAg said:


I wouldn't narrow down anything when raising young people,
For the record, I did not recommend that. "Young men" definitely rules out 8 year old girls. But, I think that making those fields an attractive option, and discussing the risks and rewards of each when young people are old enough to understand, is not a bad idea. I specified men because labor / trades / AI is usually more favored by young men than young women, for biological reasons.

Too many kids go to college because that is what is expected and traditional. Probably not the best bang for your buck right now.
So, am I out of this conversation because I'm raising a young girl and you don't feel like she should be in the field? I seriously don't understand your comment here. I changed it from young men to young people because I only have a daughter, so that's my frame of reference. As someone who has been in IT from an infrastructure, implementation, and vendor point of view for over 25 years, in a career running from software engineering to infosec strategy for different Fortune 500 companies, I've seen no biological issues with women in the field. Personally, I will more than encourage my daughter in AI-based studies if she is interested (as I know she could be extremely successful), or other fields if that's where her interests lie (including trades).

I mean, I agree that kids going to college just to go to college is a very bad decision, and I actively tell people that vocational skills should be looked at much harder. Again, if my daughter doesn't have a specific plan, then I would probably have her look at that, as it can be incredibly lucrative.

Not sure where to go with this conversation. Do you realize that what you're talking about in the original post (not specifically your topic, but the interaction type you had) represents an extremely small portion of the AI industry?

I mean, in my opinion the public-facing ChatGPT is more of a toy for sales people to be lazy, a kid doing research for a school paper, and/or someone wanting to do generalized data crunching, but I don't know of anyone who would trust it for actual business decisions. Hell, I host multiple LLMs in my house, as a hobby, that run circles around ChatGPT when it comes to accomplishing my tasks, with a fraction of the compute.

Overall, it seems like this post is meandering all over the place. Is AI disruptive? Yes, it will displace a number of jobs and job roles, but at the same time, like all paradigm shifts in technology, it will add more jobs and job roles in ways that weren't available before. Does it have the potential to be the most disruptive thing in tech during my career, or ever? Sure. Is it a guarantee? No. I think most people seriously don't understand how something like the concept of cloud computing fundamentally changed the way technology and companies work in the world. It's just now taken for granted since it was never in the public eye like AI is.
Rex Racer
BusterAg said:

But, eventually, we are going to teach LLM's to fact check eachother. If you get four LLMs working together with different training sets and different algorithms, it will be much easier to detect hallucinations.

But, at the end of the day, you do need one guy with experience to review everything for obvious problems, and a bunch of middle managers to do a thorough review. But, your analyst level research staff is going to shrink by 60% to 90%.
I recently had an employee at Dell tell me that he has made a habit of writing his prompts asking the LLM to "show your work" and to double-check its results. Then he asks a 2nd LLM the same question, and then has a 3rd LLM confirm both answers. He said he finds that he "pretty much" always gets the correct answer that way.
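That workflow is just prompt-chaining with independent verifiers. A sketch with stub functions standing in for the real model calls (all names here are hypothetical, not a real API):

```python
def verified_answer(question, primary, checkers):
    """The workflow described above: one model answers and shows its
    work; independent models then vote on whether the reasoning holds.
    `primary` returns (answer, reasoning); each checker returns a bool."""
    answer, reasoning = primary(
        question + "\nShow your work and double-check your result.")
    votes = [check(question, answer, reasoning) for check in checkers]
    # On any disagreement, punt to a human instead of trusting the answer.
    return answer if all(votes) else None

# Stub models for illustration:
primary = lambda prompt: ("4", "2 + 2 = 4")
checkers = [lambda q, a, r: a == "4", lambda q, a, r: "=" in r]
result = verified_answer("What is 2 + 2?", primary, checkers)
```

The design choice worth noting is that the checkers see the reasoning, not just the answer, which is what "show your work" buys you.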
fig96
BusterAg said:

That is because Excel is not a database. It is an accounting tool. It should be used to build complex models that you feed data into.

But, it is easier to understand than a database and useful when you only have 1,000 or so rows of data. Any more than that, and using excel for data is like using a racecar in a tractor pull, or a tractor in a nascar race.
I'm well aware, but that doesn't mean it's not often used like one. Lots of companies manage inventory and pricing in Excel sheets with tens of thousands of records.

Which, yes, is an absolutely terrible idea which is why we're building an alternative (and already have a few others).
Bradley.Kohr.II
The main thing AI seems to be good at, is paperwork.

IOW, it should eliminate most bureaucrats.

One of my friends is successfully doing this for colleges, and saving them staggering amounts of man hours.
BusterAg
It is pretty well a given that men are biologically geared more towards things and less towards relationships. There are exceptions, of course.

About 80% of coders globally are men.

I wish nothing but the best for you and your daughters. No one will know them better than you and their mother, so you are obviously the most qualified person to make parental decisions in their interests.

I was just making a suggestion.

Happy Friday.
BusterAg
I guess my point was that the spreadsheet revolutionized the accounting industry, not the IT industry.

We are likely in full agreement on both these issues.

Happy Friday!
Bradley.Kohr.II
I will say, our plan for our daughter/all of our kids (only one so far), is online/home school, and focus on "being human".

Art, music, theoretical math, gardening/farming, cooking, sailing, riding, outdoors, speaking, etc.

Essentially, living a life as far from computers as possible, because anything which can be done by a computer, will be done by AI, by the time she's 30.
DaShi
Most people are using LLMs as search engines.

You can literally go build anything with an LLM right now and know zero code. It's a massive unlock. Especially once you have a multi-step agent in your IDE.