On item 3, on the order of AI development: I think I heard that Tesla can, is, or will build software simulations of the bot. It would be obvious to build in physics, give the simulation increasingly difficult tasks (stand still, walk, pick something up, etc.), and let the neural network do trial and error at software speeds. Add in well-defined environments, like a Gigafactory, give the bot more challenging and varied tasks, and the software can learn via simulation at a potentially staggering rate.
Isn't this akin to how AlphaGo, the Go player, was developed? It may still be 3rd on the list, but I don't think it will be far behind.
We'll all find out when the Tesla Bot makes its debut at the next Tesla AI Day this September 30.
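The simulate-and-escalate idea above is essentially curriculum learning, and it can be sketched in a few lines. Everything below (the task list, difficulties, and learning rule) is a toy illustration; none of it reflects Tesla's actual simulation stack.

```python
import random

# Toy sketch of curriculum learning in simulation: the "bot" is just a skill
# number, each task has a difficulty, and training is trial and error at
# software speed. All names and numbers here are illustrative assumptions.

CURRICULUM = ["stand still", "walk", "pick something up", "factory task"]
DIFFICULTY = {"stand still": 1.0, "walk": 3.0,
              "pick something up": 6.0, "factory task": 10.0}

def attempt(skill, task, rng):
    """An attempt succeeds when skill plus a bit of luck beats the difficulty."""
    return skill + rng.uniform(0.0, 2.0) >= DIFFICULTY[task]

def train(seed=0, max_trials_per_task=10_000):
    """Advance through tasks easiest-first, learning a little from each failure."""
    rng = random.Random(seed)
    skill = 0.0
    log = []
    for task in CURRICULUM:
        trials = 0
        while not attempt(skill, task, rng):
            skill += 0.01  # each failed trial still teaches a little
            trials += 1
            if trials > max_trials_per_task:
                raise RuntimeError(f"curriculum stalled on {task!r}")
        log.append((task, trials))
    return skill, log

skill, log = train()
print(log)  # failed trials needed before each task was first passed
```

The point of ordering the tasks easiest-first is that skill earned on "stand still" carries over, so each harder task starts from a higher baseline, which is exactly why simulated trial and error can compound so quickly.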
Hi Dave - I just did an interview on this YouTube channel (https://www.youtube.com/watch?v=lH6E1-aPvoQ) and discussed this very topic. I mentioned how the various AI eras are cascading into each other. For example, AGI seems to have a >50% chance of happening by 2035. The Tesla Bot will start mass production in 2025-2030. AI job substitution is starting to happen now.
As far as the Tesla Bot in simulation, there are reports that the bot is already walking in all sorts of conditions in simulation. The simulation engine is super critical for the Tesla Bot and gives Tesla an unassailable lead. I will be doing Tesla Bot posts ahead of the Sep 30 AI Day.
One has to wonder who this automation benefits. The goal of automation has been to better human lives. In the past you had to grab a bigger piece of the pie to be wealthy; industrial output increased productivity such that getting a bigger piece of the pie no longer means taking someone else's. With AI replacing jobs en masse, what are we humans left to do? I'd like to know your thoughts on: 1) If jobs don't provide consumers with the wealth to consume AI-created products, who are these AI products being created for? 2) Are we creating social problems with AI? 3) Will AI replacing jobs create political instability, and can social outrage be a speed bump for future AI development?
Hi Sid, thanks for your insightful comments and questions! There is so much to unpack for each question, but I'll try my best.
1) Not assuming artificial general intelligence (AGI), the "salaries" of all the displaced workers would basically flow into profits for the companies that own those AI services. Those companies might be required to distribute much of those earnings back to the affected workers in the form of a universal basic income. In that scenario the wealth just shifts around, but you are left with one big problem, which leads to your second question: what will all those former workers do with their lives?
2) As with every technology, there are benefits and drawbacks; think about cars, cellphones, the internet. There are really two grand challenges for AI. The biggest by far, long-term, is aligning an intelligent AI with human goals and making sure it's safe. The second is what people will do day-to-day when AI automation starts occurring. Will it free people up to be more creative, spend time with children and grandparents, pursue passions? Or will the vast majority of people succumb to bingeing Netflix, gaming, and the future metaverse? Nobody can tell for sure, but this is what we should start thinking about as a society. Meaning is such an important part of life, and we will all have to find new meaning.
3) There will likely be some instability, but not necessarily unrest. Here's how I think about it: we are climbing a mountain, and at the top is AGI, which can solve so many issues. But on the way up the mountain there are crevasses. Moreover, the mountain has become a lot steeper in just the last year, meaning the window between AI automating a large part of the workforce and AGI seems to be shrinking.
Thanks for your response. A few more points: 1) AGI gives good reasons to be optimistic, but only time will tell whether we see a betterment of human life and an equitable distribution of wealth. We have only seen things go in the opposite direction under modern capitalism, with no signs of the widening wealth gap narrowing. Can private organizations be trusted to do what's best for society over profits? The answer really depends. 2) I agree, meaning is such an important aspect of human life, and it can be unique to each person. Work is one way humans find meaning in their existence. I'm not sure the removal of work is a good thing; perhaps, as you mentioned, new work and passions will replace day-to-day work. We have to think about this with increasing human lifespans in mind too. Given a sufficient length of time, at what point does not having work stop making life engaging enough? If AI is better than humans in every aspect, what do you pursue betterment for, and what do you gain from it, either materially or emotionally?
Sounds like a good time for governments to start regulating this and breaking up big tech. I do not see the utility in AI replacing so much work; high-level tasks are not going to increase exponentially, so the current trajectory appears to align closer to a dystopian future than some utopia. There is so much research linking lack of productive work to mental illness, boredom, etc. that it should steer us away from having the majority of the population unemployed.
Honest and friendly feedback: the last article was so good. I showed a few people because I was so impressed, which is something I rarely do. I cringed a little at the headline here (scare tactics, rather than living with AI), especially when the content is pretty important. A few websites started monetizing image creation this month. I would encourage you to explore more of what is happening and the direction we are headed. Please keep up the good work.
Hi Jon - Thanks so much for your readership and feedback! Noted about the headline.
In general, I am super optimistic about AI and all its applications to improve humanity; those are the majority of topics on my to-write list. But in analyzing this small cross-section of AI over just the last two weeks, it hit me that most of society is 1) unaware of how far AI has advanced and/or 2) not adequately thinking about the 2nd- or 3rd-order effects. For example, just yesterday someone I admire, OpenAI's CEO, reiterated that DALL-E 2 is going to be a "tool" to help creators and society, not take away jobs. That's not likely to remain true in the near to mid-term. We should have an appropriate discussion as a society, with policy makers and big tech. It's such a hard issue for society to solve! Thanks again!
Everyone knows that the first 80% is easy and the last 20% is much harder.
AI can probably automate 80% of a job, but a human is required to finish off the last 20%. In the "African fortune teller" image, you would want a human to fix it up a bit so it doesn't look like three images thrown together. Or if a software program needed an extra button or small changes, you would need a person to understand the request and implement it.
Hi Jules - Thanks for your readership! I see where you are going, and may I suggest this thought experiment: say there is a team of 4 designers. AI automates 50% of their work, but there is still a lot of fine-tuning and certainly a lot of time in meetings with co-workers. This team now has more free time, so their company gives them more work; great, they are still working at capacity. However, suppose AI gets to 80% automation and there is no additional work their company can offer. That's when one, two, or more of them are at risk of losing their jobs. Hopefully that makes sense. That's my two cents and what I was trying to convey with the AI automation graphs and the discussion around labor supply and demand.
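That thought experiment reduces to back-of-the-envelope arithmetic. The function and numbers below are just the illustrative ones from the example, not real labor data:

```python
import math

def designers_still_needed(team_size, automation_fraction, extra_demand=0.0):
    """Humans needed after automation, assuming total demand grows by
    `extra_demand` (e.g. 1.0 = the company doubles the workload) and the
    remaining human work is spread over full-time designers."""
    total_work = team_size * (1.0 + extra_demand)          # in designer-units
    human_work = total_work * (1.0 - automation_fraction)  # the unautomated part
    return math.ceil(human_work)

# 50% automation, and the company doubles the workload: all 4 stay busy.
print(designers_still_needed(4, 0.50, extra_demand=1.0))  # 4
# 80% automation with no extra work to absorb the slack: ~1 designer suffices.
print(designers_still_needed(4, 0.80))  # 1
```

The crossover is the key point: extra demand can soak up moderate automation, but once automation outpaces any plausible growth in demand, headcount has to fall.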
This assumes all creative content (the data training the AI) has already been created. Humans will still be the only ones able to create truly new and unique content.
AI will drive down the costs and appeal of anything it creates at scale.
I don’t suppose the answers to these are clear.
What do you think about IT security/cybersecurity jobs and AI?
You misunderstand the BLS numbers. The 22% is *cumulative* projected growth from 2022-2030, not annual.
Thanks Nicholas for pointing that out. I went back and modified. I will check out more of your work!
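For anyone converting such figures: a cumulative projection annualizes to a compound rate, so 22% over the 8-year 2022-2030 window works out to roughly 2.5% per year, not 22% per year:

```python
def annualized_rate(cumulative_growth, years):
    """Convert cumulative growth (e.g. 0.22 for 22%) over `years` into a
    compound annual growth rate."""
    return (1.0 + cumulative_growth) ** (1.0 / years) - 1.0

rate = annualized_rate(0.22, 8)  # 22% cumulative over 2022-2030
print(f"{rate:.1%}")  # 2.5%
```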
In a capitalist economy profit is king.
Labor is usually the largest line item on any company's financial statements.
CEOs who do not use AI to eliminate labor will find themselves out of a job.
Consumption by average people is 70% of U.S. GDP.
When we reach a tipping point where there are more AI workers than human workers, capitalism collapses.
What will be the socioeconomic fallout of this? Unknown. Chaos at first: riots, Dickensian poverty, neo-Luddites smashing machines.
Perhaps a UBI will be instituted. I don't expect it to be generous.