10 Comments
Jul 18, 2022 · edited Jul 18, 2022

Do you imagine an AGI as being able to do something like this (without any prior knowledge or explicitly coded support):

- Be taught the rules of chess verbally with a small number of visually demonstrated examples

- Play a handful of games

- Explain the rules correctly to someone else, or another instance of itself, in a similar way

This seems like it would fall under "any cognitive task" but feels much farther off than 2029 to me. If I had to bet I'd say it wasn't even achievable without being explicitly coded for at some level.

author
Jul 18, 2022 · edited Jul 18, 2022

The first two things are routinely done. In fact, AlphaGo Zero didn't need any instructions; it just played against itself many times over. I believe one of the AI models also did #3 for a game; I will have to dig it up. Many of the large language models (e.g. Google's PaLM) use chain-of-thought reasoning, in which the model explains how it arrived at its understanding of a joke, a math problem, etc. See this example from Google's PaLM, which was released just two months ago:

Question posed to the AI: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?

Answer from the AI: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

There are even more impressive examples out there. See you in the future!
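The arithmetic in that chain-of-thought answer checks out; here is a minimal sketch of the same reasoning steps in Python (purely illustrative, mirroring the model's stated steps, not anything from PaLM's API):

```python
# Reproduce the chain-of-thought steps from the PaLM tennis-ball example.
starting_balls = 5    # Roger starts with 5 tennis balls
cans = 2              # he buys 2 more cans
balls_per_can = 3     # each can holds 3 balls

new_balls = cans * balls_per_can    # 2 cans of 3 balls each -> 6
total = starting_balls + new_balls  # 5 + 6 -> 11

print(total)  # 11
```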


I should clarify the first two points: I mean learning more or less exactly the way a human would, i.e. with a camera and microphone, looking at another person over a chessboard, and having that person say things like "knights move in this shape [demonstrates L shape]" and "pawns can't go backwards". These are examples of the common-sense knowledge and understanding that AI has lacked so far, e.g. an implicit grasp of what the word "backwards" means in the context of a board game. Are there any examples of something like this being done successfully?

That's where the gap between our current systems and AGI is, as I see it. AlphaGo and chess engines might not need instructions, but, if I understand correctly, they rely on a tree search algorithm that had to be coded in manually by the programmers, whereas the AGI in my chess example wouldn't have that; it would be much more general. Maybe it could have a tree search algorithm coded in somewhere, but if the operator has to spoon-feed it instructions like "use your tree search algorithm to optimise your side's eval function in this two-player game", then it starts to seem more like "regular" AI/software to me and not AGI.
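For concreteness, the kind of hand-coded tree search being referred to looks roughly like this: a minimal negamax sketch over a toy two-player game. The `Nim` game and the `moves`/`apply`/`is_terminal`/`eval` interface are hypothetical stand-ins for illustration only; AlphaGo itself uses Monte Carlo tree search guided by neural networks, which is more sophisticated than this:

```python
class Nim:
    """Toy two-player game: take 1 or 2 sticks; whoever takes the last stick wins."""
    def __init__(self, sticks):
        self.sticks = sticks
    def moves(self):
        return [n for n in (1, 2) if n <= self.sticks]
    def apply(self, n):
        return Nim(self.sticks - n)
    def is_terminal(self):
        return self.sticks == 0
    def eval(self):
        # If the side to move faces an empty pile, the previous player
        # took the last stick and won, so this position scores -1.
        return -1 if self.sticks == 0 else 0

def negamax(game, depth=10):
    """Best achievable score for the side to move, searching `depth` plies
    ahead with a programmer-supplied eval function."""
    if depth == 0 or game.is_terminal():
        return game.eval()
    # Each child position is scored from the opponent's view, so negate it.
    return max(-negamax(game.apply(m), depth - 1) for m in game.moves())
```

The point is that the search procedure and the eval function are both spoon-fed by the programmer; nothing here was learned from a verbal explanation of the rules.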

I hadn't heard of PaLM, will check it out.

author

Hi Gus - Have you heard about DeepMind's Gato (https://www.deepmind.com/publications/a-generalist-agent)? It was just released in May and made quite a stir because it is as generalized as it gets right now: effective across hundreds of tasks, with only about 1B parameters. The founder of DeepMind confirmed last week that they are in the middle of scaling it up. It will be really amazing to see what a 200B-parameter Gato model could accomplish. Check it out and let me know what you think.


Hi Ash, thanks, just read the Gato paper. I'm not that familiar with neural network terms of art, so I couldn't figure out how they determined which modalities were used for the input and output. E.g. image captioning tasks would be "image in, text out", but how does Gato know what the input is and what type of output to generate? Could it do "text in, continuous actions out"? It would be great to see some deployment examples to get more of a sense of this, or if you could clarify from your understanding of it. In any case, it is awesome that the same network can work across so many different modalities.
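On the "how does it know" question, my reading of the Gato paper is that every modality is flattened into one shared token sequence: text via SentencePiece, images as patch embeddings, and continuous values compressed with mu-law encoding and then discretized into bins, with the deployment prompt (prior observations and actions from the target task) telling the model what kind of token to emit next. A rough sketch of that continuous-value tokenization; all constants here (mu, the bin count, the vocabulary offset) are illustrative assumptions, not exact values from the paper:

```python
import math

def mu_law(x, mu=100, m=256):
    """Compress a continuous value into roughly [-1, 1].
    Constants are illustrative; treat this as a sketch of the idea."""
    return math.copysign(math.log(abs(x) * mu + 1) / math.log(m * mu + 1), x)

def to_token(x, bins=1024, offset=32000):
    """Discretize a compressed value into `bins` uniform buckets and shift
    the bucket index past a hypothetical text vocabulary of size `offset`,
    so action tokens and text tokens share one vocabulary."""
    v = min(max(mu_law(x), -1.0), 1.0)        # clamp to the encoding range
    b = min(int((v + 1) / 2 * bins), bins - 1)  # uniform bin index
    return offset + b
```

Under a scheme like this, "text in, continuous actions out" is just a question of what tokens the context conditions the model to produce next.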


Do you expect a radical transformation of civilization after the development of AGI? Like massive automation or space colonization or something of that kind?

author

One of the main reasons I started to write is that very few people in the world realize how close we are to radical transformation. There will be immense automation, with new breakthroughs and discoveries occurring at an insane rate. I try to imagine the rate of progress by picturing myself waking up in the morning, opening up the news, and reading about a new breakthrough as Nobel-worthy as DeepMind's AlphaFold. And that is just once every 24 hours; it will eventually get much quicker than that.


U sure?

author

Hi there! ~2030 does increasingly seem like the 50/50 date, meaning a 50% chance it happens before 2030 and 50% after. I will be covering a lot more on this in the coming days.

As for the singularity, it all depends on how fast the AI take-off is.


Then what, singularity?
