Artificial General Intelligence Speculations
These are some speculative thoughts about developing Artificial General Intelligence (AGI) which take Critical Rationalist philosophy into account.
The biggest current problems for developing AGI are creating an idea data structure and resolving conflicts of ideas.
A secondary problem is how to “randomly” vary ideas (including varying the null idea, and varying multiple ideas together to create a combination with parts of several previous ideas). This problem appears less fundamentally hard than the big ones, partly because an approximate solution is OK. Generating a lot of bad ideas is OK (criticism should refute them), but making it impossible to generate large categories of good ideas would be a failure. The variation should be unbounded overall, so that there is some series of steps that could reach any finite idea. Also, we can’t work on this in detail until after we have the idea data structure.
The other reason variation is a secondary problem is that it shouldn’t be built into the AGI and unchangeable. AGIs should be able to change their own variation methods while running. Variation should be done according to ideas. This meta-variation-method – vary according to ideas – is unbounded as long as the ideas are unbounded. The AGI designer should come up with a variation method for testing, as an example, and to check his own understanding, and should include something in the AGI as an initial default. But it’s not his job to create an ideal variation function. That would be thinking for the AGI instead of creating something that thinks for itself. It’s no more valid than the AGI designer trying to learn the ideal political philosophy and then loading that into the AGI, or, worse, trying to learn everything and then loading all the right ideas into the AGI as initial theories, so that the AGI will never change its mind about anything and never learn anything.
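As a very rough sketch of what “vary according to ideas, and let the AGI replace its own variation method while running” might look like, here’s a toy in Python. Everything in it (the AGI class, the default method, the recombining method) is hypothetical and illustrative, not a proposed design:

```python
import random

class AGI:
    def __init__(self):
        # The variation method is stored like any other changeable part of
        # the AGI, not hardcoded, so it can be replaced while running.
        self.ideas = ["go to Taco Bell"]
        self.variation_method = self.default_variation

    def default_variation(self, idea):
        # Crude initial default from the designer. Generating lots of bad
        # variants is OK; criticism should refute them later.
        qualifiers = ["now", "later", "never", "with a friend"]
        return f"{idea} {random.choice(qualifiers)}"

    def vary(self, idea):
        return self.variation_method(idea)

    def adopt_variation_method(self, method):
        # The AGI swaps in a better method it thought of itself.
        self.variation_method = method

agi = AGI()
print(agi.vary("go to Taco Bell"))

# Later the AGI adopts a method that also recombines parts of other ideas.
def recombine(idea):
    other = random.choice(agi.ideas)
    return f"{idea}, combined with: {other}"

agi.adopt_variation_method(recombine)
print(agi.vary("go to sleep"))
```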
For resolving conflicts: Idea 3 says that ideas 1 and 2 contradict each other. At least one of those ideas, 1, 2 or 3, is false. But the situation between ideas 1 and 2 is symmetric, and there’s no generic way to choose between the ideas. There’s no reason to have a higher opinion of one idea than another. BTW, it can be just two ideas if they know about each other and say they contradict (or if just one says there’s a contradiction). So the question is how to break the symmetry, how to find some asymmetric reason for taking sides in the conflict between the ideas.
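Here’s a minimal sketch of that situation, assuming ideas are just strings and a conflict claim is a tuple. The point is that nothing generic in the structure picks a winner; resolution has to come from a further criticism that hits one side and not the other:

```python
# Toy representation: idea3 claims that idea1 and idea2 contradict.
idea1 = "Go to Taco Bell now."
idea2 = "Taco Bell is closed now."
idea3 = ("conflict", idea1, idea2)  # at least one of the three is false

def resolve(conflict, criticisms):
    # Take a side only if some further idea refutes one side and not the
    # other. There is no generic, built-in way to break the symmetry.
    _, a, b = conflict
    a_refuted = any(c[0] == "refutes" and c[1] == a for c in criticisms)
    b_refuted = any(c[0] == "refutes" and c[1] == b for c in criticisms)
    if a_refuted and not b_refuted:
        return b
    if b_refuted and not a_refuted:
        return a
    return None  # still symmetric: no non-arbitrary way to choose yet

print(resolve(idea3, []))  # None
print(resolve(idea3, [("refutes", idea1, "it's closed, so going now fails")]))
```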
For an idea data structure: It needs to represent any idea. Don’t have different data structures for different types of ideas. (Different data structures for details may be fine. You could have a generic idea type and then flexibly attach various types of additional information using any data structures.) Don’t have criticisms, problems, and solutions be three different things. They’re all unified epistemologically. The same idea – or at least its main content, with slight changes to some more superficial parts – can function in any role. “Don’t go to Taco Bell now; it’s closed.” is a criticism of the idea of going now. It’s also a solution to the question of whether going was a good plan. And it’s, in essence, also a problem: Since we won’t go to Taco Bell, what will we do? Or: how do we get Taco Bell given that it’s closed now? (Perhaps we go to sleep and go later, drive further to a restaurant with different hours, or break in.)
An understanding of the situation with Taco Bell can provide problems, solutions and criticisms. (Is the core understanding just an explanation, not a problem, solution or criticism? No, not “just”. It solves the problem of understanding the basics of Taco Bell, or solves the problem of giving an overview of the Taco Bell situation, or something like that. Anything that answers questions is a solution to those questions, even if no one verbalized the questions.)
A criticism helps solve the problem of understanding whether an idea is correct. Problems and solutions seem to be the most fundamental categories, and the hardest to unify. But they’re deeply involved with each other, so I don’t think they should be separate data structures treated separately in an AGI. Think of it this way: when varying a solution idea, you can end up with a problem idea, or vice versa. You don’t want to vary solutions to get only solutions, and vary problems to get only problems, separately. You want one big pool of all the ideas that you do variation on to get all types of ideas.
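A sketch of what a single generic idea type might look like, with the role (criticism, problem, solution) left out of the data structure and supplied by context and other ideas instead. The field names are guesses, not a worked-out design:

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    content: str
    refs: list = field(default_factory=list)          # links to other ideas
    attachments: dict = field(default_factory=dict)   # flexible extra info

closed = Idea("Don't go to Taco Bell now; it's closed.")

# The same idea functions in different roles depending on context,
# expressed by other ideas that refer to it rather than by its type:
as_criticism = Idea("This refutes the plan to go to Taco Bell now.", refs=[closed])
as_solution = Idea("This answers whether going now was a good plan.", refs=[closed])
as_problem = Idea("Given that it's closed, what do we do instead?", refs=[closed])
```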
How can you tell whether an idea data structure you come up with is any good, as a first check? Go read some books and articles and convert the ideas from them into your data structure. Do it by hand. Take anything you read anywhere and try to represent it with your proposed idea data structure. And you need to lose approximately zero information when you do this: the representation needs to be approximately complete, missing approximately nothing. And you need to be able to do this with everything that anyone says, not just some convenient examples. That includes every single sentence of this article – you should be able to store all of those individually, and also in combinations to make bigger points, using your idea data structures. If you can put 50% of ideas into your data structure, but don’t know how to cover some cases, that would be a promising start. But if you can only handle a few specific types of ideas, that isn’t a promising start.
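A sketch of that first check, with a deliberately silly stand-in encoder; the real work is converting books and articles by hand and seeing how close the coverage gets to everything, with roughly zero information loss:

```python
def coverage(sentences, encoder):
    # Fraction of sentences the proposed data structure can represent
    # (None means the structure can't hold it without losing information).
    results = [encoder(s) for s in sentences]
    return sum(r is not None for r in results) / len(results)

def toy_encoder(sentence):
    # Hypothetical placeholder for a proposed idea data structure.
    return {"content": sentence} if len(sentence.split()) < 50 else None

sample = ["The key to learning is error correction.",
          "Don't go to Taco Bell now; it's closed."]
print(coverage(sample, toy_encoder))  # 1.0 for the toy; real encoders will struggle
```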
Ideas need to refer to other ideas. Your idea data structure needs to handle ideas about ideas (meta ideas), or ideas that mention or build on other ideas.
There’s also a question of what size one idea should be. My guess is to keep them small. Don’t put much complexity in one idea. Each idea should be a fairly small chunk plus references to other ideas that elaborate on details, background assumptions, etc. It’s fine – I’d guess good – if what people might normally call “one idea” ended up in the AGI as thousands of ideas. Instead of having one idea with many parts, have the parts be independent ideas. Then handle higher level structure or grouping with other ideas that say what groups of ideas work together, how, why, etc., like an idea tree but presumably more complicated.
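A sketch of many small ideas plus grouping ideas, using plain dicts keyed by id (all illustrative). The higher-level structure lives in ideas that refer to other ideas, not inside any single big idea:

```python
# Small ideas that mostly just reference other ideas for elaboration.
ideas = {
    1: {"content": "Go to Taco Bell.", "refs": []},
    2: {"content": "Taco Bell is closed now.", "refs": []},
    3: {"content": "Go later instead.", "refs": [1, 2]},
    # A grouping idea: says which ideas work together, like an idea tree node.
    4: {"content": "Ideas about tonight's dinner plan.", "refs": [1, 2, 3]},
}

def expand(idea_id, depth=0):
    # Walk the references to read out a higher-level idea from its parts.
    idea = ideas[idea_id]
    print("  " * depth + idea["content"])
    for ref in idea["refs"]:
        expand(ref, depth + 1)

expand(4)
```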
Earlier I said the variation algorithm should be changeable (and it should be easy to swap back, so you can use one algorithm in one field and then another algorithm for another field, and keep a whole archive of algorithms for different purposes). The idea conflict resolution algorithm should also be changeable like that. AGIs need to be able to think of, and use, better ways of judging conflicts between ideas. They shouldn’t be stuck with one hardcoded one. Being able to change methods is part of how humans think, and it’s also needed for unbounded progress, so AGIs don’t get stuck due to an error in the method they were designed with. And the variation and conflict resolution algorithms should be ideas. Don’t add special cases for that. Figure out how ideas can do the job. Why? Elegance. And human beings have ideas about how to vary ideas and ideas about how to debate contradictions between ideas and reach conclusions. We think about those things, come up with methods, and use the methods we thought of. I think AGIs should be able to do that too.
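A sketch of keeping an archive of swappable methods instead of hardcoding one, with methods stored and looked up rather than built in (the keys, fields, and toy lambdas are all made up):

```python
# Archive of variation and conflict-resolution methods, kept for different
# purposes and swappable at any time; nothing here is hardcoded into the AGI.
archive = {
    ("variation", "cooking"): lambda idea: idea + ", but try it with more spice",
    ("variation", "math"): lambda idea: "try to generalize: " + idea,
    ("conflict_resolution", "default"): lambda a, b, criticisms: None,
}

def use(kind, purpose, *args):
    # Pick a method for the current field/purpose; swap back later if wanted.
    return archive[(kind, purpose)](*args)

print(use("variation", "math", "the angles of a triangle sum to 180 degrees"))

# While running, the AGI can add a better method it thought of itself.
archive[("variation", "cooking")] = lambda idea: idea + ", or try grilling it"
print(use("variation", "cooking", "make tacos"))
```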
Also, AGIs need educations. They need childhoods. They need parents and teachers. The idea of pre-loading them with adult knowledge, instead of them learning for themselves and developing their own ideas, is contrary to the concept of AGI. A big part of the point of an AGI is that it thinks for itself. So don’t try to load it up with your own ideas and biases to start off with. Give it some minimal initial ideas – kind of like an operating system plus a few default apps – and let it get more knowledge by learning.
Some people assume AGIs will learn by downloading books and, boom, now they know stuff. No! To know something an AGI has to do conjecture and refutation, just like a human. It has to evolve its own knowledge.
Communication will work fundamentally the same for AGIs as it does for humans now – you can think about a communication (like a spoken sentence or a book) and try to learn something from it, but you can’t just stick it into your brain and be wise. A book consists of words, not ideas, so you’ve gotta translate the words into ideas, which involves creative thinking. You also have to make changes, e.g. look for and fix some conflicts between ideas you get from a book and ideas you already have.
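A toy sketch of that: a read step that uses the reader’s own (hypothetical) interpretation method to turn words into candidate ideas, then checks them for conflicts against existing ideas. The interpretation and conflict-finding functions here are crude stand-ins for hard, unsolved problems:

```python
def read(text, interpret, existing_ideas, contradicts):
    # A book is words, not ideas: the reader translates the words into its
    # own ideas (creative thinking), then looks for conflicts with ideas it
    # already has, which it then needs to resolve.
    new_ideas = interpret(text)
    conflicts = [(n, e) for n in new_ideas for e in existing_ideas
                 if contradicts(n, e)]
    return new_ideas, conflicts

# Toy stand-ins for the hard parts.
def naive_interpret(text):
    return [s.strip() for s in text.split(".") if s.strip()]

def naive_contradicts(a, b):
    return a == "not " + b or b == "not " + a

new, conflicts = read("the store is open. not the sky is blue.",
                      naive_interpret, ["the sky is blue"], naive_contradicts)
print(new)
print(conflicts)
```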
The key to learning is error correction. You need an unbounded data structure (so you can know/think/learn anything) plus an unbounded way to correct errors with it (so you never get stuck due to mistakes).
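A toy version of that loop, under the assumption that variation and criticism are supplied as swappable functions (as above) and that a criticism either refutes an idea or doesn’t:

```python
import random

def learn(initial, vary, criticize, max_steps=100000):
    # Error correction: discard any guess that criticism refutes and keep
    # varying until some idea survives criticism (for now).
    current = initial
    for _ in range(max_steps):
        if criticize(current) is None:  # no known refutation
            return current
        current = vary(current)
    return None  # stuck: a sign the variation or criticism methods need changing

# Toy problem: find a number that survives two criticisms.
criticize = lambda n: "too small" if n < 37 else ("not divisible by 3" if n % 3 else None)
vary = lambda n: n + random.randint(1, 5)
print(learn(0, vary, criticize))
```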
Maybe the idea data structure itself should be changeable by the thinker. But my initial thought was to have it flexible enough that it doesn’t need changing. You can use the data structure in different ways rather than having to change it. Partly that may work because it’s small, so you can build things out of it in different ways. That’s a little bit like how you could build basically any kind of shape using legos without having to change the design of lego pieces. You can just combine pieces in different ways to get something else. You couldn’t make a tiny sphere from standard lego pieces, because you couldn’t use very many pieces before the result got too big, but you could make a good approximation of a larger sphere, one big enough to use many lego pieces.
An idea, in the normal high level sense, could use thousands of instances of the idea data structure. We think of our ideas as having parts, but if the data structure is small enough it could be indivisible, with no parts, because it is the minimal part. All the flexibility of idea organization could be in combining these parts. Analogy: idea = atom, mid level idea = molecule, high level idea = macroscopic object.
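The analogy, as a sketch: one minimal, indivisible data structure at the bottom, with all the flexibility coming from how instances are combined (the class names and fields are just for illustration):

```python
from dataclasses import dataclass

@dataclass
class Atom:
    text: str  # the minimal idea data structure: indivisible, no parts

@dataclass
class Molecule:
    atoms: list  # a mid-level idea: a small combination of atoms

@dataclass
class Macro:
    molecules: list  # a high-level idea in the everyday sense

plan = Macro([
    Molecule([Atom("Taco Bell is closed now."), Atom("We're hungry.")]),
    Molecule([Atom("Drive to the place with later hours."), Atom("Eat there.")]),
])
print(sum(len(m.atoms) for m in plan.molecules), "atoms in one high-level idea")
```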