AGI brain design

What I’m thinking of is metadata keywords for each ontology, with a job requirement outsourced by searching across those keywords.

I am almost certain that direct I/O will not be necessary, only different jobs managed by different agents.

Each individual agent would not be directly aware of the overall service being delivered, but a higher-level emergence of AGI could occur as a result of the agent swarms.

This is somewhat related to feasibility with respect to the SingularityNET design.
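As a rough illustration of the keyword-matching idea above, here is a minimal Python sketch of routing a job requirement to agents by ontology metadata keywords. Everything in it (the `Agent` class, `match_agents`, the example keyword sets) is hypothetical; the point is only that each agent exposes its own keywords and never needs to see the overall service.

```python
# Hypothetical sketch: route a job requirement to agents by ontology keywords.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    keywords: set = field(default_factory=set)  # metadata keywords from the agent's ontology

def match_agents(job_keywords, agents, min_overlap=1):
    """Rank agents by how many of the job's keywords their ontology metadata covers."""
    scored = []
    for agent in agents:
        overlap = len(job_keywords & agent.keywords)
        if overlap >= min_overlap:
            scored.append((overlap, agent))
    # Highest keyword overlap first; no agent needs to know the overall service.
    return [agent for _, agent in sorted(scored, key=lambda pair: -pair[0])]

agents = [
    Agent("vision-agent", {"image", "classification", "detection"}),
    Agent("speech-agent", {"audio", "transcription"}),
    Agent("nlp-agent", {"text", "summarization", "classification"}),
]

job = {"image", "classification"}
print([a.name for a in match_agents(job, agents)])  # ['vision-agent', 'nlp-agent']
```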

I expect a connectionist architecture to be composed of millions of DL networks. These could conceivably be arranged in functional regions. Could such things merge to become CAs? Maybe yes.

@examachine I mostly agree, except for there needing to be millions of DL networks for AGI. Perhaps I’m reading it wrong and you mean superintelligence. Sorry if I am.

The human brain has fewer than 300 classified regions, as of the last time I checked. These tiny regions make up larger ones, some of which have been fairly well approximated with single neural networks, for example object learning and recognition. I think that the many different types of tasks humans can learn really comes down to the fact that a large portion of tasks share the same basic steps, so the brain just needs to figure out which regions to use in what order, and remember that, to begin learning an arbitrary task.
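As a toy sketch of that "shared basic steps" idea, and only under the assumption that tasks can be expressed as ordered compositions of reusable region-like functions, something like the following shows two tasks reusing the same steps. Every function and task name here is made up for illustration.

```python
# Toy sketch: a "task" is an ordered composition of reusable region-like steps.
# All functions and task names are hypothetical stand-ins, not brain models.

def perceive(text):        # stand-in for a sensory region
    return text.lower()

def segment(text):         # stand-in for a parsing/segmentation region
    return text.split()

def recall(tokens):        # stand-in for a memory-filtering region
    return [t for t in tokens if t != "the"]

# Different tasks reuse the same regions in different orders/subsets.
TASK_PIPELINES = {
    "clean_text": [perceive, segment, recall],
    "tokenize":   [perceive, segment],
}

def run_task(task, data):
    for step in TASK_PIPELINES[task]:
        data = step(data)
    return data

print(run_task("clean_text", "The cat sat on the mat"))  # ['cat', 'sat', 'on', 'mat']
```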

Thank you for your input. It is very much appreciated. :slight_smile:

I mean human-level intelligence. I didn’t specify the architecture; I’m merely hinting at the scale. Your point about approximations isn’t really quantifiable: we don’t know how well we’re imitating any large brain region. However, DL is indeed promising, and as I explain in my AGI 2018 / AEGAP paper, I expect DL extensions to achieve human-level AI. That is not a popular opinion in the AI community; however, the IJCAI reviewers actually liked the idea but asked for much more material than would fit in the page limit. The paper is a bit terse, but a presentation will be released before the conference, which might be easier and more enjoyable to follow. The paper is intended both for DL researchers and a general AI audience; it’s a cross between philosophy of AI and an ANN review.

It’s a good idea to use machine learning here, however hard that might be. That would be my instinct.

@examachine Still, we can be pretty confident that single neural networks are capable of matching the function of the individual tiny regions discovered so far, because tiny regions, such as layers of the visual neocortex, have very simple jobs. For example, it takes two of them to do nothing more than edge detection.
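To make the edge-detection point concrete, here is a minimal NumPy sketch of edge detection as a single small convolution (a Sobel-like kernel). It is only meant to show how simple the computation is, not to claim this is literally what the cortex does.

```python
import numpy as np

# A Sobel-like kernel that responds to vertical edges.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

def convolve2d(image, k):
    """Naive 'valid' 2-D sliding-window filter (cross-correlation, as in most DL libraries)."""
    h, w = image.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

# A tiny image with a vertical edge: dark on the left, bright on the right.
img = np.array([[0, 0, 1, 1]] * 4, dtype=float)
print(convolve2d(img, kernel))  # strong positive responses around the edge
```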

There aren’t 300 large brain regions, only fewer than 300 tiny discovered ones, so I find it hard to believe that 1,000 neural networks, at most, wouldn’t be enough. There would have to be some conveniently placed complicating factor to make me wrong about this.

There’s also the fact that the brain uses the equivalent of, at most, about 10^15 FLOPS, and it seems very unlikely that the brain is as efficient as neural networks (which use backpropagation), so it’s quite a stretch that it would take more processing power than that to achieve human-level intelligence, with neural networks at least.

Maybe DL networks don’t take up many FLOPS, or maybe they are very inefficient; I don’t know that much about DL.
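For what it’s worth, here is a rough back-of-envelope version of that ~10^15 figure, assuming roughly 8.6 × 10^10 neurons, about 10^4 synapses per neuron and an average firing rate on the order of 1 Hz. All three numbers are order-of-magnitude estimates and contested, so treat the result as a ballpark only.

```python
# Back-of-envelope for the ~10^15 ops/s figure; all inputs are rough, contested estimates.
neurons  = 8.6e10   # ~86 billion neurons
synapses = 1e4      # ~10,000 synapses per neuron (order of magnitude)
rate_hz  = 1.0      # ~1 Hz average firing rate (often quoted anywhere from 0.1 to 10 Hz)

ops_per_second = neurons * synapses * rate_hz
print(f"~{ops_per_second:.1e} synaptic events/s")  # ~8.6e+14, i.e. roughly 10^15
```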

I’ll read your paper. It sounds interesting. :slight_smile:

Thank you for the conversations. I’ve enjoyed them. I hope you have, too. :slight_smile:

Your answer pertains to the communication of each AI in the network, but didn’t the OP refer to the problem of an AI on the network that needs an external AI to operate?

Also, is it just ontologies? Won’t something else be needed to allow the best AI to be recruited for a job? Wouldn’t the AI have to know what the job is, as well as the function of all the AIs on the network and how they could be used together to create a novel solution?

From a neuroscience point of view, that’s not really true, especially because the algorithms, models and computation don’t match at all. You’d be surprised. Read Jeff Hawkins’s latest papers; his work is closer to the brain, but still not there. I had underestimated the relevance of his work. But that’s mostly the network model; there’s a lot more to it.

@examachine I personally think that what matters isn’t whether a neural network is a carbon copy of a brain region; why would that matter? What matters, in my opinion, is whether it achieves equivalent results, and I think neural networks can do that. A neural network can solve almost any arbitrary simple task, so they should be able to mimic tiny brain regions’ purposes. (Mimicking their structure isn’t at all important.) I’m a man of practical results, not carbon copying.

What would you suggest is the missing ingredient that makes neural networks unable to mimic the purposes of brain regions?
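To illustrate the "solve almost any arbitrary simple task" claim above, here is a tiny NumPy network (a 2-4-1 multilayer perceptron trained with plain backpropagation) learning XOR, a classic nonlinear mapping. It is a toy sketch, not a claim about any particular brain region.

```python
import numpy as np

# A 2-4-1 sigmoid network trained with plain backpropagation to learn XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

predictions = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(predictions.round(2).ravel())  # typically close to [0, 1, 1, 0]
```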


Excuse the interjection; the degree of fuzziness may be the link from carbon copy to practicality in the brain… Some self-identification needs to be digital and, importantly, some needs to be analog… This should ensure a level of self-repair/self-knowledge in the system…


I already gave a reference, but the computational neuroscience literature clearly shows that the usual ANNs are not biologically plausible, nor were they meant to be. Sorry for having to repeat myself. It’s not an easy subject at all. And there are obviously many degrees between CNNs and brain simulations.

@Justjoe Neural networks can use arbitrary logic, such as fuzzy logic, and go through thousands of backpropagation iterations without breaking. A neural network could, in theory, also be used to repair broken data.
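As a hedged sketch of the "repair broken data" point, the following uses scikit-learn’s MLPRegressor to learn a mapping from corrupted vectors back to clean ones on toy correlated data (a denoising setup). The data, network size and corruption rate are arbitrary choices for illustration, not a recommended recipe.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Denoising setup on toy data: learn to map corrupted vectors back to clean ones.
rng = np.random.default_rng(0)
latent = rng.random((500, 2))
clean = latent @ rng.random((2, 8))          # 8 correlated features from 2 latent factors
mask = rng.random(clean.shape) < 0.25        # knock out ~25% of entries
corrupted = np.where(mask, 0.0, clean)

denoiser = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
denoiser.fit(corrupted, clean)               # learn corrupted -> clean

damaged = clean[0].copy(); damaged[:2] = 0.0 # damage the first two entries
print(denoiser.predict(damaged.reshape(1, -1)).round(2))
print(clean[0].round(2))                     # the reconstruction should be roughly close
```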

Thank you for the input. :slight_smile:


@examachine Sorry, but I don’t see the connection between biological plausibility (ability to exist naturally) and insufficiency (inability to become as intelligent as humans). Different doesn’t mean insufficient.

I hope I’m understanding what you’re saying correctly. I’m not an expert in neuroscience. I’m sorry if I’m misunderstanding you. Please correct me if I’m wrong.

I think I should take the opportunity to say that, though I attack your points, I’m not attacking you personally. I don’t like to be mean; I just want to help explore these topics in order to help build AGI. Thank you for being patient with me. :slight_smile:

@examachine & @Arthur_Heuer I have deleted the latest three posts as they did not add to the topic of discussion. Please aim to stay nice to each other even when you disagree (doesn’t apply to nazis of course @examachine :wink:).


@examachine I can’t help but feel like we got off on the wrong foot. I’ve tried to not be mean, but I might have ended up being a bit blunt and perhaps overly suspicious. I’m sorry.

I thought that you were a troll because it seemed like you were just dismissing anything that was said to you just by saying it’s wrong and not giving any reasons why it is wrong. Whenever I pointed out a mistake in your reasoning, you just seemed to ignore it. You seemed to insult my expertise because I happened to not share your opinions and refused to discuss topics on a thread dedicated to discussing those topics, so I assumed you were just being immature and annoying to get a reaction.

It also confused me that a naturalist would think that morality (a human invention) was somehow objective. I still don’t know exactly what you mean when you say that morality is objective. Maybe you just meant that things are objectively moral or immoral to certain moral codes, which I would agree with. I imagined that you meant that a perfectly logical paperclip maximiser would want to be moral instead of maximising paperclips or that everything, including calculators and fish, has the same moral code. I find those conceptions of objective morality to be a load of nonsense, so I thought that you probably were a troll trying to make naturalists look irrational.

Hopefully, this was just a misunderstanding. I didn’t mean any harm in anything that I said, but I have some difficulty with social interaction, due to my autism.


Maybe have the AI hear thoughts, to itself. Maybe the AI should have self-doubt and negative thoughts, but it should learn from them. Maybe a schizophrenic AI? Having an AI with a mental disorder seems like not a good idea, but it’s still a part of a brain’s thought process. Make the AI hear different voices and bring up past thoughts, in a way that makes the AI feel distracted by things. It would be confusing to us, but maybe it’ll give the AI that much more to think about.

@Bird_Brains

I would think the most effective AGI would think almost nothing like a human. We humans seem quite flawed in our ways of thinking. For example, human emotions get in the way of human logic, and human gullibility and wishful thinking lead humans to believe in superstitions like witchcraft, luck, demons and gods.

It means whether it’s plausible that such computations actually exist in the brain. Some of it probably does, but the algorithms and networks are far from a 1-1 match; you can see how different the models in computational neuroscience are, such as spiking neuron models, and it gets really different when you get down to the physical details. There has been great progress in this area, though. In my research, I’m trying to adopt some more complete, biologically relevant models, because standard ANNs don’t seem to be enough. Usually, AI labs like DeepMind try to incorporate ideas from neuroscience rather than the other way around. (Find and read the Jeff Hawkins papers; they’re a good start on this subject, and he might be an underappreciated researcher. If you don’t understand something, you can ask on this forum, too.)
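To make the "far from a 1-1 match" point concrete, here is a minimal comparison of a standard ANN unit with a heavily simplified leaky integrate-and-fire neuron. The parameters (tau, threshold, reset) are illustrative only, not fitted to any biological data.

```python
import numpy as np

def ann_unit(x, w, b):
    """Standard rate-style unit: one weighted sum, one nonlinearity, no notion of time."""
    return np.tanh(np.dot(w, x) + b)

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: membrane potential evolves over time and emits spikes."""
    v, spikes = 0.0, []
    for i in input_current:
        v += dt * (-v / tau + i)   # leaky integration of the input current
        if v >= v_thresh:          # threshold crossing -> spike, then reset
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

print(ann_unit(np.array([0.5, -0.2]), np.array([1.0, 2.0]), 0.1))  # a single real number
print(lif_neuron([0.2] * 20))  # a spike train over 20 time steps
```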

If anything, ANN research often moves away from biological plausibility. For instance, capsule networks don’t look like they are inspired by actual biological processes; they are more like a mathematical abstraction.

@examachine Thank you for the read. :slight_smile: I can understand that simulating brain structures with immense detail could lead to intelligent machines. What I don’t understand is why you’re focused on that particular method rather than making brains using traditional neural networks. Instead of making assumptions, I think it would be more polite if I just asked you.

Why are you focused on making 1-1 models of the brain rather than brains using traditional neural networks?


I am wondering why everyone appears to be focused on the nature, not the nurture, of AGI. All the challenges you all cite are extrinsic, imho.