Matthew Newman

Short Bio
"I am a world-leading expert on the business application of AI Ethics, with an extensive background in steering multinational enterprises through complex, strategic change. My unique profile combines depth of experience in corporate governance, risk, process engineering, digital transformation and cultural change, enabling me to provide true coverage of the enterprise impact. I have delivered outstanding results in financial services, energy, health-tech, insurance, oil & gas, public sector, media and FMCG.

I co-design global standards, advise professional associations and assist in the formation of government policy on the corporate implementation of Artificial Intelligence and the application of AI Ethics. I write and present on Applied AI Ethics, Beneficial AI, the Future of Work and DAO impact on the enterprise.

I currently live in Sydney, Australia, with my family. We’re a multi-national bunch with our roots in the UK, The Netherlands, Poland and the Caucasus. Originally from the UK, I’ve lived in various countries over my career, with an extended stint in The Netherlands. I have no hobbies; when I’m not working, I’m entertainment for my kids."

I’ll swap this around. First, how to try for a positive singularity: start building for it already. We AGIans are already familiar with the idea of ASI emerging from an aggregation of agents, but in the wider world I often get the impression people picture AI systems being built in one go. They imagine that someone will one day sit down and build an AI system that happens to be powerful enough to be an ASI. Even without SingularityNET that would never have been the case. It’s always going to be re-use and repurpose. Sooo, we have to recognise that some of the simpler capabilities we develop now will probably be “good enough” not to be discarded before the singularity. And thus we’ll find that any bad habits we put into our current systems stick around and may affect a future superintelligence. So, start developing right now with the prospect of the singularity in mind. You never know where your code might end up being used.

As for when? No idea. Too many variables. I get the feeling it will be one of those “it’ll happen slowly until it doesn’t” things, though I’m not convinced it’ll take a few seconds; more like a few months. I like the idea of the EU kicking it off by demanding Google and co unbundle their data and offer it up, leading to a hard take-off when some uni grads who’ve been working on systems that need little data suddenly find their system has access to a massive data overhang.

OK, so first we should set expectations. The team are doing their thing. I don’t foresee the council defining the roadmap. To be fair, I’d make a complete arse of myself if I waltzed in and started making such demands. I have full faith that these people know much better than I how to create the platform.

That said, I have some strong ideas on where I think the community might play a role they find enjoyable, challenging and engaging… and that would be useful too. For instance, there are some tricky questions around where this goes with ethical considerations, and I don’t feel these have any “right” answers. That affects quite a few areas of the project, such as: what data should an agent offer up for reputation assessment? Corporates are going to want any help they can get in managing the risk of this technology. We can assist with that, or perhaps we’ll judge it outside our interests. We shall see.

My feeling is that we’ve not yet managed to crack the cooperation between the community and the team. I don’t think it’s due to a lack of enthusiasm from either side, but for my money it’s not there. I can’t give any quick fixes, because there rarely are any when two groups want the same thing yet seem to struggle to make it happen, but that will be a big focus for me.

If I’m broke, my family are about to be turfed out onto the street and I’m down to my last few dollars, I’ll sell and consider the consequences later. But, that’s a very different story to cashing out because I’ve lost heart or I think it’s going the wrong way. Never say never, but I don’t see that happening while I’m committing my free time to the role.

Good question. And to my mind probably one of the most challenging. Some thoughts. I don’t think this is a magic-wand item; it’s not as if a few events will make it happen. Devs need motivation and an acceptable level of risk. Some of that will come as the platform grows and succeeds, along with organic growth; the network effect applies to supply as well as demand. The key is removing barriers & magnifying opportunities: make the on-ramp simple, don’t make things unnecessarily esoteric, allow quick wins, provide responsive support, let devs focus on the goal rather than on troubleshooting the tools, etc. Second, of course, is to ensure awareness, but without a convincing case for the investment of time, advertising will be of limited value.

  1. I want to see the currently proposed utility come to maturity first.
  2. None. Sorry, that would be a major conflict of interest. It’s important to point out that (in my case at least) it’s me standing as a candidate. I don’t speak for any of the organisations with which I am associated; all opinions stated are mine alone. You get me only :slight_smile:
  3. October 2017. Heard on the grapevine and it just clicked: “This is how it (AGI) will happen”

I think it would be a bit presumptuous of me to already say what needs changing. I don’t buy the whole “new broom sweeps clean” concept.

This area is a balancing act. Crypto is kind of odd in that it’s accepted by all that it should be deflationary. That speaks to the method of token distribution, but it also sets up expectations. Currently the risk profile for crypto is so utterly hideous that it balances the effects of deflation. But if AGI becomes a stable, used and valued token, deflation presents its own risk, as it won’t only be a few crypto enthusiasts who hodl. The thinking here has a long way to go IMO, but we’re probably going to find we need to move quickly to address various pressures. That means our decentralised nature needs standing up; it won’t work to have months pass when making decisions on token supply, etc.

I don’t see it as inevitable, but it is a choice the community must keep making to retain control. Unfortunately, a side effect of empowering people to make their own decisions is that they have the freedom to make mistakes. You can put in place lots of barriers to making a poor decision, but ultimately you’re either going to put the masses in control, or you’re not. And part of that control is the power to listen to the siren song of big tech. Put in place too many locks as a creator to prevent the community from enacting certain decisions and you have a very different problem on your hands. But surely we need to trust that the community will act to preserve the decentralised nature of the platform, right? Otherwise, why would we do this?

Lots of great questions on a topic I like. I think this is an area where we could really deep dive into lots of detail, but I’ll try and keep things a little light to keep this readable :slight_smile:

  1. The question of trust. Where to start? OK, so I think we need to re-assess our relationship with tech. At the moment people treat computers as either “right” or somehow “broken”. As in, if it’s working as it should, it will give you the right answer. You don’t expect your pocket calculator to give you a result with a confidence level attached. That’s the key here: it’s not about trusting the agents, it’s about knowing when it would be appropriate to put store in what they are saying. That’s about transparency. That’s about knowing how the agent came to its observation: the data used, the mechanism, confidence, accuracy, repeatability. That allows fully informed decisions by the user. (There’s a small sketch of what that metadata might look like just after this list.)
  2. Which brings us on to accountability. This is going to have a lot of legal implications. A good rule of thumb might be: “is it working as advertised?” If it’s doing things as notified, it’s difficult to blame the agent for knowing misuse by the client. But that’s really tricky with any tech, let alone AI. Ultimately I don’t think “caveat emptor” is going to stand up for long. I’m not a legal expert, but I can imagine there will be an onus on the AI creator to demonstrate a level of rigour in ethical specification, design, testing and ongoing risk management; a defensible position on ethical trade-offs; and a demonstration that more than reasonable effort has been made to communicate proper use and, importantly, the limits and limitations of the system. After that, I would guess it’ll be on the user, but I’m sure we’ll see lots of battles at the edges of those two accountabilities.
  3. There are some great ideas out there on using risk as a mechanism for managing ethical goals within an organisation. I’ll need to stop there :blush: Suffice to say that installing some board of ethical experts to ponder the subject and offer a bit of advice to managers is not enough.
  4. Bigger problem than most would think. I mean, surely it’s just a question of cleaning out that bias, right? The issue is that doing so involves trade-offs. A bank’s loan-approvals model could avoid all bias by approving loans for EVERYONE, choosing approvals at random, or approving loans for NO-ONE. All of the above see the bank go out of business. So we need to work on the trade-off. That involves understanding whether we’re aiming for procedural or distributive fairness. It also means trade-offs between different sorts of discrimination. If we decide on distributive fairness for one group, that implies a lack of procedural fairness for another. Underpinning all of this are two things: a) providing transparency to the deployer of the model so they can understand the story of the data, and b) enabling the deployer of the tool to actually make smart judgment calls on what they see. It’s a big topic :slight_smile: (There’s a toy illustration of the trade-off just after this list.)
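
To make point 1 concrete, here’s a minimal sketch of an agent observation that carries its own transparency metadata. To be clear, every name and field below is my own invention for illustration; none of this is the actual SingularityNET API.

```python
# Hypothetical shape of an agent reply carrying the transparency metadata
# discussed in point 1 -- all field names are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class AgentObservation:
    value: str                  # the answer itself
    confidence: float           # the agent's own estimate, 0.0 to 1.0
    method: str                 # how the agent reached the observation
    data_sources: list[str] = field(default_factory=list)
    reported_accuracy: float | None = None  # accuracy on a held-out benchmark


obs = AgentObservation(
    value="applicant is low risk",
    confidence=0.62,
    method="gradient-boosted classifier, v3",
    data_sources=["credit-bureau-feed", "internal-repayment-history"],
    reported_accuracy=0.88,
)

# Crucially, the caller -- not the agent -- decides whether 0.62 clears
# the bar for this particular use.
if obs.confidence < 0.8:
    print(f"Low confidence ({obs.confidence}); route to human review.")
```

The exact fields don’t matter; the point is that the user gets enough of the story to make a fully informed decision rather than treating the output as simply “right” or “broken”.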

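And for point 4, a toy illustration of why “just clean out the bias” isn’t free. The cohorts and numbers below are entirely made up; the only point is that the parity gap can’t be closed without paying for it somewhere else.

```python
# Toy numbers only: two hypothetical applicant cohorts and the decisions
# an imagined loan-approvals model made for them (1 = approved, 0 = declined).
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% approved
}

# Approval rate per group: the distributive-fairness lens.
rates = {g: sum(d) / len(d) for g, d in decisions.items()}
parity_gap = abs(rates["group_a"] - rates["group_b"])

print(f"approval rates: {rates}")
print(f"demographic parity gap: {parity_gap:.2f}")  # 0.38

# You can force this gap to zero by approving everyone, approving no one,
# or approving at random -- and each of those puts the bank out of business.
# Any other fix trades accuracy or procedural fairness (e.g. different
# thresholds per group) for distributive fairness: that's the trade-off.
```
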
Good questions.


Hi there. I think we have to recognise that the council will have limited powers. We’re unlikely to be in a position to accelerate the deployment of Sophia, not least because she’s produced by another company with which we would have no formal links. Then again, I think you identify precisely the method to prioritise developments that can be utilised by Sophia: tokens talk. I’m pretty confident that David’s and Ben’s interests will continue to see Sophia evolve. She’s an intriguing tool for understanding how humans and artificial lifeforms might interact.

Regarding when the singularity will occur: hard to tell, really. I don’t think we’ve yet got a definite handle on what’s missing, so we might predict when the enablers are all present and correct, but the special sauce might be absent. Then again, we might see a couple of jumps in algos that let us far better leverage our current resources and chop decades off development time. I think the second generation of services on SingularityNET will be very interesting to see. Agents that dynamically incorporate the available services as tools could see a rapid move towards AGI/ASI. So, I offer a plan for a plan: I think in 5-7 years’ time we’ll be in a position to list out the hurdles and give an accurate timeline for AGI/ASI.