David is the Founder and Managing Partner of Network Society Ventures, a seed stage global investment firm focused on innovative startups at the intersection of exponential technologies and decentralized networks.
He is an investor, entrepreneur, author, blogger, keynote speaker, and thought leader of the global technology landscape. His entrepreneurial accomplishments span several companies founded and grown over more than twenty years.
The range of responses to this question by experts in the field of artificial intelligence was, some decades ago, so scattered as to be statistically meaningless. More recently, however, the responses have started to cluster around the middle decades of the 21st century. Fewer experts now say that the Technological Singularity will never happen, or that it will happen hundreds or thousands of years from now. I also believe that it will happen around the middle of the century, and I would not be very surprised if it came earlier, around the mid '30s.
The question is also, indeed, how you define it. In many ways unenhanced humans, who don’t know how to use a smartphone, or more radically don’t know how to read and write, can’t function in the modern world. How fundamental this barrier is going to become, and how extreme the enhancements will need to be for biological humans to stay actively engaged in society, is how I would start formulating the question.
That is also what leads to the second part, about positive outcomes. As long as biological humans remain free to acquire the tools that enable them to play an active and dignified role in society, and as long as these tools become available to the largest possible percentage of people on the planet as rapidly as possible, we will be able to ride the wave past thresholds that would previously have represented societal event horizons.
In effect that is the real question today: how can we ensure meaning and dignity for the potentially growing segment of humanity that, while having the knowledge, the means, and the opportunity to enhance themselves, merge with machines, and participate actively in the future, chooses not to?
The key is attracting the largest number of people to the network, so that attention, talent, and financial resources can be allocated to the various challenges that arise. In this sense SingularityNet has already made great strides, and it has to continue educating the ecosystem on the fundamental topics: the increasing power of the tools we use, and the opportunities to get engaged in various ways regardless of one’s level of technical background.
AGI used to belong to the field of unknown unknowns to most. Now it is in the field of known unknowns for many. Nobody knows how to build an AGI, let alone a beneficial one. But a lot of smart people, including those at SingularityNet, are busy trying to find out. Helping these teams and supporting their efforts is one of the most important things that one can aspire to do today.
Until we have AI-based governance of DAO-like structures, one of the most important tasks we humans have is to make sure that the governance structures are transparent, accountable, and effective, and that they themselves evolve, taking into account the innovative tools that become available from time to time. Experimenting with AI-supported governance is, as a consequence, in my opinion one of the initiatives that the Council should support and promote, knowing that mistakes will be made, and that its responsibility is to keep communication with the community and the larger ecosystem open enough to make the unavoidable and necessary mistakes bearable.
The most exciting opportunity that utility tokens represent is the discovery of novel sustainable business models that are faster at eliminating inefficiencies and externalities borne by unwitting economic participants.
In this sense the mere financial gain that can be represented by tokens being sold is the dumbest of possible uses, but it has its dignity nonetheless. If somebody is selling, then somebody else is buying. And if the network contains functions that support the token’s utility, then a dynamic equilibrium will be found, expressed by the price of the token.
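The dynamic-equilibrium idea can be sketched with a toy price-adjustment model. Everything here is an illustrative assumption, not SingularityNet's actual tokenomics: the linear demand and supply curves and the adjustment step are invented purely to show how utility-driven demand and seller supply can settle on a price.

```python
# Toy tatonnement model: the price adjusts until utility-driven demand
# meets the supply offered by sellers. All curves are hypothetical.

def demand(price: float) -> float:
    # Users who need the token to pay for network services;
    # demand falls as price rises (illustrative linear curve).
    return max(0.0, 100.0 - 40.0 * price)

def supply(price: float) -> float:
    # Holders willing to sell; supply rises with price.
    return 20.0 + 40.0 * price

def find_equilibrium(price: float = 0.5, step: float = 0.01,
                     iters: int = 10_000) -> float:
    # Nudge the price in proportion to excess demand until it settles.
    for _ in range(iters):
        excess = demand(price) - supply(price)
        price = max(0.0, price + step * excess / 100.0)
    return price

p = find_equilibrium()
print(round(p, 2))  # prints 1.0, where demand(p) == supply(p)
```

The point of the sketch is only that, once the token has real uses on the network, buyers and sellers jointly discover a price; no central party has to set it.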
Whoever plans to use the functionalities offered by the network has to either own or plan to acquire the tokens. And then use them!
Unlike BTC, where Bitcoin’s emergent property as a store of value gives hodlers a long-term role, with AGI our goal is to have many ways to use the token on the network. Holding the token is only a temporary necessity until those decentralized AI and AGI applications become available; it should not have a preeminent long-term role.
(I believe that you meant “important part”, and that’s how I will answer)
Spreading knowledge about the platform, and making sure that there are as many people excited to use it as possible, is fundamental. There are many tried and true ways this can be achieved, from stellar documentation, to great example code, to contests and awards, to online courses, hackathons, etc.
In the end, however, the effort has to be proportional to the power of the platform. Core functionality is going to become available not through the sheer number of developers who know about it, but through the consistent, high-quality effort of the core team releasing code that others want to use.
Yes! Of course each of us can and should show support by enthusiastically adopting the platform to solve concrete problems in our own organizations. I plan to do so within the rich ecosystem of components that Network Society represents. As a matter of fact we have already started, recently announcing a partnership in which we provide a novel funding mechanism for startups using the SingularityNet platform.
I’ve known Ben for about twenty years, and we have collaborated on many things. I’ve known about SingularityNet since the beginning.
There is a long and interesting history in the evolution of open source and free software projects and licenses. (Interesting, that is, if you are a techno-legal geek!)
The Affero GPL was created, for example, to prevent a Google or a Facebook from including code in their online services without benefiting the larger community. It is one of the least popular licenses.
A lot of people are unhappy about the Apache and Linux foundations being so strongly supported by corporate donations, because they believe those donations succeed in steering the technical committees in the donors’ favor. An example of this can be seen in the W3C’s adoption of DRM as part of the HTML standard, an abomination in the eyes of free software purists.
In some ways these will be good problems to have, and hopefully the Council and other governing bodies of SingularityNet will be smart enough to navigate fruitful choices for an inclusive community to flourish long term.
I am a strong supporter of utility tokens as a supporting mechanism for the discovery of novel sustainable business models for decentralized organizations.
Any organization that has completed an ICO has a strong responsibility towards its token holders, and the easiest way to make sure those responsibilities are fulfilled is to faithfully implement the roadmap and the details of the white paper it published. This includes the tokenomics.
However, as an investor in The DAO, the original one, I feel sharply the current need for a system that enables the improvement of code that must be perfected, and this includes all the systems that support the minting, transfer, use, and burning of the tokens in their various roles.
Monitoring what works, and finding smart ways to improve what doesn’t, is necessary. Nobody has final, definitive answers, and there are very few and flimsy best practices. As a consequence, it becomes necessary to adopt tools that allow the effective filtering of the best ideas from a large pool in the community, so that they can be tested, experimented with, and then adopted more widely. The cycle will be never-ending!
These are of course wonderful questions, and very important ones. The role of the supervisory council is to improve and assure sound governance of the foundation. Of course a good understanding of the science and technology is important. It looks, however, like a good and deep answer to your questions would require an entire research program. In any case, here are my attempts at answers…
It is very likely that the more powerful systems we are building will not be computable, and as a consequence, from a formal point of view, their performance cannot be evaluated with total certainty. This is one of the fundamental reasons to prefer decentralized collaborative systems over centralized monolithic ones: we will be able to compare the decisions made by competing approaches and measure, support, and promote the best ones.
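The compare-and-promote idea above can be sketched in a few lines. The two "approaches" and the scoring task are hypothetical stand-ins: the point is only that when systems cannot be formally verified, they can still be ranked by measured performance on shared tasks.

```python
# Minimal sketch: empirically compare competing (possibly opaque) decision
# systems by measured error, since formal verification may be impossible.
import random

random.seed(0)

def approach_a(x: float) -> float:
    return x * 2.0                         # stand-in decision rule

def approach_b(x: float) -> float:
    return x * 2.0 + random.gauss(0, 0.5)  # noisier competing rule

def score(approach, trials: int = 1000) -> float:
    # Lower mean absolute error against observed outcomes is better.
    total = 0.0
    for _ in range(trials):
        x = random.uniform(0.0, 10.0)
        truth = x * 2.0                    # ground truth for this toy task
        total += abs(approach(x) - truth)
    return total / trials

scores = {name: score(fn)
          for name, fn in [("a", approach_a), ("b", approach_b)]}
best = min(scores, key=scores.get)
print(best)  # prints "a", the approach with the lowest measured error
```

In a decentralized marketplace the same logic would run continuously, with attention and resources flowing toward whichever approach currently scores best.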
Today’s legal systems do not recognize culpability of code, regardless of how intelligent it is. Consequently the ultimate accountability resides with the human developers, which in the case of the foundation increases the importance of good governance. Future legal systems may take this into account and introduce novel concepts of civil and criminal responsibility for nonhuman actors.
AI systems are mirrors of our own social and economic organization. They are accelerating our ability to analyze and understand drivers that would otherwise remain implicit. It is likely that recognizing biases in AI systems will rapidly improve not only the software itself but society as a whole.