The "(un)known (un)knowns" of AI - and what, if anything, we do about them

Thanks to Theo, I took the suggestion to “tease out” the dimensions of what we know about AI versus what we can or cannot predict - a very rough, high-level first pass, below, based on a model for understanding leadership (Known knowns, known unknowns, unknown unknowns & Leadership | by Andrea Mantovani | Medium). I look forward to any comments, challenges and suggestions!

Which factors most contribute to human understanding and awareness of the benefits and risks in the evolution of AI? What do we know, what can we know, and what do we do about it?

  1. Known knowns:
  • accelerate opportunities and extend use cases for the known applications of AI: process automation, generative value…
  • identify and mitigate immediate threats: UBI for job losses, blockchain for trusted sources, limiting the ability of bad actors to scale harms…
  • restore public confidence in the AI trajectory: through pauses, legislation, certification, watermarking etc.
  2. Known unknowns:
  • humans cannot anticipate the nature of exponential AI evolution or its potential impacts on our experience
  • define what ‘could’ be possible without knowing ‘how’… e.g. with scenarios that leverage classes of impossibility (Michio Kaku)
  • a beneficial vision and mission will probably require forms of decentralization, unless we want immortal authoritarians and quadrillionaires
  3. Unknown knowns:
  • the need for AI alignment: instrumental goals that achieve desired outcomes without undesirable behaviours or bias
  • anthropomorphism and misplaced human attitudes to risk (“killer robots”, “AI turns evil”)
  • the need for explainability (XAI) so that human users can comprehend and trust the results and outputs
  4. Unknown unknowns:
  • our spiritual purpose: perhaps humans are the bootloader for AI and soon to be extinct, or a digital consciousness on the path to immortality
  • the possible and likely impacts of other tech+ convergences that increase the decision space of AI: good, bad, existentially dangerous
  • AI intelligence may be different enough from human intelligence to be permanently inexplicable: quality, speed, memory, replication, goal focus…