I’m extremely excited about what I foresee as a new level of human learning and thinking enabled by ergonomic AR/VR (I’ll call it XR to avoid the compound abbreviation). I also have serious concerns about the technology when lifted out of a purely educational context, so it’s a mixed bag in that sense. I’ll dig into more detail below.
- Memory Palace
- IDE: Integrated tools (learning/exploratory and productive UI/tools/environment/artifacts)
- medium for the evolution of a dynamic Visual language
- the Glass Bead Game in Hermann Hesse’s novel of the same name
- the Primer in Neal Stephenson’s Diamond Age
- Black Mirror-like emotional/social consequences
- What kind of dependencies will naturally form in relation to tech as powerful as I imagine?
- How to mature to the point as a species where the kind of technology that will really take our cognition to the next level won’t simultaneously drive us to self-destruct
Some brief comments on the above awesome ideas:
- Memory Palace: An ancient method for remembering things by imagining a stable environment and memorably placing imaginary representations of content there for later recall. The extensions in XR feel limitless and powerful. The potential implications for our memory are interesting to explore. I feel there is already a process happening, especially in tech where information changes frequently, of not remembering the content itself but rather remembering a path back to it (e.g., via “I searched Google or Stack Overflow with such and such a search term”). These are two kinds of memory, and both can be augmented in XR. The memory palace is, in my view, a tool that should always be “right at eye” in a productive XR workspace. You can also talk about ‘automated palaces’ that organize passively recorded information according to a timeline or some patterned logic. Basically, what I consider an essential missing element in modern computing is an easy facility for each person to create a map of their movement through information. Yes, I’m talking about Vannevar Bush’s Memex: [As We May Think](https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/). My favorite quote from this 1945 article: “Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing.” IMO, this comment expands to the whole of intelligence.
- This brings me to my second point, about an integrated tool environment. A key trend in the software world is the move away from monolithic applications towards ecosystems of smaller applications and microservice fabrics. It is basically the old *nix style of compositionality: piping the string output of one shell command into the next. In general, in our information economy, our protocols are advancing and our tower of Babel is growing ever taller. I’m not alone in seeing an analogy with how the brain itself works: a cooperation of expert systems where the cooperation is itself an expert system in the system. I’ve digressed somewhat; my original point is about functionally composing XR apps in a learning environment. I see this as involving, primarily, an “operating system” built to support an interface adapted to the way the brain internally (semi-consciously) shifts among its more natural cognitive tools. At the limit, an XR learning tool becomes a new layer of the mind. See [Tangible Functional Programming: a modern marriage of usability and composability](https://youtu.be/faJ8N0giqzw) (take that talk’s visual examples and reimagine them as cognitive composition). Overall, I feel this issue of an IDE is what we’re really talking about, and it’s the central thing I want to stress. Learning in XR is not compelling to me if I have to “pull off my headset to take some notes”. And once you take this observation to the extreme, you end up with an XR IDE with no distinction between “learning content” and “producing content”. Personally, this concept is what drives my participation in the world of technology.
- This brings me to my third point, about the evolution of a language. “Visual” is somewhat limiting, and I expect that as the technology evolves, the modalities of a future language will expand too. But once we can write composable tools that transform multi-modal information, and make those tools ergonomic, we start approaching something like the voice-box: an organ our brains wrap around to make a transparently expressive tool. A new language could grow out of this.
- Similarly, if you’ve read Hermann Hesse’s The Glass Bead Game (and if not, I highly recommend it), you can imagine how our entire way of representing information could evolve beyond the static symbols we use today. As the technology spreads, it becomes more and more convenient to record information and communicate using the new tools available in our augmented cognitive environment, just as the availability of malleable surfaces once provided an opportunity for writing, which grew into a great conversation between writers suspended across time.
- As the medium becomes responsive, you get into the realm of personalization, both in terms of unique tool-palettes and in terms of personification. Imagine “talking to your language” and having it grow as a result. At some point, I would argue, that will read as genuinely meaningful rather than as a schizophrenic mixing of levels of speech. Lol.
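To make the pipe-style compositionality in the second point a little more concrete, here is a minimal sketch in Python. The “tools” and their transformations are purely hypothetical stand-ins for composable XR tools; the point is only the shape: small functions chained left-to-right, like `cmd1 | cmd2 | cmd3` in a shell.

```python
from functools import reduce

# Hypothetical "tools" in an XR learning workspace, each a small function
# that transforms a stream of notes -- the analogue of one shell command.
def highlight(notes):
    """Emphasize notes that mention XR (stand-in for a visual highlight tool)."""
    return [n.upper() if "xr" in n.lower() else n for n in notes]

def tag_with_source(notes):
    """Attach provenance to each note (stand-in for a memex-style tagger)."""
    return [f"{n} [source: lecture]" for n in notes]

def compose(*tools):
    """Chain tools left-to-right, like piping shell commands."""
    return lambda data: reduce(lambda acc, tool: tool(acc), tools, data)

pipeline = compose(highlight, tag_with_source)
print(pipeline(["intro to XR", "note on memory"]))
# -> ['INTRO TO XR [source: lecture]', 'note on memory [source: lecture]']
```

The design choice mirrors the *nix idea: because every tool shares one interface (a list in, a list out), any user can rearrange or extend the pipeline without touching the tools themselves.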
The first steps towards some of the more far-flung ideas above are to continue down the road of understanding the mind-process and to write software in a delicate, reflective process that folds in what’s learned. An opinion I hold is that UX these days is so bad that we don’t realize what a fundamentally important thing it is. UX is the “how” of presenting information and of interactive control. On one level, it’s valid for an intellectual person with a mind deep in the essence of things to want to ignore such superficial concerns, as I’ve often done, but like most things, there is depth hidden in this fractal surface. Skipping a few steps, I would point out that UX is how the body-self expands, similar to how the sword master considers the sword an extension of his arm. If only our information tools were so naturally graspable by their intended hand. Metaphors are a language-based example: an easy-to-use tool for adapting and molding things via similarities and dissimilarities. Ultimately, we may need to “go into the brain” with implants or genetic modification to interface with consciousness most efficiently, but I predict touch/sight/sound/temperature will prove to be quite powerful.
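As a closing sketch, the memex-style “map of movement through information” discussed under Memory Palace can be imagined as a very small data structure: record each visit together with the query that led there, so what you retain is the path back rather than the content. Everything here (class names, fields, the example entries) is hypothetical illustration, not a real system.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Visit:
    """One step in a trail: where you were and what led you there."""
    resource: str   # e.g. a URL, document, or XR scene identifier
    query: str      # the search term or link that led to this resource
    timestamp: datetime = field(default_factory=datetime.now)

@dataclass
class Trail:
    """A memex-style record of one person's movement through information."""
    visits: List[Visit] = field(default_factory=list)

    def record(self, resource: str, query: str) -> None:
        self.visits.append(Visit(resource, query))

    def path_back(self, resource: str) -> List[str]:
        """Recover the queries that originally led to a resource --
        remembering the path back rather than the content itself."""
        return [v.query for v in self.visits if v.resource == resource]

trail = Trail()
trail.record("stackoverflow.com/q/123", "python reduce example")
trail.record("memex-article", "as we may think bush")
print(trail.path_back("memex-article"))  # -> ['as we may think bush']
```

An “automated palace” in this framing is just a process that populates such a trail passively and then lays it out along a timeline or other patterned logic.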