Soul Machines has been around for a couple of years; this could be its first business application.
Last time you mentioned Joscha Bach, so I watched some of his YouTube videos, and I really like this guy. He has a clear vision of computational consciousness, and his ideas on philosophy are also fascinating.
Is MicroPsi his main computational framework?
The framework consists of the following components:
a graphical editor for designing executable spreading activation networks (which make up the Psi agent’s control structures and representations),
a network simulator (integrated with the editor and monitoring tools to log experiments),
an editor and simulator for the agent's environment, and a 3D viewer which interfaces with the simulation of the agent world.
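As a toy illustration of the spreading activation networks the framework is built around, here is a minimal sketch. This is not MicroPsi's actual implementation; the node names, link weights, and decay rule are all invented for the example. Activation injected at one node flows along weighted links to related nodes over a few update steps:

```python
# Minimal spreading-activation sketch (illustrative only, not MicroPsi code).

class Node:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0
        self.links = []  # list of (target_node, weight)

    def link(self, target, weight):
        self.links.append((target, weight))

def spread(nodes, decay=0.5, steps=3):
    """Propagate activation along weighted links for a few steps."""
    for _ in range(steps):
        incoming = {n: 0.0 for n in nodes}
        for n in nodes:
            for target, weight in n.links:
                incoming[target] += n.activation * weight
        for n in nodes:
            # new activation = decayed old activation + weighted input
            n.activation = decay * n.activation + incoming[n]

# Tiny invented network: activating "smoke" spreads to "health" via "cigarette".
smoke, cigarette, health = Node("smoke"), Node("cigarette"), Node("health")
smoke.link(cigarette, 0.8)
cigarette.link(health, 0.5)
smoke.activation = 1.0
spread([smoke, cigarette, health])
print(health.activation)  # nonzero: activation has reached "health"
```

In the real system, such networks encode both the agent's representations and its control structures; this sketch only shows the propagation mechanism itself.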
From: Seven principles of synthetic intelligence
MicroPsi is an implementation of the PSI theory (https://en.wikipedia.org/wiki/Psi-theory). The PSI theory, developed by Bach’s former professor Dietrich Dörner, is a cognitive architecture, a theory about human general intelligence. I imagine that Joscha tries to apply PSI theory to the AI foundation project. Endowing the agent/bot with a real cognitive architecture.
The computational models of consciousness, such as MicroPsi, global workspace, information integration, internal self-models, higher-level representations, and attention mechanisms, are all great models for our time. My concern is that we could still end up like medieval alchemists, whose elaborate theories were later proven false due to the limitations of their technology. I think Clark & Chalmers's theory of the Extended Mind could be pragmatic, because:
- even if it is wrong, you can at least get something symbolic.
- neural networks, deemed to have no consciousness so far, still attract huge investments from the commercial world, because they can deliver much more functionality.
- integration between the digital and physical worlds is being rapidly driven by countless technology companies like Amazon, Facebook, Google, Alibaba, and Tencent, which are trying to create a seamless, controlled environment both on the screen and in the physical world, thus consciously and subconsciously manipulating our experience of the world and influencing our behaviors, emotions, and perceptions. This ever-growing hybrid world is the mirror of our extended mind and the bio-mind.
Continuing the discussion from Can This Digital Human Help Smokers Quit?:
I agree that projects including the achievement of digital, human-like AGI will go nowhere under the current state of science. I personally believe digital human-like intelligence is possible, but I don’t count on it during my lifetime.
However, the interesting thing about the PSI theory and other cognitive architectures (Soar, ACT-R) is that they conceptualize intelligent, autonomous, human-like behavior in integrated models. This provides a basis for creating agents that are far from human-like but show basic forms of autonomous intelligent behavior (MicroPsi, …). It is applied cognitive science.
Meaning, one can start with very simple agents (e.g., a lifelog bot) and step by step upgrade them with more and more cognition.
Adding cognition is a hot topic in artificial intelligence / machine learning: https://hai.stanford.edu/agenda-2020-fall-conference
The above mentioned company Soul Machines also states this as their long-term goal: https://www.soulmachines.com/resources/whitepapers/delivering-on-the-promise-of-ai/
I would love to try those simple lifelog bots based on cognitive architectures. Do you have any demos on them? Microsoft SenseCam project has done quite some experiments on Lifelogging, but it did not find a way to automatically turn the huge number of images into a personal experience. If the lifelog bots you mentioned can do that, they could really make big progress.
When I look at Soul Machines, Sophia AI (Hanson Robotics), and AI Foundation, I find that all of them have visual design in their genes. Soul Machines comes from a computer graphics background, Hanson Robotics is good at artificial skin materials, and AI Foundation's demo also emphasizes subtle facial emotion. I am wondering whether visual representation plays an important role in cognitive architecture?
The first commercial application Soul Machines delivered is backed by IBM Watson. It gives me the impression that they are mainly doing front-end work.
A cognitive bot is so far only an idea of mine, though surely also what Joscha Bach and others envision for the future. I think you are absolutely right in assuming that today's systems are shiny surfaces with little or, more likely, no cognition behind them.
Possibly a human-like surface is a necessary step, since cognition needs to be embodied: only a human-like embodiment goes together with human-like cognition.
Nevertheless, I think one has to start with a rule-based agent that shows some minimal behavior.
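A rule-based agent with minimal behavior could be as simple as an ordered list of condition-action rules. The following sketch is hypothetical; the percepts and actions are invented (imagining something like a lifelog bot), and a real agent would of course need a perception and action layer around it:

```python
# Minimal rule-based agent sketch: percept in, action out.
# Percepts and actions are invented examples for a hypothetical lifelog bot.

def rule_based_agent(percept):
    """Map a percept (a dict) to an action via ordered if-then rules."""
    rules = [
        (lambda p: p.get("user_idle_minutes", 0) > 60, "send_checkin_message"),
        (lambda p: p.get("new_photo", False), "tag_and_store_photo"),
        (lambda p: True, "do_nothing"),  # default rule, always matches last
    ]
    for condition, action in rules:
        if condition(percept):
            return action

print(rule_based_agent({"new_photo": True}))        # tag_and_store_photo
print(rule_based_agent({"user_idle_minutes": 90}))  # send_checkin_message
print(rule_based_agent({}))                         # do_nothing
```

The appeal of this shape is exactly the step-by-step upgrade path discussed above: individual rules can later be replaced by learned or motivational components without changing the agent's overall loop.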
Lifelogging, to me, is a fascinating but also peculiar topic. Lifeloggers enthusiastically explore ever more angles from which they can record the details of their life. This feels natural to do, a way to translate yourself into a digital being. However, they seem to miss the other necessary direction: transforming the data back into a useful representation of themselves (at least this is my superficial impression so far). I read a bit about Gordon Bell and his MyLifeBits project at Microsoft. They amassed huge amounts of data about themselves, but never managed to process the data into a “digital person”. “Does it take a lifetime to look at a lifetime’s data?” was, I think, a revealing anecdote.
So, I think, one needs to strongly reduce the lifelogging effort and spend at least the same time developing an appealing representation of the data.
When I read the latest Soar manual, published in 2017, I was surprised to see that most of its structures have not changed for several decades. Maybe one day deep learning will hit a wall in the same way. So the visual rendering technology behind the emotional avatar may be the best we can do so far.
Like the emotional avatar, the idea of the Extended Mind could be a low-tech angle for raising awareness of personal data. I am not sure whether relating the digital soul to something bigger and more pragmatic could make more people interested. Could you share some ideas about this?
I take a step back here to reflect. What is it that will attract people to the project? It is the tangible prospect and impression of (digital) immortality. For me, this comprises two essential elements:
(1) A person’s digital representation continues to have personality and autonomy.
(2) The digital person continues to communicate with other persons. It continues to be perceived as a social being and part of human society.
This is what the current avatar and bot approaches pretend to deliver, but do not really achieve, because much work goes into creating shiny surfaces that are hollow inside. However, the shiny surface sells their product and generates much attention. And, in their labs, they strive to get to the real thing, filling the hollow inside.
The question I often ask myself is: what is the alternative (but promising) route for the amateur enthusiast? No CGI studio, no massive budget. Lifelogging, yes, pragmatic. What, then, are the next pragmatic steps to give it personality, communication, and social ability?
Again, I think that without these capabilities the project won’t be adopted by people. And it may well be that the big players are going to be there soon. From this perspective, the amateur should maybe invest the time in gathering his (or others’) data and lifelog/lifestory, to be transferred to a Replika or the like when it reaches the next level.
I totally agree with you on the current strategy of those AI Labs, and hope they can fill the hollow inside in the future.
Since the set of frequent questions put to the digital soul would not be too large, most of the answers could be preset or trained; maybe some open-source Replika-like AIs can already help map questions to answers without too much work.
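Mapping a small set of frequent questions to preset answers can be sketched very simply, for example by naive word overlap between the user's question and the stored ones. This is only a toy baseline (the Q/A pairs are invented, and punctuation handling is deliberately crude); a Replika-like system would use far better matching:

```python
# Toy question-to-preset-answer mapping by word overlap.
# The Q/A pairs are invented examples for a hypothetical digital soul.

PRESET_QA = {
    "where did you grow up": "I grew up in a small town by the sea.",
    "what was your profession": "I worked as an engineer for forty years.",
    "what do you value most": "Honesty, and time spent with family.",
}

def answer(question):
    """Return the preset answer whose stored question shares the most words."""
    q_words = set(question.lower().split())
    best = max(PRESET_QA, key=lambda k: len(q_words & set(k.split())))
    return PRESET_QA[best]

print(answer("Where did you grow up?"))
```

Even this crude matcher shows why a small, curated question set keeps the effort low: most of the work goes into writing good answers, not into the retrieval machinery.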
Thank you for your thoughts.