How the evolution of strong AI depends on strong development paradigms
The beauty and the beast – Open world machine learning vs domain modeling
Preliminary remark: the author has considered himself an AI enthusiast for more than a decade – and, because of that, a skeptic of the current hype.
On a daily basis we are confronted with the latest ‘breakthroughs’ in the growing capabilities of neural-network (NN) based AI systems. Be it Google and its AlphaGo world dominance, AWS with its celebrity recognition service, or Sergey Brin reporting at the WEF how Google Brain drew a cat of its own accord last week.
Still, this keeps reminding me of the Varieté theatres of the early 20th century:
The strongest man in the world, in a leotard, was pushed onto the stage to lift tons of iron, then vanished again, followed by Mr. Elastic bending his back without cracking, and on and on and on.
A miracle for those who believed.
The gap between business application and AI nevertheless seems almost as large as the one between the iron man and his ability to improve the construction business. Think of the existing voice assistants and chatbots – let’s face it – Siri and Alexa have not yet brought us where we thought we could get in a short period of time. Of course we still may – but not if we keep holding on to the Varieté.
To put AI to good purpose, we will have to overcome some historical contradictions in our underlying AI development paradigms.
During the 1990s, rule- and case-based modeling of problem domains was the technology en vogue.
Neural networks already existed, but computational capacity was too limited to reach acceptable results at scale.
In the 2000s standard software paradigms ruled; absolutely nobody invited you to a pitch if you mentioned AI in your summary.
With the unleashed computational capacities (a function of processing power, storage, and the amount of available data) of the last ten years, we have seen a renaissance of the neural-network machine learning paradigm, which wrote off domain modeling as outdated and lame. Of course that happened for a reason: the effort required by rule- or case-based domain models, and their ‘isomorphism’ with reality, reached their limits. (Those solutions are still sold in the market, but with a little less said out loud.)
The data-driven NN paradigm is very promising: mastering statistical patterns instead of domain modeling to develop extraordinary capabilities in specialized tasks (image recognition, image construction, sales analytics, voice recognition – you name it). RIP rule-based modeling.
On the other hand, the general openness and unconditional expansion of data-driven approaches alone lead to a loss of convergence and to a disintegration of purpose and service delivery along the way (of course such results are only presented by accident). This is a critical point – I would rather call it an evolutionary hurdle – for strong AI to come about. I am not convinced that staying in the Varieté, believing it will happen just “by itself”, will actually bring it on.
But why is that?
The Unconditional Expansion Paradox
The unconditional expansion of neural networks (driven by available data) across many dimensions and higher orders may lead to a lack of systemic convergence – the very convergence that would shape the identity of an ‘accountable agent’ in interaction. Accountability requires an expectation horizon that is not shifted by input data every second. (And yes, this is only one of the most fundamental prerequisites, not the only one. But the others, like flawless natural language processing, are already widely discussed.)
The result is exponentially increasing intransparency of outcomes for the NN builders and a fundamental intransparency of structures for the NN ‘itself’. Without systemic convergence, the unpredictability of the NN’s behaviour disqualifies it from an autonomous agent role in a business context. (Just like the iron man, who might start throwing rocks at his colleagues at the construction site for being hassled about his leotard.) And sooner than you might think, hard-coded rule interrupts are revitalized (incognito, of course) to catch the worst fallacies – but don’t tell anyone. What else is this but proof that something is lacking, looming in the blind spot of cognitive computing?
This might be summed up as the ‘unconditional expansion paradox’:
Unconditional expansion inescapably creates the limit of AI evolution by itself.
Or, with an even more radical analogy: the unconditional expansion of cancer limits the organism’s possibilities of staying alive. (This is a matter of general-purpose AI – I am not talking about a robot takeover.)
If that holds true (and of course not everybody in the field will agree), then what could be next? How could a learning, time-evolving expectation horizon be conceived as the kernel of a strong AI system?
Convergence by internal self-differentiation
There are paragons of healthy expansion to be found in living systems: a healthy living system expands its boundaries by increasing its internal complexity while at the same time reproducing the borders that distinguish it from its surroundings.
This internal differentiation – and thus self-similar reproduction – is a fundamental principle for achieving higher levels of control over complexity in evolution (as long as you don’t think a fungus might be an even more preferable role model).
Returning to the NN / domain modeling contradiction, what I am pointing to is a third way: machine learning and generic modeling that is operationally closed and at the same time world-open may be combined into self-differentiating systems. For good.
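To make the idea a little more concrete, here is a minimal sketch of what such a combination could look like in code. It is purely illustrative – all class and function names are my own invention, not an existing library or the author’s implementation. A learned model’s outputs pass through an ‘expectation horizon’: an operationally closed set of admissible outcomes that can only widen slowly, so that a burst of surprising input data cannot reshape the agent’s behaviour at once.

```python
class ExpectationHorizon:
    """A slowly adapting set of admissible outcomes (illustrative sketch).

    Surprising outcomes are queued rather than adopted immediately, so the
    system stays operationally closed in the short term while remaining
    world-open over longer cycles.
    """

    def __init__(self, admissible, max_updates_per_cycle=1):
        self.admissible = set(admissible)
        self.max_updates = max_updates_per_cycle
        self.pending = []

    def admits(self, outcome):
        return outcome in self.admissible

    def propose(self, outcome):
        # Queue a surprising outcome for slow, bounded integration.
        self.pending.append(outcome)

    def evolve(self):
        # Integrate at most a bounded number of proposals per cycle.
        for outcome in self.pending[: self.max_updates]:
            self.admissible.add(outcome)
        self.pending = self.pending[self.max_updates :]


def accountable_predict(model, x, horizon, fallback):
    """Return the model's prediction only if the horizon admits it."""
    y = model(x)
    if horizon.admits(y):
        return y
    horizon.propose(y)  # register the surprise for later integration
    return fallback     # stay predictable in the meantime
```

In this sketch the NN stays free to learn whatever patterns the data offers, while the horizon preserves the predictability that an accountable agent role requires – the horizon itself evolves, but only at its own bounded pace.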
I intentionally do not use attributes like ‘self-aware’ or ‘self-recurring’ to describe such a central nervous system, in order to keep a clear separation from human consciousness.
The good news is: it can be done. Some try to. Some are doing it.