As it stands, the field of AI struggles with two major obstacles: generalization and explainability, both of which stem from neural networks' inability to form concepts the way humans do. To a large extent, neural networks are excellent pattern-matching algorithms. They are good at tasks that rely on low-level cognitive functions. Where reasoning, a high-level cognitive function, is concerned, however, such tasks remain out of reach for current AI algorithms.
The result is twofold: a glass ceiling on the missions AI can handle, and the black-box nature of AI, which cannot be trusted in sensitive domains because its decisions cannot be explained in human terms. Even DARPA, the U.S. defense research agency, has put forth a $2 billion campaign to tackle this problem, coining the term 'third wave of AI systems', one of several projects globally aimed at spurring research and development on the explainability of neural networks.
While most research concentrates on working upward, from lower-level to higher-level cognitive functions, Israeli startup Decodea.ai is working in the opposite direction, starting from a model of sophisticated human reasoning.
