Nonlinear Artificial Intelligence: A New Perspective

A recent study published on arXiv proposes a nonlinear view of artificial intelligence, challenging the assumption that progress is linear and uniform. The authors introduce the concepts of "familiar intelligence" and "strange intelligence" to describe the capabilities emerging in AI systems.

'Strange' Intelligence and Unexpected Capabilities

On this view, artificial intelligence is more likely to manifest as "strange intelligence": systems that exhibit unexpected combinations of abilities and limitations, performing some tasks at a superhuman level while failing others in surprising ways, making errors a human would rarely make.

Implications for Testing and Evaluation

This view has important implications for how we evaluate AI capabilities. If AI is inherently "strange", we cannot expect even the most advanced systems to be infallible, and seemingly trivial errors should not be read as evidence against general intelligence. Conversely, excellent performance on a single test, such as an IQ test, does not guarantee broad, generalized capability beyond that specific domain.
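
To make the evaluation point concrete, here is a minimal, purely illustrative Python sketch. The task names and scores are invented, not taken from the study; the only point is that a single aggregate or single-benchmark number can hide a "jagged" profile in which superhuman and surprisingly weak performance coexist, which is why per-task reporting matters.

    from statistics import mean

    # Hypothetical per-task accuracy scores for a single model.
    # Names and numbers are illustrative only.
    task_scores = {
        "iq_style_puzzles":    0.97,  # superhuman on a narrow benchmark
        "symbolic_reasoning":  0.92,
        "counting_letters":    0.41,  # trivial for humans, surprisingly weak here
        "multi_step_planning": 0.55,
    }

    # A single headline number masks the uneven profile.
    aggregate = mean(task_scores.values())
    print(f"Aggregate score: {aggregate:.2f} (hides the uneven profile)")

    # Reporting per-task results and the best-worst spread surfaces the
    # "strange" capability profile that the aggregate conceals.
    best = max(task_scores, key=task_scores.get)
    worst = min(task_scores, key=task_scores.get)
    spread = task_scores[best] - task_scores[worst]
    print(f"Best task:  {best} ({task_scores[best]:.2f})")
    print(f"Worst task: {worst} ({task_scores[worst]:.2f})")
    print(f"Spread: {spread:.2f} -> report the full profile, not one number")

A wide spread between best and worst tasks is exactly the pattern the authors' notion of "strange intelligence" predicts, and it is invisible if evaluation stops at a single score.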