Complexity laughs at your puny artificial intelligence

by Adrianna Gregory and Edward Cone

A high-wattage event at the New York Public Library this week focused on complex systems (e.g., living organisms, digital networks), our not-great ability to understand them, and their impact on established systems and worldviews. John Seely Brown opened the show by telling us we live in a time of non-linear causality and dynamic ecosystems; co-host Ann Pendleton-Jullien said we are looking for agency in a world as turbulent as white water. Associate Editor Adrianna Gregory and Tech Practice Lead Edward Cone exchanged some thoughts on the conversation.

Adrianna: I thought about open source a lot tonight. David Krakauer said our existing educational system promotes depth of scholarship within disciplines—and, as a result, it promotes divisions. But as the world becomes increasingly networked, and our systems more complex, seemingly unrelated fields and ideas reveal themselves to be much more similar than we thought. That led me to open source, and how much progress can be made when ideas are shared across a wide range of people. And of course, Yochai Benkler made a direct reference to Apache beating out Microsoft in his presentation ten minutes later.

Ed: I was surprised Krakauer was dismissive of AI as a tool for understanding complexity. He really shut down the guy who asked about it, which saved me from embarrassment: I was going to ask the same thing. Earlier he had said we don’t know how to write down the mathematics for highly complex systems, and I wrote “will that be code for AI?” in my notes.

It seemed a little glib for Krakauer to reduce AI to being really good at chess. The promise of AI goes beyond raw computing power to the extension of the human intellect, kind of like an exosuit for our collective mind. And that would seem to fit his theme of consilience. But I’m glad I kept my mouth shut.

Adrianna: The conversation about AI took an interesting turn when the group began talking about our trouble dealing with multi-agent causality. Our tendency to look for simple solutions does not always work in a complex environment. Someone in the audience brought up the trouble that might arise the first time a self-driving car causes a collision: who gets blamed? The programmer? The operator of the vehicle? AI might help us sort through interrelated ideas in ways that our own personal systems of ethics can’t.

Oh, and open source was still on my mind at this point. The lack of individual agency in a complex, networked environment mirrors the work of large groups of coders: you can’t give credit to any one person in the group, since their contributions might not exist without input from everyone else in the system along the way.

Ed: Huh. My take on agency in the context of open source was different—instead of seeing the individual programmer disappearing into the universal hive-mind, I see that programmer’s ability to effect change on her own as agency. And that feeds into Benkler’s discussion about individuals with arcane knowledge and specific expertise making a huge difference in real-world cases like the Snowden leaks. That seems important for businesses—who do you assign to particular tasks and what domain expertise do they need to effect change in a complex world?

Adrianna Gregory is the Associate Editor for Technology at Oxford Economics.

Edward Cone heads the technology practice and is Deputy Director of Thought Leadership at Oxford Economics.