Q & A with John Long

By Peter Bronski
Vassar Quarterly: Over the course of your career, you've studied psychology, evolutionary biology, and robotics. What was the most surprising thing you learned?

John Long: For me, the most exciting and unexpected thing is what you can do with very little brain. You can see very intelligent behavior—searching for food, escaping from predators—with a brain of just a few neurons. We as humans overrate the brain. We have a big one, and we want it to be important. But there is a lot that you can do without a lot of neural power.

VQ: So if organisms can become more intelligent by evolving their bodies, rather than their brains, does this influence our definition of intelligence?

JL: I have a half-time appointment with Vassar’s cognitive science program, so the study of the mind is very important to me in doing this work. There’s a big debate in cognitive science right now. Is intelligence defined by your behavior, or what the mind/brain is doing independent of that behavior? Behavior is something we can observe. It’s important; it matters. But is behavior the only definition of intelligence? No. Humans have an introspective life, a life of the mind, and we can report on that experience.

[Photo caption: Professor John Long uses electronic tadpoles (Tadros) and other artificially created marine vertebrates to study evolution.]

VQ: It seems there’s the potential for some crossover between the human life of the mind and a robotic life of the mind. Are we looking at some distant future in which computers and machines and other robots guided by artificial intelligence become sentient and overthrow humans?

JL: I don’t think our machines are intelligent in the sense that they’re sentient and will wake up one day and have free will and need to overthrow their human rulers. But we are becoming more and more dependent on machines. We are building machines that do things for us, and we’re offloading “human” tasks to those machines—memory, math.

VQ: It sounds like there’s also a growing dependency on machines for soldiers on the battlefield, and that there’s a place for the nexus of biology and robotics there as well.

JL: Agencies such as DARPA [the Defense Advanced Research Projects Agency] and companies like Boston Dynamics are basically creating robotic pack animals for soldiers in the field, machines that can go anywhere with the soldiers. You see it with their robotic dogs and a new robotic cheetah. But right now engineers are trying very hard to catch up with the very good job that biological systems do. There's a perceived need to push the limits so that engineering can do what animals do in terms of endurance, maneuverability, speed, and the ability to carry supplies with very little energy.

VQ: Is there a fundamental difference, though, between building robots that mimic biology and building biomimetic robots that evolve, as yours do in your lab?

JL: When you're interested in engineering applications, such as carrying supplies for a soldier, the goal is like an unmoving target on a wall. You do all you can to optimize getting to that target. By contrast, we start with a biological system and take it from there. There's no particular design we're trying to shoot for. We're looking at the process of evolution; we have no idea where the target is. We run our systems not knowing where they're going to go.

VQ: What’s next in the “evolution” of your research, then?

JL: What my colleagues and I are working on right now is swarm intelligence, intelligent behavior in groups. Take ants. Individually they don't do complicated things, but as a group they can, such as building and defending a mound. What we've done is take a population of Tadros, strip them down to a basic survival version (they search for food), and then mess with their minds. Some are goal driven; others move randomly. But they don't talk to each other. The only thing they share is their presence in the same physical world we create: their tank. In the swarm intelligence literature, there's an assumption that members of a swarm are always looking at each other, making adjustments. But does coordinated behavior come merely from the fact that you're in a group with the same goal? To our surprise, what we see going on is that if you share a goal, a group becomes coordinated, even without communication. We're overestimating the amount of work that brains do, and underestimating the importance of being in a physical situation with people.
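The experiment Long describes can be illustrated with a toy simulation. This is a hypothetical sketch, not the actual Tadro software: agents in a one-dimensional "tank" either step noisily toward a food source (goal driven) or drift at random, and no agent ever senses another agent. The agent counts, step sizes, and food position are all invented for illustration.

```python
import random

# Hypothetical toy model of the Tadro swarm experiment: a 1-D "tank"
# spanning [0, 100] with a food source at x = 80. Goal-driven agents
# step toward the food; random agents drift. Crucially, no agent
# senses or communicates with any other agent.

FOOD = 80.0

def step(pos, goal_driven, rng):
    if goal_driven:
        # Noisy step biased toward the food source.
        direction = 1.0 if FOOD > pos else -1.0
        return pos + direction * rng.uniform(0.5, 1.5)
    # Purely random drift.
    return pos + rng.uniform(-1.0, 1.0)

def simulate(goal_driven, n_agents=10, n_steps=200, seed=1):
    rng = random.Random(seed)
    agents = [rng.uniform(0, 100) for _ in range(n_agents)]
    for _ in range(n_steps):
        # Clamp positions to the walls of the tank.
        agents = [min(100.0, max(0.0, step(p, goal_driven, rng)))
                  for p in agents]
    return agents

def spread(agents):
    """Distance between the two most distant agents."""
    return max(agents) - min(agents)

goal_group = simulate(goal_driven=True)
random_group = simulate(goal_driven=False)
print(spread(goal_group), spread(random_group))
```

Running this, the goal-driven group ends up tightly clustered near the food while the random group stays scattered: sharing a goal in a shared environment is enough to produce coordinated behavior, with no communication between agents.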
