
Interview with Prof. David Vernon

Computer Engineering, Khalifa University of Science, Technology,
and Research, UAE, Coordinator of euCognition

What do you think cognitive systems are?

Anything that is able to adapt to changes in the world around it and that is able to anticipate things that might happen in the future is in some sense a cognitive system. The way the system adapts is important: it must be able to learn and to develop on its own as it interacts with the world around it, altering and improving itself, and becoming better at what it does. The system’s ability to anticipate is tied up with this: by being able to ‘guess’ what will happen next, the system can adapt even more effectively and hence anticipate new things that might happen. So there is an interesting circularity going on here: cognitive systems adapt to improve their skill in anticipation, and this anticipation, in turn, helps the system adapt even better. And, as I said, a cognitive system does all this on its own when interacting with the world around it.
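To make that adapt-and-anticipate loop a little more concrete, here is a minimal sketch in Python (an illustration added for this page, not a model from Prof. Vernon’s own work): a toy agent keeps a running prediction of what it will sense next, compares it with what actually happens, and uses the prediction error to adapt. The names (Agent, learning_rate) and the drifting ‘world’ are purely illustrative assumptions.

```python
import random

class Agent:
    def __init__(self, learning_rate=0.1):
        self.prediction = 0.0            # what the agent expects to sense next
        self.learning_rate = learning_rate

    def anticipate(self):
        """Guess the next observation before it arrives."""
        return self.prediction

    def adapt(self, observation):
        """Nudge the internal expectation toward what actually happened."""
        error = observation - self.prediction
        self.prediction += self.learning_rate * error
        return error

# A toy 'world' that drifts on its own; the agent is never told how it changes.
world_state = 5.0
agent = Agent()
for step in range(100):
    world_state += random.gauss(0.1, 0.2)   # the world changes by itself
    guess = agent.anticipate()              # anticipate ...
    error = agent.adapt(world_state)        # ... then adapt using the surprise
    # Better anticipation -> smaller error -> more effective adaptation.
```

The smaller the prediction error becomes, the better the agent’s next guess, and the better the guess, the more effectively it adapts: the circularity described above, in its simplest possible form.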

What is your area of research within Cognitive Systems?

I work on an area called ‘cognitive architectures’ which is a little like working on the design of a blueprint for a skyscraper or an organization chart for a large corporation. It’s concerned with figuring out what bits are needed and how they all fit together to make the system work properly. I also work on the visual sensing part of cognitive systems: writing software that allows the system to see what’s going on in the world around it.

Why did you become a researcher?

I actually started life as a software engineer in a big company and I enjoyed that job very much. But I wanted to set my own goals, rather than work on projects someone else gave me, and I wanted the chance to do something really new and different. So I left the company, returned to university to do a Ph.D. in robot vision, and, after I graduated, kept on going as a researcher, trying to solve harder and harder problems.

How did you get into Cognitive System research?

Like many things in life, by accident! My Ph.D. work on vision threw up all kinds of questions about the nature of perception that I couldn’t attempt to answer in my thesis. So I started reading books on psychology, neuroscience, systems theory, and even philosophy. It became clear that somehow the question of perception was intimately tied to the area of cognition: how the whole system works, not just the sensing parts. And my research since then has been mainly concerned with sorting these issues out.

Where did you study and what subjects did you study?

I read for my Ph.D. at Trinity College Dublin. I studied Engineering there as an undergraduate and then stayed at Trinity for my Ph.D. in robotics and robot vision.

Can you describe briefly how you do what you do?

I’m an engineer, first and foremost, so I love to build things. Working on cognitive systems is a perfect area for that because, as with all engineering, you need a model (or a theory) of how something will work before you design it (i.e. create the blueprint I mentioned earlier). Then you use the blueprint to build the system and see how it works (or see IF it works!). You learn a huge amount from this, and it allows you to re-think your original theory and create a better blueprint and, hopefully, a better system.

What are the techniques used in your research?

Cognitive systems research is all about adapting to change and anticipating change, so the techniques I use build on an area called dynamical systems and, in particular, on a type of dynamical system that can ‘self-organize’.

Can you tell me why they are important?

The idea of self-organization is important because we want to use techniques that enable the system to ‘look after itself’ without needing me, its designer, to help it by giving it lots of hints about what to do (in other words we don’t want to embed too much of MY knowledge in the system at the start: we’d rather the system figured things out for itself).
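As an illustration of what ‘self-organization’ means in practice, here is a minimal sketch using the classic Kuramoto oscillator model (a standard textbook example, not a system from Prof. Vernon’s own work; the parameter values are assumptions): each oscillator adjusts its phase using only what the other oscillators are currently doing, and global synchrony emerges without any central controller telling the group what to do.

```python
import math
import random

N = 20        # number of oscillators (illustrative)
K = 2.0       # coupling strength (assumed value)
dt = 0.05     # integration time step

phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
freqs = [random.gauss(1.0, 0.1) for _ in range(N)]   # natural frequencies

def order_parameter(phases):
    """Degree of synchrony: near 0 when disordered, near 1 when synchronized."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

print("before:", round(order_parameter(phases), 2))

for step in range(2000):
    new_phases = []
    for i in range(N):
        # Each oscillator reacts only to the others' current phases:
        coupling = sum(math.sin(p - phases[i]) for p in phases) / N
        new_phases.append(phases[i] + dt * (freqs[i] + K * coupling))
    phases = new_phases

print("after:", round(order_parameter(phases), 2))   # typically close to 1
```

Nothing in the update rule mentions synchrony; the ordered behaviour is something the system ‘figures out for itself’, which is the flavour of self-organization referred to above.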

What are the major implications of your work?

The goal of my work is twofold: to understand this amazing process called cognition and to build systems that have at least a little of it in the way they work. The net result is that we will be able to use systems in applications where we don’t know everything before we start out: in areas which are just too complicated for existing systems.

Who will benefit from your research / techniques?

When you get old, you start to lose your short-term memory: you can’t remember what you’ve just done or even what you’re currently doing. Everyone has had the experience of walking into a room and stopping, thinking: what did I come in here to get? Well, for some old people, their entire life is like that and it’s very frustrating for them. They need people to keep an eye on them, notice when they’re getting confused or have forgotten to do something, and then prompt them to help them out and get them back on track. My dream is that some day we will have artificial cognitive systems that can do this for us. It may be a long way off, but we’ll probably get there in stages, learning as we go.

What skills do you think are most important to a Cognitive Systems researcher?

Curiosity, tenacity, and humility! You need a real passion to understand what’s going on in cognition but, because it’s such a hard problem, you need to be able to stick with your ideas and accept that they won’t all work out … but keep on going. Very few people succeed at the first attempt, so you need to be willing to try, try, and try again.

What do you think is most satisfying about Cognitive Systems research?

I think it’s that you learn a little more about what it is to be human (the best cognitive system we know) and understand a little more about ourselves.

What do you consider most challenging about being a Cognitive Systems researcher?

The most challenging aspect of cognitive systems research is that we simply don’t know how all the parts fit together and we’re not even sure that we know what all the parts are. As a science, we’re just at the beginning and nobody has all the answers. Of course, that’s also the attraction: all researchers live in hope of making the big breakthrough that will change the way we think about and build systems and, because we’re just beginning to understand cognition properly, everyone has a chance of being on the team that makes this big breakthrough (or even a couple of small ones!).

What do you think are the main challenges for the future?

We face two types of challenges: social ones and technical ones.

The social challenges are the ones we face as a community of researchers. As I mentioned already, cognitive systems research is really just getting started (although it can trace its roots back well over fifty years) and we are finally bringing together people from all the disciplines that need to be involved to make real progress: the psychologists, the roboticists, the computer scientists, the neuroscientists, the mathematicians, the linguists, and many more. This creates some tough challenges: creating a common language so that we can work together and creating a common understanding of the problems we are trying to solve. And we are only going to solve them by working together: we won’t succeed if we allow the community we are building today to split up again into little specialist areas.

The technical challenges are huge and there are too many to mention here, but I think the critical ones are to get a strong theoretical and practical handle on this idea of self-organization and, at the same time, to figure out what you need to put into a system at the beginning so that self-organization can take over and lead to the properties we want the system to have: the ability to adapt, to anticipate, and to develop … all on its own.

There are several discussions or debates associated with Cognitive Systems research. Could you mention issues relating to your work?

This is a tough question to answer because there is no shortage of controversies in cognitive systems research! (See the euCognition Wiki for a discussion of some of them.) This is partly to do with the fact that it's a young discipline and we don't yet have a widely-accepted position on everything, and partly due to the diversity of the people working in the area.

Anyway, let me pick out one of the most contentious issues and try to summarize the two sides that people take on the matter (but remember that, as with most important things, the more you get into it, the more complicated it becomes). The issue is whether or not cognitive systems need bodies.

Can you outline the arguments of the opposing sides of the debate?

Scientists who think that cognitive systems need to have a body (i.e. they have to be embodied) think that cognition is primarily a process of constructing physical skills that are adaptive and anticipatory - those words again - and that not only do you need a body to do this but the form of your body - the way your eyes, ears, arms, legs, hands, and fingers work - is as important as the stuff going on in your brain. This position has the interesting consequence that your knowledge of the world - how you understand the world around you - is dependent not just on the way the world is but also on how you interact with it. This is a pretty major idea. Think about it: you are not just learning about the world, you are shaping and to an extent constructing it as you live (and cognize) in it. This means your knowledge of the world is personal and you can only share it by learning how to interact and communicate with other cognitive systems (and they with you).

On the other hand, scientists who don't think that cognitive systems need to have a body actually don't mind very much if they are embodied, but they don't think it's necessary either. For them, if you had the right model of cognition, it wouldn't matter WHAT machine you used (human brain, laptop computer, or the positronic brain in Star Trek's Commander Data), you'd still get the cognitive behaviour you required. Interestingly, this position has the opposite consequence: the knowledge the cognitive system learns and develops is independent of the machine it runs on and is therefore the same for all cognitive systems. In other words, cognitive systems can directly share their knowledge with each other. Incidentally, this also means you could simply transfer knowledge directly from an old cognitive system to a new one without it having to relearn it through experience; a neat trick if it could be done.

Thank you!

