My principal scientific passion is understanding cognition, particularly as it relates to explaining human culture, but also natural intelligence more broadly.  My main methodology is designing intelligent systems to model and test scientific theories: we build theories of intelligence into working AI models.  Modelling lets us learn more than unassisted human reasoning can about whether a theory is coherent and what its implications are.  Once we understand a theory's implications and predictions, we can compare these to the data empirical scientists collect from the natural systems we are trying to explain.  Increasingly, we are collecting our own data too.

Most (but not all) of my research has focused on the unintentional and non- or proto-linguistic aspects of human intelligence, and on how intelligence evolves more broadly.  The more we understand both the universals of cognition and computation, and the variation we see across species (particularly but not exclusively in animals), the better we will understand the context of specifically human behaviour, including human culture.  From 2000 to 2007 I worked primarily on understanding non-human primate behaviour.  Since 2008 my group has focused more on characteristics of human cognition such as consciousness, language, religion, and especially cultural variation in cooperation.  This doesn't mean we've lost interest in comparative cognition; my group now studies evolutionary and learning dynamics in many contexts, from public goods games in humans to instruction sharing in microbes, from evolvability and epistasis in gene regulatory networks to computer game strategies.  We have looked at cognition and information sharing in species from macaques to tortoises to ravens to Mongolian wild asses.

Designing AI models of natural intelligence isn't as easy as it should be.  My research has therefore always included a great deal of work on systems AI, including my work on action selection and the development methodology I originated at MIT, Behavior Oriented Design (BOD).  We apply this work in a variety of domains besides science, including cognitive robotics, computer game characters, and intelligent environments / "smart homes".

Given that I work on both improving AI and understanding human society, I feel obliged to also work on Robot and AI Ethics.  My work there didn't initially seem like research, but rather just public understanding of science.  However, it has become clear that humans have a lot of trouble understanding AI, partly because AI is built in overly opaque ways, partly because humans over-identify with the computational aspects of our intelligence, and perhaps mostly because we don't understand ourselves.  Consequently my group now does real empirical research in AI ethics, both on how to make AI itself more transparent and accountable, and on how to help ordinary users understand their domestic AI systems.

Research Group and Students

Research is done by researchers.  Many of the people I've written papers with have been affiliated with Artificial models of natural Intelligence (AmonI).  These include my PhD and other dissertation students.  I have also served as the research leader for Bath's Intelligent Systems Group, but this was very much a Frederick Brooks-style leadership-as-service role.  It has now been taken over by my awesome colleague Özgür Şimşek.

If you would like to do research with me, and

Other and Older Research Projects, Funding

I have been involved in promoting European Cognitive Systems research and education.  Some time ago I occasionally maintained this research-oriented list of Related Web Sites.  Even my really old Code is online.  All of my code from published projects is available either there or from the AmonI Software Page.

Research projects and labs previous to Bath

Bath Projects and Funding Acknowledgement

Please see the AmonI Web pages for descriptions of projects and links to code.

J J Bryson
Last updated January 2018