
CM30076 / CM30082
Individual Project

Project Ideas

Dr John Collomosse

jpc@cs.bath.ac.uk

These projects all lie on a continuum between Computer Graphics and Computer Vision, and many are in line with my own research interests. All require a reasonable degree of competence in mathematics. For more details please call in to see me - I am also happy to answer questions via email.

Robot Vision
STOP PRESS -- NEW PROJECT
An opportunity has arisen for an enthusiastic and committed student to take part in a collaborative robotics project with Electrical and Mechanical Engineering. We wish to put together a student team to design and build an AUV (autonomous underwater vehicle) capable of carrying out various vision-dependent underwater tasks, e.g. following a bright orange pipeline, or dropping weights into bins on the seabed. The goal is to enter the robot into either this US competition, or the UK equivalent, in July 2006:
http://www.auvsi.org/competitions/water.cfm
I will be supervising up to two real-time Computer Vision projects in this area, addressing the underwater marker following and object recognition tasks. You would most likely work with the prototype robot built last year by Mech. Eng., liaising with other members of the team (including other CS students working on the vision and AI aspects of this task - see J. Bryson's projects).
Please do not apply unless you have excellent C programming skills and reasonable competence in mathematics. You should not only work well in a team, but also be able to learn quickly and independently when investigating new vision techniques.
This work will be very time-consuming, and strong commitment to the project is important - the vision component is critical to the team's success. In addition, you will be required to carry on working, in your own time, up until the July competition date - i.e. to continue working even after your project submission in May.
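
To give a flavour of the vision component, the fragment below is a minimal sketch (not project code; the colour thresholds and frame format are assumptions) of a first step towards the pipeline-following task: classify orange-ish pixels with a crude colour test and return their mean column, which a controller could use as a steering error.

#include <stddef.h>

/* Returns the mean x position of orange-ish pixels in an interleaved RGB
   frame, or -1.0 if none are found. */
double orange_centroid_x(const unsigned char *rgb, int width, int height)
{
    long sum_x = 0;
    long count = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            const unsigned char *p = rgb + 3 * (y * (size_t)width + x);
            int r = p[0], g = p[1], b = p[2];
            /* crude "orange" test: strong red, moderate green, little blue */
            if (r > 150 && g > 60 && g < 160 && b < 80 && r > g + 40) {
                sum_x += x;
                count++;
            }
        }
    }
    return count > 0 ? (double)sum_x / (double)count : -1.0;
}

A real system would also need to cope with the colour shifts and scattering of underwater imagery, which is where much of the difficulty lies.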
Projector Driven Touch Screen
Interactive touch screens are becoming increasingly popular in kiosks, exhibitions, etc. Modern plasma displays can often function as touch screens, but are expensive and bulky, making them unsuitable for deployment in some situations. This project will instead combine a data projector and a camera to create a portable touch screen system. The projector will be used to create a virtual "screen" on a flat wall, and Computer Vision techniques will be used to detect user interaction. I have developed C libraries for automatic calibration of the camera/projector setup, which will be made available to you. This code automatically corrects for keystone distortion of the projected image, allowing the projector and camera to be positioned (almost) arbitrarily in the room. Your work will therefore concentrate on the tracking algorithms for touch screen interaction. The language for implementation will be C or C++, under either Linux or Windows.
Indicative reading: Introductory material re: image processing, e.g. "Feature Extraction and Image Processing" Nixon/Aguado. Copies in the library.
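
As a rough illustration of what the calibration gives you (this is not the supplied library's interface, just an assumed one), keystone correction can be modelled as a 3x3 planar homography H mapping camera pixel coordinates to coordinates on the projected screen; the tracking stage then works in screen coordinates.

#include <stdio.h>

/* Map camera point (cx, cy) to screen point (*sx, *sy) using homography H. */
void camera_to_screen(const double H[3][3], double cx, double cy,
                      double *sx, double *sy)
{
    double x = H[0][0] * cx + H[0][1] * cy + H[0][2];
    double y = H[1][0] * cx + H[1][1] * cy + H[1][2];
    double w = H[2][0] * cx + H[2][1] * cy + H[2][2];
    *sx = x / w;   /* perspective divide */
    *sy = y / w;
}

int main(void)
{
    /* identity homography used only so the example runs; in practice H
       would come from the supplied calibration code */
    const double H[3][3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };
    double sx, sy;
    camera_to_screen(H, 320.0, 240.0, &sx, &sy);
    printf("camera (320, 240) -> screen (%.1f, %.1f)\n", sx, sy);
    return 0;
}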
Exhibit Tag Recognition via Mobile Phone Camera
A novel system has been developed in which supermarket customers can use their phone/PDA camera to perform price checks by photographing bar codes (New Scientist, May 20th). The proposed project will implement a similar, non-proprietary, system allowing users to take photos of bar codes, or other printed tags, at arbitrary angles. The effects of illumination, image resolution and noise on the system will be investigated. Other applications might include artifact recognition in museums, etc. The implementation language is flexible. Prototypes and experiments will probably be constructed in MATLAB, and will involve the development of robust Computer Vision algorithms. If time permits, your prototype code will be migrated to C or C++ and deployed as a 2-tier client/server application.
Indicative reading: Introductory material re: image processing, e.g. "Feature Extraction and Image Processing" Nixon/Aguado. Copies in the library.
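
To illustrate one building block (shown in C here, although the prototype would more likely be written in MATLAB), a bar code can be read along a grey-level scanline by thresholding it into dark/light runs and measuring the run lengths, whose widths encode the digits. The fixed threshold below is an assumption; handling uneven illumination, blur and perspective is the real work of the project.

/* Fills runs[] with alternating dark/light run lengths along a scanline of
   n grey-level pixels; returns the number of runs found (capped at max_runs). */
int scanline_runs(const unsigned char *gray, int n, int threshold,
                  int *runs, int max_runs)
{
    int count = 0;
    if (n <= 0 || max_runs <= 0)
        return 0;
    int current = gray[0] < threshold;   /* 1 = dark bar, 0 = light space */
    int length = 1;
    for (int i = 1; i < n; i++) {
        int dark = gray[i] < threshold;
        if (dark == current) {
            length++;
        } else {
            if (count < max_runs)
                runs[count] = length;
            count++;
            current = dark;
            length = 1;
        }
    }
    if (count < max_runs)
        runs[count] = length;
    count++;
    return count < max_runs ? count : max_runs;
}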
Automated Shredded Document Recovery
The stitching together of photographs to form panoramas is an active research topic and is becoming a popular feature in graphics software. However, this operation assumes overlap between photographs. This project will investigate Computer Vision approaches to stitching together non-overlapping images, with an application to reconstituting shredded paper documents. We will only deal with "Class 2" shredders, i.e. long strips, not criss-cross shreds. Shreds will be scanned into your software, which will then perform pattern matching to rebuild the document. Implementation will ideally be in MATLAB. This is related to the automated jigsaw solving project that I ran last year, and I hope that this work will build upon last year's findings. The project is investigative in nature and will require mathematics.
Indicative reading: Last year's project report (on my web page) and introductory material re: image processing, heuristic-based search and evolutionary algorithms.
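
Purely as an illustration of the kind of matching cost involved (not a prescription - designing the cost function is part of the project), two strips can be compared by summing squared grey-level differences between their facing edge columns; a search procedure then orders the strips so as to minimise the total cost. Strips are assumed here to be scanned at the same height and stored row-major as grey-level bytes.

/* Cost of placing strip B immediately to the right of strip A:
   lower cost means the facing edges match better. */
long long edge_match_cost(const unsigned char *strip_a, int width_a,
                          const unsigned char *strip_b, int width_b,
                          int height)
{
    long long cost = 0;
    for (int y = 0; y < height; y++) {
        int a = strip_a[y * width_a + (width_a - 1)]; /* right edge of A */
        int b = strip_b[y * width_b + 0];             /* left edge of B  */
        long long d = a - b;
        cost += d * d;
    }
    return cost;
}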
Vehicle Licence Plate Clean up in CCTV footage
CCTV has become ubiquitous in our society, and surveillance footage of vehicles recovered from cameras in car parks/roads is often used by the police in criminal investigations. Often the quality of video available from these devices is poor, and vehicle licence plates can be unreadable due to image noise. By averaging successive video frames together, some of this problematic noise can be suppressed (a technique also used in astronomy) --- but this is not feasible if the subject vehicle is moving. This project will solve this problem by tracking the rectangular licence plate region over successive video frames. The rectangular regions will be registered to a single coordinate system, so compensating for their motion. The regions will then be averaged to clean up the vehicle licence plate, which should improve readability for a human operator. Note this project is _not_ about automatic number plate recognition, though variation in the project idea is negotiable. This is an investigative Computer Vision project, so implementation is preferably in MATLAB.
Indicative reading: Introductory material re: image processing, e.g. "Feature Extraction and Image Processing" Nixon/Aguado. Copies in the library.
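
Once the plate regions have been tracked and registered to a common coordinate frame, the averaging step itself is simple, as the illustrative sketch below shows (registration - the hard part - is assumed to have been done already); averaging n registered patches reduces zero-mean noise by roughly a factor of sqrt(n).

/* Per-pixel mean of n registered patches of size width x height,
   each stored as grey-level bytes; the result is written to out. */
void average_patches(const unsigned char **patches, int n,
                     int width, int height, unsigned char *out)
{
    for (int i = 0; i < width * height; i++) {
        long sum = 0;
        for (int k = 0; k < n; k++)
            sum += patches[k][i];
        out[i] = (unsigned char)(sum / n);
    }
}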
Augmented Reality - Open ended
Augmented reality is a form of virtual reality in which text and graphics are superimposed over a video feed, giving the impression that virtual objects exist within the user's real-world environment. Virtual objects move in accordance with the user's own movements, so adding to the sense of realism. This year we will have access to a head mounted display with a built-in camera, enabling students to experiment with innovative project ideas in this area. Possibilities might include education/tutorials, navigation/interactive guides around tourist hotspots, etc.; however, you are encouraged to be creative and put forward novel ideas. Open source software exists to handle the Computer Vision issues of tracking and object immersion (see ARToolKit). Your implementation would interface with this library using C and the OpenGL graphics library.
Indicative reading: web resources on augmented reality for ideas.
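
As a hedged illustration of the graphics side (the exact ARToolKit calls are deliberately not shown, and the 3x4 pose layout assumed here - rotation in the left 3x3 block, translation in the last column - is an assumption), superimposing a virtual object amounts to converting the tracked camera-to-marker transform into the column-major 4x4 matrix that legacy OpenGL expects, loading it as the modelview matrix, and then drawing the object.

#include <GL/gl.h>

/* Load a 3x4 rigid transform (from the marker tracker) into OpenGL. */
void load_marker_pose(const double pose[3][4])
{
    GLdouble m[16];
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 3; row++)
            m[col * 4 + row] = pose[row][col]; /* transpose into column-major */
    m[3] = m[7] = m[11] = 0.0;   /* bottom row of a rigid transform */
    m[15] = 1.0;
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixd(m);
    /* ...draw the virtual object here... */
}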
Scripted Behaviour Detection
Computer Vision is often used commercially to detect and react to particular behaviour patterns. Consider a brief example. A moving object enters the car park zone. It is large, and so the system knows to classify it as a car. It stops within an image zone known to contain a parking bay. A smaller moving object breaks away from the stationary large object - a person - and exits the car park without visiting the "pay and display machine" zone. The car park attendant is automatically paged, and issues a fine. This example demonstrates how simple image processing (identification of moving blobs) can be combined with a priori specification of rules and image zones to create a reactive vision system. Many such systems exist, but they are often custom-built for particular applications - i.e. the rules are hard-coded. The aim of this project is to develop a generalisation of such reactive systems that uses a simple scripting language of your design to specify rules and so describe the behaviour of the system.
Indicative reading: There are numerous papers on vision and behaviour detection, but David Hogg's research at Leeds is a good starting point. Introductory material re: image processing, e.g. "Feature Extraction and Image Processing" Nixon/Aguado. Copies in the library.
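
The sketch below is a minimal, hard-coded illustration of the kind of zone/rule test that your scripting language would generate at run time; all names, coordinates and thresholds are purely illustrative.

#include <stdio.h>

typedef struct { const char *name; int x0, y0, x1, y1; } Zone;  /* named image zone */
typedef struct { double cx, cy; int area; } Blob;               /* from blob detection */

/* Is the blob's centroid inside the zone's rectangle? */
int blob_in_zone(const Blob *b, const Zone *z)
{
    return b->cx >= z->x0 && b->cx <= z->x1 &&
           b->cy >= z->y0 && b->cy <= z->y1;
}

int main(void)
{
    Zone bay = { "parking_bay", 100, 200, 300, 400 };
    Blob car = { 150.0, 250.0, 5000 };

    /* Example rule: "a large object is inside the parking_bay zone";
       in the project this rule would be parsed from your script. */
    if (car.area > 2000 && blob_in_zone(&car, &bay))
        printf("event: large object in zone %s\n", bay.name);
    return 0;
}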