My Short CV | University of Lincoln | School of Computer Science | My official home page | Contact me syue@lincoln.ac.uk or + 44 (0) 1522 837397
Dr. Shigang Yue

Research | Teaching | Publications | Scholar | Contact | Opportunities | Group | Home

Research Interests

My research focuses on multidisciplinary approaches to solving real-world problems, drawing inspiration and methodologies from computer vision, artificial intelligence, biology, neuroscience and psychology. Specific applications of our research are in robotics, intelligent vehicles, security, and healthcare. I am extremely interested in how animals' visual systems work to interact with the outside world, and in how to simulate and realize such complex systems in software and hardware for various applications.

The following keywords illustrate my specific research interests and previous research experience:

  • Insect vision, biological systems modelling, LGMD/DCMD, locusts, behaviours, visual cue preference, directional selectivity;
  • Robotics, interaction, specific CMOS vision chips, FPGA, VLSI, attention, tracking;
  • Non-contact intent detection, emotion states recognition, stimuli, body languages, human robot interaction;
  • Vehicle collision detection, pedestrian collision prediction, pattern recognition;
  • Artificial intelligence, natural computation, neural networks, sensory information processing;
  • Image processing, computer vision, thermal image processing, fMRI/DTI image analysis;
  • Mobile robot navigation, collision avoidance, behaviours;
  • Flexible robotic arm, force/torque sensors, manipulation, DLO, numerical dynamic simulation, finite element methods;
  • Artificial life, co-evolution, multiple neural sub-systems, co-exist, coordination, integration, redundancy.


Projects

My research has been supported by EU FP6/FP7, the Alexander von Humboldt Foundation (AvH), the Home Office, NTT, UoL and other public funding bodies. Their generous support is very much appreciated. Some of the projects are listed below.

HAZCEPT: Towards zero road accidents - nature inspired hazard perception. The number of road traffic accident fatalities worldwide has recently reached 1.3 million each year, with between 20 and 50 million injuries caused by road accidents. In theory, all accidents can be avoided. Studies have shown that more than 90% of road accidents are caused by or related to human error. Developing an efficient system that can detect hazardous situations robustly is the key to reducing road accidents. The HAZCEPT consortium will focus on automatic hazard scene recognition for safe driving. HAZCEPT will address hazard recognition from three aspects - the lower visual level, the cognitive level, and drivers' factors in the safe driving loop.
LIVCODE: Life-like information processing for robust collision detection (EU FP7, coordinator). Animals are especially good at collision avoidance, even in a dense swarm. In the future, every kind of man-made moving machine - ground vehicles, robots, UAVs, aeroplanes, boats, even moving toys - should have the same ability to avoid collisions, if a robust collision detection sensor is available. The six partners of this EU FP7 project, from the UK, Germany, Japan and China, will look further into insect visual pathways and take inspiration from animal vision systems to explore robust embedded solutions for vision-based collision detection in future intelligent machines.
EYE2E: Building visual brains for fast human machine interaction (EU FP7, coordinator). In the real world, many animals possess almost perfect sensory systems for fast and efficient interaction within dynamic environments. Vision, as an evolved organ, plays a significant role in the survival of many animal species. The mechanisms in biological visual pathways provide good models for developing artificial vision systems. The four partners of this consortium will work together to explore biological visual systems at both lower and higher levels through modelling, simulation, integration and realization in chips, to investigate fast image processing methodologies for human machine interaction through VLSI chip design and robotic experiments.
Hand-sized mini UAVs are specially designed for studying swarm intelligence. They may also be used in other application areas, for example as platforms for flying-robot coordination research, collision avoidance research, surveillance, human robot interaction, and even rescue. Further details to be available soon.
DiPP: detecting hostile intent by measuring psychological and physiological reactions to stimuli (UK, HO, PI). This is a fascinating project that tackles a huge challenge with innovative ideas. The feasibility study has proved the concept and methodologies. We are taking steps to push this idea further forward. Investors and/or potential collaborators are welcome to contact me about involvement in further development for industrial applications.
Pedestrian: pedestrian collision detection with bio-inspired neural networks (Fellowship). In this project, only those pedestrians that are on a collision course, or are walking into a collision course, with a moving vehicle are monitored and processed with hierarchical neural networks. Several types of bio-inspired neural networks are combined to achieve better performance in pedestrian collision detection.
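Deciding which pedestrians are "on a collision course" before any neural processing can be done with a standard constant-velocity closest-approach test. The sketch below is a generic illustration, not the project's actual filter; the function name and the `safe_radius`/`horizon` parameters are invented for this example:

```python
import math

def on_collision_course(rel_pos, rel_vel, safe_radius=1.5, horizon=5.0):
    """Constant-velocity collision-course test.

    rel_pos: pedestrian position relative to the vehicle (metres)
    rel_vel: pedestrian velocity relative to the vehicle (m/s)
    Returns True if the predicted closest approach comes within
    safe_radius metres during the next `horizon` seconds.
    """
    px, py = rel_pos
    vx, vy = rel_vel
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        # No relative motion: hazardous only if already too close
        return math.hypot(px, py) <= safe_radius
    # Time of closest approach for straight-line relative motion,
    # clamped to the prediction window [0, horizon]
    t = -(px * vx + py * vy) / v2
    t = min(max(t, 0.0), horizon)
    cx, cy = px + vx * t, py + vy * t
    return math.hypot(cx, cy) <= safe_radius
```

For example, a pedestrian 10 m ahead and 2 m to the side, closing at (-5, -1) m/s relative to the vehicle, reaches the vehicle's position after 2 s and would be flagged; the same pedestrian moving away would not.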
Evolving bio-plausible neural systems to control aerial agents (UoL). This project is looking for ways to evolve vision-based neural controllers for autonomous agents. These agents are initially evolved in 3D virtual environments, and the best-performing agent will be tested with its physical counterpart flying or running in the real physical world. The platform developed and used in this study can be accessed via Mark Smith's page altURI if you have further interest.
Vehicle or robot collision detection inspired by the locust visual pathway (LOCUST, EU). Life-like image processing inspired by the locust visual pathway is used for vehicle collision detection.
With a pair of LGMDs, a robot can navigate easily in a dynamic environment; click [video] on YouTube to see how it changes its course to avoid the rolling can. The short movie presented at AISB'07 is on YouTube [movie] (90s); click to see how a robotic 'locust' with panoramic vision responds reasonably to approaching objects.
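The LGMD responds preferentially to looming (expanding) objects. As an illustration only - not the exact model used in these projects - one processing step of an LGMD-style detector can be sketched as an excitation layer (frame difference), a lateral inhibition layer (locally averaged excitation), and a thresholded summation; the function name and parameter values here are invented for the sketch:

```python
import numpy as np

def lgmd_response(prev_frame, curr_frame, threshold=0.05):
    """One step of a simplified LGMD-style collision detector.

    Excitation is the absolute luminance change between frames;
    lateral inhibition (a spatially averaged copy) suppresses
    uniform motion, so expanding edges dominate the output.
    """
    # Photoreceptor layer: frame difference captures luminance change
    excitation = np.abs(curr_frame.astype(float) - prev_frame.astype(float))

    # Inhibition layer: spread excitation over 3x3 neighbourhoods
    padded = np.pad(excitation, 1, mode="edge")
    h, w = excitation.shape
    inhibition = np.zeros_like(excitation)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            inhibition += padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    inhibition /= 9.0

    # Summation layer: excitation minus weighted inhibition, rectified
    s = np.maximum(excitation - 0.6 * inhibition, 0.0)

    # Membrane potential: normalised sum; spike if above threshold
    potential = s.sum() / s.size
    return potential, potential > threshold
```

An object that grows between frames leaves a ring of excitation whose centre is only partially inhibited, so the summed potential rises; a static scene produces no excitation at all.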
Robotic Manipulation Skills: intelligent robotic systems can be constructed hierarchically, starting from primary skills. Each of these skills can deal with similar tasks in similar situations. Taking the manipulation of soft/flexible objects as an example - to insert an elastic object into a hole efficiently, the robotic arm can either damp the vibration very quickly, as shown in the video clips [damp1] with a PID controller and [damp2] with a fuzzy controller, or perform the insertion directly if the status of the deformable object is known with the help of different sensors and models, as shown in the two video clips [FastInsertion1] and [FastInsertion2].
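The PID damping shown in the [damp1] clip can be illustrated with a textbook discrete PID loop driving a simulated one-dimensional deflection to zero. The gains and plant parameters below are invented for the sketch and are not taken from the real arm:

```python
class PID:
    """Discrete PID controller (illustrative; gains are placeholders)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Damp a vibrating mass-spring deflection toward zero
# (hypothetical plant: mass 1 kg, stiffness 200 N/m, damping 1 Ns/m)
pid = PID(kp=40.0, ki=2.0, kd=12.0, dt=0.01)
x, v = 0.05, 0.0                    # initial deflection (m) and velocity
for _ in range(2000):               # 20 s of simulated time
    force = pid.step(0.0, x)        # drive deflection to zero
    a = (force - 200.0 * x - 1.0 * v) / 1.0
    v += a * 0.01                   # semi-implicit Euler integration
    x += v * 0.01
```

After the loop the deflection has decayed to essentially zero; the derivative term supplies the extra damping that the lightly damped plant lacks.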



Last update 25/06/2013 | University of Lincoln | School of Computer Science | My official home page | Contact me syue@lincoln.ac.uk or + 44 (0) 1522 837397