Semantic perception

Semantic perception is a computational framework, inspired by cognitive models of human perception, that uses semantic technologies to derive actionable intelligence and situational awareness from low-level sensor data. The framework formalizes this ability by combining prior knowledge encoded in domain ontologies with hybrid abductive/deductive reasoning to translate low-level observations into high-level abstractions, which are more useful to applications and end users of the sensor data.
Today, many sensors collect information about our environment, producing an overwhelming number of observations that must be analyzed and explained in order to achieve situational awareness. As perceptual beings, we are likewise constantly inundated with sensory data, yet we make sense of our environment with relative ease.
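The translation from observations to abstractions can be viewed as a simple abductive inference over background knowledge: find a small set of abstractions that, according to the ontology, accounts for everything observed. The sketch below is only illustrative, with a made-up toy knowledge base and hypothetical function names; it is not the framework's actual implementation.
```python
# Illustrative sketch (not the authors' implementation) of the abductive
# "explanation" step in semantic perception: given low-level observations and
# background knowledge linking high-level abstractions to the observable
# properties they manifest, abduce a small set of abstractions that accounts
# for everything observed. The knowledge base below is a made-up toy example.

# Toy background knowledge: abstraction -> observable properties it explains.
KNOWLEDGE_BASE = {
    "flu":     {"fever", "cough", "fatigue"},
    "cold":    {"cough", "congestion"},
    "allergy": {"congestion", "sneezing"},
}

def explain(observations, knowledge_base):
    """Greedily abduce abstractions that together cover all observations.

    Returns a set of abstraction names, or None if some observation cannot
    be explained by the background knowledge at all.
    """
    uncovered = set(observations)
    explanation = set()
    while uncovered:
        # Pick the abstraction that explains the most still-uncovered observations.
        best = max(knowledge_base, key=lambda a: len(knowledge_base[a] & uncovered))
        gained = knowledge_base[best] & uncovered
        if not gained:
            return None  # no abstraction accounts for the remaining observations
        explanation.add(best)
        uncovered -= gained
    return explanation

if __name__ == "__main__":
    observed = {"fever", "cough", "congestion"}
    print(explain(observed, KNOWLEDGE_BASE))  # e.g. {'flu', 'cold'}
```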
Further reading
* Payam Barnaghi, Frieder Ganz, Cory Henson, and Amit Sheth. Computing Perception from Sensor Data. In Proceedings of the 2012 IEEE Sensors Conference, Taipei, Taiwan, October 28-31, 2012.
* Cory Henson, Krishnaprasad Thirunarayan, Amit Sheth. An Ontological Approach to Focusing Attention and Enhancing Machine Perception on the Web. Applied Ontology, vol. 6(4), pp. 345-376, 2011.
* Cory Henson, Amit Sheth, and Krishnaprasad Thirunarayan. Semantic Perception: Converting Sensory Observations to Abstractions. IEEE Internet Computing, vol. 16, no. 2, pp. 26-34, Mar./Apr. 2012, doi:10.1109/MIC.2012.20
* Cory Henson, Krishnaprasad Thirunarayan, and Amit Sheth. An Efficient Bit Vector Approach to Semantics-based Machine Perception in Resource-Constrained Devices. In: Proceedings of 11th International Semantic Web Conference (ISWC 2012), Boston, Massachusetts, USA, November 11-15, 2012.
 