Recognition of human activities using depth images of Kinect for biofied building

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

These days, various functions are needed in living spaces because of the aging society, the promotion of energy conservation, and the diversification of lifestyles. To meet these requirements, we propose the "Biofied Building", a system that learns from living beings. As a key function of this system, various kinds of information are accumulated in a database using small sensor agent robots to control the living space. Among this information, human activities in particular can serve as triggers for lighting or air-conditioning control, making customized spaces possible. Human activities are divided into two groups: activities consisting of a single behavior and activities consisting of multiple behaviors. For example, "standing up" or "sitting down" consists of a single behavior; such activities are accompanied by large motions. On the other hand, "eating" consists of several behaviors: holding the chopsticks, picking up the food, putting it in the mouth, and so on. These are continuous motions. Considering the characteristics of these two types of human activities, we use two methods, R transformation and variance. In this paper, we focus on these two types of human activities and propose two human activity recognition methods for constructing the living-space database of the "Biofied Building". Finally, we compare the results of both methods.
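The two cues named in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the R transform here is the standard shape descriptor (the squared Radon projection summed over all offsets, computed for each angle), applied to a binary silhouette such as one segmented from a Kinect depth image, and the variance cue is one plausible reading (per-frame variance of foreground pixel coordinates). Function names and the toy silhouette are invented for the example.

```python
# Illustrative sketch (not the paper's code): two cues for activity
# recognition from binary silhouettes extracted from depth images.
import numpy as np
from scipy.ndimage import rotate

def r_transform(silhouette, angles=np.arange(0, 180)):
    """R transform: for each angle, the sum of the squared Radon
    projection over all offsets, giving a 1-D shape signature that is
    well suited to single-behavior activities with large motions."""
    sig = np.empty(len(angles))
    for i, theta in enumerate(angles):
        # Radon projection at angle theta: rotate, then sum columns.
        proj = rotate(silhouette.astype(float), theta,
                      reshape=False, order=1).sum(axis=0)
        sig[i] = np.sum(proj ** 2)
    return sig / sig.max()  # scale normalization

def motion_variance(frames):
    """Per-frame variance of foreground pixel coordinates; the time
    series of this value characterizes continuous, multi-behavior
    activities such as eating."""
    out = []
    for f in frames:
        ys, xs = np.nonzero(f)
        out.append(np.var(ys) + np.var(xs) if len(xs) else 0.0)
    return np.asarray(out)

# Toy binary silhouette: a vertical bar, standing in for a person.
standing = np.zeros((64, 64))
standing[10:54, 28:36] = 1
signature = r_transform(standing)          # 1-D signature over angles
var_series = motion_variance([standing, np.roll(standing, 5, axis=1)])
```

In practice the per-frame signatures (or the variance time series) would be fed to a classifier or threshold rule; the sketch stops at feature extraction, which is the part the abstract describes.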

Original language: English
Title of host publication: Proceedings of SPIE - The International Society for Optical Engineering
Publisher: SPIE
Volume: 9435
ISBN (Print): 9781628415384
DOI: 10.1117/12.2084079
Publication status: Published - 2015
Event: Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2015 - San Diego, United States
Duration: 2015 Mar 9 - 2015 Mar 12

Keywords

  • Biofied Building
  • Depth Image
  • Human Activity
  • Kinect
  • R Transformation
  • Variance

ASJC Scopus subject areas

  • Applied Mathematics
  • Computer Science Applications
  • Electrical and Electronic Engineering
  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics

Cite this

Ogawa, A., & Mita, A. (2015). Recognition of human activities using depth images of Kinect for biofied building. In Proceedings of SPIE - The International Society for Optical Engineering (Vol. 9435). [94351U] SPIE. https://doi.org/10.1117/12.2084079
