To address current limitations, a large body of research in the past few years has focused on methods that can automate the workface assessment process. These methods range from the application of sensors such as Ultra Wide Band (UWB) systems [11-13], Radio Frequency Identification (RFID) tags [14], and Global Positioning Systems (GPS) [15,16] to computer vision methods using video cameras [17-19,71]. Several existing methods that build on the non-visual sensors mainly track the locations of the workers. Without interpreting the activities of the workers, and purely based on location information (the basis of the majority of state-of-the-art methods), deriving workface assessment data is challenging. For example, for interior drywall activities, distinguishing between idle time, picking up a gypsum board, and measuring and cutting purely from location data is very difficult, because during these activities the location of the worker does not necessarily change. Because these solutions do not yet perform accurately and do not produce detailed feedback, the construction industry is still reluctant to adopt them.
To address the limitations of location-based activity recognition, Joshua and Varghese [20-23] proposed an accelerometer-based method capable of recognizing various activities based on movement of the body skeleton. Their method was tested for bricklaying operations at the task-level resolution, and promising results were reported. Using prior knowledge about activity locations on the jobsite, Cheng et al. [7] proposed an activity analysis method based on both the location and the body posture of the workers, integrating UWB for location tracking with commercially available Physiological Status Monitors (PSM) equipped with a wearable 3-axial thoracic accelerometer to derive body posture data. This method uses a single body posture together with location to model and infer each activity. Still, distinguishing between two activities that share the same location and body pose, for example idle time and measuring the dimensions of a gypsum board, would be challenging.
Our method differs from prior research in that we use inexpensive RGB-D sensors (<$150) that preserve confidentiality during data collection and can detect and track the body skeletons of up to six workers simultaneously and in real time. Confidentiality here means that the identities of the workers remain unknown, as we only track their body skeletons. To generalize the applicability of our method, we do not assume any prior knowledge about expected activities at certain locations on the jobsite. Also, rather than directly interpreting location and a single body posture to derive activities as in [7], we propose histograms of body posture from RGB-D sequences to capture the tabulated frequencies of a large number of key body postures for construction activities, and we use learning methods to train and infer these activities in a principled way. Our method includes: 1) an algorithm for detecting, tracking, and extracting body skeleton features
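The histogram-of-postures idea can be sketched in a few lines. This is a minimal illustration with synthetic data, not the paper's actual pipeline: the per-frame skeleton features, the fixed three-posture codebook (in practice a codebook of key postures would be learned, e.g. by clustering), and the nearest-class-mean classifier standing in for "learning methods" are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-frame skeleton features from an RGB-D
# sensor (e.g., a vector of joint angles per frame).
def make_sequence(center, n_frames=50, dim=8):
    return center + 0.1 * rng.standard_normal((n_frames, dim))

# Two hypothetical activity classes (e.g., "measuring" vs "cutting"),
# each observed as several sequences.
seqs = [make_sequence(np.zeros(8)) for _ in range(10)] + \
       [make_sequence(np.ones(8)) for _ in range(10)]
labels = np.array([0] * 10 + [1] * 10)

# Codebook of "key body postures" (assumed fixed here; normally learned).
codebook = np.array([np.full(8, v) for v in (0.0, 0.5, 1.0)])

def posture_histogram(seq):
    # Assign each frame to its nearest key posture, then tabulate the
    # frequencies into a normalized histogram describing the sequence.
    d = np.linalg.norm(seq[:, None, :] - codebook[None, :, :], axis=2)
    codes = d.argmin(axis=1)
    hist = np.bincount(codes, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

X = np.array([posture_histogram(s) for s in seqs])

# Nearest-class-mean classifier over the histograms; any standard
# supervised learner (SVM, etc.) could take its place.
means = np.array([X[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2).argmin(axis=1)
print("training accuracy:", (pred == labels).mean())
```

The point of the representation is visible in `posture_histogram`: two activities performed at the same location, or even sharing one dominant pose, can still yield different frequency profiles over the full set of key postures, which is what the downstream classifier discriminates on.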