TY - JOUR
T1 - Autonomous Terrain Classification with Co- and Self-Training Approach
AU - Otsu, Kyohei
AU - Ono, Masahiro
AU - Fuchs, Thomas J.
AU - Baldwin, Ian
AU - Kubota, Takashi
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/7
Y1 - 2016/7
AB - Identifying terrain type is crucial to safely operating planetary exploration rovers. Vision-based terrain classifiers, which are typically trained on thousands of labeled images using machine learning methods, have proven to be particularly successful. However, since planetary rovers are to boldly go where no one has gone before, training data are usually not available a priori; instead, rovers have to quickly learn from their own experiences in an early phase of surface operation. This research addresses the challenge by combining two key ideas. The first idea is to use both onboard imagery and vibration data, and let rovers learn from physical experiences through self-supervised learning. The underlying fact is that visually similar terrain may be disambiguated by mechanical vibrations. The second idea is to employ the co- and self-training approaches. The idea of co-training is to train two classifiers separately for vision and vibration data, and to re-train them iteratively on each other's output. Meanwhile, the self-training approach, applied only to the vision-based classifier, re-trains the classifier on its own output. Both approaches essentially increase the number of labels, and hence enable the terrain classifiers to operate from a sparse training dataset. The proposed approach was validated with a four-wheeled test rover in Mars-analogous terrain, including bedrock, soil, and sand. The co-training setup, based on support vector machines with color and wavelet-based features, successfully estimated terrain types with 82% accuracy from only three labeled images.
KW - Semantic Scene Understanding
KW - Space Robotics
KW - Visual Learning
UR - http://www.scopus.com/inward/record.url?scp=85063312291&partnerID=8YFLogxK
DO - 10.1109/LRA.2016.2525040
M3 - Article
AN - SCOPUS:85063312291
SN - 2377-3766
VL - 1
SP - 814
EP - 819
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
M1 - 7397920
ER -
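
As a quick illustration of the two-view co-training scheme the abstract describes, the following is a minimal sketch assuming scikit-learn's SVC and synthetic stand-in features. The cluster data, the decision-margin confidence rule, and the 0.5 threshold are hypothetical choices for illustration, not the authors' implementation.

import numpy as np
from sklearn.svm import SVC

# Illustrative sketch only: random cluster data stands in for the paper's
# color/wavelet image features ("vision" view) and vibration spectra
# ("vibration" view); classes 0/1/2 stand for bedrock, soil, and sand.
rng = np.random.default_rng(0)
classes, d_vis, d_vib = 3, 16, 8
mu_vis = 3 * rng.normal(size=(classes, d_vis))
mu_vib = 3 * rng.normal(size=(classes, d_vib))

def sample(y):
    n = len(y)
    return (mu_vis[y] + rng.normal(size=(n, d_vis)),
            mu_vib[y] + rng.normal(size=(n, d_vib)))

y_l = np.repeat(np.arange(classes), 3)          # sparse labeled set
X_vis_l, X_vib_l = sample(y_l)
X_vis_u, X_vib_u = sample(rng.integers(0, classes, size=200))  # unlabeled pool

def confident(clf, X, margin=0.5):
    # Confidence = gap between the top two one-vs-rest decision values
    # (a hypothetical stand-in for whatever confidence measure is used).
    d = clf.decision_function(X)
    top2 = np.sort(d, axis=1)[:, -2:]
    return d.argmax(axis=1), (top2[:, 1] - top2[:, 0]) > margin

for _ in range(5):                              # co-training rounds
    if X_vis_u.shape[0] == 0:
        break
    vis = SVC(decision_function_shape="ovr").fit(X_vis_l, y_l)
    vib = SVC(decision_function_shape="ovr").fit(X_vib_l, y_l)
    yv, cv = confident(vis, X_vis_u)
    yb, cb = confident(vib, X_vib_u)
    keep = cv | cb                              # either view is confident
    if not keep.any():
        break
    # Confident pseudo-labels from one view become training labels for both,
    # which is what grows the labeled set beyond the initial few samples.
    y_new = np.where(cv, yv, yb)[keep]
    X_vis_l = np.vstack([X_vis_l, X_vis_u[keep]])
    X_vib_l = np.vstack([X_vib_l, X_vib_u[keep]])
    y_l = np.concatenate([y_l, y_new])
    X_vis_u, X_vib_u = X_vis_u[~keep], X_vib_u[~keep]

Letting each view's confident predictions retrain the other view is the core of co-training, and it mirrors the abstract's point that the approach increases the number of labels and so can start from only a few labeled images.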