Title

Voice and Motion-based Control System: Proof-of-Concept Implementation on Robotics via Internet-of-Things Technologies

Presenter Information

Mingyuan Yan

Academic Title

Assistant Professor

College

MCCB

Department

Computer Science and Information Systems

Primary Campus

Dahlonega

Keywords

IoT, Voice and motion based control system, Robotics

Abstract

The Internet of Things (IoT) permits the connection of many devices to a network, enabling data to be communicated without direct human-to-human or human-to-computer interaction. The core of an IoT system comprises hardware devices, networks, cloud services, and software applications: hardware devices take input from outside the system, networks serve as the communication medium between the hardware and the cloud server, cloud services store the input data, and software applications produce output using the cloud data. Because of this flexibility, IoT has become a popular model for modern device communication, with applications in a wide range of fields such as healthcare, manufacturing, home automation, and education. To enhance the usage of IoT for educators, this project implements an Arduino board alongside motion sensors and an audio receiver to control a robot car by means of a cloud server and IoT technologies. More specifically, the Arduino board will be trained using machine learning techniques to recognize a set of predefined gestures. The recognized gestures will be stored on the cloud server as commands, and the robot car will retrieve these commands from the server, reacting according to the gesture the system has recognized. Moreover, the system integrates the Google Voice API to control the robot car with pre-set voice commands. Furthermore, a web application is constructed to control the robot car from a computer. This project is a single yet powerful tool for future applications in smart classroom technologies.
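The command flow the abstract describes — a recognized gesture or voice command is published to a cloud store, and the robot car retrieves it and reacts — can be sketched as below. This is a minimal illustration only: the gesture names, command names, and in-memory `CloudStore` stand-in are assumptions for demonstration, not the project's actual cloud API or Arduino firmware.

```python
# Illustrative sketch of the publish/retrieve command pipeline.
# GESTURE_COMMANDS, CloudStore, and RobotCar are hypothetical names;
# the real system uses an Arduino board, a cloud service, and the
# Google Voice API in place of these stand-ins.

# Hypothetical mapping from recognized gestures to motion commands.
GESTURE_COMMANDS = {
    "swipe_left": "turn_left",
    "swipe_right": "turn_right",
    "push_forward": "move_forward",
    "pull_back": "move_backward",
}


class CloudStore:
    """In-memory stand-in for the cloud service that holds the latest command."""

    def __init__(self):
        self._latest = None

    def publish(self, command):
        # The recognizer (gesture or voice) writes the command here.
        self._latest = command

    def fetch(self):
        # The robot car reads the most recent command from here.
        return self._latest


class RobotCar:
    """Polls the cloud store and 'executes' each retrieved command."""

    def __init__(self, store):
        self.store = store
        self.executed = []

    def poll_and_act(self):
        command = self.store.fetch()
        if command is not None:
            self.executed.append(command)  # motor control would go here
        return command


def on_gesture(store, gesture):
    """Map a recognized gesture to a command and publish it to the cloud."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is not None:
        store.publish(command)
    return command
```

In this sketch the recognizer and the car never talk directly: both sides see only the cloud store, which mirrors the decoupling the abstract attributes to IoT systems (hardware produces input, the cloud holds it, software consumes it).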

Biography

Dr. Mingyuan Yan is an Assistant Professor of Computer Science at the University of North Georgia. She received her Ph.D. in 2015 from the Department of Computer Science at Georgia State University. She received her B.S. in Computer Science and Technology and M.S. in Information Security from Wuhan University, Wuhan, China, in 2008 and 2010, respectively. In 2012, she received another M.S. degree in Computer Science from Georgia State University. Her research interests include data management and protocol design in wireless networks, and influence maximization and information dissemination in mobile social networks. She is also interested in other topics such as information security and big data management. Dr. Yan is an IEEE member and an IEEE ComSoc member.

Proposal Type

Poster

Subject Area

English/Communications

Start Date

15-11-2019 12:00 PM

End Date

15-11-2019 2:30 PM

