Title

Automatic 2D to 3D Object Reconstruction Using Neural Networks

Faculty Mentor(s)

Bryson Payne, Ph.D., Markus Hitz, Ph.D.

Campus

Dahlonega

Proposal Type

Poster

Subject Area

Computer Science

Location

Library Third Floor, Open Area

Start Date

April 2, 2014 11:00 AM

End Date

April 2, 2014 1:00 PM

Description/Abstract

The research presented herein is a methodology for reconstructing a 3D object from a single 2D image, using a back-propagation neural network to classify the depicted object as a member of one of four classes: rectangles/boxes, spheres, cylinders, and others. The process currently outputs a correctly textured 3D VRML, X3D, or WebGL file for two classes of objects: boxes and spheres. The approach applies a combination of edge detection and geometry to the 2D input image to locate the object's center of gravity and calculate a set of perimeter distances around that center. These calculated values are passed to a trained back-propagation neural network comprising 36 input nodes, 100 intermediate nodes, and 4 output nodes corresponding to the four object classes above. Once the object has been classified, it is deconstructed in 2D using a subset of the calculated perimeter points and reconstructed in 3D as a textured model for display.
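The feature-extraction and classification stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes edge detection has already produced a set of 2D edge points, derives the centroid and 36 radial perimeter distances (one per 10-degree sector), and runs them through a 36-100-4 feed-forward network. The weights here are random placeholders standing in for a trained back-propagation network, and all function names are hypothetical.

```python
import numpy as np

def perimeter_features(edge_points, n_angles=36):
    """Compute the centroid of the edge points and n_angles radial
    distances (outermost edge per angular sector), normalized to [0, 1]."""
    pts = np.asarray(edge_points, dtype=float)
    centroid = pts.mean(axis=0)                     # center of gravity
    rel = pts - centroid
    angles = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    dists = np.hypot(rel[:, 0], rel[:, 1])
    sector = (angles / (2 * np.pi) * n_angles).astype(int) % n_angles
    feats = np.zeros(n_angles)
    for s in range(n_angles):                       # max distance per sector
        mask = sector == s
        if mask.any():
            feats[s] = dists[mask].max()
    peak = feats.max()
    return centroid, feats / peak if peak > 0 else feats

# 36-100-4 network with placeholder (untrained) weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(36, 100)), np.zeros(100)
W2, b2 = rng.normal(size=(100, 4)), np.zeros(4)

def classify(feats):
    h = np.tanh(feats @ W1 + b1)                    # 100 hidden nodes
    out = np.exp(h @ W2 + b2)
    return out / out.sum()                          # probabilities over 4 classes

# Smoke test: a synthetic circular edge, i.e. a sphere's silhouette.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.c_[50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)]
centroid, feats = perimeter_features(circle)
probs = classify(feats)
```

For a circular silhouette the 36 normalized distances are all close to 1, while a box silhouette would oscillate between corner and mid-edge distances; that contrast is what gives the network a discriminative input.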
