Olin is working with Community Boating of Boston to develop the RoboSail competition and class instructional materials. The RoboSail Regatta challenges high school students to design and build a robotic model sailboat for competition in a regional regatta. The intent is to foster knowledge, skills, and interest in engineering and science in a fun, engaging way. The basic concept is to create a boat, up to 1 meter long, that can sail autonomously. The boat takes in information about wind, location, and bearing, makes decisions about sail trim and course, and then generates control signals for on-board servo motors controlling the rudder and sail. Although some events require the boats to be fully autonomous, others permit the rudder and/or sails to be controlled remotely from off the vessel. This allows teams to gradually increase the complexity of their boat, perhaps starting with only an autonomous sail-trim system. Race events are based on traditional sailing races and tasks, such as sailing a marked course or maintaining a lookout station. Teams will need to learn and apply the science of sailing, the strategy of navigation, and the technology involved in controlling a robotic device.
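The sense-decide-actuate loop described above can be sketched in a few lines. The rules, gains, and function names below are illustrative assumptions for a starter autonomous sail-trim and steering system, not part of the competition materials.

```python
# Minimal sketch of an autonomous sail-trim rule and a proportional rudder
# controller (all names and gains are hypothetical illustrations).

def trim_sail(apparent_wind_deg: float) -> float:
    """Return a sheet position in [0, 1] (0 = fully in, 1 = fully out).

    A common rule of thumb: sheet in tight when close-hauled (~45 degrees
    off the wind) and ease out linearly toward fully out on a run (~180).
    """
    angle = abs(apparent_wind_deg) % 360
    if angle > 180:
        angle = 360 - angle            # fold wind angle into [0, 180]
    if angle <= 45:                    # close-hauled or in irons: sheet in
        return 0.0
    return (angle - 45) / (180 - 45)   # ease linearly toward a run

def rudder_toward(heading_deg: float, target_deg: float,
                  gain: float = 0.02) -> float:
    """Proportional rudder command in [-1, 1] steering heading to target."""
    error = (target_deg - heading_deg + 180) % 360 - 180  # shortest turn
    return max(-1.0, min(1.0, gain * error))
```

A boat could call these each control cycle with sensor readings and map the returned fractions onto servo pulse widths.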


Autonomous Docking and Recharging

Olin is actively developing the capability to operate autonomous air, ground, and water vehicles for extended periods without human intervention. A key part of this is the ability to dock and recharge or refuel autonomously. The student team has created a vision and control system that allows a hovering vehicle to locate and dock with a charging station. The application was developed in ROS, which allows it to be ported to any of several existing robots.
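The core of such a vision-based docking behavior can be sketched as a simple visual-servoing rule: turn to center the detected charging-station marker in the image, and drive forward until within docking range. The structure, gains, and names below are illustrative assumptions, not the team's actual implementation; in ROS this logic would typically live inside a node publishing velocity commands.

```python
# Hedged sketch of a visual-servoing docking controller. Given the pixel
# position of a detected charging-station marker and an estimated range,
# produce proportional velocity commands (all gains are illustrative).

def docking_command(marker_px, image_center_px, distance_m,
                    k_yaw=0.005, k_fwd=0.4, dock_distance_m=0.3):
    """Return (forward_velocity, yaw_rate, docked) from one observation."""
    x_err = marker_px[0] - image_center_px[0]   # lateral pixel error
    yaw_rate = -k_yaw * x_err                   # turn to center the marker
    if distance_m <= dock_distance_m:
        return 0.0, 0.0, True                   # close enough: stop and dock
    forward = k_fwd * (distance_m - dock_distance_m)
    return forward, yaw_rate, False
```

Running this each frame drives the vehicle onto the marker; a real system would add filtering of the marker detections and a recovery behavior for when the marker is lost.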

Artistic Robot Arm

Olin students used a robotic arm to draw an image of a human subject with a standard Sharpie™ pen. To begin, the robot captures an image and applies filters to decompose it into multiple regions of interest based on facial features. These features are then reduced to simpler geometries and contours. Next, the system converts the filtered image into a series of paths that the robot arm can follow to create the drawing. These paths are then converted into actuation commands, which are sent to the arm. The final image is a caricature of the human subject.
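The last stage of the pipeline, turning extracted contours into arm commands, can be sketched as follows. The command vocabulary and function name here are hypothetical, chosen only to illustrate the paths-to-actuation step.

```python
# Illustrative sketch: flatten contours (lists of (x, y) points) into a
# pen-up/pen-down command sequence an arm controller could execute.

def contours_to_commands(contours):
    """Convert contours into a drawing command list for the arm."""
    commands = []
    for contour in contours:
        if not contour:
            continue
        commands.append(("pen_up", None))
        commands.append(("move_to", contour[0]))  # travel to contour start
        commands.append(("pen_down", None))
        for point in contour[1:]:
            commands.append(("move_to", point))   # draw along the contour
    commands.append(("pen_up", None))             # finish with pen raised
    return commands
```

Each `move_to` target would then be mapped through the arm's inverse kinematics to joint angles.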


Human-Robot Collaborative Manufacturing

Robots are already commonplace for tasks that are dirty, dull, and dangerous. However, current technology limits tasks to being performed by either humans or robots in isolation. In the near future, tedious tasks will no longer be done solely by humans or by robots, but by human-robot teams. In today's factories, significant resources have been invested in complex, strong, and agile robotic arms; however, because these arms lack spatial awareness and are difficult to program, they are not easily adapted to new tasks. In addition, humans are barred from the work envelope while the machines are running, so robots are generally left to tackle the dangerous tasks on the factory floor with poor or no sensing capability. To revolutionize production at the lowest possible cost to the factory, these arms must be retrofitted with an inexpensive, adaptable sensing and control system.

We have developed a low-cost control system, adaptable to any arm, based on a commonly available 3D sensor such as the Microsoft Kinect (a consumer-grade RGB-D camera available for $100, with a depth resolution of approximately 2 mm at 1 m and 2.5 cm at 3 m). Such a sensor would allow an automated industrial manipulator arm not only to grasp and articulate moving objects alongside human beings in any environment, but also to be rapidly reprogrammed for new spatially oriented tasks, regardless of lighting conditions. Because the depth camera collects diffracted infrared beam data and correlates it to each pixel in a color image, it can be used in any indoor environment without special equipment. Using simple image-processing techniques on the depth images, we have created a working proof-of-concept prototype that can recognize uniquely shaped objects moving on a 48-by-14-inch conveyor belt.
The system demonstrated its ability to recognize, grasp, and manipulate pieces to play a game of Tetris, optimizing the position and orientation of each piece as it was detected. We are confident this technology can be applied elsewhere, such as sorting waste products, organizing nuts and bolts, and any other task involving sorting unique objects, even while they are moving.
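A simple version of the depth-based object recognition step can be sketched as follows: pixels significantly closer to the camera than the known belt surface are segmented and labeled as distinct objects. This is a proof-of-concept-style illustration under assumed names and thresholds, not the prototype's actual code.

```python
# Sketch: segment objects on a conveyor from a depth image by thresholding
# against the known belt depth, then flood-fill to label each connected
# component so every distinct piece can be tracked and grasped.

def segment_objects(depth, belt_depth, tolerance=0.01):
    """Label pixels closer to the camera than the belt as objects.

    depth: 2D list of depths in meters. Returns (labels, count), where
    labels is a 2D map with 0 for belt/background and 1..count indexing
    distinct 4-connected objects.
    """
    rows, cols = len(depth), len(depth[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if depth[r][c] < belt_depth - tolerance and labels[r][c] == 0:
                count += 1                       # new object found
                stack = [(r, c)]
                while stack:                     # flood-fill its pixels
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and labels[y][x] == 0
                            and depth[y][x] < belt_depth - tolerance):
                        labels[y][x] = count
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return labels, count
```

From each labeled region, properties such as centroid and orientation could then be computed to plan the grasp; a production system would use an optimized connected-components routine rather than this pure-Python fill.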