Checkpoint Project

UR3e Vision-Guided Manipulation

The Construct Robotics Masterclass

Vision-guided manipulation capabilities for the UR3e robotic arm, progressing from hardcoded movements to advanced 3D perception-based control.

Project Overview

This checkpoint project focused on developing vision-guided manipulation capabilities for the UR3e robotic arm. The project evolved through two distinct phases, progressing from hardcoded joint movements to advanced 3D perception-based manipulation, demonstrating the power of computer vision in robotic control.

The system showcases the integration of point cloud processing, 3D object detection, and real-time motion planning to enable the robot to dynamically identify and manipulate objects in its environment.

My Role

I developed the manipulation and perception systems for both phases of the project, implementing the algorithms for object detection and motion planning.

Two-Phase Development Process

Phase 1: Hardcoded Joint Movements

Foundation - Predefined Manipulation Sequences

In the initial phase, the UR3e robotic arm executed manipulation tasks using predefined joint angles and movements. The arm was programmed with specific trajectories to:

  • Move to a known pickup location using hardcoded joint positions
  • Close the gripper to grasp a block at the predefined location
  • Execute a rotation sequence to reorient the object
  • Move to the drop-off location and release the object

This phase established fundamental manipulation skills and validated the arm's mechanical capabilities for pick-and-place operations.
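As a rough illustration, a hardcoded sequence like this can be driven through MoveIt!'s Python interface. The sketch below assumes planning group names ("manipulator", "gripper"), named gripper targets, and joint angles that are illustrative placeholders, not the project's actual values:

```python
#!/usr/bin/env python
# Sketch of a Phase 1 hardcoded pick-and-place sequence via MoveIt!'s Python API.
# Group names, named gripper targets, and joint angles are assumptions for illustration.
import sys
import rospy
import moveit_commander

rospy.init_node("hardcoded_pick_and_place")
moveit_commander.roscpp_initialize(sys.argv)

arm = moveit_commander.MoveGroupCommander("manipulator")   # assumed UR3e planning group
gripper = moveit_commander.MoveGroupCommander("gripper")    # assumed gripper planning group

# Predefined joint configurations (radians) for each step of the sequence.
PICKUP_JOINTS = [0.0, -1.57, 1.40, -1.40, -1.57, 0.0]
DROPOFF_JOINTS = [1.57, -1.40, 1.20, -1.40, -1.57, 0.0]

def move_to(joint_values):
    """Drive the arm to a fixed joint configuration and wait until it finishes."""
    arm.go(joint_values, wait=True)
    arm.stop()

def set_gripper(named_target):
    """Command a named gripper state ("open"/"close") assumed to be defined in the SRDF."""
    gripper.set_named_target(named_target)
    gripper.go(wait=True)

move_to(PICKUP_JOINTS)    # move to the known pickup location
set_gripper("close")      # grasp the block at the predefined location
move_to(DROPOFF_JOINTS)   # reorient and carry the object to the drop-off location
set_gripper("open")       # release the object
```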

Phase 2: Point Cloud-Based Perception

Advanced - Dynamic Object Detection & Adaptive Manipulation

The second phase introduced sophisticated 3D perception, enabling the robot to detect and manipulate objects dynamically based on their actual positions:

  • Real-time point cloud acquisition from depth cameras for 3D scene understanding
  • Object detection and pose estimation using Point Cloud Library (PCL) algorithms
  • Dynamic motion planning with MoveIt! based on detected object locations
  • Adaptive pick-and-place operations that respond to object position variations
  • Collision-free trajectory generation for safe manipulation in dynamic environments

This phase demonstrated true robotic intelligence, with the system adapting to object positions in real-time rather than relying on fixed coordinates.
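The pipeline can be sketched end to end in Python: read a depth-camera point cloud, estimate where the object is, and hand that pose to MoveIt! for planning. In this simplified sketch the topic name, planning group, workspace filter, and approach offset are assumptions, the centroid computation stands in for the full PCL segmentation and pose-estimation pipeline, and the cloud is assumed to already be expressed in the planning frame (in practice a tf transform would be applied first):

```python
#!/usr/bin/env python
# Simplified perception-to-motion sketch: read a PointCloud2, estimate the object
# centroid, and plan a top-down approach pose with MoveIt!. Names and thresholds
# are illustrative assumptions; the real project used PCL-based detection.
import sys
import rospy
import numpy as np
import moveit_commander
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2
from geometry_msgs.msg import Pose

rospy.init_node("pointcloud_pick")
moveit_commander.roscpp_initialize(sys.argv)
arm = moveit_commander.MoveGroupCommander("manipulator")  # assumed planning group

def detect_object(cloud_msg):
    """Return the centroid of points inside a simple workspace box filter, or None."""
    pts = np.array(list(pc2.read_points(cloud_msg,
                                        field_names=("x", "y", "z"),
                                        skip_nans=True)))
    # Keep only points above the table plane and inside the reachable workspace (assumed bounds).
    mask = (pts[:, 2] > 0.02) & (np.abs(pts[:, 0]) < 0.4) & (np.abs(pts[:, 1]) < 0.4)
    obj = pts[mask]
    return obj.mean(axis=0) if len(obj) else None

def pick_at(centroid):
    """Plan and execute an approach pose above the detected centroid."""
    target = Pose()
    target.position.x, target.position.y = centroid[0], centroid[1]
    target.position.z = centroid[2] + 0.10   # approach offset above the object
    target.orientation.w = 1.0               # placeholder top-down orientation
    arm.set_pose_target(target)
    arm.go(wait=True)
    arm.stop()
    arm.clear_pose_targets()

cloud = rospy.wait_for_message("/camera/depth/points", PointCloud2)  # assumed topic
centroid = detect_object(cloud)
if centroid is not None:
    pick_at(centroid)
```

Because the grasp pose is computed from the live point cloud rather than fixed in advance, the same code adapts when the block is moved, which is the key difference from Phase 1.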

Technologies Used

ROS (Robot Operating System), MoveIt!, Point Cloud Library (PCL), 3D Perception, Computer Vision (OpenCV), Python, C++, UR3e Robotic Arm, Depth Cameras, Motion Planning, Gazebo Simulation, RViz

Simulation Environment

[Demonstration video: UR3e vision-guided manipulation, showing point cloud processing and adaptive pick-and-place operations]

Impact & Learning Outcomes

  • 2 development phases
  • 3D perception integration
  • Dynamic object manipulation

This project provided comprehensive experience in vision-guided robotic manipulation. Key learning outcomes included 3D point cloud processing, object detection and pose estimation with PCL, dynamic motion planning with MoveIt!, and designing adaptive pick-and-place behaviors that respond to real-time perception rather than fixed coordinates.
