Project Overview
This checkpoint project focused on developing vision-guided manipulation capabilities for the UR3e robotic arm. The project evolved through two distinct phases, progressing from hardcoded joint movements to advanced 3D perception-based manipulation, demonstrating the power of computer vision in robotic control.
The system showcases the integration of point cloud processing, 3D object detection, and real-time motion planning to enable the robot to dynamically identify and manipulate objects in its environment.
My Role
I developed the manipulation and perception systems for both phases of the project, implementing algorithms for object detection and motion planning:
- Programming hardcoded joint trajectories for pick-and-place operations
- Implementing point cloud processing for 3D object detection
- Integrating MoveIt! for real-time motion planning
- Developing perception pipelines using Point Cloud Library (PCL)
- Testing and refining manipulation strategies for reliability
Two-Phase Development Process
Phase 1: Hardcoded Joint Movements
Foundation - Predefined Manipulation Sequences
In the initial phase, the UR3e robotic arm executed manipulation tasks using predefined joint angles and movements. The arm was programmed with specific trajectories to:
- Move to a known pickup location using hardcoded joint positions
- Close the gripper to grasp a block at the predefined location
- Execute a rotation sequence to reorient the object
- Move to the drop-off location and release the object
This phase established fundamental manipulation skills and validated the arm's mechanical capabilities for pick-and-place operations.
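The sketch below illustrates this control style. It is a minimal example, assuming a standard ROS/MoveIt! setup for the UR3e; the planning-group name ("manipulator") and the joint values are illustrative, and the gripper commands are left as comments since the gripper driver is hardware-specific.

```cpp
#include <ros/ros.h>
#include <moveit/move_group_interface/move_group_interface.h>
#include <vector>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "hardcoded_pick_place");
  ros::AsyncSpinner spinner(1);  // MoveGroupInterface needs a running spinner
  spinner.start();

  moveit::planning_interface::MoveGroupInterface arm("manipulator");

  // Hardcoded joint angles (radians) for the known pickup pose -- illustrative values
  std::vector<double> pickup_joints = {0.0, -1.57, 1.57, -1.57, -1.57, 0.0};
  arm.setJointValueTarget(pickup_joints);
  arm.move();  // plan and execute to the pickup configuration

  // ... close the gripper to grasp the block (hardware-specific driver call) ...

  // Hardcoded joint angles for the drop-off pose, including the reorientation
  std::vector<double> dropoff_joints = {1.2, -1.2, 1.0, -1.4, -1.57, 0.5};
  arm.setJointValueTarget(dropoff_joints);
  arm.move();

  // ... open the gripper to release the block ...

  ros::shutdown();
  return 0;
}
```

The limitation of this approach is clear: any change in the block's position requires re-recording the joint angles, which is what motivated Phase 2.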
Phase 2: Point Cloud-Based Perception
Advanced - Dynamic Object Detection & Adaptive Manipulation
The second phase introduced sophisticated 3D perception, enabling the robot to detect and manipulate objects dynamically based on their actual positions:
- Real-time point cloud acquisition from depth cameras for 3D scene understanding
- Object detection and pose estimation using Point Cloud Library (PCL) algorithms
- Dynamic motion planning with MoveIt! based on detected object locations
- Adaptive pick-and-place operations that respond to object position variations
- Collision-free trajectory generation for safe manipulation in dynamic environments
This phase demonstrated genuinely adaptive behavior, with the system responding to object positions in real time rather than relying on fixed coordinates.
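A condensed sketch of such a perception-to-motion pipeline is shown below, assuming a depth camera publishing sensor_msgs/PointCloud2 and MoveIt! configured for the arm. The topic name, frame handling, thresholds, and pre-grasp offset are illustrative, and a complete grasp would add orientation estimation and gripper control.

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <geometry_msgs/PoseStamped.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/search/kdtree.h>
#include <pcl/common/centroid.h>
#include <moveit/move_group_interface/move_group_interface.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::fromROSMsg(*msg, *cloud);

  // 1. Remove the dominant plane (the table surface) with RANSAC
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);  // 1 cm tolerance, illustrative
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
  seg.setInputCloud(cloud);
  seg.segment(*inliers, *coeffs);

  pcl::PointCloud<pcl::PointXYZ>::Ptr objects(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.setNegative(true);  // keep everything that is NOT the table
  extract.filter(*objects);

  // 2. Cluster the remaining points into candidate objects
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(objects);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.02);
  ec.setMinClusterSize(50);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(clusters);
  if (clusters.empty()) return;

  // 3. Use the first cluster's centroid as the grasp position
  Eigen::Vector4f centroid;
  pcl::compute3DCentroid(*objects, clusters[0], centroid);

  // 4. Plan a collision-free motion to a pre-grasp pose above the object
  //    (constructed once on the first callback)
  static moveit::planning_interface::MoveGroupInterface arm("manipulator");
  geometry_msgs::PoseStamped target;
  target.header.frame_id = msg->header.frame_id;  // assumes the camera frame is in TF
  target.pose.position.x = centroid[0];
  target.pose.position.y = centroid[1];
  target.pose.position.z = centroid[2] + 0.10;  // 10 cm pre-grasp offset, illustrative
  target.pose.orientation.w = 1.0;
  arm.setPoseTarget(target);
  moveit::planning_interface::MoveGroupInterface::Plan plan;
  if (arm.plan(plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS)
    arm.move();
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_pick");
  ros::NodeHandle nh;
  ros::AsyncSpinner spinner(2);
  spinner.start();
  ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
  ros::waitForShutdown();
  return 0;
}
```

Separating table removal (the RANSAC plane fit) from object clustering keeps each stage simple, and handing MoveIt! only the detected centroid lets the planner take care of collision-free trajectory generation.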
Technologies Used
The system was built on ROS, combining MoveIt! for motion planning, the Point Cloud Library (PCL) for 3D perception, and a depth camera with the UR3e arm, with development and testing carried out in a simulation environment.
[Video: UR3e Vision-Guided Manipulation Demonstration, showing point cloud processing and adaptive pick-and-place operations]
Impact & Learning Outcomes
This project provided comprehensive experience in vision-guided robotic manipulation. Key learning outcomes included:
- Understanding the progression from hardcoded to perception-based control
- Mastering 3D point cloud processing and object detection techniques
- Implementing real-time motion planning with MoveIt! framework
- Integrating multiple ROS components for complex manipulation tasks