A novel object slicing based grasp planner for 3D object grasping using underactuated robot gripper
Robotic grasping of arbitrary objects remains a challenging problem even in completely known environments. Most previously developed algorithms focus on fingertip grasps and fail, even for fully actuated hands/grippers, during adaptive/wrapping grasps in which each finger contacts the object at several points. Closed-form kinematic solutions are not possible for an articulated finger that must simultaneously reach several given goal points. This paper presents a framework for computing the best grasp for an underactuated robotic gripper, based on a novel object slicing method. The proposed method quickly finds contacts using an object slicing technique and uses a grasp quality measure to select the best grasp from a pool of candidates. To validate the proposed method, it has been implemented on twenty-four household objects and toys using a two-finger underactuated robot gripper. Unlike many existing approaches, the proposed approach has several advantages: it can handle objects with complex shapes and sizes; it does not require simplifying objects into primitive geometric shapes; it can be applied directly to point clouds captured with a depth sensor; and it takes gripper kinematic constraints into account, generating feasible grasps of both adaptive/enveloping and fingertip types.
💡 Research Summary
The paper introduces a grasp planning framework tailored for a two‑finger underactuated robotic gripper, capable of handling arbitrary 3‑D objects without simplifying them to primitive shapes. The core innovation is an “object slicing” technique that reduces the complex problem of finding multiple contact points on a surface to a fast spatial search of points lying on the intersection of the object’s mesh (or point cloud) with planes defined by the gripper’s finger flexion geometry.
First, the object is represented as a triangular mesh; if only depth‑camera data are available, the raw point cloud is used directly. The mesh is stored in an Octree to enable rapid queries. Principal Component Analysis (PCA) is applied to the object vertices to determine its dominant axes, allowing the system to classify objects into categories (cylindrical, flat, spherical/cuboid, small box) and to align the gripper accordingly.
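The PCA-based shape classification described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the eigenvalue-ratio thresholds and the category labels returned are assumptions for demonstration.

```python
import numpy as np

def principal_axes(points):
    """PCA on object vertices: eigenvalues (descending) and dominant axes."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order]

def classify_shape(eigvals, ratio=4.0):
    """Heuristic class from eigenvalue ratios (threshold is an assumption)."""
    l1, l2, l3 = eigvals
    if l1 > ratio * l2:
        return "cylindrical"       # one dominant axis
    if l2 > ratio * l3:
        return "flat"              # two dominant axes
    return "spherical/cuboid"      # roughly isotropic extents
```

The returned eigenvectors give the dominant axes used to align the gripper with the object.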
A large pool of candidate grasps is generated by sampling the surface of an enclosing sphere, cylinder, or circle at fixed angular intervals, depending on the object class. Each candidate consists of an initial gripper pose, approach direction, and finger configuration. For every candidate, the algorithm proceeds as follows:
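For the sphere case, the candidate-pool generation might look like the sketch below. The angular resolutions and the (position, approach-direction) representation of a candidate are assumptions; the paper's candidates also carry a finger configuration.

```python
import numpy as np

def sample_sphere_candidates(center, radius, n_azimuth=8, n_polar=4):
    """Sample initial gripper poses on an enclosing sphere at fixed
    angular intervals; each candidate pairs a position with a unit
    approach direction pointing toward the object center."""
    candidates = []
    for i in range(n_azimuth):
        phi = 2 * np.pi * i / n_azimuth
        for j in range(1, n_polar + 1):
            theta = np.pi * j / (n_polar + 1)      # skip the poles
            p = center + radius * np.array([
                np.sin(theta) * np.cos(phi),
                np.sin(theta) * np.sin(phi),
                np.cos(theta)])
            approach = (center - p) / radius        # unit vector
            candidates.append((p, approach))
    return candidates
```

Analogous samplers over an enclosing cylinder or circle would cover the other object classes.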
- **Knuckle contact** – A plane orthogonal to the knuckle surface and containing both fingers is defined. All Octree leaf nodes intersecting this plane are retrieved, their points projected onto the plane, and the nearest point is taken as the first knuckle contact.
- **Finger closure** – With the gripper positioned at the knuckle contact, a second plane aligned with the finger flexion direction is considered. Points from intersecting leaf nodes are projected onto this plane. The proximal joint is closed until either a projected point is reached or the joint limit is hit; then the distal joint is closed in the same manner. The links are modeled as zero‑thickness line segments of the actual link lengths, which simplifies collision checking.
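The per-joint closing step can be sketched in the 2‑D flexion plane. This is a simplified illustration under stated assumptions: points are already projected into the plane, the link pivots at the origin starting from angle 0, and it closes toward increasing angle up to the joint limit.

```python
import numpy as np

def closing_angle(points_2d, link_length, joint_limit):
    """Close a zero-thickness link (pivot at the origin of the flexion
    plane) from angle 0 toward joint_limit; stop at the first projected
    point the link would sweep through. Returns (angle, contact or None)."""
    best, contact = joint_limit, None
    for p in points_2d:
        r = np.hypot(p[0], p[1])
        if r > link_length:            # point is out of the link's reach
            continue
        ang = np.arctan2(p[1], p[0])   # link angle at which it touches p
        if 0.0 <= ang < best:
            best, contact = ang, p
    return best, contact
```

Running the same routine first for the proximal joint and then (in the distal link's frame) for the distal joint mirrors the sequential closure described above.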
After contact points for the knuckle, proximal link, and distal link are obtained, the grasp is evaluated using a combined quality metric. The first component approximates the friction cone at each contact by an eight‑sided pyramid and constructs the wrench space (forces and torques) generated by the contacts. The distance from the origin of this space to the closest facet of the convex hull (ε) quantifies the maximum external wrench the grasp can resist; ε is normalized to
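A Ferrari–Canny style ε computation along these lines can be sketched as follows. It is a simplified stand-in for the paper's metric: the friction coefficient, the tangent-basis construction, and unit contact forces are assumptions, and the cone is discretized into eight edge forces as described above.

```python
import numpy as np
from scipy.spatial import ConvexHull

def epsilon_quality(contacts, normals, mu=0.5, n_edges=8):
    """Approximate each friction cone by n_edges unit forces, form the
    contact wrenches [f, p x f], and return the minimum distance from the
    wrench-space origin to a facet of their convex hull. A nonpositive
    value means the origin lies outside the hull (no force closure)."""
    wrenches = []
    for p, n in zip(contacts, normals):
        n = n / np.linalg.norm(n)
        a = np.array([1.0, 0.0, 0.0])
        if abs(n @ a) > 0.9:                 # avoid a near-parallel helper axis
            a = np.array([0.0, 1.0, 0.0])
        t1 = np.cross(n, a); t1 /= np.linalg.norm(t1)
        t2 = np.cross(n, t1)
        for k in range(n_edges):
            ang = 2 * np.pi * k / n_edges
            f = n + mu * (np.cos(ang) * t1 + np.sin(ang) * t2)
            f /= np.linalg.norm(f)
            wrenches.append(np.concatenate([f, np.cross(p, f)]))
    hull = ConvexHull(np.array(wrenches))
    # hull.equations rows are [facet normal, offset]; for an origin inside
    # the hull the facet distance is -offset.
    return float(np.min(-hull.equations[:, -1]))
```

Normalizing ε (e.g. against the pool's maximum) then lets it be combined with the other components of the quality metric.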