Robot Movement Planning Theory and Practice Based on ROS2 Humble and MoveIt2: Kinematics Explained


Introduction

In the previous article, focused on modeling, we discussed how to construct a robotic arm model from scratch for basic motion planning in a simulation system. Using MoveIt2's built-in KDL solver and the open-source OMPL path-planning library provided a foundational understanding of motion-planning principles, but that alone may not be sufficient for practical engineering development. Therefore, this chapter introduces commonly used underlying methods and principles of motion planning.

1. Prerequisite Knowledge on Matrices

Before diving into forward and inverse kinematics, it's essential to briefly discuss matrix-related concepts, since much of the forthcoming forward- and inverse-kinematics material involves complex matrix calculations that many people find intimidating. First, it is important to understand why robotics relies so heavily on matrices: whether a robot is three-axis, four-axis, standard six-axis, or redundant seven-axis, its URDF file shows that it is composed of several links. Joints represent the connections between links, and tracking how each link is positioned and oriented is the key to describing robotic movement.

How do we best represent a link in space? A three-dimensional vector is a natural starting point: it can express a link's direction through combinations of basis vectors and its length through the vector's magnitude. However, a robotic link requires not only position but also complete orientation. A single vector can only indicate "which direction it points," not "how it is rotated about that direction." Three orthogonal basis vectors are needed to fully describe a local coordinate system, and together they naturally form a 3×3 matrix: the rotation matrix.

Moreover, the robotic arm's pose changes throughout its motion, with position typically changing more frequently than orientation. Translations are relatively simple, expressed through basic addition and subtraction of XYZ values. Here lies a distinction: rotation and translation are computed separately. In 3D space, rotation is a linear map (matrix multiplication v' = Rv), while translation is not a linear map but an affine one (vector addition v' = v + p). To describe "rotate first, then translate," the formula is:

vnew = R ⋅ vold + p

This formula includes one multiplication and one addition. For a robot with dozens of links, composing many such rotate-then-translate steps produces nested expressions that cannot be collapsed into a single matrix product, which is cumbersome. To unify "multiplication + addition" into pure "multiplication," we introduce homogeneous coordinates: appending a dimension to the 3D vector [x, y, z]ᵀ to form [x, y, z, 1]ᵀ.

When performing 4 × 4 matrix multiplication, we can observe the expanded block multiplication:


[ R  p ]   [ v ]   [ R·v + p·1 ]   [ R·v + p ]
[ 0  1 ] · [ 1 ] = [ 0·v + 1·1 ] = [    1    ]

This transformation, which originally required two steps (R·v + p), is now accomplished via a single 4 × 4 matrix multiplication. The final row [0 0 0 1] makes the matrix square, so successive transformations can be chained purely by matrix multiplication.
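As a quick sanity check, the block multiplication above can be reproduced numerically. This is only an illustrative sketch using NumPy, with an arbitrary 90° rotation about Z and an arbitrary translation:

```python
import numpy as np

# Rotation of 90 degrees about Z, then translation by p = [1, 2, 0].
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
p = np.array([1.0, 2.0, 0.0])

# Build the 4x4 homogeneous transform [[R, p], [0, 1]].
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = p

v = np.array([1.0, 0.0, 0.0])

# Two-step form: rotate, then translate.
v_two_step = R @ v + p

# One-step form: append a 1 and multiply by T.
v_homog = T @ np.append(v, 1.0)

print(v_two_step)   # [1. 3. 0.]
print(v_homog[:3])  # same result from a single matrix multiplication
```

Both forms give the same point, which is precisely what the block multiplication promises.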

If you break down the T matrix's rotation portion R into its column vectors n, o, a, alongside the translation p:

        [ nx  ox  ax  px ]
    T = [ ny  oy  ay  py ]
        [ nz  oz  az  pz ]
        [ 0   0   0   1  ]

The vectors n, o, and a represent the unit vector directions of the new coordinate system’s X, Y, and Z axes in the original coordinate system. The vector p indicates the new coordinate system’s origin in the original coordinate system. This is the homogeneous transformation matrix, essentially serving as a “coordinate system projection manual.” It informs the original coordinate system where its X, Y, and Z axes must point and where its origin must move to match the new system.

In robotic kinematics, the homogeneous transformation matrix serves two main purposes: 1) representation of end-effector poses and 2) representation of pose transformations. Additionally, the homogeneous transformation matrix plays a crucial role in visual calibration tasks, enabling transformations between base coordinate systems, tool coordinate systems, camera coordinate systems, and calibration plate coordinate systems, yielding the core homogeneous equation for eye-in-hand calibration: AX = XB. Finally, the Jacobian matrix is also quite important and will be discussed in detail in the upcoming section on inverse kinematics.

2. Kinematics: Forward and Inverse Solutions

2.1 Forward Kinematics

Forward kinematics calculates the end-effector’s position and orientation in Cartesian space from known joint variables (such as rotation angles or translation distances). The core of this process is to establish the mapping relationship between joint space and Cartesian space, typically achieved through coordinate transformations. For example, for a planar two-degree-of-freedom link, knowing the angle between the link and the previous link or the horizontal plane allows us to use trigonometric transformations to derive the end coordinates of the link.

When extending this to six degrees of freedom or more, the calculation builds on the same foundation by composing each link's transformation in turn. The more degrees of freedom, the more complex the expressions become, but computers handle these calculations rapidly.
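The planar two-degree-of-freedom case described above can be written in a few lines. The link lengths and angles below are arbitrary illustrative values:

```python
import math

def planar_fk(l1, l2, q1, q2):
    """Forward kinematics of a planar 2-DOF arm.
    q1 is measured from the horizontal; q2 is relative to link 1."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Link 1 points straight up, link 2 bends back to horizontal.
x, y = planar_fk(1.0, 1.0, math.pi / 2, -math.pi / 2)
print(x, y)  # approximately (1.0, 1.0)
```

Each additional link just adds another trigonometric term of the accumulated angle, which is exactly the "layering" that matrices later automate.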

In three-dimensional space with six links in series, the trigonometric formulas grow far more complex, so matrices are used for convenience — specifically the homogeneous transformation matrix and rotation matrix introduced above. This matrix formulation simplifies forward kinematics considerably. Compared to inverse kinematics, forward kinematics is relatively straightforward, following a fixed set of steps that can be summarized as:

  1. Establish the D-H parameter table.
  2. Calculate the transformation matrix for adjacent coordinate systems.
  3. Use chain multiplication to obtain the total transformation matrix.
  4. Extract the end-effector pose.

Through these four steps, we see that the D-H parameter table and homogeneous transformation matrix are foundational to forward kinematics. The establishment of the D-H parameter table was covered in the previous article: Link.

The essence of forward kinematics is to represent the relative positional relationship between adjacent coordinate systems through matrices. However, the coordinate systems are determined by six parameters, while the D-H parameter table seemingly provides only four parameters: link length, link offset, joint angle, and twist angle. These four parameters do not appear directly related to the six parameters of the coordinate system: XYZRPY (pose parameters). So, how can knowing the D-H parameters determine the relative relationship between adjacent coordinate systems?

In reality, pose parameters are state parameters describing the current state of a coordinate system, while D-H parameters are process parameters expressing how to move in order to reach that state.

The six parameters describe, “Where are you?” (state), while the four D-H parameters describe, “How did you get there?” (process). For robotic kinematics, knowing “how to get there” is more useful than merely knowing “where it is,” as it directly corresponds to joint movement control.

The transformation operations corresponding to D-H parameter types are as follows:

  1. Rotate θi around the Zi-1 axis: joint i's rotation angle, variable for a rotational joint and fixed for a translational joint.
  2. Translate di along the Zi-1 axis: the offset distance along the joint axis, variable for a translational joint and fixed for a rotational joint.
  3. Translate ai along the new X axis: the shortest distance between the two joint axes (a fixed value).
  4. Rotate αi around the new X axis: the twist angle between the two joint axes (a fixed value).

This table clarifies the transformation operations corresponding to each D-H parameter. However, a question arises: while pose representation requires six degrees of freedom, the D-H parameters only seem to provide four. How are the remaining two degrees of freedom constrained?

In truth, the constraints of the remaining two degrees of freedom are already established during the creation of the D-H parameter representations. As previously noted, the Z-axis must align with the rotation axis, and the coordinate origin must lie on the Z-axis, while the X-axis is also restricted. The seemingly “lost” two degrees of freedom are actually embedded in the rules for establishing the coordinate systems.

Through the transformations provided by the D-H parameter table, we can understand the conversion methods between adjacent coordinate systems. The chain rule enables the derivation of the positional relationship between the end-effector’s coordinate system and the base coordinate system, from which the rotation and displacement matrices can be extracted from matrix T.
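The four-step recipe can be sketched in code: one function builds the standard D-H transform (rotate θ about z, translate d along z, translate a along the new x, rotate α about the new x), and the chain product of the per-joint transforms gives the end-effector pose. The three-row D-H table here is invented for illustration, not taken from any real robot:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between adjacent D-H frames:
    Rz(theta) * Tz(d) * Tx(a) * Rx(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows, joint_angles):
    """Chain-multiply the per-joint transforms: T = A1 A2 ... An."""
    T = np.eye(4)
    for (d, a, alpha), theta in zip(dh_rows, joint_angles):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative 3-joint D-H table: (d, a, alpha) per joint.
dh_rows = [(0.3, 0.0, np.pi / 2),
           (0.0, 0.4, 0.0),
           (0.0, 0.3, 0.0)]
T = forward_kinematics(dh_rows, [0.0, 0.0, 0.0])
print(T[:3, 3])  # end-effector position, approximately [0.7, 0, 0.3]
```

The last step — "extract the end-effector pose" — is just reading the p column (`T[:3, 3]`) and the rotation block (`T[:3, :3]`) out of the total transform.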

2.2 Calibration of D-H Parameters

If learning robotic motion control is aimed at research and simulation, you can directly use values measured with CAD or 3D modeling tools as D-H parameters. However, in practical robotic development projects, the parameters provided by CAD often differ from those of the robot after actual assembly. Theoretical D-H parameters are based on ideal conditions, whereas real robots experience various error sources:

  • Manufacturing tolerance (typically 0.01-0.1 mm): processing accuracy limits, material shrinkage.
  • Assembly error (0.05-0.5 mm): installation deviation, bolt pre-tightening force.
  • Joint zero-position offset (0.1-1.0°): encoder installation deviation, mechanical limit errors.
  • Link deformation (0.01-0.1 mm): elastic deformation due to gravity, temperature, or load.
  • Gear play (0.01-0.05°): backlash in the transmission system.
  • Thermal expansion (0.01-0.1 mm per 10°C): temperature changes causing dimensional changes.

Serial robots exhibit error accumulation characteristics; for instance, a typical six-axis industrial robot with a 0.1° angle error in joint 1 may result in a 1-2mm positional error at the end-effector. This error significantly impacts precision assembly when the end-effector is equipped with a gripper requiring precise fitting, and the accumulation of visual calibration errors further reduces achievable precision. The conventional calibration process involves:

  1. Installing a laser tracker.
  2. Aligning global coordinate systems.
  3. Ensuring sufficient data quantity and distribution.
  4. Identifying parameters.

The laser tracker can be quite expensive; students or those with limited budgets may skip this step, which would not affect subsequent motion planning, although it would impact precision.

Calibration Requirements:

  • Quantity: At least 3-5 times the number of parameters to be calibrated.
  • Distribution: Uniformly cover the workspace.
  • Pose: Sufficiently excite all parameters across different poses.
  • Typical Setup: For a six-axis robot, typically 50-200 calibration points are required.

Parameters to Identify:

  • Geometric Parameters: Identification includes not only nominal α, a, d, θ but also:
    • Joint zero position offset (Δθ or Δd): The deviation between the encoder’s zero position and the theoretical zero position. This is one of the most critical parameters to identify.
    • Minor errors in link lengths (a), twist angles (α), and link offsets (d).
  • Non-Geometric Parameters (Optional, for High-Precision Calibration):
    • Gear transmission ratio errors: Inaccuracies in the transmission ratio from motor to joint.
    • Joint flexibility/link deformation: Consideration of flexibility for heavy loads or high-speed robots.
    • Temperature drift errors: Structural deformation due to temperature changes.

Parameter Identification Process:

  1. Target Ball Installation: Securely attach the tracker target ball (SMR) to the robot end flange, ensuring a known fixed transformation relationship between the ball’s center and the flange center (requires prior calibration).
  2. Global Coordinate System Alignment: Establish the transformation relationship between the robot’s base coordinate system and the laser tracker’s world coordinate system, typically achieved through “three-point” or “multi-point fitting” methods.
  3. Key Principle: Measurement poses should be as uniformly and widely distributed within the robot’s workspace as possible, including various singular poses (such as fully extended arms or near joint limits) to thoroughly excite all parameter errors.
  4. Data Quantity: Usually requires collecting 50-200 groups or more of pose data. The more data collected and the wider the distribution, the more robust the identification.
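The "three-point" or "multi-point fitting" in the alignment step is commonly implemented as an SVD-based rigid registration (the Kabsch algorithm). Below is a minimal sketch on synthetic data; the point count and transform values are arbitrary:

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Find R, t minimizing sum ||R @ P_i + t - Q_i||^2 (Kabsch).
    P, Q: (N, 3) arrays of corresponding points in the two frames."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known rotation/translation from 5 points.
rng = np.random.default_rng(0)
P = rng.normal(size=(5, 3))
ang = 0.3
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true

R, t = fit_rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With real tracker data the correspondences are noisy, so more points than the minimum three are used and the same least-squares fit averages the noise out.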

For each measured pose i, simultaneously record:

  • Joint angle vector qi: All joint angles (or positions) read from the robot controller.
  • End-effector actual pose Tmeas_i: The pose of the target ball’s center in the global coordinate system measured by the laser tracker (4×4 homogeneous transformation matrix).

Parameter Identification Calculation:

Construct the error model. For the i-th measured pose, the end-effector’s theoretical pose calculated based on current parameters is:

Tcalc_i = f(qi, Φ)

Define the pose residual (error):

ΔTi = (Tmeas_i)⁻¹ · Tcalc_i, or more commonly converted into a six-dimensional error vector δi, which contains three position errors and three orientation errors (in Euler-angle or axis-angle form).

Since the error δi and the parameter error ΔΦ are approximately linearly related for small errors, we can establish a linear equation:

δ = J · ΔΦ, where J is the Jacobian matrix, also known as the identification matrix: each row corresponds to one error component of a measured pose, and each column corresponds to the sensitivity of the error to one parameter.

Parameter Solving (Identification): The goal is to find a set of parameters Φ* that minimizes the sum of the squared residuals of all measured poses. This is typically transformed into a least squares problem:

min Σ ||δi||²

Solving methods: Since J is generally non-square and may be ill-conditioned, robust solutions are obtained using methods such as Levenberg-Marquardt or SVD (singular value decomposition). The iterative process is as follows:

  1. Use nominal parameters Φnom as an initial guess.
  2. Calculate the theoretical pose and error vector δ based on current parameters.
  3. Compute the error identification matrix J.
  4. Solve the equation (JᵀJ + λI)ΔΦ = Jᵀδ to obtain the parameter correction amount ΔΦ.
  5. Update parameters: Φnew = Φold + ΔΦ.
  6. Repeat steps 2-5 until the error Σ||δ||² converges to a predetermined threshold or the maximum number of iterations is reached.
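The identification loop above can be sketched on a toy problem: recovering the joint zero-position offsets of a planar two-link arm from simulated tip measurements, using a finite-difference identification Jacobian and the damped update from step 4. All link lengths, poses, and offsets are invented for illustration:

```python
import numpy as np

L1, L2 = 0.5, 0.4   # link lengths, assumed known exactly

def fk(q, phi):
    """Tip position of a planar 2-link arm; phi holds the joint
    zero-position offsets being identified."""
    a1 = q[0] + phi[0]
    a2 = a1 + q[1] + phi[1]
    return np.array([L1 * np.cos(a1) + L2 * np.cos(a2),
                     L1 * np.sin(a1) + L2 * np.sin(a2)])

def stacked(phi, qs):
    """Stack tip positions for all measurement poses into one vector."""
    return np.concatenate([fk(q, phi) for q in qs])

# Synthetic "laser tracker" measurements generated with true offsets.
phi_true = np.array([0.02, -0.015])                   # rad
qs = [np.array([a, b]) for a in (0.2, 0.8, 1.4) for b in (0.3, 1.0)]
meas = stacked(phi_true, qs)

phi = np.zeros(2)           # step 1: start from nominal parameters
lam, eps = 1e-6, 1e-6
for _ in range(20):
    delta = meas - stacked(phi, qs)         # step 2: residual vector
    if np.linalg.norm(delta) < 1e-10:       # step 6: convergence test
        break
    # step 3: finite-difference Jacobian of the model w.r.t. phi
    J = np.zeros((len(delta), 2))
    for j in range(2):
        dphi = np.zeros(2); dphi[j] = eps
        J[:, j] = (stacked(phi + dphi, qs) - stacked(phi, qs)) / eps
    # steps 4-5: damped update (J^T J + lam I) dphi = J^T delta
    phi = phi + np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ delta)

print(phi)  # converges to approximately [0.02, -0.015]
```

A real calibration identifies far more parameters (full D-H errors per joint) from 6-DOF pose residuals, but the structure of the loop is the same.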

Verification and Deployment: In the robot’s workspace, select another set of new validation poses (e.g., 20-30) not used for calibration, and repeat the measurement process. Input the identified joint zero position offsets (Δθ) and corrected link parameters into the robot controller’s parameter configuration file. Robot accuracy may drift due to mechanical wear, collisions, or temperature changes. It is recommended to repeat the calibration every three to six months; typically, robots undergo a DH calibration before leaving the factory.

2.3 Inverse Kinematics

Inverse kinematics addresses the problem of deducing joint parameters from the end-effector’s target position. Generally, methods for solving inverse kinematics can be divided into analytical and numerical methods.

Analytical Method: Suitable for robotic arms with lower degrees of freedom and simpler structures, it can derive explicit expressions for joint variables through mathematical deductions, yielding quick and precise results. However, it is only applicable to specific robotic structures and is challenging to generalize. This method is often utilized in industrial robots or collaborative robotic arms, characterized by custom development, stability, and efficiency.

Numerical Method: Applicable to robotic arms of any structure, it involves significant calculations, may converge slowly, and typically yields approximate solutions. This method is suitable for six-axis or higher robotic arms or any configuration, characterized by its general applicability.

2.3.1 Analytical Method

The analytical method essentially solves joint angles corresponding to a specific pose using inverse trigonometric transformations. By breaking down the six-dimensional pose into three-dimensional coordinates and three-dimensional orientations, each can be solved separately. For instance, in a UR robotic arm, the point where the three axes of the end effector intersect is referred to as the wrist point. The transformations of the first three axes determine the wrist point’s coordinates, while the last three axes rotate around the wrist point to establish orientation.

However, one might observe that for many robotic arms, including UR, the three axes of the end effector do not always intersect at a single point and may even appear parallel or collinear. Yet, research indicates that the UR robotic arm’s configuration meets the criteria for the Pieper criterion and can utilize the analytical method. This apparent contradiction raises a key point: The Pieper criterion states that the last three joint axes intersect at a point within the robot’s kinematic model (usually based on D-H parameters). The Pieper criterion pertains to the characteristics of the kinematic model, ensuring the analytical solution’s feasibility rather than requiring the mechanical axes to remain orthogonal and intersecting in physical reality.

We can draw a clear analogy: the Pieper criterion (model layer) suggests, “The earth orbits the sun in an elliptical motion.” This elegant and concise model resolves most issues, treating both the sun and the earth as point masses. However, in physical reality (execution layer), neither the sun nor the earth is a point mass; they possess volume, and the earth’s orbit is influenced by other planets. Nonetheless, this does not hinder our capability to accurately calculate calendars and launch satellites using the “elliptical model.” The physical structure of the UR robot (axes not physically touching) resembles the earth and sun having volume. Wrist singularities are akin to the earth at perihelion or aphelion—special positions inherently predicted by the elliptical model. At these positions, certain properties of the model (such as uniqueness at intersection points) degrade, but this does not contradict the model; rather, it is part of it.

When the three axes of the end effector appear non-intersecting or even collinear, this represents a singular configuration. For instance, when the fourth axis aligns with the sixth axis, both can only influence the same rotational degree of freedom, so one degree of freedom is lost. When the fourth and sixth axes become collinear, the arm must flip rapidly; in a simulated environment, an arm reaching such a singular point exhibits exactly this rapid flipping. On a real robot, passing through a singular position would require the motors' speed and acceleration to approach infinity at the moment of flipping, which no real motor can achieve. The result is that the motors abruptly stop, triggering overspeed limits and alarms on the teach pendant. Thus, avoiding singular positions during motion planning is essential to prevent planning failures.

Moreover, determining coordinates and orientations requires extensive use of trigonometric and inverse trigonometric functions, and a characteristic of trigonometric functions is that multiple inputs can produce the same output. For instance, if cos(x) = 0, x could be π/2 or −π/2, so the same end pose can correspond to multiple angle combinations. This exemplifies the multi-solution nature of inverse kinematics. A six-axis robot can have up to eight sets of solutions:

  • Shoulder: Left/Right (two choices for θ1)
  • Elbow: Up/Down (two choices for θ3)
  • Wrist: Flip/No Flip (two choices for θ5)

It is crucial to note that this multi-solution refers to the end-effector’s final state, while motion planning presents another multi-solution aspect that must be differentiated. Singularities and multiple solutions are two major challenges in inverse kinematics, with no definitive solutions available. The best practice involves applying angle constraints to minimize their occurrence.
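The multi-solution behaviour is easiest to see in the planar two-link case, where the law of cosines yields an elbow-down and an elbow-up branch. A minimal sketch with illustrative link lengths:

```python
import math

def planar_ik(x, y, l1=0.5, l2=0.4):
    """Analytical IK of a planar 2-link arm: returns the elbow-down
    and elbow-up solutions (q1, q2) for a reachable target (x, y)."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    sols = []
    for s2 in (math.sqrt(1 - c2 * c2), -math.sqrt(1 - c2 * c2)):
        q2 = math.atan2(s2, c2)
        q1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
        sols.append((q1, q2))
    return sols

for q1, q2 in planar_ik(0.6, 0.3):
    # Verify each branch with forward kinematics.
    x = 0.5 * math.cos(q1) + 0.4 * math.cos(q1 + q2)
    y = 0.5 * math.sin(q1) + 0.4 * math.sin(q1 + q2)
    print(round(x, 6), round(y, 6))  # both branches reach (0.6, 0.3)
```

Two joints give two branches; in a six-axis arm the same ± choice appears at the shoulder, elbow, and wrist, which is where the 2 × 2 × 2 = 8 solution sets come from.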

2.3.2 Numerical Method

The numerical method typically refers to iterative methods based on the Jacobian matrix. The core logic is deterministic iteration, not random search: derivatives (the Jacobian matrix) tell the robot how many degrees each of its six joints must rotate to move the end-effector, say, 1 mm in the X direction.

Common algorithms include Newton-Raphson and Levenberg-Marquardt methods. The workflow is as follows:

  1. First, make an initial guess at the joint angles.
  2. Calculate the error between the current position and the target position.
  3. Use the Jacobian matrix to compute a “correction amount.”
  4. Update the angles and repeat until the error is negligible.

Features: convergence is fast and the achievable precision is very high, but the method only finds the solution nearest the initial guess and is susceptible to getting stuck near singularities. Let's break down each step:

  1. Step One: Initial Angle Guess – The term “guess” can be misleading. In mathematical numerical analysis, every iterative solver (like the Newton-Raphson method) requires a starting point. Regardless of how accurate this starting point is, it is uniformly referred to as “Initial Guess.” In code, this initial guess can be viewed as a seed value for the optimization algorithm. In practice, the joint angles can be read from the encoder, so this “guess” is simply the current state readings.
  2. Step Two: Calculate Error – From a human perspective, the desired position for the end-effector is a specific point in Cartesian space. Knowing the angles of the robot’s joints, one can use forward kinematics to ascertain the current end-effector’s Cartesian coordinates. It involves calculating the differences in X, Y, and Z axes between the starting and target points.
  3. Step Three: Use Jacobian Matrix for Correction – The numerical method's uniqueness lies in its use of the Jacobian matrix. The Jacobian matrix can be seen as a "manual" that tells you how much the end-effector's pose will change if each joint moves slightly (Δq). The mathematical expression is: Δx = J · Δq. In our case, since we know the error (desired change Δx), we need to find how the joints should move (Δq), so we rearrange the equation: Δq = J⁻¹ · Δx. (In practice, because J may be non-square or singular, the pseudo-inverse J⁺ or a damped least-squares method is typically used.)
  4. Specific Correction Steps:
    • Perceive the error: Determine how far the end-effector is from the target (e.g., 0.1m position error and 5° orientation error).
    • Consult the manual: Multiply the six-dimensional error vector Δx by the inverse Jacobian matrix J⁻¹ to obtain the correction instructions, which might indicate, for example, "Joint 1 should increase by 0.01 radians, Joint 2 should decrease by 0.005 radians," resulting in Δq.
    • Update: Add these small correction amounts to the current joint angles: qnew = qold + Δq.
    • Repeat: Use the new qnew to calculate the forward solution again and check whether the error has decreased. If the error is still unacceptable, repeat from "Perceive the error."
  5. Step Four: Update Angles and Repeat – Since each correction step only removes a small portion of the error, multiple iterations are needed to reach the target point. Like other gradient-based optimization schemes, the loop keeps refining the joint angles until the difference between the current and target poses falls within an acceptable tolerance, at which point it exits.
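The four-step loop can be sketched for the same planar two-link arm, here with an analytic Jacobian and a damped least-squares step (a common substitute for the plain inverse near singularities); all values are illustrative:

```python
import numpy as np

L1, L2 = 0.5, 0.4

def fk(q):
    """Forward kinematics: tip position of the 2-link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """Analytic 2x2 Jacobian d(x, y)/d(q1, q2)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_iterative(target, q0, tol=1e-8, lam=0.01, max_iter=200):
    q = np.array(q0, dtype=float)          # step 1: initial guess
    for _ in range(max_iter):
        dx = target - fk(q)                # step 2: task-space error
        if np.linalg.norm(dx) < tol:
            break
        J = jacobian(q)                    # step 3: correction via J
        dq = np.linalg.solve(J.T @ J + lam**2 * np.eye(2), J.T @ dx)
        q = q + dq                         # step 4: update and repeat
    return q

q = ik_iterative(np.array([0.6, 0.3]), q0=[0.3, 0.3])
print(fk(q))  # approximately [0.6, 0.3]
```

Starting from a different q0 can converge to the other (elbow-up) branch, which is exactly the "finds the solution nearest the initial position" behaviour described above.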

Jacobian Matrix: To comprehend how the Jacobian matrix computes joint corrections, it is essential to understand the local linearization of nonlinear functions. The forward kinematics of a robotic arm is fundamentally a nonlinear function: x = f(q), where q is the joint angle vector (e.g., 6 × 1) and x is the end-effector pose vector (position and orientation, 6 × 1). The function f is complex and nonlinear, filled with sin and cos operations. To determine how a small change in the independent variable q affects the dependent variable x, derivatives are taken.

For multivariable functions, this leads to linear equations in matrix form: dx = J(q) dq, where each matrix element Jij = ∂xi/∂qj has a clear physical meaning: it indicates the rate of change of the end-effector along the i-th coordinate direction when the j-th joint moves slightly.

Now, substituting the infinitesimal d with the finite error step Δ used in our algorithm, we have: Δx ≈ J(q) Δq. This is a standard linear system Av = b: J(q) is the known coefficient matrix (computable from the robot's structure and current angles), Δx is the desired change (target pose − current pose), and Δq is the unknown joint correction amount. To find Δq, we invert (or pseudo-invert) J: Δq = J⁻¹ Δx.

The Jacobian matrix effectively “knows” the correction amounts because it retains the instantaneous conversion coefficients between joint movements and end-effector movements. If you are familiar with Taylor series, the Jacobian matrix can be viewed as the first-order expansion of f(q) at the current point q0: f(q0 + Δq) ≈ f(q0) + J(q0) Δq. Since we only consider the first-order term and ignore higher-order terms, this is merely a linear approximation. When the error Δx is large, this linear approximation becomes inaccurate. This illustrates why numerical methods require iteration: we only adjust slightly each time, updating J (as J is a function of q, changes in position also change the derivatives) and making further small adjustments until converging on the true solution.
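The first-order (Taylor) nature of this approximation can be checked numerically: build a finite-difference Jacobian at a point and watch the linear prediction's error shrink as the step gets smaller. A sketch on the planar two-link arm, with arbitrary parameters:

```python
import numpy as np

L1, L2 = 0.5, 0.4

def fk(q):
    """Tip position of a planar 2-link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def numeric_jacobian(q, eps=1e-7):
    """Finite-difference Jacobian: perturb each joint slightly and
    record how the end-effector position changes."""
    J = np.zeros((2, 2))
    for j in range(2):
        dq = np.zeros(2); dq[j] = eps
        J[:, j] = (fk(q + dq) - fk(q)) / eps
    return J

q0 = np.array([0.4, 0.7])
J = numeric_jacobian(q0)

# First-order (Taylor) prediction vs the true value:
for step in (0.1, 0.01, 0.001):
    dq = np.array([step, -step])
    err = np.linalg.norm(fk(q0 + dq) - (fk(q0) + J @ dq))
    print(step, err)  # error shrinks rapidly as the step shrinks
```

The leftover error is the ignored higher-order terms, which is why the iterative method takes many small steps and recomputes J along the way rather than trusting one big linear jump.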

Furthermore, the Jacobian matrix holds significant importance in various fields of robotics, including dynamics, parameter identification, and singularity analysis, which we will not elaborate on here.

Original article by NenPower, If reposted, please credit the source: https://nenpower.com/blog/robot-movement-planning-theory-and-practice-based-on-ros2-humble-and-moveit2-kinematics-explained/
