I have the following problem:
I have a set of depth images of the same object taken from different directions, and I have the camera calibration data. Now I want to transform the pixel coordinates, together with their depth values, into world points, so I can use those points to build a mesh.
I know that pixel coordinates Px are related to a point Pw in the real world by the projection equation (up to a scale factor):
Px = K * [R t] * Pw
where the intrinsic matrix is K = [f 0 ppx; 0 f ppy; 0 0 1], with f the focal length and (ppx, ppy) the principal point, and the extrinsic matrix [R t] consists of the 3x3 rotation matrix R and the 3x1 translation vector t.
My questions are now the following: in which order do I multiply the matrices? I know that matrices are applied from right to left, meaning first the extrinsic matrix [R t] and then K times the result. Or do I first multiply the 2D coordinate (with the depth as Z) by the extrinsic matrix and then multiply K with the result?
Also, is there some std function that does matrix-matrix and matrix-vector multiplication, or do I have to implement it myself on top of std::vector / std::array?
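For reference, this is the kind of hand-rolled multiply I have in mind if there is no std alternative (I am aware libraries like Eigen or OpenCV provide matrix types, but I would rather avoid a dependency if a few lines suffice):

```cpp
#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// 3x3 matrix-matrix product: C = A * B.
Mat3 matMul(const Mat3& A, const Mat3& B) {
    Mat3 C = {};  // zero-initialized
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// 3x3 matrix-vector product: y = A * x.
Vec3 matVec(const Mat3& A, const Vec3& x) {
    Vec3 y = {};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k)
            y[i] += A[i][k] * x[k];
    return y;
}
```

Would this be considered acceptable, or is rolling your own frowned upon here?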