Asking about Kinect2 RAW orientation data

  • Asking about Kinect2 RAW orientation data

    Hi ttakala, I want to thank you for every response you gave to my previous questions; they have really helped me.

    I wonder if you could help me with this one.
    I've been struggling to understand the meaning of the raw orientation data that Kinect v2 returns. It seems to return quaternion (Vector4) data, but when I tried to implement my own "avatar" project, the movement was off. Thanks to your RUIS system, I can move the avatar perfectly now.

    But I'm really curious: what exactly does Kinect v2 return as its orientation data? I've searched a lot of forums, and the answers people post vary, which only confuses me more. Since you have already got it working in RUIS, could you please describe it to me? It would really help.

    And one more thing: if I want to extract the Euler angle representation (axis and angle) of each joint's rotation from the orientation data, how should I do it?

    Your help is much appreciated. Thank you!

  • #2
    Hi Richard,

    I can't tell you exactly what the thinking behind the SDK's rotation logic is, because quite frankly we didn't get it either; instead we just worked around it :-)

    For some reason the Microsoft SDK gives rotations that are not directly applicable in most cases. It seems like they left the rotation implementation halfway done. That's why we needed to infer the rotations using joint positions together with the weird raw rotations given by the SDK. You can see how we do it in RUISSkeletonManager.cs, line 419 onwards (RUIS 1.08/081/082). With most joints we feed the joint's bone direction to Quaternion.LookRotation() and rotate the result with a raw rotation from the SDK; see the sketch below. Only hip and hand rotations are obtained without that procedure. Please note that in that part of the code we also use constant rotations (Quaternion.Euler) to convert our Kinect 2 rotations to the weird skeleton coordinate system of Kinect 1 (this was a bad design decision that I plan to change in the future).
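
    A minimal sketch of that pattern, with made-up names for the inputs (this is not the actual RUIS code; the exact multiplication order and per-joint details live in RUISSkeletonManager.cs):

        using UnityEngine;

        public static class JointRotationSketch
        {
            // Infer a usable joint rotation: the bone direction comes from two
            // tracked joint positions, and rawSdkRotation is the raw quaternion
            // that the Kinect 2 SDK reports for the joint.
            public static Quaternion Infer(Vector3 jointPos, Vector3 childJointPos, Quaternion rawSdkRotation)
            {
                Vector3 boneDirection = childJointPos - jointPos;
                // Build a rotation whose forward axis points along the bone...
                Quaternion lookRotation = Quaternion.LookRotation(boneDirection);
                // ...and rotate it with the SDK's raw rotation.
                return lookRotation * rawSdkRotation;
            }
        }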

    I'm not sure if I understood your question. The Euler representation is different from the axis-angle representation; see here:
    http://www.euclideanspace.com/maths/...ions/index.htm

    In Unity you can easily get Euler angles or AngleAxis from any quaternion.
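
    For example (a quick sketch using any quaternion you already have, here the transform's own rotation):

        using UnityEngine;

        public class RotationRepresentations : MonoBehaviour
        {
            void Start()
            {
                Quaternion q = transform.rotation; // e.g. a tracked joint's rotation

                // Euler angles, in degrees around the X, Y and Z axes:
                Vector3 euler = q.eulerAngles;

                // Angle-axis representation:
                float angle;
                Vector3 axis;
                q.ToAngleAxis(out angle, out axis);

                Debug.Log("Euler: " + euler + ", axis: " + axis + ", angle: " + angle);
            }
        }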

    If you want to get "usable" skeleton rotations, the easiest thing to do is probably to read them from the RUISSkeletonController component's public Transform fields (root, head, rightShoulder...); see the sketch below.
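
    Something along these lines (a sketch; root, head and rightShoulder are the kinds of Transform fields I mean, and you would assign the component in the Inspector):

        using UnityEngine;

        public class ReadSkeletonRotations : MonoBehaviour
        {
            public RUISSkeletonController skeletonController; // assign in the Inspector

            void Update()
            {
                Quaternion headRotation = skeletonController.head.rotation;
                Quaternion shoulderRotation = skeletonController.rightShoulder.rotation;
                // ...and so on for the other public Transform fields.
            }
        }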

    You could also try getting the rotations from the part of RUISSkeletonManager.cs discussed above, but without the Quaternion.Euler() constant rotations. I haven't tested that, so I can't guarantee that it works.



    • #3
      I wonder if any filtering is possible prior to skeleton inference. The previous Kinect skeletonizer did not work properly when the person was positioned against a background (as in making contact with the background). I suppose the skeletonization is done directly on the internal sensor processing board; I am afraid this precludes filtering out the parts of the point cloud that mess up skeleton tracking. Do you have any experience with, or ideas on, this? Hmmm.



      • #4
        As far as I know, at least Kinect 1 & 2 (and probably other skeleton trackers based on depth sensing) use background extraction to capture the human silhouette, and the skeleton pose detection method is applied to that silhouette. That is why the tracking works poorly if the user gets too close to the background. In Kinect the detection is done on the GPU, but acquiring new training data for the method is so labor intensive that we are not likely to see an update to Kinect. In the new VicoVR sensor the detection is done in hardware.

        Any temporal filtering of the point cloud would introduce additional latency to the tracking, so I'm not in favor of that. However, I agree that post-filtering is far from ideal for the very same reason.

        The jitter that is present in the tracked joints is due to the joint detection method that the Kinect software uses:
        http://research.microsoft.com/pubs/1...ecognition.pdf

        Better methods will provide more stable results by utilizing the input data in a more optimal way. <-- This sentence is vague on purpose ;-)

        And oh boy, I do have ideas! I know a few talented software engineers who could help me with this. To get serious about implementing these ideas, upwards of $300k in seed money would get us started. Paging angel investors!

