Dawn of a New Era in Motion Capture

Motion capture technology has been around for decades. The fidelity of current systems is phenomenal. They bring amazing characters to life in film and games. But the technology is expensive and requires substantial human and infrastructure resources to support it.

A set-up used to capture an athlete’s moves for a sports game requires a dedicated mo-cap studio and an array of cameras and sensors that cost hundreds of thousands of dollars – not counting the specialized operators or the custom software required to make sense of the data.

Recent innovations have produced a new breed of motion capture. One such system, the Perception Neuron, uses an array of 18-32 sensors, each with a highly accurate accelerometer and gyroscope. The system can record up to 120 samples per second, and unlike high-end optical systems, it doesn’t suffer from occlusion because the data is captured by body-worn sensors rather than cameras. It can record and transmit data wirelessly, in real time. It’s designed from the ground up to integrate with virtual reality displays. And it costs $1,000.

First motion capture test at Ikazuchi Dojo

Setting up the sensor array for our first motion capture test.

The Future Is Here

We believe that over the long term, three major new technologies will have far reaching impact in the martial arts world.

  • Motion Capture: Low cost, real time, sensor based systems will allow for efficient recording of martial arts movements. These systems can capture position, orientation, and movement of the human body in great detail.
  • Virtual Reality: A fully immersive environment that can be used to view recorded motion capture, or experience others’ movements in real time in a virtual space.
  • Augmented Reality: Holograms superimposed into the real world via glasses or other similar viewing technology. Motion capture data and other information can be projected in real time into an existing space.
A Perception Neuron sensor

What does this mean for the martial arts world?

Potential applications for this technology are far reaching and breathtaking. As a sci-fi fan, the long term arc is exciting to dream about. But as a practical martial artist, I realize the value in taking the first steps on the path – the near term applications. We’ve just begun to think about what those might look like:

  • Movement Research: With some custom software, we should be able to calculate and display a martial artist’s center of gravity, base of support, and balance state. Looking at this data may reveal ways to optimize movements and techniques. There are many other types of data that can be recorded that should yield insights in a range of areas.
  • Individual Assessments: This new breed of mo-cap systems is already being used for golf swing analysis and other similar applications. We think there’s a place to use these systems as instructional/assessment tools for students. It should be straightforward to measure body structure quality, alignment, movement path efficiency, and speed. We expect this to have value for a certain kind of martial arts practitioner.
  • Time Capsule: Imagine if we had 100 hours of Morihei Ueshiba’s movements recorded in 3D at 120 frames per second. What about Bruce Lee? We can begin to capture and immortalize the movements of today’s great martial arts masters. The future value of this data from a historical and research perspective is huge. In capturing the essence of the martial arts greats, the impact of this technology should be no less than that of video.
  • Entertainment: We have some ideas about how this technology can create new bridges between the worlds of martial arts and entertainment. We’ll save our thoughts on this for a future post.
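The center-of-gravity idea in the Movement Research bullet can be sketched with a standard biomechanics approach: treat the body as a set of segments and take a mass-weighted average of the segment positions. The segment names and mass fractions below are illustrative textbook-style values, not the Perception Neuron’s actual body model or output format.

```python
# Sketch: whole-body center of mass as a mass-weighted average of body
# segment centers. Segment names and mass fractions are illustrative
# placeholders, not the Perception Neuron's actual data model.

SEGMENT_MASS_FRACTION = {
    "head": 0.081, "trunk": 0.497,
    "upper_arm_l": 0.028, "upper_arm_r": 0.028,
    "forearm_hand_l": 0.022, "forearm_hand_r": 0.022,
    "thigh_l": 0.100, "thigh_r": 0.100,
    "shank_foot_l": 0.061, "shank_foot_r": 0.061,
}

def center_of_mass(segment_positions):
    """segment_positions: {segment_name: (x, y, z)} of each segment's center.

    Returns the mass-weighted average position, normalized so that
    partial poses (missing segments) still produce a sensible estimate.
    """
    total = sum(SEGMENT_MASS_FRACTION[s] for s in segment_positions)
    com = [0.0, 0.0, 0.0]
    for name, pos in segment_positions.items():
        w = SEGMENT_MASS_FRACTION[name] / total
        for i in range(3):
            com[i] += w * pos[i]
    return tuple(com)
```

In a real pipeline, the segment centers would come from the mo-cap skeleton each frame, and the mass fractions would be scaled to the measured body of the performer.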

Our First Step

As a first step, we purchased a system and started testing it. Before spending too much time thinking about applications, we wanted to get the system up and running and see what the data looks like.

We’re sure to run into a range of obstacles and problems. We’ve already identified a number of issues like proper calibration (the movement data doesn’t quite map properly to the model) and dealing with elevation changes (the system doesn’t deal well with rolling/falling). These kinds of things are pretty straightforward to address. If we do encounter hard problems to solve, our community has brought together individuals with the experience and motivation to do so.

Here’s a look at some data from our first capture session:


(The red circle in the abdominal area and the red circle projected onto the ground plane show the center of gravity.)
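A balance check like the one pictured can be sketched as follows: project the center of mass onto the ground plane and test whether it falls inside the base of support, modeled here as a convex polygon of foot-contact points. The function names and the y-up axis convention are assumptions for illustration, not part of the capture software.

```python
# Sketch: balance state from a center-of-mass position and a base of
# support. The base of support is modeled as a convex polygon of
# foot-contact points on the ground plane. Names are illustrative.

def project_to_ground(com):
    """Drop the vertical component (y is 'up' in this sketch)."""
    x, y, z = com
    return (x, z)

def inside_convex_polygon(pt, verts):
    """True if the 2D point pt lies inside the convex polygon verts."""
    px, py = pt
    sign = 0
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        # Cross product tells us which side of edge i the point is on;
        # inside a convex polygon, the side is the same for every edge.
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True

def is_balanced(com, support_polygon):
    return inside_convex_polygon(project_to_ground(com), support_polygon)
```

This matches the physical definition used later in the comments: a posture is balanced when the center of mass is above the base of support, regardless of whether it lies inside the body.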

It’s possible our efforts won’t produce anything meaningful. Our dojo has invested resources into many projects that didn’t yield beneficial results. But through those failures, many valuable efforts and innovations have blossomed. We are cautiously optimistic and hope this effort will have a significant positive impact on how we think about perceiving, analyzing, and harnessing the power of movement in the martial arts.

Join The Conversation

Do you have any ideas to share or experience in this realm? Will this develop into a powerful and transformative technology, or will it just be a fad?

We are excited to hear your thoughts. We are just beginning to experiment in this new world of motion capture, virtual reality, and augmented reality. A constructive dialogue and information exchange can only help us illuminate our path as we navigate the unknown.




Categories: Training Technology

There are 7 comments

  1. Rose Jones

    This is super exciting, Josh! I love that the dojo is exploring this technology. I look forward to future posts and updates and related activities on this.

    My mind immediately goes to the possibility of a “Matrix”-style downloading of the captured motions into someone’s brain. I can imagine some new initiate in some not-so-far-flung future exclaim, “I know aikido now!” 🙂

    Anyway, just wanted to share my enthusiasm with you about this direction of the dojo!

    1. Josh Gold

Thanks for sharing, Rose. We are excited too. The Matrix stuff is a ways off, but there are lots of interesting applications that will be available sooner than one would expect. Hopefully after our next capture session, we’ll be able to use VR as a display / debrief tool. It will be interesting to integrate that experience and see what we can do. The system can also transmit motion data in real time into a VR environment. We may want to use you as a test subject at some point soon as well 🙂

  2. Alex Barrera

    Fantastic stuff Josh! (You should see my envious face!). I’ve thought about this same approach for a while. I even had a couple of startups help me add sensors to capture the movements.

The conclusions I took were twofold. First, practitioners will probably extract the most value from individual assessment, but this only works if you can use the motion sensors quickly and regularly, and I have to wonder how feasible that is right now. Second, some of the movements are so small and so hard to track, even with motion sensors (where calibration and resolution are big issues indeed), that while impressive, it’s not of much use for most students. Incorporating them into your own movements is hard. More seasoned practitioners would definitely get value out of this and other time-capsule approaches, but I wonder if replicating frozen movements from a sensei is wise. With time, those forms will become obsolete and might create more issues for unaware students.

    Looking forward to seeing how you guys are testing it!

    1. Josh Gold


      All good points. Our initial take (after only 2 capture sessions) is as follows:

-It won’t be a good instructional tool for learning technique forms, except for very seasoned practitioners. It will, however, be a good instructional tool for visualizing concepts like manipulating center of gravity to influence an uke’s balance state. We just did a test to determine how accurate the system’s center of gravity calculations are. It’s pretty much dead on, which by itself should have significant value.

-It will likely have some value as an individual assessment tool. You will be able to see if things are in or out of alignment (hips relative to shoulders, if someone is leaning, etc.). The system we have takes about 15 minutes to prep and calibrate, so it’s fairly efficient in that respect.

      -There’s a lot of stuff the system can’t capture, like muscle engagement and super fine movements. It’s not a perfect solution but it does some things really well and really efficiently.

We have some other ideas for non-instructional purposes as well that we can run by you soon. Do you know anyone who works at Unity or Unreal, btw?

      Thanks for the feedback and we look forward to sharing more soon!

      1. Alex Barrera

Thanks for sharing this, man! Agreed 100%! Hmm, no one comes to mind right now but will keep it in mind. Let me know in private what you are looking for. I might know someone who can connect you with them 🙂

  3. David Buhner


    There is no requirement, in a balanced posture, for the center of mass/gravity to be inside the body, only above the base of support. Consequently, you will have to make allowance for the center of mass to move from where you have it pictured to where it really is when body posture changes. Additionally, since nage must fall down if his/her center of mass goes outside his/her base of support, it is really uke’s center of mass that is of greatest interest.

    The center of mass of the human body is only found in the lower abdomen near the umbilicus, as in your diagram, when the posture is completely upright (or recumbent), i.e., when the body is aligned along a single plane with the arms at the sides, and arms and legs completely straight (or particular anatomic variations on this posture that preserve that point, such as splaying the legs to the sides). For instance, if I bend over and touch my toes with my knees slightly bent, and without falling, such that my center of mass must be over my base of support, my center of mass will be outside my body in this posture.

    Motion capture data is very interesting and, if taken on faith that the motions it represents reflect accurate aikido practice, then it could be useful for helping beginning students (of a similar body size and shape) learn the rote movements. However, I do not see how it can be used to support a scientific understanding of how (or even if) aikido works in the absence of taking the motion data and pairing it with a model that allows the exact calculation of uke’s (or, at least, an idealized version of an uke appropriate to a mathematical model) center of mass. But, the model alone, when performing a static analysis of uke’s posture at the moments prior to (during the blending with uke’s motion), at the time of (uke’s attainment of an unstable equilibrium), and after (when gravity takes over and uke falls), unbalancing uke will show that uke’s center of mass is/has moved progressively away from a balanced position over uke’s base of support to an unbalanced posture where the center of mass is outside the base of support, and uke must fall by the laws of physics (or else aikido does not make scientific sense). Motion capture data would be more useful from this perspective for showing that aikido practitioners are actually doing what the model says will work.

Such a model, which I have been developing for some time now, represents a happy conjunction for me, a physician with a specialty in muscle, bone, and joint disorders, who is also an MS mathematician, and a shodan with 14 years of experience in aikido. Should you be interested in trying to combine your motion capture data with my model, I would be quite interested.

    David Buhner MD MS

    1. Josh Gold

      Hi David,

      Yes, we understand and think of balance as you outline here. The center of mass can certainly be outside of the body and we do know that the balance defining relationship is between the base of support and the center of mass.

      Our motion capture system requires detailed measurements of a user’s body. Those measurements are then used by the mo-cap software’s algorithms to calculate (as closely as possible) where the user’s center of mass is, based on the known attributes of the user’s body and the data that’s being generated by the sensor array (15-20 sensors with gyroscopes, accelerometers, etc.).

      So far it seems as though the algorithms are fairly accurate. However, if you’ve done specialized work in this area, we would welcome any input and guidance you can provide.

I’d love to talk about your model and get your perspective on how we can improve our system. Please email us at dojo@ikazuchi.com if you’d like to set up a time to talk with me and/or our lead technical team member.

