CN103019372A - Calculating metabolic equivalence with a computing device - Google Patents

Calculating metabolic equivalence with a computing device

Info

Publication number
CN103019372A
CN103019372A CN2012104027127A CN201210402712A
Authority
CN
China
Prior art keywords
joint
value
joints
frame
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012104027127A
Other languages
Chinese (zh)
Other versions
CN103019372B (en)
Inventor
E·巴苏姆
R·福布斯
T·莱瓦德
T·杰肯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Publication of CN103019372A publication Critical patent/CN103019372A/en
Application granted granted Critical
Publication of CN103019372B publication Critical patent/CN103019372B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/822 Strategy games; Role-playing games
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1087 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
    • A63F2300/1093 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera using visible light
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/64 Methods for processing data by generating or executing the game program for computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6607 Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for estimating a metabolic equivalent of task for use with a computing device is provided herein. The method includes receiving input from a capture device of a user; and tracking a position of each of the plurality of joints of the user. The method further includes determining a distance traveled for each of the plurality of joints between a first frame and a second frame; and calculating a horizontal velocity and a vertical velocity for each of the plurality of joints based on the distance traveled and an elapsed time between the first and second frames. The method further includes estimating a value for the metabolic equivalent of task using a metabolic equation including a component for the horizontal velocity and a component for the vertical velocity for each of the plurality of joints; and outputting the value for display.

Description

Calculating a metabolic equivalent with a computing device
Technical field
The present invention relates to calculating a metabolic equivalent with a computing device.
Background
Computer game systems have evolved to include activities that place higher demands on physical exertion, especially computer game systems equipped with natural input devices such as depth cameras. As a result, gaming has become a form of exercise for some users. However, it is difficult for these users to gauge accurately how demanding the exercise is, such as how many calories a particular workout burned. One prior approach can be found in computer games designed to simulate running. A running game displays to the user a metabolic equivalent of task (MET) for the running activity, which can be used to determine the calories burned. However, MET models are task-specific, so the running game is built on a running-specific MET model that applies only to running. A drawback of task-specific approaches is that many motions in computer games are "undescribed activities" for which no MET model exists. Moreover, custom-designing MET models for these activities would be prohibitively expensive and would consume substantial development time. For this reason, most computer games cannot provide MET values or calorie-output estimates for such undescribed activities, discouraging nascent computer-based exercise.
Summary of the invention
Provided herein is a method for estimating a metabolic equivalent of task for use with a computing device. The method includes receiving input from a capture device of a user, and tracking a position of each of a plurality of joints of the user. The method further includes determining a distance traveled by each of the plurality of joints between a first frame and a second frame, and calculating a horizontal velocity and a vertical velocity for each of the plurality of joints based on the distance traveled and an elapsed time between the first and second frames. The method further includes estimating a value for the metabolic equivalent of task using a metabolic equation that includes a component for the horizontal velocity and a component for the vertical velocity of each of the plurality of joints, and outputting the value for display.
This Summary is provided to introduce in simplified form a selection of concepts that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Description of drawings
Fig. 1 is a perspective view of an example gaming system viewing an observed scene in accordance with an embodiment of the present disclosure.
Fig. 2A schematically shows a human target in the observed scene being modeled with example skeletal data by the gaming system of Fig. 1.
Fig. 2B schematically shows example skeletal data tracked over time by the gaming system of Fig. 1.
Fig. 3 shows a flowchart of an example embodiment of a method for estimating a metabolic equivalent of task using the gaming system of Fig. 1.
Fig. 4 shows a flowchart of an example embodiment of a method for weighting each of a plurality of joints of a user using the gaming system of Fig. 1.
Fig. 5 is a schematic diagram of a computing system that may serve as the gaming system of Fig. 1.
Detailed description
Aspects of the present invention will now be described by example and with reference to the illustrated embodiments listed above.
Fig. 1 shows an example 3D interaction space 100 in which a user 10 is located. Fig. 1 also shows a gaming system 12 that enables the user 10 to interact with a video game. The gaming system 12 may be used to play a variety of different games, play one or more different media types, and/or control or manipulate non-game applications and/or operating systems. The gaming system 12 may include a gaming console 14 and a display device 16, which may be used to present game visuals to game players. The gaming system 12 is a type of computing device, the details of which are described with reference to Fig. 5.
Returning to Fig. 1, the 3D interaction space 100 may also include a capture device 18, such as a camera, which may be coupled to the gaming system 12. The capture device 18 may, for example, be a depth camera that observes the 3D interaction space 100 by capturing images. Accordingly, the capture device 18 may be used to estimate the metabolic equivalent of task (MET) of the user 10 by tracking the position of each of a plurality of joints of the user 10. For example, the capture device 18 may capture images of the user, which may be used to determine a delta distance for each joint and to calculate a velocity for each joint. Further, one or more joints may be weighted differently from other joints to account for various factors, such as gravity, the user's anatomy, the user's fitness, degrees of freedom, and so on. In this way, the user 10 can interact with the gaming system, and a value of MET can be estimated based on the user's actual motion (or lack of motion).
Conventional approaches for estimating MET are based on a specific activity or task. One conventional approach involves determining the specific activity the user is engaged in, and outputting to the user an average MET value for that activity. This approach does not estimate the MET value based on what the user is actually doing. Rather, it operates on the assumption that a given activity always has the same MET value regardless of the intensity with which the user performs it, so the MET output will be wrong for most users. Furthermore, this approach is not suited to undescribed activities (e.g., non-exercise activities) for which no average MET value is available.
Another conventional approach estimates the MET value based on a detected speed of a segment of the user's body (e.g., the user's leg). However, this approach also assumes a specific activity and estimates the MET value with an activity-specific MET model based on that activity. This approach is therefore also activity-specific, and not general enough to estimate MET values for undescribed activities.
The present disclosure addresses at least some of these challenges by estimating the MET value of the user regardless of the type of activity the user 10 performs. Because the MET value is estimated without being tied to a specific activity, a more accurate MET value can be estimated that reflects the intensity with which the user 10 interacts with the gaming system 12. In other words, the user's MET value is estimated without the gaming system 12 assuming or determining what activity the user is performing. Accordingly, the user 10 may perform virtually any activity, and the gaming system 12 can estimate the MET value by tracking the user's motion in real time.
For example, the user 10 may interact with the gaming system 12 by playing a wizard game, a fighting game, a boxing game, a dancing game, a racing game, and so on, and the user's MET may be estimated without assuming that the user is casting spells, fighting enemies, boxing, dancing, or racing. Further, the user 10 may interact with the gaming system 12 by watching a movie, interacting with various applications, and the like. Such examples may be referred to herein as undescribed activities. Even undescribed activities that may be associated with lower intensity can be used to estimate MET values, because the methods described herein estimate MET without assuming a specific activity.
Fig. 2A shows a simplified processing pipeline 26 in which a game player 10 in the 3D interaction space 100 is modeled as a virtual skeleton 36 that can serve as a control input for controlling various aspects of a game, application, and/or operating system. Fig. 2A shows four stages of the processing pipeline 26: image collection 28, depth mapping 30, skeletal modeling 34, and game output 40. It will be appreciated that a processing pipeline may include additional and/or alternative steps relative to those depicted in Fig. 2A without departing from the scope of the present invention.
During image collection 28, the game player 10 and the rest of the 3D interaction space 100 may be imaged by a capture device such as depth camera 18. In particular, the depth camera is used to track the position of each of a plurality of joints of the user (e.g., game player 10). During image collection 28, the depth camera may determine, for each pixel, the depth of a surface in the observed scene relative to the depth camera. Virtually any depth-finding technology may be used without departing from the scope of this disclosure. Example depth-finding technologies are discussed in more detail with reference to Fig. 5.
During depth mapping 30, the depth information determined for each pixel may be used to generate a depth map 32. Such a depth map may take the form of virtually any suitable data structure, including but not limited to a depth image buffer that includes a depth value for each pixel of the observed scene. In Fig. 2A, the depth map 32 is schematically illustrated as a pixelated grid of the silhouette of game player 10. This illustration is for simplicity of understanding, not technical accuracy. It is to be understood that a depth map generally includes depth information for all pixels, not only those pixels that image the game player 10. Depth mapping may be performed by the depth camera or by the computing system, or the depth camera and the computing system may cooperate to perform depth mapping.
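As a concrete illustration, the depth image buffer described above can be sketched as a simple row-major array holding one depth value per pixel. The class name, resolution, and default depth below are illustrative assumptions, not details from the patent.

```python
class DepthMap:
    """A depth value (e.g., in meters) for each pixel of the observed scene."""

    def __init__(self, width, height, default_depth=0.0):
        self.width = width
        self.height = height
        # Row-major buffer: one float per pixel.
        self.buffer = [default_depth] * (width * height)

    def set_depth(self, x, y, depth):
        self.buffer[y * self.width + x] = depth

    def depth_at(self, x, y):
        return self.buffer[y * self.width + x]


# A 320x240 depth image with a surface 2.5 m from the camera at pixel (160, 120).
dm = DepthMap(320, 240)
dm.set_depth(160, 120, 2.5)
```

In a real pipeline the camera (or a driver) would fill the whole buffer each frame; here only one pixel is set for illustration.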
During skeletal modeling 34, one or more depth images (e.g., depth map 32) of a 3D interaction space including a computer user (e.g., game player 10) are obtained from the depth camera. A virtual skeleton 36 may be derived from the depth map 32 to provide a machine-readable representation of game player 10. In other words, virtual skeleton 36 is derived from depth map 32 to model game player 10. The virtual skeleton 36 may be derived from the depth map in any suitable manner. In some embodiments, one or more skeletal fitting algorithms may be applied to the depth map. For example, a previously trained collection of models may be used to label each pixel from the depth map as belonging to a particular body part, and a virtual skeleton 36 may be fit to the labeled body parts. The present invention is compatible with virtually any skeletal modeling technique. In some embodiments, machine learning may be used to derive the virtual skeleton from the depth image.
The virtual skeleton provides a machine-readable representation of game player 10 as observed by depth camera 18. The virtual skeleton 36 may include a plurality of joints, each joint corresponding to a portion of the game player. A virtual skeleton in accordance with the present invention may include virtually any number of joints, and each joint may be associated with virtually any number of parameters (such as three-dimensional joint position, joint rotation, body posture of the corresponding body part (such as hand open, hand closed, etc.), etc.). It is to be understood that a virtual skeleton may take the form of a data structure that includes one or more parameters for each of a plurality of skeletal joints (for example, a joint matrix including an x position, a y position, a z position, and a rotation for each joint). In some embodiments, other types of virtual skeletons may be used (e.g., a wireframe, a set of shape primitives, etc.).
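The joint-matrix data structure described above might be sketched as follows: each joint carries an x, y, z position plus a rotation, keyed by a joint name. The joint names and field layout are illustrative assumptions, not the patent's actual format.

```python
from dataclasses import dataclass


@dataclass
class Joint:
    x: float         # side-to-side position, from the camera's perspective
    y: float         # up/down position
    z: float         # depth (toward/away from the camera)
    rotation: float  # joint rotation, e.g. in degrees


class VirtualSkeleton:
    """Machine-readable representation of a tracked player."""

    def __init__(self):
        self.joints = {}  # joint name -> Joint

    def set_joint(self, name, joint):
        self.joints[name] = joint


skeleton = VirtualSkeleton()
skeleton.set_joint("left_wrist", Joint(x=0.4, y=1.1, z=2.0, rotation=0.0))
```

One such skeleton per frame yields the per-joint position history that the later MET calculations consume.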
Skeletal modeling may be performed by the computing system. In particular, skeletal modeling may be used to derive a virtual skeleton from observation information (e.g., depth map 32) received from one or more sensors (e.g., depth camera 18 of Fig. 1). In some embodiments, the computing system may include a dedicated skeletal modeling module that can be used by a variety of different applications. In this way, each application need not independently interpret depth maps as machine-readable skeletons. Instead, each application can receive virtual skeletons in an anticipated data format from the dedicated skeletal modeling module (e.g., via an application programming interface, or API). In some embodiments, the dedicated skeletal modeling module may be a remote modeler accessible over a network. In some embodiments, an application may perform its own skeletal modeling.
As introduced above, a value of MET can be estimated by tracking the motion of the game player. It will be appreciated that the modeling techniques described above can provide machine-readable information over time, including the three-dimensional position of each of a plurality of skeletal joints representing the game player. Such data can be used, at least in part, to estimate the user's MET, as described in more detail below.
Fig. 2B shows an example of tracking the motion of a game player using the skeletal modeling techniques. As described above, the game player may be modeled as a virtual skeleton 36. As shown, the virtual skeleton 36 (and thus game player 10) may move over time, such that, for example, the three-dimensional position of one or more joints of the virtual skeleton changes between a first frame and a second frame. It will be appreciated that a change of position may involve a change of one or more parameters. For example, a joint may change position in the x direction but not in the y and/or z direction. Virtually any change of position is possible without departing from the scope of this disclosure.
As shown in Fig. 2B, a first frame 50 may be followed by a second frame 52, and each frame may include the virtual skeleton 36 that models game player 10 in the 3D interaction space 100 as described above. Further, skeletal modeling may continue for any suitable period of time, for example through an nth frame 54. It will be appreciated that the "second frame" as used herein (and likewise the nth frame) may refer to a frame occurring after the first frame, where "after" may be any suitable period of time.
The first frame 50 may include the virtual skeleton 36 with its left wrist joint 56 determined to have the 3D position X1, Y1, Z1, as shown. Further, the second frame 52 may include the virtual skeleton 36 with its left wrist joint 56 determined to have the 3D position X2, Y2, Z2, as shown. Because at least one positional parameter of the wrist joint 56 has changed between the first frame 50 and the second frame 52, the distance traveled by joint 56 can be determined. In other words, the distance may be determined based on the change in position of the wrist joint 56 between the first and second frames. As shown, the distance may be determined, for example, using formula 58. Further, the velocity of joint 56 may be calculated, for example, according to formula 60. As shown, formula 60 may be based on the determined distance and the time elapsed between the first frame 50 and the second frame 52. Methods for determining the distance traveled by a joint, calculating the velocity of that motion, and performing the other calculations that lead to estimating a value of MET are described below.
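The wrist-joint example can be sketched in code. Formula 58 is assumed here to be the standard 3D Euclidean distance and formula 60 the distance divided by elapsed time; the positions and the 1/30 s frame interval below are made-up values for illustration.

```python
import math


def joint_distance(p1, p2):
    """Distance traveled by a joint between two frames, given (x, y, z) tuples."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(p1, p2)))


def joint_speed(p1, p2, elapsed_seconds):
    """Speed of the joint: distance traveled divided by elapsed time."""
    return joint_distance(p1, p2) / elapsed_seconds


# Left wrist at (X1, Y1, Z1) in the first frame and (X2, Y2, Z2) in the
# second frame, positions in meters, with an assumed 1/30 s frame interval.
frame1_wrist = (0.40, 1.10, 2.00)
frame2_wrist = (0.46, 1.18, 2.00)
speed = joint_speed(frame1_wrist, frame2_wrist, elapsed_seconds=1 / 30)
# 0.1 m traveled in 1/30 s -> 3.0 m/s
```

The same two functions apply unchanged to any tracked joint and to any pair of frames, including frames separated by more than one capture interval.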
Returning to Fig. 2A, during game output 40, the body movements of game player 10 recognized via skeletal modeling 34 are used to control aspects of a game, application, or operating system. Further, such interaction can be measured by estimating a MET value from the detected positions of each of the plurality of joints of the virtual skeleton representing game player 10. In the illustrated scenario, game player 10 is playing a fantasy-themed game and has performed a spell-casting gesture. The motion associated with performing the spell-casting gesture can be tracked so that a value of MET can be estimated. As shown, the estimated value of MET (indicated generally at 44) may be displayed on display device 16.
Fig. 3 shows a flowchart of an example embodiment of a method 300 for estimating MET using the gaming system of Fig. 1. Method 300 may be implemented using the hardware and software components described herein.
At 302, method 300 includes receiving input from a capture device. For example, the capture device may be depth camera 18, and the input may include a sequence of images of the user captured over time. Thus, the image sequence of the user may be, for example, a sequence of depth images of the user captured over time.
At 304, method 300 includes tracking the position of each of a plurality of joints of the user. For example, the position of each of the plurality of joints of the user may be determined from the depth information for each joint captured in the sequence of depth images of the user. Further, the position of each of the plurality of joints may be determined via the skeletal tracking pipeline described above. In this way, the three-dimensional (3D) position of each tracked joint can be determined in every frame (i.e., with each captured depth image). For example, the 3D position may be determined using a Cartesian coordinate system including x, y, and z directions.
At 306, method 300 includes determining a delta position of each of the plurality of joints between a first frame and a second frame. A delta position as referred to herein may be defined as a change in position. Accordingly, the delta position may be used to determine the distance traveled by each of the plurality of joints. For example, the delta position may be based on the change in position of each of the plurality of tracked joints between the first and second frames. Further, as referred to herein, the first frame may be, for example, a first captured image, and the second frame may be a second captured image. It will be appreciated that the second frame may be any frame occurring after the first frame. For example, the second frame may be the frame immediately following the first frame. As another example, the second frame may be a frame captured some period of time after the first frame was captured. The period may be any suitable period of time, such as a millisecond, a second, a minute, more than a minute, or any other period of time. It will be appreciated that the period may be a threshold period. For example, the threshold period may correspond to any of the aforementioned example periods. Further, the threshold period may be, for example, a period predetermined to be a sufficient amount of time for estimating MET. Such a threshold period may correspond to the elapsed period defined by the first and second frames. In this way, the delta distance of each of the plurality of joints of the user is determined during the elapsed period between the first and second frames.
At 308, method 300 includes calculating a horizontal velocity and a vertical velocity for each of the plurality of joints. For example, the horizontal velocity and the vertical velocity may be based on the delta position of each of the plurality of joints between the first and second frames and the elapsed time. For example, the horizontal velocity may equal the horizontal delta position of each of the plurality of joints divided by the elapsed time. As another example, the vertical velocity may equal the vertical delta position of each of the plurality of joints divided by the elapsed time.
Calculating the horizontal velocity may include one or more velocity components in a horizontal plane. For example, calculating the horizontal velocity may include a velocity in the x direction and a velocity in the z direction, where the x and z directions are from the perspective of the depth camera. Thus, the x direction may represent a side-to-side direction relative to the depth camera, and the z direction may represent a depth direction (toward/away) relative to the depth camera.
Similarly, calculating the vertical velocity may include one or more velocity components in a vertical plane perpendicular to the horizontal plane. For example, calculating the vertical velocity may include a velocity in the y direction, where the y direction is from the perspective of the depth camera. Thus, the y direction may represent an up/down direction relative to the depth camera.
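A sketch of this decomposition, using the camera-relative axes described above (x side-to-side, z depth, y up/down). The patent text does not specify how the x and z components are combined into a single horizontal speed; a Euclidean combination is one plausible reading and is used here as an assumption.

```python
import math


def horizontal_velocity(p1, p2, elapsed_seconds):
    # Horizontal speed combines the x (side-to-side) and z (toward/away)
    # components of the joint's movement, from the depth camera's perspective.
    dx = p2[0] - p1[0]
    dz = p2[2] - p1[2]
    return math.sqrt(dx * dx + dz * dz) / elapsed_seconds


def vertical_velocity(p1, p2, elapsed_seconds):
    # Vertical speed is the magnitude of the y (up/down) component.
    dy = p2[1] - p1[1]
    return abs(dy) / elapsed_seconds


# Joint positions (x, y, z) in meters in two frames 0.1 s apart (assumed values).
p1 = (0.0, 1.0, 2.0)
p2 = (0.3, 1.2, 2.4)
v_h = horizontal_velocity(p1, p2, 0.1)  # 0.5 m horizontally -> 5.0 m/s
v_v = vertical_velocity(p1, p2, 0.1)    # 0.2 m vertically   -> 2.0 m/s
```

Summing these per-joint horizontal and vertical speeds across all tracked joints yields the inputs to the metabolic equation described at step 310.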
310, method 300 comprises: use the metabolism equation to estimate the value of task metabolic equivalent.For example, the metabolism equation can comprise horizontal component and vertical component.The horizontal and vertical component can be respectively horizontal velocity and the vertical speed sum in each joint in a plurality of joints.In addition, the horizontal and vertical component can additionally comprise respectively level variable and vertical variable.For example, the metabolism equation can be ACSM (ACSM) the metabolism equation for calculation task metabolic equivalent (MET):
Equation 1: MET = VO2 / 3.5
where VO2 represents oxygen consumption, which is calculated by the following equation:
Equation 2: VO2 = component_h + component_v + R
where "R" is a constant equal to 3.5, "component_h" is the horizontal component, and "component_v" is the vertical component. The horizontal and vertical components may be defined by expanding Equation 2 into the following equation:
Equation 3: VO2 = K_h(speed_h) + K_v(speed_v) + R
where "speed_h" represents the horizontal velocity and "speed_v" represents the vertical velocity, which may be calculated, as described above, from the delta positions of the plurality of joints of the user between the first frame and the second frame and the elapsed time between the first and second frames.
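Equations 1 through 3 can be sketched directly; the `k_h` and `k_v` values passed in the test below are illustrative placeholders rather than trained values from the disclosure.

```python
def estimate_met(speed_h, speed_v, k_h, k_v, r=3.5):
    """ACSM-style MET estimate following Equations 1-3 above.

    speed_h / speed_v: summed horizontal/vertical joint speeds;
    k_h / k_v: the horizontal and vertical variables (values supplied
    by the caller; any specific numbers are assumptions).
    """
    vo2 = k_h * speed_h + k_v * speed_v + r   # Equation 3
    return vo2 / 3.5                          # Equation 1
```

With zero motion, VO2 reduces to the resting constant R = 3.5 and the estimate is 1.0 MET, which matches the conventional definition of one MET as resting metabolism.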
Further, Equation 3 includes "K_h" and "K_v", which may represent the horizontal variable and the vertical variable, respectively. The values of "K_h" and "K_v" may be determined by training the variables to reflect a wide range of MET activities. For example, "K_h" and "K_v" may each be an average of one or more low MET values, one or more mid MET values, and one or more high MET values. For example, a low MET value may correspond to a user interacting with gaming system 12 by sitting on a couch and watching a movie (e.g., a MET value less than 3.0). Further, a mid MET value may correspond to a user interacting with gaming system 12 by controlling, with motion, a race car avatar of the user in a racing game (e.g., a MET value between 3.0 and 6.0). Further, a high MET value may correspond to a user interacting with gaming system 12 by controlling, with motion, a player avatar of the user in a dance game (e.g., a MET value greater than 6.0). In this way, low to high MET values may correspond to, for example, low-intensity to high-intensity activities.
Traditional approaches for estimating a MET value may use a specific horizontal variable and a specific vertical variable corresponding to a specific activity. The present disclosure contemplates a wide range of horizontal and vertical variables, such that the method for estimating MET described herein may be applied to virtually any activity.
It will be appreciated that the values of "K_h" and "K_v" may be determined by analyzing experimental data, where the experimental data includes values spanning a wide range of MET values. As another example, the values of "K_h" and "K_v" may be adapted to a specific user. For example, the user may be prompted to perform certain poses, motions, activities, etc., and the data from the associated skeletal tracking may be used to determine that user's specific "K_h" and "K_v". In such a scenario, user identification techniques may also be employed. For example, facial recognition techniques may be employed to identify a specific user, such that a profile associated with that user, including that user's specific "K_h" and "K_v" values, may be accessed to estimate MET. It will be appreciated that other user identification techniques may be employed without departing from the scope of this disclosure.
Returning to FIG. 3, at 312, method 300 comprises outputting the value of the MET for display. For example, display 16 may include a graphical user interface displaying the value of the MET of the user. For example, the value of the MET may be a final value representing the value of the MET after the user has finished interacting with gaming system 12. Further, the value of the MET may be an instantaneous value representing a snapshot and/or an accumulated value of the MET while the user interacts with gaming system 12.
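The final, instantaneous, and accumulated values described above could be tracked with a small helper such as the hypothetical `MetTracker` below; its interface is an assumption for illustration, not a structure from the disclosure.

```python
class MetTracker:
    """Tracks instantaneous, accumulated, and final MET values."""

    def __init__(self):
        self.instantaneous = 0.0
        self._total = 0.0
        self._samples = 0

    def update(self, met_value):
        # The latest per-frame-pair estimate serves as the snapshot value.
        self.instantaneous = met_value
        self._total += met_value
        self._samples += 1

    @property
    def accumulated(self):
        return self._total

    @property
    def final(self):
        # Averaging over the session is one possible "final value".
        return self._total / self._samples if self._samples else 0.0
```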
It will be appreciated that method 300 is provided by way of example and is thus not meant to be limiting. Therefore, it will be appreciated that method 300 may be performed in any suitable order without departing from the scope of this disclosure. Further, method 300 may include additional and/or alternative steps than those shown in FIG. 3. For example, method 300 may include weighting each joint of the plurality of joints of the user to achieve a more accurate estimation of MET.
For example, FIG. 4 shows a flow diagram of an example method 400 for weighting each joint of a plurality of joints of a user. As introduced above, weighting each joint of the plurality of joints of the user results in a more accurate MET estimate than not weighting the joints. It will be appreciated that method 400 may include one or more steps already described with reference to FIG. 3. Further, it will be appreciated that such steps may be performed similarly to, or with slight variations from, the description above. Further, one or more steps of method 400 may be performed after determining the delta position of each joint of the plurality of joints between the first frame and the second frame as described above (e.g., step 306). Method 400 may be implemented with the hardware and software components described herein.
At 402, method 400 comprises assigning a weight to each joint of the plurality of joints of the user. It will be appreciated that each joint may be assigned a particular weight. Further, it will be appreciated that the particular weight of one joint may differ from the particular weight of another joint. A particular weight may be assigned to each joint of the plurality of joints of the user according to virtually any weighting scheme. For example, a joint having a greater degree of freedom than another joint may be assigned a higher weight value. As a non-limiting example, a shoulder joint may have a higher weight value than a knee joint. Because the shoulder joint is a ball-and-socket joint (rotational freedom), the shoulder joint has a greater degree of freedom than the knee joint, which resembles a hinge joint (limited to flexion and extension).
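One possible weighting scheme along these lines is sketched below; the specific weight values are hypothetical, chosen only so that ball-and-socket joints outweigh hinge-like joints, and the disclosure permits essentially any scheme.

```python
# Illustrative weights: joints with more degrees of freedom get higher
# weights. These particular numbers are assumptions, not values from
# the disclosure.
JOINT_WEIGHTS = {
    "shoulder_left": 1.5,   # ball-and-socket: rotational freedom
    "shoulder_right": 1.5,
    "hip_left": 1.4,        # also ball-and-socket
    "hip_right": 1.4,
    "knee_left": 1.0,       # hinge-like: flexion/extension only
    "knee_right": 1.0,
}

def weight_of(joint):
    # Default weight for joints not explicitly listed.
    return JOINT_WEIGHTS.get(joint, 1.0)
```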
At 404, method 400 comprises partitioning the weighted joints of the plurality of joints of the user into one or more body segments. For example, some of the weighted joints of the user may be assigned to an upper body segment. For example, the upper body segment may include one or more of the user's weighted joints between a head position and a hip position. Thus, the upper body segment may include a head joint, a left hip joint, a right hip joint, and other joints anatomically positioned between the head joint and the left and right hip joints. For example, one or more joints associated with the user's right arm and left arm may be assigned to the upper body segment. As used herein, anatomical positioning may refer to a joint's position relative to the user's human anatomy. Thus, even though a hand joint may be physically positioned vertically below a hip joint (e.g., when the user bends at the hip joint to touch a foot joint), the hand joint is still assigned to the upper body segment, because the hand joint is anatomically positioned between the hip joint and the head joint. In other words, the hand joint is anatomically above the hip joint and below the head joint, so the hand joint belongs to the upper body segment.
Similarly, other weighted joints of the plurality of joints of the user may be assigned to another body segment, such as a lower body segment. For example, the lower body segment may include one or more of the user's weighted joints between the hip position and a foot position. Thus, the lower body segment may include knee joints, foot joints, and other joints anatomically positioned between the hip position and the foot position. For example, one or more joints associated with the user's right leg and left leg may be assigned to the lower body segment. Thus, even though a left leg joint may be physically positioned vertically above a hip joint (e.g., when the user performs a high kick, such as a roundhouse kick), the left leg joint is still assigned to the lower body segment, because the left leg joint is anatomically between the hip joint and the foot joint. In other words, the left leg joint is anatomically below the hip joint and above the foot joint, so the leg joint belongs to the lower body segment.
It will be appreciated that each of the weighted joints may be assigned to only one body segment. In other words, a single joint may not be assigned to more than one body segment. In this way, each weighted joint of the plurality of joints of the user may be analyzed without duplicating a particular weighted joint in two body segments. Further, because the hip position is described above as the boundary between the upper body segment and the lower body segment, it will be appreciated that one or more of the hip joints may be assigned to either the upper body segment or the lower body segment. For example, both the left hip joint and the right hip joint may be assigned to the upper body segment, or both the left hip joint and the right hip joint may be assigned to the lower body segment. Alternatively, one hip joint may be assigned to the upper body segment and the other hip joint may be assigned to the lower body segment.
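A minimal sketch of the anatomical segment assignment described above; the joint names and the decision to place both hip joints in the lower body segment are illustrative choices, not requirements of the disclosure.

```python
# Segment membership follows the anatomical ordering of joints from head
# to foot, not a joint's physical height in any given frame.
UPPER_BODY = {"head", "spine", "shoulder_left", "shoulder_right",
              "elbow_left", "elbow_right", "hand_left", "hand_right"}
LOWER_BODY = {"hip_left", "hip_right", "knee_left", "knee_right",
              "foot_left", "foot_right"}

def segment_of(joint):
    """Each joint belongs to exactly one body segment."""
    if joint in UPPER_BODY:
        return "UB"
    if joint in LOWER_BODY:
        return "LB"
    raise KeyError(f"unassigned joint: {joint}")
```

Note that `segment_of("hand_left")` returns `"UB"` even when the user bends down and the hand is physically below the hips, mirroring the anatomical rule in the text.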
Returning to FIG. 4, at 406, method 400 comprises calculating a weighted average horizontal velocity and a weighted average vertical velocity of the upper body segment. For example, the weighted average horizontal and vertical velocities of the upper body segment may be calculated by determining, similarly to the description above, the delta position between the first frame and the second frame of each weighted joint in the upper body, and the elapsed time between the first frame and the second frame. For example, the weighted average velocities of the upper body segment may be calculated according to Equations 4 and 5 provided below. It will be appreciated that Equations 4 and 5 are provided as non-limiting examples.
Equation 4: UB speed_h = ( Σ_i weight_i × speed_h,i ) / total weight
Equation 5: UB speed_v = ( Σ_i weight_i × speed_v,i ) / total weight
As shown in Equations 4 and 5, "UB" indicates the upper body segment and the index "i" denotes a particular joint. Further, the total weight may be, for example, the sum of the weights applied to each joint of the plurality of joints assigned to the upper body segment.
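One reading of the weighted average described above (each joint's speed scaled by its weight, summed, and divided by the total weight of the segment) can be sketched as:

```python
def weighted_avg_speeds(joints):
    """Weighted average horizontal/vertical speed of one body segment.

    joints: list of (weight, speed_h, speed_v) tuples for the joints
    assigned to the segment. The same function serves the upper and
    lower body segments.
    """
    total_weight = sum(w for w, _, _ in joints)
    avg_h = sum(w * sh for w, sh, _ in joints) / total_weight
    avg_v = sum(w * sv for w, _, sv in joints) / total_weight
    return avg_h, avg_v
```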
At 408, method 400 comprises calculating a weighted average horizontal velocity and a weighted average vertical velocity of the lower body segment. For example, the weighted average horizontal and vertical velocities of the lower body segment may be calculated by determining, similarly to the description above, the delta position between the first frame and the second frame of each weighted joint in the lower body, and the elapsed time between the first frame and the second frame. For example, the weighted average velocities of the lower body segment may be calculated according to Equations 6 and 7 provided below. It will be appreciated that Equations 6 and 7 are provided as non-limiting examples.
Equation 6: LB speed_h = ( Σ_i weight_i × speed_h,i ) / total weight
Equation 7: LB speed_v = ( Σ_i weight_i × speed_v,i ) / total weight
As shown in Equations 6 and 7, "LB" indicates the lower body segment and the index "i" denotes a particular joint. Further, the total weight may be, for example, the sum of the weights applied to each joint of the plurality of joints assigned to the lower body segment.
At 410, method 400 comprises applying a lower body factor to the weighted average horizontal and vertical velocities of the lower body segment. For example, the lower body segment and the upper body segment may have different effects on MET. Thus, the lower body factor may be applied to the weighted average horizontal and vertical velocities of the lower body segment to account for this difference in effect on MET.
For example, the lower body segment may have a greater effect on MET because the lower body segment carries the weight of the upper body segment. Additionally and/or alternatively, the lower body segment may have a greater effect on MET because the lower body segment is subject to friction with the ground during activity. In this way, even if a joint within the lower body segment has a velocity similar to that of a joint within the upper body segment, the joint within the lower body segment may, for example, have a greater effect on the MET value. The inventors have recognized herein that a lower body factor between a value of 2 and a value of 3 accounts for this difference in effect. However, it will be appreciated that other lower body factors are possible, and/or an upper body factor may be applied to the upper body segment velocities, without departing from the scope of this disclosure.
At 412, method 400 comprises estimating a value of the metabolic equivalent of task (MET) using a metabolism equation. For example, the metabolism equation may be based on the weighted average velocities of the upper body and the weighted average velocities of the lower body, where the weighted average velocities of the lower body include the applied lower body factor. For example, MET may be calculated according to Equation 1 described above, and the value of oxygen consumption (VO2) may be determined using Equations 8, 9, and 10 provided below. It will be appreciated that Equations 8, 9, and 10 are provided as non-limiting examples.
Equation 8: body speed_h = UB speed_h + LB factor × LB speed_h
Equation 9: body speed_v = UB speed_v + LB factor × LB speed_v
Equation 10: VO2 = K_h(body speed_h) + K_v(body speed_v) + R
As shown in Equations 8 and 9, "UB" indicates the upper body segment and "LB" indicates the lower body segment. Further, it will be appreciated that Equations 8, 9, and 10 include variables similar to some of those included in the equations described previously and, for the sake of brevity, will not be described further.
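Equations 8 through 10, combined with Equation 1, could be sketched as follows. The lower body factor of 2.5 is an arbitrary midpoint of the 2-to-3 range discussed above, and `k_h`/`k_v` remain illustrative placeholders.

```python
def estimate_met_weighted(ub_h, ub_v, lb_h, lb_v, k_h, k_v,
                          lb_factor=2.5, r=3.5):
    """Segment-weighted MET estimate per Equations 8-10 above.

    ub_* / lb_*: weighted average speeds of the upper/lower body
    segments; lb_factor is an assumed midpoint of the 2-3 range.
    """
    body_h = ub_h + lb_factor * lb_h          # Equation 8
    body_v = ub_v + lb_factor * lb_v          # Equation 9
    vo2 = k_h * body_h + k_v * body_v + r     # Equation 10
    return vo2 / 3.5                          # Equation 1
```

With identical segment speeds, the lower body contributes more to the estimate than the upper body, reflecting the discussion of the lower body factor above.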
At 414, method 400 comprises outputting the calculated MET value for display. For example, display 16 may include a graphical user interface displaying the value of the MET of the user. The value of the MET may be a final value, an instantaneous value, a snapshot value, and/or an accumulated value of the MET, as described above.
It will be appreciated that method 400 is provided by way of example and is thus not meant to be limiting. Therefore, it will be appreciated that method 400 may be performed in any suitable order without departing from the scope of this disclosure. Further, method 400 may include additional or alternative steps than those shown in FIG. 4. For example, method 400 may include calculating calories burned based on the calculated MET value. Further, the calculated MET value may be used to determine other body parameters that may assess an aspect of the user's fitness while interacting with the computing system.
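As an illustration of the calorie calculation mentioned above, one common convention treats 1 MET as approximately 1 kcal per kilogram of body mass per hour; this particular formula is an assumption, not taken from the disclosure.

```python
def calories_burned(met, body_mass_kg, duration_hours):
    """Approximate energy expenditure from a MET value.

    Assumes the common convention 1 MET ~= 1 kcal/kg/hour, which is an
    illustrative choice rather than a formula from the disclosure.
    """
    return met * body_mass_kg * duration_hours
```

For example, a 70 kg user sustaining 6.0 MET for half an hour would burn roughly 210 kcal under this convention.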
As another example, method 400 may comprise adjusting the weighting factors for a specific user. In some embodiments, adjusting the weighting factors for a specific user may include user identification techniques. For example, the user may be identified by facial recognition techniques and/or by another user identification technique.
In this way, a value of MET may be estimated for a user interacting with a computing device (such as gaming system 12). Further, because the user's motion (or lack of motion) is tracked, the value of MET may be estimated more accurately, without having to assume a specific activity that the user is actually performing.
In some embodiments, the above-described methods and processes may be tied to a computing system including one or more computers. In particular, the methods and processes described herein may be implemented as a computer application, computer service, computer API, computer library, and/or other computer program product.
FIG. 5 schematically shows a non-limiting computing system 70 that may perform one or more of the above-described methods and processes. Computing system 70 is shown in simplified form. It is to be understood that virtually any computer architecture may be used without departing from the scope of this disclosure. In different embodiments, computing system 70 may take the form of a mainframe computer, server computer, desktop computer, laptop computer, tablet computer, home entertainment computer, network computing device, mobile computing device, mobile communication device, gaming device, etc.
Computing system 70 includes a processor 72 and memory 74. Computing system 70 may optionally include a display subsystem 76, a communication subsystem 78, a sensor subsystem 80, and/or other components not shown in FIG. 5. Computing system 70 may also optionally include user input devices such as, for example, keyboards, mice, game controllers, cameras, microphones, and/or touch screens.
Processor 72 may include one or more physical devices configured to execute one or more instructions. For example, the processor may be configured to execute one or more instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result.
The processor may include one or more processors configured to execute software instructions. Additionally or alternatively, the processor may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of the processor may be single-core or multi-core, and the programs executed thereon may be configured for parallel or distributed processing. The processor may optionally include individual components distributed across two or more devices, which may be remotely located and/or configured for coordinated processing. One or more aspects of the processor may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
Memory 74 may include one or more physical, non-transitory devices configured to hold data and/or instructions executable by the processor to implement the methods and processes described herein. When such methods and processes are implemented, the state of memory 74 may be transformed (e.g., to hold different data).
Memory 74 may include removable media and/or built-in devices. Memory 74 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-ray disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory devices (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others. Memory 74 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, processor 72 and memory 74 may be integrated into one or more common devices, such as an application-specific integrated circuit or a system on a chip.
FIG. 5 also shows an aspect of the memory in the form of removable computer-readable storage media 82, which may be used to store and/or transfer data and/or instructions executable to implement the methods and processes described herein. Removable computer-readable storage media 82 may take the form of CDs, DVDs, HD-DVDs, Blu-ray discs, EEPROMs, and/or floppy disks, among others.
It is to be appreciated that memory 74 includes one or more physical, non-transitory devices. In contrast, in some embodiments aspects of the instructions described herein may be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
The terms "module", "program", and "engine" may be used to describe an aspect of computing system 70 that is implemented to perform one or more particular functions. In some cases, such a module, program, or engine may be instantiated via processor 72 executing instructions held by memory 74. It is to be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms "module", "program", and "engine" are meant to encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
It is to be appreciated that a "service", as used herein, may be an application program executable across multiple user sessions and available to one or more system components, programs, and/or other services. In some implementations, a service may run on a server in response to a request from a client.
When included, display subsystem 76 may be used to present a visual representation of data held by memory 74. As the herein-described methods and processes change the data held by the memory, and thus transform the state of the memory, the state of display subsystem 76 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 76 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with processor 72 and/or memory 74 in a shared enclosure, or such display devices may be peripheral display devices.
When included, communication subsystem 78 may be configured to communicatively couple computing system 70 with one or more other computing devices. Communication subsystem 78 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, a wireless local area network, a wired local area network, a wireless wide area network, a wired wide area network, etc. In some embodiments, the communication subsystem may allow computing system 70 to send messages to and/or receive messages from other devices via a network such as the Internet.
Sensor subsystem 80 may include one or more sensors configured to sense one or more human subjects, as described above. For example, sensor subsystem 80 may include one or more image sensors, motion sensors such as accelerometers, touch pads, touch screens, and/or any other suitable sensors. Therefore, sensor subsystem 80 may be configured to provide observation information to processor 72, for example. As described above, observation information such as image data, motion sensor data, and/or any other suitable sensor data may be used to perform such tasks as determining a position of each joint of a plurality of joints of one or more human subjects.
In some embodiments, sensor subsystem 80 may include a depth camera 84 (e.g., depth camera 18 of FIG. 1). Depth camera 84 may include, for example, left and right cameras of a stereoscopic vision system. Time-resolved images from both cameras may be registered to each other and combined to yield depth-resolved video.
In other embodiments, depth camera 84 may be a structured light depth camera configured to project structured infrared illumination comprising numerous discrete features (e.g., lines or dots). Depth camera 84 may be configured to image the structured illumination reflected from a scene onto which the structured illumination is projected. Based on the spacings between adjacent features in the various regions of the imaged scene, a depth image of the scene may be constructed.
In other embodiments, depth camera 84 may be a time-of-flight camera configured to project pulsed infrared illumination onto the scene. The depth camera may include two cameras configured to detect the pulsed illumination reflected from the scene. Both cameras may include an electronic shutter synchronized to the pulsed illumination, but the integration times for the two cameras may differ, such that a pixel-resolved time of flight of the pulsed illumination, from the source to the scene and then to the cameras, is discernible from the relative amounts of light received in corresponding pixels of the two cameras.
In some embodiments, sensor subsystem 80 may include a visible light camera 86. Virtually any type of digital camera technology may be used without departing from the scope of this disclosure. As a non-limiting example, visible light camera 86 may include a charge-coupled device image sensor.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.

Claims (10)

1. A method for estimating a metabolic equivalent of task with a computing device (14), the method comprising:
receiving input from a capture device (18), the input comprising a sequence of images of a user (10) captured over time;
tracking, from the image sequence, a position of each joint of a plurality of joints (36) of the user;
determining a distance change of each joint of the plurality of joints between a first frame (50) and a second frame (52) based on the tracked position of each joint of the plurality of joints between the first frame (50) and the second frame (52);
calculating a horizontal velocity and a vertical velocity of each joint of the plurality of joints based on the distance of each joint of the plurality of joints and an elapsed time between the first frame and the second frame;
estimating a value of the metabolic equivalent of task with a metabolism equation, the metabolism equation comprising a horizontal component and a vertical component, the horizontal and vertical components based on the calculated vertical and horizontal velocities of each joint of the plurality of joints; and
outputting the value for display.
2. the method for claim 1 is characterized in that, also comprises: according to weighting scheme each joint in described a plurality of joints is weighted.
3. the method for claim 1 is characterized in that, described capture device is that depth camera and wherein said image sequence are range image sequences.
4. the method for claim 1 is characterized in that, described metabolism equation comprises the value of oxygen expenditure, and the value of described oxygen expenditure comprises level variable and vertical variable, and this horizontal and vertical variable is based on large-scale metabolic equivalent value.
5. the method for claim 1 is characterized in that, described horizontal velocity comprises the speed of x direction and the speed of z direction, and described vertical speed comprises the speed of y direction.
6. A computing device comprising memory holding instructions executable by a processor to:
capture a plurality of images of a user (10) with a depth camera (18) associated with the computing device (14);
track, over time, a position of each joint of a plurality of joints (36) of the user;
determine a position change of each joint of the plurality of joints between successive first (50) and second (52) frames, the position change determined from the tracked positions of each joint of the plurality of joints;
calculate a velocity of each joint of the plurality of joints based on the position change over an elapsed time between the first and second frames; and
output a value of a metabolic equivalent of task, the value output from a metabolism equation, the metabolism equation comprising a horizontal velocity component and a vertical velocity component of each joint of the plurality of joints.
7. The device of claim 6, wherein the computing device is a gaming device and the output value is output to a display of the computing device.
8. The device of claim 6, wherein the value is a total value over a threshold time period, and wherein the total value is a sum of metabolic equivalents of task calculated between each frame and a successive frame within the threshold time period.
9. The device of claim 6, further comprising instructions to weight each joint of the plurality of joints according to a weighting scheme, the weighting scheme comprising assigning each joint of the plurality of joints to an upper body segment or a lower body segment, wherein the lower body segment has a higher weighting value than the upper body segment.
10. The device of claim 9, wherein the metabolism equation is
MET = VO2 / 3.5
wherein VO2 is a variable of oxygen consumption, wherein the oxygen consumption is calculated using an oxygen consumption equation, the oxygen consumption equation comprising:
VO2 = K_h(body speed_h) + K_v(body speed_v) + 3.5.
CN201210402712.7A 2011-10-21 2012-10-19 Calculating metabolic equivalence with a computing device Expired - Fee Related CN103019372B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/279,124 US20130102387A1 (en) 2011-10-21 2011-10-21 Calculating metabolic equivalence with a computing device
US13/279,124 2011-10-21

Publications (2)

Publication Number Publication Date
CN103019372A true CN103019372A (en) 2013-04-03
CN103019372B CN103019372B (en) 2015-11-25

Family

ID=47968056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210402712.7A Expired - Fee Related CN103019372B (en) 2011-10-21 2012-10-19 Computing equipment is utilized to calculate metabolic equivalent

Country Status (3)

Country Link
US (1) US20130102387A1 (en)
CN (1) CN103019372B (en)
WO (1) WO2013059751A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10463278B2 (en) 2012-01-18 2019-11-05 Nike, Inc. Activity and inactivity monitoring
US9724597B2 (en) 2012-06-04 2017-08-08 Sony Interactive Entertainment Inc. Multi-image interactive gaming device
US9442100B2 (en) 2013-12-18 2016-09-13 Medibotics Llc Caloric intake measuring system using spectroscopic and 3D imaging analysis
US9536449B2 (en) 2013-05-23 2017-01-03 Medibotics Llc Smart watch and food utensil for monitoring food consumption
US10314492B2 (en) 2013-05-23 2019-06-11 Medibotics Llc Wearable spectroscopic sensor to measure food consumption based on interaction between light and the human body
US9042596B2 (en) 2012-06-14 2015-05-26 Medibotics Llc Willpower watch (TM)—a wearable food consumption monitor
US9254099B2 (en) 2013-05-23 2016-02-09 Medibotics Llc Smart watch and food-imaging member for monitoring food consumption
FI124974B (en) * 2013-03-15 2015-04-15 Laturi Corp Oy Determination of daily energy reserve
US9529385B2 (en) 2013-05-23 2016-12-27 Medibotics Llc Smart watch and human-to-computer interface for monitoring food consumption

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH114820A (en) * 1997-06-18 1999-01-12 Ee D K:Kk Health caring device
JP3621338B2 (en) * 2000-10-05 2005-02-16 ヤーマン株式会社 Game and body movement measuring device
US20040043367A1 (en) * 2002-08-30 2004-03-04 Aileen Chou Dancing machine having stepped stages
JPWO2009004816A1 (en) * 2007-07-03 2010-08-26 新世代株式会社 Foot input type brain training apparatus and computer program
EP2175949A1 (en) * 2007-07-27 2010-04-21 Empire Of Sports Developments Ltd. Controlling avatar performance and simulating metabolism using virtual metabolism parameters
US8425295B2 (en) * 2010-08-17 2013-04-23 Paul Angelos BALLAS System and method for rating intensity of video games

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020019258A1 (en) * 2000-05-31 2002-02-14 Kim Gerard Jounghyun Methods and apparatus of displaying and evaluating motion data in a motion game apparatus
US20070111858A1 (en) * 2001-03-08 2007-05-17 Dugan Brian M Systems and methods for using a video game to achieve an exercise objective
CN101068605A (en) * 2004-12-03 2007-11-07 新世代株式会社 Boxing game processing method, display control method, position detection method, cursor control method, energy consumption calculating method and exercise system
CN101983389A (en) * 2008-10-27 2011-03-02 松下电器产业株式会社 Moving body detection method and moving body detection device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI579021B (en) * 2016-02-04 2017-04-21 財團法人工業技術研究院 Analyzing system and method for evaluating calories consumption by detecting the intensity of wireless signal
CN107376304A (en) * 2017-08-04 2017-11-24 广东乐心医疗电子股份有限公司 Equivalent step number detection method and device and wearable device comprising same
CN107376304B (en) * 2017-08-04 2019-07-19 广东乐心医疗电子股份有限公司 Equivalent step number detection method and device, wearable device comprising same and mobile terminal

Also Published As

Publication number Publication date
CN103019372B (en) 2015-11-25
WO2013059751A1 (en) 2013-04-25
US20130102387A1 (en) 2013-04-25

Similar Documents

Publication Publication Date Title
CN103019372B (en) Calculating metabolic equivalence with a computing device
CN102129551B (en) Gesture detection based on joint skipping
Bideau et al. Real handball goalkeeper vs. virtual handball thrower
US20130077820A1 (en) Machine learning gesture detection
CN105073210B (en) User body angle, curvature and average extremity position extraction using depth images
CN105765488B (en) Motion control of a virtual environment
CN102622774B (en) Living room movie creation
CN102207771A (en) Intention deduction of users participating in motion capture system
CN105229666A (en) Motion analysis in 3D rendering
US11819734B2 (en) Video-based motion counting and analysis systems and methods for virtual fitness application
US20220203168A1 (en) Systems and Methods for Enhancing Exercise Instruction, Tracking and Motivation
CN102184009A (en) Hand position post processing refinement in tracking system
CN102129293A (en) Tracking groups of users in motion capture system
CN105209136A (en) Center of mass state vector for analyzing user motion in 3D images
US20140307927A1 (en) Tracking program and method
CN105228709A (en) Signal analysis for repetition detection and analysis
CN102918489A (en) Limiting avatar gesture display
CN102270276A (en) Caloric burn determination from body movement
CN102129292A (en) Recognizing user intent in motion capture system
CN102681657A (en) Interactive content creation
CN102918518A (en) Cloud-based personal trait profile data
Zhang et al. KaraKter: An autonomously interacting Karate Kumite character for VR-based training and research
Cordeiro et al. ARZombie: A mobile augmented reality game with multimodal interaction
Halilaj et al. American society of biomechanics early career achievement award 2020: Toward portable and modular biomechanics labs: How video and IMU fusion will change gait analysis
CN102929507A (en) Motion controlled list scrolling

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: MICROSOFT TECHNOLOGY LICENSING LLC

Free format text: FORMER OWNER: MICROSOFT CORP.

Effective date: 20150727

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20150727

Address after: Washington State

Applicant after: Microsoft Technology Licensing, LLC

Address before: Washington State

Applicant before: Microsoft Corp.

C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151125

Termination date: 20191019