{"id":243,"date":"2021-05-17T07:57:20","date_gmt":"2021-05-17T07:57:20","guid":{"rendered":"https:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/?p=243"},"modified":"2023-05-02T09:42:28","modified_gmt":"2023-05-02T09:42:28","slug":"kinect-camera-sensor-based-pointing-motion-and-magnification-function-aretl-france-2021","status":"publish","type":"post","link":"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/?p=243","title":{"rendered":"Kinect camera sensor-based pointing motion and magnification function: ARETL, France 2021"},"content":{"rendered":"<div>The sudden shift of schools to distance learning due to the COVID-19 pandemic has also impacted hospitalized children with severe medical, physical, or neurological conditions. We developed a system that enables children to attend classes using an alter-ego robot that lets them virtually experience an actual classroom environment. However, because existing cameras and monitors cannot automatically adjust and magnify the pivotal information that teachers present, a further challenge is to draw children&#8217;s attention and increase or maintain their concentration and focus during remote classes. Thus, we proposed and developed a system that automatically magnifies the view around where a pointing motion is detected. In this study, we described the design of our proposed camera-sensor-based pointing motion recognition and magnification system using Kinect, which consists of an RGB (color) camera, an IR (infrared) camera, an emitter-based depth sensor, and a four-microphone array. We also conducted multiple experiments to extract pointing motions and to evaluate the system&#8217;s ability to detect them using different parameters, such as magnification rate, change-of-position tracking rate, and reset-position rate, which were subsequently used for Semantic Differential analysis. The optimal conditions and algorithms (e.g., upper body limb positioning, camera-blackboard-body distance, height measurements, color dimensions, and user-monitor distance) were also investigated for system stability.<\/div>\n<div><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-250\" src=\"https:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-009-1024x499.png\" alt=\"\" width=\"640\" height=\"312\" srcset=\"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-009-1024x499.png 1024w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-009-300x146.png 300w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-009-768x374.png 768w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-009.png 1139w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/> <img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-249\" src=\"https:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-008-1024x499.png\" alt=\"\" width=\"640\" height=\"312\" srcset=\"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-008-1024x499.png 1024w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-008-300x146.png 300w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-008-768x374.png 768w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-008.png 1139w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/> <img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-248\" src=\"https:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-005.png\" alt=\"\" width=\"782\" 
height=\"555\" srcset=\"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-005.png 782w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-005-300x213.png 300w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-005-768x545.png 768w\" sizes=\"auto, (max-width: 782px) 100vw, 782px\" \/> <img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-246\" src=\"https:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-002-1024x494.png\" alt=\"\" width=\"640\" height=\"309\" srcset=\"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-002-1024x494.png 1024w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-002-300x145.png 300w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-002-768x371.png 768w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-002-825x400.png 825w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-002.png 1042w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/div>\n<div>\n<div><span data-offset-key=\"770et-0-0\">Our study entitled &#8220;<\/span><span data-offset-key=\"770et-0-1\">Kinect camera sensor-based pointing motion recognition with magnification function to increase concentration among children during remote classes&#8221; <\/span><span data-offset-key=\"770et-0-2\">was recently presented at the<\/span><strong> 4th International Conference on Advanced Research in Education, Teaching and Learning<\/strong><span data-offset-key=\"770et-0-4\"> (<\/span><a class=\"_3Bkfb _1lsz7\" href=\"https:\/\/aretl.org\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-hook=\"linkViewer\"><span data-offset-key=\"770et-1-0\">ARETL<\/span><\/a><span data-offset-key=\"770et-2-0\">), which was held in <\/span><strong>Paris, France<\/strong><span data-offset-key=\"770et-2-2\"> from <\/span><span data-offset-key=\"770et-2-3\"><strong>14th to 16th May 2021<\/strong><\/span><span data-offset-key=\"770et-2-4\">. <\/span><span data-offset-key=\"770et-2-5\">This engaging education and learning conference covered the latest developments in the field and addressed common issues, including special education, education policy and leadership, learning psychology, assessment and evaluation, machine learning, inductive reasoning, and 120 other topics.<\/span><\/div>\n<div><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-244\" src=\"https:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-000-copy-1024x512.png\" alt=\"\" width=\"640\" height=\"320\" srcset=\"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-000-copy-1024x512.png 1024w, 
http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-000-copy-300x150.png 300w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-000-copy-768x384.png 768w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-000-copy-1536x768.png 1536w, http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/wp-content\/uploads\/2023\/05\/GoToWebinar-000-copy.png 1905w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The sudden shift of schools to distance learning due to the COVID-19 pandemic has also impacted hospitalized children with severe medical, physical, or neurological conditions. We developed a system that enables children to attend classes using an alter-ego robot that lets them virtually experience an actual classroom environment. However, due to the inability of existing cameras and monitors to automatically [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":251,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[],"class_list":["post-243","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news"],"_links":{"self":[{"href":"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/index.php?rest_route=\/wp\/v2\/posts\/243","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=243"}],"version-history":[{"count":1,"href":"http:\/\/treasur
e.ed.ehime-u.ac.jp\/CIE-EN\/index.php?rest_route=\/wp\/v2\/posts\/243\/revisions"}],"predecessor-version":[{"id":252,"href":"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/index.php?rest_route=\/wp\/v2\/posts\/243\/revisions\/252"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/index.php?rest_route=\/wp\/v2\/media\/251"}],"wp:attachment":[{"href":"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=243"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=243"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/treasure.ed.ehime-u.ac.jp\/CIE-EN\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=243"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}