Controlling wheelchairs by body motions: A learning framework for the adaptive remapping of space


Authors: Tauseef Gulrez, Alessandro Tognetti

In Proc. Cogsys 2008

Controlling Wheelchairs by Body Motions: A Learning Framework for the Adaptive Remapping of Space

Tauseef Gulrez, Member IEEE, Alessandro Tognetti, Alon Fishbach, Santiago Acosta, Christopher Scharver, Danilo De Rossi and Ferdinando A. Mussa-Ivaldi, Member IEEE

Abstract—Learning to operate a vehicle is generally accomplished by forming a new cognitive map between body motions and extrapersonal space. Here, we consider the challenge of remapping movement-to-space representations in survivors of spinal cord injury, for the control of powered wheelchairs. Our goal is to facilitate this remapping by developing interfaces between residual body motions and navigational commands that exploit the degrees of freedom that disabled individuals are most capable of coordinating. We present a new framework for allowing spinal cord injured persons to control powered wheelchairs through signals derived from their residual mobility. The main novelty of this approach lies in substituting the more common joystick controllers of powered wheelchairs with a sensor shirt. This allows the whole upper body of the user to operate as an adaptive joystick. Considerations about learning and risk have led us to develop a safe testing environment in 3D virtual reality. A Personal Augmented Reality Immersive System (PARIS) allows us to analyse learning skills and provide users with adequate training to control a simulated wheelchair through the signals generated by body motions in a safe environment. We provide a description of the basic theory, of the development phases and of the operation of the complete system. We also present preliminary results illustrating the processing of the data and supporting the feasibility of this approach.

Index Terms—Motor learning, space remapping, wearable sensors, assistive technology, virtual reality
I. INTRODUCTION

Robotics may be exploited to assist people in a great variety of activities [1]–[4]. Elderly and disabled people, in particular, are likely to benefit from these new technologies [3], [5], [6]. As their mobility becomes limited, they gain a greater degree of independence through the use of assistive devices such as powered wheelchairs (Fig. 1). However, loss of coordination and cognitive impairments can make it difficult or impossible to execute steering maneuvers, with consequent fatigue, frustration, reduced social life and risks of dangerous accidents. One way to overcome these difficulties is to equip the chair with an intelligent controller that shares the planning and execution of actions with the user. This cooperation between human and machine can be compared to the cooperation between a horse and its rider: the rider navigates (global planning, ride control), while the horse avoids obstacles and makes path adjustments (fine motion control). A different approach, pursued here, is to allow the users to control the vehicle's motions at all levels. This second approach requires establishing a rapid communication between human and machine.

T. Gulrez is with the Robotics Lab of the Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Feinberg School of Medicine, Northwestern University, Chicago, USA, and also with the Virtual and Interactive Simulations of Reality (VISOR) Labs, Department of Computing, Division of Information and Communication Sciences, Macquarie University, Sydney, Australia. A. Tognetti and D. De Rossi are with the Inter-Departmental Research Center "E. Piaggio", University of Pisa, Italy. F. A. Mussa-Ivaldi, A. Fishbach, S. Acosta and C. Scharver are with the Robotics Lab of the Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Feinberg School of Medicine, Northwestern University, Chicago, USA.
However, one of the most challenging tasks does not concern the technology of communication and control, but rather the reorganization of movements and the development of new cognitive maps of motor space. This is something most of us are familiar with, as we learn to drive a car. At first, the controls are foreign objects that require constant focus and attention. But, as we become expert drivers, the car becomes an extension of our bodies and the acts that we perform on the steering wheel and the pedals are directly and seamlessly mapped into their spatial and temporal consequences. Here, we plan to achieve the same result in disabled populations through the interaction of human and machine learning. This paper describes the basic platform, which includes wearable sensing, interfacing and VR technologies.

A typical powered wheelchair [7]–[12] is operated by two rear differential wheels and two front castor wheels. Two high-torque motors drive the rear wheels. Most powered wheelchairs come with a programmable joystick to drive and operate them. The joystick controller has four directional commands, i.e. forward, backward, left and right, and a zero position to halt the wheelchair. The velocity of the wheelchair increases incrementally up to a fixed limit by holding the joystick continuously in the desired direction. While the joystick is a simple control device, it still represents a fixed interface that the user must learn to operate by mapping joystick motions into wheelchair motions. Accidents are often caused by insufficient training on the handling of the joystick and on the proper control procedures.

Fig. 1. System concept. The virtual environment provides a safe training platform, where the control parameters are set according to the motor skills of the users. Once a satisfactory behavior is reached, the control parameters will be applied to an actual powered wheelchair.
Moreover, wheelchair training itself is a dangerous process, especially for spinal cord injured users.

The need to apply learning technologies to control assistive devices is highlighted by a recent survey on the use of powered wheelchairs [13]. The authors interviewed 200 clinicians in spinal cord injury facilities, rehabilitation centers and geriatric care facilities. They asked about wheelchair users' feedback on the performance of different control interfaces, such as the joystick, sip-and-puff systems and head-and-chin devices. The study showed that about 10 percent of the disabled users "find it extremely difficult or impossible" to use the wheelchair, while 40 percent of the users report difficulties in steering and maneuvering tasks. It is noteworthy that these figures refer to users that received specific (although conventional) training for controlling the wheelchair.

In light of these difficulties, our approach is based on two characteristic features of the sensorimotor system:
• Its ability to adapt to changes in the environment, and
• Its ability to exploit a large number of degrees of freedom for carrying out a variety of tasks.

We exploit these features to design a body-machine interface that will allow disabled users to operate a variety of devices [4], [14]–[18]. In particular, we aim at creating a learning and design framework for spinal cord injured people with complete injuries at the C5-6 cervical level, or incomplete injuries in the cervical cord. These injuries result in tetraplegia with limited residual body motions.

II. OVERALL METHODOLOGY AND SYSTEM DESCRIPTION

This article describes a novel method for controlling a powered wheelchair by spinal cord injured people. A wearable sensor shirt, adequate to detect upper body (wrist, elbow and shoulder) movements, is custom built to extract the residual body movements of the users.
A combination of virtual reality and signal processing methods is used for developing an effective body/device interface and for carrying out training procedures. The proposed system architecture is sketched in Fig. 1 and is based on four modules:

• Sensor Shirt: The sensor shirt is composed of 52 piezoresistive sensors that detect local fabric deformation caused by the movement of the user's upper body (i.e. wrist, elbow and shoulder; see Section III).
• Data Acquisition and Signal Processing: Signals acquired from the sensors are processed and the control parameters for the wheelchair are determined (Section III-A).
• Virtual Reality: Preliminary system tests are performed on a virtual reality simulator of a powered wheelchair (Section IV). The patient is trained to execute maneuvers of variable complexity, such as navigating a desert scene, moving among obstacles and following other moving objects.
• Human Control: Users are immersed in the PARIS system. After an initial calibration (Section V) they begin practising the control of the simulated wheelchair. The signals generated by the shirt are transformed into command variables, which are integrated and combined with a head tracker (Flock of Birds, Ascension Technology [19]–[21]) to generate the current viewpoint from the simulated wheelchair.

Fig. 2. Sensor shirt, front view. Fig. 3. Sensor shirt, back view.

III. SENSOR SHIRT

The sensors of the shirt (Figures 2, 3 and 4) are made of a conductive elastomer (CE) material (a commercial product provided by Wacker LTD [22]) printed on a Lycra/cotton fabric previously covered by an adhesive mask. CE composites show piezoresistive properties when a deformation is applied [23].
CE materials can be applied to fabric or to other flexible substrates, they can be employed as strain sensors [24], [25], and they represent an excellent trade-off between transduction properties and the possibility of integration in textiles. Quasi-static and dynamic sensor characterization has been done in [24]. CE sensors exhibit some non-linear dynamical properties and relatively long relaxation times [26], [27], which should be taken into account in the control formulation.

Fig. 4. a) Sensors on the back portion of the shoulder. b) Sensors on the arm muscle, elbow and wrist joints. c) Front view of the sensors covering the front shoulders and limb area.

Fig. 5. a) Virtual reality scene of our floor plan, including corridors and small rooms, projected in the PARIS. b) The virtual wheelchair navigating through a doorway. c) The patient driving the virtual wheelchair on the marked path inside the virtual reality.

A. Signal Acquisition

The analog signals acquired from the sensors are amplified, digitized using a general-purpose 64-channel acquisition card, and processed in real time on a personal computer. Real-time signal processing was performed using the xPC-Target toolbox of Matlab. The output of the signal processing stage, i.e. the wheelchair controls, is sent to the virtual wheelchair described in the section below over a UDP connection.

IV. VIRTUAL WHEELCHAIR AND PERSONAL AUGMENTED REALITY IMMERSIVE SYSTEM (PARIS)

A. Software

The virtual wheelchair and its surrounding environment are designed using VRCO's CAVELib 3D graphics [28], the Coin3D graphics libraries and VRML models [29]. The whole program was simulated on a Personal Augmented Reality Immersive System (PARIS) as described in [30] and [31]. PARIS provides the user with a perspective view of the scene.
By wearing the specially designed goggles and head-tracker, users observe the scene from the viewpoint of the moving wheelchair. The goggles are actively switched and synchronized with the projection system to provide 3D stereo vision of the artificially generated images. The scene is updated asynchronously, based on external input from the sensor shirt and from a head-mounted 3D tracker. Fig. 5 shows a layout of the virtual environment, composed of several corridors, walls, obstacles and doorways. A path (represented by the white track of Fig. 5) is drawn on the floor as a guide for the subject to track during the learning phase (Section VI).

B. Virtual Wheelchair Kinematics Model

The wheelchair is modeled as a simple two-wheel vehicle [32], [33], as shown in Fig. 6. The non-holonomic kinematic equations of the wheelchair are:

$$\dot{x}(t) = v(t)\cos(\theta(t)), \quad \dot{y}(t) = v(t)\sin(\theta(t)), \quad \dot{\theta}(t) = \omega(t) \qquad (1)$$

The kinematic model of the wheelchair has two inputs, the translational velocity $v$ and the rotational velocity $\omega$. In discrete time, the wheelchair's laws of motion are:

$$x_{k+1} = x_k + v_k \cos(\theta_k)\,\Delta t, \quad y_{k+1} = y_k + v_k \sin(\theta_k)\,\Delta t, \quad \theta_{k+1} = \theta_k + \omega_k \Delta t \qquad (2)$$

The two control inputs, $u_1$ and $u_2$, are generated by processing algorithms applied to the shirt signals (Section V-A). The virtual wheelchair position update from point $(x_k, y_k)$ to point $(x_{k+1}, y_{k+1})$ (Fig. 7) is given by:

$$\Delta S = v_k \Delta t = u_1 V_f \Delta t \qquad (3)$$
$$\Delta \theta = \omega_k \Delta t = u_2 V_r \Delta t \qquad (4)$$

where $V_r$ and $V_f$ are the maximal rotational and forward velocities, respectively, and $\Delta t$ is the time interval between two consecutive frames of the PARIS.

Fig. 6. a) Virtual wheelchair kinematics model based upon a unicycle robot. b) The virtual wheelchair's 3D model created using the Coin3D [29] libraries.

Fig. 7. Virtual wheelchair's position update.
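Equations (2)-(4) amount to a single per-frame update of the unicycle model. A minimal Python sketch follows; the default values of $V_f$, $V_r$ and $\Delta t$ are illustrative assumptions, not parameters taken from the system described here.

```python
import math

def wheelchair_step(x, y, theta, u1, u2, Vf=1.0, Vr=1.0, dt=0.02):
    """One discrete-time update of the unicycle model (Eqs. 2-4).
    u1 scales the maximal forward velocity Vf (Eq. 3); u2 scales the
    maximal rotational velocity Vr (Eq. 4); dt is the frame interval."""
    v = u1 * Vf                          # v_k = u1 * Vf
    w = u2 * Vr                          # omega_k = u2 * Vr
    x_next = x + v * math.cos(theta) * dt
    y_next = y + v * math.sin(theta) * dt
    theta_next = theta + w * dt
    return x_next, y_next, theta_next
```

Iterating this step with the controls $u_1$, $u_2$ derived from the shirt signals yields the simulated trajectory rendered in PARIS.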
V. MAPPING BODY MOVEMENTS INTO VIRTUAL WHEELCHAIR (PARIS-BASED) CONTROLS

As a first stage in forming a map from body movements to wheelchair control, the nature of the control needs to be determined. The controls $u_1$ and $u_2$, for example, may specify the translational velocity $v$ and the rotational velocity $\omega$. Alternatively, the controls may specify the accelerations $\dot{v}$ and $\dot{\omega}$ instead of the velocities. Given that we would like to allow the patient to remain in a fixed and comfortable position as much as possible, we suggest mapping one of the controls to the linear acceleration of the wheelchair. This allows the patient to cruise at a fixed velocity while maintaining a resting posture. Unlike the translational velocity, the rotational velocity would typically be maintained at zero and would be set to nonzero values only for short periods of time. For that reason we decided to map the second control signal to the rotational velocity. In order to map the shirt signals to the controls, one needs to assign certain body movements to each control. This allows for great flexibility, as the vocabulary is determined by the users, based on their specific movement abilities and personal preferences. In our preliminary experiments, our subject decided to use the following body movements (see Fig. 9):

• Right elbow flexion was used to increase the value of $u_1$.
• Left elbow flexion was used to decrease the value of $u_1$.
• Right shoulder movement forward (scapular protraction) was used to increase the value of $u_2$.
• Left shoulder movement forward (scapular protraction) was used to decrease the value of $u_2$.

Fig. 8. (a) Right elbow signals extracted during user elbow flexion. (b) First principal component extracted from the raw signals, retaining 80% of the variance.
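The movement vocabulary above, combined with the drift-removal processing detailed in Section V-A (time derivative, dead-zone, positive rectification), can be sketched in Python. This is a minimal sketch: the dead-zone threshold and sample period are illustrative assumptions, and the function and key names are ours, not from the system itself.

```python
def rectified_derivative(prev, curr, dt, dead_zone=0.05):
    """One step of the drift-removal algorithm (Section V-A):
    differentiate, suppress small changes, keep only the rising part."""
    d = (curr - prev) / dt
    if abs(d) < dead_zone:
        d = 0.0
    return max(d, 0.0)  # positive rectification

def controls_from_pcs(h_re, h_le, h_rs, h_ls, prev, dt=0.02, dead_zone=0.05):
    """Map the first-PC signal of each joint to the two control inputs:
    elbows drive u1 (translational), shoulders drive u2 (rotational).
    `prev` holds the previous sample of each PC, keyed by joint."""
    r = lambda key, val: rectified_derivative(prev[key], val, dt, dead_zone)
    u1 = r("re", h_re) - r("le", h_le)   # right elbow increases, left decreases
    u2 = r("rs", h_rs) - r("ls", h_ls)   # right shoulder increases, left decreases
    return u1, u2
```

Because each processed signal is positive-rectified, a control value can only be pushed up by the movement assigned to it, which is what lets opposing movements (right vs. left elbow) increase and decrease the same control.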
Since the shirt contains several sensors at each joint, we examined the possibility of reducing the dimensionality of the shirt signals by applying Principal Component Analysis (PCA) [34]–[37] to the signals originating from the same joint. PCA was performed on data that were collected while the subject was moving his arms and shoulders in an uninstructed manner for a period of 10 seconds. We found that the first principal component (PC) of each joint captures 80%–90% of the same-joint sensor variance (see Fig. 8). Thus, for the above control scheme, which was chosen by the subject, we use four signal combinations: the first PC of the right shoulder ($h_{rs}$), the first PC of the left shoulder ($h_{ls}$), the first PC of the right elbow ($h_{re}$), and the first PC of the left elbow ($h_{le}$).

Fig. 9. Virtual wheelchair's position update w.r.t. the body movements.

A. Description of Algorithms

For removing possible drift and noise artifacts from the shirt signals, we used the following algorithm. The time derivative of each of the four PCs was calculated and a dead-zone was applied to each of them. The signals were then positive-rectified, as we are only interested in the rising part of each PC (see Fig. 10(a)). An example of the operation of the algorithm is shown in Fig. 11. The processed signals from the two elbows are then subtracted from each other to generate the translational acceleration, while the processed signals from the two shoulders are subtracted from each other to generate the rotational velocity (see Fig. 10(b)).

Fig. 10. (a) Rectified derivative algorithm. (b) Control scheme block diagram.

Fig. 11. Rectified derivative algorithm example.

VI. EXPERIMENTAL RESULTS

A. Experimental Setup

We present the results of a preliminary study conducted on a consenting adult participant, approved by Northwestern University's Institutional Review Board (IRB).
The participant wore a shirt embedded with 52 piezoresistive sensors, capable of detecting the wearer's residual mobility. With the shirt on, the subject was seated in front of the virtual reality system (discussed in Sections III and IV). The virtual scene depicted in Fig. 13(c) was modelled on a generic building floorplan, with multiple rooms, doors and corridors. A thick white line was marked on the floor and the subject was asked to navigate through the corridors and doorways following the white track (Fig. 13(d)). The subject was able to navigate in the environment with little practice, using arm and shoulder movements. Fig. 13(a) shows the trajectory of the virtual wheelchair (red line) as the subject attempted to track the pathway (blue line). The raw shirt signals, extracted principal components and relative controls for the trajectory experiment are shown in Fig. 12(c).

B. Trajectory Analysis

After each trial, the trajectories obtained from the participant's body movements were plotted against the prescribed path. It is important to note here that all of the trajectories were obtained using a uniform sensor shirt control scheme. We analyzed the trajectories using the following measures:
1) The distance travelled by the subject from the start point to the end point of the prescribed path, called $Dist$ (shown in Fig. 12(a,b)).
2) The error between the prescribed trajectory and the subject's actual trajectory, called $E_{diff}$, obtained by calculating the segmented area between both trajectories, from the start point to the end point of the prescribed path.

In the first trial, shown in Fig. 13(a), the subject began by familiarizing himself with the control strategy (through arm and shoulder movements) without following the prescribed trajectory. When the subject completed this initial step he started following the prescribed path (also shown in Fig. 13(a)).
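The two trajectory measures can be sketched as follows. $Dist$ is the cumulative length of the travelled polyline; for $E_{diff}$, a simplified stand-in is shown that computes the net (shoelace) area enclosed between the two paths, whereas the analysis above segments the area at each intersection point. The helper names are ours, not from the system.

```python
import math

def dist(traj):
    """Total path length of a trajectory given as (x, y) points."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(traj, traj[1:]))

def net_area_error(traj, prescribed):
    """Net area enclosed between the travelled and prescribed paths:
    shoelace formula on the closed polygon traj + reversed(prescribed).
    A simplification of the segmented, per-intersection E_diff measure."""
    poly = list(traj) + list(reversed(prescribed))
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))
    return abs(s) / 2.0
```

When the two paths cross, areas of opposite sign cancel in this net measure, which is why the segmented computation at the intersection points (Fig. 12(b)) is needed for the actual analysis.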
The participant moved in different directions in the virtual environment, as shown in Fig. 13(a), to learn the control criteria. As the subject spent more time moving in the virtual scene, the understanding of the control map improved and the subject was able to navigate the scene with greater accuracy. The data in Fig. 14(c,d) show a monotonic reduction in the subject's trajectory error from trial to trial. This is consistent with the hypothesis that, through practice, a subject is able to adapt to their environment using the novel control strategy of moving a wheelchair with shoulder and arm movements. The decreasing error trend is evident for both $E_{diff}$ and $Dist$ over all of the trials. The results in Fig. 14(a,b,c,d) plot the total distance traveled by the subject for each trial to reach the prescribed endpoint from the starting point. The drastic reduction in area and distance error between the first, second and third trials shows that the subject's initial mobility adjustments are significant. In subsequent trials, the subject's movement adjustments are more finely tuned as the subject's familiarity with the sensor shirt-wheelchair control plan improves, resulting in smaller distance errors.

VII. CONCLUSION

The combination of robotics technology, intelligent interfaces and virtual reality allows us to develop new approaches to the design of assistive devices.

Fig. 12. (a) Subject trajectory obtained from one of the experiments.
The black circles show the intersection points of both trajectories. (b) The subject trajectory is segmented area-wise (i.e. in portions, after considering the points of intersection), in order to calculate the error area. (c) Principal components of the shirt's raw signals responsible for producing the control signals necessary for navigating the wheelchair in the virtual reality environment.

Fig. 13. In (a) the red trajectory is obtained on the first day of the experiment, when the subject used the sensor shirt and was asked to move on the marked line (in blue). (b) All trajectories (in red) obtained from 22 preliminary experiments of the subject's travel on the marked line in the immersive virtual environment. In (c) the subject, sitting in front of the virtual reality display, is immersed in the 3D scene and navigating the wheelchair along the line marked on the floor, using the residual mobility captured by the sensors. In (d) the top view of the virtual environment is shown, with the line (blue) marked on the floor.

Fig. 14. (a) Distance reduction in the trajectories after every trial. (b) Area error measure ($E_{diff}$) between the prescribed and the subject's trajectory at each trial.

Our approach is based on the key concept that the burden of learning should not fall entirely on the human operator.
The field of machine learning has been developing rapidly in the past decade and is now sufficiently mature to design interfaces that are capable of learning the user as the user is learning to operate the device. In this case, "learning the user" means learning the degrees of freedom that the user is capable of moving most efficiently and mapping these degrees of freedom onto wheelchair controls. We should stress that such a mapping cannot be static, as in some cases the users will eventually improve with practice. In other, more unfortunate cases, a disability may be progressive and the mobility of the disabled user will gradually deteriorate. In both situations the body-machine interface must be able to adapt and to update the transformation from body-generated signals to efficient patterns of control. The final aim is to facilitate the formation of new and efficient maps from body motions to operational space.

ACKNOWLEDGMENT

This research was supported by NINDS 1R21HD053608, and by a grant of the Craig H. Neilsen Foundation. TG received support from the Macquarie University Post-graduate Research Fund.

REFERENCES

[1] F. A. Mussa-Ivaldi and J. L. Patton, "Robots can teach people how to move their arm," in Proceedings of the 2000 IEEE International Conference on Robotics and Automation (ICRA), 2000.
[2] F. A. Mussa-Ivaldi, "Real brains for real robots," Nature, vol. 408, pp. 305–306, 2000.
[3] R. G. Platts and M. H. Fraser, "Assistive technology in the rehabilitation of patients with high spinal cord injury lesions," Paraplegia, vol. 31, pp. 280–287, 1993.
[4] F. A. Mussa-Ivaldi, A. Fishbach, T. Gulrez, A. Tognetti, and D. De Rossi, "Remapping the residual motor space of spinal-cord injured patients for the control of assistive devices," in Neuroscience 2006, Atlanta, Georgia, USA, October 14–18, 2006.
[5] M. W. Post, F. W. van Asbeck, A. J. van Dijk, and A. J.
Schrijvers, "Spinal cord injury rehabilitation: 3 functional outcomes," Archives of Physical Medicine and Rehabilitation, vol. 87, pp. 59–64, 1997.
[6] C. C. Flynn and C. M. Clark, "Rehabilitation technology: Assessment practices in vocational agencies," Assistive Technology, vol. 7, pp. 111–118, 1995.
[7] R. A. Cooper, Wheelchair Selection and Configuration. Demos Medical Publishing LLC, 1998. ISBN 1888799188.
[8] R. A. Cooper, "Stability of a wheelchair controlled by a human pilot," IEEE Transactions on Rehabilitation Engineering, vol. 1, no. 4, pp. 193–206.
[9] E. Prassler, J. Scholz, and P. Fiorini, "A robotics wheelchair for crowded public environments," IEEE Robotics & Automation Magazine, vol. 8, no. 1, pp. 38–45.
[10] S. Levine, D. Bell, L. Jaros, R. Simpson, Y. Koren, and J. Borenstein, "The NavChair assistive wheelchair navigation system," IEEE Transactions on Rehabilitation Engineering, vol. 7, no. 4, pp. 443–451.
[11] H. A. Yanco, Assistive Technology and Artificial Intelligence. Springer Berlin / Heidelberg, 2004.
[12] R. Simpson, D. Poirot, and F. Baxter, "The Hephaestus smart wheelchair system," IEEE Transactions on Rehabilitation Engineering, vol. 10, no. 2, pp. 118–122.
[13] L. Fehr, W. E. Langbein, and S. B. Skaar, "Adequacy of power wheelchair control interfaces for persons with severe disabilities: a clinical survey," Journal of Rehabilitation Research and Development, vol. 37, pp. 353–360, 2000.
[14] A. Kubler, "Brain computer communication: unlocking the locked," Psychology Bulletin, vol. 127, pp. 358–375, 2001.
[15] F. A. Mussa-Ivaldi and S. A. Solla, "Neural primitives for motion control," IEEE Journal of Oceanic Engineering, vol. 29, pp. 640–650, 2004.
[16] K. K. Mosier, R. A. Scheidt, S. Acosta, and F. A. Mussa-Ivaldi, "Remapping hand movements in a novel geometrical environment," Neurophysiology, vol. 94, pp. 4362–4372, 2005.
[17] J. P.
Donoghue, "Connecting cortex to machines: recent advances in brain interfaces," Nature Neuroscience Reviews, vol. 5, pp. 1085–1088, 2002.
[18] F. A. Mussa-Ivaldi and L. E. Miller, "Brain machine interfaces: computational demands and clinical needs meet basic neuroscience," Trends in Neuroscience, vol. 26, pp. 329–334, 2003.
[19] J. L. Patton, M. Kovic, and F. A. Mussa-Ivaldi, "Custom-designed haptic training for restoring reaching ability to individuals with stroke," Journal of Rehabilitation Research and Development (JRRD), vol. 43, no. 5, pp. 643–656, 2006.
[20] C. Scharver, J. Patton, R. Kenyon, and E. Kersten, "Comparing adaptation of constrained and unconstrained movements in three dimensions," in Proceedings of the 2005 International Conference on Rehabilitation Robotics (ICORR), 2005.
[21] "Flock of Birds," Ascension Technology Corporation, http://www.ascension-tech.com/products/flockofbirds.php.
[22] "Elastosil LR3162," www.wacker.com.
[23] W. Peng, D. Tianhuai, X. Feng, and Q. Yuanzhen, "Piezoresistivity of conductive composites filled by carbon black particles," Acta Materiae Compositae Sinica, vol. 21, no. 6, 2004.
[24] F. Lorussi, W. Rocchia, E. P. Scilingo, A. Tognetti, and D. De Rossi, "Wearable redundant fabric-based sensor arrays for reconstruction of body segment posture," IEEE Sensors Journal, vol. 4, no. 6, pp. 807–818, December 2004.
[25] F. Lorussi, E. Scilingo, M. Tesconi, A. Tognetti, and D. De Rossi, "Strain sensing fabric for hand posture and gesture monitoring," IEEE Transactions on Information Technology in Biomedicine, vol. 9, no. 3, pp. 372–381, September 2005.
[26] W. Peng, X. Feng, D. Tianhuai, and Q. Yuanzhen, "Time dependence of electrical resistivity under uniaxial pressures for carbon black/polymer composites," Journal of Materials Science, vol. 39, no. 15, 2004.
[27] X. Zhang, Y. Pan, Q. Zheng, and X.
Yi, "Time dependence of piezoresistance for the conductor-filled polymer composites," Journal of Polymer Science, vol. 38, no. 21, 2000.
[28] "VRCO CAVELib," www.vrco.com/CAVELib/OverviewCAVELib.html.
[29] "Coin3D graphics library," www.coin3d.org.
[30] A. Johnson, D. Sandin, G. Dawe, Z. Qiu, and D. Plepys, "Developing the PARIS: Using the CAVE to prototype a new VR display," in Proceedings of IPT 2000, Ames, Iowa, USA, June 2000.
[31] S. Colin, M. Harrison, A. Grant, and B. Conway, "Haptic interfaces for wheelchair navigation in the built environment," Presence: Teleoperators and Virtual Environments (MIT Press), vol. 13, no. 5, pp. 520–534, October 2004.
[32] K. ByungMoon and T. Panagiotis, "Controllers for unicycle-type wheeled robots: Some theoretical results and experimental validation," IEEE Transactions on Robotics and Automation, vol. 18, no. 3, pp. 294–307, 2002.
[33] T. Satoshi, T. Gulrez, D. C. Herath, and G. W. M. Dissanayake, "Environmental recognition for autonomous robot using SLAM: Real time path planning with dynamical localised Voronoi division," International Journal of Japan Society of Mechanical Engineers (JSME), vol. 3, pp. 904–911, 2005.
[34] K. Pearson, "On lines and planes of closest fit to systems of points in space," Philosophical Magazine, vol. (6)2, pp. 559–572, 1901.
[35] H. Hotelling, "Analysis of a complex of statistical variables into principal components," Journal of Educational Psychology, vol. 24, pp. 417–441, 498–520, 1933.
[36] M. Jordan, "Mémoire sur les formes bilinéaires," Journal de Mathématiques Pures et Appliquées, vol. 19, pp. 35–54, 1874.
[37] E. Bryant and W. Atchley, Multivariate Statistical Methods: Within-Group Covariation. Stroudsburg: Halsted Press, 1975.
