MINI REVIEW article
Decoding methods for neural prostheses: where have we reached?
1. State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
2. Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing, China
This article reviews advances in decoding methods for brain-machine interfaces (BMIs). Recent work has focused on practical considerations for future clinical deployment of prosthetics. This review is organized by open questions in the field such as what variables to decode, how to design neural tuning models, which neurons to select, how to design models of desired actions, how to learn decoder parameters during prosthetic operation, and how to adapt to changes in neural signals and neural tuning. The concluding discussion highlights the need to design and test decoders within the context of their expected use and the need to answer the question of how much control accuracy is good enough for a prosthetic.
The field of brain-machine interfaces (BMIs) for control of motor prostheses is quickly growing (Baranauskas, 2014; for other reviews, see Tehovnik et al., 2013; Kao et al., 2014). Research in decoders, the algorithms which translate neural signals into movement commands, has largely switched focus from improving control accuracy to resolving practical considerations of future clinical deployments of prostheses. The goal of this mini-review is to briefly highlight recent (2013 to mid-2014) advances in decoding methodology for extracellular signals recorded from motor areas of the brain. The review sections are organized by main research themes, corresponding to important questions and practical considerations. At the end, the importance of developing and testing decoders in realistic contexts and the question of how much control accuracy is “good enough” for a prosthetic are discussed.
Algorithms for Decoding
Which algorithmic framework should we use for decoding? Different algorithms offer different benefits. Figure 1 illustrates three commonly-used methods. The Kalman filter and point process filter are state-based (modeling temporal evolution) and probabilistic (modeling and estimating uncertainty). The linear filter, in contrast, is a linear transformation of neural data to the decoded variables, with the advantages of simplicity and execution speed.
Figure 1. Schematic illustration of popular decoding algorithms. The Kalman and point process filters are based on the notion of a state, which holds the current estimates of the variables of interest. The state is related to neural activity through a neural model. Bayesian computations on the neural model, assuming a distribution for noise, permit probabilistic tracking of the state based on neural activity. In contrast, the linear filter is state-less; it linearly maps the recent history of neural activity to estimates of the variables of interest.
The Kalman filter’s Gaussian noise model clearly does not match the data (spike counts), yet due to its accuracy and execution speed the method has remained popular since its first use by Wu et al. (2003) (Aggarwal et al., 2013; Chen et al., 2013; Dangi et al., 2013a,c; Homer et al., 2013; Ifft et al., 2013; Jarosiewicz et al., 2013; Kao et al., 2013; Merel et al., 2013; Wong et al., 2013; Zhang and Chase, 2013; Fan et al., 2014; Golub et al., 2014; Gowda et al., 2014; Homer et al., 2014). While point process filters (for a review, see Koyama et al., 2010) offer a more realistic noise model, their use in decoding is still relatively rare (Shanechi et al., 2013; Velliste et al., 2014; Xu et al., 2014), due in part to their heavier computational burden. Recently, Citi et al. (2013) extended point process methods to model refractory periods of neurons and to allow a 10-fold coarser time discretization, which may ease this burden. However, one feature which current point process decoders lack is the ability to assign different amounts of noise or weights to different neurons.
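The predict/update recursion at the heart of a Kalman-filter decoder can be sketched in a few lines. The matrices here (movement model A, W and neural tuning model C, Q) are illustrative placeholders, not values from any cited study:

```python
import numpy as np

def kalman_decode_step(x, P, y, A, W, C, Q):
    """One predict/update cycle: state x, covariance P, spike counts y."""
    # Predict: propagate the state through the movement model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Update: correct the prediction with the observed neural activity.
    S = C @ P_pred @ C.T + Q
    K = P_pred @ C.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)      # innovation-weighted correction
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

In BMI use, the state typically holds cursor position and/or velocity, and this step runs once per spike-count bin; the linear-algebra cost per step is what keeps the method fast.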
Linear filtering, or discrete Wiener filtering, is fading in popularity. It is still used by some research groups, either because of its computational form (Badreldin et al., 2013) or when the research focuses on other aspects of decoding (Chen et al., 2013; Chhatbar and Francis, 2013; Philip et al., 2013; Suminski et al., 2013; Willett et al., 2013). A variant of the Wiener filter method, which passes the Wiener filter output through a fitted non-linear function to compute the final output, is also used (Flint et al., 2013; Scheid et al., 2013).
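A discrete Wiener filter reduces to a least-squares regression of kinematics on a sliding window of recent spike counts. The following sketch uses illustrative shapes and lag counts, not those of any cited study:

```python
import numpy as np

def fit_wiener_filter(spikes, kinematics, n_lags):
    """spikes: (T, n_neurons); kinematics: (T, n_dims)."""
    T, n = spikes.shape
    rows = []
    for t in range(n_lags, T):
        # Flatten the last n_lags bins of activity into one regressor row.
        rows.append(spikes[t - n_lags:t].ravel())
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])  # add bias term
    Y = kinematics[n_lags:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W  # shape: (1 + n_lags * n_neurons, n_dims)

def apply_wiener_filter(W, recent_spikes):
    """recent_spikes: (n_lags, n_neurons) most recent bins."""
    x = np.concatenate([[1.0], recent_spikes.ravel()])
    return x @ W
```

The non-linear variant mentioned above would simply pass `apply_wiener_filter`’s output through a fitted static non-linearity.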
Several other methods have been used in the recent literature: kernel autoregressive moving average (KARMA; Wong et al., 2013), quantized kernel least mean square (Li et al., 2014), support vector machines (Cao et al., 2013; Xu et al., 2013; Wang et al., 2014), K-nearest neighbors (Brockmeier et al., 2013; Ifft et al., 2013; Xu et al., 2013), naïve Bayes (Bishop et al., 2014), and artificial neural networks (Chen et al., 2013; Mahmoudi et al., 2013; Pohlmeyer et al., 2014). All of these methods allow highly non-linear neural models.
Variables to Decode
What values should a decoder predict? Ideally, a prosthetic should offer accurate, intuitive control that works under all likely usage contexts. In most prior work, desired positions or velocities of end-effectors were decoded. Homer et al. (2013) proposed a method for combining decoded position and velocity, to avoid choosing between the two. The method defines a quantity Δr as the difference between the decoded cursor position and the previous cursor position. The cursor position’s update is a linear combination of the decoded velocity and the decoded velocity vector rotated to the direction of Δr.
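A minimal sketch of this position/velocity mixing follows; the blend weight `beta` and time step `dt` are hypothetical parameters for illustration, and the exact combination rule of the cited study may differ:

```python
import numpy as np

def mixed_update(cursor_pos, decoded_pos, decoded_vel, beta=0.5, dt=0.02):
    """Blend decoded velocity with decoded velocity rotated toward delta-r."""
    delta_r = decoded_pos - cursor_pos     # decoded minus current cursor position
    norm = np.linalg.norm(delta_r)
    if norm < 1e-12:
        rotated = decoded_vel
    else:
        # Keep the decoded speed but point it along delta_r.
        rotated = np.linalg.norm(decoded_vel) * delta_r / norm
    blended = (1.0 - beta) * decoded_vel + beta * rotated
    return cursor_pos + dt * blended
```

With `beta = 0` this reduces to pure velocity control; with `beta = 1` the decoded position alone sets the movement direction.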
Decoders aim to predict user intentions, and it is possible that intentions may be slightly different from the observed limb movements used for parameter fitting. Fan et al. (2014) showed that the heuristic for guessing user intention during online recalibration proposed by Gilja et al. (2012), and tested with human users in a study by Jarosiewicz et al. (2013), could also be applied to the initial training data for fitting decoder parameters. This method rotates the cursor’s velocity vector towards the target and zeroes the velocity when the cursor is in the target. Fitting Kalman filter parameters on these estimates of intended movements, instead of actual limb movements, could achieve comparable gains in accuracy as online recalibration using the same scheme (Fan et al., 2014).
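The intention-estimation heuristic can be sketched directly; the target radius here is an illustrative parameter:

```python
import numpy as np

def estimate_intended_velocity(cursor_pos, cursor_vel, target_pos,
                               target_radius=0.05):
    """Rotate the velocity toward the target; zero it inside the target."""
    to_target = target_pos - cursor_pos
    dist = np.linalg.norm(to_target)
    if dist <= target_radius:
        return np.zeros_like(cursor_vel)   # on target: assume intent to stop
    speed = np.linalg.norm(cursor_vel)
    return speed * to_target / dist        # same speed, aimed at the target
```

Refitting decoder parameters against these estimated velocities, rather than the observed ones, is the recalibration idea the cited studies test.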
Two recent studies have explored decoding torque values, to allow a prosthetic to interact with objects with mass more naturally. Chhatbar and Francis (2013) showed that hybrid neural control by both torque and position produced more natural movements in novel dynamic environments. Decoding of position and torque was performed via a Wiener filter applied to the largest 20 principal components of neural activity. The final angular accelerations of the prosthetic joints were calculated as a weighted sum of the accelerations implied by the predicted positions and torques. Suminski et al. (2013) decoded position and velocity as well as torques of a two-link virtual arm with a Wiener filter. The kinematic variables were converted to torques using a proportional-derivative controller, and the results were combined linearly with the directly decoded torques to produce the final torque output.
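The kinematics-to-torque conversion and blending can be sketched with a generic PD-style controller; the gains `kp`, `kd` and blend weight `alpha` are illustrative assumptions, not values from the cited studies:

```python
def hybrid_torque(decoded_pos, decoded_vel, arm_pos, arm_vel,
                  decoded_torque, kp=5.0, kd=1.0, alpha=0.5):
    """Blend a PD-controller torque (driving the arm toward the decoded
    kinematics) with the directly decoded torque."""
    pd_torque = kp * (decoded_pos - arm_pos) + kd * (decoded_vel - arm_vel)
    return alpha * pd_torque + (1.0 - alpha) * decoded_torque
```

With `alpha = 1` the prosthetic is driven purely by decoded kinematics; with `alpha = 0`, purely by decoded torques.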
Besides kinematics and forces, the target location of a reach may be directly decoded to improve the trajectory of the reach. Shanechi et al. (2013) presented a real-time, two-stage decoder which first decoded target location during an instructed delay period and then decoded reach trajectory during the reach period. The decoded target location served as the goal position for an optimal feedback controller that acted as the movement model of the point process trajectory decoder.
To address the need for BMI control of limbs with many degrees of freedom, an important consideration for clinical deployment, Wong et al. (2013) used principal component analysis to reduce the dimensions of limb movements. They showed that decoding principal-component-space kinematic variables with KARMA or Kalman filtering was more accurate than decoding canonical-space kinematic variables. Ifft et al. (2013) decoded the kinematics of both arms during a bimanual reaching task using an unscented Kalman filter which included variables for both arms in its state.
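Reducing kinematic dimensionality with PCA before decoding can be sketched as fitting components on training kinematics, decoding in the reduced space, and mapping back to the full joint space; the number of components kept is illustrative:

```python
import numpy as np

def fit_pca(kinematics, n_components):
    """kinematics: (T, n_dims). Returns mean and components (as rows)."""
    mean = kinematics.mean(axis=0)
    _, _, Vt = np.linalg.svd(kinematics - mean, full_matrices=False)
    return mean, Vt[:n_components]

def to_pc_space(kinematics, mean, components):
    return (kinematics - mean) @ components.T

def from_pc_space(scores, mean, components):
    return scores @ components + mean
```

A decoder then predicts the low-dimensional scores, and `from_pc_space` reconstructs the full set of joint variables.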
Besides the biomimetic approach of designing a prosthetic so that it can be controlled like a natural limb, another approach is to use operant conditioning of neuron ensembles (for a review, see Sakurai et al., 2014) to let the user learn to control a new, synthetic actuator. Though more initial training may be required, greater final control accuracy may be possible using this paradigm. Balasubramanian et al. (2013) used two groups of neurons from M1 to control reach and grasp, which were simplified to one dimension each. The neuron groups were chosen based on their stability and functional connectivity, and algorithmic assistance was given during the operant conditioning process to assist neuronal learning. Badreldin et al. (2013) developed an unsupervised method for non-biomimetic linear filter initialization. The method performs an eigendecomposition of the sample covariance matrix of the neural data. The eigenvectors provide a basis for the space of all possible linear filters that could be fitted from the data. They designed a cost function to choose a particular linear filter from this space, which generally differs from the filter which would have been fitted by supervised linear regression. Their cost function optimizes for characteristics such as low jitter and evenly distributed weights among neurons.
Neural Tuning Models
How should we model neural activity? Recent studies have explored aspects of neural models beyond movement tuning, with the hope of improving user-friendliness. Considering that a system with high latency is difficult to control, Willett et al. (2013) trained a decoder to predict intended future movements, compensating for BMI system latency and shortening the feedback loop. To do so, they fitted a linear filter using kinematic values which were temporally offset from the neural data.
In a clinical setting, it is important that a prosthetic can be turned off when not used, to avoid undesired movements. Aggarwal et al. (2013) classified behavioral states into baseline, reaction, movement, and hold using linear discriminant analysis (LDA) on local field potentials (LFPs), and then decoded arm, hand, and finger kinematics using a Kalman filter on spike signals. The position outputs were held constant when the behavioral state decoder predicted the baseline or hold state. Similarly, Velliste et al. (2014) used LDA to detect idle (resting) arm states and set velocity to zero during idle. The baseline or idle states in these studies could serve as the “off” mode for a prosthetic.
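A minimal two-class LDA gate in this spirit is sketched below, with illustrative features and labels (the cited studies classify more behavioral states and use richer LFP features):

```python
import numpy as np

def fit_lda(X, y):
    """X: (T, d) features; y: 0 (idle) or 1 (move). Shared covariance."""
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Xc = np.vstack([X[y == 0] - mu0, X[y == 1] - mu1])
    cov = Xc.T @ Xc / len(Xc)
    w = np.linalg.solve(cov, mu1 - mu0)    # discriminant direction
    b = -0.5 * (w @ (mu0 + mu1))           # equal-priors threshold
    return w, b

def gated_velocity(features, decoded_vel, w, b):
    """Zero the kinematic decoder's output when the idle state is detected."""
    moving = features @ w + b > 0
    return decoded_vel if moving else np.zeros_like(decoded_vel)
```

The idle branch of `gated_velocity` is what would serve as the prosthetic's "off" mode.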
Xu et al. (2014) included ensemble firing history, in addition to the standard tuning to kinematic variables, in the neural model. This paradigm helps model the background activity that is unrelated to movements. They used parallel computation on graphics processing units to achieve real-time execution of a point process particle filter that used this model.
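A log-linear conditional-intensity model with ensemble history terms can be written as follows; all coefficients are illustrative, and the cited study's exact parameterization may differ:

```python
import numpy as np

def conditional_intensity(kinematics, history, kin_weights, hist_weights,
                          baseline):
    """lambda = exp(baseline + k . kinematics + h . recent ensemble counts).
    history: (n_bins, n_neurons) recent spike counts across the ensemble."""
    log_rate = (baseline
                + kin_weights @ kinematics
                + hist_weights @ history.ravel())
    return np.exp(log_rate)
```

The history term lets background, movement-unrelated fluctuations be absorbed by the model rather than misread as kinematic signals.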
Neuron Selection
Which neurons should we include in decoding? BMI researchers have long sought to reduce the computational load on decoders and the noise in neural data by excluding irrelevant neurons. Several recent studies have provided tools for finding relevant neurons. Chen et al. (2013) used variational Bayesian inference to fit parameters for a linear filter, a state-space model, and a non-linear echo state network. Using priors which favor small parameters, the inference procedure generated sparse parameter fits, and the zeroes in the fitted parameters can be interpreted as the absence of tuning.
Cao et al. (2013) determined which neurons modulated for reach direction versus hand configuration during grasping by using mutual information. In another study from the same group, Xu et al. (2013) proposed a supervised metric learning algorithm to optimize decoding of hand grasp configuration. Their gradient descent algorithm maximizes the difference between inter-class and intra-class distances while regularizing by the L1 norm, resulting in sparse weights which indicate relevance.
Also using a supervised approach, Brockmeier et al. (2013) proposed a method for computing a linear dimensionality reduction which maximizes the information between the class labels and the projected neural data. The low-dimensional data can be used for visualization or decoding via distance-based methods such as K-nearest neighbors. They further proposed an improved method that only uses inner products between inputs, allowing non-linear dimensionality reduction via the kernel trick. Their kernel metric learning algorithm aims to make data points with the same class labels lie close together in the output space.
Movement Models
How can we design movement models to assist decoding? Kalman and point process filters include movement models which can encode prior beliefs about how variables change over time. Cleverly engineering these models may make prostheses easier to control. Two studies examined how to improve the user’s ability to stop a BMI cursor when desired. Golub et al. (2014) designed a speed-dampening Kalman filter which modifies the movement model to decrease speed when fast changes in movement direction are detected, with the goal of allowing a quick change in direction to signal the desire to stop (a “hockey stop”). Using a different approach, Velliste et al. (2014) added a separate speed variable, independent of the Cartesian velocity variables, to the state space of a point process filter. This speed term dynamically adjusts the filter’s movement model error covariance so that smaller changes in position and velocity are allowed when the decoded speed is smaller.
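The idea of letting decoded speed modulate the movement model's process noise can be sketched as follows; the quadratic scaling rule and reference speed are assumptions for illustration, not the exact rule of the cited study:

```python
import numpy as np

def speed_scaled_process_noise(W_base, decoded_speed, ref_speed=1.0):
    """Shrink the movement-model noise covariance at low decoded speeds,
    so position and velocity change less per filter step."""
    scale = min(decoded_speed / ref_speed, 1.0) ** 2
    return scale * W_base
```

The scaled covariance would replace the fixed `W` in the Kalman or point process filter's prediction step at each time bin.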
In a general examination of movement models, Gowda et al. (2014) analyzed the linear models typically used in past studies and found that some may harbor hidden attractor points, to the detriment of controllability. They also point out that specific coefficients in movement model matrices parameterize the speed-accuracy tradeoff.
Learning Decoder Parameters During Prosthetic Operation
How can we improve decoder parameters during decoding? To handle poor initial parameter fits or changes in neural tuning after practice, continuous learning of decoder parameters may be required in a clinical device. There has been much recent work, mostly from Jose Carmena’s lab (for a review, see Carmena, 2013), on improving decoder parameter fits during BMI operation, called closed-loop decoder adaptation. They adapted Kalman filter parameters via stochastic updates based on the likelihood gradient (Dangi et al., 2013a), provided tools for analysis of adaptive methods (Dangi et al., 2013b), and applied adaptation to decoding of LFPs (Dangi et al., 2013c).
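The flavor of closed-loop decoder adaptation can be illustrated with a generic stochastic gradient step on a linear tuning model, driven by an estimate of the intended kinematics (a stand-in update rule, not the exact algorithm of any cited study):

```python
import numpy as np

def clda_step(C, spike_counts, intended_kinematics, lr=0.01):
    """One stochastic update of the tuning matrix C (n_neurons x n_states):
    a gradient step on the squared error between observed spike counts and
    the counts predicted from the estimated intended kinematics."""
    error = spike_counts - C @ intended_kinematics
    return C + lr * np.outer(error, intended_kinematics)
```

Run once per bin during operation, such updates nudge the decoder's tuning model toward the user's current encoding without pausing for a full refit.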
Information about the target locations of reaches can help improve the parameter learning process. Kowalski et al. (2013) proposed an algorithm which uses the joint estimation paradigm (augmenting tuning parameters into the state space), combined with the “reach state equation” (Srinivasan et al., 2006) as a way to incorporate target location in decoder recalibration. Similarly, Shanechi and Carmena (2013) designed a dual filtering method which uses the target location to assist movements towards the target. The method provides the target location, assumed known, to a linear-quadratic-Gaussian optimal feedback controller which acts as the movement model of the point process decoder. A second point process filter updates the decoder parameters.
In Suminski et al.’s (2013) study, incongruence between decoded kinematics and torques was used as an error signal for recalibration. The differences between the decoded position (and velocity) and the virtual arm’s endpoint position (and velocity), as computed via the decoded torques, were used to update torque decoder parameters via gradient descent.
Merel et al. (2013) modeled co-adaptation in BMIs as two agents (encoder and decoder) optimizing with respect to each other, under linear-quadratic-Gaussian assumptions. They derive a novel decoder update step which anticipates what the future encoder will be and updates with respect to that, instead of the current encoder. They show that this “one step ahead” update rule reduces error faster in simulations.
Signal Stability and Adaptation
Are neural signals stable over long time periods? There has been controversy as to whether updating of decoder parameters is required for long-term prosthetic usability. Recent studies have analyzed stability of signals over long time spans. Flint et al. (2013) and Scheid et al. (2013) showed that multiunit spiking activity can be stable over more than six months and LFPs can be stable for almost a year. Wang et al. (2014) found signal instability and concluded that LFPs allowed more accurate offline reconstruction than single- and multi-unit signals 1–2 years post implantation. Perge et al. (2013) found significant intra-day changes in neural firing rates and concluded that 85% of these changes were likely due to physiological mechanisms.
If decoder updates are needed, how can we improve the accuracy of updates? Recent studies have proposed heuristics to improve adaptation. Zhang and Chase (2013) applied two extensions to a dual-Kalman filter. First, they updated baseline firing rates of neurons using a moving window. Second, they normalized the velocity provided to the parameter updater so that the median absolute velocity matched that of the initial training data. Kao et al. (2013) proposed a firing rate normalization that also includes a regularization term penalizing neurons with low firing rates. They also showed that dimensionality reduction via principal component analysis improves robustness to neuron loss.
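The two heuristics above can be sketched directly; the window contents and rescaling rule are illustrative:

```python
import numpy as np

def update_baselines(recent_counts):
    """recent_counts: (window, n_neurons) most recent spike counts.
    Moving-window estimate of each neuron's baseline firing rate."""
    return recent_counts.mean(axis=0)

def normalize_velocity(velocities, train_median_abs):
    """Rescale velocities fed to the parameter updater so their median
    absolute value matches that of the initial training data."""
    med = np.median(np.abs(velocities))
    if med < 1e-12:
        return velocities
    return velocities * (train_median_abs / med)
```

Both corrections run alongside the parameter updater rather than inside it, which keeps them cheap and decoder-agnostic.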
Besides updating baseline firing rates via windowed estimates, other methods for tracking baseline changes have been proposed. Bishop et al. (2014) found that most changes occur between days. They designed a classifier for movement direction using the naïve Bayes algorithm and a hierarchical model; baseline firing rates are inferred each day while the class-specific parameters and the prior distributions for the baseline firing rates are learned once on initial training data. Homer et al. (2014) designed a probabilistic algorithm for detecting infrequent, rapid changes in baseline firing rates under the Kalman filtering framework. Their method first performs a forward stepwise search for neurons which have changed in baseline firing rate and then determines the magnitude of changes.
Using a reinforcement learning approach to adaptation, two studies from the same group (Mahmoudi et al., 2013; Pohlmeyer et al., 2014) showed that an actor-critic reinforcement learning BMI that uses Hebbian learning on an artificial neural network decoder’s weights could learn weights from scratch and maintain decoding accuracy despite shuffling, loss, or gain of neurons, using only a one-bit feedback signal. In another study from the same group, Prins et al. (2013) decoded a one-bit reward signal from nucleus accumbens by clustering spike counts with k-means.
Discussion
As researchers focus more on practical hurdles to clinical deployment of neural prostheses, it becomes more and more important to develop and test BMI decoders in the contexts in which actual prostheses will be used, i.e., to control artificial limbs, natural limbs via functional electrical stimulation (Moritz et al., 2008; Ethier et al., 2012; Nishimura et al., 2013), or computer cursors in graphical user interfaces. By using more realistic contexts, questions such as which variables to decode or which algorithms are sufficiently fast can be answered empirically. Realistic contexts may also uncover new considerations and obstacles to overcome.
An important question that has thus far been neglected in the field is how much control accuracy is enough. Full restoration of human ability in terms of movement accuracy may come at computational and other costs, e.g., the number of recording channels, which likely trade off against other figures of merit of a prosthetic system. While we should continually endeavor to improve BMI technology, from a practical standpoint we should also answer the question of how much control is good enough, so that engineers can design systems with clear requirements in mind.
Conflict of Interest Statement
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
The author is grateful to Joseph E. O’Doherty, Mikhail A. Lebedev, and the reviewers for their helpful comments. This work was supported by the National Key Basic Research Program of China (2014CB846101) and the Fundamental Research Funds for the Central Universities.
References
Aggarwal, V., Mollazadeh, M., Davidson, A. G., Schieber, M. H., and Thakor, N. V. (2013). State-based decoding of hand and finger kinematics using neuronal ensemble and LFP activity during dexterous reach-to-grasp movements. J. Neurophysiol. 109, 3067–3081. doi: 10.1152/jn.01038.2011
Badreldin, I., Southerland, J., Vaidya, M., Eleryan, A., Balasubramanian, K., Fagg, A., et al. (2013). “Unsupervised decoder initialization for brain-machine interfaces using neural state space dynamics,” in 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER) (San Diego, CA), 997–1000. doi: 10.1109/NER.2013.6696104
Balasubramanian, K., Southerland, J., Vaidya, M., Qian, K., Eleryan, A., Fagg, A. H., et al. (2013). Operant conditioning of a multiple degree-of-freedom brain-machine interface in a primate model of amputation. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 303–306. doi: 10.1109/EMBC.2013.6609497
Bishop, W., Chestek, C. C., Gilja, V., Nuyujukian, P., Foster, J. D., Ryu, S. I., et al. (2014). Self-recalibrating classifiers for intracortical brain-computer interfaces. J. Neural Eng. 11:026001. doi: 10.1088/1741-2560/11/2/026001
Brockmeier, A. J., Sanchez Giraldo, L. G., Emigh, M. S., Bae, J., Choi, J. S., Francis, J. T., et al. (2013). Information-theoretic metric learning: 2-D linear projections of neural data for visualization. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 5586–5589. doi: 10.1109/EMBC.2013.6610816
Cao, Y., Hao, Y., Liao, Y., Xu, K., Wang, Y., Zhang, S., et al. (2013). Information analysis on neural tuning in dorsal premotor cortex for reaching and grasping. Comput. Math. Methods Med. 2013:730374. doi: 10.1155/2013/730374
Chen, Z., Takahashi, K., and Hatsopoulos, N. G. (2013). Sparse Bayesian inference methods for decoding 3D reach and grasp kinematics and joint angles with primary motor cortical ensembles. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 5930–5933. doi: 10.1109/EMBC.2013.6610902
Chhatbar, P. Y., and Francis, J. T. (2013). Towards a naturalistic brain-machine interface: hybrid torque and position control allows generalization to novel dynamics. PLoS One 8:e52286. doi: 10.1371/journal.pone.0052286
Dangi, S., Gowda, S., and Carmena, J. (2013a). Likelihood gradient ascent (LGA): a closed-loop decoder adaptation algorithm for brain-machine interfaces. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 2768–2771. doi: 10.1109/EMBC.2013.6610114
Dangi, S., Orsborn, A. L., Moorman, H. G., and Carmena, J. M. (2013b). Design and analysis of closed-loop decoder adaptation algorithms for brain-machine interfaces. Neural Comput. 25, 1693–1731. doi: 10.1162/NECO_a_00460
Dangi, S., So, K., Orsborn, A., Gastpar, M., and Carmena, J. (2013c). Brain-machine interface control using broadband spectral power from local field potentials. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 285–288. doi: 10.1109/EMBC.2013.6609493
Ethier, C., Oby, E. R., Bauman, M. J., and Miller, L. E. (2012). Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature 485, 368–371. doi: 10.1038/nature10987
Fan, J. M., Nuyujukian, P., Kao, J. C., Chestek, C. A., Ryu, S. I., and Shenoy, K. V. (2014). Intention estimation in brain-machine interfaces. J. Neural Eng. 11:016004. doi: 10.1088/1741-2560/11/1/016004
Flint, R. D., Wright, Z. A., Scheid, M. R., and Slutzky, M. W. (2013). Long term, stable brain machine interface performance using local field potentials and multiunit spikes. J. Neural Eng. 10:056005. doi: 10.1088/1741-2560/10/5/056005
Gilja, V., Nuyujukian, P., Chestek, C. A., Cunningham, J. P., Yu, B. M., Fan, J. M., et al. (2012). A high-performance neural prosthesis enabled by control algorithm design. Nat. Neurosci. 15, 1752–1757. doi: 10.1038/nn.3265
Golub, M. D., Yu, B. M., Schwartz, A. B., and Chase, S. M. (2014). Motor cortical control of movement speed with implications for brain-machine interface control. J. Neurophysiol. doi: 10.1152/jn.00391.2013. [Epub ahead of print].
Gowda, S., Orsborn, A., Overduin, S., Moorman, H., and Carmena, J. (2014). Designing dynamical properties of brain-machine interfaces to optimize task-specific performance. IEEE Trans. Neural Syst. Rehabil. Eng. doi: 10.1109/tnsre.2014.2309673. [Epub ahead of print].
Homer, M., Harrison, M., Black, M., Perge, J., Cash, S., Friehs, G., et al. (2013). “Mixing decoded cursor velocity and position from an offline Kalman filter improves cursor control in people with tetraplegia,” in 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER) (San Diego, CA), 715–718. doi: 10.1109/NER.2013.6696034
Homer, M., Perge, J., Black, M., Harrison, M., Cash, S., and Hochberg, L. (2014). Adaptive offset correction for intracortical brain-computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 22, 239–248. doi: 10.1109/TNSRE.2013.2287768
Ifft, P. J., Shokur, S., Li, Z., Lebedev, M. A., and Nicolelis, M. A. L. (2013). A brain-machine interface enables bimanual arm movements in monkeys. Sci. Transl. Med. 5:210ra154. doi: 10.1126/scitranslmed.3006159
Jarosiewicz, B., Masse, N. Y., Bacher, D., Cash, S. S., Eskandar, E., Friehs, G., et al. (2013). Advantages of closed-loop calibration in intracortical brain-computer interfaces for people with tetraplegia. J. Neural Eng. 10:046012. doi: 10.1088/1741-2560/10/4/046012
Kao, J., Nuyujukian, P., Stavisky, S., Ryu, S., Ganguli, S., and Shenoy, K. (2013). Investigating the role of firing-rate normalization and dimensionality reduction in brain-machine interface robustness. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 293–298. doi: 10.1109/EMBC.2013.6609495
Kao, J., Stavisky, S., Sussillo, D., Nuyujukian, P., and Shenoy, K. (2014). Information systems opportunities in brain-machine interface decoders. Proc. IEEE 102, 666–682. doi: 10.1109/jproc.2014.2307357
Li, L., Brockmeier, A. J., Choi, J. S., Francis, J. T., Sanchez, J. C., and Príncipe, J. C. (2014). A tensor-product-kernel framework for multiscale neural activity decoding and control. Comput. Intell. Neurosci. 2014:870160. doi: 10.1155/2014/870160
Mahmoudi, B., Pohlmeyer, E. A., Prins, N. W., Geng, S., and Sanchez, J. C. (2013). Towards autonomous neuroprosthetic control using Hebbian reinforcement learning. J. Neural Eng. 10:066005. doi: 10.1088/1741-2560/10/6/066005
Merel, J. S., Fox, R., Jebara, T., and Paninski, L. (2013). “A multi-agent control framework for co-adaptation in brain-computer interfaces,” in Advances in Neural Information Processing Systems, (vol. 26) eds C. Burges, L. Bottou, M. Welling, Z. Ghahramani and K. Weinberger (Red Hook, NY: Curran Associates, Inc.), 2841–2849.
Nishimura, Y., Perlmutter, S. I., and Fetz, E. E. (2013). Restoration of upper limb movement via artificial corticospinal and musculospinal connections in a monkey with spinal cord injury. Front. Neural Circuits. 7:57. doi: 10.3389/fncir.2013.00057
Perge, J. A., Homer, M. L., Malik, W. Q., Cash, S., Eskandar, E., Friehs, G., et al. (2013). Intra-day signal instabilities affect decoding performance in an intracortical neural interface system. J. Neural Eng. 10:036004. doi: 10.1088/1741-2560/10/3/036004
Philip, B., Rao, N., and Donoghue, J. (2013). Simultaneous reconstruction of continuous hand movements from primary motor and posterior parietal cortex. Exp. Brain Res. 225, 361–375. doi: 10.1007/s00221-012-3377-0
Pohlmeyer, E. A., Mahmoudi, B., Geng, S., Prins, N. W., and Sanchez, J. C. (2014). Using reinforcement learning to provide stable brain-machine interface control despite neural input reorganization. PLoS One 9:e87253. doi: 10.1371/journal.pone.0087253
Prins, N. W., Geng, S., Pohlmeyer, E. A., Mahmoudi, B., and Sanchez, J. C. (2013). Feature extraction and unsupervised classification of neural population reward signals for reinforcement based BMI. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 5250–5253. doi: 10.1109/EMBC.2013.6610733
Sakurai, Y., Song, K., Tachibana, S., and Takahashi, S. (2014). Volitional enhancement of firing synchrony and oscillation by neuronal operant conditioning: interaction with neurorehabilitation and brain-machine interface. Front. Syst. Neurosci. 8:11. doi: 10.3389/fnsys.2014.00011
Scheid, M., Flint, R., Wright, Z., and Slutzky, M. (2013). Long-term, stable behavior of local field potentials during brain machine interface use. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 307–310. doi: 10.1109/EMBC.2013.6609498
Shanechi, M., and Carmena, J. (2013). “Optimal feedback-controlled point process decoder for adaptation and assisted training in brain-machine interfaces,” in 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER) (San Diego, USA), 653–656. doi: 10.1109/NER.2013.6696019
Shanechi, M. M., Williams, Z. M., Wornell, G. W., Hu, R. C., Powers, M., and Brown, E. N. (2013). A real-time brain-machine interface combining motor target and trajectory intent using an optimal feedback control design. PLoS One 8:e59049. doi: 10.1371/journal.pone.0059049
Srinivasan, L., Eden, U. T., Willsky, A. S., and Brown, E. N. (2006). A state-space analysis for reconstruction of goal-directed movements using neural signals. Neural Comput. 18, 2465–2494. doi: 10.1162/neco.2006.18.10.2465
Suminski, A., Fagg, A., Willett, F., Bodenhamer, M., and Hatsopoulos, N. (2013). Online adaptive decoding of intended movements with a hybrid kinetic and kinematic brain machine interface. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 1583–1586. doi: 10.1109/EMBC.2013.6609817
Velliste, M., Kennedy, S. D., Schwartz, A. B., Whitford, A. S., Sohn, J.-W., and McMorland, A. J. (2014). Motor cortical correlates of arm resting in the context of a reaching task and implications for prosthetic control. J. Neurosci. 34, 6011–6022. doi: 10.1523/JNEUROSCI.3520-13.2014
Wang, D., Zhang, Q., Li, Y., Wang, Y., Zhu, J., Zhang, S., et al. (2014). Long-term decoding stability of local field potentials from silicon arrays in primate motor cortex during a 2D center out task. J. Neural Eng. 11:036009. doi: 10.1088/1741-2560/11/3/036009
Willett, F. R., Suminski, A. J., Fagg, A. H., and Hatsopoulos, N. G. (2013). Improving brain-machine interface performance by decoding intended future movements. J. Neural Eng. 10:026011. doi: 10.1088/1741-2560/10/2/026011
Wong, Y., Putrino, D., Weiss, A., and Pesaran, B. (2013). Utilizing movement synergies to improve decoding performance for a brain machine interface. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 289–292. doi: 10.1109/EMBC.2013.6609494
Wu, W., Black, M. J., Gao, Y., Bienenstock, E., Serruya, M., Shaikhoun, A., et al. (2003). “Neural decoding of cursor motion using a Kalman filter,” in Advances in Neural Information Processing Systems (Vol. 15), eds S. Becker, S. Thrun and K. Obermayer (Boston: MIT Press), 133–140.
Xu, K., Wang, Y., Wang, F., Liao, Y., Zhang, Q., Li, H., et al. (2014). Neural decoding using a parallel sequential Monte Carlo method on point processes with ensemble effect. Biomed Res. Int. 2014:685492. doi: 10.1155/2014/685492
Xu, K., Wang, Y., Wang, Y., Wang, F., Hao, Y., Zhang, S., et al. (2013). Local-learning-based neuron selection for grasping gesture prediction in motor brain machine interfaces. J. Neural Eng. 10:026008. doi: 10.1088/1741-2560/10/2/026008
Keywords: brain-machine interface, brain computer interface, decoding, neural prosthetic, neural engineering, multichannel recordings, signal processing
Citation: Li Z (2014) Decoding methods for neural prostheses: where have we reached? Front. Syst. Neurosci. 8:129. doi: 10.3389/fnsys.2014.00129
Received: 29 May 2014; Accepted: 29 June 2014;
Published online: 16 July 2014.
Edited by: Ioan Opris, Wake Forest University, USA
Reviewed by: Gytis Baranauskas, Neuroscience Institute at Lithuanian University of Health Sciences, Lithuania
David Sussillo, Stanford University, USA
Copyright © 2014 Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Zheng Li, State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xin Jie Kou Wai Da Jie, Beijing 100875, China e-mail: firstname.lastname@example.org