
ORIGINAL RESEARCH article

Front. Neurol., 02 November 2023
Sec. Epilepsy

Automatic classification of hyperkinetic, tonic, and tonic-clonic seizures using unsupervised clustering of video signals

Petri Ojanen1,2*, Csaba Kertész2, Elizabeth Morales2, Pragya Rai2, Kaapo Annala2, Andrew Knight2, Jukka Peltola1,2,3
  • 1Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
  • 2Neuro Event Labs, Tampere, Finland
  • 3Department of Neurology, Tampere University Hospital, Tampere, Finland

Introduction: This study evaluated the accuracy of motion signals extracted from video monitoring data to differentiate epileptic motor seizures in patients with drug-resistant epilepsy. 3D near-infrared video was recorded by the Nelli® seizure monitoring system (Tampere, Finland).

Methods: 10 patients with 130 seizures were included in the training dataset, and 17 different patients with 98 seizures formed the testing dataset. Only seizures with unequivocal hyperkinetic, tonic, and tonic-clonic semiology were included. Motion features from the catch22 feature collection extracted from video were explored to transform the patients' videos into numerical time series for clustering and visualization.

Results: Changes in feature generation provided incremental discrimination power to differentiate between hyperkinetic, tonic, and tonic-clonic seizures. Temporal motion features showed the best results in the unsupervised clustering analysis. Using these features, the system differentiated hyperkinetic, tonic, and tonic-clonic seizures with 91, 88, and 45% accuracy, respectively, after 100 cross-validation runs. The corresponding f1-scores were 93, 90, and 37%. Overall accuracy and f1-score were both 74%.

Conclusion: The selected motion features distinguished semiological differences between epileptic seizure types, enabling classification into distinct motor seizure types. Further studies are needed with a larger dataset and additional seizure types. These results indicate the potential of video-based hybrid (algorithmic-human) seizure monitoring systems to facilitate seizure classification, improving the algorithmic processing and thus streamlining the clinical workflow for human annotators.

1. Introduction

Overall, 30% of patients diagnosed with epilepsy suffer from uncontrolled seizures despite the adequate use of anti-seizure medications (ASM) (1). Drug-resistant epilepsy (DRE) causes an increased risk of mortality and morbidity (2) and sudden unexplained death in epilepsy (SUDEP) (3). Accurate seizure documentation is essential to optimize the treatment of epilepsy. Previous research studies have demonstrated inaccuracies related to seizure diaries (4, 5), which has given an impetus for the development of various seizure detection systems, aiming for more objective seizure documentation. Though seizure detection systems have improved seizure documentation, seizure classification based on videos or other data can still be challenging (6–8).

The International League Against Epilepsy (ILAE) has recently published new guidelines for the classification of epileptic seizures (9). The ILAE seizure classification categorizes seizures based on their focal or generalized onset, level of awareness, and non-motor and motor manifestations. Seizures can also be classified based on semiology alone, highlighting the relevance of the observable ictal motor and other manifestations without electrophysiological information from EEG. In semiological classification, motor manifestations are depicted as simple or complex based on the complexity of the movement (10–12). Laterality (left, right, or bilateral) and the chronological order of the symptoms are additional classification features (10, 13).

Video-based methods for the detection of epileptic seizures have been widely studied, with high sensitivity and specificity for detection performance (14). Studies have shown promising results in the analysis of semiological features by utilizing convolutional neural networks (CNN) and long short-term memory (LSTM) networks in facial and body movement analysis (15), deep learning methods (16) and movement trajectories (17) in body movement analysis, and ictal sound recordings in seizure semiology analysis (18). However, automatic seizure classification is a less explored topic. Temporal lobe epilepsy (TLE) and frontal lobe epilepsy (FLE) have been differentiated by utilizing movement trajectories (19) or quantitative movement analysis (20). Infrared and depth sensors were used to capture 3D video data to differentiate between seizures in FLE, TLE, and non-epileptic events, reaching a cross-subject f1-score (a metric of machine-learning predictive skill) of 0.833 when differentiating FLE from TLE seizures and 0.763 when differentiating FLE, TLE, and non-epileptic events from each other (21). However, only a few studies have evaluated the performance of deep learning in the analysis of multiple distinct motor seizure types.

The Nelli seizure monitoring system is an audio/video-based semi-automated (hybrid) seizure monitoring platform that uses computer vision and machine learning to identify kinematic data (motion, oscillation, and audio) commonly associated with seizures with a positive motor component and human experts to visually assess these epochs (22). Moreover, the utility of the hybrid (algorithm-human) system for reviewing nocturnal video recordings to significantly decrease the workload and to provide accurate classification of major motor seizures (tonic-clonic, clonic, and focal motor seizures) has been demonstrated (23). The potential to differentiate seizure types by utilizing algorithmic signal profiles was first explored in a previous case study (24). Even though Nelli®'s algorithmic performance in seizure detection has been demonstrated in previous validation studies (25), the potential of the algorithmic part of the system to classify seizure types has not been previously explored.

Given the potential of deep-learning methods to differentiate seizure types and the need for a tool to assist in seizure classification, novel methods to classify specific seizure types using video monitoring and deep learning are needed. One recent development on this frontier has been the catch22 feature collection (26). The catch22 project evaluated over 7,700 candidate time-series features from multiple science fields to find the best-performing statistics for time-series classification, finally selecting the top 22 features for its software library, which performs feature extraction or dimension reduction for time-series analysis. These features have been applied successfully to a wide range of scientific problems, e.g., tree deformation detection in winds (27), hydroclimatic data processing (28), human breast cancer cell detection (29), commercial sales prediction (30), and cardiometabolic risk detection (31). They have not previously been applied to video-based seizure classification.

The aim of this study was to evaluate the performance of a novel signal algorithm model in classifying tonic, tonic-clonic, and hyperkinetic seizures by utilizing motion and oscillation signal profiles. This study further examines the previously recognized potential of the Nelli® system to automatically classify the aforementioned seizure types by utilizing signal profiles and deep learning, taking a step toward automatic seizure classification.

2. Methods

2.1. Patient population

A total of 27 patients with focal DRE were enrolled in the study. The study protocol and informed consent forms were reviewed and approved by the ethics committee of Tampere University Hospital. Signed informed consent was obtained from each participant. All patients were on two or more ASMs, and some were also treated with vagus nerve stimulation (VNS) therapy. Each patient was monitored for 4 to 8 weeks in a home setting for 7–11.5 h per night (average 9.19 h, median 9.25 h). For enrolled patients who lacked unequivocal seizures in the latest monitoring period, unequivocal seizures from their previous recording sessions were utilized. Training patients were selected partly from a recent interventional study (22) and partly from Nelli® post-market surveillance (PMS) recordings. Testing patients were selected from Nelli® PMS recordings with the requirement that, for each subject, at least three unequivocal seizures of the three seizure types of interest were recorded during Nelli® registration and had been described in detail in previous video-EEG reports. After applying these criteria, 130 seizures from 10 patients formed the model training cohort, including four patients from the previous study (22). The testing cohort, used to evaluate the performance of the model, consisted of 98 seizures from 17 patients who were not included in the training phase. Patient demographics and seizure counts are presented in Table 1.


Table 1. Patient demographics and clinical characteristics.

2.2. Video monitoring

Video monitoring was performed by NEL (Neuro Event Labs, Tampere, Finland) using the Nelli® seizure monitoring system, consisting of a camera and a microphone installed at the patient's bedside in their home so that the patient stays in sight of the camera during periods of rest. Video data from all patients were manually annotated. The epochs of suspected seizure events were reviewed by expert epilepsy annotators. Previous VEM (video-EEG monitoring) reports obtained before the start of the study were used for the assessment of behavioral features of seizures that occurred during Nelli® monitoring. Seizures were classified by professionals according to the ILAE 2017 classification (9). Suspected seizure events were excluded from further analysis if they could not be unequivocally identified as seizures by comparison with previous VEM reports, which were considered a feasible reference standard for a phase-two study, as previously suggested (32). Seizures were considered unequivocal instances of a seizure type if they were identified based on VEM reports and shared similar manifestations as described in the classification guidelines. All seizures belonging to the hyperkinetic, tonic, or tonic-clonic seizure type categories were included. These three seizure types were the most common in the available recordings, providing a sufficient number of seizures for further analysis. Seizure semiology was defined according to semiological classification guidelines (10, 12) for each seizure type, using additional descriptors for the observable movements during a seizure. Seizure semiologies for each patient are presented in Supplementary material 1.

To optimize the seizure signal analysis and minimize the effect of background noise in the video events, seizure video clips were cropped from the raw data by a professional epileptologist. Videos were cropped so that they included the seizure onset and the assumed end of the seizure activity, determined by comparing the seizure manifestations in the recorded video events with the VEM reports. The postictal phase was left out of the analysis. For each seizure type and patient, the medians of the motion, audio, and oscillation signals were calculated using the method described in Section 2.3.

2.3. Signal generation from video data

The signal model of the system has been described in detail in a previous proof-of-concept study (24). As in that study, the system relies on motion and oscillation biomarkers.

To create a motion signal, a background subtraction method by Zivkovic and van der Heijden (33) was combined with a stereo correspondence filter (34) based on semi-global matching (implemented in OpenCV). The background subtraction model created a binary mask of the moving parts of the image, and the proportion of moving pixels in an image defined a one-dimensional motion signal for a video.
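The mask-to-signal step can be sketched as follows. This is a minimal illustration rather than the Nelli® implementation: it assumes the background-subtraction stage (e.g., OpenCV's MOG2 model) has already produced one binary foreground mask per frame, represented here as nested lists of 0/1.

```python
def motion_signal(masks):
    """Turn a sequence of binary foreground masks into a 1-D motion signal.

    Each mask is a 2-D grid where 1 marks a pixel flagged as moving by a
    background-subtraction model (e.g., MOG2 by Zivkovic and van der Heijden).
    The signal value for a frame is the proportion of moving pixels.
    """
    signal = []
    for mask in masks:
        total = sum(len(row) for row in mask)
        moving = sum(sum(row) for row in mask)
        signal.append(moving / total if total else 0.0)
    return signal
```

For example, three 2×2 frames with 0, 2, and 4 moving pixels yield the signal [0.0, 0.5, 1.0].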

For movements with an oscillatory component (as present in tonic-clonic seizures), an optical flow-based method was utilized. Using this method, a time-series motion vector field was created. This vector field was used to construct a path history, where only the unbroken paths during a period of 1 s were analyzed for direction reversals (a reversal is a change in direction of more than 90°). An oscillation frequency of 2.5 Hz was previously found to be a good filter for separating ictal oscillation from paroxysmal events (24).
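The reversal-counting rule can be made concrete with a short sketch. It assumes the optical-flow tracker already yields, for each unbroken 1 s path, a list of consecutive displacement vectors; a reversal (direction change over 90°) corresponds to a negative dot product between consecutive displacements. The reversal-rate-to-frequency mapping here is a simplification for illustration, not the paper's exact formula.

```python
def count_reversals(path):
    """Count direction reversals along one tracked path.

    `path` is a list of consecutive displacement vectors (dx, dy) from
    optical-flow tracking. A reversal is a change in direction of more
    than 90 degrees, i.e., a negative dot product between consecutive
    displacement vectors.
    """
    reversals = 0
    for (dx1, dy1), (dx2, dy2) in zip(path, path[1:]):
        if dx1 * dx2 + dy1 * dy2 < 0:
            reversals += 1
    return reversals


def is_ictal_oscillation(path, window_s=1.0, threshold_hz=2.5):
    """Flag a 1 s unbroken path whose reversal rate reaches the 2.5 Hz cutoff."""
    return count_reversals(path) / window_s >= threshold_hz
```

A zigzagging path such as [(1, 0), (-1, 0), (1, 0), (-1, 0)] has 3 reversals in one second and passes the 2.5 Hz filter, while a slowly drifting path does not.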

2.4. Clustering analysis

To separate hyperkinetic, tonic, and tonic-clonic seizures, unsupervised data representations were explored. A common technique for data visualization and exploration was used: a cluster diagram, where each data sample is represented by a point on a 2D chart that is inspected by a human or a clustering algorithm to find meaningful structures (clusters). Since the samples are usually multidimensional, they must be transformed by dimension reduction algorithms into 2D space before drawing the diagram. After the original data are projected into lower dimensions, the diagram axes do not have any particular unit or meaning.

After the complete feature extraction from the patient videos, the time series of motion and oscillation signals were reduced to two dimensions. Each time series was transformed into a lower-dimensional data space by extracting time-series statistical features, yielding a low and fixed data dimension. The seizures had varying durations and therefore variable-length time series, but principal component analysis (PCA), used for the data reduction into 2D, requires a fixed data dimension, which the extracted feature vectors provided. To analyze ictal motion characteristics in the video data, motion features from the catch22 feature collection were utilized. During the initial experimentation, 22 statistics were calculated by the catch22 library (26) from the training set and fed into PCA. With the final 2D data, cluster plots were created representing the seizures in different colors to visualize their distribution. The discrimination power of the 22 statistics was further analyzed on the training set, and the original catch22 feature set was reduced to five features before the PCA step by incrementally removing redundant features and visually verifying that the cluster diagram remained essentially unchanged.
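The variable-length-to-fixed-dimension step can be illustrated with a toy feature extractor. The sketch below computes three simple statistics (mean, standard deviation, lag-1 autocorrelation) as stand-ins for the actual catch22 features; the point is only that every seizure, whatever its duration, maps to a vector of the same length, which can then be fed to PCA.

```python
import math


def extract_features(series):
    """Map a variable-length time series to a fixed-length feature vector.

    The three statistics (mean, standard deviation, lag-1 autocorrelation)
    are illustrative stand-ins for the catch22 feature set.
    """
    n = len(series)
    mean = sum(series) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    num = sum((series[t] - mean) * (series[t + 1] - mean) for t in range(n - 1))
    den = sum((x - mean) ** 2 for x in series)
    ac1 = num / den if den else 0.0  # constant series has undefined autocorrelation
    return [mean, std, ac1]
```

Two seizures of different durations both produce 3-element vectors, so a PCA fitted on the training set can later project unseen testing-set seizures into the same 2D space.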

Although data clustering is often applied purely for data visualization, separate training and testing steps were implemented in this experimentation. The dimension reduction methods were first applied to the training patient group to develop an initial visualization and to find optimal parameters for seizure differentiation. After the training phase, the computed PCA coefficients were applied to the testing patient data, projecting the testing data points into the same 2D data space with the same dimension reduction transformations; the performance of the model was assessed first by visual evaluation of the data points and then by the classification analysis discussed below. In the final step, agglomerative clustering was used to discover clusters in the diagram and to observe how the unsupervised clusters represented the different seizure types.
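The final clustering step can be sketched with a minimal agglomerative procedure on the 2D points. The paper does not state the linkage criterion or library used, so this pure-Python single-linkage version is illustrative only; in practice one would likely use scikit-learn's AgglomerativeClustering.

```python
def agglomerative_clusters(points, n_clusters):
    """Greedy single-linkage agglomerative clustering of 2D points.

    Starts with every point in its own cluster and repeatedly merges the
    two clusters whose closest members are nearest, until `n_clusters`
    clusters remain.
    """
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None  # (squared distance, cluster index i, cluster index j)
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min((ax - bx) ** 2 + (ay - by) ** 2
                        for ax, ay in clusters[i] for bx, by in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return clusters
```

On two well-separated groups of PCA-projected points, the procedure recovers the two groups regardless of their sizes; the discovered clusters can then be compared against the known seizure-type labels, as in Figures 2B and 3B.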

2.5. Classification analysis

In addition to the unsupervised clustering analysis, which is based on dimension reduction and cluster identification on 2D plots, a classification method was employed in this study to assess the performance of a supervised learning approach. Analyzing the same data with these two different techniques gives better insight into the discriminative power of the extracted features, because the first method reduces the data dimension drastically while the second works on the time series directly. Note that the time series of pixel statistics already represent a heavily reduced data dimension compared to the original video frames. A deep-learning network specialized for time-series classification, the multivariate long short-term memory fully convolutional network (MLSTM-FCN) (35), was trained on the training set to classify the data points as hyperkinetic, tonic, or tonic-clonic seizures and to make predictions for the unseen data points of the testing set. The implementation was based on the tsai library (36). The hyperparameters were an RNN layer count of 2, a hidden neuron count of 200, and RNN and FCN dropouts of 0.05. Whereas the clustering method transformed the time series into 2D data with dimension reduction techniques (catch22, PCA), the MLSTM-FCN model worked on the time series directly, processing and classifying each time series into a single seizure category.

After the automatic analysis of the data points of the testing set, we evaluated the performance of the deep-learning network by calculating the accuracy of the classification of hyperkinetic, tonic, and tonic-clonic seizures. Based on the classification, the overall accuracy of the model was determined. The description of the method used in this study is presented in Figure 1.


Figure 1. Summary of the method.

3. Results

3.1. Unsupervised clustering analysis

Incremental changes in the feature generation improved the discrimination power to differentiate between tonic-clonic, hyperkinetic, and tonic seizures. Two different motion feature setups, static motion features (Figure 2) and temporal motion features (Figure 3), were used to compare the feasibility of the features. Oscillation tracking was not included in these figures, as it did not improve seizure cluster formation in the clustering analysis (see Supplementary material 2). Although the data points are grouped in different shapes in that case compared to Figure 3, the general considerations do not change, and many tonic-clonic points and a considerable fraction of the tonic points appear in the hyperkinetic cluster.


Figure 2. Clustering analysis of tonic-clonic, hyperkinetic, and tonic seizures using static motion features in the training and testing phase (A). The second figure shows the agglomerative clustering results (B).


Figure 3. Clustering analysis of tonic-clonic, hyperkinetic, and tonic seizures using temporal motion difference features in the training and testing phase (A). The second figure shows the agglomerative clustering results (B).

The static motion features are the original time series extracted from the videos, while the temporal features are a delta (lag-difference) time series measuring change over time: each delta value is the difference between the current value and a past value separated by a fixed lag (e.g., 1 s before). In all figures, tonic-clonic, hyperkinetic, and tonic seizures are marked in green, blue, and orange, respectively, for the training phase and in light green, light blue, and light orange, respectively, for the testing phase.
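The lag-difference transform is simple to state in code. In this sketch the lag is given in samples; how many samples correspond to the 1 s lag depends on the signal's sampling rate, which the paper does not specify.

```python
def lag_difference(series, lag):
    """Delta time series: each value minus the value `lag` samples earlier.

    With a motion signal sampled at, say, 10 Hz (an assumption for
    illustration), lag=10 would measure the change over one second.
    The output is shorter than the input by `lag` samples.
    """
    return [series[t] - series[t - lag] for t in range(lag, len(series))]
```

For a monotonically rising signal [1, 3, 6, 10], a lag of one sample yields [2, 3, 4], capturing how fast the motion changes rather than its absolute level.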

In Figure 2A, a cluster of tonic-clonic and tonic seizures appeared on the left side of the figure and hyperkinetic seizure clusters on the right. This cluster is visually noticeable in the training phase, while the hyperkinetic cluster spreads to both sides of the figure in the testing phase. Different types of seizures were interspersed in the center of the figure, and some hyperkinetic seizures appeared in the left cluster among the tonic-clonic and tonic seizures. The tonic-clonic seizures did not form a coherent, separate structure among the data points. Figure 2B shows the unsupervised clustering results: the agglomerative clustering successfully isolated the tonic and hyperkinetic clusters on the left and right sides.

In Figure 3A, the clusters switched sides: the hyperkinetic and tonic clusters were clearly separate, but the majority of tonic-clonic seizures fell within the hyperkinetic seizure cluster region. The hyperkinetic seizure cluster was plotted clearly on the left side of the figure in both the training and testing phases. The tonic seizure cluster was more dispersed in the testing phase than in the training phase. Tonic-clonic seizures did not separate from hyperkinetic seizures in either phase when this motion feature was used. In Figure 3B, the agglomerative clustering discovered two clusters, one for hyperkinetic and one for tonic seizures. The tonic seizures are spread above the hyperkinetic seizures on the left side, and the clustering was not able to include this upper part in the tonic cluster.

3.2. Performance analysis

To analyze the performance of the method, an incremental analysis was done in addition to the unsupervised clustering analysis. By training a deep-learning network on the background subtraction signal and comparing the results with the original annotations, we calculated the accuracy of the seizure classification method. We ran a leave-one-out cross-validation of the deep-learning method. The cross-validation was repeated 100 times to estimate the unbiased accuracy and its confidence interval, since model performance varies between training runs: deep-learning training is not deterministic due to random weight initialization. Our method achieved an overall mean accuracy of 74.68% and an f1-score of 74.26%. The hyperkinetic, tonic, and tonic-clonic seizures had mean accuracies of 91.03, 87.90, and 45.12%, respectively. The mean f1-scores were 92.83, 89.79, and 37.18%, respectively. The hyperkinetic and tonic seizures had very similar accuracy and f1-score values, while the f1-score of the tonic-clonic seizures was 8% lower than the corresponding accuracy value. The accuracies and f1-scores of hyperkinetic and tonic seizures were high, at approximately 90%, while the tonic-clonic seizures reached only 45.12 and 37.18%, respectively. Regarding the 95% confidence intervals, the hyperkinetic, tonic, and tonic-clonic seizures had 1.1, 1.5, and 4.2% for accuracy and 1, 1.5, and 4.1% for f1-scores, respectively. The confidence intervals were similar for hyperkinetic and tonic seizures, but more than double for tonic-clonic seizures. As the low accuracy and wide confidence interval of tonic-clonic seizures suggest, this seizure type was not recognized at a satisfactory level because there was not enough patient data to distinguish it from the other two types. This result is on par with our unsupervised clustering results, where the hyperkinetic and tonic seizures could be separated quite well but the tonic-clonic data points were spread around. The accuracy and confidence interval of each seizure type are presented in Table 2.
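The repeated cross-validation summary can be reproduced with a short helper. This sketch assumes one score (accuracy or f1) per cross-validation repetition and uses a normal approximation for the 95% interval half-width; the paper does not state which interval construction was used, so this is an illustrative choice.

```python
import math


def mean_and_ci(values, z=1.96):
    """Mean and 95% confidence-interval half-width (normal approximation).

    `values` holds one score (e.g., accuracy) per cross-validation run;
    the half-width is z times the standard error of the mean, using the
    sample (n-1) variance.
    """
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    half_width = z * math.sqrt(var / n)
    return mean, half_width
```

Applied to the 100 per-run accuracies of each seizure type, this yields the mean accuracies and confidence intervals reported in Table 2.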


Table 2. The unbiased accuracy, f1-scores, and confidence intervals after 100 cross-validation runs.

4. Discussion

In this study, we present a novel method for differentiating between tonic-clonic, tonic, and hyperkinetic motor seizures based on the automatic analysis of motion and oscillation signals from previously annotated video data. The algorithmic component of the Nelli® hybrid (algorithmic-human) seizure monitoring system has previously been studied for automated seizure detection, but in the present study, it was tested for the first time as an automated seizure classification tool by applying video signal analysis and unsupervised clustering. We aimed to develop an algorithmic differentiation ability that would aid clinicians in classifying seizures with Nelli® hybrid seizure monitoring, which is currently used in clinical practice (37). In the present study, our model differentiated and classified hyperkinetic and tonic seizures with promising accuracies of 91 and 88%, respectively. However, tonic-clonic seizures were classified with only 45% accuracy. The f1-scores for hyperkinetic, tonic, and tonic-clonic seizures were 93, 90, and 37%, respectively.

Screening and differential diagnosis between different seizure types are essential components in the detection of seizures and the correct implementation of treatment (4). Seizure classification relies on objective criteria applied to the ictal observations of caregivers or clinicians. Most motor seizures have distinguishable motor manifestations that indicate a specific seizure type; however, oftentimes there are no eyewitnesses, especially for nocturnal seizures (4). Depending on the motor manifestations, seizure type classification can be difficult even with the help of seizures recorded on video, because seizure semiology is often prone to inter-observer discrepancy due to reliance on qualitative criteria (observer bias) (38). Moreover, it is very time- and resource-consuming to manually annotate and classify the seizures of each patient (39, 40). A system capable of measuring seizure features qualitatively as well as quantitatively would allow the detection of changes in seizure severity or seizure propagation. Also, in the case of multiple seizure types and high seizure frequency during the monitoring period, automatic tools could save time and resources in video annotation during seizure monitoring. Automatic classification could also improve seizure alarm systems by enabling alarms for different seizure types, which might be useful especially in epilepsy monitoring units or institutional settings. Furthermore, EEG-based automatic seizure classification methods have already been examined with promising results as a clinical application of an automatic seizure classification tool (41).

In previous studies, automatic classification of epileptic seizures vs. psychogenic non-epileptic events was conducted with a multi-stream approach, reaching an f1-score and accuracy of 0.89 and 0.87, respectively, in seizure-wise cross-validation and 0.75 and 0.72, respectively, in leave-one-subject-out analysis (42). Hyperkinetic seizures have been automatically differentiated from non-hyperkinetic seizures and sleep-related paroxysmal events with 80% probability (43) and 80% accuracy (44). In another study, CNNs and recurrent neural networks (RNNs) were combined to automatically classify seizure videos into focal onset seizures and focal to bilateral tonic-clonic seizures, achieving 98.9% accuracy (45). However, an automatic system that differentiates motor seizures into three types has not been previously reported in the literature. Our study reached relatively good accuracy in hyperkinetic and tonic seizure classification, and the results for hyperkinetic seizure classification aligned with a previous research study (43). However, tonic-clonic seizures were not differentiated as accurately as the other two seizure types, which weakens the performance, especially considering the clinical relevance of tonic-clonic seizure documentation for decreasing the risk of SUDEP (46). In previous validation studies of the Nelli® hybrid (algorithmic-human annotation) seizure monitoring system, by contrast, all tonic-clonic seizures were correctly categorized (23) due to their stereotypic and easily recognizable motor manifestations. Since tonic-clonic seizures could not be separated by either the clustering or the classification method in this study, the limitation appears not to be caused by the applied methods; rather, the extracted time-series descriptor lacks discriminative power for this task.

Catch22 was utilized to extract statistical descriptors that drastically reduce the data dimensionality of the training and testing sets. This library, a collection of the best-performing statistics for time-series analysis across various science fields, turned out to be suitable for this task. To select the statistical features with real discriminative power for the current study, redundant features were removed from the initial set one by one after verifying that the cluster diagrams were unaffected. The deep-learning experiment after the cluster analysis confirmed the good overall discriminative power. Since there were few tonic-clonic seizures compared to hyperkinetic and tonic seizures, they were not distinguished as well as the other seizure types. This is not a by-product of catch22 or deep learning but a common phenomenon in machine learning when a class is underrepresented in the learning task.

This study has several limitations. Our patient population was quite small, and the number of tonic-clonic seizures included in the study was especially low. The majority (>90%) of the included seizures were hyperkinetic or tonic, which may have affected the performance of our model; a dataset with more evenly represented seizure types might improve model development in future research. Also, due to availability, we only included tonic, tonic-clonic, and hyperkinetic seizures, which usually have recognizable motor manifestations and are reliably classified by human annotators. Seizures included in this study represented varying semiologies even within a seizure type, as shown in Supplementary material 1, which may have caused challenges in classification. A larger patient group would enable further training of the model and improve the statistical reliability of the results; however, running cross-validation 100 times, as done in our study, partially mitigates this issue. The patient dataset consisted of adult patients, which limits the generalizability of the results to pediatric patients. Furthermore, we only tested events that were confirmed to be seizures, with no category for non-epileptic events, which may cause an overestimation of the performance. However, previous studies utilizing our system have shown accurate automatic seizure detection across various seizure types (43), and the exclusion of non-epileptic events from hyperkinetic seizures has also been reported with relatively high accuracy (42). In addition, seizures with more subtle motor manifestations can be challenging to detect automatically, as previously reported (23), even though some studies have shown accurate detection of such seizures (16). Seizures with minor motor manifestations may be difficult to detect even for human annotators, which provides a topic for future development.
Furthermore, simultaneous analysis of more seizure types might be challenging due to similar motion and oscillation signal profiles, for example, between myoclonic and tonic seizures, or between tonic-clonic and clonic seizures. There are other general limitations related to video monitoring: the patient should stay in sight of the camera, a caregiver should avoid being in the frame so as not to affect the motion signal, and a blanket can impede the visible movement of the patient. It is important to maintain the same monitoring settings throughout the monitoring period to avoid the effect of patient- and environment-related factors on movement detection (47).

5. Conclusion

The quantitative analysis with selected motion features distinguished semiological differences between epileptic motor seizures and enabled differentiation of hyperkinetic and tonic seizure types from video data in patients with DRE. Our results suggest that motion signal profiles allow motor seizure differentiation and classification. The system achieved a promising overall accuracy and f1-score of 74% in the testing phase. Hyperkinetic and tonic seizures were classified with 91 and 88% accuracy, respectively, but the accuracy for tonic-clonic seizures was only 45%. The f1-scores of hyperkinetic, tonic, and tonic-clonic seizures were 93, 90, and 37%, respectively. Future studies are needed with a larger and more robust dataset, including additional motor seizure types and false positive events. These developments hold the potential to streamline the clinical workflow of video-based seizure monitoring systems by providing a supporting tool for seizure classification. In summary, despite the low accuracy in the classification of tonic-clonic seizures, the results of the present study can be considered a step toward an automatic seizure classification tool for clinicians.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by Ethics Committee of Tampere University Hospital. The studies were conducted in accordance with the local legislation and institutional requirements. Written informed consent for participation in this study was provided by the participants' legal guardians/next of kin. Written informed consent was obtained from the individual(s), and minor(s)' legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.

Author contributions

PO: Writing—original draft, Writing—review & editing. CK: Writing—original draft, Writing—review & editing, Formal analysis, Software. EM: Formal analysis, Software, Writing—review & editing. PR: Formal analysis, Writing—review & editing. KA: Project administration, Writing—review & editing. AK: Data curation, Formal analysis, Investigation, Methodology, Writing—review & editing. JP: Data curation, Investigation, Supervision, Writing—original draft, Writing—review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

CK, EM, PR, AK, and KA are employees of Neuro Event Labs, the company that provided the equipment and technology used in the study. PO has provided medical consultation for Neuro Event Labs. JP and KA are shareholders of Neuro Event Labs. JP has participated in clinical trials for Eisai, UCB, and Bial; received grants from Eisai, Medtronic, UCB, and Liva-Nova; received speaker honoraria from LivaNova, Eisai, Medtronic, Orion Pharma, and UCB; received support for travel to congresses from LivaNova, Eisai, Medtronic, and UCB; and participated in advisory boards for Arvelle, Novartis, LivaNova, Eisai, Medtronic, UCB, and Pfizer.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fneur.2023.1270482/full#supplementary-material

References

1. Kwan P, Arzimanoglou A, Berg AT, Brodie MJ, Allen Hauser W, Mathern G, et al. Definition of drug resistant epilepsy: consensus proposal by the ad hoc Task Force of the ILAE Commission on Therapeutic Strategies. Epilepsia. (2010) 51:1069–77. doi: 10.1111/j.1528-1167.2009.02397.x

2. Laxer KD, Trinka E, Hirsch LJ, Cendes F, Langfitt J, Delanty N, et al. The consequences of refractory epilepsy and its treatment. Epilepsy Behav. (2014) 37:59–70. doi: 10.1016/j.yebeh.2014.05.031

3. Massey CA, Sowers LP, Dlouhy BJ, Richerson GB. Mechanisms of sudden unexpected death in epilepsy: the pathway to prevention. Nat Rev Neurol. (2014) 10:271–82. doi: 10.1038/nrneurol.2014.64

4. Elger CE, Hoppe C. Diagnostic challenges in epilepsy: seizure under-reporting and seizure detection. Lancet Neurol. (2018) 17:279–88. doi: 10.1016/S1474-4422(18)30038-3

5. Hoppe C, Poepel A, Elger CE. Epilepsy: accuracy of patient seizure counts. Arch Neurol. (2007) 64:1595–9. doi: 10.1001/archneur.64.11.1595

6. Ulate-Campos A, Coughlin F, Gaínza-Lein M, Fernández IS, Pearl PL, Loddenkemper T. Automated seizure detection systems and their effectiveness for each type of seizure. Seizure. (2016) 40:88–101. doi: 10.1016/j.seizure.2016.06.008

7. Beniczky S, Wiebe S, Jeppesen J, Tatum WO, Brazdil M, Wang Y, et al. Automated seizure detection using wearable devices: a clinical practice guideline of the International League Against Epilepsy and the International Federation of Clinical Neurophysiology. Epilepsia. (2021) 62:632–46. doi: 10.1111/epi.16818

8. Shoeibi A, Khodatars M, Ghassemi N, Jafari M, Moridian P, Alizadehsani R, et al. Epileptic seizures detection using deep learning techniques: a review. Int J Environ Res Public Health. (2021) 18:5780. doi: 10.3390/ijerph18115780

9. Fisher RS, Cross JH, French JA, Higurashi N, Hirsch E, Jansen FE, et al. Operational classification of seizure types by the international league against epilepsy: position paper of the ILAE commission for classification and terminology. Epilepsia. (2017) 58:522–30. doi: 10.1111/epi.13670

10. Lüders H, Acharya J, Baumgartner C, Benbadis S, Bleasel A, Burgess R, et al. Semiological seizure classification. Epilepsia. (1998) 39:1006–13. doi: 10.1111/j.1528-1157.1998.tb01452.x

11. Turek G, Skjei K. Seizure semiology, localization, and the 2017 ILAE seizure classification. Epilepsy Behav. (2022) 126:108455. doi: 10.1016/j.yebeh.2021.108455

12. Beniczky S, Tatum WO, Blumenfeld H, Stefan H, Mani J, Maillard L, et al. Seizure semiology: ILAE glossary of terms and their significance. Epileptic Disord. (2022) 24:447–95. doi: 10.1684/epd.2022.1430

13. Tufenkjian K, Lüders HO. Seizure semiology: its value and limitations in localizing the epileptogenic zone. J Clin Neurol. (2012) 8:243–50. doi: 10.3988/jcn.2012.8.4.243

14. Karayiannis NB, Xiong Y, Tao G, Frost JD Jr, Wise MS, Hrachovy RA, et al. Automated detection of videotaped neonatal seizures of epileptic origin. Epilepsia. (2006) 47:966–80. doi: 10.1111/j.1528-1167.2006.00571.x

15. Ahmedt-Aristizabal D, Fookes C, Denman S, Nguyen K, Fernando T, Sridharan S, et al. A hierarchical multimodal system for motion analysis in patients with epilepsy. Epilepsy Behav. (2018) 87:46–58. doi: 10.1016/j.yebeh.2018.07.028

16. Hou JC, Thonnat M, Bartolomei F, McGonigal A. Automated video analysis of emotion and dystonia in epileptic seizures. Epilepsy Res. (2022) 184:106953. doi: 10.1016/j.eplepsyres.2022.106953

17. Chen L, Yang X, Liu Y, Zeng D, Tang Y, Yan B, et al. Quantitative and trajectory analysis of movement trajectories in supplementary motor area seizures of frontal lobe epilepsy. Epilepsy Behav. (2009) 14:344–53. doi: 10.1016/j.yebeh.2008.11.007

18. Hartl E, Knoche T, Choupina HMP, Rémi J, Vollmar C, Cunha JPS, et al. Quantitative and qualitative analysis of ictal vocalization in focal epilepsy syndromes. Seizure. (2018) 60:178–83. doi: 10.1016/j.seizure.2018.07.008

19. Yang X, Chen L, Liu Y, Zeng D, Tang Y, Yan B, et al. Motor trajectories in automatisms and their quantitative analysis. Epilepsy Res. (2009) 83:97–102. doi: 10.1016/j.eplepsyres.2008.09.010

20. Cunha JP, Rémi J, Vollmar C, Fernandes JM, Gonzalez-Victores JA, Noachtar S. Upper limb automatisms differ quantitatively in temporal and frontal lobe epilepsies. Epilepsy Behav. (2013) 27:404–8. doi: 10.1016/j.yebeh.2013.02.026

21. Karácsony T, Loesch-Biffar AM, Vollmar C, Rémi J, Noachtar S, Cunha JPS. Novel 3D video action recognition deep learning approach for near real time epileptic seizure classification. Sci Rep. (2022) 12:19571. doi: 10.1038/s41598-022-23133-9

22. Ojanen P, Zabihi M, Knight A, Roivainen R, Lamusuo S, Peltola J. Feasibility of video/audio monitoring in the analysis of motion and treatment effects on night-time seizures - Interventional study. Epilepsy Res. (2022) 184:106949. doi: 10.1016/j.eplepsyres.2022.106949

23. Peltola J, Basnyat P, Armand Larsen S, Østerkjærhuus T, Vinding Merinder T, Terney D, et al. Semiautomated classification of nocturnal seizures using video recordings. Epilepsia. (2022) 1–7. doi: 10.1111/epi.17207

24. Ojanen P, Knight A, Hakala A, Bondarchik J, Noachtar S, Peltola J, et al. An integrative method to quantitatively detect nocturnal motor seizures. Epilepsy Res. (2021) 169:106486. doi: 10.1016/j.eplepsyres.2020.106486

25. Armand Larsen S, Terney D, Østerkjerhuus T, Vinding Merinder T, Annala K, Knight A, et al. Automated detection of nocturnal motor seizures using an audio-video system. Brain Behav. (2022) 12:e2737. doi: 10.1002/brb3.2737

26. Lubba CH, Sethi SS, Knaute P, Schultz SR, Fulcher BD, Jones NS. catch22: CAnonical time-series CHaracteristics. Data Min Knowl Disc. (2019) 33:1821–52. doi: 10.1007/s10618-019-00647-x

27. Jackson TD, Sethi S, Dellwik E, Angelou N, Bunce A, Van Emmerik T, et al. The motion of trees in the wind: a data synthesis. Biogeosciences. (2021) 18:4059–72. doi: 10.5194/bg-18-4059-2021

28. Papacharalampous G, Tyralis H, Papalexiou SM, Langousis A, Khatami S, Volpi E, et al. Global-scale massive feature extraction from monthly hydroclimatic time series: statistical characterizations, spatial patterns and hydrological similarity. Sci Total Environ. (2021) 767:144612. doi: 10.1016/j.scitotenv.2020.144612

29. Quicke P, Sun Y, Arias-Garcia M, Beykou M, Acker CD, Djamgoz MB, et al. Voltage imaging reveals the dynamic electrical signatures of human breast cancer cells. Commun Biol. (2022) 5:1178. doi: 10.1038/s42003-022-04077-2

30. Theodorou E, Wang S, Kang Y, Spiliotis E, Makridakis S, Assimakopoulos V. Exploring the representativeness of the M5 competition data. Int J Forecast. (2022) 38:1500–6. doi: 10.1016/j.ijforecast.2021.07.006

31. Zhou W, Chan YE, Foo CS, Zhang J, Teo JX, Davila S, et al. High-resolution digital phenotypes from consumer wearables and their applications in machine learning of cardiometabolic risk markers: cohort study. J Med Internet Res. (2022) 24:e34669. doi: 10.2196/34669

32. Beniczky S, Ryvlin P. Standards for testing and clinical validation of seizure detection devices. Epilepsia. (2018) 59:9–13. doi: 10.1111/epi.14049

33. Zivkovic Z, Van Der Heijden F. Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recognit Lett. (2006) 27:773–80. doi: 10.1016/j.patrec.2005.11.005

34. Hirschmuller H. Stereo processing by semiglobal matching and mutual information. IEEE Trans Pattern Anal Mach Intell. (2008) 30:328–41. doi: 10.1109/TPAMI.2007.1166

35. Karim F, Majumdar S, Darabi H, Harford S. Multivariate LSTM-FCNs for time series classification. Neural Netw. (2019) 116:237–45. doi: 10.1016/j.neunet.2019.04.014

36. Oguiza I. TSAI - A State-of-the-art Deep Learning Library for Time Series Sequential Data. Github. (2022). Available online at: https://github.com/timeseriesAI/tsai (accessed January 31, 2023).

37. Basnyat P, Mäkinen J, Saarinen JT, Peltola J. Clinical utility of a video/audio-based epilepsy monitoring system Nelli. Epilepsy Behav. (2022) 133:108804. doi: 10.1016/j.yebeh.2022.108804

38. McGonigal A, Bartolomei F, Chauvel P. On seizure semiology. Epilepsia. (2021) 62:2019–35. doi: 10.1111/epi.16994

39. Swinnen L, Chatzichristos C, Jansen K, Lagae L, Depondt C, Seynaeve L, et al. Accurate detection of typical absence seizures in adults and children using a two-channel electroencephalographic wearable behind the ears. Epilepsia. (2021) 62:2741–52. doi: 10.1111/epi.17061

40. Reus EEM, Visser GH, Cox FME. Using sampled visual EEG review in combination with automated detection software at the EMU. Seizure. (2020) 80:96–9. doi: 10.1016/j.seizure.2020.06.002

41. Cao X, Yao B, Chen B, Sun W, Tan G. Automatic seizure classification based on domain-invariant deep representation of EEG. Front Neurosci. (2021) 15:760987. doi: 10.3389/fnins.2021.760987

42. Hou JC, McGonigal A, Bartolomei F, Thonnat M. A Multi-Stream Approach for Seizure Classification with Knowledge Distillation. In: 17th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). Washington, DC, USA (2021). p. 1–8.

43. Rémi J, Cunha JP, Vollmar C, Topçuoglu ÖB, Meier A, Ulowetz S, et al. Quantitative movement analysis differentiates focal seizures characterized by automatisms. Epilepsy Behav. (2011) 20:642–7. doi: 10.1016/j.yebeh.2011.01.020

44. Moro M, Pastore VP, Marchesi G, Proserpio P, Tassi L, Castelnovo A, et al. Automatic video analysis and classification of sleep-related hypermotor seizures and disorders of arousal. Epilepsia. (2023) 64:1653–62. doi: 10.1111/epi.17605

45. Pérez-García F, Scott C, Sparks R, Diehl B, Ourselin S. Transfer learning of deep spatiotemporal networks to model arbitrarily long videos of seizures. In: de Bruijne M, et al., editors. Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Lecture Notes in Computer Science. Cham: Springer (2021).

46. Walczak TS, Leppik IE, D'Amelio M, Rarick J, So E, Ahman P, et al. Incidence and risk factors in sudden unexpected death in epilepsy: a prospective cohort study. Neurology. (2001) 56:519–25. doi: 10.1212/WNL.56.4.519

47. Yang Y, Sarkis RA, Atrache RE, Loddenkemper T, Meisel C. Video-based detection of generalized tonic-clonic seizures using deep learning. IEEE J Biomed Health Inform. (2021) 25:2997–3008. doi: 10.1109/JBHI.2021.3049649

Keywords: epilepsy, seizure classification, motor seizures, signal analysis, biomarkers

Citation: Ojanen P, Kertész C, Morales E, Rai P, Annala K, Knight A and Peltola J (2023) Automatic classification of hyperkinetic, tonic, and tonic-clonic seizures using unsupervised clustering of video signals. Front. Neurol. 14:1270482. doi: 10.3389/fneur.2023.1270482

Received: 31 July 2023; Accepted: 12 October 2023;
Published: 02 November 2023.

Edited by:

Fernando Cendes, State University of Campinas, Brazil

Reviewed by:

Erik Taubøll, Oslo University Hospital, Norway
Giuseppe Didato, Carlo Besta Neurological Institute (IRCCS), Italy

Copyright © 2023 Ojanen, Kertész, Morales, Rai, Annala, Knight and Peltola. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Petri Ojanen, petri.ojanen@tuni.fi
