# Seizure Prediction Efforts by NeuroVista, Seer Medical, and Epiminder

Epilepsy researchers and companies have long pursued a reliable way to **forecast seizures** before they occur, aiming to improve patient safety and autonomy. This report examines three leading efforts – **NeuroVista**, **Seer Medical**, and **Epiminder** – focusing on their technology, scientific claims, validation rigor, and critiques. Limited contrast with other epilepsy-focused AI initiatives (like responsive neurostimulators and wearable seizure detectors) is also provided to highlight different approaches and validation strategies.

## NeuroVista (Intracranial Seizure Advisory System)

**NeuroVista** was a Seattle-based company that, in partnership with Australian researchers, developed one of the first implantable seizure prediction devices. Its work culminated in a landmark human trial around 2010–2013.

### Technology and Trials

NeuroVista’s system consisted of an **implanted intracranial EEG electrode array** placed between the skull and brain surface, connected to an implanted telemetry unit in the chest ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=Professor%20Cook%20and%20his%20team%2C,EEG%20data)). This unit continuously recorded the brain’s electrical activity and transmitted data to an external handheld device. The handheld provided **real-time risk alerts** using a simple light system – blue for low risk, white for moderate, and red for high likelihood of an impending seizure ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=device%20which%20could%20be%20implanted,EEG%20data)). The intended use case was to give patients advance warning (on the order of minutes to hours) that a seizure was likely, so they could seek safety or medication.

NeuroVista’s key study was a **first-in-man feasibility trial** in Melbourne, Australia (2010–2013) with 15 patients who had drug-resistant focal epilepsy ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=Methods%3A%20%20We%20enrolled%20patients,patients%20then%20entered)) ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=The%20two%20year%20study%20included,seizures%20controlled%20with%20existing%20treatments)). Patients underwent surgical implantation of the electrodes and chest unit.
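
To picture the handheld’s role in this setup, the sketch below maps a per-patient risk estimate onto the three-light advisory scheme described above. It is a minimal illustration under assumed parameters, not NeuroVista’s algorithm: the `advisory_from_risk` function, its thresholds, and the risk input are hypothetical.

```python
from enum import Enum


class Advisory(Enum):
    """Three-level advisory shown on the external handheld device."""
    LOW = "blue"        # low likelihood of an impending seizure
    MODERATE = "white"  # moderate likelihood
    HIGH = "red"        # high likelihood - time to seek safety or medication


def advisory_from_risk(seizure_risk: float,
                       low_to_moderate: float = 0.3,
                       moderate_to_high: float = 0.7) -> Advisory:
    """Map a risk estimate in [0, 1] to an advisory colour.

    The thresholds here are illustrative placeholders; in the trial the
    algorithm and its operating points were tuned per patient.
    """
    if seizure_risk >= moderate_to_high:
        return Advisory.HIGH
    if seizure_risk >= low_to_moderate:
        return Advisory.MODERATE
    return Advisory.LOW


print(advisory_from_risk(0.85).value)  # "red"
```
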
The trial design included an initial data-gathering phase (to train individualized prediction algorithms) followed by an advisory phase where patients received live seizure-risk warnings ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=Methods%3A%20%20We%20enrolled%20patients,events%20at%204%20months%20after)) ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=For%20the%20first%20month%20of,seizure%20prediction%20for%20each%20patient)). All patients had relatively frequent seizures (2–12 per month) and no psychogenic non-epileptic events, to ensure a clear training signal ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=Methods%3A%20%20We%20enrolled%20patients,and%20performance%20better%20than%20expected)). The primary endpoints were safety (device-related adverse events) and the performance of the algorithm in predicting seizures, while secondary endpoints included quality-of-life measures with the device ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=patients%20entered%20a%20data%20collection,4%20months%20after%20initiation%20of)).

### Claims and Performance

**NeuroVista’s trial made the striking claim that seizure prediction in humans is possible.** In 11 of the 15 participants (73%), the personalized algorithms could **predict upcoming seizures at better-than-chance levels** ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=seizure%20prediction%20for%20each%20patient)). Specifically, the system issued “high likelihood” warnings with a sensitivity ranging from 65% up to 100% for seizures in those patients ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=device%20met%20enabling%20criteria%20in,and%204%20months%20after%20implantation)). In other words, a significant fraction of seizures were preceded by a red warning. Overall, **eight patients** achieved impressively accurate forecasts, with between 56% and 100% of their seizures correctly predicted by the device ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=The%20system%20correctly%20predicted%20seizures,100%20percent%20of%20the%20time)). This far exceeded random prediction (a random or chance predictor would be correct ~50% of the time by the trial’s definition ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=seizure%20prediction%20for%20each%20patient))).
These results, published in *Lancet Neurology* in 2013 by Cook *et al*., were touted as a **“world-first” demonstration of accurate seizure prediction** in humans ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=Summary%3A%20A%20small%20device%20implanted,first%20study.%20Share)) ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=The%20system%20correctly%20predicted%20seizures,100%20percent%20of%20the%20time)). Importantly, NeuroVista described prediction in terms of risk levels (“high” vs “low” likelihood) rather than 100% certain advance warnings. The claim was that the device could successfully distinguish high-risk periods for seizures from low-risk periods. In practical terms, patients spent a portion of time under a red light indicating elevated risk; most seizures occurred during those red periods. The trial showed this **risk stratification was feasible** – for the responders, a red warning state captured the majority of their seizures ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=The%20system%20correctly%20predicted%20seizures,100%20percent%20of%20the%20time)). This led researchers to conclude that **seizure likelihood is not random but can be forecasted with implanted EEG monitoring** ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=Interpretation%3A%20%20This%20study%20showed,lead%20to%20new%20management%20strategies)). The broader promise was that such a device could improve patient independence and enable acute interventions if one knows a seizure is coming ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=Hospital)) ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=,the%20medical%20journal%20Lancet%20Neurology)).

### Validation and Methodology

NeuroVista’s study was **prospective** in nature and employed patient-specific modeling. Each patient’s EEG data from the first month post-implant was used to **train a custom seizure prediction algorithm for that individual** ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=For%20the%20first%20month%20of,seizure%20prediction%20for%20each%20patient)). The algorithm’s parameters were adjusted to recognize that person’s pre-seizure EEG patterns. This approach acknowledges the well-known fact that EEG signatures of impending seizures vary greatly between patients. The **validation** of the algorithm then occurred in a separate timeframe: once an individual algorithm met preset performance criteria, the device began giving that patient real-time warnings (the advisory phase) and the team assessed its accuracy in ongoing use ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=patients%20entered%20a%20data%20collection,4%20months%20after%20initiation%20of)).
To claim success, the investigators set **objective performance thresholds** before unblinding data to patients. Specifically, a predictor had to achieve **sensitivity >65% for “high-risk” warnings and perform better than a random predictor** in that patient ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=patients%20entered%20a%20data%20collection,events%20at%204%20months%20after)). This means that during the data collection phase the team checked, for each patient, how many of their known seizures would have fallen into high-risk periods versus what would be expected by chance. Only if the algorithm beat chance (p<0.05) and exceeded 65% sensitivity was it considered “enabled” for clinical use ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=patients%20entered%20a%20data%20collection,4%20months%20after%20initiation%20of)). In 11 patients the algorithm hit these marks, while 3 patients’ algorithms never met criteria (and thus those patients never received risk alerts) ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=device%20met%20enabling%20criteria%20in,and%204%20months%20after%20implantation)). One additional patient exited early due to a device issue ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=device%20met%20enabling%20criteria%20in,We%20detected%20no%20significant)).

This validation design – *training on one segment of each patient’s data and testing on future data from the same patient* – provided a reasonably rigorous within-subject assessment. It was essentially a form of **prospective, patient-specific validation**. However, no **subject-independent** validation was attempted (there was no pooling of data to create a general algorithm for all patients). The study demonstrated what is often called *proof-of-concept*: that for many individuals, there exist EEG precursors that an algorithm can learn to detect.

The **statistical rigor** was noteworthy for its time: previous seizure prediction studies in the early 2000s had been criticized for methodological flaws and lack of proper out-of-sample testing ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=Seizure%20prediction%20has%20been%20the,seizure%20prediction%20was%20possible%20in)) ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=clinical%20trial%20in%20Melbourne%20Australia%2C,the%20strategy%20is%20really%20better)). The NeuroVista trial benefited from guidelines developed by experts (Mormann *et al.*, 2007; Snyder *et al.*, 2008) to avoid false positives and chance correlations ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=Seizure%20prediction%20has%20been%20the,seizure%20prediction%20was%20possible%20in)).
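
As a concrete illustration of what such an enabling check involves, the sketch below computes the sensitivity of “high-risk” warnings and compares it against time-matched chance using circularly shifted surrogate warning sequences. This is only a minimal re-creation of the idea; the trial’s actual chance-predictor statistics followed the published Snyder et al. framework, so everything here other than the 65% sensitivity threshold is an assumption.

```python
import numpy as np


def warning_sensitivity(high_risk: np.ndarray, seizure_idx: np.ndarray) -> float:
    """Fraction of seizures that fall inside high-risk periods.

    high_risk   : boolean array, one entry per time step (True = red warning on)
    seizure_idx : integer indices of the time steps at which seizures occurred
    """
    return float(np.mean(high_risk[seizure_idx]))


def beats_chance(high_risk: np.ndarray, seizure_idx: np.ndarray,
                 n_surrogates: int = 2000, seed: int = 0) -> tuple[float, float]:
    """Monte-Carlo check that observed sensitivity exceeds time-matched chance.

    Each surrogate circularly shifts the warning sequence, preserving the total
    time spent in the high-risk state while destroying any alignment with
    seizure times. Returns (observed_sensitivity, p_value).
    """
    rng = np.random.default_rng(seed)
    observed = warning_sensitivity(high_risk, seizure_idx)
    n = len(high_risk)
    null = np.array([
        warning_sensitivity(np.roll(high_risk, rng.integers(n)), seizure_idx)
        for _ in range(n_surrogates)
    ])
    p_value = float(np.mean(null >= observed))
    return observed, p_value


def enabled(high_risk: np.ndarray, seizure_idx: np.ndarray,
            alpha: float = 0.05, min_sensitivity: float = 0.65) -> bool:
    """Enabling rule in the spirit of the trial: >65% sensitivity AND better than chance."""
    sens, p = beats_chance(high_risk, seizure_idx)
    return sens > min_sensitivity and p < alpha
```

Under this kind of null, the expected sensitivity is roughly the fraction of time spent in the high-risk state, so the criterion rewards algorithms that warn sparingly yet still capture most seizures.
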
By requiring performance above chance and using a separate test phase, the trial ensured the predictions were not simply overfitting noise. In terms of **sample size and duration**, the trial was small (15 people) but each patient was recorded continuously for many months (up to 2 years of data in some cases) ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=Karoly%E2%80%99s%202018%20findings%20drew%20from,analyzed%2C%20and%20an%20alert%20was)). This yielded an unprecedented long-term EEG dataset. The consistency of algorithm performance over time was not deeply reported in the initial paper, but later analyses of the data indicated that seizure likelihood often followed cyclical patterns over days/weeks (more on this in Seer Medical’s section). The NeuroVista trial’s focus was not on generalizability to new patients but on **feasibility and safety** within this cohort ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=Interpretation%3A%20%20This%20study%20showed,lead%20to%20new%20management%20strategies)). Notably, the trial measured patient-centered outcomes (like anxiety and quality of life) after a few months of using the advisory device, to see if having warnings made a difference ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=through%20chance%20prediction%20of%20randomly,gov%2C%20number%20NCT01043406)).

### Limitations and Criticisms

Despite its pioneering nature, NeuroVista’s effort had **significant limitations** that have been pointed out by experts:

- **Incomplete Generalizability:** About one-quarter of the patients saw no benefit because the algorithm could not be trained to reliably predict their seizures ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=device%20met%20enabling%20criteria%20in,and%204%20months%20after%20implantation)). This highlights a likely biological reality – some patients simply do not have clear pre-ictal EEG signatures (or they were too subtle to detect with that algorithm). The company’s approach was all-or-nothing per patient; if a person’s data didn’t meet criteria, the device provided no help. This variability was a **major factor in the trial (and company) ending**. As one commentary noted, NeuroVista’s trial showed prediction was *possible in some people*, but the fact that it **“did not work well” in several patients with the initial algorithms led the sponsor to terminate the project** ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=clinical%20trial%20in%20Melbourne%20Australia%2C,the%20strategy%20is%20really%20better)).
- **False Alarms vs. Missed Seizures:** While sensitivity was reported, information on specificity or false alarm rates was not prominently published in initial reports. An algorithm that predicts 100% of seizures by labeling *most time* as “high risk” is not clinically useful.
NeuroVista’s requirement that performance exceed chance was an attempt to balance sensitivity and false positives. In the best-performing patients, about 30% of time was spent in a high-risk state to catch ~80% of seizures ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=In%20high%20risk%20state%20In,63)) (this comes from later analysis of the data). Still, **false warnings** would have occurred. If a red light comes on and no seizure follows, it could cause anxiety or behavior changes unnecessarily. The trial was too small to fully characterize the false alarm burden, and this remains a critical issue for any prediction device.
- **No Demonstrated Clinical Benefit:** Perhaps the most pragmatic critique: after 4 months of using the NeuroVista advisory device, **patients did not show significant improvements in clinical outcomes** (such as reduced anxiety or improved quality of life) compared to baseline ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=device%20met%20enabling%20criteria%20in,and%204%20months%20after%20implantation)). The Lancet Neurology paper explicitly noted *“no significant changes in clinical effectiveness measures”* (e.g. depression, anxiety scales) after introducing seizure warnings ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=device%20met%20enabling%20criteria%20in,and%204%20months%20after%20implantation)). This might be due to the short follow-up or small sample, but it underscores that *prediction alone isn’t a treatment*. If the warnings are not perfectly reliable or patients don’t know how to act on them, the tangible benefits can be hard to achieve. Future studies would need to show that prediction leads to improved safety (fewer injuries) or allows interventions (like fast-acting meds) that actually prevent seizures.
- **Safety and Invasiveness:** The NeuroVista system required two surgeries (brain electrode implantation and chest unit placement) with long-term implanted hardware. In the trial, there were **multiple device-related adverse events** – 11 events in 15 patients within 4 months, including two serious cases (a device that migrated, a fluid seroma) and later two infections that required intervention ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=Findings%3A%20%20We%20implanted%2015,We%20detected%20no%20significant)). While all were resolved, any invasive device carries risk. The relatively high complication rate (compared to, say, a pacemaker) was partly due to the novelty of having electrodes in the subdural space chronically. This level of invasiveness could only be justified for the most severe epilepsy cases, especially if the benefit is just a warning light.
- **Company Demise and Lack of Follow-up:** NeuroVista **ceased operations around 2013–2014**, so no larger trial or product commercialization occurred. This means the initial results were never replicated in a broader population or improved upon by the company.
The trial has been described by some as “appearing like a failed trial” in terms of commercial outcome, even though scientifically it was a breakthrough ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=et%20al,The%20next)). The lack of continuation left open questions: Could algorithms be improved to help the non-responders? Would longer training or more electrodes yield better results? These remained unaddressed by NeuroVista itself.
- **Data Analysis Critiques:** In the wider academic sphere, seizure prediction studies have faced scrutiny for methodological rigor. NeuroVista’s approach was relatively robust, but earlier efforts suffered issues like data leakage (training and testing on non-independent data) or confirmation bias. The NeuroVista data later became public (with patient consent) for researchers, enabling **crowdsourced assessments of algorithm performance**. Notably, the **Melbourne University team shared segments of this dataset in worldwide competitions**, including a Kaggle challenge, to objectively test new algorithms ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=step%20was%20further%20optimization%20and,are%20many%20potential%20successful%20algorithms)). In one competition, data from patients whom NeuroVista’s original algorithm failed were used; remarkably, many submitted algorithms managed to beat chance on those very patients ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=step%20was%20further%20optimization%20and,are%20many%20potential%20successful%20algorithms)). This suggests that more sophisticated or machine-learning-based methods might rescue some cases, implying NeuroVista’s initial algorithm (proprietary details not fully public) might not have been optimal. It’s a reminder that the **choice of algorithm and validation matters greatly** – a different model might extract predictive features where another sees none.

In summary, **NeuroVista provided proof that seizure forecasting is achievable in principle**, but its technology was ahead of what was practically sustainable. The effort underscored challenges like patient specificity, false alarms, and invasive-device tradeoffs that subsequent efforts (including Seer Medical and Epiminder) have aimed to address. The NeuroVista trial’s data seeded much of the modern research into seizure cycles and forecasting algorithms, effectively kick-starting a field even though the company itself did not survive ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=some%20people%2C%20though%20the%20trial,a%20worldwide%20seizure%20prediction%20competition)) ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=Perhaps%20the%20most%20important%20contribution,2017)).

## Seer Medical (Data-Driven Seizure Forecasting and Wearables)

**Seer Medical** is an Australian company (spun out of University of Melbourne research) that takes a less invasive, big-data approach to seizure prediction. Instead of an implanted device, Seer focuses on harnessing **long-term EEG monitoring, mobile technology, and machine learning** to forecast seizures.
They have built a platform including a wearable smartwatch, a smartphone app, and cloud analytics to deliver personalized seizure risk forecasts to patients.

*Figure: The Seer mobile app provides users with individualized 24-hour and multi-day seizure risk predictions (displayed like a calendar of high/low risk). Users log seizures and wear optional sensors, and the app’s algorithm updates their seizure risk forecast accordingly. This user-facing tool is one of the first practical implementations of seizure forecasting for patients.*

### Technology and Datasets

Seer’s technological approach centers on collecting **large-scale, real-world epilepsy data** and identifying patterns in that data that precede seizures. The company’s origins trace to academic studies where researchers had access to two valuable datasets: (1) the **NeuroVista intracranial EEG recordings** (from the above trial) for high-resolution brain signal data, and (2) a **large corpus of patient-reported seizure diaries** (thousands of seizures logged over long periods) ([From a PhD to a mobile app that helps predict seizures](https://research.unimelb.edu.au/study/experience/researcher-life/phd-to-mobile-app-predicts-seizures#:~:text=I%20was%20working%20in%20a,some%20collaborators%20in%20the%20USA)). By mining these, Seer’s team (including Dr. Philippa Karoly and Prof. Mark Cook) discovered that many people’s seizures were not random but followed cycles – e.g. daily, multi-day, or monthly periodicities ([From a PhD to a mobile app that helps predict seizures](https://research.unimelb.edu.au/study/experience/researcher-life/phd-to-mobile-app-predicts-seizures#:~:text=During%C2%A0her%20PhD%2C%20Dr%20Pip%20Karoly%C2%A0noticed%C2%A0new,she%20outlined%20during%20her%20PhD)) ([From a PhD to a mobile app that helps predict seizures](https://research.unimelb.edu.au/study/experience/researcher-life/phd-to-mobile-app-predicts-seizures#:~:text=One%20of%20the%20big%20discoveries,remains%20somewhat%20of%20a%20mystery)).

A cornerstone of Seer’s research has been documenting these **seizure cycles**. In one study, they noted **circadian (24-hour) rhythms** in seizure probability in ~80% of patients, plus longer multi-day rhythms (weekly, monthly) in a substantial subset ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=least%20one%20significant%20cycle,cent%20had%20a%20weekly%20cycle)). For example, **83% of individuals had a significant daily cycle** (certain times of day with higher seizure likelihood), and **about 23% showed a weekly cycle** ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=least%20one%20significant%20cycle,cent%20had%20a%20weekly%20cycle)). Some patients even had cycles on the order of weeks or more ([From a PhD to a mobile app that helps predict seizures](https://research.unimelb.edu.au/study/experience/researcher-life/phd-to-mobile-app-predicts-seizures#:~:text=During%C2%A0her%20PhD%2C%20Dr%20Pip%20Karoly%C2%A0noticed%C2%A0new,she%20outlined%20during%20her%20PhD)) ([From a PhD to a mobile app that helps predict seizures](https://research.unimelb.edu.au/study/experience/researcher-life/phd-to-mobile-app-predicts-seizures#:~:text=One%20of%20the%20big%20discoveries,remains%20somewhat%20of%20a%20mystery)).
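
To make the notion of a statistically significant cycle concrete, the sketch below tests a toy seizure diary for circadian clustering by comparing its phase concentration against shuffled surrogates. This is only a simplified stand-in for the spectral and circular-statistics analyses described later under Validation; the synthetic diary, the uniform null, and the function names are all assumptions.

```python
import numpy as np


def circadian_concentration(seizure_times_h: np.ndarray) -> float:
    """Mean resultant length of seizure times mapped onto a 24-hour circle.

    Values near 0 mean seizures are spread evenly over the day;
    values near 1 mean they cluster at a preferred time of day.
    """
    phases = 2 * np.pi * (seizure_times_h % 24) / 24
    return float(np.abs(np.mean(np.exp(1j * phases))))


def has_circadian_cycle(seizure_times_h: np.ndarray,
                        n_surrogates: int = 2000, alpha: float = 0.05,
                        seed: int = 0) -> bool:
    """Permutation test: is the observed clustering stronger than chance?

    Surrogates redraw seizure clock-times uniformly over 24 h (same number of
    events), a deliberately simple null that ignores effects such as
    under-reporting of nocturnal seizures.
    """
    rng = np.random.default_rng(seed)
    observed = circadian_concentration(seizure_times_h)
    null = np.array([
        circadian_concentration(rng.uniform(0, 24, size=len(seizure_times_h)))
        for _ in range(n_surrogates)
    ])
    p_value = float(np.mean(null >= observed))
    return p_value < alpha


# Toy diary: seizures logged mostly in the early morning hours.
times = np.array([5.5, 6.0, 6.5, 7.0, 5.0, 6.2, 14.0, 6.8, 5.9, 6.4])
print(has_circadian_cycle(times))  # True for this strongly clustered toy diary
```

A weekly cycle could be screened for in the same way by mapping event times onto a 168-hour circle instead of a 24-hour one.
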
These cycle findings came from analyzing long-term seizure diaries and continuous EEG where available. The implication was that each patient has a sort of “seizure clock” – unique to them – which could be used to forecast high-risk vs low-risk windows ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=When%20average%20brain%20activity%20is,%E2%80%9D)) ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=Karoly%E2%80%99s%20research%2C%20published%20in%20The,pattern%20is%20different%20for%20everyone)).

Building on this, Seer developed a **mobile app (the Seer App)** that functions as a seizure diary and forecasting tool. Patients use a smartphone to log events and can pair it with wearable devices (like a smartwatch that records heart rate, movement, etc.). The system detects cycles in the data and provides a **risk forecast**: for instance, it might indicate that today the user is in a “high-risk” period versus later in the week “low-risk” ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=People%20who%20use%20the%20Seer,seizures%20in%20a%20diary%20after)). The app was launched to the public in 2022, and by mid-2023 it had generated over 2,000 forecasts for users in a pilot deployment ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=match%20at%20L218%20risks%2C%20but,2%2C000%20forecasts%20have%20been%20generated)) ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=risks%2C%20but%20since%20the%20launch,2%2C000%20forecasts%20have%20been%20generated)).

On the research side, Seer’s team has also explored cutting-edge **machine learning on EEG**. In 2018, they published a study demonstrating a **deep learning algorithm** on EEG data that could potentially run on an ultra-low-power chip ([Epileptic seizure prediction using big data and deep learning: Toward a mobile system - Seer Medical](https://seermedical.com/research-and-publications/epileptic-seizure-prediction-using-big-data-and-deep-learning-toward-a-mobile-system/#:~:text=)). This project (Kiral-Kornek et al., 2018 in *EBioMedicine*) used a neuromorphic processor to analyze EEG from the NeuroVista dataset, converting EEG signals into image-like spectrogram “snapshots” and then classifying those as pre-seizure or not ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=order%20to%20present%20the%20device,at%20a%20time%2C%20much%20longer)) ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=algorithm%27s%20efficiency%20allowed%20them%20to,27%20Schulze)).
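
To make the “spectrogram snapshot” idea concrete, the sketch below converts short EEG windows into flattened log-power spectrograms and trains a simple classifier to separate pre-seizure from baseline windows. It is only a toy analogue of the published approach: a logistic regression stands in for the convolutional network, the data are synthetic single-channel signals, and the sampling rate and window length are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

FS = 400       # assumed sampling rate in Hz
WIN_SEC = 10   # assumed window length in seconds
rng = np.random.default_rng(0)
t = np.arange(WIN_SEC * FS) / FS


def window_to_features(eeg_window: np.ndarray) -> np.ndarray:
    """Turn one EEG window into a flattened log-power spectrogram 'snapshot'."""
    _, _, sxx = spectrogram(eeg_window, fs=FS, nperseg=FS)  # 1-second segments
    return np.log10(sxx + 1e-12).ravel()


def make_windows(n: int, pre_seizure: bool) -> np.ndarray:
    """Synthetic EEG: white noise, plus a weak 20 Hz rhythm in 'pre-seizure' windows."""
    windows = rng.standard_normal((n, WIN_SEC * FS))
    if pre_seizure:
        windows += 0.5 * np.sin(2 * np.pi * 20.0 * t)
    return windows


# Keep training and evaluation windows separate, mimicking testing on unseen data.
X_train = np.array([window_to_features(w) for w in
                    np.vstack([make_windows(15, False), make_windows(15, True)])])
y_train = np.repeat([0, 1], 15)   # 0 = baseline, 1 = pre-seizure
X_test = np.array([window_to_features(w) for w in
                   np.vstack([make_windows(5, False), make_windows(5, True)])])
y_test = np.repeat([0, 1], 5)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

The real system operated on multichannel intracranial recordings and a neuromorphic chip; the point of the sketch is only the pipeline shape: window, spectrogram, classifier.
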
The significance of that 2018 study was to show a path toward a **wearable or portable real-time EEG warning system** – as opposed to heavy computation on a server – by leveraging energy-efficient AI hardware ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=In%20EBioMedicine%2C%20Kiral,feasibility%3A%20it%20uses%20extremely%20low)) ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=Adapting%20Deep%20Learning%20to%20seizure,2013)).

Seer’s key datasets and trials include: a **large diary study** (retrospective) encompassing many patient-years to identify cycles, a **prospective pilot** of their forecasting app with wearables (small sample of 10–20 patients), and ongoing data from **hundreds of home EEG monitoring patients** (Seer’s primary business is outpatient video-EEG diagnostics, which also yields a trove of EEG recordings that can be repurposed for research) ([Improving epilepsy diagnosis and seizure forecasting](https://research.unimelb.edu.au/partnerships/case-studies/improving-epilepsy-diagnosis-and-seizure-forecasting#:~:text=Improving%20epilepsy%20diagnosis%20and%20seizure,by%20University%20of%20Melbourne%20researchers)) ([From a PhD to a mobile app that helps predict seizures](https://research.unimelb.edu.au/study/experience/researcher-life/phd-to-mobile-app-predicts-seizures#:~:text=data%20sets%20for%20analysis,some%20collaborators%20in%20the%20USA)). One specific pilot published in 2023 involved 13 patients using a wearable smartwatch and the app over an extended period ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=Seizure%20and%20heart%20rate%20cycles,collected%20after%20algorithms%20were%20developed)). This provided a testbed to validate non-invasive forecasts.

### Scientific Claims and Predictive Performance

Seer Medical and its affiliated researchers have made several prominent scientific claims:

- **Widespread Existence of Seizure Cycles:** Perhaps their most important insight is that *the majority of people with chronic epilepsy have identifiable cycles in their seizure occurrences*. Karoly *et al.* (2017) reported that circadian patterns are almost ubiquitous, and a significant fraction have multi-day cycles ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=least%20one%20significant%20cycle,cent%20had%20a%20weekly%20cycle)). Later work by colleagues (Baud *et al.*, 2018, using long-term implanted data) reinforced that **brain excitability cycles of 20–30 days are common**, with seizures tending to occur at particular phases of these cycles ([Multi-day rhythms modulate seizure risk in epilepsy | Nature Communications](https://www.nature.com/articles/s41467-017-02577-y#:~:text=observed%20between%20seizures%20by%20electroencephalography,effect%20size%20in%20most%20subjects)) ([Multi-day rhythms modulate seizure risk in epilepsy | Nature Communications](https://www.nature.com/articles/s41467-017-02577-y#:~:text=years%2C%20we%20find%20that%20IEA,effect%20size%20in%20most%20subjects)). This claim is foundational: it means seizures are not purely random events, and thus prediction is plausible.
It also underpins Seer’s forecasting approach – by tracking a patient’s position in various cycles (time-of-day, time-of-week, etc.), one can gauge if they are entering a higher-risk phase ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=Perhaps%20the%20most%20important%20contribution,2017)).
- **Feasibility of Seizure Forecasting with Wearables and Diaries:** Seer’s team has claimed that useful seizure forecasts can be generated *without invasive EEG*, using **multimodal data like self-reported seizures and heart-rate sensors**. In a 2023 pilot study, they validated a forecasting method combining cycles derived from seizure diaries and from wearable heart-rate fluctuations ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=results,cycles%20recorded%20from%20wearable%20devices)) ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=Seizure%20and%20heart%20rate%20cycles,collected%20after%20algorithms%20were%20developed)). The results showed a **mean AUC (area under ROC curve) of ~0.73** in retrospective analysis across 13 subjects (9 of 13 had forecasting performance above chance) ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=Findings)). More importantly, they tested the models prospectively in 6 subjects (i.e. trained the algorithm, then evaluated future data) and achieved **mean AUC ~0.77**, with 4 of 6 patients showing above-chance prediction ability in that truly prospective test ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=The%20results%20showed%20that%20the,participants%20showing%20performance%20above%20chance)) ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=showing%20performance%20above%20chance%20during,participants%20showing%20performance%20above%20chance)). An AUC of 0.77 indicates moderately good discrimination of high-risk vs low-risk periods. The authors interpret this as evidence that a **mobile seizure risk app can work in practice** – a notable claim since very few prediction algorithms had been prospectively validated in any form ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=The%20results%20of%20this%20study,critical%20step%20towards%20clinical%20applications)).
- **“Weather Forecast” Analogy – Probabilistic Risk Prediction:** Unlike NeuroVista’s somewhat binary alert (red light vs blue light), Seer emphasizes that they are providing a **probabilistic forecast**, much like a weather forecast ([From a PhD to a mobile app that helps predict seizures](https://research.unimelb.edu.au/study/experience/researcher-life/phd-to-mobile-app-predicts-seizures#:~:text=In%20some%20cases%2C%20we%20had,predicts%20the%20chance%20of%20rain)).
They claim this shift in framing (talking about increased likelihood rather than absolute prediction) is more aligned with how epilepsy actually behaves and is more useful to patients. For instance, instead of saying “a seizure will occur in X hours,” the app might say “today has a 70% risk vs tomorrow 20%.” The claim is that patients can meaningfully use this information (e.g., to adjust daily activities) and that many patients *want* at least a long-term forecast even if it’s not perfectly specific ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=Karoly%20acknowledged%20that%20the%20usefulness,safe%20to%20drive%2C%E2%80%9D%20she%20said)). Some patients only find value if an imminent warning is possible, but others appreciate knowing which weeks are likely to be bad ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=Karoly%20acknowledged%20that%20the%20usefulness,safe%20to%20drive%2C%E2%80%9D%20she%20said)).
- **Deep Learning and AI can Enhance Prediction:** Through their 2018 study, Seer’s researchers claimed that **deep learning algorithms can achieve prediction performance comparable to classical methods while operating in real-time on low-power devices** ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=In%20EBioMedicine%2C%20Kiral,feasibility%3A%20it%20uses%20extremely%20low)) ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=Adapting%20Deep%20Learning%20to%20seizure,2013)). In that study, a convolutional neural network was trained on EEG spectrogram images and achieved results *“comparable with the efficacy of past successful published prediction algorithms”* on the same data ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=then%20learned%20how%20to%20distinguish,the%20study%2C%20automatically%20tuning%20to)). In fact, the deep learning system maintained performance across all patients over extended durations and could be tuned per patient preferences (trading off sensitivity vs false alarms) ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=prediction%20algorithms%20using%20the%20same,27%20Schulze)). This is a claim that advanced AI can overcome some prior limitations (like algorithms failing over time or needing constant manual adjustment).
- **Clinical Deployment and User Engagement:** Seer often points out that theirs is **not just an academic exercise – they have deployed forecasting to real patients via the app.** By 2023, over 2,000 forecasts had been delivered to app users, and the app also serves practical functions like medication tracking and seizure logging ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=risks%2C%20but%20since%20the%20launch,2%2C000%20forecasts%20have%20been%20generated)) ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=People%20who%20use%20the%20Seer,on%20the%20floor%2C%E2%80%9D%20said%20Nurse)).
The claim here is that Seer is leading in *translating* seizure prediction research into a usable clinical tool. This is more of a company statement than a peer-reviewed result, but it underscores that their predictions are now being continuously tested in the wild. They also highlight that users contribute data back (a form of real-world validation, as every new seizure reported can confirm or refute a forecast) ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=People%20who%20use%20the%20Seer,seizures%20in%20a%20diary%20after)).

In summary, Seer Medical’s claims revolve around **seizure risk being forecastable for many patients using personalized data patterns**, and that modern AI plus big data can deliver these forecasts in a practical, non-invasive way. The performance metrics reported (AUC ~0.7–0.8 for many individuals) indicate that while forecasts are not perfect, they do significantly better than random chance ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=The%20results%20showed%20that%20the,participants%20showing%20performance%20above%20chance)) and can potentially be tuned to individual needs.

### Validation Strategy and Methodology

Seer’s work spans both retrospective analyses and prospective validations, with a heavy emphasis on **robust statistical methodology** and appropriate validation:

- **Retrospective Cycle Discovery:** Initial analyses of seizure diaries and long EEG recordings were retrospective. To rigorously show a cycle exists, the team would use methods like spectral analysis of seizure timestamps and compare against surrogate (randomized) data ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=least%20one%20significant%20cycle,cent%20had%20a%20weekly%20cycle)). For example, if a patient’s seizures cluster every 7 days, a spectral peak at ~1/7 days⁻¹ would appear. By shuffling seizure times and seeing that no such peak occurs by chance, one can validate that the weekly rhythm is real. This kind of analysis, done across cohorts, led to statements like “83% had a circadian rhythm” ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=least%20one%20significant%20cycle,cent%20had%20a%20weekly%20cycle)). The **sample sizes** for these findings were relatively large (dozens of patients with long observation spans), giving the statistical power to detect cycles. One study in *Brain* (Karoly et al., 2017) included data from 12 patients with intracranial EEG (the NeuroVista group) and 11,000+ seizures recorded, plus 1,485 patients using a mobile diary (with ~120,000 seizures logged) ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=Karoly%E2%80%99s%202018%20findings%20drew%20from,analyzed%2C%20and%20an%20alert%20was)) ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=Karoly%E2%80%99s%20research%2C%20published%20in%20The,pattern%20is%20different%20for%20everyone)).
Such large-n analyses provided strong evidence for prevalent rhythmicity.
- **Prospective Pilot Testing:** To move from observation to true prediction, Seer conducted a pilot where forecasts were made *in advance* and then checked against what actually happened – a critical step. In the 2023 pilot study, they **trained algorithms on each patient’s initial data (e.g., first 6–12 months)** and then **evaluated performance on subsequent months**, with patients **blinded** to the forecasts during the test period ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=additive%20regression%20model%20was%20used,collected%20after%20algorithms%20were%20developed)) ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=provide%20robust%20performance,critical%20step%20towards%20clinical%20applications)). This is a high-quality validation approach because it simulates how the system would work in real life: the model is built, then the patient goes about life and we see if the model’s risk estimates correlate with incoming seizures. The fact they achieved an average AUC of 0.77 in this scenario (with 4 of 6 showing clear predictive power) demonstrates *prospective validity* ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=Findings)) ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=The%20results%20showed%20that%20the,participants%20showing%20performance%20above%20chance)). Few prior studies have done prospective validation; in the context of seizure prediction, only three groups had ever prospectively tested any kind of forecast (one being NeuroVista) according to a literature review in that paper ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=more%20of%20the%20terms%20%E2%80%9Cepilepsy%E2%80%9D%2C,Another%20study%20conducted%20a%20prospective)). This gives Seer’s claims more credibility.
- **Subject-Specific Modeling:** Seer’s methodology acknowledges that **each patient needs an individualized model**. All their validations are effectively *within-subject* (similar to NeuroVista’s approach, but often with far more data per subject and additional modalities). They are not trying to train one universal algorithm that works for everyone – rather the process is: identify that person’s cycles and patterns, possibly use population insights as a starting point, but ultimately the forecast is tuned to personal data ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=People%20who%20use%20the%20Seer,on%20the%20floor%2C%E2%80%9D%20said%20Nurse)) ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=People%20who%20use%20the%20Seer,seizures%20in%20a%20diary%20after)).
This is why their app continuously updates: as a patient logs more seizures (or as their wearable feeds in physiology data), the system can recalibrate if needed. Methodologically, this means **cross-validation is done in a time-series sense**, not mixing data from the future and past. They often use training/test splits separated in time to avoid look-ahead bias, which is good practice for non-stationary data like seizures.
- **“Above Chance” Benchmarks:** Following community standards, Seer’s studies evaluate whether a prediction performance is above what a random or naïve predictor would achieve ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=Findings)). For instance, if a patient has one seizure a week on average, a naive predictor that always says “low risk” might still be “right” most days. Seer’s metrics like AUC (area under the ROC curve) inherently compare sensitivity vs false positive rate and a value of 0.5 would be chance level. They report how many patients exceeded that (e.g., 9 of 13 retrospective, 4 of 6 prospective) ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=The%20results%20showed%20that%20the,participants%20showing%20performance%20above%20chance)). Additionally, some papers use **time-sensitivity metrics** (like time spent in high-risk vs proportion of seizures in high-risk) akin to NeuroVista’s original criteria. For example, one of their case studies achieved 83% of seizures in high-risk while high-risk occupied 26% of time ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=In%20high%20risk%20state%20In,63)) – significantly better than the 50%/50% expected by chance.
- **Deep Learning Validation:** The 2018 deep learning study was validated by *cross-testing on held-out data from the same patients*. Kiral-Kornek et al. trained a convolutional neural network on a portion of the intracranial EEG recordings and tested on unseen segments, reporting performance comparable to earlier algorithms on those patients ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=then%20learned%20how%20to%20distinguish,the%20study%2C%20automatically%20tuning%20to)). They also highlighted that the system retained accuracy across long durations (months of EEG) which they cite as an advantage ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=prediction%20algorithms%20using%20the%20same,27%20Schulze)). Notably, that study was also run in a *fixed and frozen* manner for evaluation – meaning they trained the model, fixed it, and then evaluated it on continuous data without further adjustment, simulating a deployed device ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=algorithm%27s%20efficiency%20allowed%20them%20to,27%20Schulze)). This kind of rigorous testing prevents overfitting and is analogous to how FDA would expect an algorithm to be validated (train on some patients, test on an independent set or independent time periods).
- **Real-World Feedback Loop:** As the Seer app is now used by patients, an interesting aspect is that it provides a form of real-world validation. Each time a user logs a seizure that occurred with no warning, or conversely spends a predicted high-risk week with no seizures, Seer can gather that information to refine algorithms. While formal results of this real-world performance aren’t published yet, the company likely monitors metrics like **sensitivity (fraction of seizures that occurred during high-risk predictions)** and **false alarm rates (how often high-risk is declared without a seizure)** across their user base. They have acknowledged that studying *how people use the app* and how accurate it is in practice will be crucial to guide further development ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=Karoly%20acknowledged%20that%20the%20usefulness,safe%20to%20drive%2C%E2%80%9D%20she%20said)).

In sum, Seer’s validation approach is characterized by **data-driven, individualized models tested on separate data from training** (often future data). The studies they publish tend to follow rigorous scientific standards (blinded testing, statistically significant above-chance results, and appropriate baselines). They have not yet done a large randomized controlled trial (e.g., where one group gets forecasts and another doesn’t), which would be the gold standard to prove real-life benefit, but they are accumulating evidence stepwise: first that the forecasts can work, and next that patients can benefit.

### Limitations and Criticisms

While promising, Seer Medical’s seizure forecasting approach faces several **limitations and open criticisms**:

- **Prediction Accuracy is Limited:** The forecasting performance, while above chance, is far from perfect. An AUC around 0.75 means that there is significant overlap between “high-risk” and “low-risk” periods – some seizures will still happen unexpectedly in supposed low-risk times, and there will be false alarms (high-risk periods with no seizures). For example, in the pilot, 4 of 6 patients had useful forecasts ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=showing%20performance%20above%20chance%20during,participants%20showing%20performance%20above%20chance)) – meaning 2 patients saw no predictive power even with careful analysis. Even for the responders, a prediction algorithm that’s 75% accurate might or might not be practically helpful day-to-day. **False alarms and missed seizures can cause frustration or complacency**: If a patient is told it’s low risk and then has a seizure, their trust in the system could erode; if they constantly get high-risk warnings that don’t result in seizures, it could induce unnecessary anxiety or lifestyle restriction. This balance of sensitivity vs specificity remains a delicate issue. Seer’s team is aware of this and mentions that different patients desire different trade-offs (some want every possible warning, others only want to be warned if very certain) ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=Karoly%20acknowledged%20that%20the%20usefulness,safe%20to%20drive%2C%E2%80%9D%20she%20said)).
But achieving both high sensitivity and specificity simultaneously is difficult – it’s essentially the classic trade-off in any detection algorithm.
- **Not All Patients Have Predictable Patterns:** While most patients do exhibit some cycles, **some individuals have highly irregular seizure occurrence that defies easy prediction**. The Seer app might simply never generate a strong predictive model for those people. Additionally, certain epilepsy syndromes (e.g., those triggered by specific stimuli or acute illnesses) might not follow the intrinsic brain rhythms that these algorithms detect. Thus, a forecasting service could end up telling a subset of users “no reliable forecast can be made for you,” which is a limitation. Understanding *which factors make someone’s seizures predictable* (e.g., focal vs generalized epilepsy, presence of circadian modulation, etc.) is an ongoing research question. So far, studies show diversity – each patient is unique ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=Karoly%E2%80%99s%20research%2C%20published%20in%20The,pattern%20is%20different%20for%20everyone)) ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=When%20average%20brain%20activity%20is,%E2%80%9D)) – which means a forecasting solution might only be applicable or **robust in a subset of the epilepsy population**.
- **Reliance on Self-Reported Data:** A practical criticism is that Seer’s approach (especially via the app) initially relies heavily on patients logging their seizures. But it’s well documented that **seizure diaries are often inaccurate** – patients miss seizures (especially if they occur at night or with impaired awareness) and sometimes misclassify events ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=Accurate%20identification%20of%20seizure%20activity%2C,Five)) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=The%20sub,are%20likely%20to%20represent%20similar)). If the training data going into the algorithm is incomplete or erroneous, the forecasts will suffer. Seer is mitigating this by incorporating **wearable sensors to passively detect seizures** (for instance, a smartwatch can automatically detect tonic-clonic seizures via motion and physiological changes, similar to the Empatica device discussed later). They note that wearables can help capture seizures patients aren’t aware of ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=People%20who%20use%20the%20Seer,on%20the%20floor%2C%E2%80%9D%20said%20Nurse)). However, current wearables still *cannot detect all seizure types* – they are best at generalized convulsions. So, in cases of focal impaired-awareness seizures (which have subtle outward signs), the app might still be at the mercy of patient self-report. This data quality issue is a known bottleneck for machine learning in epilepsy.
- **No Peer-Reviewed Clinical Outcome Studies Yet:** As of 2025, **we don’t yet have published evidence that using Seer’s forecasts improves patients’ lives** (reduces injuries, SUDEP, anxiety, etc.).
It is one thing to have a predictive model; it’s another to integrate it into care such that outcomes improve. NeuroVista’s trial hinted that just having a warning device did not immediately change quality of life ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=device%20met%20enabling%20criteria%20in,and%204%20months%20after%20implantation)). Seer’s app is newer and more user-friendly, but it will likely need formal studies to show that patients who use forecasts, for example, have fewer accident-related injuries or feel more in control. There’s also a psychological dimension: some clinicians have voiced concern that an unreliable forecast could *increase* anxiety (the patient constantly checks if they are “red” or “green” risk). Dr. Patrick Kwan, an epilepsy expert, pointed out that explaining the concept of probabilistic forecasting to patients is challenging – people might misconstrue a “forecast” as destiny, whereas in reality it’s only a likelihood that they might influence by behavior ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=Kwan%20said%20that%20using%20the,forecast%20also%20carries%20certain%20connotations)). There is a need for careful user experience design and patient education so that the tool empowers rather than worries patients. Until real-world outcome data are gathered, this remains a theoretical benefit. - **Generalizability and External Validation:** Most of Seer’s publications come from the internal team or close collaborators. Independent researchers will need to validate their algorithms on other datasets to ensure the results hold. The good news is Seer’s scientific contributions (e.g., revealing cycles) *have* been corroborated by others – for instance, the work by Baud et al. using the NeuroPace RNS data confirmed multi-day cycles in a completely independent cohort ([Multi-day rhythms modulate seizure risk in epilepsy | Nature Communications](https://www.nature.com/articles/s41467-017-02577-y#:~:text=observed%20between%20seizures%20by%20electroencephalography,effect%20size%20in%20most%20subjects)). But the end-to-end forecasting pipeline (from app data to forecast) hasn’t been independently tested in a published study yet. There’s also the matter of geographic and demographic generalizability: much of the Seer data may come from Australian patients using their service; epilepsy can vary with different etiologies, and adherence to using the app might differ in other populations. **Sample size** in their prospective pilot was small (n=6 for the prospective test) ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=additive%20regression%20model%20was%20used,collected%20after%20algorithms%20were%20developed)), so larger trials are needed to really quantify how many people can benefit and how much. - **Competitive and Regulatory Landscape:** As a critique of the effort broadly, one could note that no regulatory body (like the FDA) has yet approved a seizure *forecasting* device or app as a medical product. This means the evidentiary bar is still high – it likely requires demonstration of safety and efficacy in a controlled trial. Seer’s approach might eventually need to clear that bar. 
The path might be challenging: unlike a seizure detector (which can be validated with sensitivity/specificity alone), a forecaster may need to prove it actually leads to improved patient outcomes to justify its use. This is partly a limitation because it means forecasting tech, however clever, might face skepticism until outcome data are available. In conclusion, **Seer Medical’s work is at the forefront of non-invasive seizure forecasting**, and it has introduced rigorous data science to epilepsy. The existence of cycles and moderately accurate forecasts is well-supported by their studies ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=Perhaps%20the%20most%20important%20contribution,2017)) ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=The%20results%20showed%20that%20the,participants%20showing%20performance%20above%20chance)). The main criticisms lie in the gap between forecasting accuracy and clinical utility – essentially, “can we trust these predictions enough to act on them, and will that make a difference?” The coming years should bring larger studies (perhaps using the app) to answer these questions. Nonetheless, Seer has significantly advanced the methodology of the field, moving it from anecdotal observations of cycles to quantitative models that patients can actually try in daily life. ## Epiminder (Minder® Sub-Scalp EEG Implant for Seizure Forecasting) **Epiminder** is a company (founded in 2018 in Melbourne, Australia) that is effectively NeuroVista’s spiritual successor, aiming to realize seizure prediction through a next-generation implantable device. Epiminder’s flagship product is the **Minder® system**, a **minimally invasive sub-scalp EEG monitoring device** designed for ultra-long-term use ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=There%20are%20five%20components%20to,the%20Minder%C2%AE%20system)). The effort is led by many of the same experts from the NeuroVista trial (Prof. Mark Cook and colleagues), now with updated technology and algorithms incorporating the lessons learned since 2013 ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=Professor%20Cook%E2%80%99s%20research%20team%20had,on%20their%20individual%20brain%20activity)) ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=To%20overcome%20this%2C%20Professor%20Cook%E2%80%99s,on%20seizure%20cycles%20and%20duration)). ### Technology and Intended Use The Epiminder Minder system consists of several components ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=There%20are%20five%20components%20to,the%20Minder%C2%AE%20system)): 1. **Minimally Invasive Implant:** A small electrode strip implanted **under the scalp (above the skull)**, typically over the epileptic focus side of the brain. 
This electrode records EEG from one or two channels (from both hemispheres) continuously ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=invasive%20electroencephalography%20,scalp%20device)). The placement under the scalp – not penetrating the dura or brain – makes it much less invasive than traditional intracranial electrodes while still capturing clearer signals than surface (scalp) EEG. 2. **Subscalp Recorder:** A compact encapsulated electronics module under the scalp that attaches to the electrode. This module digitizes the EEG and is the core of the implant. 3. **External Wearable and Coil:** A **wearable processor** sits behind the ear (magnetically attached through the skin to the implant) and powers the implant wirelessly ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=1,senses%20and%20records%20neural%20events)). It also receives the EEG data via inductive coupling. Essentially, the implant has no battery – power and data link are provided by the external unit, which the patient wears like a hearing aid or patch. This design choice eliminates the need for battery replacement surgeries. 4. **Smartphone App and Cloud:** The external unit streams data to a **smartphone app via Bluetooth**, which in turn uploads to a cloud database ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=An%20implantable%20device%20to%20detect,data%20to%20a%20remote%20database)) ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=1,senses%20and%20records%20neural%20events)). The data can then be analyzed by Epiminder’s algorithms to detect seizures and possibly forecast risk. Patients can also use the app to mark events (similar to a diary) and view information if enabled. In essence, the **Minder device provides continuous, 24/7 EEG monitoring in a home setting**, over months and years, without the inconvenience of external electrodes or frequent battery charging (the external processor likely has a rechargeable battery). The intended use initially is **seizure detection and quantification** – giving an accurate count of seizures (including subclinical ones) and understanding an individual’s EEG patterns over long durations ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=A%20means%20to%20reliably%20predict,safety%2C%20mental%20health%20and%20employability)) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=Here%2C%20we%20have%20successfully%20shown,trials%20to%20provide%20more%20objective)). This alone has clinical value: it can help assess whether a new medication is working, since patient self-report is often unreliable. 
But the ultimate goal, as stated by the company, is to **“detect and eventually predict epileptic seizures”** ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=,develop%20and%20commercialise%20the%20device)). The device is envisioned as a platform on which seizure forecasting algorithms (once validated) can run to provide patients with warnings similar to NeuroVista’s concept, but more reliably. Epiminder’s Minder received an FDA Breakthrough Device designation in April 2023 ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=The%20Minder%20device%20received%20Breakthrough,irreversibly%20debilitating%20diseases%20or%20conditions)), highlighting its potential to address unmet needs in epilepsy. This fast-track status was likely granted due to the promise of seizure forecasting and the novelty of a long-term sub-scalp EEG solution. ### Key Trials and Data Collection Epiminder began a **Phase I clinical trial in 2019** in Australia to primarily evaluate **safety and device performance** (not yet efficacy of prediction) ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=An%20implantable%20device%20to%20detect,data%20to%20a%20remote%20database)) ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=In%202019%2C%20Epiminder%20began%20a,cloud%20storage%20managed%20by%20Epiminder)). In this trial, a small number of patients with refractory epilepsy are implanted with the Minder and followed for many months. The main questions are: Is the device safe to implant and wear continuously? Does it reliably record high-quality EEG and capture all the patient’s seizures? And can it transmit that data effectively to the cloud? Initial funding rounds (over AUD $44 million by late 2021 ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=A%20device%20to%20monitor%20ultra,44%20million%20since%20June%202018)) ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=In%20late%202020%2C%20Epiminder%20raised,16%20million%20in%20late%202021))) supported this development and trial. By 2021, **five patients** had been implanted and monitored, and Epiminder published preliminary results from these first cases ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=invasive%20electroencephalography%20,scalp%20device)). Each patient had the device for over 6-12 months, accumulating a very long EEG record (some > 12 months of continuous data) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=Discussion)). 
Importantly, they also had each patient undergo a standard ambulatory video-EEG with scalp electrodes for ~1 week, overlapping with the time they had the sub-scalp device, to provide a **ground truth comparison** ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=were%20compared%20to%20data%20recorded,The)). This allowed validation that the sub-scalp EEG picks up the same seizures and interictal epileptiform activity as conventional EEG. The results from these first five patients were encouraging: **the sub-scalp device captured all the seizures that were also seen on scalp video-EEG**, confirming its accuracy in detection ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=were%20compared%20to%20data%20recorded,The)) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=Here%2C%20we%20have%20successfully%20shown,trials%20to%20provide%20more%20objective)). Additionally, patients **tolerated the device well with no serious adverse events** reported during that period ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=we%20present%20preliminary%20results%20of,scalp%20device)) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=Here%2C%20we%20have%20successfully%20shown,trials%20to%20provide%20more%20objective)). This addresses a key safety concern, given NeuroVista’s prior issues. The sub-scalp approach appears to dramatically reduce infection risk (no transcutaneous leads) and avoids hardware migration since it’s secured under the scalp. Crucially, Epiminder also analyzed the rich data to explore **seizure forecasting retrospectively**. They leveraged the continuous EEG to measure cycles of epileptiform activity (interictal spikes, etc.) and past seizure times, then attempted to forecast high-risk intervals for seizures. In their 2021 Frontiers in Neurology article (Stirling *et al*.), they reported a **retrospective seizure forecast AUC of 0.88** for the group ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=utilizing%20cycles%20in%20EA%20and,forecasting%20work%20using%20intracranial%20EEG)) – which is remarkably high. For context, an AUC of 0.88 means the model could distinguish “imminent seizure hour” vs “normal hour” with 88% probability of ranking a true seizure hour higher. They noted this was *“comparable to the best score in recent state-of-the-art forecasting work using intracranial EEG”* ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=were%20well,forecasting%20work%20using%20intracranial%20EEG)). Indeed, it outperforms the ~0.7–0.8 AUC typical in prior studies, though one must remember this is on a small, likely cherry-picked sample and done retrospectively. The forecasting in that analysis was done by identifying cyclical patterns in each patient’s EEG. 
For example, they might detect a multi-day oscillation in the rate of spikes or other features, and find that seizures tend to occur at the peaks of that oscillation ([Multi-day rhythms modulate seizure risk in epilepsy | Nature Communications](https://www.nature.com/articles/s41467-017-02577-y#:~:text=years%2C%20we%20find%20that%20IEA,effect%20size%20in%20most%20subjects)) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=utilizing%20cycles%20in%20EA%20and,forecasting%20work%20using%20intracranial%20EEG)). By combining circadian and multi-day cycle phase information, they set thresholds for high, medium, low risk. In the combined results reported, **83% of seizures occurred in high-risk states**, which occupied about 26% of total time ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=In%20high%20risk%20state%20In,63)). Only 10% of seizures fell in low-risk periods (which were 63% of the time) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=In%20high%20risk%20state%20In,63)). This is a very promising ratio (high sensitivity for seizures with relatively low false alarm time) – albeit again demonstrated with hindsight on a small sample. ### Scientific Claims Epiminder, based on these early results and its stated mission, puts forward several claims: - **Continuous EEG is the Gold Standard for Prediction:** The company emphasizes that **capturing the full, unbroken stream of brain activity 24/7 can reveal “previously unseen” aspects of epilepsy** ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=database)) ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=%E2%80%9CThe%20Minder%20system%20captures%20EEG,biomedical%20engineer%20Professor%20Mark%20Cook)). Unlike intermittent monitoring or self-reports, the implant doesn’t miss any events. Epiminder claims this ultra-long record is key to understanding an individual’s epilepsy – for instance, seeing subtle electrographic seizures or patterns that patients aren’t aware of. Mark Cook stated, “The Minder system captures EEG data 24/7, revealing previously unseen brain activity” ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=database)). The implicit claim is that with this comprehensive data, algorithms can achieve **much more accurate forecasts** than ever before, because nothing is missed or misreported. - **Minimally Invasive Yet High-Fidelity:** A crucial technical claim is that a **sub-scalp electrode can record EEG nearly as well as traditional scalp EEG or even intracranial EEG**. The early patient comparisons showed **1:1 correspondence of seizures on sub-scalp vs scalp EEG** ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=were%20compared%20to%20data%20recorded,The)). 
They also found the sub-scalp signal quality to be good: less noise from things like electrical interference or motion artifact compared to scalp electrodes (since it’s under the skin) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=Our%20results%20demonstrate%20that%20sub,and%20was%20not)). While there is some muscle (EMG) artifact, certain artifacts like eye blinks were actually absent in sub-scalp recording due to electrode placement ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=scalp%20EEG%20was%20less%20noisy,Figure%202D)). This claim is important because it assures that going “minimally invasive” doesn’t sacrifice the critical EEG information needed for prediction. It positions the device as potentially a long-term alternative to having electrodes in the skull (with far fewer risks). - **High Prediction Performance Achieved (Retrospectively):** Epiminder’s team claims that using their device’s data, they have demonstrated **very high predictive performance in forecasting seizures, with AUC ~0.88** ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=utilizing%20cycles%20in%20EA%20and,forecasting%20work%20using%20intracranial%20EEG)). They highlight that this is on par with the best results from prior intracranial studies ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=%280,forecasting%20work%20using%20intracranial%20EEG)). In practical terms, they showed in a case study that the algorithm could put the patient into a high-risk category less than 30% of the time and still catch 83% of their seizures ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=medium%20risk%20and%2012%20%2810,scalp%20EEG%20device)) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=In%20high%20risk%20state%20In,63)). That specificity is better than NeuroVista’s original (which required 65% sensitivity but may have had a lot of false positives). The claim here is **“we can do prediction better now”** thanks to improved algorithms and the rich dataset. However, they are careful to call this preliminary – it’s described as a demonstration of feasibility in their publication ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=was%200,scalp%20EEG%20device)). - **Value of Objective Seizure Tracking:** Another angle of Epiminder’s claims is that even before true prediction, the device provides **immense value by accurately counting seizures and characterizing interictal activity** ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=Here%2C%20we%20have%20successfully%20shown,trials%20to%20provide%20more%20objective)) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=The%20sub,are%20likely%20to%20represent%20similar)). 
They argue this will improve clinical trials and therapy adjustments. For example, if a new drug reduces spike activity by 50% even if clinical seizures don’t drop as much, that could be a sign of efficacy that would be missed without continuous EEG. They also suggest it can illuminate relationships like sleep and seizures (the device picks up sleep patterns too) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=signals%20that%20are%20similar%20to,and%20was%20not)). By quantifying the “seizure burden” objectively, they aim to change how epilepsy is managed. This is more a broader healthcare claim than a prediction claim, but it’s part of how Epiminder differentiates itself from wearables or diaries – it’s providing a clinical grade data stream. - **Long-Term Stability:** Epiminder implies that because their device can be used for years, it can track and adapt to a patient’s epilepsy as it evolves. They cite that multi-day cycles can be stable over up to 10 years in some people ([Multi-day rhythms modulate seizure risk in epilepsy | Nature Communications](https://www.nature.com/articles/s41467-017-02577-y#:~:text=observed%20between%20seizures%20by%20electroencephalography,effect%20size%20in%20most%20subjects)). Having a permanent implant means one could observe these long rhythms and adjust forecasts continuously. The claim (to be proven) is that **the device’s algorithms will continue to work long-term**. This contrasts with some earlier algorithms that might need frequent retraining. In their study, they mention the algorithm maintained predictive functionality over the entire recording ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=prediction%20algorithms%20using%20the%20same,27%20Schulze)) (though that reference was about the deep learning approach). The goal is a **durable implant that offers a reliable warning system over the patient’s lifetime**. In public-facing statements, Epiminder often conveys optimism that this device *“will pave the way for forecasting and warning systems”* that could **“change the lives of people with epilepsy for good”** ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=,Read%20article)). The scientific claim underpinning that is that *all the pieces (data, algorithms, form factor) are finally coming together* to make seizure prediction clinically viable. ### Validation and Methodological Rigor As of now, Epiminder’s validation is primarily engineering and feasibility-oriented, with clinical validation in early stages: - **Safety and Tolerability:** The Phase I trial’s main endpoint is safety at 4 months post-implant and beyond ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=An%20implantable%20device%20to%20detect,data%20to%20a%20remote%20database)). The initial reports of “no significant complications” among the first five patients ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=detected%20using%20machine%20learning%20algorithms,art)) are a positive sign. Still, five patients is a small sample. 
Ongoing monitoring is needed to see if any late issues arise (e.g., scar tissue, minor skin infections, or the need for reimplantation at the end of the device’s lifespan). The trial’s careful stepwise approach – starting with a handful of patients – is appropriate given this is a first-of-its-kind implant.
- **Data Quality Validation:** By doing concurrent scalp EEG recordings, the team rigorously validated that **events detected by the Minder device truly correspond to epileptic events**. In the five patients, every seizure captured on scalp EEG was also seen on the sub-scalp device (100% sensitivity for those seizures) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=were%20compared%20to%20data%20recorded,The)). They also had neurophysiologists review the sub-scalp recordings to confirm they contain typical epileptiform patterns ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=were%20compared%20to%20data%20recorded,The)). This addresses the validity of detection. Additionally, they likely checked whether either modality missed seizures that the other captured. So far it appears the sub-scalp recording is equivalent to scalp EEG for the types of seizures those patients had.
- **Retrospective Forecasting Validation:** The high AUC results reported were obtained retrospectively with cross-validation. The exact methodology likely involved dividing each patient’s long recording into training and test segments, possibly across multiple folds given the length of the data. They specifically mention that “training was performed on electrographic seizures,” with testing reported separately (their Table 3) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=In%20high%20risk%20state%20In,63)). They also reference prior studies (their AUC of 0.88 was comparable to forecasting using interictal EA cycles from intracranial EEG in other research) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=AUC%20score%20%280,26%2C%2032)). This suggests they used a similar approach to those prior cycle-based algorithms (like Karoly 2017, Leguia 2021, etc.), but now on a new data type. The rigorous aspect is that they did not simply train and test on the same data chunk – they respected the time-series nature of the problem (an illustrative sketch of this kind of chronological, cycle-based evaluation appears after this list). The result being in line with the state of the art hints at no major methodological flaw such as overfitting to noise. However, until we see a larger sample, we have to be cautious – with 5 patients, even one patient’s data dominating could skew the “mean AUC”.
- **No Prospective (Real-Time) Testing Yet:** A limitation in validation is that, so far, all forecasting analysis has been retrospective. The patients in the trial were *not* receiving seizure risk alerts; the data were analyzed later by researchers. Thus, the true prospective performance (where an algorithm is fixed and then used to predict future unseen seizures) has not been established in an official trial. In the future, Epiminder would presumably do what NeuroVista did – once they are confident in the algorithm, turn on warnings for patients and track performance prospectively. For now, their validation is akin to a very detailed offline analysis.
It’s scientifically valid, but clinically we don’t know how it might perform if, say, seizure frequency changes or if patients alter behavior due to having the device (though in this case they had no alerts, so behavior wouldn’t have changed in Phase I).
- **Sample Size and Diversity:** The initial validation is on 5 individuals, which is too few to assess variability. They likely all had focal epilepsy with reasonably frequent seizures (the trial required ≥2 per month) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=minimally%20invasive%20sub,EA%29%20was)). It’s possible that these first five were relatively “ideal” subjects (e.g., with clear cycles or a single focal onset). As the trial expands to more patients, it’s crucial to validate that the device works equally well in different scenarios: bilateral onsets, multiple foci, primarily nocturnal seizures, and so on. Methodologically, the algorithms might need to adapt. The published case study aggregated results, but ideally one should see each patient’s AUC to know whether, for instance, one had 0.95 and another 0.7. The company hasn’t released that level of detail publicly.
- **Integration of Machine Learning:** Epiminder is leveraging machine learning much as Seer does (indeed the teams overlap). The forecasting method mentioned used **interictal spike rates and cycles** – a fairly transparent, hypothesis-driven model. It’s not clear if they have also tried more black-box models on the data. Given Mark Cook’s comment that a limitation of NeuroVista was the inability to analyze data fast enough ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=for%20each%20person%20based%20on,their%20individual%20brain%20activity)) ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=%E2%80%9CThe%20trial%20showed%20that%20it,seizure%20forecasts%2C%E2%80%9D%20says%20Professor%20Cook)), Epiminder presumably uses more powerful algorithms and cloud computing now. Phase I is more about proving the device, so validation of the algorithm may be ongoing as more data accumulates. It’s likely Epiminder will incorporate the deep learning from Kiral-Kornek 2018 (the neuromorphic chip concept) when they move to on-device prediction in the future. That will need separate validation (ensuring the on-device processor can replicate the cloud analysis results).
- **Regulatory Steps:** Being a Breakthrough Device doesn’t mean approved – Epiminder will have to present its validation data to regulators. For detection (diagnostic) purposes, they’d need to show the device’s sensitivity and false detection rates. For prediction claims, they might need a clinical trial. The current Phase I (safety) and planned Phase II (efficacy) will form that body of evidence. The methodologies used (blinded expert reviews of EEG, etc.) are all standard and sound. One thing to watch is how they validate that giving a warning helps – that might be done in a later trial with a control group (perhaps an “on/off” within-subject design).
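To make the kind of cycle-based, chronologically evaluated forecasting described above more concrete, the following is a minimal illustrative sketch in Python. It is not Epiminder’s or Seer’s actual pipeline: the seizure times are synthetic, the circadian (24-hour) and weekly cycle periods are assumed rather than estimated from recorded epileptiform activity, and the risk thresholds are arbitrary. It simply shows how a preferred cycle phase can be learned from the first half of a record and then scored on the held-out second half, yielding the kind of AUC and “time in high risk” figures quoted in these studies.

```python
"""Illustrative sketch of cycle-based seizure-risk forecasting (synthetic data).

Assumptions: seizure times are simulated; the 24-hour and 7-day cycle periods
are fixed in advance rather than estimated from interictal EEG, as a real
pipeline would need to do.
"""
import numpy as np

rng = np.random.default_rng(0)
HOURS = 24 * 365                      # one year of hourly time steps
t = np.arange(HOURS)

# Hidden ground-truth risk modulation: circadian + weekly cycles.
circadian = np.cos(2 * np.pi * t / 24)
weekly = np.cos(2 * np.pi * t / (24 * 7))
rate = np.exp(1.5 * circadian + 2.0 * weekly)
seizure = rng.random(HOURS) < 0.002 * rate / rate.mean()   # ~17 seizures/year

# Chronological split: learn on the first half, evaluate on the second half.
half = HOURS // 2
train, test = slice(0, half), slice(half, HOURS)

def phase(period_h):
    return 2 * np.pi * (t % period_h) / period_h

def preferred_phase(period_h):
    """Circular mean phase of the training seizures for one cycle."""
    ph = phase(period_h)[train][seizure[train]]
    return np.angle(np.exp(1j * ph).mean())

def phase_score(period_h):
    """Closeness of each hour's phase to the seizure-prone phase (cosine)."""
    return np.cos(phase(period_h) - preferred_phase(period_h))

score = phase_score(24) + phase_score(24 * 7)   # combined risk score

# AUC on the held-out half: probability that a randomly chosen seizure hour
# outranks a randomly chosen seizure-free hour.
pos, neg = score[test][seizure[test]], score[test][~seizure[test]]
auc = (pos[:, None] > neg[None, :]).mean()

# High/low risk states from thresholds chosen on the training half only.
hi_thr, lo_thr = np.quantile(score[train], [0.75, 0.40])
high, low = score[test] >= hi_thr, score[test] < lo_thr
sz = seizure[test]

print(f"Held-out AUC: {auc:.2f}")
print(f"Seizures in high-risk state: {sz[high].sum()}/{sz.sum()} "
      f"({high.mean():.0%} of time)")
print(f"Seizures in low-risk state:  {sz[low].sum()}/{sz.sum()} "
      f"({low.mean():.0%} of time)")
```

The design point mirrored here is that preferred phases and thresholds are fitted only on the earlier portion of the record, so the reported AUC and risk-state statistics come from data the model has not seen, which is the same time-ordered discipline the published analyses describe.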
### Limitations and Criticisms

Epiminder’s approach, while cutting-edge, also has limitations and potential criticisms to address:

- **Invasiveness (albeit reduced):** The Minder is less invasive than NeuroVista’s fully intracranial electrodes, but it *still requires surgery*. Patients must undergo a procedure to place the electrode under the scalp. While this is usually an outpatient or short-stay procedure (akin to implanting a loop recorder for heart monitoring), it carries some risk (bleeding, infection, pain). The threshold for using this device will likely be patients with frequent, uncontrolled seizures for whom knowing seizure timing is worth undergoing surgery. It is not suitable for the broader population of people with well-controlled or rare seizures. Thus, the device is targeting a niche (albeit an important one: those with refractory epilepsy). Some critics might argue that less invasive options (wearables, surface EEG devices) could be tried first, reserving an implant for cases where those fail. Epiminder will have to show that the **incremental benefit of sub-scalp EEG outweighs the inconvenience and risk** relative to non-invasive methods.
- **Forecasting Not Yet Proven Prospectively:** As mentioned, all prediction results are retrospective. There is a risk that, when tried in real time, performance could drop. This could happen if, for example, a patient’s seizure rate changes (making previously learned cycles less accurate), or if there are unforeseen non-stationarities. Retrospective analyses can sometimes overestimate performance because algorithms may inadvertently “peek” at the data when parameters are tuned. Epiminder’s 0.88 AUC is impressively high – one wonders whether any slight overfitting played a role. The real test will be prospective deployment of that algorithm in more patients. So a fair critique is **“Great, but show it works going forward, not just on past data.”**
- **Scalability and Automation:** Each patient generates an enormous amount of data (months of continuous EEG). Crafting a personalized prediction model from that might require significant expert and computational input. Epiminder will need to automate this process to have a viable product – essentially, implant the device, let it record, and after some time the system outputs a predictor. The methodologies used (cycle detection, etc.) are good candidates for automation, but it’s unclear how well the approach generalizes. It’s one thing to retrospectively fit cycles to an individual’s known seizure times; it’s another to predict, after say three months of data, what will happen in the next three months. Some patients might need longer “training” durations to accumulate enough data to identify cycles confidently. This means a patient could have to wait months after implantation before receiving any predictive benefit. That lag, and the process of fine-tuning the algorithm, could be seen as a practical limitation.
- **Data Burden and Analysis Speed:** Continuous EEG from an implant will produce an unprecedented volume of data per patient. Epiminder will rely on cloud storage and big-data analytics. There could be technical challenges in transmitting and analyzing that data in real time (though the Breakthrough Device status implies they have a plan). If the external unit or phone loses connectivity, data could back up. Also, reviewing the data to ensure quality (artifact vs. true signals) might be non-trivial. From a validation perspective, ensuring **data integrity** (no large gaps, no device malfunctions) is part of the challenge.
The initial five patients were likely closely monitored by the research team; scaling to hundreds of users will test the robustness of the technology.
- **User Acceptance:** Having a piece of hardware under the scalp and wearing a behind-the-ear transmitter at all times requires user commitment. Epiminder’s target users are often highly motivated (refractory epilepsy can be devastating). But still, lifestyle integration matters. The device needs to be comfortable, and users must remember to wear the external piece and keep the phone nearby. Early users have done so under trial conditions, but in real life, adherence could wane if the system is cumbersome. This is more of a product design concern, but it affects real-world effectiveness (if someone leaves off the external unit for a day, that day’s data and predictions are lost). Ongoing human factors evaluation will be important.
- **Long-Term Outcomes Unknown:** Finally, as with Seer, **Epiminder has yet to demonstrate improved clinical outcomes**. If all goes perfectly, the device would warn patients of seizures and perhaps link to an intervention (a future version might trigger neurostimulation or medication delivery). But until tested, we don’t know whether patients will effectively use warnings to avoid injury, or whether knowing one’s risk reduces stress or perhaps increases it. It’s notable that Epiminder’s current trial does not provide warnings to patients – it’s purely monitoring. So the benefit is indirect (better data for doctors). The *next* step would be a trial of the advisory capability; the ethics and design of that will need careful consideration (e.g., if the prediction is wrong, does it put the patient at any risk?).

In summary, **Epiminder has made an impressive start by solving many technical hurdles** (long-term recording, good signal quality, patient tolerance) and showing in retrospective analysis that seizure forecasting with high accuracy is possible on its device ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=utilizing%20cycles%20in%20EA%20and,forecasting%20work%20using%20intracranial%20EEG)). The major caveats are that these results are preliminary (few patients, non-prospective) and that the real-world utility and acceptance of an implantable forecasting system remain to be proven. If Epiminder can demonstrate a meaningful improvement in patient outcomes (perhaps enabling timely intervention or providing peace of mind), it will validate decades of work in seizure prediction. Until then, the concept is scientifically exciting but clinically unproven – as one might say, we still need to *“make it practical”*, echoing the sentiment in the field ([ Seizure Prediction Is Possible–Now Let’s Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=%28Dumanis%20et%20al,has%20been%20an%20unanswered%20question)) ([ Seizure Prediction Is Possible–Now Let’s Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=to%20a%20portable%20device%20designed,prediction%2C%20as%20identifying%20periods%20of)).

## Other Epilepsy AI Initiatives: Detection Devices and Neuromodulation (Contrast)

To put the above seizure prediction efforts in context, it’s useful to compare them to other epilepsy technologies that use AI or closed-loop principles, namely **seizure detection devices** and **responsive stimulation therapies**.
These systems generally focus on reacting to seizures (or detecting them as they happen) rather than forecasting them ahead of time. They underscore different validation approaches – often emphasizing sensitivity and clinical efficacy rather than predictive power. Below we briefly review a couple of key examples: ### Responsive Neurostimulation (RNS by NeuroPace) While NeuroVista and Epiminder aim to warn patients of seizures, the **NeuroPace RNS system** takes a different approach: it *treats seizures at onset*. The RNS (Responsive Neurostimulator) is an FDA-approved implanted device for refractory focal epilepsy that **continuously monitors intracranial EEG and delivers electrical stimulation when it detects abnormal activity**, with the goal of preventing seizures from fully developing. Essentially, it’s like a tiny EEG recorder and stimulator embedded in the skull, with leads placed at the seizure focus. **Detection and Algorithm:** The RNS uses computerized pattern recognition to identify the onset of a seizure (or specific epileptic discharge patterns) in real time. The detection parameters are **tuned per patient by clinicians** – for instance, sensitivity to certain rhythms or spike patterns – rather than using a one-size-fits-all ML model. When the device detects what it thinks is a seizure beginning, it triggers brief electrical stimulation to interrupt it. This is **closed-loop therapy**, not a patient alert system, so the patient typically doesn’t consciously perceive a warning (though some feel a brief tingling from the stimulus). The **“prediction horizon”** here is extremely short – the device might detect activity a few seconds before clinical manifestations, but it’s essentially detecting an imminent or already-started seizure to stop it, not giving minutes-to-hours advance notice. **Validation and Efficacy:** The NeuroPace RNS underwent rigorous clinical trials. In a pivotal randomized controlled trial, 191 patients were implanted and after a stabilization period, half had the stimulations turned on and half left off (sham) for a few months ([ Two-year seizure reduction in adults with medically intractable partial onset epilepsy treated with responsive neurostimulation: Final results of the RNS System Pivotal trial - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC4233950/#:~:text=Randomized%20multicenter%20double,up)). The trial’s outcome was **reduction in seizure frequency**. It showed a significant benefit: a **44% median seizure frequency reduction at 1 year** in treated patients (vs baseline), and **53% at 2 years** ([ Two-year seizure reduction in adults with medically intractable partial onset epilepsy treated with responsive neurostimulation: Final results of the RNS System Pivotal trial - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC4233950/#:~:text=reduction%20in%20seizures%20in%20the,the%20known%20risks%20of%20an)). With continued use, long-term studies reported seizure reductions of 60–70% after 5–6 years, and up to 75% at 9 years ([Brain-responsive neurostimulation for epilepsy (RNS® System)](https://www.sciencedirect.com/science/article/pii/S0920121118305758#:~:text=System%29%20www,)). These results indicate the RNS’s detections and stimulations are effectively blunting a sizable fraction of seizures. The FDA approved RNS in 2013 as an adjunct therapy for adults with focal epilepsy on the strength of these trials. The validation here is through **clinical outcomes (seizure reduction)** rather than a classical sensitivity/specificity metric. 
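To make the closed-loop “detect onset, then stimulate” principle described above concrete, here is a deliberately simplified, hypothetical sketch. It is not NeuroPace’s actual detection firmware: the real system reportedly uses a small set of configurable on-board detectors (including a line-length tool) that clinicians tune per patient, but a single sliding-window feature with a threshold captures the basic logic.

```python
"""Hypothetical sketch of closed-loop 'detect onset, then stimulate' logic.

Illustration only: the signal is synthetic, and a single line-length detector
with a fixed threshold stands in for the patient-tuned on-board detectors a
real responsive neurostimulator uses.
"""
import numpy as np

FS = 250                      # sampling rate (Hz)
WINDOW = FS                   # 1-second analysis window
THRESHOLD = 40.0              # patient-specific, set by the clinician
REFRACTORY = 5 * FS           # ignore detections for 5 s after stimulating

rng = np.random.default_rng(1)

# Synthetic single-channel ECoG: background noise plus a 10-second burst of
# large, fast oscillations (a crude stand-in for an electrographic onset).
n = 60 * FS
ecog = rng.normal(0, 0.05, n)
onset = 30 * FS
tt = np.arange(10 * FS) / FS
ecog[onset:onset + 10 * FS] += 1.5 * np.sin(2 * np.pi * 15 * tt)

def line_length(segment):
    """Sum of absolute sample-to-sample differences (a classic onset feature)."""
    return np.abs(np.diff(segment)).sum()

stim_times = []
last_stim = -REFRACTORY
for start in range(0, n - WINDOW, WINDOW // 4):   # 75% overlapping windows
    feature = line_length(ecog[start:start + WINDOW])
    if feature > THRESHOLD and start - last_stim >= REFRACTORY:
        stim_times.append(start / FS)             # would trigger stimulation here
        last_stim = start

print(f"Stimulation would have been triggered at t = {stim_times} s")
```

In the real device, detection settings are adjusted over follow-up visits to balance catching seizures early against stimulating too often, which ties back to the outcome-based validation described above.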
Behind that outcome, however, is the device’s detection capability: it must consistently catch seizures early to deliver therapy. The RNS’s detection algorithms were tested and refined in earlier feasibility studies to maximize sensitivity while avoiding overstimulation. They achieved sufficient reliability that, on average, the device could stimulate quickly enough to curtail many seizures.

**Comparative Note:** RNS and NeuroVista/Epiminder both implant EEG sensors, but RNS does not explicitly *forecast* to the user – it acts on the EEG immediately. One could say RNS addresses the unpredictability problem by *mitigating consequences* (the seizures are less severe/frequent because they’re interrupted), whereas NeuroVista/Epiminder try to solve it by *prior warning*. In terms of AI, RNS’s algorithms are relatively simple threshold and pattern detectors (not heavily machine-learning-based), but they are **patient-specific** and adaptive in the sense that neurologists adjust them over time for optimal performance. The validation of RNS was extensive (a randomized trial plus ongoing post-market studies), setting a high bar for any future prediction device to clear in terms of showing real benefit ([ Two-year seizure reduction in adults with medically intractable partial onset epilepsy treated with responsive neurostimulation: Final results of the RNS System Pivotal trial - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC4233950/#:~:text=reduction%20in%20seizures%20in%20the,the%20known%20risks%20of%20an)) ([ Two-year seizure reduction in adults with medically intractable partial onset epilepsy treated with responsive neurostimulation: Final results of the RNS System Pivotal trial - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC4233950/#:~:text=percent%20reduction%20at%201%C2%A0year%20was,0.0001)). Interestingly, data from RNS devices, which record ECoG snippets over years, have been analyzed to reveal the same kind of **multi-day and circadian cycles** that Seer and Epiminder focus on ([Multi-day rhythms modulate seizure risk in epilepsy | Nature Communications](https://www.nature.com/articles/s41467-017-02577-y#:~:text=observed%20between%20seizures%20by%20electroencephalography,effect%20size%20in%20most%20subjects)). For example, an RNS study found that seizures and interictal spikes tend to fluctuate in roughly monthly (~28-day) cycles in many patients, and that combining those cycles with time of day yields a “risk profile” for seizures ([Multi-day rhythms modulate seizure risk in epilepsy | Nature Communications](https://www.nature.com/articles/s41467-017-02577-y#:~:text=years%2C%20we%20find%20that%20IEA,effect%20size%20in%20most%20subjects)) ([Multi-day rhythms modulate seizure risk in epilepsy | Nature Communications](https://www.nature.com/articles/s41467-017-02577-y#:~:text=duration%2C%20are%20robust%20and%20relatively,effect%20size%20in%20most%20subjects)). This shows that even a therapeutic device like RNS holds valuable prediction data, but NeuroPace hasn’t (yet) incorporated that into a patient-facing feature. It’s conceivable that future versions could provide the patient or doctor with a risk assessment, but at present RNS’s primary goal is treatment, not forewarning.

### Wearable Seizure Detectors (Empatica Embrace & others)

Another category of epilepsy AI products comprises **wearable seizure detection devices**, which use sensors that track motion (accelerometry), heart rate, and skin conductance to rapidly **detect a seizure in progress and alert caregivers**. A leading example is the **Empatica Embrace** watch.
This is a wrist-worn device resembling a smartwatch that continuously monitors physiological signals. It was **FDA-cleared in 2018** as a seizure alert device for convulsive (generalized tonic-clonic) seizures. **Approach:** The Empatica Embrace combines an accelerometer (motion sensor) and an electrodermal activity (EDA) sensor (which measures sweat-induced skin conductance changes). During a generalized convulsive seizure, the body’s vigorous shaking produces a characteristic accelerometer pattern, and often there is a surge in sympathetic nervous system activity (the fight-or-flight response) that increases heart rate and sweat (detected via EDA). Empatica’s algorithm was developed with machine learning to detect this signature combination of rhythmic motion and EDA change that signifies a convulsive seizure ([Multimodal wrist-worn devices for seizure detection and ... - PubMed](https://pubmed.ncbi.nlm.nih.gov/30846346/#:~:text=The%20Embrace%20and%20E4%20wristbands,the%20physiological%20hallmarks%20of)). It runs in real-time on the watch. If a seizure is detected, the device can alert a caregiver’s smartphone, sending an alarm for help. **Validation:** Being a medical device, the Embrace underwent formal clinical validation. In one pivotal study, the system was tested on **152 patients** (children and adults) during in-hospital video-EEG monitoring, which provided the ground truth of whether a seizure occurred ([ Prospective Study of a Multimodal Convulsive Seizure Detection Wearable System on Pediatric and Adult Patients in the Epilepsy Monitoring Unit - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8418082/#:~:text=Results%3A%20Data%20from%20152%20patients,95)). The Embrace’s algorithm was “fixed” (no training on those patients), making this a prospective validation of a general model ([ Prospective Study of a Multimodal Convulsive Seizure Detection Wearable System on Pediatric and Adult Patients in the Epilepsy Monitoring Unit - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8418082/#:~:text=results%20into%20question%20%2828%29,believe%20that%20overfitting%20is%20not)). The results showed a **sensitivity of 92% for detecting convulsive seizures** (95% CI ~85–100%) ([ Prospective Study of a Multimodal Convulsive Seizure Detection Wearable System on Pediatric and Adult Patients in the Epilepsy Monitoring Unit - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8418082/#:~:text=Results%3A%20Data%20from%20152%20patients,95)). In other words, it detected 61 out of 66 convulsive seizures in the study. The **false alarm rate** was around **0.5 to 1.0 per 24 hours**, depending on activity (higher in active daytime, virtually zero during sleep) ([ Prospective Study of a Multimodal Convulsive Seizure Detection Wearable System on Pediatric and Adult Patients in the Epilepsy Monitoring Unit - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8418082/#:~:text=interval%20%28CI%29%20of%20%5B0.85,0.001)). This FAR met the FDA requirement of <2 per day ([ Prospective Study of a Multimodal Convulsive Seizure Detection Wearable System on Pediatric and Adult Patients in the Epilepsy Monitoring Unit - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8418082/#:~:text=,0.001)). Notably, these performance metrics were calculated with rigorous statistical adjustments (for multiple seizures in same patient, etc.) 
([ Prospective Study of a Multimodal Convulsive Seizure Detection Wearable System on Pediatric and Adult Patients in the Epilepsy Monitoring Unit - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8418082/#:~:text=Results%3A%20Data%20from%20152%20patients,95)). The algorithm proved **effective across different ages** (it worked equally well in children and adults) and across different scenarios, which is testament to how strongly defined the signature of a tonic-clonic seizure is physiologically. The Empatica device thus demonstrates a successful application of AI in epilepsy: a **generalized model** that can detect a specific type of seizure with high accuracy in many patients ([ Prospective Study of a Multimodal Convulsive Seizure Detection Wearable System on Pediatric and Adult Patients in the Epilepsy Monitoring Unit - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8418082/#:~:text=Results%3A%20Data%20from%20152%20patients,were%200%20for%20all%20patients)). It sacrifices breadth for reliability – it doesn’t detect non-convulsive seizures because the signals for those are subtle or absent on the wrist. But for what it’s designed for, it performs well. **Deployment:** Empatica’s watch is used by patients who are at risk of tonic-clonics, especially at night (to alert family) or those with conditions like Dravet syndrome. Its validation in real-world settings (outside the hospital) also showed continued good performance, and many users have reported that it successfully alerted caregivers, potentially preventing SUDEP by ensuring someone was there to help if a seizure occurred during sleep. **Comparison with Prediction:** Wearable detectors like Embrace operate in real-time, effectively with zero lead time (they alert *during* a seizure). They do not predict the seizure beforehand. The validation is much simpler – you either detect the seizure or you don’t, and you measure false alarms when there’s no seizure. This yields concrete metrics (sensitivity 92%, FAR ~1/24h) ([ Prospective Study of a Multimodal Convulsive Seizure Detection Wearable System on Pediatric and Adult Patients in the Epilepsy Monitoring Unit - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8418082/#:~:text=Results%3A%20Data%20from%20152%20patients,95)). For a prediction device, a comparable metric might be sensitivity for seizures with, say, a 30-minute warning and the false alarm rate per time period – but because predictions involve time-windows and probabilities, the evaluation is more complex. Detectors also can be evaluated in randomized trials for impact (for instance, does alerting reduce consequences?), but primarily their job is straightforward: alert accurately. The success of Empatica shows that **AI can reliably detect certain seizures using peripheral signals**, which complements the efforts of NeuroVista/Seer/Epiminder that rely on brain signals. In fact, Seer’s app can integrate wearable data in a forecasting model; wearables could also potentially feed into prediction (e.g., maybe a subtle heart rate cycle correlates with seizures ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=recorded%20from%20wearable%20devices))). Other wearable devices and smart-home devices exist (camera-based monitors for nighttime seizures, bed sensors for convulsions, etc.), but Empatica is a flagship example. 
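To illustrate the evaluation contrast just described, the following sketch computes both styles of metric on invented numbers: event-based sensitivity and false alarms per 24 hours for a detector, versus sensitivity with a fixed warning horizon and “time in warning” for a forecaster. The seizure times, alarm times, and flagged hours are all hypothetical.

```python
"""Hypothetical comparison of detector-style vs forecaster-style metrics.

All timestamps and alarm times below are invented for illustration; real
studies use video-EEG-confirmed seizures and prespecified analysis plans.
"""
import numpy as np

HOURS = 24 * 30                                    # one month of monitoring
seizures = np.array([50, 260, 410, 555, 700])      # seizure times (hours)

# --- Detector-style evaluation: did an alarm fire during each seizure? ---
detector_alarms = np.array([50.01, 120.0, 260.02, 555.01, 610.0])
TOL = 0.05                                         # ~3-minute pairing tolerance

detected = [np.any(np.abs(detector_alarms - s) <= TOL) for s in seizures]
true_alarms = [np.any(np.abs(seizures - a) <= TOL) for a in detector_alarms]
sensitivity = np.mean(detected)
false_alarm_rate = (len(detector_alarms) - sum(true_alarms)) / (HOURS / 24)

print(f"Detector: sensitivity {sensitivity:.0%}, "
      f"false alarms per 24 h {false_alarm_rate:.2f}")

# --- Forecaster-style evaluation: was the hour before each seizure flagged
#     as high risk, and what fraction of all time was spent in high risk? ---
hourly_risk = np.zeros(HOURS)
hourly_risk[np.r_[40:60, 250:270, 400:415, 690:710]] = 1.0   # flagged hours
HORIZON = 1                                                   # warn >= 1 h ahead

warned = [hourly_risk[int(s) - HORIZON] == 1.0 for s in seizures]
forecast_sensitivity = np.mean(warned)
time_in_warning = hourly_risk.mean()

print(f"Forecaster: sensitivity {forecast_sensitivity:.0%}, "
      f"time in high-risk state {time_in_warning:.0%}")
```

The point of the contrast is that a detector is judged per event, whereas a forecaster must also be judged on how much of a patient’s life it places under a warning: a highly sensitive forecast that flags half of all hours as high risk would be of little practical use.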
Another notable device is the **BrainSentinel SPEAC** (an EMG-based detector for tonic-clonics via a patch on the bicep), which also underwent trials. It had ~76% sensitivity and a few false alarms per day ([ Prospective Study of a Multimodal Convulsive Seizure Detection Wearable System on Pediatric and Adult Patients in the Epilepsy Monitoring Unit - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8418082/#:~:text=present%20results%2C%20both%20sensitivity%20and,h%2C%20and%20with%20corrected%20midline)). These detection devices highlight a different validation focus: *algorithm generalizability and real-world robustness*. Unlike seizure prediction algorithms that so far are very individualized, detection algorithms can be more universal for a specific seizure phenotype. This is likely because the manifestation (like violent shaking) is similar across patients, whereas the pre-ictal EEG signatures are idiosyncratic. ### Other Neurostimulation (Open-Loop DBS, VNS) Beyond AI algorithms, there are also therapeutic devices like Medtronic’s **Deep Brain Stimulation (DBS)** for epilepsy and LivaNova’s **Vagus Nerve Stimulation (VNS)**. These are not AI-based, but worth a mention in contrast: they deliver periodic stimulation without sensing (in the case of standard DBS) or with simple heart-rate based triggering (in the case of newer VNS). Medtronic’s DBS targeting the anterior thalamus (approved in 2018 for epilepsy) had a trial (SANTE) that showed ~40% seizure reduction at 2 years, and ~68% at 5 years open-label, by continuously stimulating at intervals ([Brain-responsive neurostimulation for epilepsy (RNS® System)](https://www.sciencedirect.com/science/article/pii/S0920121118305758#:~:text=System%29%20www,)). It’s “dumb” stimulation (no detection, no prediction), but effective through neuromodulation. VNS has an AutoStim feature that detects acute heart rate upticks and provides an extra pulse of vagus stimulation when a seizure likely started (since many seizures, especially convulsive ones, cause heart rate to jump). This is a simple on-board algorithm, not machine learning per se. It improved seizure detection in VNS and possibly outcomes for some patients, but is still reactive. The reason to mention these is to underline that **prediction isn’t the only way to address unpredictability**. Responsive devices like RNS and safety devices like Embrace have achieved clinical uptake by demonstrating concrete benefits (fewer seizures, or saved lives via alerts). Their validation was focused on those endpoints. For NeuroVista, Seer, Epiminder – the burden is to show that prediction can either rival those benefits or add new ones (for example, allow a preventive therapy or a lifestyle change that avoids a seizure trigger). It might even be that in the future, systems converge: a device could predict a high-risk period and then automatically adjust a stimulator (akin to a hybrid of RNS and forecasting). In fact, a recent concept is to use chronotherapy – timing medication or neurostimulation doses according to one’s seizure cycle – which relies on forecasting data ([Multi-day rhythms modulate seizure risk in epilepsy | Nature Communications](https://www.nature.com/articles/s41467-017-02577-y#:~:text=years%2C%20we%20find%20that%20IEA,effect%20size%20in%20most%20subjects)) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=The%20device%20was%20well%20tolerated,highly%20suitable%20for%20this%20purpose)). 
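As a purely conceptual illustration of the chronotherapy idea mentioned above, the sketch below schedules the next predicted high-risk window from an assumed, already-characterized multi-day cycle. The 26-day period, reference peak date, and window width are invented; in practice the cycle would have to be estimated and continuously verified from long-term EEG or diary data, and any change to therapy would remain a clinical decision.

```python
"""Conceptual sketch of cycle-aware ('chronotherapy') scheduling.

Assumes a patient's multi-day seizure cycle has already been characterized
(period and a past high-risk peak); real systems would estimate and track
this from long-term data, and dosing decisions would stay with clinicians.
"""
from datetime import datetime, timedelta

CYCLE_DAYS = 26.0                        # assumed multi-day cycle period
REFERENCE_PEAK = datetime(2024, 1, 10)   # a past high-risk peak (assumed known)
RISK_WINDOW_DAYS = 3.0                   # +/- days around the peak treated as high risk

def cycle_phase(now):
    """Days elapsed since the most recent cycle peak, in [0, CYCLE_DAYS)."""
    return ((now - REFERENCE_PEAK).total_seconds() / 86400.0) % CYCLE_DAYS

def next_high_risk_window(now):
    """Start and end of the next predicted high-risk window."""
    days_to_next_peak = (CYCLE_DAYS - cycle_phase(now)) % CYCLE_DAYS
    next_peak = now + timedelta(days=days_to_next_peak)
    return (next_peak - timedelta(days=RISK_WINDOW_DAYS),
            next_peak + timedelta(days=RISK_WINDOW_DAYS))

now = datetime(2024, 3, 1)
start, end = next_high_risk_window(now)
print(f"Predicted high-risk window: {start:%Y-%m-%d} to {end:%Y-%m-%d}")
print("A chronotherapy protocol might, for example, schedule a medication "
      "review or closer monitoring ahead of this window.")
```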
Such approaches will need rigorous trials as well.

## Conclusion

**Seizure prediction research has transitioned from exploratory trials to increasingly robust, data-driven endeavors.** The efforts by NeuroVista, Seer Medical, and Epiminder each represent milestones on this path:

- NeuroVista demonstrated for the first time that implanted EEG could foretell seizures in humans, though its solution was limited by per-patient variability and practical issues ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=device%20met%20enabling%20criteria%20in,and%204%20months%20after%20implantation)) ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=to%20a%20portable%20device%20designed,prediction%2C%20as%20identifying%20periods%20of)).
- Seer Medical leveraged big data and modern AI to reveal that seizures often follow personal rhythms, and has begun turning those insights into a useful forecasting app – validated in pilot studies and now being tested in real-world use ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=Perhaps%20the%20most%20important%20contribution,2017)) ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=The%20results%20showed%20that%20the,participants%20showing%20performance%20above%20chance)).
- Epiminder is marrying the two approaches with advanced implant technology, showing early evidence that near-clinical-grade prediction accuracy might be achievable with continuous brain monitoring ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=utilizing%20cycles%20in%20EA%20and,forecasting%20work%20using%20intracranial%20EEG)) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=medium%20risk%20and%2012%20%2810,scalp%20EEG%20device)).

Each has made bold claims – from sensitivities above 90% in some cases ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=The%20system%20correctly%20predicted%20seizures,100%20percent%20of%20the%20time)) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=In%20high%20risk%20state%20In,63)) to the very existence of “seizure cycles” in most patients ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=least%20one%20significant%20cycle,cent%20had%20a%20weekly%20cycle)) – and backed them with peer-reviewed data. At the same time, independent evaluation and replication of these claims are essential. There are known challenges: small sample sizes, the need for prospective validation, and ensuring that performance metrics translate into tangible patient benefit. The methodologies are becoming more rigorous (e.g.
blinded prospective trials, proper cross-validation, statistical comparison to chance ([ Forecasting seizure likelihood from cycles of self-reported events and heart rate: a prospective pilot study - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC10300292/#:~:text=additive%20regression%20model%20was%20used,collected%20after%20algorithms%20were%20developed)) ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=increased%20risk%20is%20more%20physiological,are%20many%20potential%20successful%20algorithms))), which increases confidence that the reported prediction performance is real and not an artifact (a brief illustration of such a chance comparison is sketched after this conclusion).

We also see that **different approaches require different validation mindsets**. A prediction algorithm’s success is measured by nuanced statistics and ultimately by whether using it improves outcomes, whereas a detection or treatment device is validated by more direct outcomes (seizures stopped or alerted to). All contribute to the same goal: reducing harm from seizures. It may be that a combination of these approaches yields the best result – for instance, a device that both warns and treats, or an app that not only forecasts but also signals a wearable drug-delivery system.

In the academic discourse there is both excitement and caution. As one commentary was titled: “Seizure Prediction Is Possible – Now Let’s Make It Practical” ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=clinical%20trial%20in%20Melbourne%20Australia%2C,the%20strategy%20is%20really%20better)). The main scientific consensus is that **seizure prediction is indeed possible for at least some patients**, thanks to the work of these groups. The remaining questions revolve around *generalizability*, *reliability*, and *usability*. Larger trials (some ongoing or upcoming) will likely address subject-independent validation (can an algorithm trained on many patients help a new patient?) and long-term durability (does accuracy hold up over years?). Criticisms about false-alarm burden and the lack of demonstrated outcome improvement will have to be addressed as prototypes evolve into user-facing systems.

In conclusion, NeuroVista, Seer Medical, and Epiminder have each pushed the boundary – from initial proof-of-concept to home-based forecasting to implantable 24/7 monitoring – bringing the field closer to a future where people with epilepsy might receive a “seizure forecast” akin to a weather report. The **scientific foundations (EEG dynamics, circadian biology, machine learning)** are now better understood and published, which strengthens the claims made. However, turning these into **widely adopted clinical tools will require continued rigorous validation** (possibly regulatory-approved trials) and addressing practical limitations (safety, cost, patient acceptance). The next few years will be pivotal in determining whether seizure prediction moves from an intriguing possibility to a routine part of epilepsy care. If the promising results hold and the limitations are surmounted, these efforts could indeed reduce the unpredictability of epilepsy, fulfilling a long-held hope of patients, clinicians, and researchers alike.
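As an aside on the “comparison to chance” mentioned above, the core idea is simple: a chance predictor that spends the same fraction of time in the high-risk state would, on average, capture that same fraction of seizures, so a forecasting algorithm must beat that benchmark. Below is a minimal sketch (Python, standard library only) of a one-sided binomial test against such a time-matched chance predictor; the function name and the numbers in the example are invented, not taken from any of the trials discussed.

```python
from math import comb

def better_than_chance(n_seizures, n_predicted, time_in_warning_frac):
    """One-sided binomial test: did more seizures fall inside high-risk
    warnings than expected for a chance predictor spending the same
    fraction of time in the warning state?"""
    p = time_in_warning_frac
    # P(X >= n_predicted) for X ~ Binomial(n_seizures, p)
    p_value = sum(
        comb(n_seizures, k) * p**k * (1 - p) ** (n_seizures - k)
        for k in range(n_predicted, n_seizures + 1)
    )
    return n_predicted / n_seizures, p_value

# Invented example: 14 of 20 seizures occurred during warnings that were
# active 25% of the time -> 70% sensitivity, well above the 25% chance level.
sensitivity, p = better_than_chance(20, 14, 0.25)
print(f"sensitivity = {sensitivity:.0%}, p = {p:.1e} vs. time-matched chance")
```

Real evaluations are more involved (they account for seizure clustering, warning durations, and prospective data splits), but this captures why time in warning has to be reported alongside sensitivity.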
**Sources:**

- Cook *et al.*, *Lancet Neurology* 2013 – NeuroVista trial results ([Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study - PubMed](https://pubmed.ncbi.nlm.nih.gov/23642342/#:~:text=device%20met%20enabling%20criteria%20in,and%204%20months%20after%20implantation))
- ScienceDaily (Univ. of Melbourne), 2013 – Summary of NeuroVista findings ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=For%20the%20first%20month%20of,seizure%20prediction%20for%20each%20patient)) ([World-first study predicts epilepsy seizures in humans | ScienceDaily](https://www.sciencedaily.com/releases/2013/05/130502094804.htm#:~:text=The%20system%20correctly%20predicted%20seizures,100%20percent%20of%20the%20time))
- Stacey & Mormann, *Lancet Neurology* editorial 2013 – Commentary on prediction trial limits ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=clinical%20trial%20in%20Melbourne%20Australia%2C,the%20strategy%20is%20really%20better))
- Stacey WC, *EBioMedicine* 2018 – “Seizure Prediction Is Possible – Now Let’s Make It Practical” ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=clinical%20trial%20in%20Melbourne%20Australia%2C,the%20strategy%20is%20really%20better)) ([ Seizure Prediction Is Possible–Now Let's Make It Practical - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC5828555/#:~:text=increased%20risk%20is%20more%20physiological,are%20many%20potential%20successful%20algorithms))
- Karoly *et al.*, *Brain* 2017 – Seizure circadian profiles and forecasting ([Case study: Monitoring epileptic seizures | Melbourne Research](https://research.unimelb.edu.au/partnerships/case-studies/monitoring-epileptic-seizures-with-an-implantable-device#:~:text=least%20one%20significant%20cycle,cent%20had%20a%20weekly%20cycle))
- Baud *et al.*, *Nat. Commun.* 2018 – Multiday seizure cycles in RNS data ([Multi-day rhythms modulate seizure risk in epilepsy | Nature Communications](https://www.nature.com/articles/s41467-017-02577-y#:~:text=observed%20between%20seizures%20by%20electroencephalography,effect%20size%20in%20most%20subjects)) ([Multi-day rhythms modulate seizure risk in epilepsy | Nature Communications](https://www.nature.com/articles/s41467-017-02577-y#:~:text=years%2C%20we%20find%20that%20IEA,effect%20size%20in%20most%20subjects))
- Stirling *et al.*, *Front. Neurol.* 2021 – Epiminder sub-scalp device preliminary results ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=we%20present%20preliminary%20results%20of,scalp%20device)) ([ Seizure Forecasting Using a Novel Sub-Scalp Ultra-Long Term EEG Monitoring System - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8419461/#:~:text=utilizing%20cycles%20in%20EA%20and,forecasting%20work%20using%20intracranial%20EEG))
- Onorati/Regalia *et al.*, *Front. Neurol.* 2021 – Empatica Embrace trial results (sensitivity/FAR) ([ Prospective Study of a Multimodal Convulsive Seizure Detection Wearable System on Pediatric and Adult Patients in the Epilepsy Monitoring Unit - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8418082/#:~:text=Results%3A%20Data%20from%20152%20patients,95)) ([ Prospective Study of a Multimodal Convulsive Seizure Detection Wearable System on Pediatric and Adult Patients in the Epilepsy Monitoring Unit - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC8418082/#:~:text=,0.001))
- Morrell *et al.*, *Epilepsia* 2014 – NeuroPace RNS Pivotal Trial efficacy outcomes ([ Two-year seizure reduction in adults with medically intractable partial onset epilepsy treated with responsive neurostimulation: Final results of the RNS System Pivotal trial - PMC ](https://pmc.ncbi.nlm.nih.gov/articles/PMC4233950/#:~:text=reduction%20in%20seizures%20in%20the,the%20known%20risks%20of%20an))
- The Science Writer (A. Jarman), 2023 – Article on Seer’s app and forecasting efforts ([Creating a Forecast for Seizures — The Science Writer](https://www.thesciencewriter.org/enigma-stories/creating-a-forecast-for-seizures#:~:text=People%20who%20use%20the%20Seer,on%20the%20floor%2C%E2%80%9D%20said%20Nurse))