Projects at KU

Our lab focuses on the lifecycle development of brain-computer interfaces (BCIs) for speech and communication, spanning three major areas:

  1. Investigating the neuroscience of speech and communication using electrophysiology and modeling
  2. Development of brain-computer interface technology for both fluent speech synthesis and access to current augmentative and alternative communication (AAC) devices
  3. Translating BCI technology into paradigms and frameworks consistent with current clinical best practices in augmentative and alternative communication

A brain-computer interface controlled speech synthesizer

This project focuses on developing and testing a motor imagery BCI that converts changes in the brain’s sensorimotor rhythm into speech formant frequencies for instantaneous, continuous speech synthesis. Supported by the NIH: R03 DC011304 (PI: Brumberg)


In this project, individuals are taught to control a 2D formant frequency speech synthesizer to produce speech sounds with continuous, instantaneous audio-visual feedback. Formant frequencies are a robust, low-dimensional representation of the energy produced during speech, and can be used to acoustically characterize all vowel sounds. Here, an adaptive filter (Kalman filter) brain-computer interface translates the sensorimotor rhythm into 2D formants for display on the screen and real-time synthesis for auditory feedback. Individuals change the 2D formants by imagining movements of their left hand, right hand, and feet. Specifically, left hand imagery moves the synthesizer toward a UW sound (as in who'd), right hand imagery toward an AA sound (as in hot), and foot imagery toward an IY sound (as in heed).
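The decoding step described above can be sketched roughly as follows. This is a minimal, hypothetical illustration of a Kalman filter tracking a 2D formant state from sensorimotor-rhythm band-power features; the feature count, observation model, and noise parameters are assumptions for demonstration, not the lab's actual implementation (in practice the observation matrix would be fit from calibration data rather than randomized).

```python
import numpy as np

class FormantKalmanDecoder:
    """Illustrative Kalman filter: EEG band-power features -> (F1, F2)."""

    def __init__(self, n_features):
        self.x = np.zeros(2)          # state: [F1, F2] (deviation from neutral, Hz)
        self.P = np.eye(2) * 1e4      # state covariance
        self.A = np.eye(2)            # random-walk state transition
        self.Q = np.eye(2) * 50.0     # process noise covariance
        # Observation model: features = H @ state + noise. Here H is random
        # only for demonstration; it would normally be fit during calibration.
        rng = np.random.default_rng(0)
        self.H = rng.standard_normal((n_features, 2))
        self.R = np.eye(n_features)   # observation noise covariance

    def step(self, z):
        # Predict forward one frame
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.Q
        # Update with the innovation from the new feature vector z
        S = self.H @ P_pred @ self.H.T + self.R
        K = P_pred @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = x_pred + K @ (z - self.H @ x_pred)
        self.P = (np.eye(2) - K @ self.H) @ P_pred
        return self.x  # decoded (F1, F2) for display and synthesis

decoder = FormantKalmanDecoder(n_features=8)
z = np.zeros(8)               # one frame of band-power features (placeholder)
f1, f2 = decoder.step(z)      # would feed the synthesizer each frame
```

Each EEG frame would yield one feature vector `z`, and the filter's smoothed state drives both the on-screen cursor and the audio synthesis.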

[Figure: scalp topographies of EEG activity used for motor imagery control of the formant synthesizer; up is toward the nose. The BCI converts the resulting changes in the sensorimotor rhythm into formant frequencies for output.]
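On the auditory-feedback side, a formant synthesizer can be approximated by filtering a glottal source through resonators tuned to the formant frequencies. The sketch below is a generic, assumed implementation (impulse-train source, cascade of two second-order resonators, approximate textbook formant values for an AA-like vowel), not the synthesizer used in the study.

```python
import numpy as np

def resonator(x, freq, bandwidth, fs):
    """Two-pole IIR resonator at `freq` Hz with the given bandwidth."""
    r = np.exp(-np.pi * bandwidth / fs)       # pole radius from bandwidth
    theta = 2 * np.pi * freq / fs             # pole angle from center frequency
    a1, a2 = 2 * r * np.cos(theta), -r * r    # feedback coefficients
    y = np.zeros_like(x)
    for n in range(len(x)):
        # y[-1]/y[-2] wrap to still-zero tail samples at startup, so the
        # filter effectively starts from rest
        y[n] = x[n] + a1 * y[n - 1] + a2 * y[n - 2]
    return y

fs = 16000
source = np.zeros(fs // 10)                   # 100 ms of samples
source[::fs // 100] = 1.0                     # 100 Hz impulse train (glottal pulses)

# Approximate F1/F2 targets for an AA-like vowel (illustrative values)
audio = resonator(resonator(source, 730, 80, fs), 1090, 90, fs)
audio /= np.max(np.abs(audio))                # normalize for playback
```

In a real-time system the formant targets would be updated frame by frame from the decoder output rather than held fixed.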

Brumberg, J. S., Pitt, K. M., and Burnison, J. D. (2018). A non-invasive brain-computer interface for real-time speech synthesis: the importance of multimodal feedback. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26(4), 874–881. doi:10.1109/TNSRE.2018.2808425

 

Controlling augmentative and alternative communication devices with BCIs

A major goal of the lab is to provide BCI control of commercial AAC devices through collaborations with industry partners, federal agencies, foundations, and current users of AAC. This project also focuses on developing screening and assessment tools for selecting the most appropriate BCI for accessing AAC. Supported by the National Institute on Deafness and Other Communication Disorders (PI: Brumberg), the University of Kansas New Faculty General Research Fund (PI: Brumberg), and the American Speech-Language-Hearing Foundation: New Century Scholars Research Grant (PI: Brumberg)


Tutorial: BCI as an access method for AAC. Translation of BCI devices into clinical practice may be enhanced by increasing outreach to speech-language pathologists and other AAC specialists. This tutorial is intended to provide a broad background on BCI methodologies, with particular emphasis on areas of overlap with existing high-tech AAC and AAC access techniques, by answering questions in six topic areas:

  1. How Do People Who Use BCI Interact With the Computer?
  2. Who May Best Benefit From a BCI?
  3. Are BCIs Faster Than Other Access Methods for AAC?
  4. Fatigue and Its Effects
  5. BCI as an Addition to Conventional AAC Access Technology
  6. Limitations of BCI and Future Directions

We end with broad conclusions important for SLPs and other AAC professionals.  Supported in part by the National Institutes of Health (National Institute on Deafness and Other Communication Disorders R03-DC011304, PI: J. Brumberg), the University of Kansas New Faculty Research Fund (PI: J. Brumberg), and the American Speech-Language-Hearing Foundation New Century Scholars Research Grant (PI: J. Brumberg)

Brumberg, J. S., Pitt, K. M., Mantie-Kozlowski, A., and Burnison, J. D. (2018). Brain-computer interfaces for augmentative and alternative communication: a tutorial. American Journal of Speech-Language Pathology, 27(1), 1–12. doi:10.1044/2017_AJSLP-16-0244



Developing AAC feature matching guidelines for BCI. Feature matching is the accepted best practice for ensuring that the needs and preferences of individuals who use AAC are met by any AAC intervention. As an access method for AAC, feature matching also applies to BCI, perhaps more so given the wide variety of BCI paradigms available. In this project, we developed a feature matching tool that, when combined with appropriate screening and assessment, can help identify the BCI options that best match an individual's neurological characteristics and communication needs, for further live trial-based evaluation and final selection. We explore these guidelines using three hypothetical cases SLPs are likely to encounter in their work. Interviews with AAC and BCI specialists revealed seven major areas of consideration:


  1. Sensory: visual acuity and hearing sensitivity.
  2. Medical considerations: for example, history of seizures (important for some sensory BCI techniques), use of medications.
  3. Motor: oculomotor (eye) movement and absence of involuntary motor movements.
  4. Motor imagery: ability to perform first-person motor imagery, presence of neurological activity related to movement imagery.
  5. Cognition: attention, memory/working memory, cognitive and motor learning performance factors (e.g., task switching, self-monitoring, and abstract reasoning).
  6. Literacy: reading and spelling.
  7. General considerations: physical barriers, age, device positioning, and training.


Pitt, K. M. and Brumberg, J. S. (2018). Guidelines for feature matching assessment of brain-computer interfaces for augmentative and alternative communication. American Journal of Speech-Language Pathology, 27(3), 950–964. doi:10.1044/2018_AJSLP-17-0135


Examining sensory interactions with BCI performance. In this project we investigated the impact of oculomotor deficits on steady-state visually evoked potential (SSVEP) BCI performance in three populations with specific oculomotor impairments (ALS: idiosyncratic deficits; locked-in syndrome: impaired horizontal movement; progressive supranuclear palsy: impaired lower visual field).

Brumberg, J. S., Nguyen, A., Pitt, K. M., and Lorenz, S. D. (2018). Examining sensory ability, feature matching, and assessment-based adaptation for a brain-computer interface using the steady-state visually evoked potential. Disability and Rehabilitation: Assistive Technology, 1–9. doi:10.1080/17483107.2018.1428369

 

Neural correlates of vowel identification

Motivated by our work on a BCI-controlled formant frequency speech synthesizer, this project investigates how auditory event-related potentials (ERPs) change as adults listen to vowel sounds of varying quality and distinctiveness. A second goal is to determine whether these ERP changes can be used in a real-time BCI for formant synthesis.

 

The readiness potential and its application for BCI

Motor-based BCIs must not only translate motor execution or imagery-related neural activity into control signals, but also do so only when the user intends to act. We are investigating the readiness potential as a possible neural signal of upcoming movement intention.

 

Collaborating projects

The Prosodic Marionette

In collaboration with the CADLAB at Northeastern University.  The Prosodic Marionette is a novel visual-spatial graphical interface for manipulating the acoustic and temporal cues involved in linguistic prosody.  We are investigating prosodic knowledge in two populations: 1) typically developing children and 2) adults with congenital and acquired neuromotor impairment.

Electrocorticography of continuous speech production

In collaboration with the Schalk Lab. Electrocorticography allows us to sample the brain’s neuroelectrical activity at the fine spatial and temporal resolution necessary for studying the neural dynamics of continuous speech production.

The Unlock Project

In collaboration with CELEST at Boston University

