Background: Brain-computer interfaces (BCIs) decode neural activity and extract from it information that can be meaningfully interpreted. One of the most intriguing opportunities is to employ BCIs for decoding speech, a uniquely human trait, which opens up plentiful applications ranging from rehabilitation of patients to direct and seamless communication between humans. Complex deep neural networks have furnished only limited success in deciphering the neuronal code: a marginal performance gain is achieved with uninterpretable decision rules characterised by thousands of parameters that must be identified from a limited amount of training data. Our recent experience shows that, when applied to neural activity data, compact neural networks with trainable and physiologically meaningful feature extraction layers deliver comparable performance, ensure robustness of the learned decision rules and offer the exciting opportunity of automatic knowledge discovery. Methods: We collected approximately one hour of data (from two sessions) in which we recorded stereotactic EEG (sEEG) activity during overt speech (6 different randomly shuffled phrases and rest). We also recorded a synchronized audio speech signal. The sEEG recording was carried out in an epilepsy patient implanted for medical reasons with an sEEG electrode passing through Broca's area, with 6 contacts spaced at 5 mm. We then used a compact convolutional network-based architecture to recover speech mel-cepstrum coefficients, followed by a 2D convolutional network to classify individual words. We then interpreted the former network's weights using the theoretically justified approach we devised earlier. Results: We achieved on average 44% accuracy in classifying 26 + 1 words (3.7% chance level) using only 6 channels of data recorded with a single minimally invasive sEEG electrode.
We compared the performance of our compact convolutional network to that of a DenseNet-like architecture recently featured in the neural speech decoding literature and found no statistically significant performance differences. Moreover, our architecture learned faster and resulted in a stable, interpretable and physiologically meaningful decision rule that operated successfully over a contiguous data segment non-overlapping with the training data interval. The spatial characteristics of the neuronal populations pivotal to the task corroborate the results of the active speech mapping procedure, and frequency-domain patterns show primary involvement of high-frequency activity. Conclusions: Most of the speech decoding solutions available to date either use potentially harmful intracortical electrodes or rely on data recorded with impractically massive multielectrode grids covering a large cortical area. Here we for the first time achieved practically usable decoding accuracy for a vocabulary of 26 words + 1 silence class using only 6 channels of cortical activity sampled with a single sEEG shaft. The decoding was implemented using a compact and interpretable architecture which ensures robustness of the solution and requires only a small amount of training data. The proposed approach is a first step towards a minimally invasive implantable BCI solution for restoring speech function.
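The interpretable feature-extraction stage described above (spatial filtering of the 6 sEEG contacts followed by temporal filtering) can be sketched as a minimal numpy forward pass. All sizes, the random weights, and the rectify-and-smooth envelope step are illustrative assumptions for exposition, not the authors' exact architecture; in the actual network these weights would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels, n_times = 6, 1000   # one sEEG shaft, arbitrary window length
n_branches, kernel_len = 4, 65  # hypothetical branch count and FIR length

x = rng.standard_normal((n_channels, n_times))  # simulated sEEG window

# 1) Spatial filtering: each branch learns one linear combination of the
#    6 contacts, interpretable as a spatial pattern along the electrode.
W_spatial = rng.standard_normal((n_branches, n_channels))
spatially_filtered = W_spatial @ x              # (n_branches, n_times)

# 2) Temporal convolution: one FIR filter per branch, interpretable in the
#    frequency domain (e.g. as a high-frequency band-pass filter).
W_temporal = rng.standard_normal((n_branches, kernel_len))
filtered = np.stack([
    np.convolve(spatially_filtered[b], W_temporal[b], mode="same")
    for b in range(n_branches)
])

# 3) Envelope extraction (rectify + moving-average smoothing), a simple
#    proxy for the high-frequency activity found pivotal to the task.
smoother = np.ones(25) / 25
features = np.stack([
    np.convolve(np.abs(filtered[b]), smoother, mode="same")
    for b in range(n_branches)
])

print(features.shape)  # (4, 1000) — per-branch envelope time courses
```

Because each branch factorizes into one spatial and one temporal filter, the learned weights can be read out directly as spatial patterns and frequency responses, which is what makes this style of compact architecture amenable to the weight-interpretation procedure mentioned in the abstract.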
People often change their beliefs by succumbing to the opinion of the majority. Such changes are often referred to as majority influence or conformity. While some previous studies have focused on the reinforcement learning mechanisms of conformity or on its internalization, others have reported evidence of changes in sensory processing evoked by majority opinion. In this study, we used magnetoencephalographic (MEG) source imaging to further investigate the remote effects of agreement and disagreement with the majority. During the first session, participants rated the trustworthiness of faces and subsequently learned how the majority of their peers had previously rated each face. To identify the neural correlates of the post-effect of agreeing or disagreeing with the group, we recorded MEG activity while participants rated the faces during the next session. We found MEG traces of past disagreement or agreement with the peer group in the parietal cortices as early as approximately 230 ms after face onset. The neural activity of the superior parietal lobule, intraparietal sulcus, and precuneus was significantly stronger if the participant's rating had previously differed from the ratings of his or her peers. The early MEG correlates of disagreement with the majority were followed by activity in the orbitofrontal cortex starting at about 320 ms after face onset. Altogether, the results reveal the temporal dynamics of the neural mechanism underlying the remote effects of disagreement with the peer group: early signatures of modified face processing were followed by later markers of long-term social influence on the valuation process in the ventromedial prefrontal cortex.