Publications
By year
2025
SIGNAL: Dataset for Semantic and Inferred Grammar Neurological Analysis of Language
Scientific Data, vol. 12, no. 1, pp. 1687, 2025
Anna Komissarenko, Ekaterina Voloshina, Anastasia Cheveleva, Ilia Semenkov, Oleg Serikov, Alex Ossadtchi
@article{Komissarenko2025,
doi = {10.1038/s41597-025-05966-x},
url = {https://doi.org/10.1038/s41597-025-05966-x},
year = {2025},
volume = {12},
number = {1},
pages = {1687},
author = {Komissarenko, Anna and Voloshina, Ekaterina and Cheveleva, Anastasia and Semenkov, Ilia and Serikov, Oleg and Ossadtchi, Alex},
title = {SIGNAL: Dataset for Semantic and Inferred Grammar Neurological Analysis of Language},
journal = {Scientific Data},
abstract = {Recently, the idea of brain-model alignment has been the topic of several influential works. However, most of previous studies were based on datasets collected during regular reading tasks where the subjects were not exposed to processing linguistic incongruencies, and stimuli were not controlled for key linguistic properties. Meanwhile, interpretability studies of Large Language Models pay growing attention to thoroughly designed linguistic tasks based on certain acceptability measures. We present a dataset that contains 600 sentences with a combination of congruent and grammatically or/and semantically incongruent sentences coupled with high density 64-channel EEG recordings of 21 participants. The text stimuli were assessed by native speakers and later used in EEG recording and validation and LLM probing. The validation results proved suitability of the data for future research on brain-model alignment in the linguistic context.}
}
A. Komissarenko, E. Voloshina, A. Cheveleva, I. Semenkov, O. Serikov, and A. Ossadtchi, "SIGNAL: Dataset for Semantic and Inferred Grammar Neurological Analysis of Language," Scientific Data, 2025.
Towards stimulation-free automatic electrocorticographic speech mapping in neurosurgery patients
Journal of Neural Engineering, vol. 22, no. 5, pp. 056002, 2025
Alexei Voskoboynikov, Magomed Aliverdiev, Yulia Nekrasova, Ilia Semenkov, Anastasia Skalnaya, Mikhail Sinkin, Alexei Ossadtchi
@article{Voskoboynikov_2025,
doi = {10.1088/1741-2552/adfc9c},
url = {https://doi.org/10.1088/1741-2552/adfc9c},
year = {2025},
month = {sep},
publisher = {IOP Publishing},
volume = {22},
number = {5},
pages = {056002},
author = {Voskoboynikov, Alexei and Aliverdiev, Magomed and Nekrasova, Yulia and Semenkov, Ilia and Skalnaya, Anastasia and Sinkin, Mikhail and Ossadtchi, Alexei},
title = {Towards stimulation-free automatic electrocorticographic speech mapping in neurosurgery patients},
journal = {Journal of Neural Engineering},
abstract = {Objective. The precise mapping of speech-related functions is crucial for successful neurosurgical interventions in epilepsy and brain tumor cases. Traditional methods like electrocortical stimulation mapping (ESM) are effective but carry a significant risk of inducing seizures. Methods. To address this, we have prepared a comprehensive ESM + electrocorticographic mapping (ECM) dataset from 14 patients with chronically implanted stereo-EEG electrodes. Then we explored several compact machine learning (ML) approaches to convert the ECM signals to the ground truth derived from the risky ESM procedure. Both procedures involved the standard picture naming task. As features, we used gamma-band power within successive temporal windows in the data averaged with respect to picture and voice onsets. We focused on a range of classifiers, including XGBoost, linear support vector classification (SVC), regularized logistic regression, random forest, k-nearest neighbors, decision tree, multi-Layer perceptron, AdaBoost and Gaussian Naive Bayes classifiers and equipped them with confidence interval estimates, crucial in a real-life application. We validated the ML approaches using a leave-one-patient-out procedure and computed ROC and Precision–Recall curves for various feature combinations. Results. For linear SVC we achieved ROC-AUC and PR-AUC scores of 0.91 and 0.88, respectively, which effectively distinguishes speech-related from non-related iEEG channels. We have also observed that the use of information on the voice onset moment notably improved the classification accuracy. Significance. We have for the first time rigorously compared the ECM and ESM results and mimicked a real-life use of the ECM technology. We have also provided public access to the comprehensive ECM+ESM dataset to pave the road towards safer and more reliable eloquent cortex mapping procedures.}
}
A. Voskoboynikov, M. Aliverdiev, Y. Nekrasova, I. Semenkov, A. Skalnaya, M. Sinkin, and A. Ossadtchi, "Towards stimulation-free automatic electrocorticographic speech mapping in neurosurgery patients," Journal of Neural Engineering, 2025.
2024
Representational dissimilarity component analysis (ReDisCA)
NeuroImage, vol. 301, pp. 120868, 2024
Alexei Ossadtchi, Ilia Semenkov, Anna Zhuravleva, Vladimir Kozunov, Oleg Serikov, Ekaterina Voloshina
@article{OSSADTCHI2024120868,
title = {Representational dissimilarity component analysis (ReDisCA)},
journal = {NeuroImage},
volume = {301},
pages = {120868},
year = {2024},
issn = {1053-8119},
doi = {10.1016/j.neuroimage.2024.120868},
url = {https://doi.org/10.1016/j.neuroimage.2024.120868},
author = {Alexei Ossadtchi and Ilia Semenkov and Anna Zhuravleva and Vladimir Kozunov and Oleg Serikov and Ekaterina Voloshina},
keywords = {EEG and MEG, Spatial–temporal decomposition, Representational similarity analysis, Source localization},
abstract = {The principle of Representational Similarity Analysis (RSA) posits that neural representations reflect the structure of encoded information, allowing exploration of spatial and temporal organization of brain information processing. Traditional RSA when applied to EEG or MEG data faces challenges in accessing activation time series at the brain source level due to modeling complexities and insufficient geometric/anatomical data. To overcome this, we introduce Representational Dissimilarity Component Analysis (ReDisCA), a method for estimating spatial–temporal components in EEG or MEG responses aligned with a target representational dissimilarity matrix (RDM). ReDisCA yields informative spatial filters and associated topographies, offering insights into the location of ”representationally relevant” sources. Applied to evoked response time series, ReDisCA produces temporal source activation profiles with the desired RDM. Importantly, while ReDisCA does not require inverse modeling its output is consistent with EEG and MEG observation equation and can be used as an input to rigorous source localization procedures. Demonstrating ReDisCA’s efficacy through simulations and comparison with conventional methods, we show superior source localization accuracy and apply the method to real EEG and MEG datasets, revealing physiologically plausible representational structures without inverse modeling. ReDisCA adds to the family of inverse modeling free methods such as independent component analysis (Makeig, 1995), Spatial spectral decomposition (Nikulin, 2011), and Source power comodulation (Dähne, 2014) designed for extraction sources with desired properties from EEG or MEG data. Extending its utility beyond EEG and MEG analysis, ReDisCA is likely to find application in fMRI data analysis and exploration of representational structures emerging in multilayered artificial neural networks.}
}
A. Ossadtchi, I. Semenkov, A. Zhuravleva, V. Kozunov, O. Serikov, and E. Voloshina, "Representational dissimilarity component analysis (ReDisCA)," NeuroImage, 2024.
Inpainting Semantic and Depth Features to Improve Visual Place Recognition in the Wild
IEEE Access, vol. 12, pp. 5163-5176, 2024
Ilia Semenkov, Aleksei Karpov, Andrey V. Savchenko, Ilya Makarov
@ARTICLE{10380584,
author={Semenkov, Ilia and Karpov, Aleksei and Savchenko, Andrey V. and Makarov, Ilya},
journal={IEEE Access},
title={Inpainting Semantic and Depth Features to Improve Visual Place Recognition in the Wild},
year={2024},
volume={12},
pages={5163-5176},
keywords={Visualization;Semantics;Indexes;Image recognition;Refining;Pipelines;Vehicle dynamics;Image retrieval;Visual place recognition;image retrieval;inpainting;semantic segmentation;monocular depth estimation;NetVLAD;CosPlace},
doi={10.1109/ACCESS.2024.3350038}
}
I. Semenkov, A. Karpov, A. Savchenko, and I. Makarov, "Inpainting Semantic and Depth Features to Improve Visual Place Recognition in the Wild," IEEE Access, 2024.
EEG-Based fMRI Digital Twin: Towards a Cheap and Ecological Approach to Measure Subcortical Brain Activity
Proceedings of the 27th European Conference on Artificial Intelligence, pp. 4463-4466, 2024
Nikolay Dagaev, Ilia Semenkov, Alexei Ossadtchi
@InProceedings{Dagaev_2024,
author = {Dagaev, Nikolay and Semenkov, Ilia and Ossadtchi, Alexei},
title = {EEG-Based fMRI Digital Twin: Towards a Cheap and Ecological Approach to Measure Subcortical Brain Activity},
booktitle = {Proceedings of the 27th European Conference on Artificial Intelligence},
month = {October},
year = {2024},
pages = {4463-4466},
doi = {10.3233/FAIA241030},
}
N. Dagaev, I. Semenkov, and A. Ossadtchi, "EEG-Based fMRI Digital Twin: Towards a Cheap and Ecological Approach to Measure Subcortical Brain Activity," Proceedings of the 27th European Conference on Artificial Intelligence, 2024.
2023
Real-time low latency estimation of brain rhythms with deep neural networks
Journal of Neural Engineering, vol. 20, no. 5, pp. 056008, 2023
Ilia Semenkov, Nikita Fedosov, Ilya Makarov, Alexei Ossadtchi
@article{Semenkov_2023,
doi = {10.1088/1741-2552/acf7f3},
url = {https://doi.org/10.1088/1741-2552/acf7f3},
year = {2023},
month = {sep},
publisher = {IOP Publishing},
volume = {20},
number = {5},
pages = {056008},
author = {Semenkov, Ilia and Fedosov, Nikita and Makarov, Ilya and Ossadtchi, Alexei},
title = {Real-time low latency estimation of brain rhythms with deep neural networks},
journal = {Journal of Neural Engineering},
abstract = {Objective. Neurofeedback and brain-computer interfacing technology open the exciting opportunity for establishing interactive closed-loop real-time communication with the human brain. This requires interpreting brain’s rhythmic activity and generating timely feedback to the brain. Lower delay between neuronal events and the appropriate feedback increases the efficacy of such interaction. Novel more efficient approaches capable of tracking brain rhythm’s phase and envelope are needed for scenarios that entail instantaneous interaction with the brain circuits. Approach. Isolating narrow-band signals incurs fundamental delays. To some extent they can be compensated using forecasting models. Given the high quality of modern time series forecasting neural networks we explored their utility for low-latency extraction of brain rhythm parameters. We tested five neural networks with conceptually distinct architectures in forecasting synthetic EEG rhythms. The strongest architecture was then trained to simultaneously filter and forecast EEG data. We compared it against the state-of-the-art techniques using synthetic and real data from 25 subjects. Main results. The temporal convolutional network (TCN) remained the strongest forecasting model that achieved in the majority of testing scenarios 90% rhythm’s envelope correlation with 10 ms effective delay and circular standard deviation of phase estimates. It also remained stable enough to noise level perturbations. Trained to filter and predict the TCN outperformed the cFIR, the Kalman filter based state-space estimation technique and remained on par with the larger Conv-TasNet architecture. Significance. Here we have for the first time demonstrated the utility of the neural network approach for low-latency narrow-band filtering of brain activity signals. Our proposed approach coupled with efficient implementation enhances the effectiveness of brain-state dependent paradigms across various applications. 
Moreover, our framework for forecasting EEG signals holds promise for investigating the predictability of brain activity, providing valuable insights into the fundamental questions surrounding the functional organization and hierarchical information processing properties of the brain.}
}
I. Semenkov, N. Fedosov, I. Makarov, and A. Ossadtchi, "Real-time low latency estimation of brain rhythms with deep neural networks," Journal of Neural Engineering, 2023.
2022
Human knowledge models: Learning applied knowledge from the data
PLOS ONE, vol. 17, no. 10, pp. 1-16, 2022
Egor Dudyrev, Ilia Semenkov, Sergei O. Kuznetsov, Gleb Gusev, Andrew Sharp, Oleg S. Pianykh
@article{10.1371/journal.pone.0275814,
doi = {10.1371/journal.pone.0275814},
author = {Dudyrev, Egor and Semenkov, Ilia and Kuznetsov, Sergei O. and Gusev, Gleb and Sharp, Andrew and Pianykh, Oleg S.},
journal = {PLOS ONE},
publisher = {Public Library of Science},
title = {Human knowledge models: Learning applied knowledge from the data},
year = {2022},
month = {10},
volume = {17},
url = {https://doi.org/10.1371/journal.pone.0275814},
pages = {1-16},
abstract = {Artificial intelligence and machine learning have demonstrated remarkable results in science and applied work. However, present AI models, developed to be run on computers but used in human-driven applications, create a visible disconnect between AI forms of processing and human ways of discovering and using knowledge. In this work, we introduce a new concept of “Human Knowledge Models” (HKMs), designed to reproduce human computational abilities. Departing from a vast body of cognitive research, we formalized the definition of HKMs into a new form of machine learning. Then, by training the models with human processing capabilities, we learned human-like knowledge, that humans can not only understand, but also compute, modify, and apply. We used several datasets from different applied fields to demonstrate the advantages of HKMs, including their high predictive power and resistance to noise and overfitting. Our results proved that HKMs can efficiently mine knowledge directly from the data and can compete with complex AI models in explaining the main data patterns. As a result, our study reveals the great potential of HKMs, particularly in the decision-making applications where “black box” models cannot be accepted. Moreover, this improves our understanding of how well human decision-making, modeled by HKMs, can approach the ideal solutions in real-life problems.},
number = {10},
}
E. Dudyrev, I. Semenkov, S. Kuznetsov, G. Gusev, A. Sharp, and O. Pianykh, "Human knowledge models: Learning applied knowledge from the data," PLOS ONE, 2022.
NTIRE 2022 Challenge on Night Photography Rendering
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1286-1299, 2022
Egor Ershov, Alex Savchik, Denis Shepelev, Nikola Banić, Michael S. Brown, Radu Timofte, Karlo Koščević, Michael Freeman, Vasily Tesalin, Dmitry Bocharov, Illya Semenkov, Marko Subašić, Sven Lončarić, Arseniy Terekhin, Shuai Liu, Chaoyu Feng, Hao Wang, Ran Zhu, Yongqiang Li, Lei Lei, Zhihao Li, Si Yi, Ling-Hao Han, Ruiqi Wu, Xin Jin, Chunle Guo, Furkan Kınlı, Sami Menteş, Barış Özcan, Furkan Kıraç, Simone Zini, Claudio Rota, Marco Buzzelli, Simone Bianco, Raimondo Schettini, Wei Li, Yipeng Ma, Tao Wang, Ruikang Xu, Fenglong Song, Wei-Ting Chen, Hao-Hsiang Yang, Zhi-Kai Huang, Hua-En Chang, Sy-Yen Kuo, Zhexin Liang, Shangchen Zhou, Ruicheng Feng, Chongyi Li, Xiangyu Chen, Binbin Song, Shile Zhang, Lin Liu, Zhendong Wang, Dohoon Ryu, Hyokyoung Bae, Taesung Kwon, Chaitra Desai, Nikhil Akalwadi, Amogh Joshi, Chinmayee Mandi, Sampada Malagi, Akash Uppin, Sai Sudheer Reddy, Ramesh Ashok Tabib, Ujwala Patil, Uma Mudenagudi
@INPROCEEDINGS{9856964,
author={Ershov, Egor and Savchik, Alex and Shepelev, Denis and Banić, Nikola and Brown, Michael S. and Timofte, Radu and Koščević, Karlo and Freeman, Michael and Tesalin, Vasily and Bocharov, Dmitry and Semenkov, Illya and Subašić, Marko and Lončarić, Sven and Terekhin, Arseniy and Liu, Shuai and Feng, Chaoyu and Wang, Hao and Zhu, Ran and Li, Yongqiang and Lei, Lei and Li, Zhihao and Yi, Si and Han, Ling-Hao and Wu, Ruiqi and Jin, Xin and Guo, Chunle and Kınlı, Furkan and Menteş, Sami and Özcan, Barış and Kıraç, Furkan and Zini, Simone and Rota, Claudio and Buzzelli, Marco and Bianco, Simone and Schettini, Raimondo and Li, Wei and Ma, Yipeng and Wang, Tao and Xu, Ruikang and Song, Fenglong and Chen, Wei-Ting and Yang, Hao-Hsiang and Huang, Zhi-Kai and Chang, Hua-En and Kuo, Sy-Yen and Liang, Zhexin and Zhou, Shangchen and Feng, Ruicheng and Li, Chongyi and Chen, Xiangyu and Song, Binbin and Zhang, Shile and Liu, Lin and Wang, Zhendong and Ryu, Dohoon and Bae, Hyokyoung and Kwon, Taesung and Desai, Chaitra and Akalwadi, Nikhil and Joshi, Amogh and Mandi, Chinmayee and Malagi, Sampada and Uppin, Akash and Sudheer Reddy, Sai and Ashok Tabib, Ramesh and Patil, Ujwala and Mudenagudi, Uma},
booktitle={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
title={NTIRE 2022 Challenge on Night Photography Rendering},
year={2022},
pages={1286-1299},
keywords={Photography;Visualization;Computer vision;Conferences;Rendering (computer graphics);Cameras;Pattern recognition},
doi={10.1109/CVPRW56347.2022.00135},
url={https://openaccess.thecvf.com/content/CVPR2022W/NTIRE/html/Ershov_NTIRE_2022_Challenge_on_Night_Photography_Rendering_CVPRW_2022_paper.html},
}
E. Ershov, A. Savchik, D. Shepelev, N. Banić, M. Brown, R. Timofte, K. Koščević, M. Freeman, V. Tesalin, D. Bocharov, I. Semenkov, M. Subašić, S. Lončarić, A. Terekhin, S. Liu, C. Feng, H. Wang, R. Zhu, Y. Li, L. Lei, Z. Li, S. Yi, L. Han, R. Wu, X. Jin, C. Guo, F. Kınlı, S. Menteş, B. Özcan, F. Kıraç, S. Zini, C. Rota, M. Buzzelli, S. Bianco, R. Schettini, W. Li, Y. Ma, T. Wang, R. Xu, F. Song, W. Chen, H. Yang, Z. Huang, H. Chang, S. Kuo, Z. Liang, S. Zhou, R. Feng, C. Li, X. Chen, B. Song, S. Zhang, L. Liu, Z. Wang, D. Ryu, H. Bae, T. Kwon, C. Desai, N. Akalwadi, A. Joshi, C. Mandi, S. Malagi, A. Uppin, S. Sudheer Reddy, R. Ashok Tabib, U. Patil, and U. Mudenagudi, "NTIRE 2022 Challenge on Night Photography Rendering," 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2022.
2021
Illumination estimation challenge: The experience of the first 2 years
Color Research & Application, vol. 46, no. 4, pp. 705-718, 2021
Egor Ershov, Alex Savchik, Ilya Semenkov, Nikola Banić, Karlo Koščević, Marko Subašić, Alexander Belokopytov, Arseniy Terekhin, Daria Senshina, Artem Nikonorov, Zhihao Li, Yanlin Qian, Marco Buzzelli, Riccardo Riva, Simone Bianco, Raimondo Schettini, Jonathan T. Barron, Sven Lončarić, Dmitry Nikolaev
@article{https://doi.org/10.1002/col.22675,
author = {Ershov, Egor and Savchik, Alex and Semenkov, Ilya and Banić, Nikola and Koščević, Karlo and Subašić, Marko and Belokopytov, Alexander and Terekhin, Arseniy and Senshina, Daria and Nikonorov, Artem and Li, Zhihao and Qian, Yanlin and Buzzelli, Marco and Riva, Riccardo and Bianco, Simone and Schettini, Raimondo and Barron, Jonathan T. and Lončarić, Sven and Nikolaev, Dmitry},
title = {Illumination estimation challenge: The experience of the first 2 years},
journal = {Color Research \& Application},
volume = {46},
number = {4},
pages = {705-718},
keywords = {challenge, color constancy, illumination estimation, mixed illumination, multiple illumination, white balancing},
doi = {10.1002/col.22675},
url = {https://doi.org/10.1002/col.22675},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1002/col.22675},
abstract = {Illumination estimation is the essential step of computational color constancy, one of the core parts of various image processing pipelines of modern digital cameras. Having an accurate and reliable illumination estimation is important for reducing the illumination influence on the image colors. To motivate the generation of new ideas and the development of new algorithms in this field, two challenges on illumination estimation were conducted. The main advantage of testing a method on a challenge over testing it on some of the known datasets is the fact that the ground-truth illuminations for the challenge test images are unknown up until the results have been submitted, which prevents any potential hyperparameter tuning that may be biased. The First illumination estimation challenge (IEC\#1) had only a single task, global illumination estimation. The second illumination estimation challenge (IEC\#2) was enriched with two additional tracks that encompassed indoor and two-illuminant illumination estimation. Other main features of it are a new large dataset of images (about 5000) taken with the same camera sensor model, a manual markup accompanying each image, diverse content with scenes taken in numerous countries under a huge variety of illuminations extracted by using the SpyderCube calibration object, and a contest-like markup for the images from the Cube++ dataset. This article focuses on the description of the past two challenges, algorithms which won in each track, and the conclusions that were drawn based on the results obtained during the first and second challenge that can be useful for similar future developments.},
year = {2021}
}
E. Ershov, A. Savchik, I. Semenkov, N. Banić, K. Koščević, M. Subašić, A. Belokopytov, A. Terekhin, D. Senshina, A. Nikonorov, Z. Li, Y. Qian, M. Buzzelli, R. Riva, S. Bianco, R. Schettini, J. Barron, S. Lončarić, and D. Nikolaev, "Illumination estimation challenge: The experience of the first 2 years," Color Research & Application, 2021.
2020
The Cube++ Illumination Estimation Dataset
IEEE Access, vol. 8, pp. 227511-227527, 2020
Egor Ershov, Alexey Savchik, Illya Semenkov, Nikola Banić, Alexander Belokopytov, Daria Senshina, Karlo Koščević, Marko Subašić, Sven Lončarić
@ARTICLE{9296220,
author={Ershov, Egor and Savchik, Alexey and Semenkov, Illya and Banić, Nikola and Belokopytov, Alexander and Senshina, Daria and Koščević, Karlo and Subašić, Marko and Lončarić, Sven},
journal={IEEE Access},
title={The Cube++ Illumination Estimation Dataset},
year={2020},
volume={8},
pages={227511-227527},
keywords={Lighting;Estimation;Image color analysis;Light sources;Calibration;Semantics;Publishing;Color constancy;dataset;illumination estimation;white balancing;multiple illumination;mixed illumination},
doi={10.1109/ACCESS.2020.3045066}
}
E. Ershov, A. Savchik, I. Semenkov, N. Banić, A. Belokopytov, D. Senshina, K. Koščević, M. Subašić, and S. Lončarić, "The Cube++ Illumination Estimation Dataset," IEEE Access, 2020.