Compositionality of language refers to the idea that words can be recursively expanded into phrases with the same meaning, while the meaning of the sentence as a whole stays intact.

For example, we could take the sentence "I ate a carrot" and change it to "I ate a crunchy orange vegetable". Current sentence embedding methods fail to fully capture compositionality, as shown, for example, in this work. The original goal of the project was to develop a universally applicable modification of the loss function that would help enforce compositionality in sentence embedding models. Here is a draft of the work-in-progress paper I submitted to the ACL conference last year. Currently, I am collecting data for this project through the website, while also exploring more sophisticated ways of approaching the problem. In particular, I am interested in using VAE-based models to learn disentangled representations of sentence meaning and compositional depth.
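To illustrate the kind of loss modification I have in mind, here is a minimal sketch in pure NumPy. Everything here is my own illustrative code, not the project's actual implementation: the toy vectors stand in for a real encoder's outputs, and the penalty simply pulls the embedding of a sentence toward the embedding of its compositional paraphrase.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def compositionality_penalty(e_sentence, e_paraphrase):
    """Zero when a sentence and its compositional paraphrase
    ("a carrot" -> "a crunchy orange vegetable") receive identical
    embeddings; grows as the two embeddings drift apart."""
    return 1.0 - cosine(e_sentence, e_paraphrase)

def total_loss(base_loss, e_sentence, e_paraphrase, weight=0.1):
    """The model's usual training loss plus the weighted penalty."""
    return base_loss + weight * compositionality_penalty(e_sentence, e_paraphrase)

# Toy embeddings standing in for encoder("I ate a carrot") and
# encoder("I ate a crunchy orange vegetable").
e_sent = np.array([0.9, 0.1, 0.3])
e_para = np.array([0.8, 0.2, 0.35])
loss = total_loss(base_loss=0.5, e_sentence=e_sent, e_paraphrase=e_para)
```

In a real training loop, the paraphrase pairs would come from the data collected through the website, and the penalty would be added to the base objective of whatever embedding model is being trained.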
Goal: develop a model of category learning that accounts both for the case when people learn categories via verbal explanations and for the case of learning from examples.

Traditional category learning models focus primarily on example-based learning. Learning from examples is relevant in many situations, especially in early childhood, but it by no means exhausts all possibilities. In fact, common experience suggests that from primary school onward, people usually learn new concepts through a combination of examples and verbal explanations. For adults, hearing a definition of a word can be enough to add it to one's knowledge repertoire.

In this project, I am working on a range of experiments and computational models to understand the fundamental differences between verbal explanations and examples as means of transferring knowledge about categories.
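To make the contrast concrete, here is a minimal sketch of the two learning modes. This is my own illustrative code, not one of the project's actual models: an exemplar-based classifier in the spirit of classic example-based models, next to a "verbal explanation" compiled directly into a predicate.

```python
import numpy as np

def exemplar_predict(x, examples, labels, c=2.0):
    """Exemplar-based categorization: similarity to each stored
    example decays exponentially with distance, and the category
    with the larger total similarity wins."""
    sims = np.exp(-c * np.linalg.norm(examples - x, axis=1))
    return int(sims[labels == 1].sum() > sims[labels == 0].sum())

def verbal_rule_predict(x):
    """A verbal explanation applied as a rule, e.g. 'it belongs to
    the category if its first feature exceeds 0.5'."""
    return int(x[0] > 0.5)

# Toy stimuli with two features each; labels mark category membership.
examples = np.array([[0.9, 0.2], [0.8, 0.4], [0.1, 0.3], [0.2, 0.9]])
labels = np.array([1, 1, 0, 0])

novel = np.array([0.85, 0.3])
by_examples = exemplar_predict(novel, examples, labels)
by_rule = verbal_rule_predict(novel)
```

The two mechanisms agree on this toy stimulus, but they generalize very differently: the exemplar learner needs the stored examples and is sensitive to their distribution, while the verbal rule transfers the category boundary in a single sentence.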

Past (incomplete list)

Goal: create a suite of online word games to further AI research.
Project website (currently inactive, since data collection is complete):
Currently, the project features one multiplayer game, aimed at collecting data for improving the compositionality of sentence embedding models. Check out the project website for a high-level description and instructions, or just enter the game and figure it out on the fly.
Motivation: multiplayer word games exercise the most advanced and fascinating properties of human cognition, such as rapidly switching between levels of abstraction, drawing surprising analogies, and combining logic with intuition. At the same time, they rarely rely on rote knowledge of obscure facts, which eliminates the advantage of traditional information retrieval systems. Word games focus directly on mastering the process of thinking itself, with different games targeting different aspects of this process. I believe this makes solving word games a challenging and exciting problem for the AI community to focus on.
Plans: the set of games in the project should be expanded. Your ideas and collaborations are very welcome! I am especially looking for people with some experience in web development or AI/Machine Learning/Natural Language Processing, but if you want to contribute by promoting the project, improving its design, sharing your ideas, or in any other way, I would still be happy to hear from you!
This is a sample homework assignment from the Bayesian Methods in Machine Learning course I took at the Yandex School of Data Analysis. I think this write-up on adapting the EM algorithm to an interesting toy problem of villain image reconstruction represents the creative and demanding nature of the assignments there. UPD: I had to remove the write-up because students were using it instead of doing the homework themselves. If you want to see it (and you are not a cheating student), please send me an email.
This project was done in collaboration with the Cambridge Prosociality and Well-Being Lab. In this cross-cultural project, we studied how health and well-being manifest themselves through one's behavior in social networks. In particular, we focused on the so-called dark triad of psychological traits. I applied a range of machine learning models to predict individual psychological traits from behavior in social networks (wall-post texts, comments, likes), and performed statistical analysis of psychological data. See the publications page for relevant papers.
In my Master's thesis, I worked on electroencephalogram (EEG)-based motor imagery recognition. The setting is simple: a person wearing an EEG cap imagines moving a certain part of their body (without actually moving it). I record their brain activity and try to decipher which movement they are thinking about. This project gave me a lot of experience with a range of machine learning techniques, including deep neural networks and deep Boltzmann machines, as well as with spectral analysis.
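As a flavor of the spectral-analysis side of such pipelines, here is a minimal sketch (my own illustrative code, not the thesis implementation) of band-power feature extraction: motor imagery modulates rhythms such as the mu band (roughly 8-12 Hz), and a downstream classifier would consume per-band, per-channel power features like these.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average power of `signal` within the [low, high] Hz band,
    estimated from a simple periodogram (squared FFT magnitudes)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return float(psd[mask].mean())

# Synthetic one-second "EEG" trace sampled at 250 Hz:
# a 10 Hz (mu-band) oscillation buried in noise.
fs = 250
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

mu_power = band_power(trace, fs, 8, 12)     # contains the 10 Hz rhythm
beta_power = band_power(trace, fs, 18, 25)  # only noise in this band
```

In practice one would use a proper spectral estimator (e.g. Welch's method) over multiple channels and time windows, but the principle is the same: the relative power in these bands carries the motor-imagery signal.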