Tal Barami
Ph.D. student in the Department of Computer Science at Ben-Gurion University of the Negev. Researcher interested in solving graphics and vision problems with algorithms and data.
Education
Computer Science, M.Sc.
Ben-Gurion University of the Negev
Advisors: Prof. Ilan Dinstein, Prof. Andrei Sharf
Software Engineering, B.E.
Ben-Gurion University of the Negev
Publications & Projects
Disentanglement Beyond Static vs. Dynamic: A Benchmark and Evaluation Framework for Multi-Factor Sequential Representations
Tal Barami, Nimrod Berman, Ilan Naiman, Amos Haviv Hason, Rotem Ezra, Omri Azencot
Learning disentangled representations in sequential data is a key goal in deep learning, with broad applications in vision, audio, and time series. Although real-world data involves multiple semantic factors interacting over time, prior work has mostly focused on simpler two-factor static and dynamic settings, largely because such settings make data collection easier, and has thereby overlooked the inherently multi-factor nature of real-world data. We introduce the first standardized benchmark for evaluating multi-factor sequential disentanglement across six diverse datasets spanning video, audio, and time series. Our benchmark includes modular tools for dataset integration, model development, and evaluation metrics tailored to multi-factor analysis. We additionally propose a post-hoc Latent Exploration Stage that automatically aligns latent dimensions with semantic factors, and introduce a Koopman-inspired model that achieves state-of-the-art results. Moreover, we show that Vision-Language Models can automate dataset annotation and serve as zero-shot disentanglement evaluators, removing the need for manual labels and human intervention. Together, these contributions provide a robust and scalable foundation for advancing multi-factor sequential disentanglement. Our code is available on GitHub, and the datasets and trained models are available on Hugging Face.
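The Koopman-inspired model is only summarized above; as a rough illustration of the general idea (an assumption for exposition, not the paper's implementation), one can train a sequence autoencoder whose latent state is advanced by a single learned linear operator, whose structure can then be probed to separate factors. All class names, dimensions, and losses below are illustrative.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: Koopman-style sequence autoencoder. A linear
# operator K advances the latent state one step; factors can later be
# probed via the structure of K. Dimensions and names are illustrative.
class KoopmanAE(nn.Module):
    def __init__(self, obs_dim=64, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, obs_dim))
        # Learned linear (Koopman) operator acting on the latent state.
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, x):                # x: (batch, time, obs_dim)
        z = self.encoder(x)              # latent trajectory
        z_pred = self.K(z[:, :-1])       # advance each step linearly
        recon = self.decoder(z)
        return recon, z_pred, z[:, 1:]

model = KoopmanAE()
x = torch.randn(8, 20, 64)
recon, z_pred, z_next = model(x)
# Training would combine reconstruction and linear-prediction losses:
loss = nn.functional.mse_loss(recon, x) + nn.functional.mse_loss(z_pred, z_next)
```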
Comparing three algorithms of automated facial expression analysis in autistic children: different sensitivities but consistent proportion
Liora Manelis, Tal Barami, Michal Ilan, Gal Meiri, Idan Menashe, Elizabeth Soskin, Carmel Sofer, Ilan Dinstein
We analyzed over 5 million video frames from 100 children (72 autistic, 28 controls; ages 2–7) recorded during ADOS-2 assessments. Facial expressions were quantified using three leading analysis tools (iMotions, FaceReader, Py-Feat), enabling objective comparisons across algorithms and groups. Despite substantial variability between tools, all three consistently showed no group differences in the quantity of facial expressions, suggesting that atypical expression use in autism relates more to quality, timing, and social context than to overall frequency.
Automated Analysis of Stereotypical Movements in Videos of Children With Autism Spectrum Disorder
Tal Barami, Liora Manelis-Baram, Hadas Kaiser, Michal Ilan, Aviv Slobodkin, Ofri Hadashi, Dor Hadad, Omri Azencot, Andrei Sharf, Ilan Dinstein
We developed ASDMotion, the first large-scale open-source tool for detecting and quantifying stereotypical motor movements (SMMs) in children with autism. Trained on over 200 clinical assessments with expert annotations, ASDMotion combines deep learning with pose-based analysis to identify repetitive behaviors such as hand flapping and body rocking.
The system achieves over 92% recall and strong alignment with expert ratings, enabling scalable and objective measurement of a core symptom of autism. Beyond its immediate clinical utility, ASDMotion provides a rich dataset and benchmark for advancing automated behavior analysis in naturalistic settings, opening the door to more reliable diagnostics, treatment monitoring, and research on developmental disorders.
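ASDMotion's actual pipeline is not reproduced here; the sketch below only illustrates, under assumed names and frequency bands, how pose-based analysis can surface repetitive movements: keypoint trajectories with strong periodic energy in a characteristic band are plausible candidates for stereotypical motor movements.

```python
import numpy as np

# Illustrative sketch (not ASDMotion's actual pipeline): score a window of
# a pose sequence by how much of its motion energy is periodic, a simple
# proxy for repetitive movements such as hand flapping or body rocking.
def repetitive_score(keypoints, fps=30.0, band=(1.0, 5.0)):
    """keypoints: (frames, joints, 2) array of x/y positions."""
    # Per-joint velocity magnitudes remove the static pose component.
    vel = np.linalg.norm(np.diff(keypoints, axis=0), axis=-1)  # (frames-1, joints)
    spectrum = np.abs(np.fft.rfft(vel - vel.mean(axis=0), axis=0))
    freqs = np.fft.rfftfreq(vel.shape[0], d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # Fraction of spectral energy inside the candidate SMM frequency band.
    return spectrum[in_band].sum() / (spectrum.sum() + 1e-8)

window = np.random.rand(90, 17, 2)  # 3 s of 17-joint poses at 30 fps
print(repetitive_score(window))
```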
Neural Approaches for 3D Pose Estimation from 3D Data
Gali Hod, Tal Barami, Michael Kolomenkin
We present two novel, open-source methods for human pose estimation directly from 3D point clouds and meshes, enabling accurate reconstruction of body joints for use in creative, clinical, and interactive applications. Unlike classical approaches, our methods are fully differentiable and designed to integrate seamlessly into modern deep learning pipelines. One approach uses body-part segmentation for skeleton construction; the other directly estimates joint positions using a PointNet++-based neural network.
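As a rough illustration of the direct-regression variant (the method itself is PointNet++-based; this flat PointNet-style stand-in is a simplifying assumption), a shared per-point MLP followed by permutation-invariant max pooling can regress joint coordinates end to end, keeping the whole pipeline differentiable. Names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

# Simplified sketch of direct joint regression from a point cloud. The
# actual method is PointNet++-based; this flat PointNet-style version
# (shared per-point MLP + max pooling) is an illustrative stand-in.
class JointRegressor(nn.Module):
    def __init__(self, num_joints=17):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, 256), nn.ReLU())
        self.head = nn.Linear(256, num_joints * 3)
        self.num_joints = num_joints

    def forward(self, pts):                 # pts: (batch, n_points, 3)
        feat = self.point_mlp(pts)          # per-point features
        pooled = feat.max(dim=1).values     # permutation-invariant pooling
        return self.head(pooled).view(-1, self.num_joints, 3)

model = JointRegressor()
cloud = torch.randn(4, 2048, 3)
joints = model(cloud)                       # (4, 17, 3) predicted joints
```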
Sensperience: A Virtual Reality Journey Through Altered Perception
Tal Barami, Liza Fridman, Carmel Lederer, Boaz Krysler
Sensperience is an immersive virtual reality application designed to let users explore and experience a wide range of altered sensory realities. The system combines a VR headset with a Geomagic haptic device, creating a fully interactive, multi-sensory environment that can be entirely controlled and customized by the developer. Users are guided through a series of simulated scenes, each engineered to manipulate and challenge specific human senses. These scenarios include experiences such as complete darkness, underwater immersion, vertigo, zero gravity, and interaction with varied textures—all designed to evoke different perceptual and physical responses. The result is a holistic simulation platform that provides both visual and tactile feedback, enabling applications in education, therapy, training, and entertainment. Sensperience offers developers a unique sandbox to create and experiment with customized sensory manipulations in a controlled, virtual setting.
Professional Experience
Researcher
Playtika
2021 — 2022: Playtika Research Group: Researched feature disentanglement in generative methods for 3D modeling. Developed a segmentation-based method for 3D pose estimation.
2019 — 2021: Data Scientist: Developed models for uplift modeling and churn prediction, as well as internal tools that automate research phases such as data acquisition, modeling, analysis, and evaluation.
Software Engineer
Mentor Graphics
2016 — 2018: Developed and maintained features for two of the company's flagship products, both of which handle production-line management for large-scale factories around the globe.