Projects & Lab
Funded research across AI, biosensing, neuroscience, and computer vision.
This project aims to enhance correlative nanoscopy techniques by integrating generative AI models for image reconstruction, enhancement, and cross-modality translation. Correlative nanoscopy combines multiple imaging modalities (e.g., fluorescence and electron microscopy) to provide detailed structural and functional information at the nanoscale.
By leveraging GANs and diffusion models, the project seeks to bridge resolution gaps, reduce acquisition time, and improve interpretability of nanoscopic datasets. As part of a Turkish-Romanian collaboration, Zoi Data contributes expertise in AI model development and computational image analysis.
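As a minimal illustration of the diffusion-model half of this approach, the sketch below implements only the standard forward (noising) process q(x_t | x_0) on a toy array; the schedule, image size, and variable names are placeholder assumptions, not the project's actual pipeline, and the learned reverse (denoising) network is omitted.

```python
import numpy as np

# Toy linear noise schedule for a diffusion model (assumed values).
rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.02, T)      # per-step noise variances
alpha_bar = np.cumprod(1.0 - betas)     # cumulative signal retention

x0 = rng.random((8, 8))                 # stand-in for a nanoscopy image patch

def noisy_sample(x0, t, rng):
    """q(x_t | x_0): blend the clean image with Gaussian noise at step t."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x_mid = noisy_sample(x0, T // 2, rng)   # partially corrupted
x_end = noisy_sample(x0, T - 1, rng)    # close to pure noise
```

A trained reverse model would invert this corruption step by step, which is what makes diffusion models usable for reconstruction and cross-modality translation.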
BB-REBUS is a transdisciplinary European research project investigating neural and bodily mechanisms underlying distorted bodily representations in conditions such as chronic pain, eating disorders, and neurological syndromes.
By integrating neuroscience, clinical research, and computational modeling, the project identifies common brain-body interaction factors contributing to altered self-perception. As the Turkish partner, Zoi Data leads the development of advanced machine learning solutions for analyzing neurophysiological and behavioral data.
This project develops a novel paper-based biosensor enhanced with melanin for early detection of cardiac ischemia by monitoring hypoxanthine levels — a key biomarker that rises during oxygen deprivation in cardiac tissues.
The proposed sensor offers a low-cost, portable, and rapid diagnostic tool suitable for point-of-care applications. By leveraging the conductive and biocompatible properties of melanin, the sensor achieves significantly improved sensitivity and stability.
This project builds an intelligent recommendation system that suggests visually and stylistically compatible clothing items to complete an outfit. Unlike traditional recommendation engines that rely on simple similarity or co-purchase patterns, this system analyzes fashion images to understand visual harmony and contextual pairing.
By leveraging deep learning and computer vision, the system identifies key attributes such as color, texture, shape, and category to produce coherent outfit recommendations.
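The compatibility idea above can be sketched as ranking catalogue items by embedding similarity to a partial outfit. Everything here is a placeholder assumption: real item embeddings would come from a trained vision model, whereas this sketch uses random vectors and plain cosine similarity.

```python
import numpy as np

# Hypothetical catalogue: each item gets an embedding that would, in a real
# system, encode visual attributes (color, texture, shape, category).
rng = np.random.default_rng(0)
catalogue = {name: rng.normal(size=8) for name in
             ["navy_blazer", "white_shirt", "denim_jeans", "red_sneakers"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(partial_outfit, catalogue, k=2):
    """Rank remaining items by mean compatibility with the partial outfit."""
    chosen = [catalogue[n] for n in partial_outfit]
    scores = {name: np.mean([cosine(vec, c) for c in chosen])
              for name, vec in catalogue.items() if name not in partial_outfit}
    return sorted(scores, key=scores.get, reverse=True)[:k]

top = recommend(["navy_blazer"], catalogue)
```

The design choice worth noting is that compatibility is scored against the whole partial outfit (mean similarity), not item by item, so recommendations stay coherent as the outfit grows.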
This project develops an AI-powered software solution for automated analysis of microscope images generated by organ-on-chip systems — platforms that replicate key physiological functions of human organs for biomedical research and drug testing.
The software performs automatic image segmentation, cell counting, and statistical analysis. AI integration ensures objective, scalable, and reproducible analysis across experimental conditions.
This project addresses the automatic assessment of student music performances across two exercise types: melody repetition/imitation and rhythm repetition/imitation.
The implemented system estimates fundamental-frequency (F0) series and chroma feature matrices, matching student and reference performances using Dynamic Time Warping (DTW). For rhythm assessment, a Siamese neural network is trained via metric learning to learn directly from onset positions — targeting instructor-level evaluation accuracy.
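The DTW matching step can be sketched as the classic dynamic-programming alignment of two 1-D series. The F0 contours below are invented examples, not the project's data; they only illustrate why DTW is the right tool, since it tolerates the tempo differences typical of student imitations.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW cost between two 1-D feature series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible predecessor paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical F0 contours (Hz): the student's take is a time-stretched
# version of the reference melody.
reference = np.array([220.0, 220.0, 247.0, 262.0, 247.0])
student = np.array([220.0, 220.0, 220.0, 247.0, 262.0, 262.0, 247.0])
d = dtw_distance(reference, student)  # -> 0.0 despite the tempo change
```

Because DTW allows one-to-many alignments, holding a note longer costs nothing, so the score reflects pitch accuracy rather than timing differences.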