Project Name: Pushing the Limits of Correlative Nanoscopy with Generative Artificial Intelligence
Funding Program: TÜBİTAK 2569 – Bilateral Cooperation Program with the Ministry of Research, Innovation and Digitization (MCID) of Romania
Role: Researcher
Project Dates: 01.07.2025 – 30.06.2027
Project Owner Organization: Zoi Data
Project Summary:
This project aims to enhance correlative nanoscopy techniques by integrating generative artificial intelligence models for image reconstruction, enhancement, and cross-modality translation. Correlative nanoscopy combines multiple imaging modalities (e.g., fluorescence and electron microscopy) to provide detailed structural and functional information at the nanoscale. However, image acquisition and alignment across modalities remain challenging. By leveraging generative AI, such as generative adversarial networks (GANs) and diffusion models, the project seeks to bridge resolution gaps, reduce acquisition time, and improve the interpretability of nanoscopic datasets. As part of a Turkish-Romanian collaboration, Zoi Data contributes expertise in AI model development and computational image analysis.
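As a rough illustration of the cross-modality translation idea, the sketch below shows a minimal pix2pix-style encoder-decoder in PyTorch trained with an L1 reconstruction loss on paired patches. This is a hypothetical, simplified example: the layer sizes, tensor shapes, and loss are illustrative assumptions, and the project's actual GAN or diffusion architectures are not shown here.

```python
# Minimal, hypothetical sketch: an encoder-decoder CNN translating one
# microscopy modality into another (e.g., fluorescence -> EM-like patches).
# All sizes and the L1-only objective are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityTranslator(nn.Module):
    """Encoder-decoder mapping patches from one imaging modality to another."""
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One illustrative training step on dummy paired patches; a full GAN or
# diffusion setup would add an adversarial or denoising objective.
model = ModalityTranslator()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
fluo = torch.rand(4, 1, 64, 64)   # dummy fluorescence patches
em = torch.rand(4, 1, 64, 64)     # dummy paired EM patches
loss = nn.functional.l1_loss(model(fluo), em)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```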
Project Name: BB-REBUS – Brain-Body factoRs mediating altEred Bodily representations in mUltiple pathological conditionS
Funding Program: European Commission – ERA-NET NEURON JTC2024
Role: Researcher
Project Dates: 01.05.2025 – 30.04.2028
National Project Owner Organization: Zoi Data
Project Summary:
BB-REBUS is a transdisciplinary European research project aiming to investigate the neural and bodily mechanisms underlying distorted bodily representations observed in various pathological conditions, such as chronic pain, eating disorders, and neurological syndromes. By integrating neuroscience, clinical research, and computational modeling, the project seeks to identify common brain-body interaction factors contributing to altered self-perception. As the Turkish partner, Zoi Data is responsible for developing advanced machine learning solutions for analyzing neurophysiological and behavioral data. The project aims to pave the way for more personalized and effective diagnostic and therapeutic interventions across clinical populations.
Project Name: Early Diagnosis of Cardiac Ischemia: Monitoring Hypoxanthine with a Melanin-Enhanced Paper-Based Biosensor
Funding Program: Scientific Research Projects (BAP) – İzmir Demokrasi University
Role: Coordinator
Project Dates: 13.03.2025 – 13.03.2026
Project Owner Organization: İzmir Demokrasi University
Project Summary:
This project aims to develop a novel, paper-based biosensor enhanced with melanin for the early detection of cardiac ischemia by monitoring hypoxanthine levels. Hypoxanthine is a key biomarker that increases during oxygen deprivation in cardiac tissues. The proposed biosensor offers a low-cost, portable, and rapid diagnostic tool suitable for point-of-care applications. The conductive and biocompatible properties of melanin significantly improve the sensor's sensitivity and stability. The ultimate goal is to provide an accessible solution for early ischemia diagnosis, enabling timely medical intervention and reducing the risk of severe cardiac events.
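Purely as an illustration of how such a sensor's readout could be turned into a concentration estimate, the sketch below fits a simple linear calibration curve. The concentration and current values, the linear model, and the units are hypothetical assumptions, not measured project data.

```python
# Hypothetical calibration sketch: relate sensor current to hypoxanthine
# concentration with a linear fit. All numbers are made up for illustration;
# real calibration data would come from controlled measurements.
import numpy as np

# Assumed calibration points: hypoxanthine concentration (µM) vs. sensor current (µA)
concentration_uM = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
current_uA = np.array([0.10, 0.42, 0.75, 1.45, 2.80, 5.60])

# Fit a first-order calibration curve: current ≈ slope * concentration + intercept
slope, intercept = np.polyfit(concentration_uM, current_uA, 1)

def estimate_concentration(measured_current_uA: float) -> float:
    """Invert the calibration curve to estimate concentration from a reading."""
    return (measured_current_uA - intercept) / slope

print(f"Estimated concentration for 1.0 µA: {estimate_concentration(1.0):.1f} µM")
```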
Project Name: Creating an Outfit Combination Completion Based Recommendation System
Funding Program: TÜBİTAK TEYDEB 1507 – SME R&D Start-up Support Program
Grant No: 7230156
Role: Project Manager
Project Dates: 01.01.2024 – 30.06.2025
Project Owner Organization: Zoi Data
Project Summary:
The project focuses on building an intelligent recommendation system that suggests visually and stylistically compatible clothing items to complete an outfit. Unlike traditional recommendation engines that rely on simple similarity or co-purchase patterns, this system analyzes fashion images and outfit compositions to understand visual harmony and contextual pairing. By leveraging deep learning models and computer vision techniques, the system identifies key attributes such as color, texture, shape, and category, then generates suggestions to complete a partially defined outfit (e.g., recommending shoes and a jacket to match a dress). The solution aims to enhance user experience in fashion e-commerce platforms by providing personalized, fashion-aware outfit recommendations.
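One simple way to rank candidates for a missing item, shown in the hypothetical sketch below, is to embed each garment with an upstream vision model and score candidates by cosine similarity to the centroid of the partial outfit. The embedding values, item names, and centroid-based scoring rule are illustrative assumptions, not the project's actual method.

```python
# Minimal, hypothetical sketch of embedding-based outfit completion: candidates
# for a missing category are ranked by closeness to the partial outfit's centroid.
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    return v / (np.linalg.norm(v) + 1e-9)

# Assumed: embeddings produced by an upstream vision model (dimension 4 for brevity).
partial_outfit = {
    "dress": l2_normalize(np.array([0.9, 0.1, 0.3, 0.2])),
    "bag":   l2_normalize(np.array([0.8, 0.2, 0.4, 0.1])),
}
shoe_candidates = {
    "shoe_A": l2_normalize(np.array([0.85, 0.15, 0.35, 0.15])),
    "shoe_B": l2_normalize(np.array([0.10, 0.90, 0.20, 0.70])),
}

# Score each candidate by cosine similarity to the outfit centroid.
centroid = l2_normalize(np.mean(list(partial_outfit.values()), axis=0))
ranked = sorted(shoe_candidates.items(),
                key=lambda kv: float(centroid @ kv[1]), reverse=True)
print("Suggested shoes:", [name for name, _ in ranked])
```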
Project Name: Artificial Intelligence-Based Analysis of Microscope Images Obtained Using Organ-on-Chip
Funding Program: TÜBİTAK TEYDEB 1501 – Industry R&D Support Program
Grant No: 3230138
Role: Researcher
Project Dates: 01.12.2023 – 01.05.2025
Project Owner Organization: InitioCell, in collaboration with Zoi Data
Project Summary:
This project aims to develop an AI-powered software solution for the automated analysis of microscope images generated by organ-on-chip systems. These platforms replicate key physiological functions of human organs, providing a powerful tool for biomedical research and drug testing. The software being developed is capable of performing automatic image segmentation, cell counting, and statistical analysis. By processing time-series image data, the system can track and report how cell populations evolve over different days, enabling researchers to monitor cellular responses and growth dynamics efficiently. The integration of artificial intelligence ensures objective, scalable, and reproducible analysis, ultimately accelerating research in personalized medicine, toxicology, and tissue engineering.
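The segmentation-and-counting step described above could, in a simplified form, look like the sketch below, which thresholds a frame, labels connected components, and reports per-object statistics with scikit-image. The synthetic image, thresholding choice, and parameters are assumptions; the project's actual models and pipeline are not shown here.

```python
# Minimal, hypothetical sketch of cell segmentation and counting on one frame
# using scikit-image; the synthetic "microscopy" image is for illustration only.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

# Synthetic grayscale frame with a few bright blobs on a dark background.
rng = np.random.default_rng(0)
frame = rng.normal(0.1, 0.02, size=(128, 128))
for cy, cx in [(30, 40), (80, 90), (100, 30)]:
    yy, xx = np.ogrid[:128, :128]
    frame += 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 5.0 ** 2))

# Threshold, label connected regions, and count/measure them.
mask = frame > threshold_otsu(frame)
labels = label(mask)
cells = regionprops(labels)
print(f"Detected {len(cells)} cell-like objects")
for region in cells:
    print(f"  area={region.area}px  centroid={tuple(round(c, 1) for c in region.centroid)}")
```

Repeating such measurements over time-stamped frames would yield the kind of day-over-day population statistics described in the summary.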
Project Name: Developing a Visual Search-Based Clothing Recommendation System
Funding Program: KOSGEB’s R&D, Product Development, and Innovation Support Program
Grant No: 68KIB
Role: Project Manager
Project Dates: 31.03.2023 – 31.07.2024
Project Owner Organization: Zoi Data
Project Summary:
In traditional e-commerce websites, users type keywords describing the product they are looking for into a search box. The search algorithm matches these keywords against product labels in the database and presents the relevant products, from which the user selects an item and proceeds to the ordering steps. For text-based search to be effective, the customer must understand the product well enough to know which keywords to enter, which is not always the case. Visual search addresses this problem: instead of keywords, customers search for a product using an image. In the proposed project, we therefore aim to develop a visual search engine for finding similar clothing items in extensive product databases.
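A typical way to build such a visual search engine, sketched below under stated assumptions, is to embed every catalogue image with a pretrained CNN backbone and rank items by cosine similarity to the query embedding. The choice of ResNet-18, the image sizes, and the brute-force in-memory search are illustrative assumptions, not the production setup.

```python
# Minimal, hypothetical sketch of visual similarity search: embed images with a
# CNN backbone and rank catalogue items by cosine similarity to the query.
import torch
import torch.nn.functional as F
from torchvision import models

# ResNet-18 as a generic feature extractor (classifier head removed).
backbone = models.resnet18(weights=None)  # in practice, pretrained weights would be loaded
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of (N, 3, 224, 224) images to L2-normalized embeddings."""
    return F.normalize(backbone(images), dim=1)

# Dummy catalogue of 100 garment images and one query image.
catalogue = embed(torch.rand(100, 3, 224, 224))
query = embed(torch.rand(1, 3, 224, 224))

# Cosine similarity reduces to a dot product on normalized vectors.
scores = catalogue @ query.squeeze(0)
top5 = torch.topk(scores, k=5).indices.tolist()
print("Most visually similar catalogue items:", top5)
```

At production scale, the brute-force dot product would normally be replaced by an approximate nearest-neighbor index, but the ranking principle is the same.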
Project Name: Cloud Based Automatic Product Tagging via Image
Funding Program: TÜBİTAK TEYDEB 1507 – SME R&D Start-up Support Program
Grant No: 7210160
Role: Project Manager
Project Dates: 01.09.2021 – 28.02.2023
Project Owner Organization: Zoi Data
Project Summary:
Within the scope of the project, products in the fashion industry are tagged automatically using deep learning and image processing techniques. The boundaries of fashion products are determined through segmentation of the image, and their main categories are identified through classification; the dominant color is then estimated to decide the color of the product. Products whose main group and color have been determined are passed to further classifiers in the label tree according to their main product group (for example, skirt length, collar type, or sleeve length), and the outputs of these classifiers are combined in a grammatically coherent way to create a long, comprehensive product label. The created label can be presented in various languages, such as Turkish and English. The project also addresses the detection of licensed products in images so that copyright payments can be monitored: a product containing a licensed image can automatically be tagged with the corresponding license. This comprehensive product tagging system not only reduces the workload of e-commerce sites operating in the fashion industry and facilitates product tracking and control, but also enables the production of search-engine-optimized labels.
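Two of the steps above, dominant-color estimation and label composition, could in a simplified form look like the sketch below, which clusters segmented-product pixels with k-means and fills a label template from attribute outputs. The pixel data, attribute values, and template are illustrative assumptions, not the project's actual classifiers or grammar rules.

```python
# Minimal, hypothetical sketch: dominant color via k-means over product pixels,
# then composition of attribute outputs into a readable product label.
import numpy as np
from sklearn.cluster import KMeans

def dominant_color(pixels_rgb: np.ndarray, k: int = 3) -> np.ndarray:
    """Return the RGB centroid of the largest k-means cluster of product pixels."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels_rgb)
    counts = np.bincount(km.labels_, minlength=k)
    return km.cluster_centers_[np.argmax(counts)]

# Dummy segmented-product pixels (rows are RGB values in [0, 255]).
pixels = np.vstack([
    np.random.default_rng(0).normal([200, 30, 40], 10, size=(500, 3)),   # mostly red
    np.random.default_rng(1).normal([240, 240, 240], 5, size=(100, 3)),  # some white trim
]).clip(0, 255)

color = dominant_color(pixels)
attributes = {"category": "skirt", "length": "midi", "pattern": "plain"}
label = (f"{attributes['pattern']} {attributes['length']} {attributes['category']}, "
         f"dominant RGB {tuple(int(c) for c in color)}")
print(label)
```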
Project Name: Automatic Grading of Student Music Exercise Performance for Use in Online Music Education System Design
Funding Program: TÜBİTAK ARDEB 1001 – Scientific and Technological Research Projects Funding Program
Grant No: 121E198
Role: Researcher
Project Dates: 15.12.2021 – 15.06.2023
Project Owner Organization: Izmir Democracy University
Project Summary:
We have entered a new era in which education is largely conducted online. The number of music students following online courses is growing rapidly, as are the resources made available to them. In parallel, the human effort required to grade large numbers of students' musical performances is also increasing, making the design of automatic systems for grading student performances a necessity. Our project addresses the problem of automatic assessment of student music performances for two types of musical exercises: melody repetition/imitation and rhythm repetition/imitation. We also studied the design and implementation of a system that automatically generates exercises at a targeted difficulty level.
Our main research goal is the implementation of a system that can assess a student performance as accurately as a music instructor. The melody and rhythm dimensions are highly important in musical exercises, so the results of this project will be broadly useful for developing technologies that support online education. For both tasks, we collected and publicly shared datasets consisting of recordings from real conservatory entrance exam auditions in Turkey; all recordings were annotated and graded by three experts.
In the first study, we designed and implemented a system that automatically grades student vocal performances repeating melodic patterns. The system takes a student performance recording and the reference piano recording (of the repeated pattern) as input and produces a grade as output, based on the melodic similarity/distance between the two recordings. The system architecture consists of four sequential blocks. The first block estimates the fundamental frequency series and chroma feature matrices for the reference and performance recordings. The second block aligns these series, which have different lengths in time, using a dynamic time warping algorithm. In the third block, the statistical distribution of the distances between the aligned representations is computed. The fourth block is a machine learning model that takes the distance distribution and automatically estimates a grade; it is trained using a supervised learning approach. We present an analysis of inter- and intra-expert agreement together with the machine learning experiment results for the proposed automatic system.
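The alignment-and-distance stage could, in a simplified form, look like the sketch below: a plain dynamic time warping over two 1-D pitch series, followed by simple statistics of the aligned frame distances that could feed a downstream regression model. The pitch values, the pure-NumPy DTW, and the chosen statistics are illustrative assumptions, not the system's exact features.

```python
# Minimal, hypothetical sketch of DTW alignment between a reference and a
# student pitch contour, plus summary statistics of the aligned distances.
import numpy as np

def dtw_aligned_distances(ref: np.ndarray, perf: np.ndarray) -> np.ndarray:
    """Align two 1-D series with DTW and return |difference| along the optimal path."""
    n, m = len(ref), len(perf)
    cost = np.abs(ref[:, None] - perf[None, :])            # local cost matrix
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):                               # accumulated cost
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    i, j, path = n, m, []                                   # backtrack the warping path
    while i > 1 or j > 1:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.append((0, 0))
    return np.array([cost[a, b] for a, b in path])

# Dummy f0 contours (Hz) for a reference piano pattern and a student's repetition.
reference_f0 = np.array([220.0, 220.0, 246.9, 261.6, 261.6, 293.7])
student_f0 = np.array([218.0, 221.0, 222.0, 244.0, 263.0, 262.0, 290.0])

aligned = dtw_aligned_distances(reference_f0, student_f0)
features = {"mean": aligned.mean(), "std": aligned.std(), "max": aligned.max()}
print(features)  # such statistics could be the input to the grade-estimation model
```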
In the second study, we designed and implemented an automatic performance assessment system for a student's rhythmic pattern imitation. The system compares the student's performance recording with a teacher's reference recording and assigns a grade between 1 and 4. The automatic assessment (grade assignment) task is treated as a regression problem, and two approaches are tested and compared. The first applies classical regression methods to distance features. The second applies metric learning with a Siamese neural network to learn the most efficient feature representations directly from the onset positions. The best performance is achieved with the Siamese network.
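A Siamese setup of this kind could, in outline, look like the sketch below: a shared encoder embeds fixed-length onset-derived feature vectors for the reference and the imitation, and a small head regresses a grade in the 1-4 range from their embedding difference. The dimensions, feature representation, and training details are illustrative assumptions, not the study's exact model.

```python
# Minimal, hypothetical Siamese sketch for rhythm imitation grading (regression
# to a 1-4 grade from the difference of shared-encoder embeddings).
import torch
import torch.nn as nn

class SiameseGrader(nn.Module):
    def __init__(self, in_dim=16, emb_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, emb_dim))
        self.head = nn.Sequential(nn.Linear(emb_dim, 8), nn.ReLU(), nn.Linear(8, 1))

    def forward(self, ref_feats, perf_feats):
        diff = torch.abs(self.encoder(ref_feats) - self.encoder(perf_feats))
        # Squash the output to the 1-4 grading range used in the study.
        return 1.0 + 3.0 * torch.sigmoid(self.head(diff)).squeeze(-1)

# One illustrative training step on dummy onset-derived features and expert grades.
model = SiameseGrader()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ref = torch.rand(8, 16)       # reference (teacher) rhythm features
perf = torch.rand(8, 16)      # student imitation features
grades = torch.randint(1, 5, (8,)).float()
loss = nn.functional.mse_loss(model(ref, perf), grades)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training MSE: {loss.item():.3f}")
```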
In addition to the automatic assessment problem, we also studied the problem of automatically creating exercises at a target difficulty level. This technology is especially needed in artificial intelligence-driven personalized education and in situations where different exercises at similar difficulty levels are required (e.g., conservatory entrance exams). Due to the lack of data, the potential for applying data-driven techniques to this problem is very limited, so we implemented rule-based systems for the automatic creation of melodic pattern exercises and rhythmic pattern exercises. The difficulty level of the generated patterns is analyzed using expert annotations on the patterns as well as student performances of them.
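To illustrate the rule-based idea for the rhythmic case, the sketch below fills a fixed number of beats by sampling note durations from a vocabulary that grows with the requested difficulty level. The duration sets and the mapping from level to vocabulary are illustrative assumptions, not the project's actual generation rules.

```python
# Minimal, hypothetical sketch of rule-based rhythmic pattern generation where
# the difficulty level controls which note durations (in beats) are allowed.
import random

DURATION_SETS = {
    1: [1.0, 2.0],                      # quarter and half notes only
    2: [0.5, 1.0, 2.0],                 # add eighth notes
    3: [0.25, 0.5, 1.0, 1.5, 2.0],      # add sixteenths and dotted quarters
}

def generate_rhythm(total_beats: float = 4.0, difficulty: int = 1, seed: int = 0) -> list:
    """Return a list of note durations summing exactly to total_beats."""
    rng = random.Random(seed)
    allowed = DURATION_SETS[difficulty]
    pattern, remaining = [], total_beats
    while remaining > 1e-9:
        # Every allowed duration is a multiple of the smallest one, so the bar
        # can always be completed exactly.
        duration = rng.choice([d for d in allowed if d <= remaining + 1e-9])
        pattern.append(duration)
        remaining -= duration
    return pattern

for level in (1, 2, 3):
    print(f"difficulty {level}: {generate_rhythm(difficulty=level, seed=level)}")
```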