Machine Learning Lab

MLL

Sharif University of Technology



What is MLL Lab?

The Machine Learning Lab (MLL), under the supervision of Dr. Soleymani, is a research center at Sharif University of Technology in Tehran, Iran. MLL is dedicated to exploring a wide range of critical topics in machine learning, from generalization to compositionality.

Research Areas

  • Generalization
  • Compositional Learning
  • Reinforcement Learning
  • Generative Models
  • Vision-language Models

People

Director

Dr. Mahdieh Soleymani

Email: soleymani@sharif.edu

Google Scholar: link

GitHub: -

LinkedIn: -

Mohammad Mahdi Samiei

Email: mohmahsamiei@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Hosein Hasani

Email: hosein.hasani.ce@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Mahsa Ghorbani

Email: m.ghorbani1991@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Negin Hashemi Dijujin

Email: n.hashemi202@gmail.com

Google Scholar: link

Seyed Mohammad Hadi Hosseini

Email: hadi.hosseini0171@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Arash Marioriyad

Email: arashmarioriyad@gmail.com

GitHub: link

LinkedIn: link

Soroush Vafaie Tabar

Email: svafaie@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Ali Bababeig

Ali Rahimiakbar

Email: alirahimyakbar@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Mohammad Mahdi Vahedi

Email: m.m.vahedi13800@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Nima Niroumand

Email: nima.niroumand@ce.sharif.edu

GitHub: link

LinkedIn: link

Adeleh Bitarafan

Email: bitarafan@ce.sharif.edu

Google Scholar: link

GitHub: link

LinkedIn: link

Fatemeh Seyedsalehi

Email: fateme.ssalehi@gmail.com

LinkedIn: link

Faezeh Faez

Email: faezeh.faez@gmail.com

Google Scholar: link

LinkedIn: link

Mohammadreza Mohammadzadeh Asl

Email: mohammadreza.mohammadzadeh.asl@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Mohammad Mozafari

Email: mozafari.mmd@gmail.com

Google Scholar: link

LinkedIn: link

Ali Abdollahi

Email: aliabdollahi024a@gmail.com

Google Scholar: link

LinkedIn: link

Alireza Roshanzamir

Email: a.roshanzamir1996@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Alireza Sahaf Naeini

Email: alisahaf70@gmail.com

Google Scholar: link

AmirHossein Ameli Kalkhoran

AmirShayan Haghipour

Email: amirshayanhaghi@gmail.com

LinkedIn: link

Amir Ali Moinfar

Email: moinfar.amirali@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Amir Akbarnejad

Email: a.akbanejad@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Danial Alihosseini

Email: danial.alihosseini@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Ehsan Montahaei

Email: ehsan.montahaie@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Fahimeh Hosseini Noohdani

Email: fahim.hosseini.77@gmail.com

Google Scholar: link

GitHub: link

Faridoun Mehri

Email: feraidoonmehri@gmail.com

GitHub: link

LinkedIn: link

Fatemeh Farahnak-Ghazani

Email: f.farahnak.g@gmail.com

Google Scholar: link

Hossein Khalili

Email: hosseinhkh@live.com

Google Scholar: link

LinkedIn: link

Mahdi Ghaznavi

Email: mahdi.ghaznavi91@sharif.edu

Google Scholar: link

GitHub: link

LinkedIn: link

Marzieh Gheisari

Email: m.gheisari69@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Melika Behjati

Email: melikabehjati@gmail.com

Google Scholar: link

LinkedIn: link

Amin Banayeeanzade

Email: m.banayeean@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Mohamadreza Fereydooni

Email: imohamadreza7@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Omid Abbasi

Parishad BehnamGhader

Email: pbehnamghader@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Rasool Mirzaiezadeh

Sarah Rastegar

Email: s.rastegar2@uva.nl

Google Scholar: link

GitHub: link

Seyed Alireza Mirmohammad Sadeghi

Seyed Mahdi Roostaiyan

Email: mips00@gmail.com

Google Scholar: link

GitHub: link

Seyed Mohammad Chavoshian

Email: mohammad.chavosh@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Seyed Mohsen Shojaee

Email: mohsen.shojaie@gmail.com

Google Scholar: link

LinkedIn: link

Seyed Roozbeh Razavi Rohani

Email: razavii.roozbeh@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Sina Hajimiri

Email: sina.hajimiri@gmail.com

Google Scholar: link

GitHub: link

LinkedIn: link

Zeinab Golgooni

Courses

Artificial Intelligence

Fall 2025

Website: link

System II

Spring 2025

Website: link

Modern Information Retrieval

Spring 2024

Website: ...

Deep Learning

Spring 2024

Website: ...

Large Language Models

Fall 2023

Website: link

Machine Learning

2022

Website: –

Projects

Building models that generalize beyond training distributions.

We develop frameworks that mitigate shortcut learning. Our methods improve robustness under subpopulation shifts by reducing reliance on confounding and other non-causal signals. Crucially, we address realistic settings in which spurious correlations are unknown or unlabeled, introducing approaches that achieve strong group robustness without requiring explicit group annotations.
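
To make the group-annotation-free setting concrete, the sketch below shows one common baseline idea (training a plain ERM model, then retraining with its misclassified examples upweighted, since those tend to come from minority groups). It uses synthetic data and scikit-learn and is only an illustration, not the lab's proposed method.

```python
# Illustrative two-stage reweighting on synthetic data (assumed setup, not MLL's method):
# the core (causal) feature is noisy, while a spurious feature agrees with the label for
# 95% of examples, so plain ERM prefers it and fails on the 5% minority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
y = rng.integers(0, 2, n)

core = (2 * y - 1) + rng.normal(scale=2.0, size=n)             # informative but noisy
agree = rng.random(n) < 0.95                                   # majority-group mask
spurious = np.where(agree, 2 * y - 1, 1 - 2 * y) + rng.normal(scale=0.1, size=n)
X = np.column_stack([core, spurious])

# Stage 1: ordinary ERM; its errors concentrate on the minority group.
erm = LogisticRegression().fit(X, y)
wrong = erm.predict(X) != y

# Stage 2: retrain with the misclassified (mostly minority-group) examples upweighted.
reweighted = LogisticRegression().fit(X, y, sample_weight=np.where(wrong, 20.0, 1.0))

def worst_group_accuracy(model):
    # Groups are defined by whether the spurious feature agrees with the label.
    return min((model.predict(X[g]) == y[g]).mean() for g in (agree, ~agree))

print("worst-group accuracy, ERM       :", worst_group_accuracy(erm))
print("worst-group accuracy, reweighted:", worst_group_accuracy(reweighted))
```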

We study how language abstractions and compositional representations can guide reinforcement learning agents toward robust generalization across unseen environments, enabling more transferable policies.

We tackle challenging out-of-distribution settings in which shifts arise from novel compositions of previously seen concepts. Our work introduces principled benchmarks and learning methods that both evaluate and improve model robustness under compositional distribution shifts and complex OoD scenarios.

Enabling generative models to faithfully compose multiple objects, attributes, and relations:

We design test-time strategies that improve compositional control in text-to-image generation. Our work targets failures such as missing objects, incorrect attribute binding, and relational errors, introducing mechanisms that enforce structured binding between textual descriptions and visual elements. The result is more reliable generation under complex, multi-object prompts.

We propose benchmarks and methods to probe multi-step reasoning and relational understanding, introducing approaches that strengthen compositional reasoning over complex visual scenes and visual reasoning methods for large vision-language models (LVLMs).

Representational analysis of VLMs: We introduce datasets and conduct fine-grained analyses of how CLIP represents objects, attributes, and their interactions. By dissecting internal representations, we identify where compositionality emerges and where it breaks, providing actionable insights for improving model design and training.
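
As an illustration of the kind of representational probe involved, the hypothetical snippet below checks whether CLIP's text encoder separates a caption from its attribute-swapped variant; it uses the public openai/clip-vit-base-patch32 checkpoint and is not one of the lab's datasets or analyses.

```python
# Probe whether CLIP's text embeddings distinguish a caption from its
# attribute-swapped variant (hypothetical probe using public CLIP weights).
import torch
from transformers import CLIPModel, CLIPTokenizer

name = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(name).eval()
tokenizer = CLIPTokenizer.from_pretrained(name)

prompts = [
    "a red cube next to a blue sphere",   # correct binding
    "a blue cube next to a red sphere",   # attributes swapped
]
inputs = tokenizer(prompts, padding=True, return_tensors="pt")
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)

# A cosine similarity close to 1.0 suggests the two bindings are barely
# separated in the embedding space, i.e. weak attribute binding.
print("cosine similarity:", (emb[0] @ emb[1]).item())
```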

Understanding how LLMs and VLMs carry out computation in visual and textual reasoning problems: We develop intervention-based and mechanistic tools to trace how semantic functions are formed and transferred across layers in these models. Our work moves beyond post-hoc explanations toward a causal, computation-level understanding of linguistic and multimodal reasoning.
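
For a flavor of such layer-wise analysis, the sketch below applies a simple "logit lens"-style readout to GPT-2, projecting each layer's last-token state through the final layer norm and unembedding to watch the next-token prediction form across depth; this is a generic illustration, not the lab's tooling.

```python
# Layer-wise "logit lens" readout on GPT-2: project each layer's last-token
# hidden state through the final layer norm and the unembedding matrix to
# see how the prediction forms across depth (generic illustration).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, output_hidden_states=True)

for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
    print(layer, repr(tok.decode(logits.argmax(-1))))
```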

Reliable and faithful attribution methods: We introduce new attribution techniques for Vision Transformers. Our methods address known failure modes of existing attribution approaches and provide more stable, interpretable explanations of model decisions.
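
As a point of reference, the snippet below computes a basic gradient-times-input saliency map for a torchvision Vision Transformer, the kind of standard attribution baseline that such work builds on and improves; it is not the lab's proposed technique.

```python
# Gradient-times-input saliency for a pretrained Vision Transformer
# (torchvision vit_b_16); a standard attribution baseline, not MLL's method.
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.DEFAULT).eval()

# Stand-in for a preprocessed 224x224 image batch.
x = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(x)
logits[0, logits[0].argmax()].backward()        # gradient of the top-class score

# Pixel-wise attribution: |gradient x input|, aggregated over color channels.
saliency = (x.grad * x.detach()).abs().sum(dim=1).squeeze(0)
print(saliency.shape)                           # torch.Size([224, 224]) heatmap
```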

Advancing multilingual and low-resource NLP: We introduce high-quality Persian benchmarks covering reasoning, generation, and evaluation, enabling rigorous assessment of LLM capabilities beyond English-centric settings.

Systematic and compositional reasoning in LLMs: We study how LLMs perform multi-step reasoning, identify failure modes, and propose methods to improve logical consistency and generalization across reasoning tasks.

Autonomous decision-making with LLMs: We explore LLM-based agents that interact with environments, tools, and users, studying planning, memory, and coordination in long-horizon tasks.

LLMs as tools for scientific discovery: We investigate how language models can assist in idea generation, pattern discovery, and scientific reasoning, with applications across scientific domains.

Safety and trustworthiness of LLMs: We analyze deceptive behaviors in LLMs, studying when and why models produce misleading outputs, and propose evaluation frameworks to better understand and mitigate such risks.

Open Positions

Publications

For a complete list of publications, please visit Dr. Mahdieh Soleymani's Google Scholar profile.
