Machine Learning Lab

MLL

Sharif University of Technology



What is the MLL?

The Machine Learning Lab (MLL), under the supervision of Dr. Soleymani, is a cutting-edge research center based at Sharif University of Technology, Tehran, Iran. MLL is dedicated to exploring a wide range of critical topics in machine learning, from generalization to compositionality.

Research Areas

  • Generalization
  • Compositional Learning
  • Reinforcement Learning
  • Generative Models
  • Vision-language Models

People

Director

Dr. Mahdieh Soleymani

Mohammad Mahdi Samiei

Hosein Hasani

Negin Hashemi Dijujin

Mohammad Hossein Narimani

Mohammadreza Mohammadzadeh Asl

Soroush Vafaie Tabar

Ali Bababeig

Ali Rahimiakbar

Mohammad Mahdi Vahedi

Nima Niroumand

Adeleh Bitarafan

Mahsa Ghorbani

Fatemeh Seyedsalehi

Faezeh Faez

Seyed Mohammad Hadi Hosseini

Arash Marioriyad

Mohammad Mozafari

Ali Abdollahi

Alireza Roshanzamir

Alireza Sahaf Naeini

AmirHossein Ameli Kalkhoran

AmirShayan Haghipour

Amir Ali Moinfar

Amir Akbarnejad

Danial Alihosseini

Ehsan Montahaei

Fahimeh Hosseini Noohdani

Faridoun Mehri

Fatemeh Farahnak-Ghazani

Hossein Khalili

Mahdi Ghaznavi

Marzieh Gheisari

Melika Behjati

Amin Banayeeanzade

Mohamadreza Fereydooni

Omid Abbasi

Parishad BehnamGhader

Rasool Mirzaiezadeh

Sarah Rastegar

Seyed Alireza Mirmohammad Sadeghi

Seyed Mahdi Roostaiyan

Seyed Mohammad Chavoshian

Seyed Mohsen Shojaee

Seyed Roozbeh Razavi Rohani

Sina Hajimiri

Zeinab Golgooni

Courses

Artificial Intelligence

Fall 2025

Website: link

System II

Spring 2025

Website: link

Modern Information Retrieval

Spring 2024

Website: ...

Deep Learning

Spring 2024

Website: ...

Large Language Models

Fall 2023

Website: link

Machine Learning

2022

Website: –

Projects

Building models that generalize beyond training distributions:

We develop frameworks that mitigate shortcut learning. Our methods improve robustness under subpopulation shifts by reducing reliance on confounding and other non-causal signals. Crucially, we address realistic settings in which spurious correlations are unknown or unlabeled, introducing approaches that achieve strong group robustness without requiring explicit group annotations.
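As a rough illustration of the annotation-free setting, the sketch below implements one well-known recipe from the literature (two-stage error upweighting, in the spirit of Just Train Twice), not the lab's own method; the model and data are toy placeholders.

```python
# Hedged sketch: two-stage "upweight the errors" training, one common
# recipe for group robustness without group labels (JTT-style).
# This illustrates the setting only; it is not the lab's method.
import torch
import torch.nn as nn

def train(model, X, y, weights, epochs=100, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss(reduction="none")
    for _ in range(epochs):
        opt.zero_grad()
        (weights * loss_fn(model(X), y)).mean().backward()
        opt.step()

# Toy data standing in for a dataset with spurious correlations.
torch.manual_seed(0)
X = torch.randn(512, 2)
y = (X[:, 0] > 0).long()

# Stage 1: plain ERM with uniform example weights.
erm = nn.Linear(2, 2)
train(erm, X, y, torch.ones(len(y)))

# Stage 2: upweight the examples ERM misclassifies; without group
# labels, these errors serve as a proxy for minority-group points.
with torch.no_grad():
    errors = erm(X).argmax(dim=1) != y
weights = torch.where(errors, torch.tensor(5.0), torch.tensor(1.0))

robust = nn.Linear(2, 2)
train(robust, X, y, weights)
```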

We study how language abstractions and compositional representations can guide reinforcement learning agents toward robust generalization across unseen environments, enabling more transferable policies.
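One way to picture this direction is a policy network conditioned on an embedding of a language instruction, so that skills tied to instruction components can recombine across environments. The sketch below is illustrative only; the dimensions, names, and conditioning scheme are assumptions, not the lab's architecture.

```python
# Hedged sketch of a language-conditioned policy: the network takes
# both an observation and an embedding of a textual instruction.
# All dimensions and names here are illustrative placeholders.
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=16, text_dim=32, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + text_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs, instruction_emb):
        # Simple concatenation; a real system might instead use FiLM
        # conditioning or cross-attention over instruction tokens.
        return self.net(torch.cat([obs, instruction_emb], dim=-1))

policy = LanguageConditionedPolicy()
obs = torch.randn(1, 16)
instr = torch.randn(1, 32)   # stand-in for a sentence embedding
print(policy(obs, instr).softmax(dim=-1))  # action distribution
```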

We tackle challenging out-of-distribution settings in which shifts arise from novel compositions of previously seen concepts. Our work introduces principled benchmarks and learning methods that both evaluate and improve model robustness under compositional distribution shifts and complex OoD scenarios.
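For intuition, a compositional OoD split typically holds out combinations of primitives that each appear in training, just never together. The toy split below (with hypothetical shape and color primitives) illustrates the idea, not any specific benchmark.

```python
# Illustrative compositional OoD split: every shape and every color is
# seen in training, but the held-out (shape, color) pairings are not.
from itertools import product

shapes = ["cube", "sphere", "cylinder"]
colors = ["red", "green", "blue"]
pairs = list(product(shapes, colors))

# Hold out the "diagonal" combinations as novel compositions.
test_pairs = [(s, c) for i, s in enumerate(shapes)
                     for j, c in enumerate(colors) if i == j]
train_pairs = [p for p in pairs if p not in test_pairs]

print("train:", train_pairs)
print("test (novel compositions):", test_pairs)
```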

Enabling generative models to faithfully compose multiple objects, attributes, and relations:

We design test-time strategies that improve compositional control in text-to-image generation. Our work targets failures such as missing objects, incorrect attribute binding, and relational errors, introducing mechanisms that enforce structured binding between textual descriptions and visual elements. The result is more reliable generation under complex, multi-object prompts.
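As a hedged sketch of what a test-time binding signal can look like, the loss below follows the attention-based style of published methods such as Attend-and-Excite: each object token must claim at least one spatial region of the cross-attention map. The inputs and the commented update rule are assumptions, not the lab's exact procedure.

```python
# Hedged sketch of an attention-based test-time guidance loss for
# compositional text-to-image generation (Attend-and-Excite style).
import torch

def binding_loss(attn_maps: torch.Tensor, object_tokens: list[int]) -> torch.Tensor:
    """attn_maps: (H*W, num_tokens) cross-attention over prompt tokens,
    averaged over heads. Encourages each object token to claim at least
    one spatial region (low max attention -> high loss)."""
    per_object = [1.0 - attn_maps[:, t].max() for t in object_tokens]
    # Optimize the currently worst-served object token.
    return torch.stack(per_object).max()

# Hypothetical per-step usage inside a diffusion sampling loop:
#   loss = binding_loss(cross_attn, object_token_indices)
#   z_t = z_t - alpha * torch.autograd.grad(loss, z_t)[0]
```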

We propose benchmarks and methods to probe multi-step reasoning and relational understanding, introducing approaches that strengthen compositional reasoning over complex visual scenes and improve visual reasoning in large vision-language models (LVLMs).

Representational analysis of VLMs: We introduce datasets and conduct fine-grained analyses of how CLIP represents objects, attributes, and their interactions. By dissecting internal representations, we identify where compositionality emerges—and where it breaks—providing actionable insights for improving model design and training.
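A minimal probe in this spirit, using the public Hugging Face CLIP checkpoint as a stand-in: if two prompts that differ only in how attributes bind to objects embed almost identically, the text encoder is behaving like a bag of words. This is an illustrative diagnostic, not the lab's dataset or analysis pipeline.

```python
# Hedged probe of attribute binding in CLIP's text encoder: near-
# identical embeddings for swapped bindings indicate bag-of-words
# behavior. Uses a public checkpoint purely as an illustration.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a red cube and a blue sphere",
           "a blue cube and a red sphere"]   # swapped attribute binding
inputs = tok(prompts, padding=True, return_tensors="pt")
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)
print("cosine similarity:", (emb[0] @ emb[1]).item())  # often close to 1
```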

Understanding how LLMs and VLMs perform computation in visual and textual reasoning problems: We develop intervention-based and mechanistic tools to trace how semantic functions are formed and transferred across layers in these models. Our work moves beyond post-hoc explanations toward causal, computation-level understanding of linguistic and multimodal reasoning.
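Activation patching is the canonical example of such an intervention: cache an activation from a "clean" run, splice it into a "corrupted" run, and measure how much of the clean behavior is restored. The toy model below shows only the mechanics; real analyses target specific layers, heads, and token positions.

```python
# Minimal activation-patching sketch (a standard mechanistic-
# interpretability intervention, not a specific tool of the lab).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
clean, corrupted = torch.randn(1, 8), torch.randn(1, 8)

# 1) Cache the clean activation at the layer of interest.
cache = {}
h = model[0].register_forward_hook(lambda m, i, o: cache.update(act=o))
model(clean)
h.remove()

# 2) Re-run on the corrupted input, patching in the cached activation
#    (a forward hook that returns a value replaces the layer's output).
h = model[0].register_forward_hook(lambda m, i, o: cache["act"])
patched_out = model(corrupted)
h.remove()

# If patching this layer moves the output toward the clean run's
# output, the layer carries the causal information being traced.
print(model(clean), model(corrupted), patched_out)
```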

Reliable and faithful attribution methods: We introduce new attribution techniques for Vision Transformers. Our methods address known failure modes of existing attribution approaches and provide more stable, interpretable explanations of model decisions.
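For context, a standard baseline that such methods are measured against is attention rollout (Abnar & Zuidema, 2020), which composes attention matrices across layers while accounting for residual connections. A minimal sketch with random stand-in attention follows; it is a baseline illustration, not one of our proposed techniques.

```python
# Hedged sketch of attention rollout, a common ViT attribution baseline.
import torch

def attention_rollout(attns: list[torch.Tensor]) -> torch.Tensor:
    """attns: per-layer attention matrices of shape (tokens, tokens),
    already averaged over heads. Returns token-to-token attribution."""
    n = attns[0].shape[-1]
    rollout = torch.eye(n)
    for a in attns:
        a = a + torch.eye(n)               # account for residual paths
        a = a / a.sum(dim=-1, keepdim=True)  # renormalize rows
        rollout = a @ rollout              # compose across layers
    return rollout

# Example with random "attention" for a 5-token sequence:
layers = [torch.softmax(torch.randn(5, 5), dim=-1) for _ in range(4)]
print(attention_rollout(layers)[0])  # CLS-token attribution over tokens
```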

Advancing multilingual and low-resource NLP: We introduce high-quality Persian benchmarks covering reasoning, generation, and evaluation, enabling rigorous assessment of LLM capabilities beyond English-centric settings.

Systematic and compositional reasoning in LLMs: We study how LLMs perform multi-step reasoning, identify failure modes, and propose methods to improve logical consistency and generalization across reasoning tasks.
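One simple consistency-oriented technique from the literature (again an illustration, not the lab's method) is self-consistency: sample several reasoning chains and majority-vote their final answers. The `generate` function below is a hypothetical stub standing in for any sampled LLM call.

```python
# Hedged sketch of self-consistency decoding for multi-step reasoning:
# sample multiple chains, keep the majority answer.
from collections import Counter

def generate(prompt: str, seed: int) -> str:
    # Hypothetical stub: in practice, an LLM sampled with temperature > 0,
    # returning the final answer extracted from a reasoning chain.
    return ["42", "42", "41"][seed % 3]

def self_consistent_answer(prompt: str, n_samples: int = 5) -> str:
    answers = [generate(prompt, s) for s in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("What is 6 * 7? Think step by step."))
```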

Autonomous decision-making with LLMs: We explore LLM-based agents that interact with environments, tools, and users, studying planning, memory, and coordination in long-horizon tasks.

LLMs as tools for scientific discovery: We investigate how language models can assist in idea generation, pattern discovery, and scientific reasoning, with applications across scientific domains.

Safety and trustworthiness of LLMs: We analyze deceptive behaviors in LLMs, studying when and why models produce misleading outputs, and propose evaluation frameworks to better understand and mitigate such risks.

Open Positions

Publications

For a complete list of publications, please visit Dr. Mahdieh Soleymani's profile.
