My research interests span (I) AI / data science / machine learning, with emphasis on algorithm stability, learning under noise and limited data, interpretable methods, and systematic evaluation; (II) networks and graphs, focusing on interpretable network representations, higher-order networks, graph uncertainty and robustness, and structural patterns in information diffusion; and (III) online behavior, with focus on misinformation, credibility and intent analysis, evaluation without ground truth, and the societal impact of information ecosystems.
I am a big fan of multidisciplinary research, and as such I have a significant interest in mining social media: it lets me pursue all of my research interests within one unified ecosystem. I have championed the opportunities in such exciting interdisciplinary studies in my textbook Social Media Mining: An Introduction (Cambridge University Press); check it out, it's free!
When conducting such interdisciplinary research, a common pattern in my studies is to collect and analyze large-scale data to glean actionable patterns. When studying online human behavior, I often draw on theories from the social sciences, psychology, and anthropology, in addition to developing and using advanced mathematical, statistical, and machine learning machinery to validate such patterns. My research is supported by an NSF CAREER award.
For a sample of my work see our recent Tutorials:
Traditionally, a network is represented by an adjacency matrix, which records which nodes are connected in the network. Adjacency matrices can be massive even for large sparse graphs, are not interpretable (e.g., they do not directly capture complex relationships such as paths or cuts), and are hard to visualize, appearing as "hairballs": dense, tangled structures of nodes and edges that often carry no insight. To address these challenges, we have developed new network representations that are (I) easy to visualize and (II) interpretable (i.e., structurally informative). See these examples (Spectral Zoo — KDD'20, Spectral Paths — KDD'22, and Network Shapes — ICDM'18) and their applications in network identification & authentication (also TKDE'22) and in network robustness assessment (ICKG'22). The WebShapes demo (WSDM'20) shows how these spectral representations enable 3D network visualization.
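To give a flavor of what a compact, interpretable spectral representation looks like, here is a minimal sketch that summarizes a graph by its first few spectral moments. The k-th moment equals trace(A^k)/n, i.e., the average number of closed walks of length k per node, so each entry has a direct structural reading. This is an illustrative toy, not the exact representation used in the papers above.

```python
# Sketch: a fixed-size spectral signature for a graph (illustrative only).
# The k-th spectral moment trace(A^k)/n counts closed walks of length k
# per node, giving an interpretable alternative to the raw adjacency matrix.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def spectral_moments(A, num_moments=4):
    n = len(A)
    P = A
    moments = []
    for _ in range(num_moments):
        moments.append(sum(P[i][i] for i in range(n)) / n)  # trace(A^k)/n
        P = matmul(P, A)
    return moments

# Triangle graph: no self-loops (moment 1 is 0), degree 2 (moment 2 is 2),
# and each node closes one triangle per direction (moment 3 is 2).
triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
print(spectral_moments(triangle))  # [0.0, 2.0, 2.0, 6.0]
```

Unlike the adjacency matrix, this signature has the same small size regardless of the graph's scale, which is what makes such representations easy to compare and visualize.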
Many real-world systems — co-authorship, group conversations, biochemical reactions, multi-party transactions — involve interactions among groups of entities, not just pairs. We study how to faithfully represent and learn from such higher-order networks.
Our survey in SIGKDD Explorations (2024) reviews higher-order network representations and learning. We introduce spectral-moment representations of higher-order networks (PAKDD'25), exploit cross-order patterns for link prediction in higher-order networks (ICDMW'22), and propose a dedicated common-neighbor approach for link prediction (CIKM'20). This direction was the focus of Hao Tian's 2024 dissertation, Exploring Higher-order Networks.
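As a minimal illustration of the common-neighbor idea in a higher-order setting, the sketch below treats two nodes as neighbors whenever they share a hyperedge, and scores a candidate pair by how many neighbors they have in common. This is a baseline for intuition only; the CIKM'20 approach is considerably more refined.

```python
# Sketch: common-neighbor link prediction on higher-order (hyperedge) data.
# Illustrative baseline only, not the method from the CIKM'20 paper.

from itertools import combinations

def neighbors(hyperedges):
    # A node's neighbors are all nodes it co-appears with in any hyperedge.
    nbrs = {}
    for edge in hyperedges:
        for u in edge:
            nbrs.setdefault(u, set()).update(v for v in edge if v != u)
    return nbrs

def common_neighbor_scores(hyperedges):
    # Score every node pair by the size of their shared neighborhood.
    nbrs = neighbors(hyperedges)
    return {(u, v): len(nbrs[u] & nbrs[v])
            for u, v in combinations(sorted(nbrs), 2)}

# Three group interactions (e.g., three co-authored papers):
H = [{"a", "b", "c"}, {"b", "c", "d"}, {"c", "d", "e"}]
scores = common_neighbor_scores(H)
print(scores[("a", "d")])  # a and d share neighbors {b, c}: score 2
```

Higher scores suggest pairs more likely to interact next; cross-order methods go further by exploiting how scores differ across hyperedge sizes.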
Conventional wisdom treats noise as something to denoise away. We take the opposite view, building on stochastic-resonance ideas from physics: in many problems, carefully injected noise can improve learning algorithms, especially when data is limited or models are overparameterized.
Our survey on harnessing the power of noise (2024) maps the techniques and applications. Specific results include noise-enhanced community detection (Hypertext'20, Best Paper Candidate) and noise-enhanced unsupervised link prediction (PAKDD'21). We have given tutorials on this material at SDM'22 and TheWebConf'23. The line of work was the centerpiece of Reyhaneh Abdolazimi's 2024 dissertation, Noise-Enhanced Network Science.
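The stochastic-resonance intuition above can be seen in a toy optimization setting: a greedy search with a fixed step size gets trapped in a local optimum, while injecting random noise into its step sizes typically lets it escape to the global one. The objective and noise model here are my own illustrative choices, not the setups from the papers.

```python
# Sketch: noise as an asset, in the spirit of stochastic resonance.
# A deterministic hill-climber gets stuck in a local optimum; injecting
# Gaussian noise into its step sizes typically carries it past the gap.
# (Toy illustration of the principle, not the papers' methods.)

import random

def objective(x):
    # A local peak at x = 2 (value 0) and the global peak at x = 8 (value 10).
    return -(x - 2) ** 2 if x < 5 else 10 - (x - 8) ** 2

def hill_climb(x, noise=0.0, steps=200, rng=None):
    rng = rng or random.Random(0)
    for _ in range(steps):
        step = 0.1 + (rng.gauss(0, noise) if noise else 0)
        for candidate in (x + step, x - step):
            if objective(candidate) > objective(x):
                x = candidate
    return x

quiet = hill_climb(1.0)             # stuck near the local peak at x = 2
noisy = hill_climb(1.0, noise=2.0)  # noise typically reaches the global peak
print(objective(quiet), objective(noisy))
```

The same principle, applied carefully, is what drives the noise-enhanced community detection and link prediction results above: limited data and rugged objectives are exactly where injected noise helps most.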
Graph neural networks (GNNs) inherit whatever structural noise lives in the input graph: redundant edges, adversarial perturbations, distributional shift. We study a family of techniques that edit the graph itself — sparsifying, augmenting, or attacking it — to improve downstream learning.
Highlights include SGCN: a graph sparsifier based on graph convolutional networks (PAKDD'20) and its extended JDSA journal version; semi-supervised graph ultra-sparsification via reweighted ℓ1 optimization (ICASSP'23); and AdverSparse: an adversarial-attack framework for spatio-temporal GNNs (ICASSP'22). This program of work underlies Jiayu Li's 2024 dissertation, Enhancing Graph Neural Networks by Editing Graphs.
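To make the "edit the graph itself" idea concrete, here is a minimal sparsification sketch that keeps only each node's top-k strongest edges before any learning happens. This simple heuristic stands in for learned sparsifiers such as SGCN, which choose edges jointly with the downstream task rather than by weight alone.

```python
# Sketch: graph sparsification as pre-processing for graph learning.
# Keeps each node's top-k strongest incident edges; a hand-rolled stand-in
# for learned sparsifiers like SGCN (illustrative baseline only).

def sparsify_top_k(weighted_edges, k=1):
    # weighted_edges: list of (u, v, weight) for an undirected graph.
    incident = {}
    for u, v, w in weighted_edges:
        incident.setdefault(u, []).append((w, u, v))
        incident.setdefault(v, []).append((w, u, v))
    keep = set()
    for node, edges in incident.items():
        # Retain an edge if it is among either endpoint's k strongest.
        for w, u, v in sorted(edges, reverse=True)[:k]:
            keep.add((u, v, w))
    return sorted(keep)

G = [("a", "b", 0.9), ("a", "c", 0.2), ("b", "c", 0.8), ("c", "d", 0.5)]
print(sparsify_top_k(G, k=1))  # drops the weak ("a", "c") edge
```

The payoff is twofold: a smaller graph is cheaper for message passing, and pruning weak or redundant edges can also remove the structural noise that GNNs would otherwise absorb.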
Hate speech detection systems are routinely brittle: they miss hate that is implied rather than stated, conflate dialect with toxicity, and fail to transfer across cultural contexts. Our recent work tackles each of these failure modes.
For hateful memes, we unpack the presupposed context and false claims that text-only systems miss (2025). For text, we propose hate-subspace modeling for culture-aware hate speech detection (2025), recognizing that what counts as hate depends on the speech community. This research grew out of Weibin Cai's MS thesis, Harnessing LLMs to Detect Hate Speech (2025).
Do large language models exhibit the classical memory effects studied for decades in human cognition — the list-length effect, list-strength effect, fan effect, and the other "sins" of memory? Or do their failures follow an entirely different structure?
Our 2025 paper "Analyzing Memory Effects in Large Language Models through the lens of Cognitive Psychology" systematically tests seven classical memory phenomena in state-of-the-art LLMs using paradigms drawn from psychological research, comparing human and model behavior side-by-side. The work is part of a broader effort to evaluate AI systems with the same care we apply to human subjects.
A summary of fake news research can be obtained through our CSUR survey. Our work spans detecting fake news using content, link/network information, and early detection theories. We have built multimodal news credibility datasets such as ReCOVery (CIKM'20) for COVID-19 news, and Chinese-language resources such as CHECKED.
Recent work pushes detection toward the realistic regime of limited information: our 2025 SIGKDD Explorations paper "Is Less Really More? Fake News Detection with Limited Information" studies what is recoverable when text and labels are scarce, and our HERO model learns the hierarchical linguistic style of fake news, drawing on psychological theories of how deception manifests in writing. We also introduced the first techniques to assess the intent of fake-news spreaders (TheWebConf'22), and we are now investigating AI-generated fake news across multiple domains. For more, see our KDD/WSDM Tutorials here.
To mine across social media sites, we focus on two specific problems. First, how does user behavior vary across sites (e.g., the difference between LinkedIn friends and Facebook friends)? In addition to designing new techniques, we investigate means to scale and adapt traditional models that analyze user behavior on a single site to multiple sites. For recent results on this research question, see my papers in Information Fusion'16 and ICWSM'14 and this book chapter. Second, I study user behaviors that are only observable across sites. An example is our study on user migrations across sites.
My research has investigated means to realistically analyze human behavior online by exploiting the information redundancies that user behavior generates. The methodology has been used to identify sarcasm on Twitter and to identify users across sites, among other behaviors. For more on the topic, see this article or this textbook chapter. As a by-product, my research on human behavior modeling has had implications for information verification, privacy, and security.
In data mining terms, ground truth is rarely available online. I have recently begun investigating this problem and have identified some ways to tackle it. For a succinct review of the topic, see my recent Communications of the ACM (CACM) paper on this issue.
I have looked at how to utilize minimal information to identify users, detect malicious users, or recommend friends on social media sites with high accuracy. Because these methods use only minimal information, they scale easily to millions of users. Recently, I have been investigating the theoretical limits of using minimal information.
I have recently investigated the balance between privacy and mining user-generated content by connecting ideas from complexity theory, specifically Kolmogorov complexity. See this paper for some (very!) preliminary results.
My research has focused on (1) online means to map areas impacted by natural disasters in real time [ICDM'15], (2) identifying the relevant users who provide the most useful information during crises [HT 2014], and (3) systematic approaches to crowdsourcing user-generated content during disasters [CMOT'12].