Latest AI Research: November 1, 2025 - Top 15 Papers


Hey everyone! 👋 Check out the latest AI research papers from November 1, 2025. This article covers the top 15 papers in each of five categories: recommendation systems, representation learning, graph transformers, LLMs (Large Language Models), and graph neural networks. Let's dive into the exciting advancements shaping the future of AI! Be sure to check the GitHub page for a better reading experience and more papers.

Recommendation Systems

In this section, we'll explore the latest research in recommendation systems, a critical area for personalizing user experiences. From sustainable travel planning to balancing recommendations with LLMs, these papers offer exciting insights.

Title | Date | Comment
SmartSustain Recommender System: Navigating Sustainability Trade-offs in Personalized City Trip Planning | 2025-10-30 | Accepted for presentation at Workshop on Recommender Systems for Sustainable Development (RS4SD), co-located with CIKM'2025
Collab-REC: An LLM-based Agentic Framework for Balancing Recommendations in Tourism | 2025-10-30
RecCocktail: A Generalizable and Efficient Framework for LLM-Based Recommendation | 2025-10-30
Vectorized Context-Aware Embeddings for GAT-Based Collaborative Filtering | 2025-10-30
Shilling Recommender Systems by Generating Side-feature-aware Fake User Profiles | 2025-10-30
OneTrans: Unified Feature Interaction and Sequence Modeling with One Transformer in Industrial Recommender | 2025-10-30
ORBIT -- Open Recommendation Benchmark for Reproducible Research with Hidden Tests | 2025-10-30 | Accepted to NeurIPS 2025 Datasets & Benchmarks track
MMQ-v2: Align, Denoise, and Amplify: Adaptive Behavior Mining for Semantic IDs Learning in Recommendation | 2025-10-30
Decoupled Multimodal Fusion for User Interest Modeling in Click-Through Rate Prediction | 2025-10-30
A Task-Centric Perspective on Recommendation Systems | 2025-10-30
The Quest for Reliable Metrics of Responsible AI | 2025-10-29 | Accepted for presentation at the AI in Science Summit 2025
HyMiRec: A Hybrid Multi-interest Learning Framework for LLM-based Sequential Recommendation | 2025-10-29
Who You Are Matters: Bridging Topics and Social Roles via LLM-Enhanced Logical Recommendation | 2025-10-29 | To be published in NeurIPS 2025
Revisiting scalable sequential recommendation with Multi-Embedding Approach and Mixture-of-Experts | 2025-10-29
Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation | 2025-10-29 | NeurIPS 2025 DB Track Accepted Paper

Recommendation systems are becoming increasingly sophisticated, leveraging LLMs and advanced techniques to provide personalized, relevant suggestions. One standout paper, SmartSustain Recommender System, focuses on navigating sustainability trade-offs in personalized city trip planning, highlighting the growing importance of ethical considerations in AI, particularly in applications that affect the environment. Another noteworthy paper, Collab-REC, explores an LLM-based agentic framework for balancing recommendations in tourism, showcasing the potential of LLMs to create more nuanced, context-aware recommenders.

Several other papers tackle the technical side of improving recommendation algorithms. RecCocktail introduces a generalizable and efficient framework for LLM-based recommendation, while Vectorized Context-Aware Embeddings for GAT-Based Collaborative Filtering enhances collaborative filtering with graph attention networks. On the security front, Shilling Recommender Systems by Generating Side-feature-aware Fake User Profiles shows how attackers can craft fake user profiles; understanding such attacks is crucial for defending the integrity of these systems.

For industrial-scale deployments, OneTrans presents a unified approach to feature interaction and sequence modeling with a single transformer. Finally, the development of benchmarks like ORBIT, with its hidden tests, is vital for ensuring reproducible research in this rapidly evolving field. Together, these papers underscore the dynamic nature of recommendation systems research and its potential to shape many aspects of our digital lives.
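To make the attention-based collaborative filtering idea concrete, here's a minimal, self-contained PyTorch sketch. To be clear, this is not code from any paper above: the toy shapes, random embeddings, and simplified dot-product attention (rather than GAT's learned attention mechanism) are all assumptions. It just illustrates the general flavor of attending over a user's interacted items to build a context-aware user embedding.

```python
import torch

torch.manual_seed(0)

# Toy setup (illustrative only): 4 users, 6 items, 8-dim embeddings,
# and a 0/1 user-item interaction matrix.
n_users, n_items, dim = 4, 6, 8
user_emb = torch.randn(n_users, dim)
item_emb = torch.randn(n_items, dim)
interactions = (torch.rand(n_users, n_items) > 0.5).float()

# Each user attends over the items they interacted with, producing a
# context-aware user representation (the rough idea behind attention-based
# collaborative filtering on a user-item graph).
scores = (user_emb @ item_emb.t()) / dim ** 0.5           # [users, items]
scores = scores.masked_fill(interactions == 0, float("-inf"))
attn = torch.softmax(scores, dim=1)                       # rows sum to 1
attn = torch.nan_to_num(attn)                             # users with no history -> zeros
context_user = attn @ item_emb                            # [users, dim]

# Recommendation scores: dot product between context-aware users and items.
rec_scores = context_user @ item_emb.t()
print(rec_scores.argsort(dim=1, descending=True)[:, :3])  # top-3 items per user
```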

Representation Learning

Representation learning is at the core of AI, enabling machines to understand and process data more effectively. Let's explore the latest research in this crucial domain.

Title | Date | Comment
Clone Deterministic 3D Worlds with Geometrically-Regularized World Models | 2025-10-30
Demystifying the Roles of LLM Layers in Retrieval, Knowledge, and Reasoning | 2025-10-30 | ICASSP 2025
UniTok-Audio: A Unified Audio Generation Framework via Generative Modeling on Discrete Codec Tokens | 2025-10-30 | 21 pages, 3 figures
Understanding Hardness of Vision-Language Compositionality from A Token-level Causal Lens | 2025-10-30
ReaKase-8B: Legal Case Retrieval via Knowledge and Reasoning Representations with LLMs | 2025-10-30
Bridging the Gap Between Molecule and Textual Descriptions via Substructure-aware Alignment | 2025-10-30 | EMNLP 2025 (main)
Decoupled Multimodal Fusion for User Interest Modeling in Click-Through Rate Prediction | 2025-10-30
Learning Geometry: A Framework for Building Adaptive Manifold Models through Metric Optimization | 2025-10-30 | 9 pages
Dual Mixture-of-Experts Framework for Discrete-Time Survival Analysis | 2025-10-29 | Accepted to NeurIPS 2025 workshop Learning from Time Series for Health (TS4H)
Ditch the Denoiser: Emergence of Noise Robustness in Self-Supervised Learning from Data Curriculum | 2025-10-29 | NeurIPS 2025
Dynamic Traceback Learning for Medical Report Generation | 2025-10-29 | Accepted to IEEE Transactions on Multimedia (TMM)
Quality-Aware Prototype Memory for Face Representation Learning | 2025-10-29 | Preprint
Contrastive Predictive Coding Done Right for Mutual Information Estimation | 2025-10-29 | 26 pages, 5 figures
CAUSAL3D: A Comprehensive Benchmark for Causal Learning from Visual Data | 2025-10-29 | Datasets link: https://huggingface.co/datasets/LLDDSS/Causal3D_Dataset
KARMA: Efficient Structural Defect Segmentation via Kolmogorov-Arnold Representation Learning | 2025-10-29 | This work has been submitted to the IEEE for possible publication

Representation learning is a cornerstone of modern AI, enabling machines to automatically discover, from raw data, the representations needed for feature detection and classification. Clone Deterministic 3D Worlds with Geometrically-Regularized World Models introduces a method for cloning deterministic 3D environments, an advance toward realistic, controllable world models. Demystifying the Roles of LLM Layers in Retrieval, Knowledge, and Reasoning probes the inner workings of LLMs, showing how different layers contribute to their capabilities, which is crucial for optimizing these powerful models. UniTok-Audio presents a unified audio generation framework built on discrete codec tokens, and Understanding Hardness of Vision-Language Compositionality from A Token-level Causal Lens examines why models struggle to combine visual and textual information compositionally.

Domain applications are well represented. The legal field benefits from ReaKase-8B, a legal case retrieval system built on knowledge and reasoning representations with LLMs. Bridging the Gap Between Molecule and Textual Descriptions via Substructure-aware Alignment aligns molecular substructures with text, valuable for cheminformatics, while Decoupled Multimodal Fusion for User Interest Modeling improves click-through rate prediction. Learning Geometry offers a geometric perspective, building adaptive manifold models through metric optimization.

On the methods side, Dual Mixture-of-Experts Framework for Discrete-Time Survival Analysis targets time-series health applications, Ditch the Denoiser shows that noise robustness can emerge in self-supervised learning from a data curriculum alone, and Dynamic Traceback Learning improves medical report generation. Rounding out the list are Quality-Aware Prototype Memory for Face Representation Learning, Contrastive Predictive Coding Done Right for Mutual Information Estimation, CAUSAL3D, a benchmark for causal learning from visual data, and KARMA, which applies Kolmogorov-Arnold representation learning to structural defect segmentation. Together, these papers illustrate the breadth and depth of research in representation learning, crucial for the continued progress of AI.
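Since mutual information estimation comes up above, here's a minimal sketch of the standard InfoNCE lower bound that contrastive predictive coding builds on. Note this is the textbook estimator with a simple dot-product critic, not the corrected estimator proposed in Contrastive Predictive Coding Done Right, and the batch of paired samples is synthetic.

```python
import torch

def infonce_mi_lower_bound(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Standard InfoNCE bound: I(X;Y) >= log N + mean of the diagonal log-softmax."""
    n = x.size(0)
    scores = x @ y.t()                            # critic f(x_i, y_j) = x_i . y_j
    log_probs = torch.log_softmax(scores, dim=1)  # contrast each x_i against all y_j
    return torch.log(torch.tensor(float(n))) + log_probs.diag().mean()

# Synthetic correlated pairs: y is a noisy copy of x.
x = torch.randn(512, 16)
y = x + 0.1 * torch.randn_like(x)
print(infonce_mi_lower_bound(x, y))  # estimate saturates near log(512), a known limitation
```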

Graph Transformers

Graph Transformers are revolutionizing how we process and understand graph-structured data. Here's a look at the latest papers in this exciting field.

Title | Date | Comment
Same Same But Different: Preventing Refactoring Attacks on Software Plagiarism Detection | 2025-10-29 | To be published at ICSE'26. 13 pages, 6 figures
Inferring Group Intent as a Cooperative Game. An NLP-based Framework for Trajectory Analysis using Graph Transformer Neural Network | 2025-10-27
FoGE: Fock Space inspired encoding for graph prompting | 2025-10-27
Bhav-Net: Knowledge Transfer for Cross-Lingual Antonym vs Synonym Distinction via Dual-Space Graph Transformers | 2025-10-25 | Found some issues and need to correct them
Relieving the Over-Aggregating Effect in Graph Transformers | 2025-10-24 | Accepted by NeurIPS 2025
Return of ChebNet: Understanding and Improving an Overlooked GNN on Long Range Tasks | 2025-10-24
Structural Invariance Matters: Rethinking Graph Rewiring through Graph Metrics | 2025-10-23 | 21 pages, 5 figures, conference
Unifying and Enhancing Graph Transformers via a Hierarchical Mask Framework | 2025-10-21 | Accepted by NeurIPS 2025 (Poster)
Soft Graph Transformer for MIMO Detection | 2025-10-17 | 5 pages with 3 figures and 2 tables, submitted to IEEE for a possible publication
A Comprehensive Evaluation of Graph Neural Networks and Physics Informed Learning for Surrogate Modelling of Finite Element Analysis | 2025-10-16 | 14 pages, 6 figures, 5 tables. Code available at: https://github.com/SinghNayanKumar/DL-surrogate-modelling
DARTS-GT: Differentiable Architecture Search for Graph Transformers with Quantifiable Instance-Specific Interpretability Analysis | 2025-10-16
Multitask finetuning and acceleration of chemical pretrained models for small molecule drug property prediction | 2025-10-14
GraphTARIF: Linear Graph Transformer with Augmented Rank and Improved Focus | 2025-10-12
HeSRN: Representation Learning On Heterogeneous Graphs via Slot-Aware Retentive Network | 2025-10-10
Spatial-Functional awareness Transformer-based graph archetype contrastive learning for Decoding Visual Neural Representations from EEG | 2025-10-09

Graph Transformers represent a significant advance in handling graph-structured data, with applications ranging from software plagiarism detection to drug property prediction. Same Same But Different tackles refactoring attacks on software plagiarism detection, showcasing these models' ability to understand complex code structure. Inferring Group Intent as a Cooperative Game uses a graph transformer neural network to analyze trajectories and infer group intent, while FoGE introduces a Fock-space-inspired encoding for graph prompting. Bhav-Net applies dual-space graph transformers to cross-lingual antonym vs. synonym distinction, highlighting their versatility in natural language processing.

On the architecture side, Relieving the Over-Aggregating Effect in Graph Transformers addresses a common failure mode in which attention pools information too aggressively across the graph. Return of ChebNet revisits and improves an overlooked GNN on long-range tasks, and Structural Invariance Matters rethinks graph rewiring through graph metrics, providing insights into graph structure. Unifying and Enhancing Graph Transformers via a Hierarchical Mask Framework presents a unified masking framework that improves performance, and Soft Graph Transformer for MIMO Detection brings these models to wireless communication.

Applications in scientific modeling appear in A Comprehensive Evaluation of Graph Neural Networks and Physics Informed Learning for Surrogate Modelling of Finite Element Analysis, which uses graph neural networks for surrogate modeling. Further advances include DARTS-GT, which applies differentiable architecture search to graph transformers with instance-specific interpretability analysis; Multitask finetuning and acceleration of chemical pretrained models for small molecule drug property prediction; GraphTARIF, a linear graph transformer with augmented rank and improved focus; HeSRN, a slot-aware retentive network for heterogeneous graphs; and a Spatial-Functional awareness Transformer-based approach for decoding visual neural representations from EEG. Collectively, these papers highlight the diverse applications and rapid progress in graph transformer research.
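As a rough illustration of what distinguishes a graph transformer layer from a plain transformer layer, the sketch below restricts self-attention with a structural mask built from the adjacency matrix. Masking to neighbors is just one of the simplest structural biases; this is a generic pattern, not the hierarchical mask framework or the over-aggregation fix from the papers above, and all shapes and weights here are made up.

```python
import torch

def masked_graph_attention(x, adj, w_q, w_k, w_v):
    """One attention head whose receptive field is limited to graph neighbors."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = (q @ k.t()) / k.size(-1) ** 0.5
    # Structural bias: each node may only attend to its neighbors (and itself).
    mask = adj + torch.eye(adj.size(0))
    scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Tiny 4-node path graph with random 8-dim features and weights.
n, d = 4, 8
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
x = torch.randn(n, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
print(masked_graph_attention(x, adj, w_q, w_k, w_v).shape)  # torch.Size([4, 8])
```

Global attention variants instead keep the full attention matrix and add structural information as a bias term (e.g., from shortest-path distances), trading locality for long-range expressiveness.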

LLMs (Large Language Models)

Large Language Models (LLMs) continue to dominate AI research. This section highlights the latest advancements in LLM technology and applications.

Title | Date | Comment
LLMs Process Lists With General Filter Heads | 2025-10-30 | Code and data at https://filter.baulab.info/
Comparing human and LLM politeness strategies in free production | 2025-10-30 | 25 pages, 5 figures. EMNLP 2025 camera-ready version
Quality Over Quantity? LLM-Based Curation for a Data-Efficient Audio-Video Foundation Model | 2025-10-30 | 5 pages, 5 figures, 2 tables. Accepted at EUSIPCO 2025
Massive Supervised Fine-tuning Experiments Reveal How Data, Layer, and Training Factors Shape LLM Alignment Quality | 2025-10-30 | Accepted to EMNLP 2025 (Main Conference). Models and evaluation results available at: https://github.com/llm-jp/massive-sft
Value Drifts: Tracing Value Alignment During LLM Post-Training | 2025-10-30
MemAscend: System Memory Optimization for SSD-Offloaded LLM Fine-Tuning | 2025-10-30 | 16 pages, 21 figures, 6 tables
Analysis and Optimized CXL-Attached Memory Allocation for Long-Context LLM Fine-Tuning | 2025-10-30 | 13 pages, 15 figures, 2 tables
Refine-n-Judge: Curating High-Quality Preference Chains for LLM-Fine-Tuning | 2025-10-30
CompoST: A Benchmark for Analyzing the Ability of LLMs To Compositionally Interpret Questions in a QALD Setting | 2025-10-30 | Research Track, 24th International Semantic Web Conference (ISWC 2025), November 2-6, 2025, Nara, Japan
All You Need for Object Detection: From Pixels, Points, and Prompts to Next-Gen Fusion and Multimodal LLMs/VLMs in Autonomous Vehicles | 2025-10-30
SignalLLM: A General-Purpose LLM Agent Framework for Automated Signal Processing | 2025-10-30 | 11 pages
Incentivizing LLMs to Self-Verify Their Answers | 2025-10-30
WeaveRec: An LLM-Based Cross-Domain Sequential Recommendation Framework with Model Merging | 2025-10-30
Collab-REC: An LLM-based Agentic Framework for Balancing Recommendations in Tourism | 2025-10-30
LLMs as In-Context Meta-Learners for Model and Hyperparameter Selection | 2025-10-30 | 27 pages, 6 figures

Large Language Models (LLMs) remain at the forefront of AI research, driving innovation across numerous applications. LLMs Process Lists With General Filter Heads uncovers internal mechanisms LLMs use to filter items from lists, while Comparing human and LLM politeness strategies in free production contrasts LLM and human politeness strategies, which matters for building socially aware AI. Quality Over Quantity? investigates LLM-based data curation for a data-efficient audio-video foundation model, and Massive Supervised Fine-tuning Experiments examines how data, layer, and training factors shape alignment quality, offering practical guidance for training aligned models.

Alignment and systems concerns feature heavily. Value Drifts traces value alignment during LLM post-training, while MemAscend and Analysis and Optimized CXL-Attached Memory Allocation for Long-Context LLM Fine-Tuning optimize system memory for SSD-offloaded and CXL-attached fine-tuning, respectively. Refine-n-Judge curates high-quality preference chains for fine-tuning, and CompoST benchmarks how well LLMs compositionally interpret questions in a QALD setting.

On the application side, All You Need for Object Detection surveys multimodal LLMs/VLMs for autonomous vehicles, SignalLLM presents a general-purpose agent framework for automated signal processing, and Incentivizing LLMs to Self-Verify Their Answers explores getting models to check their own outputs, improving reliability. Recommendation applications appear in WeaveRec (cross-domain sequential recommendation via model merging) and Collab-REC (balancing tourism recommendations), and LLMs as In-Context Meta-Learners shows LLMs selecting models and hyperparameters in context. Together, these papers span efficiency, alignment, and an ever-widening set of LLM applications.
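To give a feel for the self-verification idea, here's a minimal, model-agnostic generate-then-verify loop. Everything here is an assumption: the llm() stub is a hypothetical placeholder for whatever generation call you use, and the prompt format and retry policy are invented. This is a generic prompting-style loop, not the method from Incentivizing LLMs to Self-Verify Their Answers, whose title suggests training the behavior rather than merely prompting for it.

```python
def llm(prompt: str) -> str:
    """Hypothetical stub: replace with a real text-generation call."""
    raise NotImplementedError("plug in your model or API here")

def answer_with_self_verification(question: str, max_tries: int = 3) -> str:
    """Generate an answer, ask the model to verify it, and retry on rejection."""
    answer = ""
    for _ in range(max_tries):
        answer = llm(f"Question: {question}\nAnswer concisely:")
        verdict = llm(
            f"Question: {question}\n"
            f"Proposed answer: {answer}\n"
            "Is the proposed answer correct? Reply YES or NO:"
        )
        if verdict.strip().upper().startswith("YES"):
            return answer  # the verifier accepted this attempt
    return answer  # fall back to the last attempt
```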

Graph Neural Networks

Graph Neural Networks (GNNs) are essential for processing graph data, with applications in social networks, drug discovery, and more. Let's explore the latest GNN research.

Title | Date | Comment
HEIR: Learning Graph-Based Motion Hierarchies | 2025-10-30 | Code link: https://github.com/princeton-computational-imaging/HEIR
Understanding Generalization in Node and Link Prediction | 2025-10-30 | arXiv admin note: text overlap with arXiv:2412.07106
UnifiedFL: A Dynamic Unified Learning Framework for Equitable Federation | 2025-10-30
From Embedding to Control: Representations for Stochastic Multi-Object Systems | 2025-10-30
A Survey of Heterogeneous Graph Neural Networks for Cybersecurity Anomaly Detection | 2025-10-30 | 37 pages, 4 figures, 86 references. Submitted to Journal of Computer Security (under review)
Morphology-Aware Graph Reinforcement Learning for Tensegrity Robot Locomotion | 2025-10-30
Data-driven Projection Generation for Efficiently Solving Heterogeneous Quadratic Programming Problems | 2025-10-30
Hierarchical Graph Networks for Accurate Weather Forecasting via Lightweight Training | 2025-10-29
Robust GNN Watermarking via Implicit Perception of Topological Invariants | 2025-10-29
Graph Network-based Structural Simulator: Graph Neural Networks for Structural Dynamics | 2025-10-29 | 16 pages, 14 figures
A method for the systematic generation of graph XAI benchmarks via Weisfeiler-Leman coloring | 2025-10-29
Exploring End-to-end Differentiable Neural Charged Particle Tracking -- A Loss Landscape Perspective | 2025-10-29 | Published in Transactions on Machine Learning Research (TMLR), 2025
GnnXemplar: Exemplars to Explanations -- Natural Language Rules for Global GNN Interpretability | 2025-10-29 | 38 pages, 20 figures, NeurIPS 2025 (Oral)
FastJAM: a Fast Joint Alignment Model for Images | 2025-10-29 | Accepted to NeurIPS 2025. Pages 1-10 are the Main Paper. Pages 23-31 are Supplemental Material. FastJAM website - https://bgu-cs-vil.github.io/FastJAM/
Expand and Compress: Exploring Tuning Principles for Continual Spatio-Temporal Graph Forecasting | 2025-10-29 | Accepted by ICLR 2025

Graph Neural Networks (GNNs) are evolving rapidly, with applications ranging from motion analysis to cybersecurity. HEIR introduces a method for learning graph-based motion hierarchies, essential for understanding complex movements, while Understanding Generalization in Node and Link Prediction analyzes how well these models handle unseen data. UnifiedFL presents a dynamic unified learning framework for equitable federation, addressing fairness in federated learning, and From Embedding to Control studies representations for stochastic multi-object systems. A Survey of Heterogeneous Graph Neural Networks for Cybersecurity Anomaly Detection provides a comprehensive overview of GNNs in security settings.

GNNs also reach into robotics and optimization: Morphology-Aware Graph Reinforcement Learning tackles tensegrity robot locomotion, and Data-driven Projection Generation speeds up heterogeneous quadratic programming. Hierarchical Graph Networks brings lightweight training to accurate weather forecasting, Robust GNN Watermarking protects model ownership via implicit perception of topological invariants, and Graph Network-based Structural Simulator applies GNNs to structural dynamics.

Interpretability and tooling round out the list. A Weisfeiler-Leman-coloring method enables the systematic generation of graph XAI benchmarks, Exploring End-to-end Differentiable Neural Charged Particle Tracking studies the loss landscape of differentiable particle tracking, and GnnXemplar turns exemplars into natural-language rules for global GNN interpretability. FastJAM contributes a fast joint alignment model for images, and Expand and Compress explores tuning principles for continual spatio-temporal graph forecasting. These papers collectively showcase the breadth of GNN research and its crucial role in AI.
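For readers new to GNNs, the core operation shared by most of the models above is neighborhood aggregation: each node updates its features from its neighbors' features. Below is a minimal dense GCN-style layer in PyTorch as a sketch; the row-normalized mean aggregation and single linear transform are one common design choice among many, and the toy graph is made up.

```python
import torch

def gcn_layer(x, adj, weight):
    """Mean-aggregate neighbor features (self-loops included), then transform."""
    adj = adj + torch.eye(adj.size(0))    # let each node see itself
    deg = adj.sum(dim=1, keepdim=True)    # node degrees for normalization
    return torch.relu((adj / deg) @ x @ weight)

# Toy triangle graph, 5-dim node features, a 5 -> 3 dim layer.
adj = torch.tensor([[0., 1., 1.],
                    [1., 0., 1.],
                    [1., 1., 0.]])
x = torch.randn(3, 5)
weight = torch.randn(5, 3)
print(gcn_layer(x, adj, weight).shape)  # torch.Size([3, 3])
```

Stacking several such layers widens each node's receptive field one hop at a time, which is exactly the locality that long-range methods like the ChebNet revival and the graph transformers above try to move beyond.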

Conclusion

Alright, folks! That wraps up the latest batch of AI research papers from November 1, 2025. We've covered some really exciting advancements in recommendation systems, representation learning, graph transformers, LLMs, and graph neural networks. It's clear that the field of AI is constantly evolving, with new breakthroughs happening all the time. Stay tuned for more updates, and keep pushing the boundaries of what's possible! 🚀