PaperBot Daily Digest

March 04, 2026
20 papers · v1.0.2dev
Research Papers
20 papers summarized from arXiv

Aletheia tackles FirstProof autonomously

In a landmark demonstration of AI’s growing sophistication in high-level mathematics, Google DeepMind’s Aletheia agent successfully tackled the "FirstProof" challenge, a set of ten professional-grade research problems designed to test the limits of machine autonomy. Powered by the Gemini 3 Deep Think model, Aletheia operated without any human intervention to produce rigorous, LaTeX-formatted proofs for six of the ten problems, passing the scrutiny of panels of expert mathematicians. The study reveals a significant leap in AI reliability, as the agent utilized "self-filtering" to avoid submitting guesses for problems it couldn't solve, focusing instead on producing "publishable-quality" solutions for highly complex geometry and algebra. By documenting exactly how these solutions were generated and verified, the researchers provide a transparent roadmap for how autonomous AI "researchers" might soon become indispensable partners in expanding the frontiers of mathematical discovery.

AI Review

Failed to generate LLM review.

Research Directions

Potential research directions, areas for future work, and open problems suggested by "Aletheia tackles FirstProof autonomously", with a focus on actionable and innovative ideas.

1. Direct Extensions of This Work

These are research projects that build directly on the methods and results presented in the paper.

  • Iterative Self-Correction and Refinement: The current pipeline uses a single "Verification and Extraction Prompt" which can yield a [FIXABLE] verdict and an autonomous revision. A direct extension would be to develop a multi-step, iterative self-correction loop. The agent's own critique (or a separate critique module) could be fed back to the generator, allowing it to refine "sketchy" or "inadequate" proofs (like the initial attempts for P7 and P8) through several cycles until a [CORRECT] verdict is reached. This would mimic the human process of redrafting a paper.
  • Autonomous "Best-of-N" Selection: The study relied on human expertise to select the "best-of-2" solutions. A significant advancement would be to automate this selection process. This could involve developing a meta-agent that scores and ranks multiple candidate solutions based on criteria like logical soundness, clarity, citation quality, and even computational efficiency (as proxied by inference cost). This would be a crucial step towards full autonomy.
  • Root-Cause Analysis of Failures: The agent produced no output for problems 1, 3, 4, and 6; the paper presents this abstention ("self-filtering") as a reliability feature. A critical research direction is to equip the agent with the ability to perform and articulate a root-cause analysis for these failures. For example, could the agent report: "No solution found because I could not find a relevant theorem connecting Group Theory and Manifold properties in my knowledge base," or "The search space for this problem's combinatorial nature exceeds my computational limits"? This would turn failures from dead ends into valuable feedback for human researchers.
  • Dynamic Inference Cost Allocation: Problem 7 required an order of magnitude more inference cost. This suggests a static or pre-determined budget. Future work could focus on creating an agent that dynamically allocates its computational resources. It could learn to identify problems that require "deep thought" and allocate more generator/verifier cycles to them, while quickly dispensing with simpler problems, optimizing the use of massive computational resources.
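The first direction above can be made concrete as a small control loop. A minimal sketch, in which `generate`, `verify`, and `revise` are hypothetical stand-ins for the paper's generator model and its "Verification and Extraction Prompt":

```python
# Sketch of a multi-step self-correction loop (illustrative, not the
# paper's pipeline). `generate`, `verify`, and `revise` are hypothetical
# callables supplied by the caller.

def refine(problem, generate, verify, revise, max_rounds=4):
    """Cycle generate -> verify -> revise until a [CORRECT] verdict,
    abstaining (returning None) when the proof is judged unfixable."""
    proof = generate(problem)
    for _ in range(max_rounds):
        verdict, critique = verify(problem, proof)
        if verdict == "[CORRECT]":
            return proof, verdict
        if verdict != "[FIXABLE]":
            # Mirror Aletheia's self-filtering: abstain rather than guess.
            return None, verdict
        proof = revise(problem, proof, critique)
    return None, "[BUDGET_EXHAUSTED]"
```

The abstention branch reproduces the no-output behavior the paper reports for unsolved problems, while the loop lets a [FIXABLE] draft be redrafted several times instead of once.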

2. Novel Research Directions Inspired by This Paper

These are more speculative, paradigm-shifting ideas sparked by the paper's findings and limitations.

  • Modeling and Navigating Subjectivity in Mathematical Proof: The non-unanimous evaluation of Problem 8 ("the ambiguity came from subjective interpretation of the meaning of being ‘publishable after minor revisions’") is a key finding. This suggests a novel research direction beyond mere correctness: AI for Mathematical Exposition and Scholarship. This would involve:
    • Training models to understand the implicit standards of rigor and detail in different mathematical sub-fields.
    • Developing agents that can generate a proof at varying levels of detail, from a high-level "sketch" (like the P8 solution) to a fully detailed, line-by-line argument suitable for a textbook.
    • Creating a "Dialectic Agent" that engages in a clarification dialogue (hypothesized in Section 2) to transform a "Correct?" proof into a unanimously "Correct" one, a process that would help formalize what constitutes a "complete" argument.
  • From Problem Solver to Conjecture Generator: Aletheia is a problem-solver. The next frontier is to create an AI that can generate interesting, non-trivial mathematical conjectures. Using its vast knowledge from training, such an agent could identify patterns, generalize existing theorems, or explore the boundaries of known results to propose new problems, moving from being a tool for solving existing research to a source of new research questions.
  • Developing a Structured "Mathematical Memory" for Agents: The paper treats each problem in isolation. A powerful new direction would be to develop agents with long-term, structured memory. Solving Problem 2 (on local Rankin–Selberg integrals) should ideally add new concepts and techniques to the agent's internal "knowledge graph," which it could then retrieve and apply when tackling a related problem in the future, mimicking how a human mathematician's expertise grows over time.
  • Formalizing Human-AI Interaction for Scientific Discovery: The "Human-AI Interaction Card" is a nascent but important concept. A dedicated research effort could be made to create a comprehensive framework and taxonomy for Human-AI collaboration in research. This would go beyond simple prompts, defining roles like "AI-Strategist, Human-Technician" or "Human-Manager, AI-Workforce," and developing the interfaces and protocols to support these complex collaborative structures.
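The structured-memory direction can be prototyped far below the level of a full knowledge graph. A minimal sketch, where the class name, the concept tags, and the overlap-count ranking rule are all illustrative assumptions:

```python
from collections import defaultdict

class TechniqueMemory:
    """Toy concept-tagged store: solving one problem registers techniques
    that can be retrieved when a later problem shares concepts."""

    def __init__(self):
        self._by_tag = defaultdict(list)

    def add(self, technique, tags):
        for tag in tags:
            self._by_tag[tag].append(technique)

    def retrieve(self, tags):
        """Rank techniques by how many of the query's concept tags they share."""
        scores = defaultdict(int)
        for tag in tags:
            for technique in self._by_tag[tag]:
                scores[technique] += 1
        return sorted(scores, key=scores.get, reverse=True)

mem = TechniqueMemory()
mem.add("Rankin-Selberg unfolding", ["L-functions", "automorphic forms"])
mem.add("spectral decomposition", ["automorphic forms"])
hits = mem.retrieve(["automorphic forms", "L-functions"])
```

In this toy, solving a problem on local Rankin–Selberg integrals would register the unfolding technique, which then ranks first for any future problem tagged with overlapping concepts.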

3. Unexplored Problems Highlighted by This Work

The paper's methodology and results implicitly reveal several concrete, unsolved challenges.

  • High-Fidelity Automated Citation: The paper explicitly notes that the AI solutions "fail to meet the stated requirement that ‘Citations should include precise statement numbers’." This highlights a significant and unsolved sub-problem: training language models to not only identify relevant literature but to pinpoint the exact theorem, lemma, or even equation number and verify its applicability. This is a problem of retrieval, grounding, and logical verification.
  • The Economics of AI-Driven Research: The inference cost plot (Figure 1) raises a critical issue. If solving a single problem requires an "order of magnitude" more computation than previous benchmarks, the scalability and accessibility of such systems are in question. An unexplored area is the study of the "computational economics" of AI research agents. This includes research on algorithmic efficiency, developing less costly-but-still-powerful models, and assessing the ROI (Return on Investment) of using such systems for discovery.
  • Bridging Natural Language Proofs and Formal Verification: The entire Aletheia pipeline operates in the domain of natural language mathematics (LaTeX). A major challenge, highlighted by the ambiguity of P8, is to bridge this to formal proof systems (like Lean, Coq, or Isabelle/HOL). A research problem would be to create an agent that can either generate a formal proof directly or, more powerfully, translate its own natural language proof into a formal language, thereby making correctness an objective, machine-verifiable property rather than a subjective expert judgment.
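For the formal-verification direction, the target artifact is a statement a proof assistant can check mechanically. A toy Lean 4 example of the format (not drawn from the paper's problems):

```lean
-- Toy illustration: once a claim is stated in a proof assistant,
-- correctness is certified by the kernel rather than judged by experts.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Translating Aletheia's natural-language LaTeX proofs into statements of this kind would turn the subjective P8-style verdicts into binary, machine-verifiable ones.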

4. Potential Applications or Domains

The Aletheia agent architecture could be adapted and applied to other fields.

  • Automated Scientific Peer Review Assistant: The "Verification and Extraction Prompt" (Appendix A) is a blueprint for an AI tool to assist human peer reviewers. Such a tool could be deployed in academic publishing to provide a "first pass" review, checking for logical fallacies, verifying citations, flagging unsubstantiated claims, and highlighting "sketchy" steps, as it did for Problem 8.
  • Theoretical Sciences and Engineering: The methodology of an autonomous agent tackling research-level problems is directly applicable to other formal sciences. Future work could deploy a similar agent in theoretical physics, theoretical computer science (e.g., proving properties of algorithms), or quantitative finance, where complex mathematical proofs and derivations are central.
  • Advanced Educational Tools: An agent like Aletheia, capable of generating correct, multi-step solutions to novel problems, could be the engine for a new generation of educational tools. It could act as a "Socratic tutor" for graduate-level students, generating bespoke problems, providing tailored hints, and explaining complex proofs in multiple ways, thereby accelerating the training of the next generation of researchers.
↑ Back to top

Test-Time Training with KV Binding Is Secretly Linear Attention

While Test-Time Training (TTT) is traditionally celebrated as a way for models to "memorize" new information on the fly, this research reveals a surprising "memorization paradox" where better internal learning actually leads to worse overall performance. By dismantling the common assumption that these models act like a digital storage-and-retrieval system, the authors prove that TTT is mathematically equivalent to a sophisticated version of linear attention. This discovery allows the researchers to strip away unnecessary architectural bloat, simplifying complex models into more efficient, parallelized versions that achieve up to a 4.0× speedup without losing power. Ultimately, the paper reframes TTT not as a form of temporary memory, but as a high-speed feature mixer that paves the way for faster, leaner, and more scalable AI architectures.

AI Review

Failed to generate LLM review.

Research Directions

Based on the paper's core findings, here are potential research directions and areas for future work, focusing on actionable and innovative ideas.

1. Direct Extensions of This Work

These directions build directly on the paper's theorems, ablations, and stated limitations.

  • 1.1. Investigating Non-Linear Final Layers: The paper's theoretical analysis is restricted to TTT models with a linear, bias-free final layer. A critical extension is to analyze TTT variants where the final layer is non-linear (e.g., includes a bias term, a ReLU, or a sigmoid activation).

    • Research Question: Can TTT with a non-linear final layer still be expressed as a known computational primitive?
    • Approach: Attempt to unroll the gradient updates using techniques like Taylor series expansion on the non-linearity. This might reveal that TTT becomes a more complex, state-dependent operator that is not strictly linear attention but perhaps a "gated" or "dynamic" variant. The goal would be to find a closed-form expression, even if approximate.
  • 1.2. Analyzing End-to-End TTT (TTT-E2E): The paper exclusively focuses on TTT with a key-value binding loss (TTT-KVB). A major open question is whether the same "secretly linear attention" interpretation holds for TTT-E2E, where gradients from the final task loss are backpropagated through the inner loop.

    • Research Question: Does TTT-E2E also reduce to a linear attention operator, and if so, what form does it take?
    • Approach: Unroll the optimization for TTT-E2E. The key difference is that the gradient g_t(k) will now depend on the final model output and task loss, making it a function of the entire sequence history, not just the local key-value pair. This could lead to a more complex, history-dependent form of attention that might explain its effectiveness in long-context tasks.
  • 1.3. Reversing the Equivalence: Designing Novel Linear Attention via TTT: The paper shows TTT → Linear Attention. The reverse direction is a compelling design paradigm.

    • Research Question: Can we design novel, high-performance linear attention mechanisms by first formulating them as a TTT inner-loop optimization problem?
    • Approach: Start with a target attention property (e.g., selective forgetting, token-dependent decay). Formulate a corresponding "fast weight" architecture and an inner-loop "loss" (even a non-standard one) that, when unrolled, produces this target mechanism. For instance, could the selective mechanism in Mamba be derived from an inner-loop TTT process with a specific regularizer or objective? This treats TTT as a generative framework for creating new RNN/attention variants.
  • 1.4. The Role of the "Dynamic Kernel": The paper's best-performing variant (Variant 1) only updates the last layer, freezing the feature extractor phi(·) into a "static kernel". This contradicts the intuition that a dynamic, history-dependent kernel should be more powerful.

    • Research Question: How can we harness the expressive power of a dynamic kernel phi_t(·) while mitigating the train-test mismatch that degrades performance?
    • Approach: Introduce a regularization term during training that penalizes large changes in the kernel parameters Θ_t between successive steps. For example, add a loss term ||Θ_t - Θ_{t-1}||² to the main training objective. This would encourage the dynamic kernel to evolve smoothly, potentially retaining its adaptive benefits without causing catastrophic distribution shift at test time.
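Several of the directions above lean on the paper's core identity, and the single-step case can be checked numerically: one SGD step on the key-value binding loss is exactly a rank-1 linear-attention (delta-rule) state update. A minimal sketch, assuming plain SGD without momentum and a linear, bias-free read-out (the setting of the theoretical analysis); it is an illustration, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
d, eta = 8, 0.1
S = rng.normal(size=(d, d))          # fast-weight state S_{t-1}
k, v, q = rng.normal(size=(3, d))    # current key, value, and query features

# TTT-KVB view: one SGD step on the binding loss 0.5 * ||S k - v||^2.
S_ttt = S - eta * np.outer(S @ k - v, k)

# Linear-attention (delta-rule) view: a rank-1 state update.
S_lin = S + eta * np.outer(v - S @ k, k)

assert np.allclose(S_ttt, S_lin)     # the two updates coincide exactly
y = S_ttt @ q                        # read-out, as in linear attention
```

Any extension in 1.1–1.3 amounts to asking what replaces this rank-1 update when the loss, optimizer, or final layer changes.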

2. Novel Research Directions Inspired by This Paper

These ideas generalize the paper's core insight—that an optimization process can be a computational operator—to new territories.

  • 2.1. The "Optimizer as Operator" Paradigm: The paper analyzes SGD with momentum. This can be generalized to explore how different inner-loop optimizers compile down to different computational operators.

    • Research Question: What novel attention-like mechanisms are induced by using more sophisticated optimizers like Adam, RMSprop, or second-order methods in the TTT inner loop?
    • Approach: Analytically unroll the updates for an inner-loop Adam optimizer. The momentum (m_t) and variance (v_t) terms of Adam will likely translate into learnable, per-feature decay and normalization factors within the resulting linear attention-like mechanism. This could lead to a new class of adaptive attention models that are "discovered" through the lens of optimization theory, rather than designed by hand.
  • 2.2. Unifying Standard (Softmax) Attention: The paper unifies TTT and linear attention. The ultimate goal would be to unify both linear and standard softmax attention under a single "optimization-as-computation" framework.

    • Research Question: Is it possible to formulate an inner-loop optimization problem that, when unrolled, is equivalent to standard softmax attention (softmax(QK^T/sqrt(d_k))V)?
    • Approach: This would be a significant theoretical challenge. The inner-loop objective would likely need to be something other than MSE or dot-product. Perhaps a cross-entropy or KL-divergence-based objective, when optimized with a specific differentiable solver, would yield the exponential exp(Q K^T) term. A successful result would reframe "attention" as a family of solutions to different inner-loop optimization problems.
  • 2.3. Beyond Gradients: "Computational Scaffolding" for Sequence Modeling: The "Gradient Ascent Anomaly" suggests that the mechanics of the update, not the objective's minimization, are what matter. This opens the door to non-gradient-based update rules.

    • Research Question: Can we design powerful sequence models by replacing the inner-loop gradient descent with non-gradient-based, associative update rules for the "fast weights"?
    • Approach: Design a sequence layer where the state S_t is updated via a simple, learnable, non-gradient update rule, such as a Hebbian update (S_t = S_{t-1} + f(k_t) g(v_t)^T) or a gated update (S_t = gate * S_{t-1} + (1-gate) * update). This moves away from the "training at test-time" analogy and toward a more direct "fast weight programming" or memory-editing perspective, with potential for even greater efficiency.
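Direction 2.3's non-gradient update rules can be made concrete in a few lines. A minimal sketch; the identity feature maps and the scalar gate are simplifying assumptions, not a proposed design:

```python
import numpy as np

def hebbian_step(S, k, v):
    """Associative (Hebbian) write: S <- S + g(v) f(k)^T, with f and g
    taken as the identity here for simplicity."""
    return S + np.outer(v, k)

def gated_step(S, k, v, gate):
    """Gated write: a scalar gate in [0, 1] interpolates between keeping
    the old state and overwriting it with the new association."""
    return gate * S + (1.0 - gate) * np.outer(v, k)

d = 4
k = np.eye(d)[0]                    # one-hot key, so read-out is exact
v = np.arange(d, dtype=float)
S = gated_step(np.zeros((d, d)), k, v, gate=0.5)
# Reading with the key returns the stored value, scaled by the write gate.
assert np.allclose(S @ k, 0.5 * v)
```

Neither rule references a loss or a gradient, which is the point: the state is programmed directly, in the "fast weight programming" spirit.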

3. Unexplored Problems Highlighted by This Work

These are specific empirical puzzles and contradictions from the paper that warrant deeper investigation.

  • 3.1. The Purpose of Q/K Distributional Asymmetry: The paper shows that for TTT, queries and keys come from different distributions, which is "pathological" for retrieval but normal for its linear attention form. The unexplored question is why the model learns this asymmetry and whether it can be controlled.

    • Research Question: What is the inductive bias of TTT that encourages the separation of query and key distributions, and what functional roles do they learn?
    • Approach: Design probing experiments to analyze the information encoded in phi(q) vs. phi(k). For instance, does phi(k) learn to encode positional or structural information for building the state S_t, while phi(q) learns to encode semantic content for reading from it? One could try to enforce or prevent this asymmetry with contrastive losses during training and measure the impact on performance.
  • 3.2. Exploring the Boundaries of the "Gradient Ascent Anomaly": The finding that gradient ascent works as well as, or better than, descent is striking. It's crucial to understand if this is a universal property or an artifact of the specific tasks and models tested.

    • Research Question: Are there tasks or model configurations where gradient ascent fails and true "memorization" (descent) is necessary?
    • Approach: Test the gradient ascent version of TTT on tasks that might require more precise, factual retrieval, such as associative recall or question answering over a long context. It's possible that for tasks where the key -> value mapping must be very precise, the structured "noise" of gradient ascent is detrimental, revealing the limits of this anomaly.
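The experiment proposed in 3.2 can be prototyped on synthetic associative recall before touching a trained model. A toy sketch using the delta-rule state update, with the step sign flipped for "ascent"; the dimensions, step size, and unit-norm keys are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_pairs, eta = 32, 16, 0.5
keys = rng.normal(size=(n_pairs, d))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)   # unit-norm keys
vals = rng.normal(size=(n_pairs, d))

def recall_error(sign):
    """Stream the pairs through the delta-rule update with a signed step
    (+1 = descent on the binding loss, -1 = the 'ascent' variant) and
    report the mean recall error over all stored pairs."""
    S = np.zeros((d, d))
    for k, v in zip(keys, vals):
        S = S + sign * eta * np.outer(v - S @ k, k)
    return float(np.mean(np.linalg.norm(S @ keys.T - vals.T, axis=0)))

err_descent, err_ascent = recall_error(+1), recall_error(-1)
```

On this toy stream, flipping the sign degrades key-to-value recall, which isolates exactly the memorization fidelity that the anomaly suggests is decoupled from downstream task performance.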

4. Potential Applications or Domains

These directions explore where the re-framed understanding of TTT as an efficient, adaptive linear attention mechanism could be most impactful.

  • 4.1. Lifelong Learning and Streaming Data Processing: The online, adaptive nature of the TTT mechanism makes it a prime candidate for scenarios with continuously shifting data distributions.

    • Application: Use TTT-based models for tasks like real-time anomaly detection in financial time series, predictive maintenance from sensor data, or adapting to new topics in a continuous news stream. The model's state S_t acts as a compressed, adaptive summary of the stream's history.
  • 4.2. On-the-Fly Personalization: The ability to update a model's state in-context without changing its core weights is ideal for efficient personalization.

    • Application: Deploy a large foundation model where the TTT layers adapt to a user's interaction history within a single session. The state matrix S becomes a "session cache" or "user profile," tailoring responses without expensive fine-tuning. This could be applied to recommender systems, personalized chatbots, or assistive code generation.
  • 4.3. Reinforcement Learning Agents with Adaptive Memory: An RL agent's state representation needs to adapt quickly to changes within an episode.

    • Application: Integrate a TTT layer into an RL agent's policy or value network. The (state, action) pairs from the trajectory can be treated as the (key, value) inputs to the TTT layer. The unrolled optimization would allow the agent to build an adaptive "short-term memory" of the episode, potentially improving performance in non-stationary environments or tasks requiring long-term credit assignment.
↑ Back to top

Squint: Fast Visual Reinforcement Learning for Sim-to-Real Robotics

Training robots to perform tasks using only camera images is notoriously slow and expensive, often requiring millions of simulations that can take days to process. To bridge this gap, researchers introduced Squint, a high-speed learning method that can train a robot to master complex manipulation tasks—like stacking blocks or placing cans—in as little as 15 minutes on a single standard gaming GPU. By "squinting" (rendering high-resolution images and then downsampling them) and optimizing how the AI reuses its past experiences, the system achieves a 91% success rate when transferred directly from the simulator to a real-world robotic arm. This breakthrough suggests a future where sophisticated robotic behaviors can be developed with minimal hardware in less time than it takes to grab a cup of coffee.

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

Multi-Vector Index Compression in Any Modality

As digital information shifts from simple text to a mix of images, videos, and audio, modern search engines are struggling to store the massive amounts of data required to retrieve these "multimodal" documents efficiently. To solve this, researchers developed Attention-Guided Clustering (AGC), a smart compression technique that identifies the most important parts of a document and condenses them into a tiny, high-impact storage footprint. By prioritizing the most descriptive elements of a video or image rather than saving every redundant frame, this method can shrink an index to just a fraction of its original size while actually maintaining—or even improving—search accuracy. This breakthrough makes high-performance, "any-modality" search practical for massive real-world collections like YouTube or web-scale digital archives without requiring astronomical storage costs.

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

Untied Ulysses: Memory-Efficient Context Parallelism via Headwise Chunking

Ravi Ghadia, Maksim Abraham, Sergei Vorobyov, Max Ryabinin

Abstract: Efficiently processing long sequences with Transformer models usually requires splitting the computations across accelerators via context parallelism. The dominant approaches in this family of methods, such as Ring Attention or DeepSpeed Ulysses, enable scaling over the context dimension but do not focus on memory efficiency, which limits t

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

Learning from Trials and Errors: Reflective Test-Time Planning for Embodied LLMs

Yining Hong, Huang Huang, Manling Li, Li Fei-Fei, Jiajun Wu, Yejin Choi
Website: https://reflective-test-time-planning.github.io
Code: https://github.com/Reflective-Test-Time-Planning/Reflective-Test-Time-Planning

(Figure, panel (a) "Task": the task "Put the toy car in the green box", with scored candidate choices, e.g. "Bad choice: the teddy bear is already in the green box." (Score: 22) and "The orange box is too small. The toy car doesn't fit into the orange box.")

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

Statistical Query Lower Bounds for Smoothed Agnostic Learning

Ilias Diakonikolas (University of Wisconsin-Madison, ilias@cs.wisc.edu), Daniel M. Kane (University of California, San Diego, dakane@cs.ucsd.edu)
February 25, 2026

Abstract: We study the complexity of smoothed agnostic learning, recently introduced by [CKK+24], in which the learner competes with the best classifier in a target class under slight Gaussian perturbations of the inputs. Specifically, we focus on the prototypical task of agnostica

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

On Data Engineering for Scaling LLM Terminal Capabilities

2026-02-25
Renjie Pi, Grace Lam, Mohammad Shoeybi, Pooya Jannaty, Bryan Catanzaro, Wei Ping

Abstract: Despite rapid recent progress in the terminal capabilities of large language models, the training data strategies behind state-of-the-art terminal agents remain largely undisclosed. We address this gap through a systematic study of data engineering practices for terminal agents, making two key contributions: (1) Terminal-Task-Gen, a li

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

XMorph: Explainable Brain Tumor Analysis Via LLM-Assisted Hybrid Deep Intelligence

Sepehr Salem Ghahfarokhi, M. Moein Esfahani, Raj Sunderraman, Vince Calhoun, Mohammed Alser
Department of Computer Science, Georgia State University, Atlanta, GA, USA; TReNDS Center, Georgia State University, Atlanta, GA, USA
Corresponding authors: ssalemghahfarokhi1@gsu.edu, malser@gsu.edu

Abstract: Deep learning has significantly advanced automated brain tumor diagnosis, yet clinical adoption remains limite

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

The Diffusion Duality, Chapter II: Ψ-Samplers and Efficient Curriculum

Published as a conference paper at ICLR 2026
Justin Deschenaux (EPFL), Caglar Gulcehre (EPFL, Microsoft AI), Subham Sekhar Sahoo (Cornell Tech)

Abstract: Uniform-state discrete diffusion models excel at few-step generation and guidance due to their ability to self-correct, making them preferred over autoregressive or masked diffusion models in these settings. However, their sampling quality plateaus with

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

Why Pass@k Optimization Can Degrade Pass@1: Prompt Interference in LLM Post-training

2026-02-25
Anas Barakat (Singapore University of Technology and Design), Souradip Chakraborty (University of Maryland, College Park), Khushbu Pahwa, Amrit Singh Bedi (University of Central Florida)

Abstract: Pass@k is a widely used performance metric for verifiable large language model tasks, including mathematical reasoning, code generation, and short-answer reasoning. It defines success if any of k independently sample

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

Efficient Hierarchical Any-Angle Path Planning on Multi-Resolution 3D Grids

Victor Reijgwart, Cesar Cadena, Roland Siegwart and Lionel Ott
Autonomous Systems Lab, ETH Zürich, Switzerland
Email: vreijgwart@rai-inst.com, [cesarc | rolandsi | lioott]@ethz.ch

Abstract: Hierarchical, multi-resolution volumetric mapping approaches are widely used to represent large and complex environments as they can efficiently capture their occupancy and connectivity information. Yet widely used path planning metho

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

NoRD: A Data-Efficient Vision-Language-Action Model that Drives without Reasoning

Ishaan Rawal (Applied Intuition, Texas A&M University), Shubh Gupta (Applied Intuition), Yihan Hu (Applied Intuition), Wei Zhan (Applied Intuition, UC Berkeley)

Abstract: Vision-Language-Action (VLA) models are advancing autonomous driving by replacing modular pipelines with unified end-to-end architectures. However, current VLAs face two expensive requirements: (1) massive dataset collection, and (2) dense reasoning annotations. In this work, we address both cha

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

SELAUR: Self Evolving LLM Agent via Uncertainty-aware Rewards

Dengjia Zhang (Johns Hopkins University, dzhang98@jhu.edu), Xiaoou Liu (Arizona State University, xiaoouli@asu.edu), Lu Cheng (University of Illinois Chicago, lucheng@uic.edu), Yaqing Wang (Purdue University, wang5075@purdue.edu), Kenton Murray (Johns Hopkins University, kenton@jhu.edu), and Hua Wei (Arizona State University, hua.wei@asu.edu)

Abstract: Large language models (LLMs) are increasingly deployed as multi-step decis

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

CG-DMER: Hybrid Contrastive-Generative Framework for Disentangled Multimodal ECG Representation Learning

Ziwei Niu 1,3, Hao Sun 4, Shujun Bian 1, Xihong Yang 2, Lanfen Lin 3, Yuxin Liu 1, Yueming Jin 1,2
1 Department of Biomedical Engineering, National University of Singapore
2 Department of Electrical and Computer Engineering, National University of Singapore
3 College of Computer Science and Technology, Zhejiang University, Hangzhou, China
4 College of informat

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

Not Just How Much, But Where: Decomposing Epistemic Uncertainty into Per-Class Contributions

Mame Diarra Toure, David A. Stephens
Department of Mathematics and Statistics, McGill University

Abstract: In safety-critical classification, the cost of failure is often asymmetric. Yet Bayesian deep learning summarises epistemic uncertainty with a single scalar, mutual information (MI), which cannot distinguish whether a model's ignorance involves a benign or safety-critical class. We decompose MI

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

Scaling State-Space Models on Multiple GPUs with Tensor Parallelism

Anurag Dutt, Nimit Shah, Hazem Masarani, Anshul Gandhi
Stony Brook University
Email: adutt@cs.stonybrook.edu, nimishah@cs.stonybrook.edu, hazem.masarani@stonybrook.edu, anshul@cs.stonybrook.edu

Abstract: Selective state space models (SSMs) have rapidly become a compelling backbone for large language models, especially for long-context workloads. Yet in deployment, their infe

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

Sequential Counterfactual Inference for Temporal Clinical Data: Addressing the Time Traveler Dilemma

When doctors use AI to predict a patient’s health risks, they often ask "what if" questions—like "what if this patient didn’t have diabetes?"—to understand how to improve outcomes. However, this paper reveals a "Time Traveler Dilemma," where standard AI methods propose biologically impossible scenarios, such as "removing" a chronic disease that a patient has actually lived with for years. To fix this, researchers developed the Sequential Counterfactual Framework, a new approach that respects the flow of time and medical reality by distinguishing between what we can change (like lab results) and what we cannot (like chronic diagnoses). By testing this on thousands of COVID-19 patients, the team demonstrated how we can move past impossible "what ifs" to generate realistic, actionable medical insights that show exactly how early interventions can stop dangerous health cascades before they start.

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

PVminer: A Domain-Specific Tool to Detect the Patient Voice in Patient Generated Data

Samah Fodeh, Linhai Ma, Yan Wang, Srivani Talakokkul, Ganesh Puthiaraju, Afshan Khan, Ashley Hagaman, Sarah Lowe and Aimee Roundtree
Department of Emergency Medicine, Yale School of Medicine, 464 Congress Ave, 06519, CT, USA; Department of Biomedical Informatics & Data Science, Yale School of Medicine, 100

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top

A Benchmark for Deep Information Synthesis

Published as a conference paper at ICLR 2026
Debjit Paul, Daniel Murphy, Milan Gritta, Ronald Cardenas, Victor Prokhorov, Jun Wang, Gerasimos Lampouras
Dataset Contributors: Lena Sophia Bolliger, Aysim Toker, Roy Miles, Andreea-Maria Oncescu, Jasivan Alex Sivakumar, Philipp Borchert, Ismail Elezi, Meiru Zhang, Ka Yiu Lee, Guchun Zhang
Huawei Noah's Ark Lab, UK; Imperial College London; UCL Centre for Artificial Intelligence; University

Peer Reviews

Failed to generate review summary.

AI Review

Failed to generate LLM review.

Research Directions

Failed to generate research directions.

↑ Back to top