Publications

A central part of NXAI is the research itself. We believe in the power of deep scientific discovery to solve industrial challenges.

Papers published in 2025

  • xLSTM 7B: A Recurrent LLM for Fast and Efficient Inference (presented at ICML 2025)

  • Tiled Flash Linear Attention: More Efficient Linear RNN and xLSTM Kernels (Workshop on Foundation Models in the Wild – ICLR 2025)

  • TiRex: Zero-Shot Forecasting Across Long and Short Horizons with Enhanced In-Context Learning (presented at NeurIPS 2025)

  • TiRex Classification (BERT2S Workshop – NeurIPS 2025)

  • xLSTM Scaling Laws: Competitive Performance with Linear Time-Complexity

Papers published in 2024

  • xLSTM: Extended Long Short-Term Memory (presented at NeurIPS 2024)

  • Vision-LSTM: xLSTM as Generic Vision Backbone (presented at ICLR 2025)

  • A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks (presented at ICML 2025)

  • Bio-xLSTM: Generative modeling, representation and in-context learning of biological and chemical sequences (presented at ICLR 2025)

  • FlashRNN: I/O-Aware Optimization of Traditional RNNs on Modern Hardware (presented at ICLR 2025)

Try out xLSTM

Discover our models and code, and share your feedback with our developers.

Bringing this research to your business?

If you have questions about implementing the research found on this page, our experts are here to help.