Business Problem
Fragmented and siloed analyses keep businesses from realizing transactional data’s full potential, creating major technical and managerial barriers to scaling analysis across multiple tasks.
Our Vision
Large Transactional Models (LTMs) unify transactional data analysis, enabling businesses to extract insights across multiple tasks with a single, scalable system.
How It Works
LTMs integrate deep learning with transactional-specific innovations, eliminating the need for separate models for tasks such as fraud detection, forecasting, and segmentation.
Key Benefits
A unified system replaces multiple specialized models.
Increased efficiency reduces complexity, costs, and operational burden.
Enhanced decision-making across business functions.
AI Challenges
LTMs must efficiently process large-scale transactional data: handling its tabular, temporal structure, managing long event histories, and generalizing across tasks. This demands novel architectures tailored to these challenges.
Transactional data captures the essence of business events—the "who," "what," "when," and "where" of interactions. In sales, this includes transaction dates, amounts, items or services exchanged, and the parties involved. This data is a goldmine for insights into customer behavior, operational optimization, and predictive patterns.
Despite its potential, businesses often fail to unlock the full value of transactional data analysis. Traditional approaches rely on fragmented tools, techniques, and teams, treating tasks like fraud detection, churn prediction, and demand forecasting as separate, siloed processes. Because each requires its own specialized models, pipelines, and feature extraction, businesses are left with significant technical and managerial burdens.
As a result, many companies struggle to scale their transactional data analysis effectively. Some avoid implementation altogether, fearing costs will outweigh benefits. Others settle for partial solutions, addressing only a subset of tasks the data could support.
To fully capitalize on transactional data’s opportunities, a unified, scalable, and adaptable framework is essential—one that works seamlessly across tasks to deliver consistent and efficient insights.
We envision Large Transactional Models (LTMs) as a transformative solution for transactional data analysis, akin to the revolution Large Language Models (LLMs) brought to natural language understanding.
LTMs integrate deep learning’s predictive power with innovations tailored to transactional analysis. By generalizing across tasks, LTMs eliminate the need for multiple, isolated solutions. They empower businesses to perform anomaly detection, segmentation, forecasting, and more—all within a unified system.
This streamlined approach reduces operational overhead, enhances efficiency, and unlocks the full spectrum of insights transactional data offers. Through collaborative R&D and cutting-edge techniques, we aim to make LTMs a scalable, groundbreaking tool for businesses.
Transactional data is fundamentally different from text, images, or other common AI modalities. Yet, most state-of-the-art models are not designed to handle its unique structure.
Unlike text, transactional data is inherently tabular, with each row representing a distinct event, typically ordered by time. While predicting the next transaction is a natural task, it does not align with how language models predict the next token in a sentence. Tokenizing a transaction row may work superficially, but it strips away the true structure of the data—it’s like trying to understand a movie by putting together individual frame descriptions.
More than just sequences of events, transactional data encodes complex personal and group dynamics through numerical, temporal, and spatial patterns that current AI models struggle to process effectively. In short, transactional data is not just "long text"—it’s a structured, multi-dimensional signal that demands a fundamentally different modeling approach.
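To make the contrast concrete, here is a minimal Python sketch of a single transaction as a typed record versus its naive flattening into text, which is roughly what a language-model tokenizer would receive. The field names and values are purely illustrative, not a fixed LTM schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative only: field names and types are hypothetical, not an LTM schema.
@dataclass
class Transaction:
    customer_id: str      # "who"
    merchant: str         # "where" / counterparty
    item: str             # "what"
    amount: float         # numerical signal
    timestamp: datetime   # temporal signal

tx = Transaction("C-1042", "ACME Grocers", "coffee beans", 14.90,
                 datetime(2024, 5, 3, 9, 17))

# Naive serialization for a text model discards the typed structure:
# identifiers, amounts, and timestamps all become undifferentiated tokens.
as_text = f"{tx.customer_id} {tx.merchant} {tx.item} {tx.amount} {tx.timestamp.isoformat()}"
print(as_text)
```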
Scale adds another challenge: transactional datasets often span years of interactions, exceeding the context limits of existing AI models. Transformer-based architectures face quadratic computational costs as context length increases, making large-scale analysis prohibitively expensive. Reducing context length cuts costs but risks losing critical long-term dependencies—an unacceptable trade-off for real-world applications.
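As a back-of-envelope illustration of that quadratic growth, the snippet below counts only pairwise interactions in standard self-attention and ignores constants; only the scaling trend matters.

```python
# Rough illustration of quadratic attention scaling: self-attention compares
# every event with every other event, so pairwise interactions grow as n**2.
for n_events in (1_000, 10_000, 100_000):
    pairwise = n_events ** 2
    print(f"{n_events:>7} events -> {pairwise:>15,} pairwise interactions")

# A 10x longer history costs roughly 100x more attention compute, which is
# why years-long transaction histories quickly become prohibitively expensive.
```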
Overcoming these limitations requires a reimagined AI architecture for transactional data. While isolated solutions, such as the Mamba architecture for long-context processing, offer partial improvements, no model today fully addresses the combined complexities of transactional data—let alone unifies all relevant analytical tasks into a single system.
Building Large Transactional Models that scale efficiently, retain accuracy, and unify multiple tasks is the next frontier in AI for business intelligence.
Developing Large Transactional Models demands not just technical expertise but also diverse perspectives and access to real-world data. We are actively seeking partners from industry and academia to collaborate on this journey.
Whether you’re a business seeking to unlock the full potential of your transactional data or a researcher eager to connect cutting-edge AI innovations to practical applications, we invite you to join us.
This technology holds exceptional potential for data-driven organizations managing vast transactional datasets, particularly in the financial, retail, and e-commerce sectors.
As part of this initiative, we are also exploring grant opportunities to support R&D innovation in the private sector.
Learn more about this initiative below, or reach out to discuss how we can collaborate.
There are currently no open roles for this initiative.
Follow us on LinkedIn for updates.