Here at RELEX, data isn’t just an industry buzzword – it’s what powers everything we do. Our SaaS products do more than predict a few days of sales; they manage complex logistics to keep retail supply chains running smoothly. If you’ve ever wondered about the behind-the-scenes work that makes global retail tick, you’ll get why we’re all about efficiency and prediction.
Now, we’re raising the bar with AI. Our latest tool, Rebot, is a Retrieval-Augmented Generation (RAG) solution that’s already making smarter decisions possible. But we’re setting our sights on something even bigger: integrating apps built on Large Language Models (LLMs) that work with actual customer data in real time.
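For readers new to the pattern: a RAG system retrieves the documents most relevant to a query and injects them into the prompt before the model generates an answer. Here’s a minimal sketch in TypeScript – the toy keyword scoring and every name in it are illustrative, not how Rebot actually retrieves (a real system would use vector embeddings and a real LLM client):

```typescript
// Minimal sketch of the retrieve-then-generate pattern behind RAG.
// The keyword-overlap "relevance" below is a stand-in for real
// embedding-based similarity search.

interface Doc {
  id: string;
  text: string;
}

// Toy relevance score: count query words that appear in the document.
function score(query: string, doc: Doc): number {
  const words = query.toLowerCase().split(/\s+/);
  const text = doc.text.toLowerCase();
  return words.filter((w) => text.includes(w)).length;
}

// Retrieve the top-k documents for the query.
function retrieve(query: string, docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((a, b) => score(query, b) - score(query, a))
    .slice(0, k);
}

// Build the augmented prompt that would be sent to the LLM.
function buildPrompt(query: string, docs: Doc[]): string {
  const context = docs.map((d) => `[${d.id}] ${d.text}`).join("\n");
  return `Answer using only this context:\n${context}\n\nQuestion: ${query}`;
}
```

The interesting work is everything this sketch hides: chunking, embedding, ranking, and deciding how much retrieved context the prompt can afford.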
Let’s be honest – everyone (and their mother) wants to add LLMs to their products, but not everyone’s thinking about the gritty details—like protecting sensitive customer data, managing a multitenant setup, and making sure we don’t blow our budget sending too much data to OpenAI. These are some of the realities we’re dealing with here.
So here’s a look at what’s happening behind the scenes when we are working on expanding the capabilities of Rebot.
This is Rebot.
The Impact of Our Work
Building LLM-powered apps isn’t as simple as picking a framework and calling it a day.
Before choosing a framework, there are quite a few tests and trials we need to run to truly understand the impact of our decisions. Each of the “industry-leading” products has its own strengths and weaknesses, and we need to understand what we are committing ourselves, and our clients, to.
For example, here are some of the solutions we’ve tried so far, and our partial findings, as we work out how to connect the LLM to actual client data via a tools interface:
LlamaIndex:
We started with LlamaIndex, which integrates well with LangChain. But its agent’s limited middleware customization and lack of frontend tool support pose challenges. We’re tempted to make it work, though—it’s so close!
LangChain:
LangChain is like the Swiss Army knife of LLM tools: flexible, popular, and supported by a big community. But it’s no easy lift – switching Rebot’s backend to LangChain feels like remodeling your entire house just to add a new room. And like LlamaIndex, its lack of frontend tool support limits some of its appeal.
Vercel AI SDK:
Then there’s Vercel’s AI SDK, which shines with mature features and seamless React integration. Its frontend tool support opens up great possibilities for dynamic experiences. The hitch? We might have to rewrite Rebot’s backend – though maybe not fully. It’s a promising option, even if it takes some work.
LangGraph.js:
LangGraph.js is another possibility, especially for frontend tasks, but to get it working well we’d likely need LangChain, leading us back to the same rewrite challenge. It looks good on paper, but in practice it could mean some tricky integrations ahead.
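To make the “tools interface” running through the comparison above concrete: the model proposes a named tool call with JSON arguments, the app executes it against client data, and the result is fed back into the conversation. Here’s a framework-agnostic sketch – every name in it (registerTool, dispatch, getStockLevel) is illustrative, not any particular framework’s API:

```typescript
// Framework-agnostic sketch of an LLM "tools" interface. In a real
// framework, the ToolCall arrives from the model's response; here we
// only show the app-side registry and dispatch.

interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

type ToolHandler = (args: Record<string, unknown>) => string;

const tools = new Map<string, ToolHandler>();

// Register a tool the model is allowed to invoke.
function registerTool(name: string, handler: ToolHandler): void {
  tools.set(name, handler);
}

// Execute a tool call proposed by the model; the returned string is
// sent back to the model as a tool message.
function dispatch(call: ToolCall): string {
  const handler = tools.get(call.name);
  if (!handler) {
    return `Unknown tool: ${call.name}`;
  }
  return handler(call.args);
}

// Example: a read-only tool over mock client data.
registerTool("getStockLevel", (args) => {
  const levels: Record<string, number> = { "sku-123": 42 };
  const sku = String(args.sku);
  return JSON.stringify({ sku, onHand: levels[sku] ?? 0 });
});
```

The framework question is essentially where this dispatch loop lives – backend only, or (as with the Vercel AI SDK) partly in the frontend, where tool results can drive the UI directly.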
What Keeps Us Up at Night
There are so many frameworks, libraries, and tools to choose from, and so many prototypes to build – and choosing the approach is just the beginning…
Plus, there’s the cost and privacy concern with sending data to hosted LLMs. We have to balance providing enough data for the model to work well without overloading it with unnecessary info.
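One way to attack both concerns at once is to redact sensitive fields and cap the payload before anything leaves our side. A rough sketch of the idea – the field names and the ~4-characters-per-token estimate are assumptions for illustration, not our actual policy:

```typescript
// Sketch: strip sensitive fields and cap the context sent to a hosted
// LLM. SENSITIVE_KEYS and the token heuristic are illustrative only.

const SENSITIVE_KEYS = ["customerName", "email", "address"];

// Drop fields we never want to leave our infrastructure.
function redact(record: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(record).filter(([key]) => !SENSITIVE_KEYS.includes(key))
  );
}

// Rough token estimate: ~4 characters per token for English-like text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Add records to the context until the token budget would be exceeded.
function packContext(
  records: Record<string, unknown>[],
  budget: number
): string {
  const lines: string[] = [];
  let used = 0;
  for (const record of records) {
    const line = JSON.stringify(redact(record));
    const cost = estimateTokens(line);
    if (used + cost > budget) break;
    lines.push(line);
    used += cost;
  }
  return lines.join("\n");
}
```

Even a crude budget like this forces the real question into the open: which records does the model actually need, and in what order should they be dropped?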
And then there’s the impact on our team. Some frameworks need skills we don’t yet have, which could mean hiring new talent. Is that the direction we want, or do we stick with tools our team already knows? This is more than a tech decision—it’s a strategic one.
Final Thoughts
At RELEX, we’re not exploring AI just because it’s trendy. We’re looking at how LLMs can genuinely improve our customers’ experience – forecasting sales, managing logistics, replacing complex UIs, and enabling smarter decisions.
Sure, there are challenges, from framework choices to data security, but that’s what makes this work so rewarding. We’re constantly adapting, testing new tools, and pushing the boundaries of what’s possible.
If you’re excited by the idea of tackling these challenges and want to be part of the future of AI in retail, RELEX could be the place for you. We’re solving real problems with cutting-edge technology, and we’re only just getting started.