The Retrieval-Reasoning Tango: Charting the Rise of Agentic RAG

In the race to make large language models both factual and capable of reasoning, two camps have emerged: one focused on retrieval-augmented generation (RAG) to fight hallucination, the other on long-chain reasoning to mimic logic. But neither wins alone. This week’s survey by Li et al. (2025), Towards Agentic RAG with Deep Reasoning, delivers the most comprehensive synthesis yet of the field’s convergence point: synergized RAG–Reasoning. The question is no longer whether retrieval helps generation or reasoning helps retrieval, but how tightly the two can co-evolve, often under the coordination of autonomous agents. ...

July 15, 2025 · 3 min · Zelina

Wall Street’s New Intern: How LLMs Are Redefining Financial Intelligence

The financial industry has always prided itself on cold precision. For decades, quantitative models and spreadsheets dominated boardrooms and trading desks. But that orthodoxy is now under siege, not from another statistical breakthrough, but from something surprisingly human-like: Large Language Models (LLMs). Recent research shows a dramatic shift in how AI, particularly LLMs such as GPT-4 and LLaMA, is being integrated across financial workflows. Far from merely summarizing news or answering earnings-call questions, LLMs now orchestrate entire investment pipelines, fine-tune themselves on proprietary data, and even collaborate as autonomous financial agents. A recent survey by Mahdavi et al. (2025) categorized over 70 state-of-the-art systems into four distinct architectural frameworks, offering a lens through which to assess the future of financial AI. ...

July 4, 2025 · 4 min · Zelina