Words, Not Just Answers: Using Psycholinguistics to Test LLM Alignment

For years, evaluating large language models (LLMs) has revolved around whether they get the answer right. Multiple-choice benchmarks, logical puzzles, and coding tasks dominate the leaderboard mindset. But a new study argues we may be asking the wrong questions, or at least measuring the wrong aspects of language. Instead of judging models by their correctness, "Psycholinguistic Word Features: a New Approach for the Evaluation of LLMs Alignment with Humans" introduces a richer, more cognitively grounded evaluation: comparing how LLMs rate words on human-centric features like arousal, concreteness, and even gustatory experience. The study repurposes well-established datasets from psycholinguistics to assess whether LLMs process language in ways similar to people, not just syntactically but experientially. ...
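To make the method concrete, here is a minimal sketch of the comparison it describes: correlate model-produced word ratings with human norms. Both the norm values and the `rate_word` helper below are hypothetical stand-ins for real psycholinguistic datasets and prompts.

```python
# Sketch: compare LLM word ratings against human psycholinguistic norms.
# The numbers are invented placeholders; real studies use published norm
# datasets (e.g. concreteness or arousal ratings).
from statistics import correlation  # Python 3.10+

human_concreteness = {"apple": 5.0, "justice": 1.5, "hammer": 4.9, "hope": 1.2}

def rate_word(word: str) -> float:
    """Hypothetical helper: ask an LLM to rate a word's concreteness, 1-5."""
    canned = {"apple": 4.8, "justice": 1.9, "hammer": 4.7, "hope": 1.4}
    return canned[word]

words = list(human_concreteness)
human = [human_concreteness[w] for w in words]
model = [rate_word(w) for w in words]

# A high correlation suggests the model's "experience" of words tracks humans'.
print(f"Pearson r = {correlation(human, model):.3f}")
```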

July 1, 2025 · 4 min · Zelina

Good AI Goes Rogue: Why Intelligent Disobedience May Be the Key to Trustworthy Teammates

We expect artificial intelligence to follow orders. But what if following orders isn't always the right thing to do? In a world increasingly filled with AI teammates (chatbots, robots, digital assistants), the most helpful agents may not be the most obedient. A new paper by Reuth Mirsky argues for a shift in how we design collaborative AI: rather than blind obedience, we should build in the capacity for intelligent disobedience. ...
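As a toy illustration of the design shift (not Mirsky's actual framework), an agent can vet each order against explicit safety predicates and refuse with an explanation rather than comply blindly. The checks below are invented examples:

```python
# Toy sketch of "intelligent disobedience": comply only when an order passes
# explicit safety checks; otherwise refuse and say why. The predicates here
# are invented examples, not the paper's formalism.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Order:
    action: str
    target: str

SAFETY_CHECKS: list[tuple[str, Callable[[Order], bool]]] = [
    ("no harm to humans", lambda o: not (o.action == "push" and o.target == "human")),
    ("stay in bounds", lambda o: o.target != "restricted_zone"),
]

def execute_or_refuse(order: Order) -> str:
    for rule, passes in SAFETY_CHECKS:
        if not passes(order):
            return f"Refusing '{order.action} {order.target}': violates '{rule}'."
    return f"Executing '{order.action} {order.target}'."

print(execute_or_refuse(Order("push", "human")))    # refuses
print(execute_or_refuse(Order("fetch", "coffee")))  # complies
```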

June 30, 2025 · 3 min · Zelina

Inked in the Code: Can Watermarks Save LLMs from Deepfake Dystopia?

In a digital world flooded with AI-generated content, the question isn't if we need to trace origins; it's how we can do it without breaking everything else. BiMark, a new watermarking framework for large language models (LLMs), may have just offered the first truly practical answer. Let's unpack why it matters and what makes BiMark different. The Triad of Trade-offs in LLM Watermarking. Watermarking AI-generated text is like threading a needle while juggling three balls: ...
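BiMark's own mechanism isn't reproduced here; as background, this is a minimal sketch of the classic green-list scheme (Kirchenbauer et al.) that most LLM watermarking work builds on: a keyed hash partitions the vocabulary per context, generation is biased toward "green" tokens, and a detector later tests for that bias.

```python
# Minimal green-list watermarking sketch (background technique, not BiMark).
# A keyed hash of the previous token splits the vocabulary in half; a
# watermarking generator prefers "green" tokens, and detection measures
# the green fraction of a text.
import hashlib

KEY = b"secret-watermark-key"

def is_green(prev_token: str, token: str) -> bool:
    digest = hashlib.sha256(KEY + prev_token.encode() + token.encode()).digest()
    return digest[0] % 2 == 0  # ~half the vocab is green for each context

def green_fraction(tokens: list[str]) -> float:
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Unwatermarked text scores near 0.5; watermarked text scores well above.
text = "the tide rises and the tide falls".split()
print(f"green fraction: {green_fraction(text):.2f}")
```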

June 30, 2025 · 3 min · Zelina

When Text Doesn’t Help: Rethinking Multimodality in Forecasting

The Multimodal Mirage. In recent years, there's been growing enthusiasm around combining unstructured text with time series data. The promise? Textual context (say, clinical notes, weather reports, or market news) might inject rich insights into otherwise pattern-driven numerical streams. With powerful vision-language and text-generation models dominating headlines, it's only natural to wonder: Could Large Language Models (LLMs) revolutionize time series forecasting too? A new paper from AWS researchers provides the first large-scale empirical answer. The verdict? The benefits of multimodality are far from guaranteed. In fact, across 14 datasets spanning domains from agriculture to healthcare, incorporating text often fails to outperform well-tuned unimodal baselines. Multimodal forecasting, it turns out, is more of a conditional advantage than a universal one. ...
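As a schematic of the comparison the study runs at scale, the snippet below asks whether adding text-derived features actually lowers forecast error against a tuned unimodal baseline. The data and both "forecasts" are invented stand-ins, not the paper's models or datasets.

```python
# Toy version of the study's core question: does a text-augmented forecast
# beat a well-tuned unimodal one? Data and "models" are invented stand-ins.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.normal(size=200)

def mae(pred: np.ndarray) -> float:
    return float(np.abs(pred - y_true).mean())

numeric_only = y_true + rng.normal(scale=0.30, size=200)  # unimodal forecast
with_text = y_true + rng.normal(scale=0.29, size=200)     # text-augmented

gain = mae(numeric_only) - mae(with_text)
print(f"MAE gain from adding text: {gain:+.4f}")  # often negligible, per the paper
```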

June 30, 2025 · 3 min · Zelina

Catalysts of Thought: How LLM Agents are Reinventing Chemical Process Optimization

In the world of chemical engineering, optimization is both a science and an art. But when operating conditions are ambiguous or constraints are missing, even the most robust solvers stumble. Enter the next-gen solution: a team of LLM agents that not only understand the problem but define it. When Optimization Meets Ambiguity. Traditional solvers like IPOPT or grid search work well, provided you already know the boundaries. In real-world industrial setups, however, engineers often have to guess the feasible ranges based on heuristics and fragmented documentation. This paper from Carnegie Mellon University breaks the mold by deploying AutoGen-based multi-agent LLMs that generate constraints, propose solutions, validate them, and run simulations, all with minimal human input. ...
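The paper implements this with AutoGen; the sketch below strips the framework away and shows the bare loop it describes, with plain functions standing in for the LLM agents. Every name and number here is a hypothetical placeholder.

```python
# Bare-bones version of the agent loop described above: one agent proposes
# constraints, another proposes operating points, a validator checks them,
# and a simulator scores them. Plain functions stand in for LLM agents.

def generate_constraints(spec: str) -> dict:
    # An LLM would infer plausible bounds from fragmented documentation.
    return {"temperature": (300.0, 450.0), "pressure": (1.0, 5.0)}

def propose_solution(constraints: dict) -> dict:
    return {k: (lo + hi) / 2 for k, (lo, hi) in constraints.items()}

def validate(point: dict, constraints: dict) -> bool:
    return all(lo <= point[k] <= hi for k, (lo, hi) in constraints.items())

def simulate(point: dict) -> float:
    # Stand-in objective; a real setup would call a process simulator.
    return -(point["temperature"] - 380.0) ** 2 - (point["pressure"] - 2.5) ** 2

constraints = generate_constraints("reactor spec, partially documented")
candidate = propose_solution(constraints)
if validate(candidate, constraints):
    print(f"candidate {candidate} scored {simulate(candidate):.2f}")
```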

June 27, 2025 · 4 min · Zelina

Playing with Strangers: A New Benchmark for Ad-Hoc Human-AI Teamwork

Human-AI collaboration is easy to romanticize in theory but hard to operationalize in practice. While reinforcement learning agents have dazzled us in games like Go and StarCraft, they often stumble when asked to cooperate with humans under real-world constraints: imperfect information, ambiguous signals, and no chance to train together beforehand. That’s the realm of ad-hoc teamwork—and the latest paper from Oxford’s FLAIR lab introduces a critical step forward. The Ad-Hoc Human-AI Coordination Challenge (AH2AC2) tackles this problem by leveraging Hanabi, a cooperative card game infamous among AI researchers for its subtle, communication-constrained dynamics. Unlike chess, Hanabi demands theory of mind—inferring what your teammate knows and intends based on sparse, indirect cues. It’s a Turing Test of collaboration. ...
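To make "ad-hoc" concrete: operationally it means an agent must choose actions from public observations alone, with no joint training phase and no access to its partner's policy. The interface below is an illustrative sketch, not the AH2AC2 API.

```python
# Illustrative interface for an ad-hoc teammate: the agent sees only public
# information and gets no joint training with its partner. All names here
# are illustrative, not the benchmark's actual API.
from abc import ABC, abstractmethod

class AdHocAgent(ABC):
    @abstractmethod
    def act(self, observation: dict, legal_moves: list[str]) -> str:
        """Choose a move given only public information."""

class CautiousHintBot(AdHocAgent):
    """Toy policy: give a hint when possible, otherwise discard."""
    def act(self, observation: dict, legal_moves: list[str]) -> str:
        hints = [m for m in legal_moves if m.startswith("hint")]
        return hints[0] if hints else "discard_oldest"

bot = CautiousHintBot()
print(bot.act({"fireworks": {"red": 1}}, ["hint_red", "discard_oldest"]))
```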

June 27, 2025 · 4 min · Zelina

Mind Games for Machines: How Decrypto Reveals the Hidden Gaps in AI Reasoning

As large language models (LLMs) evolve from mere tools into interactive agents, they are increasingly expected to operate in multi-agent environments—collaborating, competing, and communicating not just with humans but with each other. But can they understand the beliefs, intentions, and misunderstandings of others? Welcome to the world of Theory of Mind (ToM)—and the cleverest AI benchmark you haven’t heard of: Decrypto. Cracking the Code: What is Decrypto? Inspired by the award-winning board game of the same name, Decrypto is a three-player game of secret codes and subtle hints, reimagined as a benchmark to test LLMs’ ability to coordinate and deceive. Each game features: ...
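For readers who don't know the board game, here is a simplified toy round (a loose reading of the game's mechanics, not the benchmark's code): an encoder hints at a secret ordering of hidden keywords, and both a teammate and an eavesdropper try to recover it.

```python
# Simplified Decrypto round (toy reading of the board-game rules, not the
# benchmark implementation): the encoder hints at a secret permutation of
# keyword indices; the decoder and an eavesdropper both try to recover it.
import random

KEYWORDS = ["ocean", "clock", "piano", "tiger"]  # known only to the team

def draw_code() -> tuple[int, ...]:
    return tuple(random.sample(range(4), k=3))  # e.g. (2, 0, 3)

def give_hints(code: tuple[int, ...]) -> list[str]:
    # A real encoder picks words evocative of each keyword but opaque to
    # the eavesdropper, who sees only the hint history across rounds.
    associations = {"ocean": "tide", "clock": "tick",
                    "piano": "keys", "tiger": "stripes"}
    return [associations[KEYWORDS[i]] for i in code]

code = draw_code()
print("secret code:", code)
print("public hints:", give_hints(code))
```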

June 26, 2025 · 4 min · Zelina

Unsafe at Any Bit: Patching the Safety Gaps in Quantized LLMs

When deploying large language models (LLMs) on mobile devices, edge servers, or any resource-constrained environment, quantization is the go-to trick. It slashes memory and compute costs by reducing model precision from 16-bit or 32-bit floating point to 8-bit or even 4-bit integers. But the efficiency comes at a cost: quantization can quietly erode the safety guarantees of well-aligned models, making them vulnerable to adversarial prompts and jailbreak attacks. ...
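For context on how routine the precision drop is, here is the standard PyTorch dynamic-quantization call. This illustrates the mechanism the paper studies, not its safety-patching technique.

```python
# Standard PyTorch post-training dynamic quantization: Linear layers are
# converted from float32 to int8. This is the routine efficiency step the
# paper warns can silently weaken safety alignment.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Same architecture, same prompts; weights are now 8-bit integers.
# Safety behavior should be re-evaluated after this step, not assumed.
print(quantized)
```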

June 26, 2025 · 3 min · Zelina

Anchored Thinking: Mapping the Inner Compass of Reasoning LLMs

In the world of large language models (LLMs), answers often emerge from an intricate internal dialogue. But what if we could locate the few sentences within that stream of thoughts that disproportionately steer the outcome—like anchors stabilizing a drifting ship? That’s exactly what Paul Bogdan, Uzay Macar, Neel Nanda, and Arthur Conmy aim to do in their new work, “Thought Anchors: Which LLM Reasoning Steps Matter?”. This study presents an ambitious trifecta of methods to trace the true influencers of LLM reasoning. ...
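One of the paper's methods measures a sentence's counterfactual importance by resampling the reasoning from that point and checking how often the final answer changes. Below is a schematic of that idea with stubbed model calls; `resample_from` and `final_answer` are invented stand-ins for real LLM queries.

```python
# Schematic of counterfactual importance for one reasoning step: resample
# the chain of thought from step i many times and measure how often the
# final answer flips. Both helpers are stand-ins for real model calls.
import random

def resample_from(steps: list[str], i: int) -> list[str]:
    return steps[:i] + [f"(alternative step {i})"] + steps[i + 1:]

def final_answer(steps: list[str]) -> str:
    # Stand-in: an altered step flips the toy answer 30% of the time.
    altered = any(s.startswith("(alternative") for s in steps)
    return "41" if altered and random.random() < 0.3 else "42"

def importance(steps: list[str], i: int, n: int = 100) -> float:
    base = final_answer(steps)
    flips = sum(final_answer(resample_from(steps, i)) != base for _ in range(n))
    return flips / n  # high value = this step anchors the outcome

steps = ["restate problem", "set up equation", "solve", "state answer"]
print({i: round(importance(steps, i), 2) for i in range(len(steps))})
```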

June 25, 2025 · 3 min · Zelina

The Joy of Many Minds: How JoyAgents-R1 Unleashes the Power of Multi-LLM Reinforcement Learning

When it comes to language model agents, more minds may not always mean merrier results. Multi-agent reinforcement learning (MARL) promises a flexible path for decomposing and solving complex tasks, but coordinating multiple large language models (LLMs) remains riddled with instability, inefficiency, and memory fragmentation. Enter JoyAgents-R1, a novel framework that proposes an elegant, scalable solution for jointly evolving heterogeneous LLM agents using Group Relative Policy Optimization (GRPO). Developed by researchers at JD.com, JoyAgents-R1 combines memory evolution, policy optimization, and clever sampling strategies to form a resilient multi-agent architecture capable of matching the performance of larger SOTA models with far fewer parameters. ...
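GRPO's core move is easy to state: rather than learning a value baseline, each sampled response is scored against its own group's reward statistics. A minimal sketch of that advantage computation follows (the normalization is the standard GRPO recipe; the rewards are invented for illustration).

```python
# Core of Group Relative Policy Optimization (GRPO): advantages are computed
# relative to a group of sampled responses, replacing a learned value model.
# Rewards below are invented for illustration.
import numpy as np

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """A_i = (r_i - mean(group)) / (std(group) + eps)."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

group_rewards = np.array([0.2, 0.9, 0.4, 0.7])  # one group of sampled answers
print(grpo_advantages(group_rewards))
# Positive advantage -> reinforce that response; negative -> suppress it.
```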

June 25, 2025 · 3 min · Zelina