
MoE Money, MoE Problems? FinCast Bets Big on Foundation Models for Markets

TL;DR: FinCast is a 1B‑parameter, decoder‑only Transformer trained on more than 20B financial time points with a token‑level sparse Mixture‑of‑Experts (MoE), learnable frequency embeddings, and a Point‑Quantile (PQ) loss that combines Huber point forecasts with quantile targets and a trend‑consistency term. In zero‑shot benchmarks across crypto, FX, stocks, and futures, it reports roughly 20% lower MSE than leading generic time‑series foundation models, and it beats supervised SOTA models even without fine‑tuning, then widens the gap further with a light fine‑tune. If you build risk or execution systems, the interesting part isn’t just the accuracy numbers; it’s the shape of the predictions (tail‑aware, regime‑sensitive) and the deployment economics (conditional compute via sparse MoE plus patching). ...
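To make the loss composition concrete, here is a minimal sketch of a PQ-style objective in PyTorch: a Huber point term, a pinball (quantile) term, and a trend-consistency penalty. The weights, quantile levels, and the exact form of the trend term below are illustrative assumptions, not FinCast's published formulation.

```python
# Minimal sketch of a Point-Quantile (PQ) style loss: Huber point term +
# pinball (quantile) term + trend-consistency penalty. Weights, quantile
# levels, and the trend term are illustrative, not FinCast's definition.
import torch
import torch.nn.functional as F

def pq_loss(point_pred, quantile_pred, target, quantiles=(0.1, 0.5, 0.9),
            w_point=1.0, w_quant=1.0, w_trend=0.1):
    """point_pred: (B, H); quantile_pred: (B, H, Q); target: (B, H)."""
    # Robust point-forecast term (Huber is less sensitive to fat-tailed errors than MSE).
    point_term = F.huber_loss(point_pred, target, delta=1.0)

    # Pinball (quantile) loss averaged over the chosen quantile levels.
    q = torch.tensor(quantiles, device=target.device).view(1, 1, -1)
    err = target.unsqueeze(-1) - quantile_pred                  # (B, H, Q)
    quant_term = torch.maximum(q * err, (q - 1.0) * err).mean()

    # Trend-consistency penalty: nonzero only when the predicted and realized
    # step-to-step moves point in opposite directions.
    d_pred = point_pred[:, 1:] - point_pred[:, :-1]
    d_true = target[:, 1:] - target[:, :-1]
    trend_term = F.relu(-d_pred * d_true).mean()

    return w_point * point_term + w_quant * quant_term + w_trend * trend_term
```

The design intuition is that the Huber term keeps point forecasts robust to fat-tailed moves, while the pinball term forces the model to commit to calibrated upper and lower bands, which is what makes the output usable for risk sizing rather than just point accuracy.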

August 30, 2025 · 5 min · Zelina

Fast & Curious: How ‘Speed-First’ LLM Architectures Change the Build vs. Buy Math

Executive takeaway: Efficient LLM architectures aren’t just academic; they reset the economics of AI products by cutting context costs, shrinking GPUs per QPS, and opening new form factors, from phone-side agents to ultra-cheap serverless endpoints. The winning strategy is hybrid by default, KV-light, and latency-budgeted. Why this matters now: If you ship with AI, your margins live and die by three levers: sequence length, active parameters per token, and memory traffic. Classical Transformers lose on all three. The latest wave of “speed-first” designs offers a menu of swaps that trade negligible accuracy for step-change gains in throughput, tail latency, and $ per million tokens. This survey gives us a clean taxonomy and, more importantly, the design intent behind each family: compress the compute (linear & sparse sequence modeling), route the compute (MoE), restructure the compute (efficient full attention), and rethink the decoder (diffusion LLMs). ...
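As a concrete illustration of the "route the compute" family, here is a minimal token-level top-k sparse MoE layer in PyTorch; only the selected experts run for each token, which is exactly the active-parameters-per-token lever. The expert count, dimensions, and top_k are illustrative assumptions, and production systems add load-balancing losses, capacity limits, and fused expert kernels.

```python
# Minimal sketch of "route the compute": a token-level top-k sparse MoE layer.
# Dimensions, expert count, and top_k are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                       # x: (tokens, d_model)
        logits = self.router(x)                 # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = idx[:, slot] == e        # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out                              # only top_k experts run per token

moe = SparseMoE()
print(moe(torch.randn(16, 512)).shape)          # torch.Size([16, 512])
```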

August 16, 2025 · 5 min · Zelina

Reasoning at Scale: How DeepSeek Redefines the LLM Playbook

If GPT-4 was the apex of pretraining, DeepSeek might be the blueprint for what comes next. Released in two families—DeepSeek-V3 and DeepSeek-R1—this Chinese open-source model series isn’t just catching up to frontier LLMs. It’s reshaping the paradigm entirely. By sidestepping traditional supervised fine-tuning in favor of reinforcement learning (RL), and coupling it with memory-efficient innovations like Multi-head Latent Attention (MLA) and cost-efficient training techniques like FP8 mixed precision and fine-grained MoE, DeepSeek models demonstrate how strategic architectural bets can outpace brute-force scale. ...
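For intuition on why MLA saves memory, here is a rough sketch of the low-rank KV-compression idea behind it: cache one small latent per token and up-project to per-head keys and values at attention time, so the KV cache scales with the latent width rather than with heads times head dimension. All names and dimensions are illustrative, and DeepSeek's actual MLA additionally decouples rotary position embeddings, which this sketch omits.

```python
# Rough sketch of the low-rank KV-compression idea behind MLA: cache one small
# latent per token, then up-project to per-head K/V at attention time.
# Dimensions are illustrative; RoPE handling and causal masking are omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentKVAttention(nn.Module):
    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)      # this small latent is what gets cached
        self.k_up = nn.Linear(d_latent, d_model)
        self.v_up = nn.Linear(d_latent, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, latent_cache=None):             # x: (B, T, d_model)
        B, T, _ = x.shape
        latent = self.kv_down(x)                          # (B, T, d_latent)
        if latent_cache is not None:                      # extend the compressed KV cache
            latent = torch.cat([latent_cache, latent], dim=1)
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(latent).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        attn = F.scaled_dot_product_attention(q, k, v)    # (B, heads, T, d_head)
        out = attn.transpose(1, 2).reshape(B, T, -1)
        return self.out(out), latent                      # cache the small latent, not full K/V
```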

July 15, 2025 · 3 min · Zelina

LLaMA 4 Maverick 17B 128E (Original)

Meta’s experimental ultra-sparse MoE model with 128 experts, designed to explore efficient scaling and routing strategies for future LLaMA architectures.

1 min

LLaMA 4 Scout 17B 16E

Meta’s experimental LLaMA 4-series MoE model with 17 billion active parameters and 16 experts, designed to explore sparse routing and scaling strategies.

1 min

LLaMA 4 Scout 17B Instruct (Unsloth, 4-bit)

A 4-bit quantized, instruction-tuned variant of Meta’s LLaMA 4 Scout MoE model, optimized by Unsloth for efficient fine-tuning and deployment.

1 min

Mixtral 8x7B Instruct v0.1

A powerful sparse Mixture-of-Experts (MoE) instruction-tuned language model by Mistral AI, combining efficiency and performance for chat and task-oriented generation.

1 min