Most real-time financial systems rely on deep stacks of infrastructure, from custom APIs to cloud VMs and high-frequency data ingestion pipelines. But what if a single developer could deploy a daily-updating, AI-powered stock analysis engine without a single server?
That’s exactly what Taniv Ashraf set out to do — and accomplished — in his recent case study on a fully serverless architecture using Google Gemini, GitHub Actions, and static web hosting. The result is an elegantly simple yet conceptually powerful demonstration of how qualitative LLM analysis and automation tools can replace entire categories of financial tooling — if wielded strategically.
🧠 Not Just Prediction — Interpretation
Unlike traditional ML trading bots, which often attempt to forecast price movements through time-series modeling (ARIMA, LSTM, etc.), this system aims for qualitative insight generation:
- Daily stock data is pulled from `yfinance`,
- News headlines are ingested from NewsAPI,
- A Python script packages both into a prompt,
- And the Gemini API responds with structured JSON commentary — e.g., “Cautious outlook for $TSLA due to declining delivery numbers and analyst downgrades.”
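The steps above can be sketched in a few lines of Python. Everything here is illustrative: the function names, the prompt wording, and the JSON shape of the Gemini reply are assumptions for the sake of the sketch, not the author's actual code.

```python
import json


def build_prompt(ticker, latest_close, pct_change, headlines):
    """Package price data and headlines into a single prompt for the LLM.

    The prompt wording and requested JSON schema are hypothetical.
    """
    headline_text = "\n".join(f"- {h}" for h in headlines)
    return (
        f"You are a financial analyst. Stock: {ticker}. "
        f"Latest close: {latest_close:.2f} ({pct_change:+.2f}% on the day).\n"
        f"Recent headlines:\n{headline_text}\n"
        'Respond with JSON: {"ticker": ..., "outlook": ..., "commentary": ...}'
    )


def parse_analysis(raw_text):
    """Parse the model's JSON reply, stripping an optional ```json fence."""
    cleaned = raw_text.strip().removeprefix("```json").removesuffix("```").strip()
    return json.loads(cleaned)


# Example with stand-in values (no live API calls):
prompt = build_prompt("TSLA", 214.11, -2.30, ["Deliveries decline", "Analyst downgrade"])
reply = '```json\n{"ticker": "TSLA", "outlook": "cautious", "commentary": "..."}\n```'
analysis = parse_analysis(reply)
```

The fence-stripping step matters in practice: LLMs frequently wrap JSON in Markdown code fences even when asked for raw JSON.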
The result? A continuously updated stream of contextual financial sentiment that reads like a Bloomberg analyst’s note, but is generated and published entirely through automation.
🛠️ GitHub Actions: From CI/CD to Cloud OS
The architecture is deceptively simple:
| Component | Tool Used | Purpose |
|---|---|---|
| Data Fetching | yfinance + NewsAPI | Pull latest prices and headlines |
| AI Reasoning | Google Gemini API | Generate human-like financial assessments |
| Scheduling & Compute | GitHub Actions | Run the daily job via a cron-style workflow |
| Storage | GitHub repo (JSON) | Store updated predictions for frontend access |
| Frontend | Static HTML + JS | Fetch and render the JSON analysis live |
Instead of building a Flask app or deploying to Vercel, the backend logic lives entirely within a scheduled GitHub Action. That makes GitHub not just a code host, but the runtime environment itself.
This zero-cost, zero-maintenance stack challenges the assumption that building intelligent systems requires infrastructure orchestration. In fact, it hints at a new pattern: Repos as runtime, APIs as cognition, Actions as automation.
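A workflow of this shape might look like the following. The file path, script name, cron time, and secret names are illustrative assumptions; only the overall pattern (scheduled run, write permission, commit back to the repo) reflects the architecture described here.

```yaml
# .github/workflows/daily-analysis.yml (illustrative sketch)
name: daily-analysis
on:
  schedule:
    - cron: "30 13 * * 1-5"   # weekdays, in UTC
  workflow_dispatch: {}        # allow manual runs for debugging
permissions:
  contents: write              # default token is read-only; pushing JSON back needs this
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install yfinance requests
      - run: python analyze.py   # hypothetical script; writes e.g. data/latest.json
        env:
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
          NEWSAPI_KEY: ${{ secrets.NEWSAPI_KEY }}
      - run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add data/
          git commit -m "Update daily analysis" || echo "No changes"
          git push
```

The `permissions: contents: write` block is the kind of fix discussed in the debugging section below: the default `GITHUB_TOKEN` cannot push commits unless the workflow requests write access explicitly.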
🪲 Debugging: The Invisible Workload of AI Automation
The real story, however, lies in the debugging journey. While the final architecture is clean, the path to it was anything but:
- Serialization Failures: values flowing through the pipeline had to be cast from Pandas/NumPy types to standard JSON-serializable Python types — a reminder that LLM pipelines demand strict interface hygiene.
- Permission Errors: GitHub Actions initially failed to push back to the repo due to default read-only tokens. Fixing required custom YAML permission blocks.
- The Ghost Bug: An environment-level GitHub error refused to resolve standard actions. Solution? Recreate the entire repo from scratch — not a code fix, but a platform reset.
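The serialization failure above is a classic: `json.dumps` refuses NumPy scalar types (`np.int64`, `np.float64`) that Pandas returns by default. A minimal fix, sketched here with stand-in values, is a converter passed via the `default` hook:

```python
import json

import numpy as np


def to_builtin(obj):
    """Convert NumPy scalars and arrays to plain Python types for json.dumps."""
    if isinstance(obj, np.integer):
        return int(obj)
    if isinstance(obj, np.floating):
        return float(obj)
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    raise TypeError(f"Not JSON serializable: {type(obj)}")


# Stand-in payload mimicking what yfinance/Pandas would produce:
payload = {"ticker": "TSLA", "close": np.float64(214.11), "volume": np.int64(98_000_000)}
serialized = json.dumps(payload, default=to_builtin)
```

Without the `default=to_builtin` argument, the same `json.dumps` call raises `TypeError`.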
Each issue underscores a vital lesson: in AI-driven software, the complexity often migrates from code logic to platform coordination. And solving it requires a mindset of debugging systems, not just syntax.
👨‍💻 Human-AI Collaboration in the Loop
This wasn’t a solo coding project in the traditional sense. The author explicitly states that Gemini played the role of primary code executor, while he acted as architect, debugger, and prompt engineer.
This makes the system not just a tool built using AI, but a case study in working with AI:
- LLMs were tasked with generating scripts, fixing bugs, and refactoring prompts.
- The human steered iterations, provided architectural direction, and solved integration deadlocks.
This collaboration pattern — AI writes, human supervises — is emerging as the dominant loop in infrastructureless software design.
🧭 Where This Pattern Leads
While the project is modest in scope, it gestures toward something bigger: a paradigm where qualitative analysis becomes automatable, and software becomes deployable by intent, not by infrastructure.
Future extensions could include:
- Fundamental financial metrics (e.g., earnings, P/E ratios) as additional prompt variables.
- Per-article sentiment scoring and clustering.
- Real-time validation and backtesting pipelines.
But even as it stands, this project is a proof of concept for zero-friction AI systems — particularly useful for startups, analysts, educators, or internal tools.
The next wave of FinTech won’t just be about faster models or bigger datasets. It will be about designing lightweight, intelligent agents that operate autonomously within serverless environments, guided by high-level human strategy.
Cognaptus: Automate the Present, Incubate the Future.