AYDO Structured Products Platform
A single workspace for researching, comparing, and monitoring structured products. Built so analysts stopped living in 12 browser tabs.
The first time I sat with an AYDO analyst, she had 12 tabs open. Term sheets in three counterparty portals, a Bloomberg terminal in the background, an Excel file tracking barrier levels by hand, and a Slack thread about whether an autocall had triggered on a product they'd bought six months prior. The data existed. None of it was in one place.
That's the gap AYDO was built to close. Research a structured product, compare it against others on a like-for-like basis, then monitor the actual lifecycle once it's in the book.
What was hard about it
The comparison problem looks like a data problem. It's actually a normalization problem. A reverse convertible from one issuer looks structurally similar to one from another, but the barrier conventions, observation dates, and coupon mechanics differ in ways that matter. Getting two products on the same screen with truly comparable numbers required hand-modeling each product family, and flagging explicitly when a like-for-like comparison wasn't safe.
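A minimal sketch of what that normalization looks like, using barriers as the example. The types and field names here are hypothetical, invented for illustration: one issuer quotes a barrier as a percentage of the initial fixing, another as an absolute level on the underlying, and both have to land as the same number before they can share a screen.

```go
package main

import "fmt"

// BarrierQuote is how an issuer states a barrier on a term sheet.
// Hypothetical type for illustration; real term sheets vary far more.
type BarrierQuote struct {
	Value      float64
	IsAbsolute bool    // true: quoted as an absolute underlying level
	Initial    float64 // initial fixing level of the underlying
}

// NormalizedBarrier converts any quote to a fraction of the initial
// fixing, so two products become directly comparable.
func NormalizedBarrier(q BarrierQuote) float64 {
	if q.IsAbsolute {
		return q.Value / q.Initial
	}
	return q.Value // already a fraction of initial
}

func main() {
	// Issuer A: "barrier at 60% of initial"
	a := BarrierQuote{Value: 0.60, Initial: 7000}
	// Issuer B: "barrier at 4200" on the same underlying
	b := BarrierQuote{Value: 4200, IsAbsolute: true, Initial: 7000}
	fmt.Println(NormalizedBarrier(a), NormalizedBarrier(b)) // both 0.6
}
```

Barriers are the easy case; observation schedules and coupon mechanics need the same treatment, which is why each product family was modeled by hand rather than mapped generically.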
I built the Go backend around a small set of clean domain boundaries: catalog, valuation, monitoring. Each surfaces a typed REST contract. The frontend is React with one set of building blocks per product family (autocalls, reverse convertibles, capital-protected notes), so adding the fourth family didn't require redrawing the screen.
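To make "typed REST contract" concrete, here is a sketch of what one boundary might look like, using the catalog as the example. The `Product` shape, route, and in-memory store are all assumptions for illustration, not the production code; valuation and monitoring would expose analogous handlers behind their own interfaces.

```go
package main

import (
	"encoding/json"
	"net/http"
)

// Product is the catalog's wire type: one typed contract per boundary.
// Hypothetical fields for illustration.
type Product struct {
	ISIN   string `json:"isin"`
	Family string `json:"family"` // "autocall", "reverse_convertible", ...
	Issuer string `json:"issuer"`
}

// Catalog is the boundary the frontend codes against.
type Catalog interface {
	List() []Product
}

type memCatalog struct{ items []Product }

func (c memCatalog) List() []Product { return c.items }

// listHandler serializes the typed contract as JSON.
func listHandler(c Catalog) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(c.List())
	}
}

func main() {
	cat := memCatalog{items: []Product{
		{ISIN: "XS0000000000", Family: "autocall", Issuer: "DemoBank"},
	}}
	http.Handle("/catalog/products", listHandler(cat))
	// http.ListenAndServe(":8080", nil) // left commented so the sketch exits
}
```

Keeping each boundary behind a small interface like `Catalog` is what let the frontend's per-family building blocks stay decoupled from storage.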
What we underestimated
Scheduling. Structured products have observation dates that matter to the cent. A coupon fixing missed by a day can change a client's outcome. We started with a vanilla cron, hit our first DST rollover, and learned that "every day at market open" is not a thing crons handle well across European calendars. We rewrote scheduling on top of an explicit market-calendar table. Less elegant. Much more correct.
PostgreSQL on RDS held up fine. S3 for term-sheet PDFs and lifecycle snapshots. Nothing exotic on the infra side; the complexity was in the domain.
Where it landed
Research work that used to take an hour collapsed to a few minutes for the products we modeled cleanly. Monitoring went from spreadsheet-driven to alert-driven. Most of all, analysts stopped owning the data plumbing.
What's still open: deeper integration with counterparty portals. Today we ingest term sheets and observations through normalized feeds. Pulling live observation data directly from each counterparty would close the last manual loop. That's the next thing I'd build.