From neural network research before anyone cared to production systems handling millions of conversations: 25 years in AI. I've seen what works and what doesn't.
Canigami is a consultancy focused on product-led AI development — building systems that work in production, not just in pitch decks.
We bring a long view to a field prone to hype cycles, and we focus on the unglamorous work that makes AI actually useful: robust architecture, systematic testing, and solving for failure points before they reach your customers.
We work selectively with organisations building AI applications that matter.
Qamir Hussain has worked in AI for over 25 years. He started in neural network research at Dublin City University in 1999 and was a visiting scholar at Stanford's Center for the Study of Language and Information, building voice-controlled accessibility systems before the technology was fashionable.
More recently, he spent five years leading AI at Webio (now Aryza Engage), building the company's AI ecosystem from initial prototype to production — intent classification, entity extraction, conversation summarisation, and document processing systems handling sensitive financial conversations at scale.
After more than two decades in AI, Qamir has watched the field transform several times over. Now he writes and speaks on AI development, scientific thinking, and building robust systems — with a focus on AI that amplifies human capability rather than replacing it.
The journey from AlphaGo's Move 37 to AlphaFold's Nobel Prize — and what it tells us about how AI actually advances. On original thinking, scientific rigour, and what developers can learn from how scientists use AI.
Read essay
I speak on AI development, the history and future of the field, and what it takes to build systems that work in production.
The journey from AlphaGo's Move 37 to AlphaFold's Nobel Prize — and what it tells us about how AI actually advances. This talk explores what developers can learn from how scientists use AI, and why rigour beats hype every time.
AI systems increasingly handle sensitive situations — financial stress, personal crises, vulnerable users. What responsibilities come with building these systems? A talk about empathy, integrity, and what "responsible AI" means in practice, not just in press releases.
Lessons from five years building production AI in fintech. Modular architecture, managing the shift from deterministic to stochastic systems, and why you shouldn't be afraid to build your own AI ecosystem. Practical, grounded, drawn from real deployments.
A technical look at how LLMs actually work — transformer architecture, context and memory challenges, the evolution from code completion to agentic systems. For teams building with GenAI who want to understand what's happening beneath the abstractions.
My primary focus is leading AI at Aryza Engage. Alongside that, I selectively take on:
If you're working on something interesting, get in touch.
Get in touch