Welcome

Sajid Siraj

The algorithm said no.

No loan. No place on the shortlist. No explanation, just a verdict from a model that had processed ten thousand variables in milliseconds and arrived at a conclusion it couldn’t justify. You couldn’t challenge it. You couldn’t understand it. You didn’t even know if it was fair.

Most people walk away. Some build systems to fight back.

I study how decisions get made: by people, by machines, and in the complicated middle ground where both are involved. Multi-criteria decision analysis. Explainable AI. Fairness in machine learning. The common thread isn’t the method; it’s the question: when a system reaches a conclusion, can you trust it, interrogate it, and hold it accountable?

I’m Sajid Siraj; my route here wasn’t straight. Engineering in Pakistan, software design and development in the telecom industry, computation at Manchester. Then a PhD, then academia, then the slow realisation that the most interesting problems weren’t technical at all: they were about what happens when technical systems touch human lives.

Here’s the part people don’t expect.

Making AI explainable isn’t just an engineering problem. It’s a political one. Who gets to understand the system? Whose values go into the criteria? When a model optimises, who does it optimise for? These questions have started showing up in courtrooms, in elections, in climate negotiations. I’ve been tracking that last one, building tools to follow what governments actually commit to at events like COP30.

That’s where this ends up: not just papers, but stakes.

[email protected]  |  ORCID  |  LinkedIn
