When social scientists engage in public debates on AI, they often do so because they’re concerned. Bias, degraded work quality and opaque systems – these risks are real, well documented and recognised by the EU institutions.
But when critique becomes the only lens for discussing AI, something important is missed. As philosopher Lode Lauwaert recently argued, AI not only raises ethical problems but also ethical possibilities – and social scientists should take a more prominent role in highlighting them.
These possibilities don’t lie in technological optimism or the belief that AI will somehow ‘fix’ society. They lie in the fact that AI makes long-standing problems in human systems visible, measurable and thus actionable. In that sense, AI functions less as a fix and more as a mirror for organisations. What it reflects can be uncomfortable but potentially transformative.
As the EU advances on AI in the workplace and prepares its Quality Jobs Act, it should look beyond limiting AI’s risks and towards addressing the organisational dysfunctions AI reveals – dysfunctions that undermine job quality in the first place.
Bias – from moral concern to a measurable organisational problem
AI systems can be biased – sometimes severely so – and they can act on a larger scale than individual managers. But bias didn’t enter organisations with algorithms; it was already there. Hiring decisions, performance evaluations and promotion pathways have always reflected social inequalities and subjective judgement. The difference is that when bias operates through human discretion, it remains diffuse and easy to deny. When it’s embedded in data and models, it becomes explicit and – with proper accountability and governance mechanisms – actionable.
AI operates at scale and leaves traces. Training data, decision rules and outcomes across groups can be analysed and compared. Discrimination becomes quantifiable. What once triggered debates about intentions – was this manager unfair? – becomes an analysis of patterns embedded in organisational processes.
Bias stops being a vague moral accusation and becomes a technical and organisational challenge: which data are used, which fairness criteria are applied, what corrections are possible and what human oversight is required?
None of this guarantees fairness. But it does create leverage. In practice, it’s often easier to mitigate bias in an AI system than in an organisation’s culture. Algorithms can be audited, adjusted and constrained in ways that social norms rarely can.
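To make concrete what ‘quantifiable’ means here, consider a minimal sketch of one common fairness check: comparing selection rates across groups, as in the so-called four-fifths rule. The group labels, decisions and 0.8 threshold below are illustrative assumptions, not data or policy from any real system:

```python
# Hypothetical outcomes as an AI screening system might log them:
# (group label, decision), where 1 = selected, 0 = rejected.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def selection_rates(records):
    """Share of positive decisions per group."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate.
    Values below 0.8 often trigger further review (four-fifths rule)."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates(decisions)   # group A: 0.75, group B: 0.25
ratio = disparate_impact_ratio(rates)  # 0.33 – well below the 0.8 threshold
```

The same audit applied to purely human decisions would require reconstructing records that often don’t exist; with an algorithmic system, the log is a by-product of operation.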
The EU’s AI Act already classifies AI in employment as high-risk, requiring providers and deployers to ensure transparency, data governance documentation and human oversight. But for bias to become truly visible and contestable, workers and their representatives must have meaningful access to this information – and the capacity to act on it.
Algorithmic management – a yardstick for human leadership
A second contentious issue is algorithmic management. Systems that allocate tasks, monitor performance or optimise schedules are often – rightly – criticised for undermining autonomy, intensifying work and fragmenting jobs.
Yet human management isn’t by definition better than algorithmic management. Poor leadership, arbitrary evaluations and toxic work cultures existed long before algorithms. As with bias, AI can expand managerial control to a larger scale, but it also makes that control more visible.
Algorithmic systems generate data on workload, targets, pacing and breaks. They also expose who makes decisions about allocating tasks and development opportunities, and how these decisions are taken.
This creates an opportunity – organisations can finally analyse systematically how decisions about planning and task allocation relate to wellbeing, motivation and turnover. Algorithmic management forces clarity as objectives must be explicitly defined and encoded. The problem, therefore, isn’t that work is measured or optimised, but what is optimised – and who has a say in it.
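The kind of system-level analysis described above can be sketched in a few lines. The shift logs, worker identifiers and 8% break-share threshold below are invented for illustration; the point is only that once pacing and breaks are logged, an under-rested pattern becomes a query rather than an anecdote:

```python
# Illustrative shift records as an algorithmic scheduler might log them:
# (worker id, minutes worked, minutes of break). All values are invented.
shift_logs = [
    ("w1", 460, 45),
    ("w2", 480, 20),
    ("w3", 450, 50),
]

MIN_BREAK_SHARE = 0.08  # assumed policy floor: 8% of shift time as breaks

def break_share(worked, rest):
    """Fraction of total shift time spent on breaks."""
    return rest / (worked + rest)

def flag_under_rested(logs, threshold=MIN_BREAK_SHARE):
    """Workers whose logged break share falls below the threshold –
    a pattern that stays invisible without system-level records."""
    return [worker for worker, worked, rest in logs
            if break_share(worked, rest) < threshold]

flagged = flag_under_rested(shift_logs)  # only "w2" falls below 8%
```

Whether such a query is ever run – and who gets to see its results – is exactly the governance question the directive leaves open.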
The Platform Work Directive strengthens transparency and human review rights in algorithmic management – but again, such safeguards won’t matter unless workers can influence how these systems are designed in the first place. Meanwhile, non-platform workers remain largely unprotected, even as algorithmic management spreads across traditional employment.
Multi-agent systems – focus on structure, not culture
A third, more recent concern relates to complex multi-agent systems: networks of AI agents that cooperate, negotiate or compete. Research shows that such systems can fail through miscoordination, conflict or even collusion – think of pricing algorithms that drift into tacit price-fixing, or an AI agent that withholds a critical piece of information from another. These risks are serious, particularly as AI systems become more autonomous.
Once again, a comparison with human organisations is revealing. Human collaboration constantly fails. When it does, explanations tend to focus on personalities, communication styles or organisational culture. In AI systems, failure is approached differently: we don’t blame intentions but aspects of the system design, such as ambiguous prompts, misaligned agents and inadequate verification of output. When AI agents don’t work together properly, engineers analyse their role definitions, information flows and control processes.
In other words, the focus is on the structure of the system, not on its culture.
That contrast is telling. AI pushes us to treat collaboration as a design problem rather than a character flaw. This raises an uncomfortable question: what if the many failures of human collaboration are also structural, but we’ve been too quick to individualise them?
By studying how AI systems succeed or fail collectively, organisations gain new tools to rethink decision-making, accountability and task allocation in human teams.
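A minimal sketch of what such a structural analysis can look like: modelling agent handoffs as a graph and checking whether information can actually reach the agent that needs it. The agent names and flows below are hypothetical:

```python
# Hypothetical information-flow graph between AI agents: an edge means
# "this agent hands its output to that agent". Names are illustrative.
flows = {
    "intake": ["triage"],
    "triage": ["pricing"],
    "pricing": [],
    "compliance": [],  # produces a critical check that no one consumes
}

def can_reach(graph, source, target):
    """Simple graph search: does information from `source` ever arrive
    at `target` through the declared handoffs?"""
    frontier, seen = [source], {source}
    while frontier:
        node = frontier.pop()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# The compliance check never reaches pricing – a structural gap, found
# without asking about anyone's intentions or communication style.
```

An organisational chart and a meeting calendar could, in principle, be analysed the same way; we rarely do.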
A trigger for organisational redesign
The common thread is clear. AI isn’t a moral outsider corrupting organisations. It’s an amplifier and a revealer. It makes existing problems – bias, poor management, failing collaboration – more legible and harder to deny.
Organisations that treat AI merely as an efficiency tool miss its deeper value. Those that use it to rethink processes, structures and leadership may end up with better jobs and more resilient organisations. Not by replacing human responsibility with technology but by finally taking it seriously.
Perhaps this is the optimism Lauwaert points to – not a belief that AI will deliver a better world by itself but recognising that it leaves fewer places to hide. What becomes visible and measurable can’t be dismissed as anecdotal or inevitable. And it’s precisely here that AI’s ethical potential lies.
But if AI reveals what’s broken, who holds the power to see it – and to fix it? Power and responsibility cannot rest solely with AI providers or deployers. If AI makes organisational dysfunction measurable, workers and their representatives must have a meaningful role in accessing that information and in shaping how systems are designed and deployed.
Strengthening workers’ voices in AI and algorithmic management is therefore not a side issue of regulation but a precondition for turning visibility into real leverage for organisational change.
This is an extended version of a piece that was originally published in De Tijd (Dutch only).