Algorithmic management (AM) can undermine job quality when it’s built solely around efficiency – but it can also improve work when human goals and ambitions are designed into the system from the very beginning. Specifically, AM is the use of computer programmes that automate functions traditionally done by human managers – assigning tasks, monitoring and evaluating performance, and disciplining or rewarding workers.
While these tools originated in the platform economy, connecting riders to restaurants and customers, they’re now spreading across traditional workplaces. A recent survey by the European Commission’s Joint Research Centre found that one in four EU workers have their work schedules allocated automatically, and one in five their tasks as well.
This rapid diffusion has triggered an impact assessment on AM in the workplace. The Quality Jobs Roadmap, aimed at creating and maintaining quality jobs, also emphasises the need for responsible use of algorithmic management. Policymakers see a dual challenge – capturing productivity benefits while preventing any deterioration in working conditions, fairness and autonomy.
Yet Europe needs a debate that’s more precise than merely ‘efficiency vs workers’ rights’. If done right, AM could benefit workers.
In short, AM is a tool – not destiny. Its real-world effects depend on what is built into the system’s objectives and constraints, and who builds it. Instead of overregulating AM systems, policymakers should give workers a say in how they’re built.
AM has a bad reputation – but it’s not the whole story
The social science critique of AM is well grounded. Many deployments focus narrowly on operational optimisation – speed, throughput and cost control – targets chosen because they’re measurable and directly tied to efficiency. Recent reviews highlight a set of recurring risks for workers: loss of autonomy, work intensification, invasive monitoring, algorithmic opacity and the perpetuation of bias. As such, AM can erode meaningful work and the social fabric of workplaces by shifting decision-making power to systems that workers neither understand nor influence.
Yet AM can also bring opportunities – better task allocation, fewer workplace accidents and less planning disruption. It could even better match workers’ skills or preferences with their allocated tasks. The technology is inherently neutral. Treating AM as equivalent to ‘digital Taylorism’ risks misdiagnosing the problem and, in turn, overregulating tools that could be repurposed towards improving job quality.
The problem, then, isn’t optimisation, but what is optimised and who optimises it. Regulation should therefore target harmful design choices and governance gaps, not optimisation itself.
Humanising AM means designing the algorithm for workers
‘Humanising’ AM means expanding what algorithms optimise. Today, most systems encode employer-side constraints (coverage rules, predicted demand, cost ceilings). But there’s no technical reason why worker-centred parameters can’t also be part of the algorithm’s objective function. Even relatively ‘soft’ concepts can be formalised as inputs: workers can rate tasks based on their preferences or desired learning opportunities, specify preferred co-workers, set weekly hour bands or define minimum rest windows. These parameters should be gathered and periodically updated with workers’ involvement, as the sketch below illustrates.
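To make this concrete, here is a minimal sketch – all names, fields and values are hypothetical, not the schema of any existing system – of how such worker-side parameters could be captured as structured inputs rather than inferred from behavioural data:

```python
from dataclasses import dataclass, field

@dataclass
class WorkerParameters:
    """Hypothetical structured inputs gathered directly from a worker."""
    worker_id: str
    task_ratings: dict[str, int] = field(default_factory=dict)  # task -> 1-5 preference
    preferred_coworkers: set[str] = field(default_factory=set)
    weekly_hour_band: tuple[int, int] = (20, 40)  # (min, max) weekly hours
    min_rest_hours: int = 11  # minimum window between consecutive shifts

# Example: one worker's stated preferences, refreshed periodically
ana = WorkerParameters(
    worker_id="ana",
    task_ratings={"packing": 4, "inventory": 2, "dispatch": 5},
    preferred_coworkers={"ben"},
    weekly_hour_band=(25, 35),
    min_rest_hours=12,
)
```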
This isn’t a new idea. In technical terms, it means formulating AM as a multi-objective optimisation problem in which worker-centred criteria sit alongside traditional goals such as cost or productivity. Classic optimisation techniques already support this kind of design, enabling decision-makers to weigh and balance several dimensions at once rather than minimise costs alone.
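As a stylised example – the data and weights are illustrative, and a production system would use a proper ILP or constraint solver rather than brute force – a weighted-sum formulation of task assignment might look like this:

```python
from itertools import permutations

# Hypothetical data: each worker rates each task (1-5, higher = preferred)
# and each worker-task pairing carries an estimated cost to the employer.
workers = ["ana", "ben", "eva"]
tasks = ["packing", "inventory", "dispatch"]
preference = {
    ("ana", "packing"): 4, ("ana", "inventory"): 2, ("ana", "dispatch"): 5,
    ("ben", "packing"): 1, ("ben", "inventory"): 5, ("ben", "dispatch"): 3,
    ("eva", "packing"): 5, ("eva", "inventory"): 3, ("eva", "dispatch"): 2,
}
cost = {
    ("ana", "packing"): 9, ("ana", "inventory"): 7, ("ana", "dispatch"): 6,
    ("ben", "packing"): 8, ("ben", "inventory"): 5, ("ben", "dispatch"): 7,
    ("eva", "packing"): 6, ("eva", "inventory"): 8, ("eva", "dispatch"): 9,
}

# Explicit, negotiable weights: raising W_PREF trades some cost
# for higher preference satisfaction.
W_COST, W_PREF = 1.0, 2.0

def objective(assignment):
    """Weighted-sum objective over (worker, task) pairs; lower is better."""
    total_cost = sum(cost[pair] for pair in assignment)
    total_pref = sum(preference[pair] for pair in assignment)
    return W_COST * total_cost - W_PREF * total_pref

# Brute force is fine at toy scale: try every one-to-one assignment.
best = min(
    (list(zip(workers, perm)) for perm in permutations(tasks)),
    key=objective,
)
print(best, objective(best))
```

The point of the sketch is that the trade-off lives in two visible numbers, W_COST and W_PREF, which worker representatives could inspect and negotiate rather than discover after deployment.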
Across domains such as healthcare, education and business operations, existing systems have incorporated elements of human wellbeing (like work-life balance, schedule stability or preference satisfaction) into their objective functions, showing that it’s possible to improve workers’ experience while still addressing core organisational constraints.
What policy and practice need to change
Even when AM systems aren’t fully autonomous, their opacity exacerbates power asymmetries that existing labour and occupational health and safety (OSH) rules don’t adequately address. Additional safeguards are therefore needed to ensure that workers and their representatives can meaningfully challenge harmful optimisation choices. Getting there requires three mutually reinforcing shifts.
First, worker participation must move upstream into system design and adoption. Workers hold job-specific knowledge that algorithms miss; when that knowledge is absent, AM systems mis-specify what ‘good performance’ means, triggering resentment and workarounds. Participation therefore improves the model’s relevance and legitimacy, and it makes the trade-offs between cost, productivity and worker-centred objectives explicit before they’re silently hard-coded into the system.
It’s also essential because human parameters such as preferences or perceived fairness can’t be inferred from administrative data alone. Embedding workers’ interests along the entire machine learning pipeline isn’t wishful thinking. Workers can help define the problem, shape the data and key constructs, agree on fairness criteria and set monitoring and retraining rules – especially when backed by transparency requirements and technical expertise.
Second, social dialogue needs real capacity to engage with AM development, not just to react after systems are purchased. Power asymmetries mean that the employer’s self-assessment of risks is not enough. Worker representatives should be involved in evaluating risks and safeguards, including how worker-centred criteria are weighted against cost and productivity goals.
This implies targeted AI literacy and technical support for unions and works councils, so they can scrutinise data sources and performance metrics. A good example is Germany’s Works Council Modernisation Act (2021), which expands works councils’ codetermination rights over the deployment of AI systems in the workplace and gives workers’ representatives the right to bring in external technical expertise when they need it.
Third, AM developers and adopting employers need explicit job quality obligations. The loss of autonomy, increased work intensity and reduced social interaction – resulting in psychosocial strain – are already recognised as core risks. Embedding job quality awareness into procurement, model validation and post-deployment monitoring would make these risks visible earlier. This means requiring insight into what’s being optimised, mandating testing for impacts on autonomy, predictability and fairness, and tying OSH and psychosocial risk assessments into AM rollouts.
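As an illustration of what such post-deployment monitoring might look like – a hypothetical check, not an established standard – adopters could compare how well assigned tasks match stated preferences across worker groups:

```python
from statistics import mean

def preference_satisfaction_by_group(assignments, ratings, groups):
    """Per-group mean preference rating of the tasks workers were assigned.

    assignments: dict worker -> assigned task
    ratings:     dict (worker, task) -> 1-5 preference rating
    groups:      dict group label -> list of workers
    """
    return {
        label: mean(ratings[(w, assignments[w])] for w in members)
        for label, members in groups.items()
    }

# Hypothetical rollout data: part-timers systematically get low-rated tasks.
scores = preference_satisfaction_by_group(
    assignments={"ana": "dispatch", "ben": "packing", "eva": "packing"},
    ratings={("ana", "dispatch"): 5, ("ben", "packing"): 1, ("eva", "packing"): 5},
    groups={"full_time": ["ana", "eva"], "part_time": ["ben"]},
)
print(scores)  # e.g. {'full_time': 5, 'part_time': 1} -> investigate the gap
```

A persistent gap between groups would be a prompt to re-examine the objective weights or the input data, not proof of wrongdoing in itself.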
Taken together, these steps shift AM from the ‘past of work’ to the future of work. Europe’s upcoming Quality Jobs agenda is an opportunity to codify that shift – not by rejecting optimisation, but by insisting that systems optimise not just for efficiency but also for human flourishing and meaningful work.