Abstract: Discriminatory bias in algorithmic systems is widely documented. How should the law respond? A broad consensus favours the lens of indirect discrimination, focusing on algorithmic systems’ impact. In this article, we set out to challenge this approach, arguing that it is both normatively undesirable and built on an unduly narrow understanding of direct discrimination, particularly in the context of machine learning systems. We illustrate how certain forms of algorithmic bias in frequently deployed algorithms might constitute direct discrimination, and explore the ramifications, both in practical terms and in the broader challenges that automated decision-making systems pose to the conceptual apparatus of anti-discrimination law.