Abstract: Autonomous systems are machines that can alter their behaviour without direct human oversight or control. How ought we to program them to behave? A plausible starting point is the Reduction to Acts Thesis, according to which we ought to program autonomous systems to do whatever we ought to do in the same circumstances. In this talk, we will argue that the Reduction to Acts Thesis is false: it is sometimes permissible to program autonomous systems to do things that it would be wrong for a human to do. We provide two main arguments for this claim. The first concerns programmers' lack of knowledge about the identities of the victims and beneficiaries of an autonomous system's behaviour. The second concerns how the way a system is programmed will indirectly affect the behaviour of other agents.