In §26 of A Theory of Justice (1999 ed.), Rawls writes: "A problem of choice is well-defined only if the alternatives are suitably restricted by natural laws and other constraints, and those deciding already have certain inclinations to choose among them. Without a definite structure of this kind, the question posed is indeterminate." Admittedly, Rawls put enough effort into his theory to arrive at relatively specific conclusions (the two principles and their priority ordering), but elsewhere in the same book he notes how the whole system might very well break down when exposed to the complications of partial compliance/nonideal theory. He even makes an ominous remark about a "day of reckoning" when justice dies and vengeance rises.

At any rate, it is a somewhat frequent complaint against Kantian ethics that the categorical imperative can be manipulated to accommodate whatever maxims one pleases: just fiddle with the wording and, voila, anything can be ruled in or anything ruled out. Appealing to the intricacies of Kant's overall ethical theory does not seem to help very much, especially since in the Doctrine of Virtue he goes on to characterize ethical reasoning as open-ended in a way that juridical reasoning is not.

Similarly, the "paralysis" that Nick Bostrom discusses with respect to aggregationism (utilitarianism/consequentialism stripped to their abstract essentials) suggests that promoting the greatest good can be made consistent with every possible choice, since the sum of all choices will be evened out or smoothed over at the end of history or in the light of all eternity (not his words, but they convey his intended worry).

Perhaps social/cultural relativism at least limits the options for people within a given society/culture. However, in the intro-to-ethics book I had years ago, one objection leveled keenly against relativism was that by switching our perspective on which society/culture we belong to at any given moment, or by shifting the emphasis from one arbitrary subculture to another, we could end up with either inconsistent responsibilities or whichever responsibilities we desired (just go to the relevant subculture and, presto, your decision problem is solved however you like).

Or consider theistic ethics. There are no superior versions, translations, or interpretations of any major "book of revelations," so even before we try to reason from one such text, we can preemptively alter our reading in line with our preferences, and there is nothing to gainsay us in doing so (to explain away the problem of evil, appeal will be made to free will, after which the failure of any preferred version/translation/interpretation to properly guide action is also explained away as a byproduct of natural sin). At "best," per some measure of arrogance, we can assume that the way we decide to interpret a preferred book must be right, because we have already convinced ourselves that the vehicle of revelation is preserving the integrity of our thoughts, regardless of whether the thoughts of others are so graced. And otherwise, if we listen to a voice in our head, or a feeling in our heart, and call it "God," it will be easy to end up with a self-congratulatory, all-justifying system; or else no system will be devisable here at all.

Question: That remark leads into an ironically generalized alternative, though: particularism in ethics, and specifically an intuitionism-themed one. It might be easy to manipulate abstractions at will, and thereby conclude with whatever one wishes.
But manipulating even our own emotions often proves more difficult. So, assuming there is no qualia-theoretic distinction between moral and nonmoral emotions as categories, then if we try to solve our decision problems by matching them to what our emotions recommend, perhaps the arbitrariness and triviality of deduction-from-abstractions can be sidestepped. Granted, this amounts to an anti-system (in the sense of Bernard Williams, I suppose), so it would not be what we wanted if what we want is some kind of "reliable system." But is there any way around this, or are well-structured moral theories doomed to fail because of the manipulation problem? As I think back over all the ethical theories I have read about over the years, I maybe can't quite think of any, besides the one Rawls worked so hard on, that acknowledges the manipulation problem clearly enough and makes a valiant effort to solve it.