Science and morality

“If I think I can figure it out, is that curiosity, or arrogance?” – Topher Brink

New Scientist has recently published a trifecta of excellent articles on the subject of science and morality/ethics.

Recently, Andrew Sullivan posted what I recall was a reader dissent expressing skepticism about the prospect of neuroscience ever providing any kind of ethical or moral framework. Considering how new the field is and how little of it is understood, they argued, it seemed unlikely that it would ever yield a sufficient methodology. Yes, this segued into religious apologia, but I don’t really consider that relevant here.

What matters is that whatever framework we derive our morals from, the act of adopting them at all is a value judgment; it is a conscious decision to accept a given choice as the “right” one. It stands to reason, then, that the better we understand how we come by these decisions and the more we know about making them, the more likely these choices are to be correct.

Yet the strength of our convictions is determined not by the effort we’ve invested in developing them, but by how much faith we have in the source they derive from. The irony being, that source is ultimately ourselves.

  • Trackbacks are closed
  • Comments (4)
    • Laroquod
    • October 21st, 2010

    Very true!

    Anyway, neuroscience won’t provide an ethical framework; it will merely describe the ones that are already genetically in place. It won’t even tell us why they are in place. Does a scientific description of a ball-in-socket joint tell you what process of evolution created that joint? No, you need all sorts of additional theorising to get there. So we’ll be debating our moral choice mechanisms long after we already understand how the brain moves through them, because we’ll be debating what they’re for and whether we should accept their whole apparatus or further ‘civilise’ some of them with social pressure. And complicating all of that, evolution isn’t even finished with us; we are a work in progress. 8)

    • Max Bell
    • October 21st, 2010

    Excellent point, and it speaks to the proper role of science in such inquiry. There is, of course, a tremendous difference between developing a methodology that exposes what our values are and one that would dictate explicitly what they should be.

    I do, however, remain optimistic that the better these kinds of choices are understood, the easier it will become to differentiate best practices through philosophy. As a species, we occasionally proclaim that, since we recognize the difference between moral right and wrong, it is unnecessary to give the matter any further thought. Yet in practice our value judgments frequently wind up compartmentalized (harvesting the organs of a single healthy individual to save six who will otherwise die is morally wrong; flipping a switch that diverts a train into a collision with a single individual but saves six others is acceptable because it’s not directly murdering anyone, even though the only difference seems to be the abstraction involved, since the calculus is identical) or, worse, biased, consciously or unconsciously.

    And I have had that conversation with people; “I already know this, I don’t need to think about it any more.” I’ve found that, as with the application of any philosophy, it is a continuous and ongoing process, in particular where the end product is a value judgment potentially impacting other people.

    • Laroquod
    • October 22nd, 2010

    Interesting ethical conundrum there. I think the difference in your example is that of choice. You have to select a healthy individual to kill to get the organs. Whereas the train is just going where it’s going and circumstances have presented you with a six-or-one choice: you choose to deflect its trajectory, but you can’t choose to deflect, say, to someone sitting safely in their home a mile away. Where it gets interesting is, what if an individual volunteers to die, donating his organs to save six others? Then it seems a lot closer (though not perfect) to the train situation in that the elements are self-selecting: you don’t have to ‘play god’.

    You’re right about abstraction though. Recent experiments have shown that a human is more likely to act to save a single concrete individual than an entire group known only in the abstract. “Save this crying girl you see before you on TV” has a much stronger ethical pull on us than “There’s an entire starving village, they are saying on TV.” In that case abstraction makes ethical involvement less likely, not more. But this is a case of helping rather than hurting.

    The one hope I hold out for neuro-ethics is that it will silence all the idiots claiming that if you don’t believe in a god, you have no basis for making ethical decisions and thus must be a moral degenerate. 8)

      • Max Bell
      • October 22nd, 2010

      I think it’s kind of the difference between shooting someone at a distance with a rifle and stabbing them in the heart with a knife. The switch scenario allows for a greater emotional distance from the consequence. With respect to the notion that there are no “right/wrong” answers to these kinds of thought experiments, of course.

      Nor do I really expect that neuroscience will have much of an impact on religionists; there’s so little difference in the ethical makeup of the religious and the non-religious that it’s almost just a matter of which label is involved. If anything, I expect it will help the secular avoid the tendency to internalize the way such arguments are framed, i.e. “there is no morality without religion”.
