16
-root
5y

Came across the Moral Machine...

http://moralmachine.mit.edu/

Some of the dilemmas would confuse even a human. I wonder how an artificial general intelligence would perform in such scenarios.

Comments
  • 3
    Yeah, exactly. I took my time picking sides and I'm still doubting myself; how's AI going to decide in a split second?
  • 1
    Damn, some of those are lose-lose scenarios for everyone involved.

    Kill the adults, the kids grow up without them. Kill the kids... well, we beat the anti-vaxxers at their own game... no, can't have that either.
  • 0
    @irene because "human" isn't derived from "man"; it's a separate word, so "humen" makes no sense. Compare that to, say, "workman" -> "workmen", which makes sense because the "man" bit of "workman" is a unit on its own.
    https://etymonline.com/word/human/...

    Well, I just invented this explanation (I'm no linguist), but it sounds reasonable(ish) to me.
  • 0
    @irene well considering that "hooman" isn't derived from "man" either...
    Also it just sounds awkward.
    But eh, you do you.
  • 0
    @irene I don't think this is seeing machines from a "tool used by humans to improve something" sort of perspective. It's about which assumptions lead to what kind of moral-like behaviour, what the optimal set of assumptions and processes would be to build something that benefits us, and what other wonky systems of morals machines can come up with, for pure research's sake. Also how our own moral codes came to be.

    Besides, as systems get more and more complex, predictability gets harder to track, so you need these long-range behavioural guidelines to keep things in order.
  • 0
    I think this discussion is more or less absurd.
    1) For a human in a regular car facing an inevitable crash, there are no such decisions. Humans don't think in this situation; they just react. So one day one group gets run over, the next day the other. Then why should a computer have to make the "right" decision?
    2) With self-driving cars, accidents will be much less likely, so it may not even be necessary to make that decision anymore.
    3) Morals and ethics carry cultural and personal bias, and humans often do things that contradict their own morals. In my opinion it's not possible to put something like morals and ethics into a machine, because there is no single right set of ethical rules.
    4) Car manufacturers will not build their algorithms to minimize pedestrian damage; they will optimize for the passengers, because the passengers should buy a new car from the same manufacturer after the crash. And that won't happen if the driver is dead, or injured badly enough to think the car is not safe.
  • 0
    Most of my results were based on the fact that the brakes failed. If the brakes failed, then the passengers of the vehicle die unless the pedestrians are animals.

    If the choice is between one of two groups, then the group not abiding by the road laws dies, regardless of social standing or age.
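
    Spelled out, my rules are a small priority list. Here's a rough Python sketch (the Group fields and the function are just a hypothetical encoding of my answers, not anything the Moral Machine site actually uses):

        from dataclasses import dataclass

        @dataclass
        class Group:
            name: str
            animals: bool = False     # the group is pets, not people
            passengers: bool = False  # riding in the car
            jaywalking: bool = False  # crossing against the signal

        def pick_victims(a: Group, b: Group) -> Group:
            """Which group the car hits, per the priorities above."""
            # 1. Animals die before any humans do.
            if a.animals != b.animals:
                return a if a.animals else b
            # 2. Passengers vs. pedestrians: with failed brakes,
            #    the passengers bear the risk.
            if a.passengers != b.passengers:
                return a if a.passengers else b
            # 3. Two pedestrian groups: whoever breaks the road laws
            #    loses, regardless of social standing or age.
            if a.jaywalking != b.jaywalking:
                return a if a.jaywalking else b
            return a  # tie: no stated rule applies

        # Failed brakes: swerve into a wall (killing passengers) or
        # run down pedestrians crossing legally?
        print(pick_victims(Group("passengers", passengers=True),
                           Group("pedestrians")).name)  # passengers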
  • 0
    According to the test I absolutely prefer fat homeless men...

    But it's not my fault if, in the examples I get, it's always these groups that break the law or happen to be in front of the car when there's no other argument to apply...
  • 1
    To my mind the results of this study will be useless... the cases are way too simple.

    There is no binary dead-or-alive outcome, only a continuous probability of harm.

    People in the car are more likely to survive an impact, but if the car decides to hit a wall head-on at full speed, they're in real trouble with no chance to escape. Meanwhile, heading toward another group gives them a chance to get out of the way if it's predicted in advance, especially pets.

    We do not live in a binary world; there's a huge number of alternative solutions, like choosing where and how to hit something/someone else to minimize the damage.

    Even if some binary cases like these are still possible, I'm pretty sure most car accidents will offer a lot more options than hitting things head-on.

    And that's not counting the fact that, to my mind, most of these accidents will be caused by the AI failing to recognize the situation in the first place, in which case these moral choices are just useless.
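
    In other words: pick the maneuver that minimizes expected harm instead of choosing between two fixed outcomes. A toy sketch, with entirely made-up maneuvers, probabilities, and severity weights:

        def expected_harm(outcomes):
            """outcomes: list of (people_at_risk, p_harm, severity) tuples."""
            return sum(n * p * s for n, p, s in outcomes)

        # Candidate maneuvers for one brake-failure scenario; each lists
        # who is put at risk, a guessed probability of harm, and a
        # severity weight. All numbers are invented for illustration.
        maneuvers = {
            "hit the wall head-on": [(2, 0.90, 1.0)],
            "scrape the wall to shed speed": [(2, 0.30, 0.6)],
            "swerve toward the group (they can dodge)": [(3, 0.20, 1.0)],
        }

        for name, outcomes in maneuvers.items():
            print(f"{name}: expected harm {expected_harm(outcomes):.2f}")
        print("chosen:", min(maneuvers, key=lambda m: expected_harm(maneuvers[m])))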