[personal profile] alexxkay
Most of you are probably familiar with the famous Milgram experiments in obedience. Though fascinating, they raised severe ethical concerns, which have led to that line of research being largely abandoned in recent decades. It may be reopening, though: researchers have found that performing these experiments in a virtual setting produces similar results.
Our results show that in spite of the fact that all participants knew for sure that neither the stranger nor the shocks were real, the participants who saw and heard her tended to respond to the situation at the subjective, behavioural and physiological levels as if it were real. This result reopens the door to direct empirical studies of obedience and related extreme social situations, an area of research that is otherwise not open to experimental study for ethical reasons, through the employment of virtual environments.

I recommend reading at least as far as the section titled "Speculations on Obedience in Virtual Reality", which reveals (to me, at least) some interesting blind spots in the experiment. First they say
...the problem of major deception that arose in the original experiments by Milgram was avoided here – since every participant knew for sure that the Learner was a virtual character, and therefore no one could believe that they were inflicting pain on anyone else.
But then they reveal how this experiment was described to the participants:
...they were told: “Thank you for taking part in this experiment. As part of our research program a virtual character has learned a set of word-pair associations. The learning is sometimes not exact, but we are testing a reinforcement learning procedure, to see if the infliction of discomfort motivates her, the virtual character, to remember the word-pair associations better.” The Learner had a quite realistic face, with eye movements and facial expressions; she visibly breathed, spoke, and appeared to respond with pain to the ‘electric shocks’. Not only that but she seemed to be aware of the presence of the participant by gazing at him or her, and also of the experimenter - even answering him back at one point (“I don't want to continue – don't listen to him!”). Finally, of course, the electric shocks and resulting expressions of discomfort were clearly caused by the actions of the participants.
For someone who gets most of their knowledge about AI from the movies (which probably describes most of the participants), it's not clear to me that this virtual actor would be perceived as "not real". If the participants think they are causing real pain to a real (if computerized) individual, does that actually avoid the original ethical issues?

Someone at work forwarded me this article, which has obvious implications for game design...

(no subject)

Date: 2007-01-09 10:12 pm (UTC)
From: [identity profile] metahacker.livejournal.com
I have to say I thought of the Little Sisters from BioShock when reading that article. Lots of ideas about the potential ethical dilemma of abusing onscreen characters...especially potentially sympathetic ones. Actually a dual dilemma -- do the characters have any moral 'right' simply because they respond as if hurt; and does it alter the player in some harmful way...tough questions.

Given how quick humans are to attribute consciousness to something that looks even a little real ("the credit card machine hates it when you press the buttons too fast", "I think I hurt the ATM's feelings by yanking out the receipt...") these may be worth exploring. In a way it also reminds me of the old tradition of actors coming out on stage after the performance to reassure the audience that they were hale and hearty (and not villains).

(no subject)

Date: 2007-01-10 05:40 am (UTC)
siderea: (Default)
From: [personal profile] siderea
and does it alter the player in some harmful way...tough questions.

Er, not particularly tough. The reason the Milgram experiments are considered unethical is that they traumatized the subjects. I really don't see how this is any different if, as Alexx described, they left the subjects believing someone/something was experiencing "discomfort"/pain as a consequence of their participation.

(no subject)

Date: 2007-01-10 02:12 pm (UTC)
From: [identity profile] metahacker.livejournal.com
The tough question comes when you have to consider whether the putative (though fake) "human" on the far side of Milgram's curtain is equivalent to the completely virtual (and known to be so) mannequin on the screen, where equivalent here means 'does similar trauma to the subject's psyche'.
