College of Administrative and Financial Sciences
Deadline: 28/11/2020 @ 23:59
Course Name: Business Ethics and Organization Social Responsibility
Course Code: MGT422
Student's ID Number:
Academic Year: 1441/1442 H, 1st Term
For Instructor's Use only
Instructor's Name: Dr. Gaurav S. Vishwakarma
Student's Grade: /5
Level of Marks: High/Middle/Low
Instructions – PLEASE READ THEM CAREFULLY
• This assignment is an individual assignment, to be submitted by each student individually.
• The due date for Assignment 3 is by the end of Week 13 (28/11/2020).
• The assignment must be submitted only in WORD format via the allocated folder.
• Assignments submitted through email will not be accepted.
• Students are advised to make their work clear and well presented. This also includes filling in your information on the cover page.
• Students must mention the question number clearly in their answer.
• Late submission will NOT be accepted.
• Avoid plagiarism: the work should be in your own words. Copying from students or other resources without proper referencing will result in ZERO marks. No exceptions.
• All answers must be typed using Times New Roman (size 12, double-spaced) font. No pictures containing text will be accepted.
• Submissions without this cover page will NOT be accepted.
• This assignment comprises a case study.
Assignment Purposes/Learning Outcomes:
After completing Assignment 3, students will be able to:
LO 1.1 Demonstrate a solid understanding of prominent theories of ethics and morality.
LO 2.1 Defend their rationale for decisions related to acceptable and unacceptable business conduct based on business ethics principles.
LO 4.5 Demonstrate the capacity to write a coherent project about a case study or actual research.
• Read the case article "Ethical dilemma of who survives self-driving car accident" (The New Zealand Herald; Auckland, New Zealand, 03 Jan 2019: B.3), available in SDL, and answer the following Critical Thinking Question(s):
1. Analyze the philosophical approach (the three prescriptive approaches) the author speaks about, considering the examples mentioned in the article. (3 Marks)
2. Evaluate one of the philosophical approaches and describe why you have used (or would use) this approach to guide your decision making. (2 Marks)
Grading Criteria – Rubric for Assignment

Question 1 (60% Score): The answer should explain what the prescriptive approach to ethical decision making involves. Students must discuss the three ethical perspectives (Consequentialism, Deontology, and Virtue Ethics) with reference to the examples from the article case.
- High: All three approaches are explained well, with examples from the article case.
- Middle: The approaches are identified, but the student fails to explain all three.
- Low: The three approaches are not well explained.

Question 2 (40% Score): The explanation should be based on selecting and reasoning about the approach being used as a guideline for decision making; e.g., if the student chooses the consequentialist approach, then the reasoning should refer to benefits to society.
- High: All the facets of the approach, along with reasons, are well explained, with clarity about the decision.
- Low: Reasons are not clearly explained.
Ethical dilemma of who survives self-driving car accident
Publication info: The New Zealand Herald; Auckland, New Zealand, 03 Jan 2019: B.3
Imagine this scenario: the brakes fail on a self-driving car as it hurtles toward a busy crossing.
A homeless person and a criminal are crossing in front of the car. Two cats are in the opposing lane.
Should the car swerve to mow down the cats or hit two people?
It's a relatively straightforward ethical dilemma, as moral quandaries go. And people overwhelmingly prefer to save
human lives over animals, according to a new ethics study that asked people how a self-driving car should respond
when faced with a variety of extreme trade-offs, dilemmas to which more than two million people responded.
But what if the choice is between two elderly people and a pregnant woman? An athletic person or someone who is obese?
The study identified a few preferences that were strongest. People opt to save people over pets, to spare the many
over the few, and to save children and pregnant women over older people. But it also found other preferences for
sparing women over men, athletes over obese people, and higher status people, such as executives, instead of
homeless people or criminals. There were also cultural differences in the degree, for example, that people would
prefer to save younger people over the elderly in a cluster of mostly Asian countries.
"We don't suggest that [policymakers] should cater to the public's preferences. They just need to be aware of it, to
expect a possible reaction when something happens. If, in an accident, a kid does not get special treatment, there
might be some public reaction," said Edmond Awad, a computer scientist at the Massachusetts Institute of
Technology Media Lab who led the work.
The thought experiments posed by the researchers' Moral Machine website went viral, with their pictorial quiz
taken by several million people in 233 countries or territories.
Outside researchers said the results were interesting, but cautioned that the results could be overinterpreted. In a
randomised survey, researchers try to ensure a sample is unbiased and representative of the overall population,
but in this case the voluntary study was taken by a population that was predominantly younger men. The
scenarios are also distilled, extreme and far more black and white than the ones in the real world.
"The big worry I have is that people reading this are going to think this study is telling us how to implement a
decision process for a self-driving car," said Benjamin Kuipers, a computer scientist at the University of Michigan, who
was not involved in the work.
Kuipers added that these thought experiments may frame some of the decisions car makers and programmers
make about autonomous vehicle design in a misleading way. There's a moral choice, he argued, that precedes the
conundrum of whether to crash into a barrier and kill three passengers or to run over a pregnant woman pushing a stroller.
"Building these cars, the process is not really about saying, 'if I'm faced with this dilemma, who am I going to kill.'
It's saying, 'if we can imagine a situation where this dilemma could occur, what prior decision should I have made
to avoid this?'" he said.
The complexity of the world can be captured by the example of a criminal versus a dog. While many said they
would save the canine over its human counterpart, this overlooks the nuanced reasons why a person might be
driven to a life of crime.
Nicholas Evans, a philosopher at the University of Massachusetts, pointed out that while the researchers
described their three strongest principles as the ones that were universal, the cut-off between those and the
weaker ones that weren't deemed universal was arbitrary. They categorised the preference to spare young people
over elderly people, for example, as a global moral preference, but not the preference to spare those who are
following walk signals versus those who are jaywalking, or to save people of higher social status.
Evans is working on a project that he said has been influenced by the approach taken by the MIT team. He says he
plans to use more nuanced crash scenarios, where real-world transportation data can provide a probability of
surviving a T-bone highway crash on the passenger side, for example, to assess the safety implications of self-driving cars.
"We want to create a mathematical model for some of these moral dilemmas and utilise the best moral theories
that philosophy has to offer, to show what the result of choosing an autonomous vehicle to behave in a certain
way is," Evans said.
Iyad Rahwan, a computer scientist at MIT who oversaw the work, said that a public poll shouldn't be the
foundation of artificial intelligence ethics. But he said that regulating AI will be different from traditional products,
because the machines will have autonomy and the ability to adapt, making it more important to understand how
people perceive AI and what they expect of technology.
"We should take public opinion with a grain of salt," Rahwan said. "I think it's informative."
– Washington Post and NZ Herald