Description

Chapter 1
21. Assess Don Gotterbarn’s arguments for the claim that computer ethics is, at bottom, a field whose primary concern should focus on moral responsibility issues for computer professionals. Do you agree with his position?
Think of a controversial issue or practice involving cybertechnology that has not yet been identified as an ethical issue, but which might eventually be recognized as one that has moral implications. Apply Brey’s “disclosive method” to see whether you can isolate any embedded values or biases affecting that practice. Also, be sure to separate any “morally opaque features” from those that are “morally transparent” (or nonopaque).
Chapter 2
23. How does James Moor’s “just-consequentialist” theory incorporate aspects of utilitarian and deontological theories into one comprehensive ethical framework? Describe the strategies used in the two different stages of Moor’s theory: the deliberation stage and the selection stage. Identify a contemporary moral issue affecting cybertechnology and apply Moor’s just-consequentialist theory to it.
26. Are any of the four traditional ethical theories we examined—that is, consequence-based, duty-based, contract-based, and character-based—adequate to handle moral issues that arise as a result of cybertechnology? If not, is an alternative kind of ethical theory needed, as some have argued (e.g., Adam 2008)? Or can a comprehensive, integrated theory, such as the one proposed by James Moor (i.e., his theory of “just consequentialism”), be used successfully to resolve moral issues involving cybertechnology?
Chapter 3
22. Identify some of the arguments that have been made on both sides in the debate about sharing copyrighted MP3 files on the Internet. Evaluate the arguments in terms of their strength of reasoning. Can you find any valid arguments? Can you find any inductive arguments?
23. Construct an argument for the view that privacy protection should be improved for ordinary users who conduct searches on Google. Next, evaluate your argument via the rules for validity versus invalidity. If your argument is invalid, check to see whether it also includes any of the informal fallacies we examined in Section 3.9.
Chapter 4
21. Evaluate Richard De George’s criteria for when it is morally permissible, as opposed to when it is morally required, for an engineer to blow the whistle (described in Section 4.4.2). Apply these criteria to a recent controversy where you believe that blowing the whistle would have been morally permissible or perhaps even morally required.
22. Describe some virtues of the ethical codes of conduct adopted by professional societies such as the ACM and IEEE-CS, and list some shortcomings of these professional codes as well. In the final analysis, do the advantages of having a code outweigh the prospects of not having one? Use either an actual or a hypothetical case to establish the main points in your answer. Do you believe that a coherent and comprehensive code of conduct for the computing/IT profession is possible? Does SECEPP satisfy those conditions?
Chapter 5
22. Through the use of currently available online tools and search facilities, ordinary users can easily acquire personal information about others. In fact, anyone who has Internet access can, via a search engine such as Google, find information about us that we ourselves might have no idea is publicly available there. Does this use of search engines threaten the privacy of ordinary people? Explain.
23. In debates regarding access and control of personal information, it is sometimes argued that an appropriate balance needs to be struck between individuals and organizations: individuals claim that they should be able to control who has access to their information, while organizations, including government and business groups, claim to need that information in order to make appropriate decisions. How can a reasonable resolution be reached that would satisfy both parties?
FIFTH EDITION

ETHICS AND TECHNOLOGY
Controversies, Questions, and Strategies for Ethical Computing

HERMAN T. TAVANI
Rivier University
VP AND EXECUTIVE PUBLISHER: Donald Fowley
SENIOR ACQUISITIONS EDITOR: Bryan Gambrel
ASSOCIATE DEVELOPMENT EDITOR: Jennifer Lartz
MARKET SOLUTIONS ASSISTANT: Jessy Moor
PROJECT MANAGER: Gladys Soto
PROJECT SPECIALIST: Nichole Urban
PROJECT ASSISTANT: Emily Meussner
MARKETING MANAGER: Daniel Sayre
ASSISTANT MARKETING MANAGER: Puja Katariwala
ASSOCIATE DIRECTOR: Kevin Holm
PRODUCTION EDITOR: Loganathan Kandan
PHOTO RESEARCHER: Nicholas Olin
COVER PHOTO CREDIT: © agsandrew/Getty Images, Inc.
This book was set in 10/12 Times Ten LT Std Roman by SPi Global and printed and bound by Lightning Source, Inc.
Founded in 1807, John Wiley & Sons, Inc. has been a valued source of knowledge and understanding for more than
200 years, helping people around the world meet their needs and fulfill their aspirations. Our company is built on
a foundation of principles that include responsibility to the communities we serve and where we live and work. In
2008, we launched a Corporate Citizenship Initiative, a global effort to address the environmental, social, economic,
and ethical challenges we face in our business. Among the issues we are addressing are carbon impact, paper
specifications and procurement, ethical conduct within our business and among our vendors, and community and
charitable support. For more information, please visit our website: www.wiley.com/go/citizenship.
Copyright © 2016, 2013, 2011, 2007, 2004 John Wiley & Sons, Inc. All rights reserved. No part of this publication may
be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical,
photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United
States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment
of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923
(Web site: www.copyright.com). Requests to the Publisher for permission should be addressed to the Permissions
Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030‐5774, (201) 748‐6011, fax (201) 748‐6008,
or online at: www.wiley.com/go/permissions.
Evaluation copies are provided to qualified academics and professionals for review purposes only, for use in their
courses during the next academic year. These copies are licensed and may not be sold or transferred to a third party.
Upon completion of the review period, please return the evaluation copy to Wiley. Return instructions and a free of
charge return shipping label are available at: www.wiley.com/go/returnlabel. If you have chosen to adopt this
textbook for use in your course, please accept this book as your complimentary desk copy. Outside of the United
States, please contact your local sales representative.
ISBN: 978-1-119-23975-8 (PBK)
ISBN: 978-1-119-22415-0 (EVALC)
Library of Congress Cataloging-in-Publication Data:
Tavani, Herman T., author.
Ethics and technology : controversies, questions, and strategies for ethical
computing / Herman T. Tavani, Rivier University.—Fifth edition.
pages cm
Includes bibliographical references and index.
ISBN 978-1-119-23975-8 (pbk.)
1. Computer networks—Moral and ethical aspects. I. Title.
TK5105.5.T385 2016
174’.9004678—dc23
2015031994
Printing identification and country of origin will either be included on this page and/or the end of the book.
In addition, if the ISBN on this page and the back cover do not match, the ISBN on the back cover should be
considered the correct ISBN.
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
For Regina and Joe
CONTENTS AT A GLANCE

PREFACE
FOREWORD
CHAPTER 1. INTRODUCTION TO CYBERETHICS: CONCEPTS, PERSPECTIVES, AND METHODOLOGICAL FRAMEWORKS
CHAPTER 2. ETHICAL CONCEPTS AND ETHICAL THEORIES: FRAMEWORKS FOR ANALYZING MORAL ISSUES
CHAPTER 3. CRITICAL REASONING SKILLS FOR EVALUATING DISPUTES IN CYBERETHICS
CHAPTER 4. PROFESSIONAL ETHICS, CODES OF CONDUCT, AND MORAL RESPONSIBILITY
CHAPTER 5. PRIVACY AND CYBERSPACE
CHAPTER 6. SECURITY IN CYBERSPACE
CHAPTER 7. CYBERCRIME AND CYBER-RELATED CRIMES
CHAPTER 8. INTELLECTUAL PROPERTY DISPUTES IN CYBERSPACE
CHAPTER 9. REGULATING COMMERCE AND SPEECH IN CYBERSPACE
CHAPTER 10. THE DIGITAL DIVIDE, DEMOCRACY, AND WORK
CHAPTER 11. ONLINE COMMUNITIES, VIRTUAL REALITY, AND ARTIFICIAL INTELLIGENCE
CHAPTER 12. ETHICAL ASPECTS OF EMERGING AND CONVERGING TECHNOLOGIES
GLOSSARY
INDEX
TABLE OF CONTENTS

PREFACE
New to the Fifth Edition
Audience and Scope
Organization and Structure of the Book
The Web Site for Ethics and Technology
A Note to Students
Note to Instructors: A Roadmap for Using This Book
A Note to Computer Science Instructors
Acknowledgments
FOREWORD
▶ CHAPTER 1
INTRODUCTION TO CYBERETHICS: CONCEPTS, PERSPECTIVES, AND METHODOLOGICAL FRAMEWORKS
Scenario 1–1: Hacking into the Mobile Phones of Celebrities
1.1 Defining Key Terms: Cyberethics and Cybertechnology
1.1.1 What Is Cybertechnology?
1.1.2 Why the Term Cyberethics?
1.2 The Cyberethics Evolution: Four Developmental Phases in Cybertechnology
1.3 Are Cyberethics Issues Unique Ethical Issues?
Scenario 1–2: Developing the Code for a Computerized Weapon System
Scenario 1–3: Digital Piracy
1.3.1 Distinguishing between Unique Technological Features and Unique Ethical Issues
1.3.2 An Alternative Strategy for Analyzing the Debate about the Uniqueness of Cyberethics Issues
1.3.3 A Policy Vacuum in Duplicating Computer Software
1.4 Cyberethics as a Branch of Applied Ethics: Three Distinct Perspectives
1.4.1 Perspective #1: Cyberethics as a Field of Professional Ethics
1.4.2 Perspective #2: Cyberethics as a Field of Philosophical Ethics
1.4.3 Perspective #3: Cyberethics as a Field of Sociological/Descriptive Ethics
Scenario 1–4: The Impact of Technology X on the Pleasantville Community
1.5 A Comprehensive Cyberethics Methodology
1.5.1 A “Disclosive” Method for Cyberethics
1.5.2 An Interdisciplinary and Multilevel Method for Analyzing Cyberethics Issues
1.6 A Comprehensive Strategy for Approaching Cyberethics Issues
1.7 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings
Online Resources
▶ CHAPTER 2
ETHICAL CONCEPTS AND ETHICAL THEORIES: FRAMEWORKS FOR ANALYZING MORAL ISSUES
Scenario 2–1: The Case of the “Runaway Trolley”: A Classic Moral Dilemma
2.1 Ethics and Morality
2.1.1 What Is Morality?
2.1.2 The Study of Morality: Three Distinct Approaches for Evaluating and Justifying the Rules Comprising a Moral System
2.2 Discussion Stoppers as Roadblocks to Moral Discourse
2.2.1 Discussion Stopper #1: People Disagree on Solutions to Moral Issues
2.2.2 Discussion Stopper #2: Who Am I to Judge Others?
2.2.3 Discussion Stopper #3: Morality Is Simply a Private Matter
2.2.4 Discussion Stopper #4: Morality Is Simply a Matter for Individual Cultures to Decide
Scenario 2–2: The Price of Defending Moral Relativism
2.3 Why Do We Need Ethical Theories?
2.4 Consequence-Based Ethical Theories
2.4.1 Act Utilitarianism
Scenario 2–3: A Controversial Policy in Newmerica
2.4.2 Rule Utilitarianism
2.5 Duty-Based Ethical Theories
2.5.1 Rule Deontology
Scenario 2–4: Making an Exception for Oneself
2.5.2 Act Deontology
Scenario 2–5: A Dilemma Involving Conflicting Duties
2.6 Contract-Based Ethical Theories
2.6.1 Some Criticisms of Contract-Based Theories
2.6.2 Rights-Based Contract Theories
2.7 Character-Based Ethical Theories
2.7.1 Being a Moral Person vs. Following Moral Rules
2.7.2 Acquiring the “Correct” Habits
2.8 Integrating Aspects of Classical Ethical Theories into a Single Comprehensive Theory
2.8.1 Moor’s Just-Consequentialist Theory and Its Application to Cybertechnology
2.8.2 Key Elements in Moor’s Just-Consequentialist Framework
2.9 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings
▶ CHAPTER 3
CRITICAL REASONING SKILLS FOR EVALUATING DISPUTES IN CYBERETHICS
Scenario 3–1: Reasoning About Whether to Download Software from “Sharester”
3.1 What Is Critical Reasoning?
3.1.1 Some Basic Concepts: (Logical) Arguments and Claims
3.1.2 The Role of Arguments
3.1.3 The Basic Structure of an Argument
3.2 Constructing an Argument
3.3 Valid Arguments
3.4 Sound Arguments
3.5 Invalid Arguments
3.6 Inductive Arguments
3.7 Fallacious Arguments
3.8 A Seven-Step Strategy for Evaluating Arguments
3.9 Identifying Some Common Fallacies
3.9.1 Ad Hominem Argument
3.9.2 Slippery Slope Argument
3.9.3 Fallacy of Appeal to Authority
3.9.4 False Cause Fallacy
3.9.5 Fallacy of Composition/Fallacy of Division
3.9.6 Fallacy of Ambiguity/Equivocation
3.9.7 The False Dichotomy/Either–Or Fallacy/All-or-Nothing Fallacy
3.9.8 The Virtuality Fallacy
3.10 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings
▶ CHAPTER 4
PROFESSIONAL ETHICS, CODES OF CONDUCT, AND MORAL RESPONSIBILITY
Scenario 4–1: Fatalities Involving the Oerlikon GDF-005 Robotic Cannon
4.1 What Is Professional Ethics?
4.1.1 What Is a Profession?
4.1.2 Who Is a Professional?
4.1.3 Who Is a Computer/IT Professional?
4.2 Do Computer/IT Professionals Have Any Special Moral Responsibilities?
4.3 Professional Codes of Ethics and Codes of Conduct
4.3.1 The Purpose of Professional Codes
4.3.2 Some Criticisms of Professional Codes
4.3.3 Defending Professional Codes
4.3.4 The IEEE-CS/ACM Software Engineering Code of Ethics and Professional Practice
4.4 Conflicts of Professional Responsibility: Employee Loyalty and Whistle-Blowing
4.4.1 Do Employees Have an Obligation of Loyalty to Employers?
4.4.2 Whistle-Blowing
Scenario 4–2: NSA Surveillance and the Case of Edward Snowden
4.5 Moral Responsibility, Legal Liability, and Accountability
4.5.1 Distinguishing Responsibility from Liability and Accountability
4.5.2 Accountability and the Problem of “Many Hands”
Scenario 4–3: The Case of the Therac-25 Machine
4.5.3 Legal Liability and Moral Accountability
4.6 Do Some Computer Corporations Have Special Moral Obligations?
4.7 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings
▶ CHAPTER 5
PRIVACY AND CYBERSPACE
Scenario 5–1: A New NSA Data Center
5.1 Privacy in the Digital Age: Who Is Affected and Why Should We Worry?
5.1.1 Whose Privacy Is Threatened by Cybertechnology?
5.1.2 Are Any Privacy Concerns Generated by Cybertechnology Unique or Special?
5.2 What Is Personal Privacy?
5.2.1 Accessibility Privacy: Freedom from Unwarranted Intrusion
5.2.2 Decisional Privacy: Freedom from Interference in One’s Personal Affairs
5.2.3 Informational Privacy: Control over the Flow of Personal Information
5.2.4 A Comprehensive Account of Privacy
Scenario 5–2: Descriptive Privacy
Scenario 5–3: Normative Privacy
5.2.5 Privacy as “Contextual Integrity”
Scenario 5–4: Preserving Contextual Integrity in a University Seminar
5.3 Why Is Privacy Important?
5.3.1 Is Privacy an Intrinsic Value?
5.3.2 Privacy as a Social Value
5.4 Gathering Personal Data: Surveillance, Recording, and Tracking Techniques
5.4.1 “Dataveillance” Techniques
5.4.2 Internet Cookies
5.4.3 RFID Technology
5.4.4 Cybertechnology and Government Surveillance
5.5 Analyzing Personal Data: Big Data, Data Mining, and Web Mining
5.5.1 Big Data: What, Exactly, Is It, and Why Does It Threaten Privacy?
5.5.2 Data Mining and Personal Privacy
Scenario 5–5: Data Mining at the XYZ Credit Union
5.5.3 Web Mining: Analyzing Personal Data Acquired from Our Interactions Online
5.6 Protecting Personal Privacy in Public Space
5.6.1 PPI vs. NPI
Scenario 5–6: Shopping at SuperMart
Scenario 5–7: Shopping at Nile.com
5.6.2 Search Engines and the Disclosure of Personal Information
5.7 Privacy Legislation and Industry Self-Regulation
5.7.1 Industry Self-Regulation and Privacy-Enhancing Tools
5.7.2 Privacy Laws and Data Protection Principles
5.8 A Right to “Be Forgotten” (or to “Erasure”) in the Digital Age
Scenario 5–8: An Arrest for an Underage Drinking Incident 20 Years Ago
5.8.1 Arguments Opposing RTBF
5.8.2 Arguments Defending RTBF
5.8.3 Establishing “Appropriate” Criteria
5.9 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings
▶ CHAPTER 6
SECURITY IN CYBERSPACE
Scenario 6–1: The “Olympic Games” Operation and the Stuxnet Worm
6.1 Security in the Context of Cybertechnology
6.1.1 Cybersecurity as Related to Cybercrime
6.1.2 Security and Privacy: Some Similarities and Some Differences
6.2 Three Categories of Cybersecurity
6.2.1 Data Security: Confidentiality, Integrity, and Availability of Information
6.2.2 System Security: Viruses, Worms, and Malware
6.2.3 Network Security: Protecting our Infrastructure
Scenario 6–2: The “GhostNet” Controversy
6.3 Cloud Computing and Security
6.3.1 Deployment and Service/Delivery Models for the Cloud
6.3.2 Securing User Data Residing in the Cloud
6.3.3 Assessing Risk in the Cloud and in the Context of Cybersecurity
6.4 Hacking and “The Hacker Ethic”
6.4.1 What Is “The Hacker Ethic”?
6.4.2 Are Computer Break-ins Ever Ethically Justifiable?
6.5 Cyberterrorism
6.5.1 Cyberterrorism vs. Hacktivism
Scenario 6–3: Anonymous and the “Operation Payback” Attack
6.5.2 Cybertechnology and Terrorist Organizations
6.6 Information Warfare (IW)
6.6.1 Information Warfare vs. Conventional Warfare
6.6.2 Potential Consequences for Nations that Engage in IW
6.7 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings
▶ CHAPTER 7
CYBERCRIME AND CYBER-RELATED CRIMES
Scenario 7–1: Creating a Fake Facebook Account to Catch Criminals
7.1 Cybercrimes and Cybercriminals
7.1.1 Background Events: A Brief Sketch
7.1.2 A Typical Cybercriminal
7.2 Hacking, Cracking, and Counter Hacking
7.2.1 Hacking vs. Cracking
7.2.2 Active Defense Hacking: Can Acts of “Hacking Back” or Counter Hacking Ever Be Morally Justified?
7.3 Defining Cybercrime
7.3.1 Determining the Criteria
7.3.2 A Preliminary Definition of Cybercrime
7.3.3 Framing a Coherent and Comprehensive Definition of Cybercrime
7.4 Three Categories of Cybercrime: Piracy, Trespass, and Vandalism in Cyberspace
7.5 Cyber-Related Crimes
7.5.1 Some Examples of Cyber-Exacerbated vs. Cyber-Assisted Crimes
7.5.2 Identity Theft
7.6 Technologies and Tools for Combating Cybercrime
7.6.1 Biometric Technologies
7.6.2 Keystroke-Monitoring Software and Packet-Sniffing Programs
7.7 Programs and Techniques Designed to Combat Cybercrime in the United States
7.7.1 Entrapment and “Sting” Operations to Catch Internet Pedophiles
Scenario 7–2: Entrapment on the Internet
7.7.2 Enhanced Government Surveillance Techniques and the Patriot Act
7.8 National and International Laws to Combat Cybercrime
7.8.1 The Problem of Jurisdiction in Cyberspace
Scenario 7–3: A Virtual Casino
Scenario 7–4: Prosecuting a Computer Corporation in Multiple Countries
7.8.2 Some International Laws and Conventions Affecting Cybercrime
Scenario 7–5: The Pirate Bay Web Site
7.9 Cybercrime and the Free Press: The WikiLeaks Controversy
7.9.1 Are WikiLeaks’ Practices Ethical?
7.9.2 Are WikiLeaks’ Practices Criminal?
7.9.3 WikiLeaks and the Free Press
7.10 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings
▶ CHAPTER 8
INTELLECTUAL PROPERTY DISPUTES IN CYBERSPACE
Scenario 8–1: Streaming Music Online
8.1 What Is Intellectual Property?
8.1.1 Intellectual Objects
8.1.2 Why Protect Intellectual Objects?
8.1.3 Software as Intellectual Property
8.1.4 Evaluating a Popular Argument Used by the Software Industry to Show Why It Is Morally Wrong to Copy Proprietary Software
8.2 Copyright Law and Digital Media
8.2.1 The Evolution of Copyright Law in the United States
8.2.2 The Fair-Use and First-Sale Provisions of Copyright Law
8.2.3 Software Piracy as Copyright Infringement
8.2.4 Napster and the Ongoing Battles over Sharing Digital Music
8.3 Patents, Trademarks, and Trade Secrets
8.3.1 Patent Protections
8.3.2 Trademarks
8.3.3 Trade Secrets
8.4 Jurisdictional Issues Involving Intellectual Property Laws
8.5 Philosophical Foundations for Intellectual Property Rights
8.5.1 The Labor Theory of Property
Scenario 8–2: DEF Corporation vs. XYZ Inc.
8.5.2 The Utilitarian Theory of Property
Scenario 8–3: Sam’s e-Book Reader Add-on Device
8.5.3 The Personality Theory of Property
Scenario 8–4: Angela’s B++ Programming Tool
8.6 The “Free Software” and “Open Source” Movements
8.6.1 GNU and the Free Software Foundation
8.6.2 The “Open Source Software” Movement: OSS vs. FSF
8.7 The “Common Good” Approach: An Alternative Framework for Analyzing the Intellectual Property Debate
8.7.1 Information Wants to be Shared vs. Information Wants to be Free
8.7.2 Preserving the Information Commons
8.7.3 The Fate of the Information Commons: Could the Public Domain of Ideas Eventually Disappear?
8.7.4 The Creative Commons
8.8 PIPA, SOPA, and RWA Legislation: Current Battlegrounds in the Intellectual Property War
8.8.1 The PIPA and SOPA Battles
8.8.2 RWA and Public Access to Health-Related Information
Scenario 8–5: Elsevier Press and “The Cost of Knowledge” Boycott
8.8.3 Intellectual Property Battles in the Near Future
8.9 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings
▶ CHAPTER 9
REGULATING COMMERCE AND SPEECH IN CYBERSPACE
Scenario 9–1: Anonymous and the Ku Klux Klan
9.1 Introduction and Background Issues: Some Key Questions and Critical Distinctions Affecting Internet Regulation
9.1.1 Is Cyberspace a Medium or a Place?
9.1.2 Two Categories of Cyberspace Regulation: Regulating Content and Regulating Process
9.1.3 Four Modes of Regulation: The Lessig Model
9.2 Digital Rights Management (DRM)
9.2.1 Some Implications of DRM for Public Policy Debates Affecting Copyright Law
9.2.2 DRM and the Music Industry
Scenario 9–2: The Sony Rootkit Controversy
9.3 E-Mail Spam
9.3.1 Defining Spam
9.3.2 Why Is Spam Morally Objectionable?
9.4 Free Speech vs. Censorship and Content Control in Cyberspace
9.4.1 Protecting Free Speech
9.4.2 Defining Censorship
9.5 Pornography in Cyberspace
9.5.1 Interpreting “Community Standards” in Cyberspace
9.5.2 Internet Pornography Laws and Protecting Children Online
9.5.3 Virtual Child Pornography
9.5.4 Sexting and Its Implications for Current Child Pornography Laws
Scenario 9–3: A Sexting Incident Involving Greensburg Salem High School
9.6 Hate Speech and Speech that Can Cause Physical Harm to Others
9.6.1 Hate Speech on the Web
9.6.2 Online “Speech” that Can Cause Physical Harm to Others
9.7 “Network Neutrality” and the Future of Internet Regulation
9.7.1 Defining Network Neutrality
9.7.2 Some Arguments Advanced by Net Neutrality’s Proponents and Opponents
9.7.3 Future Implications for the Net Neutrality Debate
9.8 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings
▶ CHAPTER 10
THE DIGITAL DIVIDE, DEMOCRACY, AND WORK
Scenario 10–1: Digital Devices, Social Media, Democracy, and the “Arab Spring”
10.1 The Digital Divide
10.1.1 The Global Digital Divide
10.1.2 The Digital Divide within Nations
Scenario 10–2: Providing In-Home Internet Service for Public School Students
10.1.3 Is the Digital Divide an Ethical Issue?
10.2 Cybertechnology and the Disabled
10.3 Cybertechnology and Race
10.3.1 Internet Usage Patterns
10.3.2 Racism and the Internet
10.4 Cybertechnology and Gender
10.4.1 Access to High-Technology Jobs
10.4.2 Gender Bias in Software Design and Video Games
10.5 Cybertechnology, Democracy, and Democratic Ideals
10.5.1 Has Cybertechnology Enhanced or Threatened Democracy?
10.5.2 How has Cybertechnology Affected Political Elections in Democratic Nations?
10.6 The Transformation and the Quality of Work
10.6.1 Job Displacement and the Transformed Workplace
10.6.2 The Quality of Work Life in the Digital Era
Scenario 10–3: Employee Monitoring and the Case of Ontario vs. Quon
10.7 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings
▶ CHAPTER 11
ONLINE COMMUNITIES, VIRTUAL REALITY, AND ARTIFICIAL INTELLIGENCE
Scenario 11–1: Ralph’s Online Friends and Artificial Companions
11.1 Online Communities and Social Networking Services
11.1.1 Online Communities vs. Traditional Communities
11.1.2 Blogs and Some Controversial Aspects of the Blogosphere
Scenario 11–2: “The Washingtonienne” Blogger
11.1.3 Some Pros and Cons of SNSs (and Other Online Communities)
Scenario 11–3: A Suicide Resulting from Deception on MySpace
11.2 Virtual Environments and Virtual Reality
11.2.1 What Is Virtual Reality (VR)?
11.2.2 Ethical Aspects of VR Applications
11.3 Artificial Intelligence (AI)
11.3.1 What Is AI? A Brief Overview
11.3.2 The Turing Test and John Searle’s “Chinese Room” Argument
11.3.3 Cyborgs and Human–Machine Relationships
11.4 Extending Moral Consideration to AI Entities
Scenario 11–4: Artificial Children
11.4.1 Determining Which Kinds of Beings/Entities Deserve Moral Consideration
11.4.2 Moral Patients vs. Moral Agents
11.5 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings
▶ CHAPTER 12
ETHICAL ASPECTS OF EMERGING AND CONVERGING TECHNOLOGIES
Scenario 12–1: When “Things” Communicate with One Another
12.1 Converging Technologies and Technological Convergence
12.2 Ambient Intelligence (AmI) and Ubiquitous Computing
12.2.1 Pervasive Computing, Ubiquitous Communication, and Intelligent User Interfaces
12.2.2 Ethical and Social Aspects of AmI
Scenario 12–2: E. M. Forster’s “(Pre)Cautionary Tale”
Scenario 12–3: Jeremy Bentham’s “Panopticon/Inspection House” (Thought Experiment)
12.3 Nanotechnology and Nanocomputing
12.3.1 Nanotechnology: A Brief Overview
12.3.2 Ethical Issues in Nanotechnology and Nanocomputing
12.4 Autonomous Machines
12.4.1 What Is an AM?
12.4.2 Some Ethical and Philosophical Questions Pertaining to AMs
12.5 Machine Ethics and Moral Machines
12.5.1 What Is Machine Ethics?
12.5.2 Designing Moral Machines
12.6 A “Dynamic” Ethical Framework for Guiding Research in New and Emerging Technologies
12.6.1 Is an ELSI-Like Model Adequate for New/Emerging Technologies?
12.6.2 A “Dynamic Ethics” Model
12.7 Chapter Summary
Review Questions
Discussion Questions
Scenarios for Analysis
Endnotes
References
Further Readings

GLOSSARY
INDEX
PREFACE
Since the publication of the fourth edition of Ethics and Technology in late 2012, the digital
landscape has continued to evolve, resulting in new variations of moral, legal, and social concerns. For example, ongoing unease about personal privacy has been further exacerbated by
Big Data (sometimes also referred to as Big Data Analytics), as well as by the “Internet of
Things.” Surveillance‐related privacy concerns have also intensified, especially in the aftermath of revelations that the National Security Agency (NSA) allegedly snooped on American
citizens. And the intentional leaking of classified NSA information by Edward Snowden has
generated renewed interest in the topic of whistle‐blowing.
Other recent ethical/social concerns arise in connection with hacking‐related activities
carried out by various nation-states. In late 2014, for example, the government of North Korea was widely blamed for a series of break-ins at Sony Corporation, in which the attackers threatened 9/11-like terrorist attacks if Sony released a controversial movie. A different kind of hacking-related activity has significantly impacted the commercial sphere,
where major retail stores in the United States, including Target and Home Depot, have been
embarrassed by break‐ins compromising their customer databases. A third kind of hacking‐
related activity targeted select (high‐profile) female celebrities, whose social media accounts
and mobile devices were hacked; in some cases, nude photos of these celebrities were also
made available on selected Web sites.
It is not only celebrities, however, who are vulnerable to having their devices and accounts
hacked or to having unauthorized content (including embarrassing photos) displayed on the
Internet. Ordinary users are also at risk in this regard mainly because of the myriad ways in
which one’s digitized personal data can now be so easily compromised and made accessible on
the Web. Consider, for example, a relatively recent controversy involving “revenge porn sites,”
where people can post nude and other kinds of embarrassing photos of their ex‐romantic partners. Because it is difficult to have content permanently deleted from the Internet, users often
struggle in vain to have embarrassing online personal information removed. And concerns
about the indefinite period of time in which one’s digitized personal information can persist
on the Internet have influenced countries in the European Union to adopt a privacy principle
called the “Right to Be Forgotten,” where citizens in those countries have the right to have
certain kinds of online personal information about them “erased.” However, we will see that
this principle, sometimes also referred to as the “Right to Erasure,” has been controversial and
has not been adopted by most countries, including the United States.
Other relatively recent ethics‐related concerns arise in connection with technologies such
as 3D printing and augmented reality (AR); whereas the former makes possible the “printing”
of controversial objects such as guns, the latter introduces concerns generated by “wearable”
(computing) technologies such as Google Glass. Additionally, “smart cars,” such as those currently produced by Google, raise concerns about moral/legal responsibility issues for vehicle‐
related accidents and injuries. Also in the context of transportation‐related controversies, we
can ask what effect the relatively new shuttle/taxi‐related services such as Uber, made possible
by apps designed for digital devices, will likely have for the future of the (more traditional)
taxicab industry.
Although new technologies emerge and existing technologies continue to mature and
evolve, many of the ethical issues associated with them are basically variations of existing ethical problems. At bottom, these issues illustrate (contemporary examples of) traditional ethical
concerns having to do with fairness, obligations to assist others in need, and so forth. So, we
should not infer that the moral landscape itself has been altered because of behaviors made
possible by these technologies. We will see that, for the most part, the new issues examined in
this edition of Ethics and Technology are similar in relevant respects to the kinds of ethical
issues we examined in the book’s previous editions. However, many emerging technologies
present us with challenges that, initially at least, do not seem to fit easily into our conventional
ethical categories. So, a major objective of this textbook is to show how those controversies
can be analyzed from the perspective of standard ethical concepts and theories.
The purpose of Ethics and Technology, as stated in the prefaces to the four previous editions of this book, is to introduce students to issues and controversies that comprise the relatively new field of cyberethics. The term “cyberethics” is used to refer to the field of study
that examines moral, legal, and social issues involving cybertechnology. Cybertechnology, in
turn, refers to a broad spectrum of computing/information and communication technologies
that range from stand‐alone computers to the current cluster of networked devices and
applications.
This textbook examines a wide range of cyberethics issues—from specific issues of moral
responsibility that directly affect computer and information technology (IT) professionals to
broader social and ethical concerns that affect each of us in our day‐to‐day lives. Questions
about the roles and responsibilities of computer/IT professionals in developing safe and reliable computer systems are examined under the category of professional ethics. Broader social
and ethical concerns associated with cybertechnology are examined under topics such as privacy, security, crime, intellectual property, Internet regulation, and so forth.
▶ NEW TO THE FIFTH EDITION
New pedagogical material includes:
• Learning objectives, highlighted at the beginning of each chapter, describing the principal student outcomes intended for that chapter
• Beginning-of-chapter scenarios, designed to illustrate one or more of the key themes/issues/controversies examined in that chapter
• Some new in-chapter scenarios (comprising both actual cases and hypothetical situations), which enable students to apply methodological concepts/frameworks and ethical theories introduced in Chapters 1 and 2
• Some new sample arguments, which encourage students to apply the tools for argument analysis introduced in Chapter 3
• Some new end-of-chapter review questions and discussion questions
• Some new end-of-chapter “Scenarios for Analysis,” which can be used either for in-class analysis and group projects or outside-class assignments
New issues examined and analyzed include:
• State-sponsored cyberattacks and their implications for (inter)national security
• Whistle-blowing controversies generated by the leaking of highly sensitive (governmental) information in digital form
• NSA surveillance-related leaks and their implications for both personal privacy and national security
• Privacy threats posed by Big Data
• Challenges posed to the recording industry/artists by the online services that stream digital music
• Ethical and social aspects of the “Internet of Things”
• Disruptions made possible by international “hacktivist” groups such as Anonymous
• Controversies associated with a person’s “right” to have some kinds of personal information about them “erased” from the Internet
In revising the book, I have eliminated some older, now out‐of‐date, material. In some
instances, I have also streamlined the discussion of topics that were examined in greater detail
in previous editions of the book; in these cases, a condensed version of that material, which is
still highly relevant, has been carried over to the present edition.
▶ AUDIENCE AND SCOPE
Because cyberethics is an interdisciplinary field, this textbook aims at reaching several audiences and thus easily runs the risk of failing to meet the needs of any one audience. I have
nonetheless attempted to compose a textbook that addresses the needs of computer science,
philosophy, social/behavioral science, and library/information science students. Computer science students need a clear understanding of the ethical challenges they will face as computer
professionals when they enter the workforce. Philosophy students, on the other hand, should
understand how moral issues affecting cybertechnology can be situated in the field of applied
ethics in general and then analyzed from the perspective of ethical theory. Social science and
behavioral science students will likely want to assess the sociological impact of cybertechnology
on our social and political institutions (government, commerce, and education) and sociodemographic groups (affecting gender, race, ethnicity, and social class). And library/information science students should be aware of the complexities and nuances of current intellectual property
laws that threaten unfettered access to electronic information and should be informed about
recent regulatory schemes that threaten to censor certain forms of electronic speech.
Students from other academic disciplines should also find many issues covered in this
textbook pertinent to their personal and professional lives; some undergraduates may elect to
take a course in social and ethical aspects of technology to satisfy one of their general education requirements. Although Ethics and Technology is intended mainly for undergraduate students, it could be used, in conjunction with other texts, in graduate courses as well.
We examine ethical controversies using scenarios that include both actual cases and hypothetical examples, wherever appropriate. In some instances, I have deliberately constructed
provocative scenarios and selected controversial cases to convey the severity of the ethical
issues we consider. Some readers may be uncomfortable with, and possibly even offended by,
these scenarios and cases—for example, those illustrating unethical practices that negatively
affect children and minorities. Although it might have been politically expedient to skip over
issues and scenarios that could unintentionally offend certain individuals, I believe that no
textbook in applied ethics would do justice to its topic if it failed to expose and examine issues
that adversely affect vulnerable groups in society.
Also included in most chapters are sample arguments that are intended to illustrate
some of the rationales that have been put forth by various interest groups to defend policies
and laws affecting privacy, security, property, and so forth in cyberspace. Instructors and
students can evaluate these arguments via the rules and criteria established in Chapter 3 to
see how well, or how poorly, the premises in these arguments succeed in establishing their
conclusions.
Exercise questions are included at the end of each chapter. First, basic “review questions”
quiz the reader’s comprehension of key concepts, themes, issues, and scenarios covered in that
chapter. These are followed by higher‐level “discussion questions” designed to encourage students to reflect more deeply on some of the controversial issues examined in the chapter.
Building on the higher‐level nature of the discussion questions, “Scenarios for Analysis” are
also included at the end of each chapter. These “unanalyzed scenarios” provide students and
instructors with additional resources for analyzing important controversies introduced in the
various chapters. For example, these scenarios can be used as in‐class resources for group
projects.
Some discussion questions and end‐of‐chapter scenarios ask students to compare and
contrast arguments and topics that span multiple chapters; for instance, students are asked to
relate arguments used to defend intellectual property rights, considered in Chapter 8, to arguments for protecting privacy rights, examined in Chapter 5. Other questions and scenarios ask
students to apply foundational concepts and frameworks, such as ethical theories and critical
reasoning techniques introduced in Chapters 2 and 3, to the analysis of specific cyberethics
issues examined in subsequent chapters. In some cases, these end‐of‐chapter questions and
scenarios may generate lively debate in the classroom; in other cases, they can serve as a point
of departure for various class assignments and group projects. Although no final “solutions” to
the issues and dilemmas raised in these questions and scenarios are provided in the text, some
“strategies” for analyzing them are included in the section of the book’s Web site (www.wiley.com/college/tavani) titled “Strategies for Discussion Questions.”
▶ ORGANIZATION AND STRUCTURE OF THE BOOK
Ethics and Technology is organized into 12 chapters. Chapter 1, “Introduction to Cyberethics:
Concepts, Perspectives, and Methodological Frameworks,” defines key concepts and terms
that will appear throughout the book. For example, definitions of terms such as cyberethics
and cybertechnology are introduced in this chapter. We then consider the question of whether
any ethical issues involving cybertechnology are unique ethical issues. Next, we show how
cyberethics issues can be approached from three different perspectives: professional ethics,
philosophical ethics, and sociological/descriptive ethics, which represent the approaches generally taken by a computer scientist, a philosopher, and a social/behavioral scientist, respectively.
Chapter 1 concludes with a proposal for a comprehensive and interdisciplinary methodological scheme for analyzing cyberethics issues from these perspectives.
In Chapter 2, “Ethical Concepts and Ethical Theories: Frameworks for Analyzing Moral
Issues,” we examine some of the basic concepts that make up a moral system. We draw a
distinction between ethics and morality by defining ethics as “the study of morality.”
“Morality,” or a moral system, is defined as an informal, public system comprising rules of
conduct and principles for evaluating those rules. We then examine consequence‐based,
duty‐based, character‐based, and contract‐based ethical theories. Chapter 2 concludes with
a model that integrates elements of competing ethical theories into one comprehensive and
unified theory.
Chapter 3, “Critical Reasoning Skills for Evaluating Disputes in Cyberethics,” includes an
overview of basic concepts and strategies that are essential for debating moral issues in a structured and rational manner. We begin by describing the structure of a logical argument and
show how arguments can be constructed and analyzed. Next, we examine a technique for distinguishing between arguments that are valid and invalid, sound and unsound, and inductive
and fallacious. We illustrate examples of each type with topics affecting cybertechnology and
cyberethics. Finally, we identify some strategies for spotting and labeling “informal logical fallacies” that frequently occur in everyday discourse.
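To give a flavor of the kind of analysis Chapter 3 develops, consider the following illustration (a minimal sketch of our own, not drawn verbatim from the chapter) contrasting a valid argument form, modus ponens, with its invalid look-alike, affirming the consequent:

Valid (modus ponens):
  Premise 1. If an act of downloading deprives an artist of royalties, then it is morally questionable.
  Premise 2. This act of downloading deprives an artist of royalties.
  Conclusion. This act of downloading is morally questionable.

Invalid (affirming the consequent):
  Premise 1. If an act of downloading deprives an artist of royalties, then it is morally questionable.
  Premise 2. This act of downloading is morally questionable.
  Conclusion. This act of downloading deprives an artist of royalties.

In the valid form, the conclusion cannot be false when both premises are true; in the invalid form, the conclusion could be false even when both premises are true (the act might be questionable for some other reason), which is precisely the kind of defect the chapter’s evaluation technique is designed to expose.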
Chapter 4, “Professional Ethics, Codes of Conduct, and Moral Responsibility,” examines
issues related to professional responsibility for computer/IT professionals. We consider
whether there are any special moral responsibilities that computer/IT professionals have as
professionals. We then examine some professional codes of conduct that have been adopted
by computer organizations. We also ask: To what extent are software engineers responsible for
the reliability of the computer systems they design and develop, especially applications that
include “life‐critical” and “safety‐critical” software? We then ask whether computer/IT professionals are permitted, or perhaps even required, to “blow the whistle” when they have reasonable evidence to suggest that a computer system is unreliable. Finally, we consider whether
some computer corporations might have special moral responsibilities because of the nature
of the products they develop or services they provide.
We discuss privacy issues involving cybertechnology in Chapter 5. First, we examine the
concept of privacy as well as some arguments for why privacy is considered an important
human value. We then look at how personal privacy is threatened by the kinds of surveillance
techniques and data‐collection schemes made possible by cybertechnology. Specific data‐
gathering and data‐analysis techniques are examined in detail. We next consider some challenges that “big data,” data mining, and Web mining pose for protecting personal privacy in
public space. In Chapter 5, we also consider whether stronger privacy legislation is needed to
protect online consumers or whether industry self-regulation techniques in conjunction with
privacy-enhancing tools can provide an adequate alternative. We conclude this chapter with an
analysis of the European Union’s “Right to Be Forgotten” principle and identify some challenges it poses for major search engine companies operating in Europe.
Chapter 6, “Security in Cyberspace,” examines security threats in the context of computing and cybertechnology. We begin by differentiating three distinct senses of security: data
security, system security, and network security. Next, we examine some challenges that cloud‐
computing services pose for cybersecurity. We then analyze the concepts of “hacker” and
“hacker ethic,” and we ask whether computer break‐ins can ever be morally justified. In the
final section of this chapter, we differentiate acts of “hacktivism,” cyberterrorism, and information warfare, and we examine some impacts that each has had thus far.
We begin our analysis of cybercrime, in Chapter 7, by asking if it is possible to construct a
profile of a “typical” cybercriminal. We then propose a definition of cybercrime that enables
us to distinguish between “cyberspecific” and “cyber‐related” crimes and show how this distinction can help in formulating more coherent cybercrime laws. We also consider the notion
of legal jurisdiction in cyberspace and examine some of the challenges it poses in prosecuting
cybercrimes that involve interstate and international venues. In addition, we examine some
technological efforts used to combat cybercrime, such as controversial uses of biometric technologies. Chapter 7 concludes with an analysis of the WikiLeaks controversy from the perspective of cybercrime.
One objective of Chapter 8, “Intellectual Property Disputes in Cyberspace,” is to show
why understanding the concept of intellectual property (IP) is important in an era of digital
information. We examine three philosophical/legal theories of property rights and then draw
some key distinctions affecting four legal concepts pertaining to IP: copyrights, patents, trademarks, and trade secrets. We also examine some alternative frameworks such as the Free
Software Foundation (FSF), Open Source Software (OSS), and Creative Commons (CC)
initiatives, and we conclude our analysis of IP issues by arguing for a principle that presumes
in favor of sharing digital information while also acknowledging the legitimate interests of
rights holders.
In Chapter 9, “Regulating Commerce and Speech in Cyberspace,” we draw distinctions
between two different senses of “regulation” as it applies to the Internet: regulating commerce
and regulating speech. We then examine controversies surrounding e‐mail spam, which some
believe can be viewed as a form of “speech” in cyberspace. We also consider whether all forms
of online speech should be granted legal protection; for example, should child pornography,
hate speech, and speech that can cause physical harm to others be tolerated in online forums?
We conclude our examination of Internet‐regulation issues in Chapter 9 with an analysis of
the “net neutrality” controversy.
Chapter 10 examines a wide range of equity and access issues from the perspective of
cybertechnology’s impact for sociodemographic groups (affecting class, race, and gender), as
well as for social/political institutions (such as the government) and social sectors (such as the
workplace). The chapter begins with an analysis of the “digital divide.” We then examine specific equity and access issues affecting disabled persons, racial minorities, and women. Next, we
explore the relationship between cybertechnology and democracy, and we consider whether
the Internet enhances democracy or threatens it. The final section of this chapter examines
some of the social and ethical impacts that cybertechnology has had thus far for employment
in the contemporary workplace.
In Chapter 11, we examine a wide range of ethical issues pertaining to online communities, virtual reality (VR) environments, and artificial intelligence (AI) developments. We begin
by analyzing the impact that cybertechnology has for our traditional understanding of the
concept of community; in particular, we ask whether online communities, such as Facebook
and Twitter, raise any special ethical or social issues. Next, we examine some ethical implications of behavior made possible by virtual environments and VR/augmented reality applications. We then describe the impact that recent developments in AI have for our sense of self
and for what it means to be human. The final section of Chapter 11 questions whether certain
kinds of (highly sophisticated) AI entities may ultimately deserve some degree of moral consideration and thus might cause us to expand our conventional framework of moral obligation
to include those entities.
Chapter 12, the final chapter of Ethics and Technology, examines some ethical challenges
that arise in connection with emerging and converging technologies such as ambient intelligence (AmI) and nanocomputing. This chapter also examines some issues in the emerging
(sub)field of machine ethics. Among the questions considered are whether we should develop
autonomous machines that are capable of making moral decisions and whether we could trust
those machines to always act in our best interests. Chapter 12 concludes with the introduction and brief analysis of a comprehensive (“dynamic”) ethical framework designed to guide researchers and inform policy makers in the development of new and emerging technologies.
A glossary that defines terms commonly used in the context of computer ethics and cyberethics is also included. However, the glossary is by no means intended as an exhaustive list of
such terms. Additional material for this text is available on the book’s Web site: www.wiley.com/college/tavani.
▶ THE WEB SITE FOR ETHICS AND TECHNOLOGY
Seven appendices for Ethics and Technology are available only in online format. Appendices
A to E include the full text of five professional codes of ethics: the ACM Code of Ethics
and Professional Conduct, the Australian Computer Society Code of Ethics, the British
Computer Society Code of Conduct, the IEEE Code of Ethics, and the IEEE‐CS/ACM
Software Engineering Code of Ethics and Professional Practice, respectively. Specific sections of these codes are included in hardcopy format as well, in relevant sections of
Chapter 4. Two appendices, F and G, are also available online. Appendix F contains the
section of the IEEE‐CS/ACM Computing Curricula 2001 Final Report that describes the
social, professional, and ethical units of instruction mandated in their CS curriculum.
Appendix G provides some additional critical reasoning techniques that expand on the
strategies introduced in Chapter 3.
The Web site for Ethics and Technology also contains additional resources for instructors
and students. Presentation slides in PowerPoint format for Chapters 1–12 are available in the
“Instructor” sections of the site. As noted earlier, a section on “Strategies,” which includes
some techniques for answering the discussion questions and unanalyzed scenarios included at
the end of each of the book’s 12 chapters, is also included on this site.
▶ A NOTE TO STUDENTS
If you are taking an ethics course for the first time, you might feel uncomfortable with the
prospect of embarking on a study of moral issues and controversial topics. For example, discussions involving ethical questions are sometimes perceived as “preachy” and judgmental, and
the subject matter of ethics is sometimes viewed as essentially personal and private in nature.
Because these are common concerns, I address them early in the textbook. First, I draw a distinction between an ethicist, who studies morality or a “moral system,” and a moralist, who presumes to have the correct answers to all of the questions; note that a primary objective of this
book is to examine and analyze ethical issues, not to presume that any of us already have the
correct answer to any of the questions we consider.
To accomplish this objective, I introduce three types of conceptual frameworks early in
the textbook. Chapter 1 provides a methodological scheme that enables you to identify controversial problems and issues involving cybertechnology that are ethical in nature. The conceptual scheme included in Chapter 2, employing ethical theories, provides some general
principles that guide your analysis of specific cases as well as your deliberations about which
kinds of solutions to problems should be proposed. A third, and final, conceptual framework
is introduced in Chapter 3 in the form of critical reasoning techniques, which provide rules and
standards that you can use for evaluating the strengths of competing arguments and for
defending a particular position that you reach on a certain issue.
This textbook was designed and written for you, the student! Whether or not it succeeds
in helping you to meet the objectives of a course in cyberethics is very important to me. So I
welcome your feedback on this textbook, and I would sincerely appreciate hearing your ideas
on how this textbook could be improved. Please feel free to email me at htavani@rivier.edu
with your suggestions and comments. I look forward to hearing from you!
▶ NOTE TO INSTRUCTORS: A ROADMAP FOR USING THIS BOOK
The chapters that make up Ethics and Technology are sequenced so that readers are exposed
to foundational issues and conceptual frameworks before they examine specific problems in
cyberethics. In some cases, it may not be possible for instructors to cover all of the material in
Chapters 1–3. It is strongly recommended, however, that before students are assigned materials in Chapters 4–12, they at least read Sections 1.1, 1.4, 1.5, and 2.4. Instructors using this textbook can determine which chapters best accommodate their specific course objectives.
CS instructors, for example, will likely want to assign Chapter 4, on professional ethics and
responsibility, early in the term. Philosophy instructors, on the other hand, may wish to begin
their courses with a thorough examination of the materials on ethical theories and critical
reasoning skills included in Chapters 2 and 3. Whereas library/information science instructors
may wish to begin their classes by examining issues in Chapters 8 and 9, on intellectual property and Internet regulation, social science instructors will likely want to examine issues discussed in Chapters 10 and 11 at an early period in their course. Issues discussed in Chapter 12
may be of particular interest to instructors teaching advanced undergraduate courses, as well
as graduate-level courses.
Many textbooks in applied ethics include a requisite chapter on ethical concepts/theories
at the beginning of the book. Unfortunately, they often treat them in a cursory manner; furthermore, these ethical concepts and theories are seldom developed and reinforced in the
remaining chapters. Thus, readers often experience a “disconnect” between the material
included in the book’s opening chapter and the content of the specific cases and issues discussed in subsequent chapters. By incorporating relevant aspects of ethical theory into our
analysis of the specific cyberethics issues that we examine in this book, I believe that I have
succeeded in avoiding the “disconnect” between theory and practice that is commonplace in
many applied ethics textbooks.
▶ A NOTE TO COMPUTER SCIENCE INSTRUCTORS
Ethics and Technology can be used as the main text in a course dedicated to ethical and social
issues in computing, or it can be used as a supplementary textbook for computer science
courses in which one or more ethics modules are included. As I suggested in the preceding
section, instructors may find it difficult to cover all of the material included in this book in the
course of a single semester. And as I also previously suggested, computer science instructors
will likely want to ensure that they allocate sufficient course time to the professional ethical
issues discussed in Chapter 4. Also of special interest to computer science instructors and their
students will be the sections on open‐source code and intellectual property issues in Chapter 8
and regulatory issues affecting software code in Chapter 9.
Because computer science instructors may need to limit the amount of class time they
devote to covering foundational concepts included in the earlier chapters, I recommend covering at least the critical sections of Chapters 1–3 described previously. This should provide
computer science students with some of the tools they will need as professionals to deliberate
on ethical issues and to justify the positions they reach.
In designing this textbook, I took into account the guidelines on ethical instruction
included in the Computing Curricula 2001 Final Report, issued in December 2001 by the
IEEE‐CS/ACM Joint Task Force on Computing Curricula, which recommends the inclusion of
16 core hours of instruction on social, ethical, and professional topics in the curriculum for
undergraduate computer science students. (See the online Appendix F at www.wiley.com/
college/tavani for detailed information about the social/professional (SP) units in the
Computing Curricula 2001.) Each topic, prefaced with an SP designation, defines one “knowledge area” of the CS “body of knowledge.” These topics are distributed among the following 10 units:
SP1: History of computing (e.g., history of computer hardware, software, and networking)
SP2: Social context of computing (e.g., social implications of networked computing, gender‐
related issues, and international issues)
SP3: Methods and tools of analysis (e.g., identifying assumptions and values, making and
evaluating ethical arguments)
SP4: Professional and ethical responsibilities (e.g., the nature of professionalism, codes of
ethics, ethical dissent, and whistle‐blowing)
SP5: Risks and liabilities of computer‐based systems (e.g., historical examples of software
risks)
SP6: Intellectual property (e.g., foundations of intellectual property, copyrights, patents, and
software piracy)
SP7: Privacy and civil liberties (e.g., ethical and legal basis for privacy protection, technological strategies for privacy protection)
SP8: Computer crime (e.g., history and examples of computer crime, hacking, viruses, and
crime prevention strategies)
SP9: Economic issues in computing (e.g., monopolies and their economic implications; effect
of skilled labor supply)
SP10: Philosophical frameworks (e.g., ethical theory, utilitarianism, relativism)
All 10 SP units are covered in this textbook. Topics described in SP1 are examined in
Chapters 1 and 10, and topics included in SP2 are discussed in Chapters 1 and 11. The methods
and analytical tools mentioned in SP3 are discussed at length in Chapters 2 and 3, whereas
professional issues involving codes of conduct and professional responsibility described in SP4
are included in Chapters 4 and 12. Also discussed in Chapter 4, as well as in Chapter 6, are
issues involving risks and liabilities (SP5). Intellectual property issues (SP6) are discussed in
detail in Chapter 8 and in certain sections of Chapter 9, whereas privacy and civil liberty concerns (SP7) are discussed mainly in Chapters 5 and 12. Chapters 6 and 7 examine topics
described in SP8. Economic issues (SP9) are considered in Chapters 9 and 10. And philosophical frameworks of ethics, including ethical theory (SP10), are discussed in Chapters 1 and 2.
Table 1 illustrates the corresponding connection between SP units and the chapters of this
book.
TABLE 1  SP (“Knowledge”) Units and Corresponding Book Chapters

SP unit     Chapter(s)
SP1         1, 10
SP2         1, 11
SP3         2, 3
SP4         4, 12
SP5         4, 6
SP6         8, 9
SP7         5, 12
SP8         6, 7
SP9         9, 10
SP10        1, 2
▶ ACKNOWLEDGMENTS
In revising Ethics and Technology for a fifth edition, I have once again drawn from some concepts, distinctions, and arguments introduced in several of my previously published works.
Where appropriate, these works are acknowledged in the various chapters in which they
are cited.
The fifth edition of this book has benefited from suggestions and comments I received
from many anonymous reviewers, as well as from Maria Bottis, Jeff Buechner, Lloyd Carr,
Frances Grodzinsky, and Regina Tavani. I am especially grateful to Fran Grodzinsky (Sacred
Heart University), with whom I have coauthored several papers, for permitting me to incorporate elements of our joint research into relevant sections of this book. I am also grateful to the
numerous reviewers and colleagues who have commented on the previous editions of this
book; many of their helpful suggestions have been carried over to the present edition. I also
wish to acknowledge many former students whose helpful comments and suggestions on previous editions of the text have benefited the present edition.
I also wish to thank the editorial/production staff at John Wiley & Sons, especially Bryan
Gambrell, Beth Golub, Loganathan Kandan, Gladys Soto, Mary Sullivan, and Nichole Urban
for their support during the various stages of the revision process for the fifth edition of Ethics
and Technology.
Finally, I must once again thank the most important person in my life: my wife, Joanne.
Without her continued support and extraordinary patience, the fifth edition of this book could
not have been completed. This edition of Ethics and Technology is dedicated to my daughter
Regina, and son‐in‐law Joe.
Herman T. Tavani
Nashua, NH
FOREWORD
The computer/information revolution is shaping our world in ways it has been difficult to predict and to appreciate. When mainframe computers were developed in the 1940s and 1950s,
some thought only a few computers would ever be needed in society. When personal computers were introduced in the 1980s, they were considered fascinating toys for hobbyists but not
something serious businesses would ever use. When Web tools were initially created in the
1990s to enhance the Internet, they were a curiosity. Using the Web to observe the level of a
coffee pot across an ocean was intriguing, at least for a few moments, but not of much practical
use. Today, armed with the wisdom of hindsight, the impact of such computing advancements
seems obvious, if not inevitable, to all of us. What government claims that it does not need
computers? What major business does not have a Web address? How many people, even in the
poorest of countries, are not aware of the use of cell phones?
The computer/information revolution has changed our lives and has brought with it significant ethical, social, and professional issues; consider the area of privacy as but one example.
Today, surveillance cameras are abundant, and facial recognition systems are effective even
under less than ideal observing conditions. Information about buying habits, medical conditions, and human movements can be mined and correlated relentlessly using powerful computers. Individuals’ DNA information can easily be collected, stored, and transmitted
throughout the world in seconds. This computer/information revolution has brought about
unexpected capabilities and possibilities. The revolution is not only technological but also ethical, social, and professional. Our computerized world is perhaps not the world we expected,
and, even to the extent that we expected it, it is not a world for which we have well‐analyzed
policies about how to behave. Now more than ever, we need to take cyberethics seriously.
Herman Tavani has written an excellent introduction to the field of cyberethics. His text
differs from others in at least three important respects: First, the book is extraordinarily comprehensive and up to date in its subject matter. The text covers all of the standard topics such
as codes of conduct, privacy, security, crime, intellectual property, and free speech and also
discusses sometimes overlooked subjects such as democracy, employment, access, and the digital divide. Tavani more than anyone else has tracked and published the bibliographical development of cyberethics over many years, and his expertise with this vast literature shines
through in this volume. Second, the book approaches the subject matter of cyberethics from
diverse points of view. Tavani examines issues from a social science perspective, from a philosophical perspective, and from a computing professional perspective, and then he suggests
ways to integrate these diverse approaches. If the task of cyberethics is multidisciplinary, as
many of us believe, then such a diverse but integrated methodology is crucial to accomplishing
the task. His book is one of the few that constructs such a methodology. Third, the book is
unusually helpful to students and teachers because it contains an entire chapter discussing
critical thinking skills and is filled with review and discussion questions.
The cyberage is going to evolve. The future details and applications are, as always, difficult
to predict. But it is likely that computing power and bandwidth will continue to grow while
computing devices themselves will shrink in size to the nanometer scale. More and more information devices will be inserted into our environment, our cars, our houses, our clothing, and us.
Computers will become smarter. They will be made out of new materials, possibly biological.
They will operate in new ways, possibly using quantum properties. The distinction between the
virtual world and the real world will blur more and more. We need a good book in cyberethics
to deal with the present and prepare us for this uncertain future. Tavani’s Ethics and Technology
is such a book.
James H. Moor
Dartmouth College
CHAPTER 1
Introduction to Cyberethics: Concepts, Perspectives, and Methodological Frameworks
LEARNING OBJECTIVES
Upon completing this chapter, you will be able to:
• Define cybertechnology and identify a wide range of technologies and devices that fall under that category,
• Define cyberethics and describe a cluster of moral, social, and legal issues that can be analyzed within that branch of applied ethics,
• Articulate key aspects of four distinct phases in the historical development and evolution of cybertechnology and cyberethics,
• Determine whether any of the ethical issues generated by cybertechnology are genuinely unique ethical issues, or whether they are simply new variations of traditional ethical issues,
• Differentiate among three distinct applied ethics perspectives—professional ethics, philosophical ethics, and sociological/descriptive ethics—that can be used to analyze the wide range of cyberethics issues examined in this book,
• Explain the components of a comprehensive methodological framework that we will use in our analysis of cyberethics issues in later chapters of this book.
Our primary objective in Chapter 1 is to introduce some foundational concepts and methodological frameworks that we will use to evaluate specific cyberethics issues examined in detail
in subsequent chapters. We begin by reflecting on a scenario that briefly illustrates a cluster of
ethical issues that arise in a recent controversy involving the use of cybertechnology.
▶ SCENARIO 1–1: Hacking into the Mobile Phones of Celebrities
In September 2014, one or more anonymous intruders hacked into the online accounts of the mobile
phones of more than 100 celebrities, including actress Jennifer Lawrence and model Kate Upton. Nude
photos of some of these celebrities were subsequently leaked to the Internet via the 4Chan Web site. The
hacker(s) had allegedly broken into Apple Corporation’s iCloud (a file‐sharing service that enables users
to store their data), gaining access to the controversial pictures. Some of the celebrities whose accounts were
hacked had previously deleted the photos on their physical devices and thus assumed that these pictures
no longer existed.
Whereas some of the affected celebrities claimed that the nude photos of them were fake images,
others admitted that the controversial pictures were authentic. Some of these celebrities threatened to
bring legal action against anyone who posted nude photos of them on the Internet; for example, Jennifer
Lawrence, through her spokesperson, warned that she would pursue criminal prosecution against those
individuals.
In response to the intense media coverage generated by the hacking and leaking of the celebrities’
photos, spokespersons for both Apple and the Federal Bureau of Investigation (FBI) announced that
investigations into this incident were underway.1
This scenario raises a number of ethical, legal, and social issues affecting digital technology and cyberspace. One major concern involves privacy; in fact, Lawrence’s attorney described
the hacking incident as a “flagrant violation” of his client’s privacy. Other issues that arise
in this scenario involve property rights—for example, are the leaked photos in question solely
the property of the celebrities (as in the case of the physical electronic devices these
celebrities own)? Or does the fact that those photos also reside in the cloud alter their status
as the sole property of an individual? Also at issue in this scenario are questions concerning
(cyber)security—how secure is the personal data stored on our devices or in a storage service
space such as the cloud? Other aspects of this controversial incident can be analyzed from the
perspective of (cyber)crime; for example, some have suggested that this kind of cyber intrusion is not simply a hacking incident, or merely an instance of online harassment, but is also a
serious “sex crime.”
The hacking scenario involving the celebrities’ photos provides us with a context in which we
can begin to think about a cluster of ethical issues—privacy, property, security, crime, harassment,
and so forth—affecting the use of electronic devices, in particular, and cybertechnology in general.
A number of alternative scenarios and examples could also have been used to illustrate many of
the same moral and legal concerns that arise in connection with digital technology. In fact, examples abound. One has only to read a daily newspaper or view regular television news programs
to be informed about controversial issues involving electronic devices and the Internet, including
questions that pertain to property rights, privacy violations, security, anonymity, and crime. Ethical
aspects of these and other issues are examined in the 12 chapters comprising this textbook. In the
remainder of Chapter 1, however, we identify and examine some key foundational concepts and
methodological frameworks that can better help us to analyze issues in cyberethics.
▶ 1.1 DEFINING KEY TERMS: CYBERETHICS AND CYBERTECHNOLOGY
Before we propose a definition of cyberethics, it is important to note that the field of cyberethics can be viewed as a branch of (applied) ethics. In Chapter 2, where we define ethics as “the
study of morality,” we provide a detailed account of what is meant by morality and a moral
system, and we also focus on some important aspects of theoretical, as opposed to applied, ethics. For example, both ethical concepts and ethical theories are also examined in detail in
that chapter. There, we also include a “Getting Started” section on how to engage in ethical
reasoning in general, as well as reasoning in the case of some specific moral dilemmas. In
Chapter 1, however, our main focus is on clarifying some key cyber and cyber‐related terms
that will be used throughout the remaining chapters of this textbook.
For our purpose, cyberethics can be defined as the study of moral, legal, and social issues
involving cybertechnology. Cyberethics examines the impact of cybertechnology on our social,
legal, and moral systems, and it evaluates the social policies and laws that have been framed in
response to issues generated by its development and use. To grasp the significance of these reciprocal relationships, it is important to understand what is meant by the term cybertechnology.
1.1.1 What Is Cybertechnology?
Cybertechnology, as used throughout this textbook, refers to a wide range of computing and
communication devices, from stand‐alone computers to connected, or networked, computing
and communication technologies. These technologies include, but need not be limited to,
devices such as “smart” phones, iPods, (electronic) tablets, personal computers (desktops and
laptops), and large mainframe computers. Networked devices can be connected directly to the
Internet, or they can be connected to other devices through one or more privately owned
computer networks. Privately owned networks, in turn, include local‐area networks (LANs)
and wide‐area networks (WANs). A LAN is a privately owned network of computers that
spans a limited geographical area, such as an office building or a small college campus. WANs,
on the other hand, are privately owned networks of computers that are interconnected
throughout a much broader geographic region.
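For readers comfortable with a bit of code, the public/private distinction can be made concrete with a minimal sketch, written here in Python 3 using only its standard ipaddress module (the sample addresses are arbitrary illustrations): addresses reserved for private networks, of the kind LANs and WANs typically use, are programmatically distinguishable from public Internet addresses.

    # Minimal sketch: distinguish reserved private-network addresses from public ones.
    # The sample addresses are arbitrary; 192.168.x.x and 10.x.x.x fall in ranges
    # reserved for private networks (RFC 1918), so they are not routable on the
    # public Internet.
    import ipaddress

    for addr in ["192.168.1.10", "10.0.0.5", "93.184.216.34"]:
        ip = ipaddress.ip_address(addr)
        kind = "private network address" if ip.is_private else "public Internet address"
        print(addr, "->", kind)

This is one technical reason a machine on a LAN is not directly reachable from the public Internet unless it is deliberately exposed.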
How exactly are LANs and WANs different from the Internet? In one sense, the Internet
can be understood as the network of interconnected computer networks. A synthesis of contemporary information and communications technologies, the Internet evolved from an earlier
U.S. Defense Department initiative (in the 1960s) known as the ARPANET. Unlike WANs
and LANs, which are privately owned computer networks, the Internet is generally considered
to be a public network, in the sense that much of the information available on the Internet
resides in “public space” and is thus available to anyone. The Internet, which should be differentiated from the World Wide Web, includes several applications. The Web, based on Hypertext
Transfer Protocol (HTTP), is one application; other applications include File Transfer Protocol
(FTP), Telnet, and e‐mail. Because many users navigate the Internet by way of the Web, and
because the majority of users conduct their online activities almost exclusively on the Web
portion of the Internet, it is very easy to confuse the Web with the Internet.
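The point that the Web is only one application among several running over the Internet can also be shown in code. The short Python 3 sketch below (standard library only; example.com is a generic test hostname) contrasts an Internet-level operation, resolving a hostname, with a Web-specific one, fetching a page over HTTP:

    # Minimal sketch: the Internet vs. the Web.
    # Hostname resolution (DNS) serves any Internet application (e-mail, FTP, etc.);
    # HTTP is the particular application protocol on which the Web is based.
    import socket
    import urllib.request

    ip = socket.gethostbyname("example.com")  # Internet-level: useful to any protocol
    print("example.com resolves to", ip)

    with urllib.request.urlopen("http://example.com/") as response:  # Web-level: HTTP
        print("HTTP status:", response.status)

E-mail and FTP clients rely on the same name resolution and underlying network but never speak HTTP, which is one more reason the Web and the Internet should not be conflated.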
The Internet and privately owned computer networks, such as WANs and LANs, are
perhaps the most common and well‐known examples of cybertechnology. However,
“cybertechnology” is used in this book to represent the entire range of computing and communication systems, from stand‐alone computers to privately owned networks and to the
Internet itself. “Cyberethics” refers to the study of moral, legal, and social issues involving
those technologies.
1.1.2 Why the Term Cyberethics?
Many authors have used the term “computer ethics” to describe the field that examines moral
issues pertaining to computing and information technologies (see, e.g., Barger 2008;
Johnson 2010). Others use the expression “information ethics” (e.g., Capurro 2007) to refer to
a cluster of ethical concerns regarding the flow of information that is either enhanced or
restricted by computer technology.2 And because of concerns about ethical issues involving
the Internet in particular, some have also used the term “Internet ethics” (see, e.g.,
Langford 2000). As we shall see, however, there are some disadvantages to using each of these
expressions, especially insofar as each fails to capture the wide range of moral issues involving
cybertechnology.3
For our purposes, “cyberethics” is more appropriate and more accurate than “computer
ethics” for two reasons. First, the term “computer ethics” can connote ethical issues associated
with computing machines and thus could be construed as pertaining to stand‐alone or “unconnected computers.” Because computing technologies and communication technologies have
converged in recent years, resulting in networked systems, a computer system may now be
thought of more accurately as a new kind of medium than as a machine. Second, the term
“computer ethics” might also suggest a field of study that is concerned exclusively with ethical
issues affecting computer/information technology (IT) professionals. Although these issues
are very important and are examined in detail in Chapter 4 as well as in relevant sections of
Chapters 6 and 12, we should note that the field of cyberethics is not limited to an analysis of
moral issues that affect only professionals.
“Cyberethics” is also more accurate, for our purposes, than “information ethics.” For
one thing, the latter expression is ambiguous because it can mean a specific methodological
framework, Information Ethics (IE), for analyzing issues in cyberethics (Floridi 2007).4 Also,
it can connote a cluster of ethical issues of particular interest to professionals in the fields of
library science and information science (Buchanan and Henderson 2009). In the latter sense,
“information ethics” refers to ethical concerns affecting the free flow of, and unfettered
access to, information, which include issues such as library censorship and intellectual freedom. (These issues are examined in Chapter 9.) Our analysis of cyberethics issues in this
text, however, is not limited to controversies generally considered under the heading “information ethics.”
We will also see why “cyberethics” is preferable to “Internet ethics.” For one thing, the
ethical issues examined in this textbook are not limited to the Internet; they also include privately owned computer networks and interconnected communication technologies—that is,
technologies that we refer to collectively as cybertechnology. Although most of the issues
considered under the heading cyberethics pertain to the Internet or the Web, some issues
examined in this textbook do not involve networks per se; for example, issues associated with
computerized monitoring in the workplace, with professional responsibility for designing reliable computer hardware and software systems, and with the implications of cybertechnology
for gender and race need not involve networked computers and devices. In light of the wide
range of moral issues examined in this book—ethical issues that cut across the spectrum of
devices and communication systems (comprising cybertechnology), from stand‐alone computers to networked systems—the term “cyberethics” is more comprehensive, and thus more
appropriate, than “Internet ethics.”5
Finally, we should note that some issues in the emerging fields of “agent ethics,” “bot ethics,”
“robo‐ethics,” or what Wallach and Allen (2009) call “machine ethics” overlap with a cluster
of concerns examined under the heading of cyberethics. Wallach and Allen define machine
ethics as a field that expands upon traditional computer ethics because it shifts the main area
of focus away from “what people do with computers to questions about what machines do
by themselves.” It also focuses on questions having to do with whether computers can be
autonomous agents capable of making good moral decisions. Research in machine ethics overlaps with the work of interdisciplinary researchers in the field of artificial intelligence (AI).6
We examine some aspects of this emerging field (or subfield of cyberethics) in Chapters 11
and 12.
▶ 1.2 THE CYBERETHICS EVOLUTION: FOUR DEVELOPMENTAL PHASES IN CYBERTECHNOLOGY
In describing the key evolutionary phases of cybertechnology and cyberethics, we begin by
noting that the meaning of “computer” has evolved significantly since the 1940s. If you were to
look up the meaning of that word in a dictionary written before World War II, you would most
likely discover that a computer was defined as a person who calculated numbers. In the time
period immediately following World War II, the term “computer” came to be identified with a
(calculating) machine as opposed to a person (who calculated).7 By the 1980s, however, computers had shrunk in size considerably and they were beginning to be understood more in
terms of desktop machines (that manipulated symbols as well as numbers), or as a new kind of
medium for communication, rather than simply as machines that crunch numbers. As computers
became increasingly connected to one another, they came to be associated with metaphors
such as the “information superhighway” and cyberspace; today, many ordinary users tend to
think about computers in terms of various Internet‐ and Web‐based applications made possible by cybertechnology.
In response to some social and ethical issues that were anticipated in connection with the
use of electronic computers, the field that we now call cyberethics had its informal and humble
beginnings in the late 1940s. It is interesting to note that during this period—when ENIAC
(Electronic Numerical Integrator and Computer), the first electronic computer, developed at
the University of Pennsylvania, became operational in 1946—some analysts confidently predicted that no more than five or six computers would ever need to be built. It is also interesting
to point out that during this same period, a few insightful thinkers had already begun to
describe some social and ethical concerns that would likely arise in connection with computing
and cybertechnology.8 Although still a relatively young academic field, cyberethics has now
matured to a point where several articles about its historical development have appeared in
books and scholarly journals. For our purposes, the evolution of cyberethics can be summarized in four distinct technological phases.9
Phase 1 (1950s and 1960s): Large (Stand‐Alone) Mainframe Computers
In Phase 1, computing technology consisted mainly of huge mainframe computers, such as
ENIAC, that were “unconnected” and thus existed as stand‐alone machines. One set of ethical
and social questions raised during this phase had to do with the impact of computing machines
as “giant brains.” Today, we might associate these kinds of questions with the field of artificial
intelligence (AI). The following kinds of questions were introduced in Phase 1: Can machines
think? If so, should we invent thinking machines? If machines can be intelligent entities, what
does this mean for our sense of self? What does it mean to be human?
Another set of ethical and social concerns that arose during Phase 1 could be catalogued
under the heading of privacy threats and the fear of Big Brother. For example, some people in
the United States feared that the federal government would set up a national database in
which extensive amounts of personal information about its citizens would be stored as electronic records. A strong centralized government could then use that information to monitor
and control the actions of ordinary citizens. Although networked computers had not yet come
on to the scene, work on the ARPANET—the Internet’s predecessor, which was funded by an
agency in the U.S. Defense Department—began during this phase, in the 1960s.
Phase 2 (1970s and 1980s): Minicomputers and Privately Owned Networks
In Phase 2, computing machines and communication devices in the commercial sector began to
converge. This convergence, in turn, introduced an era of computer/communications networks.
Mainframe computers, minicomputers, microcomputers, and personal computers could now
be linked together by way of one or more privately owned computer networks such as LANs
and WANs (see Section 1.1.1), and information could readily be exchanged between and
among databases accessible to networked computers.
Ethical issues associated with this phase of computing included concerns about personal
privacy, intellectual property (IP), and computer crime. Privacy concerns, which had emerged
during Phase 1 because of worries about the amount of personal information that could be
collected by government agencies and stored in a centralized government‐owned database,
were exacerbated because electronic records containing personal and confidential information could now also easily be exchanged between two or more commercial databases in the
private sector. Concerns affecting IP and proprietary information also emerged during this
phase because personal (desktop) computers could be used to duplicate proprietary software
programs. And concerns associated with computer crime appeared during this phase because
individuals could now use computing devices, including remote computer terminals, to break
into and disrupt the computer systems of large organizations.
Phase 3 (1990–Present): The Internet and World Wide Web
During Phase 3, the Internet era, availability of Internet access to the general public
increased significantly. This was facilitated, in no small part, by the development and phenomenal growth of the World Wide Web in the 1990s. The proliferation of Internet‐ and Web‐based
technologies has contributed to some additional ethical concerns involving computing technology; for example, issues of free speech, anonymity, jurisdiction, and trust have been hotly
disputed during this phase. Should Internet users be free to post any messages they wish on
publicly accessible Web sites or even on their own personal Web pages—in other words, is that
a “right” that is protected by free speech or freedom of expression? Should users be permitted
to post anonymous messages on Web pages or even be allowed to navigate the Web anonymously or under the cover of a pseudonym?
Issues of jurisdiction also arose because there are no clear national or geographical
boundaries in cyberspace; if a crime occurs on the Internet, it is not always clear where—that
is, in which legal jurisdiction—it took place and thus it is unclear where it should be prosecuted. And as e‐commerce emerged during this phase, potential consumers initially had concerns about trusting online businesses with their financial and personal information. Other
ethical and social concerns that arose during Phase 3 include disputes about the public vs.
private aspects of personal information that has become increasingly available on the Internet.
Concerns of this type have been exacerbated by the amount of personal information included
on social networking sites, such as Facebook and Twitter, and on other kinds of interactive
Web‐based forums made possible by “Web 2.0” technology (described in Chapter 11).
We should note that during Phase 3, both the interfaces used to interact with computer
technology and the devices used to “house” it were still much the same as in Phases 1 and 2.
A computer was still essentially a “box,” that is, a CPU, with one or more peripheral devices,
such as a video screen, keyboard, and mouse, serving as interfaces to that box. And computers
were still viewed as devices essentially external to humans, as things or objects “out there.”
As cybertechnology continues to evolve, however, it may no longer make sense to try to understand computers simply in terms of objects or devices that are necessarily external to us.
Instead, computers will likely become more and more a part of who or what we are as human
beings. For example, Moor (2005) notes that computing devices will soon be a part of our
clothing and even our bodies. This brings us to Phase 4.
Phase 4 (Present–Near Future): Converging and Emerging Technologies
Presently, we are on the threshold of Phase 4, a point at which we have begun to experience an
unprecedented level of convergence of technologies. We have already witnessed aspects of technological convergence beginning in Phase 2, where the integration of computing and communication
devices resulted in privately owned networked systems, as we noted previously. And in Phase 3,
the Internet era, we briefly described the convergence of text, video, and sound technologies on the
Web, and we noted how the computer began to be viewed much more as a new kind of medium
than as a conventional type of machine. The convergence of information technology and biotechnology in recent years has resulted in the emerging fields of bioinformatics and computational
genomics; this has also caused some analysts to question whether computers of the future will still
be silicon based or whether some may also possibly be made of biological materials. Additionally,
biochip implant technology, which has been enhanced by developments in AI research (described
in Chapter 11), has led some to predict that in the not‐too‐distant future it may become difficult for
us to separate certain aspects of our biology from our technology.
Today, computers are also ubiquitous or pervasive; that is, they are “everywhere” and they
permeate both our workplace and our recreational environments. Many of the objects that we
encounter in these environments are also beginning to exhibit what Brey (2005) and others
call “ambient intelligence,” which enables “smart objects” to be connected to one another via
wireless technology. Some consider radio‐frequency identification (RFID) technology (described in detail in Chapter 5) to be the first step in what is now referred to as the Internet of Things (IoT), as well as pervasive or ubiquitous computing (described in detail in Chapter 12).

TABLE 1-1  Summary of Four Phases of Cyberethics

Phase 1 (1950s–1960s)
Technological features: stand‐alone machines (large mainframe computers).
Associated issues: artificial intelligence (AI), database privacy (“Big Brother”).

Phase 2 (1970s–1980s)
Technological features: minicomputers and the ARPANET; desktop computers interconnected via privately owned networks; not yet widely accessible to the general public.
Associated issues: issues from Phase 1 plus concerns involving intellectual property and software piracy, computer crime, and communications privacy.

Phase 3 (1990s–present)
Technological features: Internet, World Wide Web, and early “Web 2.0” applications, environments, and forums; became accessible to ordinary people.
Associated issues: issues from Phases 1 and 2 plus concerns about free speech, anonymity, legal jurisdiction, and behavioral norms in virtual communities.

Phase 4 (present to near future)
Technological features: convergence of information and communications technologies with nanotechnology and biotechnology, in addition to developments in emerging technologies such as AmI, augmented reality, and 3D printing.
Associated issues: issues from Phases 1–3 plus concerns about artificial electronic agents (“bots”) with decision‐making capabilities, AI‐induced bionic chip implants, nanocomputing, pervasive computing, Big Data, IoT, etc.
What other kinds of technological changes should we anticipate as research and development continue in Phase 4? For one thing, computing devices will likely continue to become
more and more indistinguishable from many kinds of noncomputing devices. For another
thing, a computer may no longer typically be conceived of as a distinct device or object with
which users interact via an explicit interface such as a keyboard, mouse, and video display. We
are now beginning to conceive of computers and cybertechnology in drastically different ways.
Consider also that computers are becoming less visible—as computers and electronic devices
continue to be miniaturized and integrated/embedded in objects, they are also beginning to
“disappear” or to become “invisible” as distinct entities.
Many analysts predict that computers and other electronic devices will become increasingly
smaller in size, ultimately achieving the nanoscale. (We examine some ethical implications of
nanotechnology and nanocomputing in Chapter 12.) Many also predict that aspects of nanotechnology, biotechnology, and information technology will continue to converge. However, we will
not speculate any further in this chapter about either the future of cybertechnology or the future
of cyberethics. The purpose of our brief description of the four phases of cybertechnology mentioned here is to provide a historical context for understanding the origin and evolution of at
least some of the ethical concerns affecting cybertechnology that we will examine in this book.
Table 1-1 summarizes key aspects of each phase in the development of cyberethics as a
field of applied ethics.
▶ 1.3 ARE CYBERETHICS ISSUES UNIQUE ETHICAL ISSUES?
Few would dispute the claim that the use of cybertechnology has had a significant impact on
our moral, legal, and social systems. Some also believe, however, that cybertechnology has
introduced new and unique moral problems. Are any of these problems genuinely unique
moral issues? There are two schools of thought regarding this question.
Consider once again Scenario 1–1, in the chapter’s opening section. Have any new ethical
issues been introduced in the hacking incident described in that scenario? Or are the issues
that arise here merely examples of existing ethical issues that may have been exacerbated in
some sense by new technologies, including new storage systems to archive personal data?
Also, consider some factors having to do with scope and scale: The hacked photos of the celebrities can be seen by millions of people around the world, as opposed to previous cases where
one might have to go to an “adult” store to acquire copies of the nude photos. Also, consider
that harassment-related activities of the kind described in Scenario 1–1 can now occur on a
scale or order of magnitude that could not have been realized in the pre‐Internet era.
But do these factors support the claim that cybertechnology has introduced some new and
unique ethical issues? Maner (2004) argues that computer use has generated a series of ethical
issues that (i) did not exist before the advent of computing and (ii) could not have existed if
computer technology had not been invented.10 Is there any evidence to support Maner’s claim?
Next, we consider two scenarios that, initially at least, might suggest that some new ethical
issues have been generated by the use of cybertechnology.
▶ SCENARIO 1–2: Developing the Code for a Computerized Weapon System
Sally Bright, a recent graduate from Technical University, has accepted a position as a software engineer
for a company called Cyber Defense, Inc. This company has a contract with the U.S. Defense Department
to develop and deliver applications for the U.S. military. When Sally reports to work on her first day, she
is assigned to a controversial project that is developing the software for a computer system designed to
deliver chemical weapons to and from remote locations. Sally is conflicted about whether she can, given
her personal values, agree to work on this kind of weapon delivery system, which would not have been
possible without computer technology.
Is the conflict that Sally faces in this particular scenario one that is new or unique because
of computers and cybertechnology? One might argue that the ethical concerns surrounding
Sally’s choices are unique because they never would have arisen had it not been for the invention of computer technology. In one sense, it is true that ethical concerns having to do with
whether or not one should participate in developing a certain kind of computer system did not
exist before the advent of computing technology. However, it is true only in a trivial sense.
Consider that long before computing technologies were available, engineers were confronted
with ethical choices involving whether or not to participate in the design and development of
certain kinds of controversial technological systems. Prior to the computer era, for example,
they had to make decisions involving the design of aircraft intended to deliver conventional as
well as nuclear bombs. So is the fact that certain technological systems happen to include the
use of computer software or computer hardware components morally relevant in this scenario?
Have any new or unique ethical issues, in a nontrivial sense of “unique,” been generated
here? Based on our brief analysis of this scenario, there does not seem to be…