I AM ATTACHING ALL OF THE DOCUMENTS NEEDED FOR THIS ESSAY, INCLUDING EXAMPLES OF WHAT THE ESSAY SHOULD LOOK LIKE AND DETAILED STEPS ON HOW AND WHAT TO WRITE IN EACH PARAGRAPH. I WILL ALSO BE ATTACHING THE CHOSEN ARTICLE.

The Applied Ethics Academic Article Review
Assignment Rationale
For this assignment, you will read, analyze, and critique an academic research article written
about applied ethics, which is a work of contemporary philosophy (meeting course objective 1).
In the summary component of the paper, you will discuss how a contemporary philosopher
applies those same concepts and principles that we discussed in class to address moral
concerns (meeting course objectives 2, 3, 4, and 5). In the critical analysis component of the
paper, you will discuss how someone might reasonably disagree with what they say (meeting
course objective 7), as well as how someone might apply what they say to various aspects of life
(meeting course objective 6).
Assignment Prompt
This is a 1000-to-1250-word paper (approximately 4-5 pages) that half summarizes
(approximately 500 words) and half critically analyzes (approximately 500 words) a recent
academic article (published in 2018 or later) of your choice on applied ethics. This review of an
academic article will be written as a professor might write a critical review of an academic
book. The word length is important, because 1000 words is the length of a standard academic
book review. You will find that minimizing word length and offering restrained criticism of
academic work will be increasingly important as you take advanced undergraduate and
graduate classes.
Success on the paper will largely depend on how well the paper conforms to the formatting and
content guidelines included in this document. It must be exactly eight paragraphs, in the same
order and with the same content as specified in the guidelines. The review will be graded “like
an English paper” in terms of the quality of the writing itself and “like a philosophy paper” in
terms of the precision of its ethical content, such as how appropriately it defines and uses
philosophical terminology.
The paper must be written about a single academic article published in 2018 or later, on a topic in
applied ethics, or it will receive a zero. Applied ethics refers to the application of ethical
theories to moral issues: the “ethics of” a given topic (for example, the “ethics of” zoos), a given
topic’s “ethics” (for example, “zoo ethics”), or specific applications of ethics (for example, ethics
of keeping elephants in zoos). This major assignment is 20% of the final grade.
You cannot review any of the articles that are reviewed in the sample papers following this prompt.
Style Guide
Formatting
• The review should be written as a continuous narrative: There are no separate sections.
• Please observe standard writing conventions of 2-3 paragraphs per page (double-spaced, 12 pt. Times New Roman, 1” margins).
Language
• The review should be written in academic style.
• The review should be written in the historical present tense (that is, we know the author “wrote” the article in the past, but you’re presenting it as if the author “writes” now).
• The author is doing the action, not “the article.”
• Since you are reviewing an ethics paper, you must incorporate technical philosophical words where appropriate, taking into consideration that different authors may understand those same terms differently and defining those words accordingly.
References and Citations
• Do not use any outside sources in the review, other than the article under review. Cite page numbers only if you quote the article directly, e.g. (p. 29). Cite section numbers, e.g. (sec. 3), if no pages are available or when you reference the sections of the article in the first paragraph.
• Quotes should be used sparingly, if at all. Citations are needed for direct quotes or to point out specific information to the reader. Citations for section numbers are required in the first paragraph.
Content
• Remember, the point of the review is to explain why the article is worth reading to someone who has not yet read the article—but who may be interested in doing so!
• Better reviews will critique the limitations and strengths of an article in terms of clear examples, rather than generalizations. Perhaps limit your discussion to two points of limitation and two points of strength.
• A good measure of success is whether someone who has not read the article can answer basic questions about it: How is it structured? Who wrote it? What is it about? What makes it unique and worth reading? What might make it not worth reading? If someone cannot answer these questions after reading your review, you must rewrite and revise the review.
Content Guide
Academic Articles
The review must be written about an academic research article.
• An article is academic if it is published in an academic journal. An academic journal
publishes peer-reviewed articles written by scholars and professionals. Journals
dedicated to publishing the work of only undergraduate or graduate students are not
true academic journals, and articles published in them are not suitable for the review.
• An article is a research article if it defends a thesis. Some academic journals also publish
book reviews and opinion-editorials. These are not suitable for this assignment.
• Scientific articles written according to the scientific method typically do not make moral
points, making most of them unsuitable for this assignment, but there are exceptions.
Title
The citation of the academic article under review also functions as the title of the paper. This is
a citation style that is often used in ethics papers written by professors.
• The citation is created using a hanging indent.
• Everything, including your name line, is bolded.
• DOI stands for Digital Object Identifier. This functions as the article’s unique serial
number. You must use the direct link to the DOI. If no DOI is available, you will need to
link to the article’s digital “home” on the journal’s website. Do not use links to library
databases.
Title Format
Author Last Name First Initial (year in parentheses) Title of article in all lowercase. Title of
Journal in Uppercase volume number of the journal:page range of article. DOI link
Reviewed by Your First Name and Last Name, undergraduate at Blinn College (or Texas A&M)
Title Example
White TI (2017) Dolphins, captivity, and SeaWorld: the misuse of science. Business and
Society Review 122:119–136. https://doi.org/10.1111/basr.12112
Reviewed by Steve Dezort, instructor at Blinn College
Paragraph 1: Structure and Scope
• The first three sentences need to 1) indicate the issue that the paper is addressing, 2) indicate the author’s research question, in the form of a statement (e.g., “In this article, Thomas White seeks to answer . . .”), and 3) indicate the author’s answer to the research question, the thesis (e.g., “He argues that . . .”).
• The remainder of the paragraph should describe the article’s structure, organization, and sections, referencing those sections in parentheses. For example, “Section 1” of the article would be referenced as (sec. 1).
• The content should be specific: not what the author is doing, but what the author is questioning, describing, explaining, arguing, etc.
• All the sections must be accounted for in parentheses. Subsections should not be referenced.
• Refer to the author by both their first and last names at first mention, and by last name only at subsequent mentions.
• Do not reference the title of the article or the titles of the author. For example, do not refer to the author as Doctor (Dr.), Professor (Prof.), or Mister (Mr.).
Paragraph 2: People, Perspective, and Point
• Describe the perspective from which the author writes the article by mentioning the author’s discipline, and the author’s research specialization within that discipline. For example, the author’s discipline might be philosophy, and their specialization within that discipline might be business ethics.
• You will have to do an “academic background check” to find out the author’s discipline and area of specialization. This “academic background check” will typically lead to a university’s website. You should not reference this as an outside source. It is not plagiarism.
• You should use the same pronouns that the author uses in their self-description.
• The author’s place of employment, job title, and degrees are irrelevant and should not be mentioned.
• Describe the contribution that the author is seeking to make to their research specialization. This is what the author claims they are seeking to contribute.
Paragraphs 3 and 4: Thesis and Argument
• These paragraphs elaborate on the first 3 sentences of paragraph 1 and therefore must match that content.
• Describe the author’s argument in the article.
• Summarize their main point (thesis) and sketch their arguments in support of that main point.
Paragraphs 5 and 6: Points of Limitation
• Limit each paragraph to only 1 point of limitation. Examples include:
o Evaluate the breadth, depth, scope, and limitations of the article.
o Discuss what the author covers and what the author leaves out.
o Evaluate whether the author’s claimed contribution is indeed the contribution they make.
o Evaluate how convincing the author’s arguments are.
• Including an example or two would be helpful to the reader.
• Use language such as “Someone might be interested to know more about …” or “One aspect of the issue that readers would like to know more about is …”
Paragraphs 7 and 8: Points of Usefulness
• Limit each paragraph to only 1 point of strength. Examples include:
o Evaluate the strengths of the article.
o Discuss the interesting insights the author offers.
o Comment on the ways the article is useful and who might find the article useful. Specifically discuss how someone might apply what they say to various aspects of life.
• Including an example or two would be helpful to the reader.
• Paragraph 8 is not a conclusion. The conclusion is the last sentence, so end on a positive note with a sentence on why the article is worth reading and recommended for reading.
Lee C Kahle L (2016) The linguistics of social media: communication of emotions and
values in sport. Sport Marketing Quarterly 25:201–211.
https://fitpublishing.com/articles/linguistics-social-media-communication-emotions-and-values-sport
Reviewed by John Locke, undergraduate at Blinn College
Values in sports are communicated through social media and reflected in sport marketing. This raises the question of how values and emotions influence sport marketing on social media, and the additional question of how sport communication is used to recognize fanbase popularity. Christopher Lee and Lynn Kahle argue that effective communication in tweets reflects the values in sport communication and that managers should recognize the role of consistent language when managing a social media account for fan support. Lee and Kahle first discuss the attractiveness of products in relation to values (“Theoretical Framework”). Next, Lee and Kahle present evidence comparing the linguistics of four MLB teams tweeting their values during significant moments in games (“Study 1: Teams and Linguistic Context of Tweets”). They subsequently proceed to a study of four sport apparel companies to assess how those companies communicate their values and emotions through social media when there is no minute-to-minute variation (“Study 2: Apparel Companies + Linguistic Context of Tweets”). Lee and Kahle conclude with a framework for understanding values in sport that offers a new lens, implying that sport marketers seek to emphasize values and a lifestyle to communicate their desired message to fans (“General Discussion”).
Lee and Kahle are marketing scholars. Lee’s research interests are consumer behavior, framing, linguistics, and marketing communication, and Kahle’s research interests are sustainability, social values, and marketing communication. They both argue that “Social Media offers marketers the opportunity to communicate specific values to ensure communication is consistent, while analyzing linguistics of messaging” (p. 201). They apply Kahle’s List of Values (LOV) to specify how tweets are evaluated. The values consist of excitement, warm relationships, self-fulfillment, peer respect, fun and enjoyment in life, security, self-respect, a sense of belonging, and accomplishment.
Lee and Kahle explain how values and emotions are expressed in sport marketing. Social media has become a central means of interpreting values in order to strengthen sport marketing. The authors examine social media through a first study of four Major League Baseball (MLB) teams. LIWC software applied to individual tweets explores the values and emotions communicated on Twitter, and Kahle’s LOV is used to connect the linguistics of that communication to the nine core values. For example, the Yankees and Red Sox tweeted more about the value of self-respect, whereas the Dodgers and Red Sox oriented their tweets more toward fulfillment. Lee and Kahle extend this evidence by examining the emotional perspective, explaining that the Red Sox and Giants post more emotional tweets to hold viewers’ attention. This supports sport marketers’ use of social media to reinforce emotional appeals in order to forge strong bonds with fans.
Lee and Kahle present their second study, on the values and emotions of apparel companies, to understand the influence of sport marketing. The study of the four largest apparel companies (Nike, Adidas, Under Armour, and Reebok) and their rhetorical tactics on social media helps in understanding how values are analyzed. Kahle’s LOV and LIWC are used to articulate the values and emotions of each apparel company. An example is that Under Armour invests in popular self-validating and fulfillment tweets, as stated in the popular slogan “Protect This House,” which encourages consumers to follow its Twitter account (p. 207). Under Armour presents the most positive and emotional brand overall among the companies studied, which helps promote an increased connection with consumers. By contrast, Nike is the least positive sport apparel brand overall, directing its tweets toward a sense of accomplishment with phrases like “Just Do It” and “Find Your Greatness.” The authors argue that the combination of values and emotions represented in each brand’s sport communication is revealing and independent of the other sport apparel companies in the study (p. 207).
One aspect that Lee and Kahle leave out of the article is the significance of the fanbase and how the fanbase perceives values and emotions. Put differently, one aspect of the issue that readers would like to know more about is why Lee and Kahle do not analyze the values and emotions of the tweets of the consumers and/or fans. Lee and Kahle observe the tweets of the producers (i.e., the companies) and judge how they appeal to society, but they could also have evaluated how consumers react to those tweets to understand, collectively and as a sport market, how values are expressed through communication in sport marketing. Another example is Under Armour, which combines respect and emotional positivity into one category; the open issue is how those positive emotions affect the followers who support its sport market.
Another interesting but neglected aspect is how values and emotions are revealed in social marketing through other sport organizations. For example, Lee and Kahle focus on only four MLB teams and neglect the many other, different organizations, such as NBA and NFL teams, which likewise use tweets to communicate values and certain aspects of linguistics. The study, then, while certainly useful, is nevertheless severely limited in scope. Additionally lacking are the tweets of individual players and their followers, and the linguistics and values each displays.
Another interesting aspect of the study is Kahle’s creation of the List of Values to compare different organizations and what their values are. It explains how focusing on business expansion is central to producing senses of accomplishment and pleasure and to portraying fun and enjoyment. Nike ranks highest among the companies because its tweets convey the strongest sense of accomplishment. Conversely, Adidas produces the greatest volume of pleasure-inducing tweets to convey a sense of enjoyment and fun. One reason Adidas expresses fun and enjoyment is that its clothing lines present a sense of artistic awe to consumers examining the products, bringing excitement to the store with interesting slogans.
Another positive aspect of the article is that Lee and Kahle use Nike as an appropriate example for understanding how values and emotions are reflected in sport marketing. Nike is important for how its tweets are sent out to strengthen a perception of accomplishment among its fanbase. Nike is also an important apparel brand in sports because more consumers purchase items due to the value of accomplishment expressed in its marketing. This article would be fruitful for sport managers to read, because they would better understand how different values are portrayed by organizations and gain insight into which values in particular tweets would be most likely to increase fanbase expansion. This article is worth reading since Lee and Kahle write objectively with facts and figures and demonstrate how other organizations may be assessed based on the tweets they adopt for the purpose of gaining popularity.
Wareham CS (2016) Substantial life extension and the fair distribution of healthspans.
Journal of Medicine and Philosophy 41:521–539.
https://doi.org/10.1093/jmp/jhw021
Reviewed by Thomas Hobbes, undergraduate at Texas A&M
Christopher Wareham begins his article by describing the popular desire for humans to
maintain their youth and extend their lives. In doing so, Wareham asks whether the distribution
of life-extending substances called calorie restriction mimetics (known as CRMs) to the public is
fair, knowing that it would be nearly impossible to distribute the substances fairly amongst
different groups of people. Wareham argues that, assuming there is a way of ensuring fair
distribution of these substances to the public, the distribution of these life-extending substances
is more fair than the alternative of not distributing the substances in the name of equality. To
make this argument, Wareham presents research data on CRMs (sec. 2), presents the prevailing
objection to his argument (sec. 3), argues against a laissez-faire approach as well as the banning
of CRMs (sec. 4), discusses equal provision of CRMs (sec. 5), and presents the argument for
unequal distribution of CRMs (sec. 6).
As an ethics researcher whose focus is bioethics and public health ethics, Wareham
writes to discuss the ethical implications of emerging scientific and nutritional discoveries
surrounding life-extending substances, as they pertain to public consumption. Wareham uses this
background to argue that CRMs should be distributed to the public under the right
circumstances, as well as arguing against competing theories surrounding the debate. For
example, Wareham includes recent studies on the biological effects of CRMs on animals, which are believed to show the positive effects CRMs could have on humans.
Wareham argues for the acceptance of CRMs, claiming that using their ability to rectify the disparities that exist in life length and quality would be an ethically acceptable practice.
Wareham’s primary argument revolves around the theory that a society in which people have the
same length and quality of life is preferable to a society whose people do not. In this sense, an
unequal society includes a society in which some people have longer than average lives and
some have shorter than average lives, the claim being that the extension of some lives over
others is inherently unethical. Wareham then claims that if life extending substances were to
exacerbate existing disparities in life length and quality then their use would be unethical.
Wareham argues against both the laissez-faire approach and the banning of CRMs, as he considers them both to be ineffective and immoral. Wareham claims that the laissez-faire approach is problematic because it would predominantly enhance the lives of the wealthy while the poor are preoccupied with more urgent needs. Wareham also maintains that banning the substances would be unethical: first, a ban would be difficult to enforce, and those with greater resources would likely still have access to CRMs; second, banning substances that could improve the livelihoods of members of the public and treat their ailments would be met with great outcry. Instead, Wareham argues that the most effective solution would be a
program that distributes CRMs to those in need in order to ensure an equal outcome in terms of
life length and quality.
While Wareham explores multiple sides of the ethical debate, there are issues involving the distribution of CRMs that he does not take into account. A government program to distribute CRMs on the basis of need, or what the article refers to as unequal provision, would be funded by the public’s tax dollars. While Wareham is thorough in his approach to fairness, he does not take into account that wealthier areas put more money into tax programs, so their local programs tend to be of higher quality than the programs put into place in lower-income communities. This would lead to wealthier groups having greater access to the substances that enhance their lives, creating an unethical distribution of resources.
When one considers the disadvantages faced by lower-income communities across the world, and specifically across America, it is easy to see how these would manifest as unequal distribution of resources in a program like the one Wareham describes. Given the monumental differences that can be observed across jurisdictions in public schools, medical facilities, and other public services, it is not difficult to imagine how this lack of standardization would affect such an important emerging medical treatment. This, of course, violates the Lockean proviso on which the program is based, because of the unequal distribution and quality of services provided to the public.
Wareham’s distinct organizational skill makes the article easy for someone with no prior knowledge of the topic to understand and helps the reader retain the most important information. As he explains complex moral issues and the many-faceted arguments surrounding them, Wareham condenses massive amounts of information into smaller, digestible chunks for the reader to take advantage of. Wareham elaborates not only on the studies that have led to CRMs but also on the precedents they could set for further medical and nutritional advancements. With his experience in bioethics and public health ethics, Wareham is able to provide insight into the processes for arriving at a solution to the ethical dilemmas faced by innovators in the field, and the long-term consequences that come along with these innovations.
While this may be considered a niche subject, it is more likely to affect our lives in the future than many understand. The general public would be well served to educate themselves on this topic, as it may become very prevalent in the coming years, both as it pertains to CRMs and, more broadly, as it pertains to the future of technological innovation from which we all benefit. This article is worth reading and helpful to all, and its approach to exploring all facets of the ethical debate makes it a well-crafted work of ethical and philosophical writing.
Persson K Selter F Neitzke G Kunzmann P (2020) Philosophy of a “good death” in small
animals and consequences for euthanasia in animal law and veterinary practice.
Animals 10:124–138. https://doi.org/10.3390/ani10010124
Reviewed by Mary Wollstonecraft, undergraduate at Texas A&M University
Kirsten Persson, Felicitas Selter, Gerald Neitzke, and Peter Kunzmann describe euthanasia in veterinary practice as a “good death” and move on to discuss the different factors that may result in a nonhuman euthanasia. The focus of the paper is to bring to light the variety of ethical reasons why companion animals may undergo euthanasia. They accomplish this by describing a variety of definitions and accounts of the meaning of death as well as laws and guidelines regarding the practice. Persson et al. describe the multiple ethics that pertain to euthanasia (sec. 2), refer to the laws that veterinarians must follow based on human interest and public safety (sec. 3), and describe intermediate interests between the patient’s owner and society (sec. 4). Lastly, they provide a discussion that focuses on the perspective of the veterinarian (sec. 5) and finish with a conclusion that wraps up their final points (sec. 6).
The authors share the discipline of philosophy, though Selter’s discipline is neurophilosophy. Meanwhile, each author specializes in animal ethics, with the exception of Neitzke, who specializes in medical and clinical ethics. The authors aim to write about the moral difficulties that come with euthanization within the small animal practice, as well as to explain and diminish the tension within the end-of-life (EOL) debate. To help support these ideas, Persson et al. provide an Austrian study in which veterinarians said they view euthanasia as an “unavoidable evil” and stated that both owners and veterinarians feel guilty after having to put down a pet even when they know it is in the animal’s best interest (sec. 5). In order to ease tension in EOL decisions, the authors include different definitions of the meaning of euthanasia. For example, “proper euthanasia” refers to the painless killing of an animal if death is deemed to be the best decision for that pet, whereas “contextually-justified euthanasia” refers to the killing of a companion animal who could have had a fulfilling life, though the circumstances, either of the owner or of society, do not allow it (sec. 2). These definitions both allow for an ethical euthanasia even though the circumstances are different.
Persson et al. argue that euthanasia is one of the most burdensome practices in the veterinary field, yet an unavoidable tool. They explore the moral stress a veterinarian experiences as a result of the killing of their patients, sometimes having to decide in favor of the owner’s wishes rather than the patient’s well-being. With increasing numbers of owners considering their companion animals family members, veterinary medicine has evolved to help give patients longer, more fulfilling lives. However, this may sometimes go against the animal’s best interest if it is given a negative prognosis. If this is the case, Persson et al. write that prolonging the animal’s life may be more detrimental than beneficial. The authors support this claim by stating that death can prevent a life full of suffering (sec. 2).
Besides having to euthanize a nonhuman on behalf of the owner’s interest, veterinarians must also follow laws and guidelines regarding the practice. For example, Persson et al. write that the painless killing of allegedly dangerous dogs, or of surplus animals in laboratories or animal shelters, is in the interest of the public and therefore justifies the killing under the classification of euthanasia. The authors state that there is not yet a definition of euthanasia in terms of public safety; however, they provide the term “humane killing” to refer to the death, without pain and suffering, of research and laboratory animals. Finally, the authors state that conflicts between the animal’s interest and the owner’s circumstances, like time or money, that result in a euthanasia are a heavy influence on the moral stress of veterinarians.
Though the authors briefly touch on the topic of quantity versus quality of life, readers might be interested to know more about how to improve a companion animal’s quality of life. The authors state that “prolonging life without increasing quality of life may cause more harm than good” (sec. 2), but what does it mean to improve the quality of life? Could it be as simple as physically showing more appreciation for the animal, or perhaps rewarding it after completing a given task? The quality of life is based on the animal’s best interests, not the owner’s; however, many owners may not have the will to put their animal down because of the companionship it brings. If they make no effort to improve the animal’s life, they may slowly be hurting their companion animal more. Persson et al. also fail to explain what may bring value to the companion animal’s life. If increasing the quality of life means increasing what a companion animal values most, then we must understand what is necessary to accomplish that.
Early in the article, Persson et al. write that judgments about the justification of euthanasia for nonhuman animals can be based on different accounts of the meaning of death. However, when this topic comes up, they fail to explain what these different accounts may consist of. They also fail to describe the meaning of death for an animal, which is something a reader may be interested to learn. What might death look like in an animal’s eyes? If not a physical death, could a companion animal ‘die’ from lack of care; that is, can it be mentally overcome by the burden of an owner’s neglect? While the authors fail to look further into these specific topics, they provide valid support for the points they make later in the article.
Hedonism is a theory that describes pleasure as the highest good, and from pleasure, a good life may be lived. Persson et al. describe this theory very well when they distinguish narrow and broad hedonism. Narrow hedonism is described as the absence of negative events like suffering and pain, whereas broad hedonism suggests considering both positive and negative states of life in regard to euthanasia. However, narrow hedonism invites the logical criticism that if a fulfilling life requires a lack of suffering, then the best option would be to kill every animal, since that would ensure the absence of suffering (sec. 2). The authors incorporate this criticism to allow the reader to consider what is truly important in a companion animal’s life and what may lead to an ethical euthanasia. Hedonism is significant because it pertains to the nonhuman’s quality of life, which was previously stated to be a deciding factor for euthanasia. Persson et al. describe the theory in a way that is easy to understand and allows the reader to reflect on the topic.
Persson et al. provide new perspectives on the meaning of euthanasia in veterinary practice. They provide detailed information, with specific points and outside references to help support their claims. Though euthanasia is a controversial topic, the authors explain the ethical reasoning behind it while remaining empathetic to the reader by explaining the moral stress that veterinarians must face. The authors provide an unbiased account in the article, which allows the reader to form their own outlook based on the information. This article is worth reading because it strives to reduce criticism that targets a very controversial, yet highly recognized, veterinary practice.
Society (2021) 58:196–203
https://doi.org/10.1007/s12115-021-00586-8
FEATURED FORUM: WELCOME TO THE DIGITAL ERA: THE IMPACT OF AI ON BUSINESS AND
SOCIETY
Ethical Aspects of the Impact of AI: the Status of Humans in the Era
of Artificial Intelligence
Roman Rakowski, Petr Polak, and Petra Kowalikova
Accepted: 6 May 2021 / Published online: 26 May 2021
© Springer Science+Business Media, LLC, part of Springer Nature 2021
Abstract
On the one hand, AI is a functional tool for emancipating people from routine work tasks, thus expanding the possibilities of their
self-realization and the utilization of individual interests and aspirations through more meaningful spending of time. On the other
hand, there are undisputable risks associated with excessive machine autonomy and limited human control, based on the
insufficient ability to monitor the performance of these systems and to prevent errors or damage (Floridi et al. Minds &
Machines 28, 689–707, 2018). In connection with the use of ethical principles in the research and development of artificial
intelligence, the question of the social control of science and technology opens out into an analysis of the opportunities and risks
that technological progress can mean for security, democracy, environmental sustainability, social ties and community life, value
systems, etc. For this reason, it is necessary to identify and analyse the aspects of artificial intelligence that could have the most
significant impact on society. The present text is focused on the application of artificial intelligence in the context of the market
and service sector, and the related process of exclusion of people from the development, production and distribution of goods and
services. Should the application of artificial intelligence be subject to value frameworks, or can the application of AI be
sufficiently regulated by the market on its own?
Keywords: AI, Big data, Datafication, Commodification of data, Digital ideology, Ethical aspects
Introduction
We live in a period of digital turn, which is often referred to by
media, theorists and experts as the fourth industrial revolution
or Industry 4.0. The 4.0 concept was originally intended in
relation to the field of industry and production, in which there
will be such great changes that the whole social sphere will
subsequently change – as was the case during previous technological revolutions. The opposite is true; it is necessary to
talk more about the inconspicuous technological evolution
that is taking place at all levels of society, not just at the level
of the industry. The reach of modern technology has long
gone beyond research, development and manufacturing and
has completely dominated public and private life to the point
that 4.0 seems to be a society based on the interconnection of
technology, people and data (Big Data). However, this means
that new ethical and political challenges lie in the implementation of new technologies. On the one hand, technologies are
radically changing the environment in which we live, and on
the other hand, without us realizing it, they are also changing
ourselves. In the context of the “digital turn”, a transformation is currently affecting established modern oppositions
such as subject/object, public/private, consumption/production, mind/body, work/leisure, culture/nature and so on
(Chandler & Fuchs, 2019, p. 2).
The initial enthusiasm for scientific discoveries and innovations is seldom marked by fears of the unintended consequences of their practical application. The obstacles considered include the restriction of the field of application by legislative standards or the economic aspects of the transposition
of new technology from laboratory conditions into production
practice. The existence of some difference between technological possibilities and their implementation in an environment
limited by economic, legal, and organizational factors is widely accepted. However, a silent precondition for the introduction of technological innovations is their presumed benefit for
individuals, social groups, or society as a whole. Possible
negative consequences remain below the threshold of discrimination, provided that they do not directly conflict with binding legislative or social standards in general and can be offset
by positive effects in the relevant area. However, the more
rapid the technological development and the more important
the social role that new technologies play, the more carefully
their impacts on an individual’s life and the functioning of
individual social subsystems should be considered
(Matochova et al., 2019, p. 229; Kowalikova et al., 2020,
pp. 631–636).
Responses to the dynamics of current change range from
attempts to stabilize the environment by introducing new control mechanisms and increasing the frequency of controls, to
the adoption of change and restructuring of hitherto known
interpretation schemes, to feelings of helplessness and alienation (Veitas & Weinbaum, 2017, pp. 1–2).
The constant presentation of risks in public space and the
constant effort to reduce them significantly contribute to the
disruption of the feeling of ontological security. Compared to
previous stages of social development, members of advanced societies are now more likely to die from overeating than from famine, from suicide than from an attack by soldiers, terrorists or criminals, and from old age than from an infectious disease (Harari, 2018, p. 397).
The American theorist and philosopher Fredric Jameson, in
his famous book Postmodernism or, The Cultural Logic of
Late Capitalism (Jameson, 1992), argues that new technologies help shape the subject itself under the weight of late
capitalism (which is denoted by the term postmodernism).
Jameson, in line with the Kantian interpretation of aesthetics, literally speaks of the technological sublime as something
we are not able to reflect on from our position and understand
at all (cognitive mapping). Although this thesis is particularly
concerned with the periodization of postmodernism, there is
another assumption in Jameson’s theory that is important for
understanding people in the world of new technologies (especially algorithms, AI, big data). This is a certain transformation of a social subject that adapts quickly to new “postmodern” trends (change in the dynamics of the relationship between culture and economy, the emergence of new services,
the transition to digital capitalism). If we take this analogy out
of the context of the 90s and insert it into the present – the time
that shows signs of a technological turnaround – it can be
assumed that the syntax of the times is an algorithm applied
to big data, which is mediated by new technologies (which in
turn are the medium of new services and business models).
These algorithms then “help” us in orienting ourselves with
the inexhaustible amount of data that new technologies (including information and communication technologies) produce ad infinitum (Ross, 2017). However, the design of these
algorithms and of artificial intelligence is not neutral and hides
certain pitfalls in the form of ideologies or biases that are not
easy to decode (Bowles, 2018).
Big Data is an integral backdrop of our lives. However, it is
useless to us if we cannot employ it in real time in the form of
personalization of various services. It decides what movies we
will watch, what music to listen to, where we go on a trip,
where we stay or whom we meet and whether we get a mortgage, whether a package from Amazon will arrive at our address or whether our device’s camera gives us access to our
notebook based on our race (Bridle, 2019, pp. 142–143). Selected camps of theorists in such cases do not hesitate to use the term technological determinism, which points to the autonomy of technology. However, we will try to go beyond this pessimistic approach in this study.
Adam Greenfield’s book Radical Technologies: The
Design of Everyday Life (Greenfield, 2017) offers an interesting depiction in this pessimistic context. Let us imagine that
we are sitting in a café recommended by an algorithm; we pay
for the coffee in cryptocurrency via a smartphone, while children across the street play AR games on smart devices. This
would not have been possible at all a few years ago, but today
it is understood as a common routine. The whole situation is
drawn up by technologies, however, not with one technology
but rather with a set of individual technologies and services.
At first glance, it may seem to us that these technologies are
too separate to be functional and create this situation.
However, their advantage is that they can be connected by
an “interface” of ones and zeros. This also multiplies the efficiency of individual technologies (Greenfield, 2017, pp. 498–500). However, it is clear that this mediation between different
technologies – as we will show below – needs a clearer interpretive framework.
If we take this technological allegory to the extreme, we
can say that technologies can to some extent constitute social
reality (remember the social bubbles on social networks, the
degradation of public space – new agoras). In updating
Jameson’s theory and with an inclination to the
(problematic) technological determinism, we could say that
the subject in the technological turn adapts to the syntax of
algorithms, artificial intelligence and Big Data. Social reality
can then be deprived of chance and subtlety to the extent that it
makes it seem that it can be transformed into the formal language of ones and zeros. The aforementioned Bridle also
looks at this issue highly pessimistically: “In this way, computation does not merely govern our actions in the present, but
constructs a future that best fits its parameters. That which is
possible becomes that which is computable. That which is
hard to quantify and difficult to model, that which has not
been seen before or which does not map onto established
patterns, that which is uncertain or ambiguous, is excluded
from the field of possible futures” (Bridle, 2019, p. 44).
The fact that adds to this pessimistic view of mankind is
that we ourselves have ceased to perceive algorithms and
new technologies as constructs of our everyday reality.
Bridle thus points to a problem that can be illustrated in
the philosophical direction of functionalism. Every day we
use the outputs of new technologies, but we have no idea
how they work and what algorithms are hidden in programs,
services and advertisements. Without understanding the consequences, we use these technologies as black boxes, as
functions in which we enter and receive data (Bridle,
2019). This problem is illustrated by simply scrolling on
social networks: the posts we see are already preselected to
get our attention. If we had to see all the posts of all our
friends, for example, on Facebook, we could scroll for hours
before seeing something that really interests us. We could
thus claim that the emancipation program of the
Enlightenment is unfinished in this case, because in the
technological turn, we unknowingly leave most of the decisions to the algorithms and artificial intelligence. Algorithms
do not even have to work too much; social reality is simplified to the level of formal language. And that is the reason
why algorithms can have such an effect. It is therefore better
to not look for complexities in algorithms but rather at the
simplicity of social reality. Our social reality is complex and
diverse, but due to algorithms, it is no longer random. This
is the world of computational hegemony. However, the
question remains how to prevent this: responsibility and
rules (ethics) or awareness and education (breaking
ideology)?
In connection with the possibilities of using AI, Makridakis
(2017, pp. 8–11) presents four ways of interpreting the impacts of this technology on the functioning of society.
Optimists predict the utilization of the speed and memory
capacity of computers and the ability to share their knowledge
with the human brain. Technological innovations will allow
genetics to intervene in the genetic code to prevent disease,
ageing or even death. Nanotechnology will make it possible to
create virtually any product at low cost, and robots that will
take over all human work will allow people to choose their
way of spending their free time and to choose work activities
according to their interests. Pragmatists rely on the ability to
control AI through effective regulation. Rather than on AI
which seeks to mimic human intelligence, they focus on
AI’s ability to expand human capabilities to increase room
for human decision-making and control. Doubters deny dystopian scenarios based on the threat of AI, pointing out that
human intelligence cannot be replicated and captured in a set
of formal rules. And if so, even then it will not be possible to
machine-replace human creativity, which is based on
overstepping rules – on antialgorithmic behaviour. It is
creativity, which is based on the violation of established
norms and ways of thinking, that also other authors (e.g.
Jankel, 2015) consider to be an ability non-replicable by
computers.
Harari (2018) warns against the division of intelligence and
consciousness and draws attention to the potential danger of
using unconscious but highly intelligent algorithms. If we
accept the assumption that organisms are algorithms and life
is data processing, then humans cannot compete with a machine that is able to make decisions based on the evaluation of
all available information and process a problem situation without consciousness – or precisely because of the lack of it –
with a better result. A simple example is the comparison of
accidents between autonomous vehicles and people-driven
vehicles. The mass expansion of these types of vehicles would
result in a significant increase in unemployment among professional drivers, which would, in a sense, confirm the superiority of the machine over man (Makridakis, 2017, p. 10).
After all, dystopian visions assume that sooner or later, originally human-made decisions will be dominated by the ever
more perfect machines, with better results than people prone
to errors would be able to achieve. This would necessarily
change the whole system of social stratification. Exclusion
or reduction of the role of a person in key decision-making
processes connected with the functioning of society would
then necessarily lead to their inferior social status.
Impacts of AI Use on Business and the Labour Market
The determining influence of digitalization and automation on
the functioning of society at the level of all social subsystems
is indisputable. For the destructive and creative impacts of
digitalization and informatization on the labour market (creation, extinction and transformation of professions or jobs),
there are also proposals for political, economic and social
measures (increase of the minimum wage, introduction of unconditional basic income, support for the elderly and the low-skilled, etc.). Changes in the structure of the labour market
must be accompanied by radical structural changes in society
and in the way people think about work.
What has become the subject of analyses are the social
consequences of technological innovations, issues of social
control of science and technology with a special focus on
opportunities and risks that technological progress can mean
for social ties, political life, value systems, etc.
The positive social potential of artificial intelligence can
manifest itself at the level of supporting and securing the functioning of key subsystems of society without unspoken stereotypes, prejudices and hidden discriminatory behaviour.
Furthermore, it may be reflected in changes in the structure
of work-related and non-work time due to the reduction of
activities performed by people and thus in the expansion of
space for social activities (development of social relations,
community life, volunteering, etc.). At the same time, possible
negative impacts of the development, implementation and expansion of AI at the economic, political and social levels are
considered (insufficient sociopolitical reflection on changes in
the structure of the labour market, misuse of AI by nondemocratic regimes, limited possibilities of AI control, etc.). In this
context, questions arise as to who and how should be involved
in the decision-making in the development and implementation of innovation; on the basis of which criteria states should
set priorities for R&D funding; how companies should measure risks and set safety standards; whether and how experts
are obliged to communicate to the public their decisions and
their reasoning; etc. (Matochova et al., 2019, pp. 230–231).
Floridi et al. (2018, pp. 690–694) emphasize the possible use of artificial intelligence technology to support human nature and its possibilities. They consider AI to be a tool for expanding the possibilities of individual self-realization and the utilization of interests, abilities, skills and aspirations. Mastering routine tasks through AI opens space for more meaningful ways of spending time. On the one hand, Floridi points out the positive use of advanced intelligence in human decision-making and action. On the other hand, he draws attention to the necessary responsibility in the development and distribution of state-of-the-art technologies, which
should remain under human control and benefit all members
of society as fairly as possible. AI technology enables more
efficient functioning of society and social systems, from the
prevention and treatment of diseases to the optimization of
transport and logistics to a more efficient redistribution of
resources or a more sustainable approach to consumption.
However, the power of technology also brings the risks of
its use. According to Floridi, these are mainly associated with
excessive autonomy of machines and limited human control,
based on insufficient ability to monitor the performance of
these systems and prevent errors or damage. A balance needs
to be struck between the ambitious projects and opportunities
that AI offers to improve human life and the strength of the
control mechanisms that people and societies set up.
Hawksworth et al. (2018, pp. 1–17) in their report identify
three phases of AI involvement in the functioning of various
areas of society but especially with regard to the shape of the
labour market. Until the early 2020s, they expect an algorithmic wave that is reflected in the automation of simple computational tasks and the analysis of structured data. For this
reason, they consider the sectors based on routine data processing, i.e. finances and insurance, but also the area of information processing and communication, as the most accessible
to automation – and most risky in terms of maintaining the
number of jobs. The second half of the 2020s will be hit by a
wave of augmentation based on dynamic interaction with
technology in administrative support and decision-making
and on the automation of repetitive tasks, including the analysis of unstructured data in partially controlled environments.
The sectors concerned will be public administration and self-government, production, warehousing and transportation. In
the 2030s, the autonomous wave should reach its peak, which
presupposes full automation of physical labour, machines
with manual dexterity and problem-solving skills in dynamic
situations and in the real-world environment, where an immediate response is required. This phase of the use of state-of-the-art technology will affect the construction sector, water
management, wastewater treatment and waste management,
etc.
In their analysis, Hawksworth et al. (2018, p. 2) assume
that in the short term, the most vulnerable jobs will be in the
financial sector and insurance and jobs held more frequently
by women. From a long-term perspective, the vulnerable group is represented by employees in the transportation sector, more often men and people with lower qualifications (which
confirms the importance of investing in lifelong learning or
retraining). The same authors identify risk areas in terms of the
negative impact of the automation process by country, industry and type of worker. The share of jobs at risk reflects the
country’s average level of education. It thus ranges from 20 to
25% of positions in some East Asian and Nordic economies
with a high level of education of the population to 40% of
positions in Eastern European economies, based mainly on
industrial production. Among these extremes are economies
dependent primarily on services, but with a significant proportion of low-skilled workers (UK, USA). Within 10 years, after
the widespread use of autonomous vehicles, transportation
will be one of the most vulnerable sectors in terms of maintaining the structure of jobs. Currently, the riskiest sectors are
those dependent on routine processing of structured data such
as the financial sector and insurance. At the same time, the
least vulnerable groups include workers with a university degree who, in addition to their expertise, also show a higher
degree of adaptability to technological change. Such qualified
employees are also more likely to hold higher management
positions, where a lower level of susceptibility to automation
is expected. Like actors with lower education, older workers
may have a lower degree of adaptability. In the case of manual
work and positions in the transportation sector, where men are
more frequently represented, a higher degree of threat to the
stability of positions can be assumed again through the process of automation. However, the same is true for women in
administrative positions.
Ethics of AI vs. Ideology of AI
The book Future Ethics, together with the theory of Andrew Feenberg, points out that technologies are not inherently neutral (Bowles, 2018, p. 2), and an ideology is encoded in their very design to distort our usage: if technology forces us to pay attention to it, that was the intention.
Design is applied ethics. Sometimes this connection is obvious: if you design razor wire, you are saying that anyone who
tries to contravene someone else’s right to private property
should be injured. But whatever the medium or material, every
act of design is a statement about the future. Design changes
how we see the world and how we can act within it; design
turns beliefs about how we should live into objects and environments people will use and inhabit (Bowles, 2018, p. 4).
However, the problem that Bowles outlines here can be
included in the normative level, where he works with three
levels of ethics (deontological ethics, ethics of virtue, utilitarianism) and leaves it to the designer (of AI and algorithms) to
decide ethically – whether the consumer succumbs to the ideology of design is purely up to them, the problem of ideology
is transferred to become their responsibility. In essence, this is
a naive normative guide for the digital capitalism industry.
The problem, however, is that Bowles excludes those that
are most affected – the technology company/technology users
and data producers – from decision-making. In this context, it
is clear that the political theory of technology needs to be
thought about rather than ethics. The political theory of technology offers an opportunity to change it – the democratization of technology, that is, how to intervene in its design –
retroactively through society. We can see this possibility on
two levels: (1) deideologization of technology – one must
realize that we can actually influence technology by our decisions (Allmer, 2017); (2) democratization of technology –
through a clear disagreement or detournement of technology,
we can achieve a change in the goal of technology (Feenberg, 2009).
On one side, there is a responsible designer, on the other a
conscious society. Should the designer succumb to the values
and ethics of the company, there is a society that is being used
by the company. It is therefore clear that the requirement of
ethics alone is inefficient; the competitive environment itself
would have to change.
Ethics and Political Philosophy of AI
If we want to talk about the political philosophy and the ethics of artificial intelligence, we should distinguish between political philosophy and ethics – albeit inextricably linked – in relation to new technologies. If we look at the ethics of AI, the most common approaches that appear in the context of algorithms are deontological ethics, ethics of virtue, and utilitarianism. Here we are talking about the individual level, where the design itself is produced and is supposed to have a certain impact on the individual and on society. However, if we look at the political philosophy of new technologies (algorithms and artificial intelligence), we should inquire more broadly into how the new technologies concern society as a whole. Here we should then distinguish between the (A)
critical and the (B) liberal branch of the political philosophy of
technology. (A) The critical theory of technology looks at the
power and ideological relations of technology – as in material
capitalism, new technologies are considered only as means of
production. Here, the specific contradictions that lead to the
non-transparent design of AI (how one is deliberately manipulated by data in favour of digital capitalism) should be theorized. (B) The liberal branch asks how to set the rules so that, on the one hand, technologies are not too limited or regulated (the issue of freedom) and, on the other, these technologies are created for the benefit of society and address the problems of the current environmental crisis. These are purely political issues of technology.
Critical Theory of Big Data
In his theory, Allmer (2017) works with tools that allow
shared data to be critically examined from the perspective of
economic-power relations. Although data seems to be handled
by users, the data is actually owned by large companies, which
ultimately decide how to handle it. This fact is worrying, and it is necessary to examine the extent to which it affects the user (i.e., the social entity).
The main premise is that capital is accumulated through
user data, making this digital environment (such as social
media) an arena of struggle in which (as in any mode of production) class and social contradictions arise (Allmer, 2017, p. 5). The fact that this principle of capital accumulation has been
transferred from the material environment of commodities to
the digital world is part of the evolution of commodification.
Commodifying public goods (such as data) has a number of
complications: digital reproduction emphasizes the privatization of data. For this reason, it is necessary to create new forms
of capital, and it is best to involve the very user, who is constantly producing data, in this digital production. If we stick to
the vocabulary of critical theory, this phenomenon can be labelled with the terms digital alienation and digital exploitation.
For this analysis, Allmer uses Marx’s reasoning, which he
places in the current context. According to such an interpretive
framework, in a capitalist society the asymmetry of power relations is embodied in the very design of technology (Allmer, 2017, pp. 16–26). Technology is understood as a
reflection of social relations, and for this reason, as we have
seen above, it cannot be understood as neutral. The goals of
technology thus correspond to the goals of capital itself (Ibid.).
Thus, technology cannot be designed outside of a social context. These theses can be illustrated by the birth of a new rationality, which arrives with technology at the time of industrialisation and is the essence of mass production and of the transformation of the whole so-called base (Horkheimer & Adorno, 2007).
The problem with rationalization is that if technology can
be taken out of context (e.g., historical expropriation), the
essence of rationalization will still remain in it – for example,
the question of automation does not lead to human emancipation (as was originally Herbert Marcuse's idea). The problem is how to work with the potential of new technologies. How do we even discover the emancipatory potential of new technologies? Are technological or political changes needed for this emancipation? Here we are helped by the critical theory of technology (the dialectics of technology and society), which points to the socially conditioned construction of technology and to the impact of technology on society (Allmer, 2017, p. 42).
Democratization of New Technologies and Big Data
Following the example of Feenberg (2009), one can distinguish two main currents in the theory of technology. The first, the so-called instrumental current, treats technology as separable from the value context of society (culture or politics): technical tools are understood as neutral means serving only social goals. Technology is just a tool to achieve efficiency. Such an approach is purely functional.
Technology is designed outside of political ideology. The second stream, the substantive one, attributes autonomous force to
technology that prevails over traditional and competitive
values. It therefore denies the neutrality of technology and emphasizes the negative consequences of technology for humanity and nature. Technology has become part of lifestyle and everyday life; it has dominance over us, and there is no escape from it. The only alternative is a return to the traditional values of romantic simplicity (a certain apocalyptic vision).
If we think about the ethics of AI – that is, that the designer
can consciously modify the ideology of an algorithm and AI –
we should also think about how society, which will be affected by AI, can defend itself. As we saw earlier, Feenberg's theory of the democratization of technology could help us with this.
Feenberg’s theory represents a non-deterministic approach
to technology. Technology must not be considered as a set of
devices or the sum of rational goals; that would be too functionalist. In relation to society, technology must be interpreted
like any other artefact. If we overlook the connection between
society and technology, then we will perceive technology as self-producing. However, technology is political; it is not born in a vacuum, outside of political ideology, but always in a specific social
discourse. For this reason, Feenberg also continues the tradition
of critical theory and assumes that public opinion will intervene in the nature of technology, i.e. the normative requirement of democratic instrumentalization (Feenberg, 2009, p. 146).
Technology stands behind political and economic power, and for this reason it should be part of public debate: we cannot be civically autonomous if we do not have the opportunity to decide on this industrial process. However, Feenberg
realizes that communism has failed to meet this demand
(Feenberg, 2014, p. 708). And this is exactly the reason to study
technology at all. It seems the self-propagation of innovation
prevails over the rational use of technology. Should we, as
citizens with rights, not be able to decide what technologies
will or will not be implemented in society? This also applies
to the current technological turn and Big Data.
Specific values are regularly embedded in technologies –
albeit unknowingly – and it is the hermeneutics of technologies that we should use to interpret them.
Technology then shapes the principles we live by.
Technologies can to some extent represent our interests (if I
own a car, it represents my mobility; if I prefer cycling to work
instead of a car, it represents a value). Then we may ask, do these
interests define society? However, there is a tendency to look at
technology and politics separately. Technological design should be value-oriented rather than a market-driven instrumentalization.
Technology is thus not a neutral tool: it has its own value,
but at the same time society can determine the direction of its
development. Technology is the result of many factors: the
meaning of technology is defined only by its use in the context
of society. Progress alone cannot assign purpose to technology. This is where Allmer's inspiration comes from: on the one hand, there is a need to analyse the ideology that gives people a false understanding of their ability to influence technology; on the other, there is a need to democratize technology for the sake of emancipation. We should ask how theoretical normativity can be implemented in the political functioning of technology.
According to Feenberg, the demands of democratization can be put into practice through two mechanisms: the technological code, or the democratization of technologies through initiatives that can gradually change the legal framework. We should then recode technology around the question of whether it can help us, for example, in an ecological crisis (updating current positions on technology: on the one hand, socialist technology helps to emancipate people, but on the other hand it is not ecological). The question remains, however, of how to involve the widest possible public in technology decisions when technological development proceeds completely independently of society, through autonomous agents.
The democratization of technology implies that technology will become an aspect of public life (we are already familiar with civil juries, discussions, protests, boycotts, etc.). This is important at a moment when AI is entering all areas of industry, healthcare, services and social relations. In these areas, however, it is difficult to talk about any regulation of technology at the expense of freedom: neither the scientific nor the private sector can simply be planned or interfered with. For this reason, we speak of AI ethics and of the political philosophy of new technologies. In this context, many contemporary experts talk about rehabilitating the concept of the social contract in the context of the digital turn: the question of how to democratically shift technological design towards social, not instrumental, values without restricting the market freedom of individual actors.
Conclusion: The Social Contract, AI and the Digital Turn
The concept of the social contract speaks of an imaginary contract that people have concluded among themselves to secure a certain sphere of freedom and social security (its opposite is the chaotic so-called state of nature, homo homini lupus). This concept is very useful for politics and ethics. The rehabilitation of this concept in Rawls's thought experiment known as the veil of ignorance (Rawls, 1999) can be applied to the digital turn. People agree on rules that are essentially egalitarian. In the market environment, however, these rules are not observed, so as not to lose profit. People have no idea that new technologies deliberately circumvent the social contract. The social contract only works if all participants agree to its terms.
Bowles believes that this concept can be applied to a technological society, in which such a fair system could operate. “Beneath a veil of ignorance, we wouldn’t know our social status, our intelligence, or even our interests; but if the system is fair we should be satisfied wherever we ended up” (Bowles, 2018, p. 56). The veil of ignorance is related to deontological ethics – no one would want to lift the veil and find out that they are just a means to an economic goal. The veil of ignorance thus forces us to take into account all the roles that appear in the system: if we create a system of persuasion (e.g. the algorithms and AI of online ads and personalized content), that system must be fair to whoever is being persuaded (Bowles, 2018, p. 56). It is therefore possible to insert the concept of the social contract into AI design. The veil of ignorance could also be used, for example, in autonomous vehicles; the system would not know who is who (so-called contractualism).
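
To make this design idea concrete, the following minimal sketch is offered (it is ours, not from Bowles or any deployed system; all names such as Rider, behind_the_veil and allocate are hypothetical). It shows what an identity-blind decision rule can look like in code: identity attributes are stripped before scoring, so swapping two persons' identities cannot change the outcome.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rider:
    person_id: str           # identity attribute -- hidden behind the veil
    social_status: str       # identity attribute -- hidden behind the veil
    urgency: float           # identity-neutral, morally relevant feature
    expected_benefit: float  # identity-neutral, morally relevant feature

def behind_the_veil(rider: Rider) -> tuple:
    """Expose only identity-neutral features to the decision rule."""
    return (rider.urgency, rider.expected_benefit)

def allocate(riders: list) -> Rider:
    """Identity-blind choice: the score uses only veiled features,
    so exchanging two persons' identities cannot alter the result."""
    return max(riders, key=lambda r: sum(behind_the_veil(r)))

people = [
    Rider("alice", "executive", urgency=0.9, expected_benefit=0.4),
    Rider("bob", "student", urgency=0.7, expected_benefit=0.8),
]
print(allocate(people).person_id)  # prints "bob"; identity played no role

The point of the sketch is not the particular scoring function but the architectural veil: whatever the system optimizes, it never sees who is who.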
In this context, for example, Rawls's theory of justice is used in practice in Singapore to gain citizens' confidence in technology and to strengthen its digital democracy (Keen, 2019). The use of the concept of the social contract can also be found in Estonia, where it serves mutual transparency between citizens and the state. The fact that political rules in Estonia have evolved in parallel with electronic democracy has pushed the country to the forefront, allowing it to introduce into the system elements not merely of ethics but of actual political philosophy. The social contract in Estonia works on the principle of mutual control between citizens and the state: when the state wants to examine any data of a selected citizen, the citizen is immediately notified (Keen, 2019).
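
The mutual-control principle just described lends itself to a simple illustration. The sketch below is purely hypothetical (it is not Estonia's actual system; TransparentRegistry and its methods are invented for illustration): every state read of a citizen's data triggers an immediate notification that the citizen can inspect.

from datetime import datetime, timezone

class TransparentRegistry:
    def __init__(self):
        self._records = {}        # citizen_id -> stored data
        self._notifications = {}  # citizen_id -> list of access notices

    def store(self, citizen_id, data):
        self._records[citizen_id] = data
        self._notifications.setdefault(citizen_id, [])

    def state_access(self, citizen_id, agency):
        """Every read by a state agency produces an immediate,
        non-optional notice to the data subject."""
        stamp = datetime.now(timezone.utc).isoformat()
        self._notifications[citizen_id].append(f"{stamp}: accessed by {agency}")
        return self._records[citizen_id]

    def my_notifications(self, citizen_id):
        """The citizen's side of the mutual control."""
        return list(self._notifications[citizen_id])

registry = TransparentRegistry()
registry.store("citizen-42", {"address": "Ostrava"})
registry.state_access("citizen-42", agency="tax office")
print(registry.my_notifications("citizen-42"))  # one timestamped access notice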
There remains, however, the question of how to apply this concept within the polarity of technology regulation versus the freedom of the market environment in which technologies evolve. Floridi's neologism of metatechnology (a rational system of protocols, rules and standards for using technology) provides some perspective in this context (Floridi, 2014, pp. 206–215). Floridi offers a solution that could stand up within liberal political theory and the debate between freedom (including innovation) and regulation (protection of the environment and society). In the application, introduction and dissemination of new technologies (algorithms, AI), there is no telos that would assign technologies an ethical framework defining their function in society (e.g. the Enlightenment framework of emancipation, of people as beings liberated from labour, or environmental protection – countervalues to the free market and to progress in general). Today, technology, like all commodities, has the main goal of accumulating capital, which is problematic because it takes into account neither the limits of nature nor the exploitation of human resources (as we have seen above). Floridi thus offers a distinctive solution: no one dictates to anyone how to handle technology; instead, we create rules under which the possibilities of technology are assessed with an awareness of what we cannot do. Ultimately, then, it should not be a question of ethics but of a rational restructuring of the market environment.
It is argued that information and communication technologies can play an important role in addressing environmental issues, but the negatives of their use are forgotten. One problem, for instance, is the energy intensity of these systems (the Internet of Things, clouds and Big Data): Industry 4.0 cannot do without large data storage and demanding computational performance. Floridi speaks of the technological gambit in this context (2014, pp. 212–215). A gambit means (especially in chess) sacrificing something for a future gain. We rely on inventing potential tools to save the environment, but they are only beneficial if they are used rationally, in specific individual places, while their widespread use is detrimental. Floridi's metatechnology is the way to control the dangerous nature of this technological gambit.
Funding The paper was written within the project "Vývoj teoreticko aplikačních rámců pro sociální změnu v realitě transformace průmyslu" (Development of theoretical-application frameworks for social change in the reality of the transformation of industry), Grant TA ČR Éta, no. TL01000299. Financial support was received from the Technology Agency of the Czech Republic.
Declarations
Conflict of Interest The authors declare no competing interests.
References
Allmer, T. (2017). Critical Theory and Social Media: Between
Emancipation and Commodification (1st ed.). Routledge.
Bowles, C. (2018). Future Ethics (1st ed.). NowNext Press.
Bridle, J. (2019). New Dark Age: Technology and the End of the Future (Reprint ed.). Verso.
Chandler, D., & Fuchs, C. (2019). Digital Objects, Digital Subjects: Interdisciplinary Perspectives on Capitalism, Labour and Politics in the Age of Big Data (Illustrated ed.). University of Westminster Press.
Feenberg, A. (2009). Critical theory of technology. In J. K. B. Olsen, S. A. Pedersen, & V. F. Hendricks (Eds.), A Companion to the Philosophy of Technology. Wiley-Blackwell.
Feenberg, A. (2014). Democratic rationalization: Technology, power, and freedom. In R. Scharff et al. (Eds.), Philosophy of Technology – The Technological Condition: An Anthology. Wiley-Blackwell.
Floridi, L. (2014). The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford University Press.
Floridi, L., Cowls, J., Beltrametti, M. et al. (2018). AI4People—An
Ethical Framework for a Good AI Society: Opportunities, Risks,
Principles, and Recommendations. Minds and Machines 28, 689–
707. https://doi.org/10.1007/s11023-018-9482-5
Greenfield, A. (2017). Radical Technologies: The Design of Everyday Life. Verso.
Harari, Y. N. (2018). Homo Deus: A Brief History of Tomorrow
(Illustrated ed.). Harper Perennial.
Hawksworth, J., Berriman, R., & Goel, S. (2018). Will Robots Really Steal Our Jobs? An International Analysis of the Potential Long Term Impact of Automation. PricewaterhouseCoopers LLP.
Horkheimer, M., & Adorno, T. (2007). Dialectic of Enlightenment (Cultural Memory in the Present). Stanford University Press.
Jameson, F. (1992). Postmodernism, or, The Cultural Logic of Late
Capitalism. Duke University Press.
Jankel, N. S. (2015). AI vs. human intelligence: Why computers will never create disruptive innovations. Huffington Post. Available from: http://www.huffingtonpost.com/nick-seneca-jankel/ai-vs-human-intelligence_b_6741814.html. Accessed 22 April 2021.
Keen, A. (2019). How to Fix the Future. Grove Press.
Kowalikova, P., Polak, P., & Rakowski, R. (2020). The challenges of defining the term "Industry 4.0". Society, 57, 631–636. https://doi.org/10.1007/s12115-020-00555-7
Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46–60.
Matochova, J., Kowalikova, P., & Rakowski, R. (2019). Social science's dimension of engineering curriculum innovation. R&E-SOURCE, S17, 229–234.
Rawls, J. (1999). A Theory of Justice. Belknap Press of Harvard University Press.
Ross, A. (2017). The Industries of the Future. Simon & Schuster.
Veitas, V., & Weinbaum, D. (2017). Living cognitive society: A ‘digital’ world of views. Technological Forecasting and Social Change, 114, 16–26. https://www.sciencedirect.com/science/article/abs/pii/S0040162516300610?via%3Dihub. Accessed 1 April 2021.
Roman Rakowski is a senior lecturer at the Technical University of
Ostrava, Czech Republic. His current research, on the development of theoretical-application frameworks for social change in the reality of the transformation of industry, is funded by a grant from the Technology Agency of the Czech Republic.
Petr Polak has been at Mendel University in Brno, Czech Republic, since 2019; between 2005 and 2018 he was at universities in Australia (Swinburne) and Brunei (UBD), in positions ranging from post-doctoral researcher to Associate Professor. Before his academic career he worked for 10 years in finance and treasury in various senior positions with multinational corporations such as Dalkia (now Veolia) and Electrolux.
Petra Kowalikova has been a senior lecturer at the Technical University of Ostrava, Czech Republic, since 2007. She holds a Ph.D. in Sociology from Palacky University, Olomouc, Czech Republic. Her research focuses on the sociology of organizations, the labor market, and corporate culture.