
Week 3

What does a business know about how it operates? How can a company leverage information technology to improve its operations?

In previous weeks, we have discussed how information systems can help a business achieve a competitive advantage and how information is used. The next step is to understand the processes a company uses to do its work. This week, you will learn what business processes are and how they are analyzed and improved using process analysis, modeling, and reengineering. Then you will see how IT, when combined with business processes, can improve an organization’s competitive advantage. Finally, you will learn how the components of the technical infrastructure are combined to provide IT solutions that meet business objectives, and you will learn the key terms associated with the technical infrastructure.

By the end of this week, you should be able to:

define the term business process

recognize how business process management and business process reengineering work

recognize how information technology combined with business processes can bring an organization competitive advantage

sequence the steps in a process

identify the major components of the technical infrastructure

This week you will complete the following:

Read about Business Processes

Read about the IT Infrastructure

Process Improvement (This part is not required but posted as a reference for the responses)

Group 3:

Drawing from your own experience, select a process (a set of specified steps to accomplish a task) used at your place of work or in your interaction with a business that you would like to see improved and briefly describe the process.  Be sure you have identified a specific process rather than a general business problem or area.

1)  Explain why you picked that process.

2)  Explain the steps you might take to analyze how to improve the process.

3)  Who should be involved with you?

4)  What are some of the questions you should ask about the current process?

5)  How will you know if the process was actually improved?

First, we need to be sure you can identify a process; many students have difficulty with that, so refer to your class readings.  Be sure to pick a fairly narrow scope for your process – for example, processing an invoice for payment vs. Accounts Payable. I am also interested in the method to be used to improve the process, not a solution.  (For example, I am not looking for something like:  “The Café where I work is not selling enough coffee. We should use social media to advertise more.”  The discussion here is about identifying a specific process and how a business would go about deciding how to improve that process, who should be involved, what should be considered, and what steps should be taken to be able to analyze the current process and plan for improvement.)  You should employ the techniques discussed in class or those that you find in your research.  Keep in mind that outside resources strengthen your responses.

What is required:

Responses to the four initial postings (located in the attachments) should be specific and assess whether each posting accurately and sufficiently addresses the questions asked in the discussion topic; responses should also incorporate relevant research correctly.  Explain your assessment as to why the information is or is not correct and/or complete, providing correct information to enhance the discussion.

Classmate Jasmine
Being in the military, there are a lot of processes that are mildly annoying, but they are all that way for a reason, so instead I'll talk about the process of dining in at a brunch place in Virginia. I can't for the life of me remember the name of the place, but I will never forget the experience, because their process was terrible and way more difficult than it needed to be. The "as-is" process for this restaurant is as follows:
1. Wait in line to order your food at the register.
2. Order your food at the register.
3. Wait to the side until there is an open table.
4. Once a table is open, sit down at the table.
5. Wait for your food to be ready.
6. Pick up your food at the counter.
7. Bring your food to your table and eat.
If you need anything else that you didn't order originally, you have to wait in line again to order at the register.
This setup is similar to that of a fast-food restaurant, but with dine-in restaurant prices and food. Their
reasoning for the process was clear: only two people were working the front. One person works the cash
register and another person brings the food from the kitchen to the pickup counter. However, by keeping
staffing low, they have ruined the customer experience.
To analyze how to improve this process, I would apply Business Process Reengineering. I would
hold a meeting with the business owners, the chief information officer, and the employees, since they are
the people who will have to use the process we decide to implement. The major questions that need to
be answered at this meeting are how is the “as-is” process working for the employees, what should the
process look like, and would technology improve the process? Is there a way to automate the customer
experience without making them wait in line? I would suggest we get ordering kiosks at all the tables to
improve the process and allow the two front workers to become waiters instead, delivering food and
helping customers when they can't use the kiosk. "65% of customers prefer to visit a restaurant with
self-order kiosks” (Tache, 2022). I would know the process worked when I walked into the restaurant
and there wasn’t a long line of customers waiting to order food or waiting for a table.
Tache, J. (2022, April 26). 5 Reasons Self-Ordering Kiosks Are A Great Restaurant Investment.
Tillster. https://www.tillster.com/blog/5-reasons-self-ordering-kiosks-are-a-great-restaurant-investment
UMGC. (2022). Optimizing business processes. University of Maryland Global
Campus. https://learn.umgc.edu/d2l/le/content/684302/viewContent/26085064/View
Classmate Kimberly
The business process I’m going to discuss is the payroll deduction system that my department instituted
two years ago. It is connected to our existing point-of-sale system in the gift shop of the hospital. I’m
the Back Office Administrative Manager at the gift shop. Any software or hardware maintenance or
acquisition falls under my responsibility. To increase employee sales, it was decided by management to
establish a payroll deduction system. Employees who are eligible can basically “charge” their purchases
using a third-party software that networks with our Counterpoint point-of-sale system. Purchases are
tracked by the employee’s badge number and deducted off the employee’s paycheck in the next payroll
period. Pay periods are every two weeks, so the rate of return of those “charged” funds to the gift shop
is relatively short. Unfortunately, employee sales haven’t increased as much as we would have
liked. Out of 2,700+ eligible employees, only 320 have signed up. Hospital employees make up most of
our customer base; therefore, we need to generate processes with them in mind (MM Hayes, 2018).
Employees must be approved ahead of time through a multi-level process before making any
purchases. Employees get frustrated when they get to the register and assume that they can sign up on
the fly. The process needs to be less cumbersome and time consuming to entice more employees to
join. IT, Payroll, MMHayes (an outside vendor), and I were involved in the initial installation and continued
maintenance of Quickcharge (point-of-sale software). Therefore, we should all be involved in finding a
way to improve the process. If the process of signing up was streamlined, more employees would be
inclined to sign up – thus increasing sales.
Some questions to be asked about the current payroll deduct process would be – Who is eligible? How is
it determined who’s eligible? Where is the employee data housed? Instead of having employees call for
enrollment and waiting two days for approval, is there a quicker process? Can employee data be shared
with all departments involved instead of compartmentalized? Would a marketing campaign generate
employee interest in using the process? Is it possible to generate an email to be sent to all eligible
employees allowing them to click on a link and be immediately enrolled? A rise in payroll-deduction
enrollment will indicate that the process was improved.
~~ Kim Sajjad
MM Hayes. (2018, December 15). 5 ways to increase hospital gift shop sales.
Nason, M. (2019). Today’s Hospital Gift Shops. Giftware News, 38(4), 46.
Classmate Nicholas
A process that I use at my work that I think needs to be improved is the way my unit requests disposition
and replacement authority. I am a maintenance officer in the Marine Corps and serve as a Responsible
Officer (RO) over a specific set of equipment utilized to conduct maintenance on weapons, optics, and
fire-control systems. The process currently goes as follows:
1) Identify military equipment that meets the threshold for disposition and replacement (there are
specific criteria in directives).
2) Create and route a letter to the RO for authorization for the equipment to be disposed of and a
replacement requested.
3) Create a service request in our logistics information software that documents the need for disposition
and replacement.
4) Assign and route the request to the battalion level for approval
5) Once approved, assign and route the request to the battalion level for approval (unit level).
6) Once approved, assign and route the request to the regimental level for approval (1st higher
headquarters).
7) Once approved, assign and route the request to the division level for approval (parent command, 2nd
higher headquarters).
8) Once approved, assign and route the request to the logistics command for approval and specific
instructions for shipment and replacement.
9) Coordinate with the supply section to transfer gear to logistics command
10) Ship equipment out following the instructions provided.
1. I selected this process because it is one that we frequently have to utilize. It is a significant point
of friction that usually takes months to route a request for approval. The RO is given the
authority to maintain, inventory, and care for the equipment and initially authorizes the
disposal. I feel that there is efficiency to be gained by improving the process.
2. To improve the process, I would first look at eliminating any redundancy that makes it less effective or
efficient. From there, I would validate that after removing redundancy, the overall intent of the
process is still met. An RO is appointed by the Commanding Officer (CEO in essence) of a unit to
execute duties over their respective area. If an RO decides that the disposition and replacement
process is required, the request should solely be validated by the logistics command. Pushing
the authority down to a lower level of leadership promotes the principles of decentralized
3. There are a few key sections that should be involved in this process:
a. The maintainers identifying the unserviceable equipment must communicate swiftly to the respective
RO that the equipment requires disposition.
b. The RO needs to speak to the battalion supply section to be aware of the impending task.
c. The maintainers must communicate to the logistics command once the RO approves the disposition.
d. Logistics command and the local maintainers must monitor their work queue in the logistics
information system to keep up to date on the request.
4. A few questions I would ask about the current process are:
a. What level of authority should be authorized to approve requests for disposition?
b. Is there a better way to communicate between key players than through e-mails and phone calls?
c. How can we reduce the time it takes to route the request?
d. What are the significant movements that require fluid communication between sections?
e. What are the impacts if the process does or does not change?
5. The surefire way to know if the process is improved is a measurable drop in a key
performance indicator: the maintenance cycle time (the total time a piece of equipment
spends in maintenance) for equipment requiring disposition, compared with past years.
De Ramon Fernandez, A., Ruiz Fernandez, D., & Sabuco Garcia, Y. (2020). Business Process Management
for optimizing clinical processes: A systematic literature review. Health Informatics Journal, 26(2), 1305–1320.
Classmate Sabrina
One of my job responsibilities as a teller is processing checks for customers and applying the appropriate
holds according to the Expedited Funds Availability Act (Regulation CC) as well as the organization's Funds
Availability Policy. At first glance, this seems like a simple and straightforward process. However, there
are many decision points that must be made accurately by the teller to ensure compliance with federal
regulations as well as mitigating risk to the organization and the customer.
I have chosen this process for several reasons. First, the risk of accepting a fraudulent check could be
detrimental to the customer, the organization, and the teller. Second, there are currently multiple
complicated steps required when determining the appropriate hold. Next, if the hold is not
appropriately applied, the organization could be at risk of penalties imposed by the Federal Reserve Board.
Finally, I believe that a better process should be in place to eliminate errors and simplify the process.
The steps I would take to analyze how to improve the process are:
1. Review the current "as-is" process, as "it is critical to understand how a process is conducted
currently" (University of Maryland Global Campus, 2022).
2. Review the federal regulations.
3. Gather input from the “stakeholders” (tellers) that are using the process either through
interviews or face to face sessions (University of Maryland Global Campus, 2022).
4. Evaluate the current training curriculum regarding the process.
5. Gather statistics on:
a. How often a check is negotiated.
b. How often the process is not abided by.
c. The financial implications of not following policy.
6. Evaluate similar processes used by other financial institutions.
Business process re-engineering’s primary goal is to increase efficiency, improve product quality, and/or
decrease cost (Northeastern University, n.d.). That is why it is important to include the following people:
Chief Operating Officer, Vice President of Branch Operations, Leadership of IT, Region Manager, Branch
Manager, as well as a group of tellers currently using the process.
Questions that should be asked regarding the current process are:
1. How comfortable are tellers in explaining the Availability Policy to customers?
2. Is the training provided adequate?
3. How easy is it to find the policy/instructions?
4. Are the tools that are currently available useful?
5. How often are the tools/resources unavailable?
6. How often does a teller need to seek additional help?
7. How many steps does it currently take to complete the process?
8. Where can tellers find additional help in the event there are questions?
9. Does the teller feel empowered?
10. Is there a space for tellers to provide feedback or suggestions?
Once the new process has been completed, signs of success would be:
1. Tellers are comfortable and empowered.
2. Transaction time is quicker.
3. A decrease in returned items, which increases profit.
4. Customers are satisfied!
University of Maryland Global Campus. (2022). Business Process Modeling. Document posted in UMGC
IFSM 300 6983 online classroom, archived at http://learn.umgc.edu
Northeastern University. (n.d.). What is Business Process Reengineering? Your Guide to Business Process
Reengineering. Retrieved July 1, 2022, from https://onlinebusiness.northeastern.edu/blog/what-is-business-process-reengineering/
7/2/22, 10:57 AM
Business Process Modeling
Learning Resource
Business Process Modeling
Before identifying requirements for an information technology solution to support a
process, it is critical to understand how a process is conducted currently—this is often
referred to as the “as-is” process. Frequently, people within a process only understand
their part of the process and even within the same group of users, the process may not be
consistently (or correctly) followed. An important first step is to gather representatives of
the process stakeholders to define collectively the current process. This information can
be gathered through stakeholder interviews and/or a face-to-face session where
individuals are together and map out the process on paper throughout the room. In
addition to understanding what is performed in each step, it is important to understand
why. For example, does the information need to be provided to another area in the
organization to enable a related process to be performed?
Once the current process is documented and understood, it's time to focus on the best
way to perform the series of steps needed to perform a task—this is referred to as the
"to-be" process. Otherwise, it's possible to implement a technology solution that only
succeeds in performing a bad process faster rather than actually gaining the
improvements desired to help achieve the organization's strategy. The section Business
Processes provides a simple example of a before (as-is) process and then an improved
(to-be) process for purchasing textbooks at a college bookstore.
Understanding how a process can best be accomplished lays the foundation for defining
requirements for a technology solution. Failure to clearly define all requirements can
produce a solution that is incomplete, wasting resources without delivering the expected
benefits.
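The as-is/to-be comparison can be made concrete with a small sketch. The example below is not from the reading; the step names and time estimates are invented to illustrate the textbook-purchase scenario. It models each process as an ordered list of steps and compares the total estimated time:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    minutes: int  # estimated time to complete this step

def total_time(process: list) -> int:
    """Total estimated duration of a process, in minutes."""
    return sum(step.minutes for step in process)

# Hypothetical "as-is" textbook-purchase process at a college bookstore.
as_is = [
    Step("Look up course reading list", 10),
    Step("Walk to bookstore and find titles", 30),
    Step("Wait in checkout line", 15),
    Step("Pay at register", 5),
]

# A streamlined "to-be" process where the bookstore pre-bundles titles by course.
to_be = [
    Step("Order course bundle online", 5),
    Step("Pick up pre-packed bundle", 10),
]

print(total_time(as_is), total_time(to_be))  # 60 15
```

Quantifying both versions this way makes it possible to show stakeholders, before any technology is built, whether the redesigned process actually improves on the current one.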
© 2022 University of Maryland Global Campus
All links to external sites were verified at the time of publication. UMGC is not responsible for the validity or integrity
of information located at external sites.
7/2/22, 10:56 AM
Business Processes
Learning Resource
Business Processes
The fourth component of information systems is process. But what is a process and how
does it tie into information systems? And in what ways do processes have a role in
business? This reading will look to answer those questions and also describe how business
processes can be used for strategic advantage.
What Is a Business Process?
We have all heard the term process before, but what exactly does it mean? A process is a
series of tasks that are completed in order to accomplish a goal. A business process,
therefore, is a process that is focused on achieving a goal for a business. If you have
worked in a business setting, you have participated in a business process. Anything from a
simple process for making a sandwich at Subway to building a space shuttle utilizes one or
more business processes.
Processes are something that businesses go through every day in order to accomplish
their mission. The better their processes, the more effective the business. Some
businesses see their processes as a strategy for achieving competitive advantage. A
process that achieves its goal in a unique way can set a company apart. A process that
eliminates costs can allow a company to lower its prices (or retain more profit).
Documenting a Process
Every day, each of us will conduct many processes without even thinking about them:
getting ready for work, using an ATM, reading our email, etc. But as processes grow more
complex, they need to be documented. For businesses, it is essential to do this because it
allows them to ensure control over how activities are undertaken in their organization. It
also allows for standardization: McDonald’s has the same process for building a Big Mac in
all of its restaurants.
The simplest way to document a process is to create a list. The list shows each step
in the process; each step can be checked off upon completion. For example, a simple
process, such as how to create an account on eBay, might look like this:
1. Go to ebay.com.
2. Click on “register.”
3. Enter your contact information in the “Tell us about you” box.
4. Choose your user ID and password.
5. Agree to User Agreement and Privacy Policy by clicking on “Submit.”
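A checklist like this maps naturally to a simple data structure. Here is a minimal sketch (the step wording is abbreviated for illustration) that stores the steps in order and finds the next step still to be done:

```python
# A checklist is just an ordered list of steps; completion is tracked separately.
ebay_signup = [
    "Go to ebay.com",
    'Click on "register"',
    "Enter your contact information",
    "Choose your user ID and password",
    "Agree to User Agreement and Privacy Policy",
]

def next_step(steps, completed):
    """Return the first step not yet checked off, or None if all are done."""
    for step in steps:
        if step not in completed:
            return step
    return None

done = {"Go to ebay.com", 'Click on "register"'}
print(next_step(ebay_signup, done))  # Enter your contact information
```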
For processes that are not so straightforward, documenting the process as a checklist may
not be sufficient. For example, here is the process for determining if an article for a term
needs to be added to Wikipedia:
1. Search Wikipedia to determine if the term already exists.
2. If the term is found, then an article is already written, so you must think of another
term. Go to 1.
3. If the term is not found, then look to see if there is a related term.
4. If there is a related term, then create a redirect.
5. If there is not a related term, then create a new article.
This procedure is relatively simple—in fact, it has the same number of steps as the
previous example—but because it has some decision points, it is more difficult to track
with a simple list. In these cases, it may make more sense to use a diagram to document
the process:
[Figure: Wikipedia Term Search Process. Process for determining if a new term should be added to Wikipedia. Public Domain.]
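The decision points in this process can also be expressed directly in code. The sketch below (invented for illustration, not part of the reading) mirrors the two branches in the documented steps:

```python
def wikipedia_action(term_exists: bool, related_exists: bool) -> str:
    """Mirror the decision points in the Wikipedia term-search process."""
    if term_exists:
        return "think of another term"   # step 2: loop back to step 1
    if related_exists:
        return "create a redirect"       # step 4
    return "create a new article"        # step 5

print(wikipedia_action(False, True))  # create a redirect
```

Each `if` corresponds to a diamond in the flowchart, which is exactly why a plain checklist cannot capture this process: a list has no way to represent branching.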
Managing Business Process Documentation
As organizations begin to document their processes, it becomes an administrative task to
keep track of them. As processes change and improve, it is important to know which
processes are the most recent. It is also important to manage the process so that it can be
easily updated! The requirement to manage process documentation has been one of the
driving forces behind the creation of the document management system. A document
management system stores and tracks documents and supports the following functions:
Versions and timestamps. The document management system will keep multiple
versions of documents. The most recent version of a document is easy to identify
and will be served up by default.
Approvals and workflows. When a process needs to be changed, the system will
manage both access to the documents for editing and the routing of the document
for approvals.
Communication. When a process changes, those who implement the process need
to be made aware of the changes. A document management system will notify the
appropriate people when a change to a document is approved.
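The three functions above can be sketched in a toy class. This is not how a real document management system is implemented; it is a minimal illustration, with invented names, of versioning, latest-by-default retrieval, and change notification:

```python
import datetime

class DocumentStore:
    """Toy document management system: versioned documents with timestamps,
    a latest-by-default lookup, and change notifications to watchers."""

    def __init__(self):
        self._versions = {}   # name -> list of (timestamp, text), oldest first
        self._watchers = {}   # name -> callbacks invoked when the doc changes

    def save(self, name, text):
        self._versions.setdefault(name, []).append(
            (datetime.datetime.now(), text))
        for notify in self._watchers.get(name, []):
            notify(name)  # communication: alert stakeholders to the change

    def latest(self, name):
        # The most recent version is served up by default.
        return self._versions[name][-1][1]

    def history(self, name):
        return [text for _, text in self._versions[name]]

    def watch(self, name, callback):
        self._watchers.setdefault(name, []).append(callback)

store = DocumentStore()
store.watch("returns-policy", lambda name: print(f"{name} was updated"))
store.save("returns-policy", "Returns accepted within 30 days.")
store.save("returns-policy", "Returns accepted within 14 days.")
print(store.latest("returns-policy"))  # Returns accepted within 14 days.
```

Real systems add approvals and workflow routing on top of this core: an edit would sit in a pending state until approvers sign off, and only then would `save` commit the new version and fire notifications.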
Of course, document management systems are used not only for managing business
process documentation. Many other types of documents are managed in these systems,
such as legal documents or design documents.
ERP Systems
An enterprise resource planning (ERP) system is a software application with a centralized
database that can be used to run an entire company. Let’s take a closer look at the
definition of each of these components:
A software application: The system is a software application, which means that it has
been developed with specific logic and rules behind it. It has to be installed and
configured to work specifically for an individual organization.
With a centralized database: All data in an ERP system is stored in a single, central
database. This centralization is key to the success of an ERP—data entered in one
part of the company can be immediately available to other parts of the company.
That can be used to run an entire company: An ERP can be used to manage an entire
organization’s operations. If they so wish, companies can purchase modules for an
ERP that represent different functions within the organization, such as finance,
manufacturing, and sales. Some companies choose to purchase many modules;
others choose a subset of the modules.
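The value of the centralized database can be sketched in a few lines. The module and field names below are invented; the point is only that two "modules" read and write the same shared data, so a sale recorded by one is immediately visible to the other:

```python
# One central store; each "module" reads and writes the same data.
central_db = {"inventory": {}, "orders": []}

def sales_module_take_order(sku, qty):
    """Sales records an order against the shared database."""
    central_db["orders"].append({"sku": sku, "qty": qty})
    central_db["inventory"][sku] = central_db["inventory"].get(sku, 0) - qty

def manufacturing_module_restock(sku, qty):
    """Manufacturing updates the same inventory figures Sales sees."""
    central_db["inventory"][sku] = central_db["inventory"].get(sku, 0) + qty

manufacturing_module_restock("widget", 100)
sales_module_take_order("widget", 30)
print(central_db["inventory"]["widget"])  # 70
```

Contrast this with separate departmental systems, where the same inventory figure would exist in several databases and have to be reconciled after the fact.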
An ERP system not only centralizes an organization’s data, but the processes it enforces
are the processes the organization adopts. When an ERP vendor designs a module, it has
to implement the rules for the associated business processes. A selling point of an ERP
system is that it has best practices built right into it. In other words, when an organization
implements an ERP, it also gets improved best practices as part of the deal!
For many organizations, the implementation of an ERP system is an excellent opportunity
to improve their business practices and upgrade their software at the same time. But for
others, an ERP brings them a challenge: Is the process embedded in the ERP really better
than the process they are currently utilizing?
And if they implement this ERP, and it happens to be the same one that all of their
competitors have, will they simply become more like them, making it much more difficult
to differentiate themselves?
This has been one of the criticisms of ERP systems: that they commoditize business
processes, driving all businesses to use the same processes and thereby lose their
uniqueness. The good news is that ERP systems also have the capability to be configured
with custom processes. For organizations that want to continue using their own processes
or even design new ones, ERP systems offer ways to support this through customization.
But there is a drawback to customizing an ERP system: organizations have to maintain the
changes themselves. Whenever an update to the ERP system comes out, any organization
that has created a custom process will be required to add that change to their ERP. This
will require someone to maintain a listing of these changes and will also require retesting
the system every time an upgrade is made. Organizations will have to wrestle with this
decision: When should they go ahead and accept the best-practice processes built into
the ERP system and when should they spend the resources to develop their own
processes? It makes the most sense to only customize those processes that are critical to
the competitive advantage of the company.
Some of the best-known ERP vendors are SAP, Microsoft, and Oracle.
Business Process Management
Organizations that are serious about improving their business processes will also create
structures to manage those processes. Business process management (BPM) can be
thought of as an intentional effort to plan, document, implement, and distribute an
organization’s business processes with the support of information technology.
BPM is more than just automating some simple steps. While automation can make a
business more efficient, it cannot be used to provide a competitive advantage. BPM, on
the other hand, can be an integral part of creating that advantage.
Not all of an organization’s processes should be managed this way. An organization should
look for processes that are essential to the functioning of the business and those that may
be used to bring a competitive advantage. The best processes to look at are those that
include employees from multiple departments, those that require decision-making that
cannot be easily automated, and processes that change based on circumstances.
To make this clear, let’s take a look at an example.
Suppose a large clothing retailer is looking to gain a competitive advantage through
superior customer service. As part of this, they create a task force to develop a
state-of-the-art returns policy that allows customers to return any article of clothing, no questions
asked. The organization also decides that, in order to protect the competitive advantage
that this returns policy will bring, they will develop their own customization to their ERP
system to implement this returns policy. As they prepare to roll out the system, they
invest in training for all of their customer-service employees, showing them how to use
the new system and specifically, how to process returns. Once the updated returns
process is implemented, the organization will be able to measure several key indicators
about returns that will allow them to adjust the policy as needed. For example, if they find
that many women are returning their high-end dresses after wearing them once, they
could implement a change to the process that limits the time (e.g., 14 days) after the
original purchase that an item can be returned. As changes to the returns policy are made,
the changes are rolled out via internal communications, and updates to the returns
processing on the system are made. In our example, the system would no longer allow a
dress to be returned after 14 days without an approved reason.
If done properly, business process management will provide several key benefits to an
organization, which can be used to contribute to competitive advantage. These benefits include:
Empowering employees. When a business process is designed correctly and
supported with information technology, employees will be able to implement it on
their own authority. In our returns-policy example, an employee would be able to
accept returns made before 14 days or use the system to make determinations on
what returns would be allowed after 14 days.
Built-in reporting. By building measurement into the programming, the organization
can keep up to date on key metrics regarding their processes. In our example, these
can be used to improve the returns process and also, ideally, to reduce returns.
Enforcing best practices. As an organization implements processes supported by
information systems, it can work to implement the best practices for that class of
business process. In our example, the organization may want to require that all
customers returning a product without a receipt show a legal ID. This requirement
can be built into the system so that the return will not be processed unless a valid ID
number is entered.
Enforcing consistency. By creating a process and enforcing it with information
technology, it is possible to create consistency across the entire organization. In our
example, all stores in the retail chain can enforce the same returns policy. And if the
returns policy changes, the change can be instantly enforced across the entire chain.
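The returns-policy rules running through this example can be sketched as a single validation function. The function and parameter names are invented for illustration; the rules (the 14-day window and the ID requirement for no-receipt returns) come from the example above:

```python
def validate_return(days_since_purchase, has_receipt, id_number,
                    approved_reason=False):
    """Enforce the hypothetical returns policy from the example.

    Rules encoded in the system rather than left to each store:
      - no receipt means a valid ID number must be captured;
      - returns within 14 days are accepted;
      - after 14 days, an approved reason is required.
    """
    if not has_receipt and not id_number:
        return "rejected: ID required for returns without a receipt"
    if days_since_purchase > 14 and not approved_reason:
        return "rejected: past 14-day window without approved reason"
    return "accepted"

print(validate_return(10, True, None))  # accepted
print(validate_return(20, True, None))  # rejected: past 14-day window without approved reason
```

Because every store runs the same function, consistency is automatic, and a policy change (say, a 30-day window) is a single edit that takes effect across the entire chain at once.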
Business Process Reengineering
As organizations look to manage their processes to gain a competitive advantage, they
also need to understand that their existing ways of doing things may not be the most
effective or efficient. A process developed in the 1950s is not going to be better just
because it is now supported by technology.
In 1990, Michael Hammer published an article in the Harvard Business Review entitled
“Reengineering Work: Don’t Automate, Obliterate.” This article put forward the thought
that simply automating a bad process does not make it better. Instead, companies should
“blow up” their existing processes and develop new processes that take advantage of the
new technologies and concepts. He states in the introduction to the article:
Many of our job designs, work flows, control mechanisms, and organizational
structures came of age in a different competitive environment and before the
advent of the computer. They are geared towards greater efficiency and
control. Yet the watchwords of the new decade are innovation and speed,
service, and quality.
It is time to stop paving the cow paths. Instead of embedding outdated
processes in silicon and software, we should obliterate them and start over.
We should “reengineer” our businesses: use the power of modern information
technology to radically redesign our business processes in order to achieve
dramatic improvements in their performance. (Hammer, 1990)
Business process reengineering (BPR) is not just taking an existing process and
automating it. BPR is fully understanding the goals of a process and then radically
redesigning it from the ground up to achieve dramatic improvements in productivity and
quality. But this is easier said than done. Most of us think in terms of how to make small,
local improvements to a process; complete redesign requires thinking on a larger scale.
Hammer (1990) provided some guidelines for how to go about doing business process
reengineering:
Organize around outcomes, not tasks. This simply means to design the process so
that, if possible, one person performs all the steps. Instead of repeating one step in
the process over and over, the person stays involved in the process from start to
finish.
Have those who use the outcomes of the process perform the process. Using
information technology, many simple tasks are now automated, so we can empower
the person who needs the outcome of the process to perform it. The example
Hammer gives here is purchasing: instead of having every department in the
company use a purchasing department to order supplies, have those who need the
supplies order them directly by using an information system.
Subsume information-processing work into the real work that produces the
information. When one part of the company creates information (like sales or
payment information), it should be processed by that same department. There is no
need for one part of the company to process information created in another part of
the company.
Treat geographically dispersed resources as though they were centralized. With the
communications technologies in place today, it becomes easier than ever to not
worry about physical location. A multinational organization does not need separate
support departments (such as IT, purchasing, etc.) for each location anymore.
Link parallel activities instead of integrating their results. Departments that work in
parallel should be sharing data and communicating with each other during their
activities instead of waiting until each group is done and then comparing notes.
Put the decision points where the work is performed, and build controls into the
process. The people who do the work should have decision-making authority, and
the process itself should have built-in controls using information technology.
Capture information once, at the source. Requiring information to be entered more
than once causes delays and errors. With information technology, an organization
can capture it once and then make it available whenever needed.
These principles may seem like common sense today, but in 1990 they took the business
world by storm. Hammer (1990) gave example after example of how organizations
improved their business processes by many orders of magnitude without adding any new
employees, simply by changing how they did things (see “Reengineering the College
Bookstore” below).
Unfortunately, business process reengineering got a bad name in many organizations. This
was because it was used as an excuse for cost cutting that really had nothing to do with
BPR. For example, many companies simply used it as an excuse for laying off part of their
workforce. Today, however, many of the principles of BPR have been integrated into
businesses and are considered part of good business-process management.
Reengineering the College Bookstore
The process of purchasing the correct textbooks in a timely manner for college
classes has always been problematic. And now, with online bookstores such as
Amazon competing directly with the college bookstore for students’ purchases, the
college bookstore is under pressure to justify its existence.
But college bookstores have one big advantage over their competitors: They have
access to students’ data. In other words, once a student has registered for classes,
the bookstore knows exactly what books that student will need for the upcoming
term. To leverage this advantage and take advantage of new technologies, the
bookstore wants to implement a new process that will make purchasing books
through the bookstore advantageous to students. Though it may not be able to
compete on price, it can provide other advantages, such as reducing the time it
takes to find the books and the ability to guarantee that the book is the correct one
for the class. In order to do this, the bookstore will need to undertake a process redesign.
The goal of the process redesign is simple: to capture a higher percentage of
students as customers of the bookstore. After diagramming the existing process and
meeting with student focus groups, the bookstore comes up with a new process. In
the new process, the bookstore utilizes information technology to reduce the
amount of work the students need to do in order to get their books. In this new
process, the bookstore sends the students an email with a list of all the books
required for their upcoming classes. By clicking a link in this email, the students can
log into the bookstore, confirm their books, and purchase the books. The bookstore
will then deliver the books to the students.
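The redesigned process can be sketched as a short sequence of steps. The sketch below is purely illustrative; the function and data names are hypothetical and are not part of the case study.

```python
# A minimal sketch of the bookstore's redesigned ordering process.
# All names (redesigned_book_process, the registration fields, etc.)
# are hypothetical illustrations, not part of the original case study.

def redesigned_book_process(registrations):
    """Walk one student through the redesigned process; return the steps."""
    steps = []
    # Step 1: derive the required book list from the registration data.
    books = [course["book"] for course in registrations]
    steps.append(f"email the student a list of {len(books)} required books")
    # Step 2: the student clicks the emailed link and logs in.
    steps.append("student logs into the bookstore via the emailed link")
    # Step 3: the student confirms and purchases the books.
    steps.append("student confirms and purchases the books")
    # Step 4: the bookstore delivers the books.
    steps.append("bookstore delivers the books to the student")
    return steps

for step in redesigned_book_process(
    [{"course": "IFSM 300", "book": "Information Systems for Business and Beyond"}]
):
    print(step)
```

Note how the information system does the work in step 1: the student never has to search for the correct titles, which is the advantage the redesign is built around.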
[Figure: College bookstore process redesign]
[Figure: International Organization for Standardization (ISO) certification. ISO defines quality standards organizations must meet to show effective business process management.]
Many organizations now claim that they are using best practices when it comes to
business processes. In order to set themselves apart and prove to their customers (and
potential customers) that they are indeed doing this, these organizations are seeking out
an ISO 9000 certification. ISO is the International Organization for Standardization.
This body defines quality standards that organizations can implement to show that they
are, indeed, managing business processes in an effective way. The ISO 9000 certification
is focused on quality management.
In order to receive ISO certification, an organization must be audited and found to meet
specific criteria. In its most simple form, the auditors perform the following review:
Tell me what you do (describe the business process).
Show me where it says that (reference the process documentation).
Prove that this is what happened (exhibit evidence in documented records).
Over the years, this certification has evolved, and many branches of the certification now
exist. ISO certification is one way to separate an organization from others.
The advent of information technologies has had a huge impact on how organizations
design, implement, and support business processes. From document management systems
to ERP systems, information systems are tied into organizational processes. Using
business process management, organizations can empower employees and leverage their
processes for competitive advantage. Using business process reengineering, organizations
can vastly improve their effectiveness and the quality of their products and services.
Integrating information technology with business processes is one way that information
systems can bring an organization lasting competitive advantage.
Study Questions
1. What does the term business process mean?
2. What are three examples of business processes from a job you have had or an
organization you have observed?
3. What is the value in documenting a business process?
4. What is an ERP system? How does an ERP system enforce best practices for
an organization?
5. What is one of the criticisms of ERP systems?
6. What is business process reengineering? How is it different from incrementally
improving a process?
7. Why did BPR get a bad name?
8. List the guidelines for redesigning a business process.
9. What is business process management? What role does it play in allowing a
company to differentiate itself?
10. What does ISO certification signify?
Hammer, M. (1990). Reengineering work: Don't automate, obliterate. Harvard Business
Review, 68(4), 104–112.
Licenses and Attributions
Chapter 8: Business Processes
from Information Systems for Business and Beyond by David T.
Bourgeois is available under a Creative Commons Attribution 3.0 Unported
license. © 2014, David T. Bourgeois.
UMGC has modified this work and it is available under the original license.
© 2022 University of Maryland Global Campus
All links to external sites were verified at the time of publication. UMGC is not responsible for the validity or integrity
of information located at external sites.
Learning Resource
Enterprise Architecture and Information
Technology Infrastructure
Enterprise Architecture
While the growth of IT provides opportunities for new business models and processes,
management teams face many challenges in making sound IT investments. Investments in
technology do not guarantee the viability and profitability of an organization. Too often,
firms adopt a solution just because it uses the latest technology and then find that it is
not a good fit for the organization.
The financial impact of a failed IT project can include not only the expenditures for
hardware and software but also the time spent implementing a failed solution, including
the time spent redefining business processes and training employees.
In previous weeks, we focused on how organizations analyze their environment, seek
competitive advantage, and set business strategy. Now it’s time to begin focusing on how
information systems fit into that picture. Organizations analyze their businesses and
identify processes that present opportunities to improve profitability and performance
through the use of information technology.
Enterprise architecture is the management practice of identifying an overall design that
helps organizations understand, manage, and expand their IT infrastructure and systems.
This is a strategic, high-level design that looks at the organization's business vision,
strategy, and goals and identifies how information technology fits into that design.
Enterprise architecture is composed of three major components: the application
architecture, the information architecture, and the technical architecture. The application
architecture is a breakdown of the business processes and shows which processes are
supported by which application systems and how these applications integrate and relate
to each other. The application architecture also covers functional applications, such as
finance and human resources.
The information architecture defines where and how the important information is
maintained and secured. Frequently, the information architecture includes information
about all the data, how the data relate to each other, and how data flows throughout the
organization and its systems.
The technical architecture (sometimes referred to as the IT infrastructure) describes the
hardware and software used to design and build the systems. The technical architecture
describes what is already in place in an organization and how the organization wants to
evolve technically. You could think of the technical architecture as a blueprint, much like a
blueprint of the architecture of a building. The blueprint shows where everything is
located and how it fits together. If a system were developed without consideration of the
technical architecture, the chances are very high that it would not work in the
environment. For example, if a web-based system were developed or acquired for an
organization with no internet access, the effort would be futile. Technical architecture also
defines the standards and protocols for the organization, including security requirements.
A fully developed enterprise architecture should be able to tell us anything we need to
know about the business processes, the data used, and the underlying technology and
how it supports the business strategy. A solid enterprise architecture includes everything
from documentation to business concepts to the components discussed above.
IT Infrastructure
The major components of the IT infrastructure are:
1. Services—the people or organizations that run, support, and manage the other
infrastructure components; can be internal staff or external contractors or service
providers.
2. Hardware—devices that perform the input, storage, processing, and output
functions.
3. Software—instructions that enable the hardware to perform its functions, enabling
these assets to meet the needs of the business; includes (1) operating systems that
control the hardware, (2) data management software that stores and provides access
to data, and (3) application software, which supports the business processes.
4. Telecommunications—the tools that provide connectivity and communication among
individuals, companies, governments, or hardware assets; includes networking
hardware and software and telecommunications services (audio, video and data).
This includes internet access.
5. Facilities—the buildings or spaces that house the equipment and staff that provide
service and support.
Individuals need to understand the basics of these components to help the organization
recognize what is necessary to effectively implement and maintain information systems.
Because a business IT infrastructure can be regarded as the “nervous system,” it is
imperative that it be stable, robust, secure, and flexible so that it can support business
requirements reliably, especially in times of heavy usage. Consistency with the
infrastructure and enterprise architecture is an important consideration in making IT
decisions. The infrastructure must be able to accept both changes in the business and
radical changes in technology. Because of the constant changes in technology, an
infrastructure must change to take advantage of those changes that will provide a
business benefit to the company. This must be part of the IT plan so that transitions to
newer technology can be integrated smoothly, with no disruption or degradation of
service.
Suppose a new computer is under evaluation to replace an aging computer to gain the
advantages of increased speed and more storage. The impact on all of the components of
the infrastructure must be considered:
Will our existing peripherals operate with the new computer?
Will our existing software work on the new computer?
If it does, will it still permit us to achieve the benefits of the new computer?
If not, will new software have to be purchased?
Will our applications run on the new computer, or will changes have to be made?
Will our communication protocols work?
Will our networks support the higher volume of data, or will there be a bottleneck
that will prevent the new computer from functioning as well as we planned?
Will users or the technical staff require training to support the new computer
hardware and software?
Will our physical facilities (which may or may not be a dedicated data center) have
the power, cooling, and space capacity required by the new computer?
© 2022 University of Maryland Global Campus
All links to external sites were verified at the time of publication. UMGC is not responsible for the validity or integrity
of information located at external sites.
Learning Resource
The physical parts of computing devices—those that you can actually touch—are referred
to as hardware. In this reading, we will take a look at this component of information
systems, learn a little bit about how it works, and discuss some of the current trends
surrounding it.
As stated above, computer hardware encompasses digital devices that you can physically
touch, such as the following:
desktop computers
laptop computers
mobile phones
tablet computers
storage devices, such as flash drives
input devices, such as keyboards, mice, and scanners
output devices, such as printers and speakers
Besides these more traditional computer hardware devices, many items that were once
not considered digital devices are now becoming computerized themselves. Digital
technologies are now being integrated into many everyday objects, so the days of a
device being labeled categorically as computer hardware may be ending. Examples of
these types of digital devices include automobiles, refrigerators, and even soft-drink
dispensers. Let’s explore digital devices, beginning with defining the term.
Digital Devices
A digital device processes electronic signals that represent either a one (“on”) or a zero
(“off”). The on state is represented by the presence of an electronic signal; the off state is
represented by the absence of an electronic signal. Each one or zero is referred to as a bit
(a contraction of binary digit); a group of eight bits is a byte. The first personal computers
could process 8 bits of data at once; modern PCs can now process 64 bits of data at a
time, which is where the term 64-bit processor comes from.
Understanding Binary
As you know, the system of numbering we are most familiar with is base-ten
numbering. In base-ten numbering, each column in the number represents a power
of 10, with the far-right column representing 10^0 (ones), the next column from the
right representing 10^1 (tens), then 10^2 (hundreds), then 10^3 (thousands), etc.
For example, the number 1010 in decimal represents: (1 x 1000) + (0 x 100) + (1 x
10) + (0 x 1).
Computers use the base-two numbering system, also known as binary. In this
system, each column in the number represents a power of two, with the far-right
column representing 2^0 (ones), the next column from the right representing 2^1
(twos), then 2^2 (fours), then 2^3 (eights), etc. For example, the number 1010 in
binary represents (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1). In base ten, this evaluates to 10.
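The place-value arithmetic described above can be verified with a few lines of code; this is only an illustration of the base-two calculation.

```python
# Evaluate the binary number 1010 column by column, exactly as above:
# (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1).
digits = [1, 0, 1, 0]  # the binary number 1010
value = sum(bit * 2 ** power
            for power, bit in enumerate(reversed(digits)))
print(value)           # 10

# Python's built-in base conversions agree:
print(int("1010", 2))  # 10
print(bin(10))         # 0b1010
```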
As the capacities of digital devices grew, new terms were developed to identify the
capacities of processors, memory, and disk storage space. Prefixes were applied to
the word byte to represent different orders of magnitude. Since these are digital
specifications, the prefixes were originally meant to represent multiples of 1024
(which is 2^10) but have more recently been rounded to mean multiples of 1000.
A List of Binary Prefixes
kilobyte = one thousand bytes
megabyte = one million bytes
gigabyte = one billion bytes
terabyte = one trillion bytes
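The gap between the original binary meaning (multiples of 1024) and the rounded decimal meaning (multiples of 1000) of these prefixes can be computed directly; this snippet is just an illustration of the arithmetic.

```python
# One kilobyte originally meant 2**10 = 1024 bytes; the prefixes have
# more recently been rounded to powers of 1000. Compare the two meanings.
for power, prefix in enumerate(["kilo", "mega", "giga", "tera"], start=1):
    decimal = 1000 ** power  # the rounded, decimal meaning
    binary = 1024 ** power   # the original, binary meaning
    print(f"1 {prefix}byte: {decimal:,} bytes (decimal) vs {binary:,} bytes (binary)")
```

Note that the gap widens at each order of magnitude: a "terabyte" drive labeled in decimal units holds almost 10 percent less than the binary meaning would suggest.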
Tour of a PC
All personal computers consist of the same basic components: a CPU, memory, circuit
board, storage, and input/output devices. It also turns out that almost every digital device
uses the same set of components, so examining the personal computer will give us insight
into the structure of a variety of digital devices. So let’s take a “tour” of a personal
computer and see what makes it function.
Processing Data: The CPU
As stated above, most computing devices have a similar architecture. The core of this
architecture is the central processing unit, or CPU. The CPU can be thought of as the
“brains” of the device. The CPU carries out the commands sent to it by the software and
returns results to be acted upon.
The earliest CPUs were large circuit boards with limited functionality. Today, a CPU is
generally on one chip and can perform a large variety of functions. Today there are many
manufacturers of CPUs for personal computers; the leaders are Intel and Advanced Micro
Devices (AMD).
The speed (“clock time”) of a CPU is measured in hertz. A hertz is defined as one cycle per
second. Using the binary prefixes mentioned above, we can see that a kilohertz
(abbreviated kHz) is one thousand cycles per second, a megahertz (MHz) is one million
cycles per second, and a gigahertz (GHz) is one billion cycles per second. The CPU's
processing power has increased at an amazing rate (see “Moore’s Law,” below). Besides a
faster clock time, many CPU chips now contain multiple processors per chip. These chips,
known as dual-core (two processors), quad-core (four processors), etc., increase the
processing power of a computer by providing the capability of multiple CPUs.
Moore’s Law
We all know that computers get faster every year. Many times, we are not sure if we
want to buy today’s model of smartphone, tablet, or PC because next week it won’t
be the most advanced any more. Gordon Moore, one of the founders of Intel,
recognized this phenomenon in 1965, noting that microprocessor transistor counts
had been doubling every year (Moore, 1965). His insight eventually evolved into
Moore’s Law, which states that the number of transistors on a chip will double every
two years. This has been generalized into the concept that computing power will
double every two years for the same price point. Another way of looking at this is to
think that the price for the same computing power will be cut in half every two
years. Though many have predicted its demise, Moore’s Law has held true for over
40 years.
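Because the doubling compounds, the growth Moore's Law describes is easy to underestimate. The short calculation below is illustrative only; the starting transistor count is an arbitrary round number, not a historical figure.

```python
# Moore's Law as arithmetic: the transistor count doubles every two years.
# Start from a hypothetical chip with 1000 transistors and project 40 years.
transistors = 1000
for year in range(0, 41, 2):
    if year in (0, 20, 40):
        print(f"year {year:2}: {transistors:,} transistors")
    transistors *= 2
```

Twenty doublings multiply the starting count by 2^20, roughly a factor of one million, which is why four decades of this trend transformed computing.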
[Figure: A graphical representation of Moore's Law from 1971 to 2011. CC-BY-SA: Wgsimon]
There will be a point, someday, where we reach the limits of Moore’s Law, where we
cannot continue to shrink circuits any further. But engineers will continue to seek
ways to increase performance.
Motherboard: A Computer's Main Circuit Board
The motherboard is the main circuit board on the computer. The CPU, memory, and
storage components, among other things, all connect into the motherboard.
Motherboards come in different shapes and sizes, depending upon how compact or
expandable the computer is designed to be. Most modern motherboards have many
integrated components, such as video and sound processing, which used to require
separate components.
The motherboard provides much of the bus of the computer (the term bus refers to the
electrical connection between different computer components). The bus is an important
determiner of the computer’s speed: the combination of how fast the bus can transfer
data and the number of data bits that can be moved at one time determine the speed.
Random-Access Memory
When a computer starts up, it begins to load information from the hard disk into its
working memory. This working memory, called random-access memory (RAM), can
transfer data much faster than the hard disk. Any program that you are running on the
computer is loaded into RAM for processing. In order for a computer to work effectively,
some minimal amount of RAM must be installed. In most cases, adding more RAM will
allow the computer to run faster. Another characteristic of RAM is that it is “volatile.” This
means that it can store data as long as it is receiving power; when the computer is turned
off, any data stored in RAM is lost.
[Figure: A dual-inline memory module (DIMM), the means by which RAM is installed on a personal computer]
RAM is generally installed in a personal computer through the use of a dual-inline memory
module (DIMM). The type of DIMM accepted into a computer is dependent upon the
motherboard. As described by Moore’s Law, the amount of memory and speeds of DIMMs
have increased dramatically over the years.
Hard Disk
[Figure: A computer hard disk enclosure, the location of long-term data storage]
While the RAM is used as working memory, the computer also needs a place to store data
for the longer term. Most of today’s personal computers use a hard disk for long-term
data storage. A hard disk is where data is stored when the computer is turned off and
where it is retrieved from when the computer is turned on. It is called a hard disk because
it consists of a stack of disks inside a hard metal case. A floppy disk (discussed below) was
a removable disk that, in some cases at least, was flexible, or “floppy.”
Solid-State Drives
A relatively new component becoming more common in some personal computers is the
solid-state drive (SSD). The SSD performs the same function as a hard disk: long-term
storage. Instead of spinning disks, the SSD uses flash memory, which is much faster.
SSDs are currently quite a bit more expensive than hard disks. However, the use of flash
memory instead of disks makes them much lighter and faster than hard disks. SSDs are
primarily utilized in portable computers, making them lighter and more efficient. Some
computers combine the two storage technologies, using the SSD for the most accessed
data (such as the operating system) while using the hard disk for data that is accessed less
frequently. As with any technology, Moore’s Law is driving up capacity and speed, and
lowering prices of SSDs, which will allow them to proliferate in the years to come.
Removable Media
Besides fixed storage components, removable storage media are also used in most
personal computers. Removable media allows you to take your data with you. And just as
with all other digital technologies, these media have gotten smaller and more powerful as
the years have gone by. Early computers used floppy disks, which could be inserted into a
disk drive in the computer. Data was stored on a magnetic disk inside an enclosure. These
disks ranged from 8″ in the earliest days down to 3 1/2″.
[Figure: Floppy disks (8″, 5 1/4″, and 3 1/2″), removable storage used in early computers. Public domain]
Around the turn of the century, the USB flash drive was developed (more about the USB
port later in the chapter), and beginning in the late 1990s, the universal serial bus (USB)
connector became standard on all personal computers. As with all other storage media,
flash drive storage capacity has skyrocketed over the years, from initial capacities of 8
megabytes to current capacities of 64 gigabytes and still growing.
Network Connection
When personal computers were first developed, they were stand-alone units, which
meant that data was brought into the computer or removed from the computer via
removable media, such as the floppy disk. Beginning in the mid-1980s, however,
organizations began to see the value in connecting computers together via a digital
network. Because of this, personal computers needed the ability to connect to these
networks. Initially, this was done by adding an expansion card to the computer that
enabled the network connection, but by the mid-1990s, a network port was standard on
most personal computers. As wireless technologies began to dominate in the early 2000s,
many personal computers also began including wireless networking capabilities. Digital
communication technologies will be discussed further in Networking and Communication.
Input and Output
[Figure: A USB connector, used to attach input and output devices]
In order for a personal computer to be useful, it must have channels for receiving input
from the user and channels for delivering output to the user. These input and output
devices connect to the computer via various connection ports, which generally are part of
the motherboard and are accessible outside the computer case. In early personal
computers, specific ports were designed for each type of output device.
The configuration of these ports has evolved over the years, becoming more and more
standardized over time. Today, almost all devices plug into a computer through the use of
a USB port. This port type, first introduced in 1996, has increased in its capabilities, both
in its data transfer rate and power supplied.
Besides USB, some input and output devices connect to the computer via a wireless-technology
standard called Bluetooth, which was invented in the 1990s and exchanges data
over short distances using radio waves. Bluetooth generally has a range of
100 to 150 feet. For devices to communicate via Bluetooth, both the personal computer
and the connecting device must have a Bluetooth communication chip installed.
Input Devices
All personal computers need components that allow the user to input data. Early
computers used simply a keyboard to allow the user to enter data or select an item from a
menu to run a program. With the advent of the graphical user interface, the mouse
became a standard component of a computer. These two components are still the primary
input devices to a personal computer, though variations of each have been introduced
with varying levels of success over the years. For example, many new devices now use a
touch screen as the primary way of entering data.
Besides the keyboard and mouse, additional input devices are becoming more common.
Scanners allow users to input documents into a computer, either as images or as text.
Microphones can be used to record audio or give voice commands. Webcams and other
types of video cameras can be used to record video or participate in a video chat session.
Output Devices
Output devices are essential as well. The most obvious output device is a display, visually
representing the state of the computer. In some cases, a personal computer can support
multiple displays or be connected to larger-format displays such as a projector or
large-screen television. Besides displays, other output devices include speakers and printers.
What Hardware Components Contribute to the Speed of a Computer?
The speed of a computer is determined by many elements, some related to
hardware and some related to software. In hardware, speed is improved by giving
the electrons shorter distances to traverse to complete a circuit. Since the first CPU
was created in the early 1970s, engineers have constantly worked to figure out how
to shrink these circuits and put more and more circuits onto the same chip. And this
work has paid off—the speed of computing devices has continuously improved ever since.
The hardware components that contribute to the speed of a personal computer are
the CPU, the motherboard, RAM, and the hard disk. In most cases, these items can
be replaced with newer, faster components. In the case of RAM, simply adding more
RAM can also speed up the computer. The table shows how each of these
contributes to the speed of a computer. Besides upgrading hardware, there are
many changes that can be made to the software of a computer to make it faster.
How each component's speed is measured, and what that measurement means:
CPU (clock speed): the time it takes to complete a circuit.
Motherboard (bus speed): how much data can move across the bus simultaneously.
RAM (data transfer rate): the time it takes for data to be transferred from memory to system.
Hard disk (access time): the time it takes before the disk can transfer data.
Hard disk (data transfer rate): the time it takes for data to be transferred from disk to system.
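As a rough illustration of why transfer rates matter, the back-of-the-envelope calculation below estimates how long moving a file would take at a few different rates. The rates are made-up round numbers for illustration, not benchmarks of real hardware.

```python
# Hypothetical, illustrative transfer rates in megabytes per second.
rates_mb_per_s = {"hard disk": 100, "SSD": 500, "RAM": 10_000}
file_size_mb = 2_000  # a hypothetical 2 GB file

for device, rate in rates_mb_per_s.items():
    seconds = file_size_mb / rate  # time = size / rate
    print(f"{device}: {seconds:.1f} seconds to move {file_size_mb:,} MB")
```

Even with made-up numbers, the arithmetic shows why the slowest component (here, the hard disk) tends to dominate the user's experience of speed.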
Other Computing Devices
A personal computer is designed to be a general-purpose device. That is, it can be used to
solve many different types of problems. As the technologies of the personal computer
have become more commonplace, many of the components have been integrated into
other devices that previously were purely mechanical. We have also seen an evolution in
what defines a computer. Ever since the invention of the personal computer, users have
clamored for a way to carry them around. Here we will examine several types of devices
that represent the latest trends in personal computing.
Portable Computers
[Figure: An Apple MacBook laptop]
In 1983, Compaq Computer Corporation developed the first commercially successful
portable personal computer. By today’s standards, the Compaq PC was not very portable;
weighing in at 28 pounds, this computer was portable only in the most literal sense: it
could be carried around. But this was no laptop; the computer was designed like a
suitcase, to be lugged around and laid on its side to be used. Besides portability, the
Compaq was successful because it was fully compatible with the software being run by
the IBM PC, which was the standard for business.
In the years that followed, portable computing continued to improve, giving us laptop and
notebook computers. The “luggable” computer has given way to a much lighter clamshell
computer that weighs from 4 to 6 pounds and runs on batteries. In fact, the most recent
advances in technology give us a new class of laptop that is quickly becoming the
standard: extremely light and portable, and using less power than their larger
counterparts. The MacBook Air is a good example of this: it weighs less than three pounds
and is only 0.68 inches thick!
Finally, as more organizations and individuals have moved much of their computing to the
internet, many laptops use “the cloud” for all of their data and application storage. These
laptops are also extremely light because they do not need a hard disk. Samsung’s
Chromebook is a good example of this type of laptop (sometimes called a netbook).
The first modern-day mobile phone was invented in 1973. Resembling a brick and
weighing in at two pounds, it was priced out of reach for most consumers at nearly
$4000. Since then, mobile phones have become smaller and less expensive and are a
modern convenience available to all levels of society. As mobile phones evolved, they
became more like small computers. These smartphones have many of the same
characteristics as a personal computer, such as an operating system and memory. The first
smartphone was the IBM Simon, introduced in 1994.
In January 2007, Apple introduced the iPhone. Its ease of use and intuitive interface made
it an immediate success and solidified the future of smartphones. Running on an operating
system called iOS, the iPhone was really a small computer with a touch-screen interface.
In 2008, the first Android phone was released, with similar functionality.
Tablet Computers
A tablet computer is one that uses a touch screen as its primary input and is small enough
and light enough to be carried around easily. Tablets generally have no keyboard and are
self-contained inside a rectangular case. The first tablet computers appeared in the early
2000s and used an attached pen as a writing device for input. These tablets ranged in size
from small personal digital assistants (PDAs), which were handheld, to full-sized, 14-inch
devices. Most early tablets used a version of an existing computer operating system, such
as Windows or Linux.
These early tablet devices were, for the most part, commercial failures. In January 2010,
Apple introduced the iPad, which ushered in a new era of tablet computing. Instead of a
pen, the iPad used the finger as the primary input device. Instead of using the operating
system of their desktop and laptop computers, Apple chose to use iOS, the operating
system of the iPhone. Because the iPad had a user interface that was the same as the
iPhone, consumers felt comfortable and sales took off. The iPad has set the standard for
tablet computing. After the success of the iPad, computer manufacturers began to
develop new tablets that utilized operating systems that were designed for mobile
devices, such as Android.
The Rise of Mobile Computing
Mobile computing has had a huge impact on the business world. The use of smartphones
and tablet computers is replacing the use of PCs for many purposes. It is expected that
the use of PCs will continue to decline as mobile computing increases.
Integrated Computing
Along with advances in computers themselves, computing technology is being integrated
into many everyday products. From automobiles to refrigerators to airplanes, computing
technology is enhancing what these devices can do and is adding capabilities that would
have been considered science fiction just a few years ago. The smart house and the self-driving car are two of the latest ways that computing technologies are being integrated into everyday products.
The Commoditization of the Personal Computer
Over the past 30 years, as the personal computer has gone from technical marvel to part
of our everyday lives, it has also become a commodity. The PC has become a commodity
in the sense that there is very little differentiation between computers and the primary
factor that controls their sale is their price. Hundreds of manufacturers all over the world
now create parts for personal computers. Dozens of companies buy these parts and
assemble the computers. As commodities, there are essentially no differences between
computers made by these different companies. Profit margins for personal computers are
razor-thin, leading hardware developers to find the lowest-cost manufacturing.
There is one brand of computer for which this is not the case—Apple. Because Apple does
not make computers that run on the same open standards as other manufacturers, they
can make a unique product that no one can easily copy. By creating what many consider
to be a superior product, Apple can charge more for their computers than other
manufacturers. Just as with the iPad and iPhone, Apple has chosen a strategy of
differentiation, which, at least at this time, seems to be paying off.
The Problem of Electronic Waste
[Image: Electronic waste: discarded electronic equipment (public domain)]
Personal computers have been around for more than 35 years. Millions of them have been
used and discarded. Mobile phones are now available in even the remotest parts of the
world and, after a few years of use, they are discarded. Where does this electronic debris
end up?
Often, it gets routed to any country that will accept it. Many times, it ends up in dumps in
developing nations. These dumps are beginning to be seen as health hazards for those
living near them. Though many manufacturers have made strides in using materials that
can be recycled, electronic waste is a problem for all of us.
Information systems hardware consists of the components of digital technology that you
can touch. We reviewed the components that make up a personal computer, with the
understanding that the configuration of a personal computer is very similar to that of any
type of digital computing device. A personal computer is made up of many components,
most importantly the CPU, motherboard, RAM, hard disk, removable media, and
input/output devices. We also reviewed some variations on the personal computer, such
as the tablet computer and the smartphone. In accordance with Moore’s Law, these
technologies have improved quickly over the years, making today’s computing devices
much more powerful than the devices of just a few years ago. Finally, we discussed two of
the consequences of this evolution: the commoditization of the personal computer and
the problem of electronic waste.
Study Questions
1. Write your own description of what the term information systems hardware means.
2. What is the impact of Moore’s Law on the various hardware components
described in this chapter?
3. Write a summary of one of the items mentioned in the “Integrated Computing” section.
4. Explain why the personal computer is now considered a commodity.
5. The CPU can also be thought of as the _____________ of the computer.
6. List the following in increasing order (slowest to fastest): megahertz, kilohertz, gigahertz.
7. What is the bus of a computer?
8. Name two differences between RAM and a hard disk.
9. What are the advantages of solid-state drives over hard disks?
10. How heavy was the first commercially successful portable computer?
Moore, Gordon E. (1965). Cramming more components onto integrated circuits.
Electronics Magazine, p. 4.
Licenses and Attributions
Chapter 2: Hardware
from Information Systems for Business and Beyond by David T.
Bourgeois is available under a Creative Commons Attribution 3.0 Unported
license. © 2014, David T. Bourgeois.
UMGC has modified this work and it is available under the original license.
© 2022 University of Maryland Global Campus
All links to external sites were verified at the time of publication. UMGC is not responsible for the validity or integrity
of information located at external sites.
Networking and Communication
Learning Resource
Networking and Communication
In the early days of computing, computers were seen as devices for making calculations,
storing data, and automating business processes. However, as the devices evolved, it
became apparent that many of the functions of telecommunications could be integrated
into the computer. During the 1980s, many organizations began combining their once-separate telecommunications and information-systems departments into an information technology, or IT, department. This ability for computers to communicate with one another and, maybe more important, to facilitate communication between individuals and groups, has been an important factor in the growth of computing over the past several decades.
Computer networking really began in the 1960s with the birth of the internet, as we’ll see
below. However, while the internet and web were evolving, corporate networking was
also taking shape in the form of local area networks and client-server computing. In the
1990s, when the internet came of age, internet technologies began to pervade all areas of
an organization. Now, with the internet a global phenomenon, it would be unthinkable to
have a computer that did not include communications capabilities. This reading will review
the different technologies that have been put in place to enable this communications
revolution and a key information systems component, networking communication.
A Brief History of the Internet
In the Beginning: ARPANET
The story of the internet, and networking in general, can be traced back to the late 1950s.
The United States was in the depths of the Cold War with the USSR, and each nation
closely watched the other to determine which would gain a military or intelligence
advantage. In 1957, the Soviets surprised the US with the launch of Sputnik, propelling us
into the space age. In response to Sputnik, the US government created the Advanced
Research Projects Agency (ARPA), whose initial role was to ensure that the US was not
surprised again. It was from ARPA, now called Defense Advanced Research Projects
Agency (DARPA), that the internet first sprang.
ARPA was the center of computing research in the 1960s, but there was just one problem:
Many of the computers could not talk to each other. In 1968, ARPA sent out a request for
proposals for a communication technology that would allow different computers located
around the country to be integrated together into one network. Twelve companies
responded to the request, and a company named Bolt, Beranek, and Newman (BBN) won
the contract. They began work right away and were able to complete the job just one year
later: in September 1969, the ARPANET was turned on. The first four nodes were at UCLA, the Stanford Research Institute, UC Santa Barbara, and the University of Utah.
The Internet and the World Wide Web
Over the next decade, the ARPANET grew and gained popularity. During this time, other
networks also came into existence. Different organizations were connected to different
networks. This led to a problem: The networks could not talk to each other. Each network
used its own proprietary language, or protocol (see “An Internet Vocabulary Lesson” for
the definition), to send information back and forth. This problem was solved by the
invention of transmission control protocol/internet protocol (TCP/IP). TCP/IP was
designed to allow networks running on different protocols to have an intermediary
protocol that would allow them to communicate. As long as your network supported
TCP/IP, you could communicate with all of the other networks running TCP/IP. TCP/IP
quickly became the standard protocol and allowed networks to communicate with each
other. It is from this breakthrough that we first got the term internet, which simply means
“an interconnected network of networks.”
An Internet Vocabulary Lesson
Networking communication is full of some very technical concepts based on some
simple principles. Learn the terms below, and you’ll be able to hold your own in a
conversation about the internet.
Packet: The fundamental unit of data transmitted over the internet. When a
device intends to send a message to another device (for example, your PC
sends a request to YouTube to open a video), it breaks the message down into
smaller pieces, called packets. Each packet has the sender’s address, the
destination address, a sequence number, and a piece of the overall message to
be sent.
Hub: A simple network device that connects other devices to the network and
sends packets to all the devices connected to it.
Bridge: A network device that connects two networks together and only
allows packets through that are needed.
Switch: A network device that connects multiple devices together and filters
packets based on their destination within the connected devices.
Router: A device that receives and analyzes packets and then routes them
toward their destination. In some cases, a router will send a packet to another
router; in other cases, it will send it directly to its destination.
IP Address: Every device that communicates on the internet, whether it is a
personal computer, a tablet, a smartphone, or anything else, is assigned a
unique identifying number called an Internet Protocol (IP) address. Historically,
the IP-address standard used has been IPv4 (version 4), which has the format
of four numbers between 0 and 255 separated by a period; the domain Saylor.org, for example, maps to one such address. The IPv4 standard
has a limit of 4,294,967,296 possible addresses. As the use of the internet has
proliferated, the number of IP addresses needed has grown to the point where
the use of IPv4 addresses will be exhausted. This has led to the new IPv6
standard. The IPv6 standard is formatted as eight groups of four hexadecimal
digits, such as 2001:0db8:85a3:0042:1000:8a2e:0370:7334. The IPv6
standard has a limit of 3.4×10^38 possible addresses.
Domain name: If you had to try to remember the IP address of every web
server you wanted to access, the internet would not be nearly as easy to use.
A domain name is a human-friendly name for a device on the internet. These
names generally consist of a descriptive text followed by the top-level domain
(TLD). For example, Wikipedia’s domain name is
wikipedia.org; wikipedia describes the organization and .org is the top-level
domain. In this case, the .org TLD is designed for nonprofit organizations.
Other well-known TLDs include .com, .net, and .gov.
DNS: DNS stands for domain name system, which acts as the directory on the
internet. When a request to access a device with a domain name is given, a
DNS server is queried. It returns the IP address of the device requested,
allowing for proper routing.
Packet-switching: When a packet is sent from one device out over the
internet, it does not follow a straight path to its destination. Instead, it is
passed from one router to another across the internet until it reaches its
destination. In fact, sometimes two packets from the same message will take
different routes. Sometimes, packets will arrive at their destination out of
order. When this happens, the receiving device restores them to their proper order.
Protocol: In computer networking, a protocol is the set of rules that allow two
(or more) devices to exchange information back and forth across the network.
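The packet, sequence-number, and packet-switching ideas defined above can be sketched in a few lines of Python. The field names and the addresses are illustrative assumptions, not a real wire format (the range is reserved for documentation examples):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    source: str       # sender's address
    destination: str  # destination address
    seq: int          # sequence number, used to restore order on arrival
    payload: str      # one piece of the overall message

def split_into_packets(src: str, dst: str, message: str, size: int = 4):
    """Break a message into fixed-size packets, as a sending device would."""
    return [Packet(src, dst, i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Restore packets that arrived out of order using their sequence numbers."""
    return "".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

packets = split_into_packets("", "", "hello, internet")
packets.reverse()  # simulate out-of-order arrival over different routes
print(reassemble(packets))  # hello, internet

# Address-space sizes from the definitions above:
print(2 ** 32)   # IPv4: 4,294,967,296 possible addresses
print(2 ** 128)  # IPv6: about 3.4 × 10^38 possible addresses
```

Real protocols add headers, checksums, and retransmission on top of this basic split-and-reassemble scheme, but the principle is the same.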
[Image: Worldwide internet use over a 24-hour period]
As we moved into the 1980s, computers were added to the internet at an increasing rate.
These computers were primarily from government, academic, and research organizations.
Much to the surprise of the engineers, the early popularity of the internet was driven by
the use of electronic mail (see “Email Is the ‘Killer’ App for the Internet” below).
Using the internet in these early days was not easy. In order to access information on
another server, you had to know how to type in the commands necessary to access it, as
well as know the name of that device. That all changed in 1990, when Tim Berners-Lee
introduced his World Wide Web project, which provided an easy way to navigate the
internet through the use of linked text (hypertext). The World Wide Web gained even
more steam with the release of the Mosaic browser in 1993, which allowed graphics and
text to be combined together as a way to present information and navigate the internet.
The Mosaic browser took off in popularity and was soon superseded by Netscape
Navigator, the first commercial web browser, in 1994. The internet and the World Wide
Web were now poised for growth.
The Dot-Com Bubble
In the 1980s and early 1990s, the internet was being managed by the National Science
Foundation (NSF). The NSF had restricted commercial ventures on the internet, which
meant that no one could buy or sell anything online. In 1991, the NSF transferred its role
to three other organizations, thus getting the US government out of direct control over
the internet and essentially opening up commerce online.
This new commercialization of the internet led to what is now known as the dot-com
bubble. A frenzy of investment in new dot-com companies took place in the late 1990s,
running up the stock market to new highs on a daily basis. This investment bubble was
driven by the fact that investors knew that online commerce would change everything.
Unfortunately, many of these new companies had poor business models and ended up
with little to show for all of the funds that were invested in them. In 2000 and 2001, the
bubble burst and many of these new companies went out of business. Many companies
also survived, including the still-thriving Amazon (started in 1994) and eBay (1995). After
the dot-com bubble burst, a new reality became clear: In order to succeed online, e-business companies would need to develop real business models and show that they
could survive financially using this new technology.
Web 2.0
In the first few years of the World Wide Web, creating and putting up a website required
a specific set of knowledge: You had to know how to set up a server on the World Wide
Web, how to get a domain name, how to write web pages in HTML, and how to
troubleshoot various technical issues as they came up. Someone who did these jobs for a
website became known as a webmaster.
As the web gained in popularity, it became more and more apparent that those who did
not have the skills to be a webmaster still wanted to create online content and have their
own piece of the web. This need was met with new technologies that provided a website
framework for those who wanted to put content online. Blogger and Wikipedia are
examples of these early Web 2.0 applications, which gave anyone with something to say a
place to go and say it, without the need for understanding HTML or web-server technology.
Starting in the early 2000s, Web 2.0 applications began a second bubble of optimism and
investment. It seemed that everyone wanted their own blog or photo-sharing site. Here
are some of the companies that came of age during this time: MySpace (2003),
Photobucket (2003), Flickr (2004), Facebook (2004), WordPress (2005), Tumblr (2006),
and Twitter (2006). The ultimate indication that Web 2.0 had taken hold was when Time
magazine named “You” its “Person of the Year” in 2006.
Email is the “Killer” App for the Internet
When the personal computer was created, it was a great little toy for technology
hobbyists and armchair programmers. As soon as the spreadsheet was invented,
however, businesses took notice, and the rest is history. The spreadsheet was the
killer app for the personal computer: people bought PCs just so they could run spreadsheets.
The internet was originally designed as a way for scientists and researchers to share
information and computing power among themselves. However, as soon as
electronic mail was invented, it began driving demand for the internet. This wasn’t
what the developers had in mind, but it turned out that people connecting to people
was the killer app for the internet.
We are seeing this again today with social networks, specifically Facebook. Many
who weren’t convinced to have an online presence now feel left out without a
Facebook account. The connections made between people using Web 2.0
applications like Facebook on their personal computer or smartphone is driving
growth yet again.
The Internet and the World Wide Web Are Not the Same Thing
Many times, the terms “internet” and “World Wide Web,” or even just “the web,” are
used interchangeably. But really, they are not the same thing at all. The internet is
an interconnected network of networks. Many services run across the internet:
electronic mail, voice and video, file transfers, and, yes, the World Wide Web.
The World Wide Web is simply one piece of the internet. It is made up of web
servers that have HTML pages that are being viewed on devices with web browsers.
It is really that simple.
The Growth of Broadband
In the early days of the internet, most access was done via a modem over an analog
telephone line. A modem (short for “modulator-demodulator”) was connected to the
incoming phone line and a computer in order to connect you to a network. Speeds were
measured in bits-per-second (bps), with speeds growing from 1200 bps to 56,000 bps
over the years. Connection to the internet via these modems is called dial-up access. Dial-up was very inconvenient because it tied up the phone line. As the web became more and
more interactive, dial-up also hindered usage, as users wanted to transfer more and more
data. As a point of reference, downloading a typical 3.5 mb song would take 24 minutes at
1200 bps and 2 minutes at 28,800 bps.
A broadband connection is defined as one that has speeds of at least 256,000 bps,
though most connections today are much faster, measured in millions of bits per second
(megabits or mbps) or even billions (gigabits). For the home user, a broadband connection
is usually accomplished via the cable television lines or phone lines (DSL). Both cable and
DSL have similar prices and speeds, though each individual may find that one is better
than the other for their specific area. Speeds for cable and DSL can vary during different
times of the day or week, depending upon how much data traffic is being used. In more
remote areas, where cable and phone companies do not provide access, home internet
connections can be made via satellite. The average home broadband speed is anywhere
between 3 mbps and 30 mbps. At 10 mbps, downloading a typical 3.5 mb song would
take less than a second. For businesses who require more bandwidth and reliability,
telecommunications companies can provide other options, such as T1 and T3 lines.
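The dial-up versus broadband comparison comes down to one formula: transfer time equals file size in bits divided by line speed in bits per second. A quick sketch for sanity-checking such figures (note that whether a "3.5 mb" song is measured in megabits or megabytes changes the result by a factor of eight; the sketch assumes megabytes):

```python
def download_seconds(megabytes: float, bits_per_second: float) -> float:
    """Time to move a file of the given size over a line of the given speed."""
    bits = megabytes * 8_000_000  # 1 megabyte = 8,000,000 bits
    return bits / bits_per_second

song_mb = 3.5  # the example song size from the text, taken here as megabytes

print(download_seconds(song_mb, 56_000))      # 56k modem: 500 seconds
print(download_seconds(song_mb, 10_000_000))  # 10 mbps broadband: 2.8 seconds
```

Working the formula both ways (solving for speed given an acceptable wait time) is a practical way to decide how much bandwidth a home or business actually needs.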
Broadband access is important because it impacts how the internet is used. When a
community has access to broadband, its people can interact more online, increasing the
usage of digital tools overall. Access to broadband is now considered a basic human right
by the United Nations, as declared in their 2011 statement:
“Broadband technologies are fundamentally transforming the way we live,” the
Broadband Commission for Digital Development, set up last year by the UN
Educational Scientific and Cultural Organization (UNESCO) and the UN
International Telecommunications Union (ITU), said in issuing “The Broadband
Challenge” at a leadership summit in Geneva.
“It is vital that no one be excluded from the new global knowledge societies we
are building. We believe that communication is not just a human need—it is a right.”
Wireless Networking
Today we are used to being able to access the internet wherever we go. Our smartphones
can access the internet; Starbucks provides wireless “hotspots” for our laptops or tablets.
These wireless technologies have made internet access more convenient and have made
devices such as tablets and laptops much more functional. Let’s examine a few of these
wireless technologies.
Wi-Fi is a technology that takes an internet signal and converts it into radio waves. These
radio waves can be picked up within a radius of approximately 65 feet by devices with a
wireless adapter. Several Wi-Fi specifications have been developed over the years,
starting with 802.11b in 1999, followed by the 802.11g specification in 2003, and
802.11n in 2009. Each new specification improved the speed and range of Wi-Fi, allowing
for more uses. One of the primary places where Wi-Fi is being used is in the home. Home
users are purchasing Wi-Fi routers, connecting them to their broadband connections, and
then connecting multiple devices via Wi-Fi.
Mobile Network
As the cell phone has evolved into the smartphone, the desire for internet access on these
devices has led to data networks being included as part of the mobile phone network.
While internet connections were technically available earlier, it was really with the release
of the 3G networks in 2001 (2002 in the US) that smartphones and other cellular devices
could access data from the internet. This new capability drove the market for new and
more powerful smartphones, such as the iPhone, introduced in 2007. In 2011, wireless
carriers began offering 4G data speeds, giving the cellular networks the same speeds that
customers were used to getting via their home connection.
Why Doesn’t My Cell Phone Work When I Travel Abroad?
As mobile phone technologies have evolved, providers in different countries have
chosen different communication standards for their mobile phone networks. In the
United States, two competing standards exist: Global System for Mobile
Communications (GSM; used by AT&T and T-Mobile) and Code-Division Multiple
Access (CDMA; used by the other major carriers). Each standard has its pros and
cons, but the bottom line is that phones using one standard cannot easily switch to
the other. In the United States, this is not a big deal because mobile networks exist
to support both standards. But when you travel to other countries, you will find that
most of them use GSM networks, with the one big exception being Japan, which has
standardized on CDMA. It is possible for a mobile phone using one type of network
to switch to the other type of network by switching out the SIM card, which
controls your access to the mobile network. However, this will not work in all cases.
If you are traveling abroad, it is always best to consult with your mobile provider to
determine the best way to access a mobile network.
While Bluetooth is not generally used to connect a device to the internet, it is an
important wireless technology that has enabled many functionalities that are used every
day. When created in 1994 by Ericsson, it was intended to replace wired connections
between devices. Today, it is the standard method for connecting nearby devices
wirelessly. Bluetooth has a range of approximately 300 feet and consumes very little
power, making it an excellent choice for a variety of purposes. Some applications of
Bluetooth include connecting a printer to a personal computer, connecting a mobile
phone and headset, connecting a wireless keyboard and mouse to a computer, and
connecting a remote for a presentation made on a personal computer.
A growing class of data being transferred over the internet is voice data. A protocol called
voice over IP (VoIP) enables sounds to be converted to a digital format for transmission
over the internet and then recreated at the other end. By using many existing
technologies and software, voice communication over the internet is now available to
anyone with a browser (think Skype, Google Hangouts). Beyond this, many companies are
now offering VoIP-based telephone service for business and home use.
Organizational Networking
Local and Wide Area Networks
[Image: Scope of business networks]
While the internet was evolving and creating a way for organizations to connect to each
other and the world, another revolution was taking place inside organizations. The
proliferation of personal computers inside organizations led to the need to share
resources such as printers, scanners, and data. Organizations solved this problem through
the creation of local area networks (LANs), which allowed computers to connect to each
other and to peripherals. These same networks also allowed personal computers to hook
up to legacy mainframe computers.
A LAN is (by definition) a local network, usually operating in the same building or on the
same campus. When an organization needed to provide a network over a wider area (with
locations in different cities or states, for example), they would build a wide area network (WAN).
The personal computer originally was used as a stand-alone computing device. A program
was installed on the computer and then used to do word processing or number crunching.
However, with the advent of networking and LANs, computers could work together to
solve problems. Higher-end computers were installed as servers, and users on the local
network could run applications and share information among departments and
organizations. This is called client-server computing.
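Client-server computing as described here can be sketched with Python's standard socket library. The single-request echo service below is an illustrative stand-in for a real shared application, run locally so that both roles fit in one script:

```python
import socket
import threading

def run_echo_server(listener: socket.socket) -> None:
    """The 'server' role: wait for one client request and answer it."""
    conn, _addr = listener.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"echo: " + request)

# Set up a listening socket; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 0))
server.listen(1)
port = server.getsockname()[1]
thread = threading.Thread(target=run_echo_server, args=(server,))
thread.start()

# The 'client' role: any machine on the network that uses the shared service.
with socket.create_connection(("", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

thread.join()
server.close()
print(reply.decode())  # echo: hello
```

On a real LAN the server would run on a dedicated machine and many clients would connect to its address, but the request-and-response pattern is exactly this one.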
Just as organizations set up websites to provide global access to information about their
business, they also set up internal web pages to provide information about the
organization to the employees. This internal set of web pages is called an intranet. Web
pages on the intranet are not accessible to those outside the company; in fact, those
pages would come up as “not found” if an employee tried to access them from outside the
company’s network.
Sometimes an organization wants to be able to collaborate with its customers or suppliers
while at the same time maintaining the security of being inside its own network. In cases
like this, a company may want to create an extranet, which is a part of the company’s
network that can be made available securely to those outside of the company. Extranets
can be used to allow customers to log in and check the status of their orders, or for
suppliers to check their customers’ inventory levels.
Sometimes, an organization will need to allow someone who is not located physically
within its internal network to gain access. This access can be provided by a virtual private
network (VPN). VPNs will be discussed further in the reading Information Systems Security.
Microsoft’s SharePoint Powers the Intranet
As organizations begin to see the power of collaboration between their employees,
they often look for solutions that will allow them to leverage their intranet to enable
more collaboration. Since most companies use Microsoft products for much of their
computing, it is only natural that they have looked to Microsoft to provide a
solution. This solution is Microsoft’s SharePoint.
SharePoint provides a communication and collaboration platform that integrates
seamlessly with Microsoft’s Office suite of applications. Using SharePoint,
employees can share a document and edit it together—no more emailing that Word
document to everyone for review. Projects and documents can be managed
collaboratively across the organization. Corporate documents are indexed and made
available for search. No more asking around for that procedures document—now
you just search for it in SharePoint. For organizations looking to add a social
networking component to their intranet, Microsoft offers Yammer, which can be
used by itself or integrated into SharePoint.
Cloud Computing
The universal availability of the internet combined with increases in processing power and
data-storage capacity have made cloud computing a viable option for many companies.
Using cloud computing, companies or individuals can contract to store data on storage
devices somewhere on the internet. Applications can be “rented” as needed, giving a
company the ability to quickly deploy new applications. You can read about cloud
computing in more detail in the reading Software.
Metcalfe’s Law
Just as Moore’s Law describes how computing power is increasing over time,
Metcalfe’s Law describes the power of networking. Specifically, Metcalfe’s Law
states that the value of a telecommunications network is proportional to the
square of the number of connected users of the system. Think about it this way: If
none of your friends were on Facebook, would you spend much time there? If no
one else at your school or place of work had email, would it be very useful to you?
Metcalfe’s Law tries to quantify this value.
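The square relationship is easy to see in numbers. A small sketch using the proportional form stated above (the user counts are illustrative):

```python
def network_value(users: int) -> int:
    """Metcalfe's Law: value is proportional to the square of the user count."""
    return users ** 2

# Doubling the number of users quadruples the network's value.
print(network_value(1_000))                         # 1,000,000
print(network_value(2_000) / network_value(1_000))  # 4.0
```

This is why each new user makes a network more attractive to every existing user, and why networks like email and Facebook grew so explosively once they reached a critical mass.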
The networking revolution has completely changed how the computer is used. Today, no
one would imagine using a computer that was not connected to one or more networks.
The development of the internet and World Wide Web, combined with wireless access,
has made information available at our fingertips. The Web 2.0 revolution has made us all
authors of web content. As networking technology has matured, the use of internet
technologies has become a standard for every type of organization. The use of intranets
and extranets has allowed organizations to deploy functionality to employees and
business partners alike, increasing efficiencies and improving communications. Cloud
computing has truly made information available everywhere and has serious implications
for the role of the IT department.
Study Questions
1. What were the first four locations hooked up to the internet (ARPANET)?
2. What does the term packet mean?
3. Which came first, the internet or the World Wide Web?
4. What was revolutionary about Web 2.0?
5. What was the so-called killer app for the internet?
6. What makes a connection a broadband connection?
7. What does the term VoIP mean?
8. What is a LAN?
9. What is the difference between an intranet and an extranet?
10. What is Metcalfe’s Law?
United Nations. (2011). The Broadband Challenge. Broadband Commission for Digital Development.