Official (ISC)2 Certified in Cybersecurity (CC) Self-Paced Training. (Chapter 4: Network Security)

Chapter 4: Network Security


Chapter 4 Agenda

Module 1: Understand Computer Networking  (D4.1)

Module 2: Understand Network (Cyber) Threats and Attacks (D4.2)

Module 3: Understand Network Security Infrastructure (D4.3)

Module 4: Summary

Chapter 4 Overview

Let’s take a more detailed look at computer networking and securing the network. In today’s world, the internet connects nearly everyone and everything through networking. While most people see computer networking as a positive, criminals routinely use the internet, and the networking protocols themselves, as weapons and tools to exploit vulnerabilities; for this reason, we must do our best to secure our networks. We will review the basic components of a network and the threats and attacks against it, and we will learn how to protect networks from attackers. Network security can be a specialty career within cybersecurity; however, all information security professionals need to understand how networks operate and how they are exploited, in order to secure them better. 

Learning Objectives

Domain 4: Network Security Objectives

After completing this chapter, the participant will be able to: 

L4      

Explain the concepts of network security. 

L4.1.1

Recognize common networking terms and models. 

L4.1.2

Identify common protocols and ports and their secure counterparts. 

L4.2.1

Identify types of network (cyber) threats and attacks. 

L4.2.2

Discuss common tools used to identify and prevent threats. 

L4.3.1

Identify common data center terminology. 

L4.3.2

Recognize common cloud service terminology. 

L4.3.3

Identify secure network design terminology. 

L4.4.1

Practice the terminology of and review network security concepts. 

Chapter at a Glance

While working through Chapter 4, Network Security, make sure to: 

  • Complete the Knowledge Check: Networking Terms and Models 
  • Complete the Knowledge Check: Formatting IPv6 
  • Complete the Knowledge Check: Matching Ports with Their Secure Counterparts 
  • Complete the Knowledge Check: Identify the Malware Threats
  • Complete the Knowledge Check: Types of Threats 
  • Complete the Knowledge Check: On-Premises 
  • Complete the Knowledge Check: Which of the Following is Not a Source of Redundant Power
  • Complete the Knowledge Check: Cloud Service Terminology
  • Complete the Knowledge Check: Cloud Service Models
  • Complete the Knowledge Check: Network Design Terms
  • View the Chapter 4 Summary
  • Take the Online Chapter 4 Quiz
  • View the Terms and Definitions

 

Module 1: Understand Computer Networking


Domain D4.1.1, D4.1.2

Module Objectives

  • L4.1.1 Recognize common networking terms and models.
  • L4.1.2 Identify common protocols and ports and their secure counterparts when presented with a network diagram.

 Manny: One of the biggest issues in cybersecurity is that computers are all linked together,
sometimes by physical networks within a building, and almost always via the Internet, so it's easy
for viruses and other threats to move rapidly through networks.
Tasha: That's right, and cyber threats and attacks are getting more sophisticated all the time. This
aspect of cybersecurity is always evolving. Let's find out more.

 

What is Networking?

A network is simply two or more computers linked together to share data, information or resources.

To properly establish secure data communications, it is important to explore all of the technologies involved in computer communications. From hardware and software to protocols and encryption and beyond, there are many details, standards and procedures to be familiar with.

Types of Networks

There are two basic types of networks:

  • Local area network (LAN) - A local area network (LAN) is a network typically spanning a single floor or building. This is commonly a limited geographical area.
  • Wide area network (WAN) - Wide area network (WAN) is the term usually assigned to the long-distance connections between geographically remote networks.

Network Devices

Click on each tab below to learn more. 



Other Networking Terms

Ethernet

image of an ethernet cable

Ethernet (IEEE 802.3) is a standard that defines wired connections of networked devices. This standard defines the way data is formatted over the wire to ensure disparate devices can communicate over the same cables.

Device Address

  • Media Access Control (MAC) Address - Every network device is assigned a Media Access Control (MAC) address. An example is 00-13-02-1F-58-F5. The first 3 bytes (24 bits) of the address denote the vendor or manufacturer of the physical network interface. No two devices can have the same MAC address in the same local network; otherwise an address conflict occurs.
  • Internet Protocol (IP) Address - While MAC addresses are generally assigned in the firmware of the interface, IP hosts associate that address with a unique logical address. This logical IP address represents the network interface within the network and can be useful to maintain communications when a physical device is swapped with new hardware. Examples are 192.168.1.1 and 2001:db8::ffff:0:1.
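As an illustration of the MAC address structure described above, here is a minimal sketch (the helper function is ours, not part of any standard library):

```python
# Minimal sketch: splitting a MAC address into its vendor (OUI) and
# device-specific portions. The example address comes from the text above.
def split_mac(mac: str):
    """Return (oui, device_id) for a MAC written as six hyphenated bytes."""
    octets = mac.split("-")
    assert len(octets) == 6, "a MAC address is 6 bytes (48 bits)"
    # First 3 bytes (24 bits) identify the vendor/manufacturer (the OUI);
    # the remaining 3 bytes are assigned by that vendor to the interface.
    return "-".join(octets[:3]), "-".join(octets[3:])

oui, device = split_mac("00-13-02-1F-58-F5")
print(oui)     # 00-13-02  (vendor portion)
print(device)  # 1F-58-F5  (interface portion)
```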


Networking at a Glance

This diagram represents a small business network, which we will build upon during this lesson. The lines depict wired connections. Notice how all devices behind the firewall connect via the network switch, and the firewall lies between the network switch and the internet. 


The network diagram below represents a typical home network. Notice the primary difference between the home network and the business network is that the router, firewall, and network switch are often combined into one device supplied by your internet provider and shown here as the wireless access point. 


Networking Models

Many different models, architectures and standards exist that provide ways to interconnect different hardware and software systems with each other for the purposes of sharing information, coordinating their activities and accomplishing joint or shared tasks.

Computers and networks emerge from the integration of communication devices, storage devices, processing devices, security devices, input devices, output devices, operating systems, software, services, data and people.

Translating the organization’s security needs into safe, reliable and effective network systems needs to start with a simple premise. The purpose of all communications is to exchange information and ideas between people and organizations so that they can get work done.

Those simple goals can be re-expressed in network (and security) terms such as:

  • Provide reliable, managed communications between hosts (and users)
  • Isolate functions in layers
  • Use packets as the basis of communication
  • Standardize routing, addressing and control
  • Allow layers beyond internetworking to add functionality
  • Be vendor-agnostic, scalable and resilient

In the most basic form, a network model has at least two layers:

    Select each plus sign hotspot to learn more about each topic.

    diagram of two layers of a network model with the lower (or data transport) layer labeled with 1. Physical, 2. Data Link, 3. Network, 4. Transport and the upper (or application) layer labeled with 5. Session, 6. Presentation, and 7. Application


  

The upper layer, also known as the host or application layer, is responsible for managing the integrity of a connection and controlling the session as well as establishing, maintaining and terminating communication sessions between two computers. It is also responsible for transforming data received from the Application Layer into a format that any system can understand. And finally, it allows applications to communicate and determines whether a remote communication partner is available and accessible.

The lower layer, often referred to as the media or transport layer, is responsible for receiving bits from the physical connection medium and converting them into a frame. Frames are grouped into standardized sizes. Think of frames as a bucket and the bits as water. If the buckets are sized similarly and the water is contained within the buckets, the data can be transported in a controlled manner. Route data is added to the frames of data to create packets. In other words, a destination address is added to the bucket. Once we have the buckets sorted and ready to go, the host layer takes over.

Open Systems Interconnection (OSI) Model

The OSI Model was developed to establish a common way to describe the communication structure for interconnected computer systems. The OSI model serves as an abstract framework, or theoretical model, for how protocols should function in an ideal world, on ideal hardware. Thus, the OSI model has become a common conceptual reference that is used to understand the communication of various hierarchical components from software interfaces to physical hardware.

The OSI model divides networking tasks into seven distinct layers. Each layer is responsible for performing specific tasks or operations with the goal of supporting data exchange (in other words, network communication) between two computers. The layers are interchangeably referenced by name or layer number. For example, Layer 3 is also known as the Network Layer. The layers are ordered specifically to indicate how information flows through the various levels of communication. Each layer communicates directly with the layer above and the layer below it. For example, Layer 3 communicates with both the Data Link (2) and Transport (4) layers.

The Application, Presentation, and Session Layers (5-7) are commonly referred to simply as data. However, each layer has the potential to perform encapsulation, the addition of a header, and possibly a footer (trailer), to the data by the protocol used at that layer of the OSI model. Encapsulation is particularly important when discussing the Transport, Network and Data Link Layers (2-4), which all generally include some form of header. At the Physical Layer (1), the data unit is converted into binary, i.e., 01010111, and sent across physical wires such as an Ethernet cable.  

It's worth mapping some common networking terminology to the OSI Model so you can see the value in the conceptual model.

Consider the following examples: 

  • When someone references an image file like a JPEG or PNG, we are talking about the Presentation Layer (6). 
  • When discussing logical ports such as NetBIOS, we are discussing the Session Layer (5).
  • When discussing TCP/UDP, we are discussing the Transport Layer (4).
  • When discussing routers sending packets, we are discussing the Network Layer (3). 
  • When discussing switches, bridges or WAPs sending frames, we are discussing the Data Link Layer (2). 

Encapsulation occurs as the data moves down the OSI model from Application to Physical. As data is encapsulated at each descending layer, the previous layer’s header, payload and footer are all treated as the next layer’s payload. The data unit size increases as we move down the conceptual model and the contents continue to encapsulate.  
 
The inverse action occurs as data moves up the OSI model layers from Physical to Application. This process is known as de-encapsulation  (or decapsulation). The header and footer are used to properly interpret the data payload and are then discarded. As we move up the OSI model, the data unit becomes smaller. The encapsulation/de-encapsulation process is best depicted visually below: 

diagram of the layers of the Open Systems Interconnection (OSI) Model
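The encapsulation and de-encapsulation process can also be sketched in code. This is purely illustrative; the bracketed "headers" below stand in for real protocol headers:

```python
# Illustrative sketch (not a real protocol stack): moving DOWN the model,
# each layer wraps the payload it receives from the layer above with its
# own header; moving UP, de-encapsulation strips the headers back off.
def encapsulate(payload: str, headers: list) -> str:
    # Each descending layer treats everything it receives as its payload
    # and prepends its own header. The data unit grows at each step.
    for header in headers:
        payload = f"[{header}|{payload}]"
    return payload

def de_encapsulate(frame: str) -> str:
    # Each ascending layer interprets and discards one header.
    # The data unit shrinks at each step.
    while frame.startswith("["):
        frame = frame[frame.index("|") + 1 : -1]
    return frame

frame = encapsulate("data", ["TCP", "IP", "Ethernet"])
print(frame)                  # [Ethernet|[IP|[TCP|data]]]
print(de_encapsulate(frame))  # data
```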


Transmission Control Protocol/Internet Protocol (TCP/IP)

The OSI model wasn’t the first or only attempt to streamline networking protocols or establish a common communications standard. In fact, the most widely used protocol today, TCP/IP, was developed in the early 1970s. The OSI model was not developed until the late 1970s. The TCP/IP protocol stack focuses on the core functions of networking.  

TCP/IP Protocol Architecture Layers

  • Application Layer - Defines the protocols for the transport layer.
  • Transport Layer - Permits data to move among devices.
  • Internet Layer - Creates/inserts packets.
  • Network Interface Layer - Defines how data moves through the network.

The most widely used protocol suite is TCP/IP, but it is not just a single protocol; rather, it is a protocol stack comprising dozens of individual protocols. TCP/IP is a platform-independent protocol based on open standards. However, this is both a benefit and a drawback. TCP/IP can be found in just about every available operating system, but it consumes a significant amount of resources and is relatively easy to hack into because it was designed for ease of use rather than for security. 

Transmission Control Protocol/Internet Protocol (TCP/IP)

At the Application Layer, TCP/IP protocols include Telnet, File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), and Domain Name System (DNS).

The two primary Transport Layer protocols of TCP/IP are TCP and UDP. TCP is a full-duplex connection-oriented protocol, whereas UDP is a simplex connectionless protocol. In the Internet Layer, Internet Control Message Protocol (ICMP) is used to determine the health of a network or a specific link. ICMP is utilized by ping, traceroute and other network management tools. The ping utility employs ICMP echo packets and bounces them off remote systems. Thus, you can use ping to determine whether the remote system is online, whether the remote system is responding promptly, whether the intermediary systems are supporting communications, and the level of performance efficiency at which the intermediary systems are communicating.
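The difference between connection-oriented TCP and connectionless UDP can be demonstrated with a short, self-contained sketch using Python's standard socket module (localhost only; this is an illustration, not part of the course material):

```python
import socket

# Sketch contrasting the two primary Transport Layer protocols:
# UDP sends a datagram with no connection at all, while TCP sets up
# a connection (the three-way handshake) before any data flows.

# --- UDP: connectionless -------------------------------------------
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))               # OS picks a free logical port
udp_port = udp_recv.getsockname()[1]

udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"hello", ("127.0.0.1", udp_port))  # no connect() needed
udp_data, _ = udp_recv.recvfrom(1024)
print(udp_data)                               # b'hello'
udp_send.close()
udp_recv.close()

# --- TCP: connection-oriented, full-duplex -------------------------
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
tcp_port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", tcp_port))       # handshake happens here
conn, _ = server.accept()
client.sendall(b"hello")
tcp_data = conn.recv(1024)
print(tcp_data)                               # b'hello'
client.close()
conn.close()
server.close()
```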

diagram of OSI model layers, TCP/IP protocol architecture, and TCP/IP protocol suite
  

KNOWLEDGE CHECK


Internet Protocol (IPv4 and IPv6)

IP is currently deployed and used worldwide in two major versions. IPv4 provides a 32-bit address space, which by the late 1980s was projected to be exhausted. IPv6 was introduced in December 1995 and provides a 128-bit address space along with several other important features. 

image of IP address with first octet marked with a bracket and labeling to indicate 32-bit address, dotted decimal address, network address, and host address

IP hosts/devices associate an address with a unique logical address. An IPv4 address is expressed as four octets separated by a dot (.), for example, 216.12.146.140. Each octet may have a value between 0 and 255. However, 0 is the network itself (not a device on that network), and 255 is generally reserved for broadcast purposes. Each address is subdivided into two parts: the network number and the host. The network number, assigned by an external organization such as the Internet Corporation for Assigned Names and Numbers (ICANN), represents the organization’s network. The host represents the network interface within the network.  

To ease network administration, networks are typically divided into subnets. Because subnets cannot be distinguished with the addressing scheme discussed so far, a separate mechanism, the subnet mask, is used to define the part of the address used for the subnet. The mask is usually converted to decimal notation like 255.255.255.0.  
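The subnet mask mechanics can be illustrated with Python's standard ipaddress module; the address and mask reuse the examples from this section:

```python
import ipaddress

# Sketch: applying the mask 255.255.255.0 (a /24) to an address splits
# it into its network/subnet portion and its host portion.
iface = ipaddress.ip_interface("216.12.146.140/255.255.255.0")
print(iface.network)   # 216.12.146.0/24 -> the network/subnet portion
print(iface.netmask)   # 255.255.255.0
# Masking with the host mask leaves just the host portion of the address:
print(int(iface.ip) & int(iface.network.hostmask))  # 140
```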

With the ever-increasing number of computers and networked devices, it is clear that IPv4 does not provide enough addresses for our needs. To overcome this shortcoming, IPv4 was sub-divided into public and private address ranges. Public addresses are limited with IPv4, but this issue was addressed in part with private addressing. Private addresses can be shared by anyone, and it is highly likely that everyone on your street is using the same address scheme.  

The nature of the addressing scheme established by IPv4 meant that network designers had to start thinking in terms of IP address reuse. IPv4 facilitated this in several ways, such as its creation of the private address groups; this allows every LAN in every SOHO (small office, home office) situation to use addresses such as 192.168.2.xxx for its internal network addresses, without fear that some other system can intercept traffic on their LAN. 

Internet Protocol (IPv4 and IPv6)

This table shows the private addresses available for anyone to use:

Range 
10.0.0.0 to 10.255.255.254 
172.16.0.0 to 172.31.255.254 
192.168.0.0 to 192.168.255.254

The first octet of 127 is reserved for a computer’s loopback address. Usually, the address 127.0.0.1 is used. The loopback address is used to provide a mechanism for self-diagnosis and troubleshooting at the machine level. This mechanism allows a network administrator to treat a local machine as if it were a remote machine and ping the network interface to establish whether it is operational.
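As an illustrative check (not part of the course material), Python's standard ipaddress module already classifies the private ranges from the table above as well as the 127.0.0.0/8 loopback block:

```python
import ipaddress

# The module recognizes the three RFC 1918 private ranges and the
# loopback block used for self-diagnosis at the machine level.
for addr in ("10.0.0.5", "172.16.3.1", "192.168.1.1"):
    print(addr, ipaddress.ip_address(addr).is_private)   # all True
print(ipaddress.ip_address("8.8.8.8").is_private)        # False: a public address
print(ipaddress.ip_address("127.0.0.1").is_loopback)     # True: loopback address
```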

IPv6 is a modernization of IPv4, which addressed a number of weaknesses in the IPv4 environment:

  • A much larger address field: IPv6 addresses are 128 bits, which supports 2^128 or 340,282,366,920,938,463,463,374,607,431,768,211,456 hosts. This ensures that we will not run out of addresses.

  • Improved security: IPsec is an optional part of IPv4 networks, but a mandatory component of IPv6 networks. This will help ensure the integrity and confidentiality of IP packets and allow communicating partners to authenticate with each other.

  • Improved quality of service (QoS): This will help services obtain an appropriate share of a network’s bandwidth.

An IPv6 address is shown as 8 groups of four digits. Instead of numeric (0-9) digits like IPv4, IPv6 addresses use the hexadecimal range (0000-ffff) and are separated by colons (:) rather than periods (.). An example IPv6 address is 2001:0db8:0000:0000:0000:ffff:0000:0001. To make it easier for humans to read and type, it can be shortened by removing the leading zeros at the beginning of each field and substituting two colons (::) for the longest consecutive zero fields. All fields must retain at least one digit. After shortening, the example address above is rendered as 2001:db8::ffff:0:1, which is much easier to type. As in IPv4, there are some addresses and ranges that are reserved for special uses:

  • ::1 is the local loopback address, used the same as 127.0.0.1 in IPv4.
  • The range 2001:db8:: to 2001:db8:ffff:ffff:ffff:ffff:ffff:ffff is reserved for documentation use, just like in the examples above.
  • fc00:: to fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff are addresses reserved for internal network use and are not routable on the internet.
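These shortening rules and reserved ranges can be verified with Python's standard ipaddress module (an illustrative check, not part of the course material):

```python
import ipaddress

# The module applies exactly the shortening rules described above:
# drop leading zeros in each field, then collapse the longest run of
# consecutive zero fields into a double colon (::).
full = "2001:0db8:0000:0000:0000:ffff:0000:0001"
addr = ipaddress.ip_address(full)
print(addr.compressed)   # 2001:db8::ffff:0:1
print(addr.exploded)     # back to the full 8-group form

# The reserved addresses/ranges listed above are recognized as well:
print(ipaddress.ip_address("::1").is_loopback)      # True, like 127.0.0.1 in IPv4
print(ipaddress.ip_address("fc00::1").is_private)   # True: not routable on the internet
```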

What is WiFi?

Wireless networking is a popular method of connecting corporate and home systems because of the ease of deployment and relatively low cost. It has made networking more versatile than ever before. Workstations and portable systems are no longer tied to a cable but can roam freely within the signal range of the deployed wireless access points. However, with this freedom comes additional vulnerabilities.

Wi-Fi range is generally wide enough for most homes or small offices, and range extenders may be placed strategically to extend the signal for larger campuses or homes. Over time the Wi-Fi standard has evolved, with each updated version faster than the last.  

In a LAN, threat actors need to enter the physical space or immediate vicinity of the physical media itself. For wired networks, this can be done by placing sniffer taps onto cables, plugging in USB devices, or using other tools that require physical access to the network. By contrast, wireless media intrusions can happen at a distance. 

Diagram in gray of network with router, firewall, switch, server, and highlighted in blue wireless access point connecting work stations, phone, laptops, and tablet

Security of the Network 

TCP/IP’s vulnerabilities are numerous. Improperly implemented TCP/IP stacks in various operating systems are vulnerable to various DoS/DDoS attacks, fragment attacks, oversized packet attacks, spoofing attacks, and man-in-the-middle attacks.

TCP/IP (as well as most protocols) is also subject to passive attacks via monitoring or sniffing. Network monitoring, or sniffing, is the act of monitoring traffic patterns to obtain information about a network. 

diagram of network with internet, router and firewall highlighted in blue


Ports and Protocols (Applications/Services)

There are physical ports that you connect wires to and logical ports that determine where the data/traffic goes.
 
Click on each tab to learn more.
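As a preview of the port material behind the tabs, the pairs below are common examples of insecure protocols and the secure counterparts that typically replace them (an illustrative selection, not an exhaustive list):

```python
# Illustrative mapping of well-known insecure logical ports to common
# secure counterparts. The pairs shown are typical examples only.
SECURE_COUNTERPARTS = {
    # (protocol, port): (secure protocol, port)
    ("HTTP",   80): ("HTTPS", 443),   # web traffic over TLS
    ("Telnet", 23): ("SSH",    22),   # encrypted remote shell
    ("FTP",    21): ("SFTP",   22),   # file transfer over SSH
    ("DNS",    53): ("DoT",   853),   # DNS over TLS
}

for (proto, port), (sproto, sport) in SECURE_COUNTERPARTS.items():
    print(f"{proto} (port {port}) -> {sproto} (port {sport})")
```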


Module 2: Understand Network (Cyber) Threats and Attacks


Domain D4.1.2, D4.2.2, D4.2.3

Module Objectives

  • L4.2.1 Identify types of network (cyber) threats.
  • L4.2.2 Discuss common tools used to identify and prevent threats.

Manny: It's not just cybersecurity experts who have to know about the different types of network and
cyber threats and attacks.
Tasha: You're right, Manny. Everyone from small businesses (like Java Sip) to the biggest corporations,
needs to know the impact of network and cyber-attacks. It seems like every day there is news of
ransomware or other cyber-attacks. These attacks are costing the world financially and they're
increasing every year.
Manny: Anyone who uses a smartphone or has an email or social media account has probably
encountered spoofing, phishing, and other nefarious attempts to defraud users or infect their devices.
Let's find out more.

Impact of Cyber Attacks

Chad Kliewer: I'll say greetings and welcome to the discussion on cyberattacks. I'm your host,
Chad Kliewer, holder of a CISSP and CCSP, and current (ISC)2 member. I'll be facilitating our
experience. And I'm extremely excited to welcome our special guest, Joe Sullivan, CISSP, and
also an (ISC)2 member. Joe's a former CISO in the banking and finance industry, who now
specializes in forensics, incident response and recovery. So, Joe, you ready to get started?
Joe Sullivan: I am looking forward to this. I'm excited.
Kliewer: All right. Anything else you'd like to add about your background? I didn't give you much
opportunity to do that.
Sullivan: Just a brief overview. I’ve been in information security for 2 years now in various
aspects as you've mentioned.
Kliewer: Okay, awesome. Thank you much. So, I'm going to dive right into some content here.
Because part of what we're trying to do is we're trying to look at how we prevent attacks, and
then once those attacks happen, how they really impact the business and how they impact the
companies. We all hear about these attacks constantly, but we never really look so much at
how they impact each individual business. So, we're going to start out just talking a little bit and
say, if we can't detect any future ongoing attack, how are we going to remediate that, and how
are we going to stop it? And the one point we want to make here is how important it is to make
sure that we're aggregating all that data using a Security Information and Event Management
system, or SIEM, S-I-E-M. And what are your thoughts on using a SIEM to produce actionable
intelligence, Joe?
Sullivan: Integrating a SIEM for actionable intelligence, I think you have to take a step back and
think about, when do we trigger incident response, typically? Over the course of my career,
incident response is usually triggered after something bad happens. They're on the network, or
we see an exploit, or we've been compromised, or there's a knock on the door that says, Hey,
your data's out there. If we have a SIEM or user behavior analytics, whatever the case may be
properly optimized and tuned, we can pick up on those indicators of compromise before the
bad things happen. And when I say indicators of compromise, I'm referring to things like
scanning, malicious email attachments, web application enumeration and things like that.
Attackers spend the majority of their time in the recon phase. If we can detect those recon
activities, that's actionable intelligence where we can block IPs, block tools and things like that
before they actually get on the network. Even once they get on a network, recon still takes
place. I get on a machine, what's the vendor? What software am I running on this machine?
What applications are installed? What's the network look like? And still, we're not to the point
where a breach is actually taking place yet. Again, if we're detecting an activity in our SIEM with
the appropriate logging, monitoring and alerting, we can trigger incident response well before
the actual breach takes place.
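Sullivan's point about turning recon indicators into actionable intelligence can be sketched very roughly in code; the event format and threshold below are hypothetical, not from any real SIEM:

```python
# Rough sketch of the idea above: aggregate events and flag reconnaissance
# indicators (here, one source probing many distinct ports) so incident
# response can trigger before a breach actually occurs.
from collections import defaultdict

def flag_port_scanners(events, threshold=10):
    """events: iterable of (source_ip, dest_port) pairs; return suspect IPs."""
    ports_seen = defaultdict(set)
    for src, port in events:
        ports_seen[src].add(port)
    # A single source touching many distinct ports is a classic scan signature.
    return {src for src, ports in ports_seen.items() if len(ports) >= threshold}

events = [("203.0.113.9", p) for p in range(20, 40)]      # scanner behavior
events += [("198.51.100.7", 443), ("198.51.100.7", 80)]   # normal client
print(flag_port_scanners(events))  # {'203.0.113.9'}
```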

Kliewer: So, what are your thoughts on the actionable intelligence and how we prevent threats?
Do you think most of the threats or most of the, well, we'll say incidents, are actually detected
by internal systems, or do you think they're mostly the result of receiving the indicators of
compromise from a third-party organization, such as a government entity or something like
that?
Sullivan: As far as detection goes, we have events; determining what's malicious and
what's just an event or a false positive is the challenge here. When you have lean running
security teams, who don't have the time to go in and tune and optimize this (but then again,
something is better than nothing) a well operationalized security program with the appropriate
headcount has the chance of detecting these and getting those alerts and indicators of
compromise and acting on those earlier; whereas, if you have a lean running program (a two- to
three-headcount security department that are wearing many different hats) it's a little bit more
challenging to tune and optimize that. It's in scenarios like that where it might be beneficial to
outsource that to a third-party SOC or something, and let them say, “Hey, we've detected this
going on in your network, it doesn't look like a false positive, you should go check this out.”
Kliewer: Awesome. So, I'm going to paraphrase a little bit and read between the lines and say
that I didn't hear one thing in there about, ‘You need to buy this software product to detect all
the incidents.’
Sullivan: You don't really need to buy a software product to detect all the incidents. You know,
if you look at the CIS Controls, the NIST CSF (the Cybersecurity Framework), or even NIST SP
800-53, if you implement those and get your logs to where you have some visibility into them
and are monitoring something, you can detect these. You don't really need a high-dollar SIEM or
something like that. Network segmentation, we'll look at that. Host-based firewalls do a lot of
good for limiting the impact of an incident.
Kliewer: Okay, awesome. So now I want to take that just a little bit further. We've talked about
the processes, the log retention; so do you think what we've talked about so far still holds true
with cloud-based software products, or even, I'm going to say, a cloud-based SIEM, like a lot of
them are?
Sullivan: The concept still holds true, right? We still want to aggregate the logs. The challenge in
the cloud is that the threat surface is a little bit different. I have all these different authentication
portals and command line tools that can be used in public cloud services. And your threat model is
things like permissions and IAM (identity and access management). If you don't have the
appropriate permissions set up, you don't know what a user can do. In some cases, with a
particular public cloud service I won't name, if you have a certain permission where you can
roll back permissions, even though you're limited now, you can actually roll back your own policy
and do something you had permission to do at an earlier date but don't anymore. It's those little
gotchas that you need to be aware of. And then there is provisioning cloud services: depending on
how you provision certain virtual machines, RDP and SSH are enabled by default facing the
internet, so you want to be aware of the context: did I provision that here, or from the
command line tool?
The logging, monitoring, and alerting, you can have a cloud-based SIEM third party, or a lot of
public cloud providers have their own tools. It's a little bit different approach, a little bit
different aggregating those logs and reading them, setting up the alerts, so there might be a
learning curve there. And then there are things like the instance metadata service (IMDS), which,
if you can get in contact with it, lets you get all the metadata on your VMs, your hosts, your disk
drives, your backups and things like that, and gives you a wealth of information.
And we're seeing older attacks like server-side request forgery coming back. In the Capital One
breach a while back in a public cloud service, we've seen that take place. And there's various
controls and mitigations they put in place to mitigate the IMDS attacks, and you need to be
aware of what those are and how you can prevent those from happening. So, it's a little bit
different, a little bit more comprehensive. It's not the same as your traditional on-prem
resources, so there's a learning curve going through there. It's a little bit more challenging at
first, but I think overall, it's the same approach, you just have a different way of implementing
it.
Kliewer: Awesome. So, thanks for answering that. Since you mentioned the recent Capital One
breach that involved the cloud service, can you kind of give our listeners an overview, we'll say
about a 15,000-foot view of that breach and what happened?
Sullivan: The Capital One breach was actually an insider threat. They had access to the system
and had worked with it before, and they used the instance metadata service: you hit the web
application, which causes a URL on the back end to get data, allocate resources, handle
authentication and things like that. Say you have data in an S3 bucket; you can actually hit that IMDS and
get that information back. That server-side request forgery attack let that person enumerate
those resources and get access to them and download them. So, they had to go back and
determine, “Well, how can we prevent this from happening?” And they implemented things like:
now you need a token to send to the IMDS to actually get that information back, or we're going to
limit the response from the IMDS to one hop, that way it doesn't go past the machine out to
the internet. So, an attacker can't actually get that.
Kliewer: Okay. Awesome. Thanks for covering that for us. I want to shift gears just a little bit,
and we're talking about an attack here that involved some cloud components, but not
necessarily in the cloud. And I wanted to talk just a little bit, because it was such a widespread
incident (I mean, it can be called a cyberattack; we'll call it an incident) with SolarWinds. It was
one that was very widespread and gained a lot of notoriety because it affected a lot of
US government agencies, and I'm guessing probably a lot of other government agencies as well.
And this was a very good example of a supply chain attack, where some malicious code or
malicious programs were embedded within the supply chain, within an update package. So,
would you like to kind of lead us through a little bit, Joe, just once again at a real high level,
of what steps that SolarWinds attack really took? I'm going to preface it by saying the reason it
has such a huge impact was because it went undetected for so long. It went undetected, I think
for at least, I'm going to say at least six to eight months that we know of, possibly quite longer.
But if you could give our listeners an overview of that SolarWinds attack and how they actually
utilized the cloud components.
Sullivan: Sure, no problem there. SolarWinds was a really, really clever attack. The initial
foothold, we're not sure. They gained access to the internal network. We don't know if it was a
spear phishing attack. There had also been rumor that a password was leaked as well. It could
have been someone had set up a site for a watering hole attack. However, they did it, once they
got access to the network, they focused on the build server where the actual code is compiled.
And instead of actually implementing their malicious code in the source code itself, they injected it
as the output of the build process; that way it got packaged in and signed with the
SolarWinds software. They took that approach because, one, it keeps them off the radar for
code scanning and code review. They're not going to see that code. And once they get signed,
it's trusted at that point. So, once they got pushed out to the update server, all these individual
companies who were running SolarWinds Orion downloaded that, it gets on their network, but
the attack didn't start or that malware didn't trigger for two weeks. And once it started
triggering, it communicated with cloud resources where they set up their C2 network with AWS,
GCP, Azure, GoDaddy and services like that and actually mimicked the Orion syntax. So, it
looked like regular Orion traffic going back and forth. And that gave them access to the
network. They could read email, obtain documents. They even got certificates where they could
impersonate users. And it wasn't detected for a long time. It was a really sophisticated attack.
They were very patient, and this was a really crafty attack.
Kliewer: Awesome. And just to point out there, because I want to point out in a little bit for our
listeners and our learners in our courses that we've talked about some of these different
components. I think we talked about C2, the command and control, which is what they're
actually using to actually go back and obtain that information out of the host networks once
they're compromised. And the fact that these command-and-control networks were
propagated or stored in not just one cloud network infrastructure, but they used multiple cloud
infrastructures and multiple cloud providers to do this, and all of that stuff helped them evade
detection basically. So, like I said, I wanted to point that out a little bit. And I can tell you as one
person who was part of an organization, who was named in that SolarWinds attack, and one of
the initial organizations that were listed as compromised. I'm going to back this up to our SIEM
conversation earlier and say that SIEM was absolutely priceless in showing us that, yes, we did
establish the initial communication with their command and control, but nothing happened
past that point. We can show beyond a shadow of a doubt that we did not exfiltrate data, that
there was no other data that went back and forth between our internal network and that
command-and-control service. So that's where that whole SIEM ties into it.
So, Joe, I wanted to talk about one other thing, which I know is one of those areas that's kind of
near and dear to your heart as a hacker kind of guy, not to use that in a negative connotation,
but I'd like to hear your thoughts on threat hunting versus pen testing, vulnerability scanning,
and malicious actors. I mean, how do you know the difference between somebody that's out
there doing threat hunting or vulnerability assessment across the internet versus somebody
who's a real malicious actor or a real threat?
Sullivan: Well, I think when you look at threat hunting, pen testing and vulnerability scanning, if
you're doing it internally, obviously you know this is happening. If you're a third-party
performing this for another organization, obviously you're doing it with permission so they're
aware of it; whereas if you see these activities taking place when you haven't given anyone
permission and they're not going on internally, you have bigger issues. And these are often used
interchangeably today. Threat hunting, in my mind, in my experience is I'm actually going to
look at my network and act like there's a potential attacker here, we've been breached and
we're going to treat it like that. We're going to look at our business-critical systems. We're
going to capture memory. We're going to do packet captures. We're looking for indicators of
compromise to see if we actually have a bad actor on the network. This is beneficial because
of your attack dwell time, right? You don't always detect the attacker immediately. Hopefully
you do, but usually there's four to six weeks or something like that where they're on the
network. This helps shorten that time period if you perform regular threat hunting. Whereas
pen testing, I want to know, can you actually get into my network? Is it possible to compromise
my software, my configurations, my people? Can you get access into the building? And that tells
you, like I say, people ask me, what do you do? Well, I hack networks and break into buildings
to keep people from hacking networks and breaking into buildings. If you have a good idea of
how this takes place, you can better shore up your defenses in those particular areas.
Vulnerability scanning is something every organization should be doing. I'm running regular
scans with whatever vulnerability scanner you like that fits into your particular context, that
identifies these vulnerabilities as they take place or as they get released and you can set up a
remediation plan to patch those.
Kliewer: Awesome. I think that is a great breakdown of those different pieces. So, I'm trying to
figure out here if we have any other questions. And I want to take just a couple minutes here
to roll back a little bit; it's not so much in a cloud context, but it still helps define
some of the rules and regulations we have in place today. But what I wanted to do, Joe, is I
want to back up and talk a little bit about the T.J. Maxx incident. Happened quite a few years
back, and I think it's probably used in a lot of textbooks. But there was an incident with T.J.
Maxx, where basically somebody was able to access their networks and use network sniffers, you
name it, to siphon off credit card numbers, flowing from their front-end systems to their
backend systems, and then turn around and sell those numbers on the dark web, you name it.
Does that about sum that up? Do you have a better summary of it?

Sullivan: Yeah, this one's going way back a ways, right? With the T.J. Maxx hack, if I remember right,
the initial foothold was an unsecured wireless network. Once they got
on that wireless network, there was no network segmentation, so they were able to move
freely. I think they got something like 94 million credit card numbers. It was a huge breach, but yes, that's
basically from a high level, what the T.J. Maxx attack was.
Kliewer: Awesome. And the reason I bring that up is because I wanted to talk about that for our
listeners a little bit, because everybody's also familiar with the PCI DSS or the Payment Card
Industry Data Security Standards. And ultimately, that was one of the incidents and one of the
cyberattacks that really led up to that PCI rule. And I want to be clear. It is a rule, not necessarily
a regulation or a law, it's something that's set forth by industry. I mean, what are the pieces
that PCI covers, Joe? I heard you mention several causes of that T.J. Maxx incident. Can you
help us connect the dots between that incident and PCI?
Sullivan: Sure. Just to kind of step back and kind of recap what you were saying about PCI, a lot
of times, it's misstated that this is a regulation or a law. It's actually a contractual obligation
between you and the credit card companies. And the credit card companies got together and
did all this because they wanted to avoid government regulation. So, they said, “Hey, we
actually police ourselves, we don't need you to get in our business here.” So, they came up with
PCI. The T.J. Maxx incident impacted PCI. They looked at what happened at T.J. Maxx, and they
said, “You know what? You really need to better secure your wireless networks, and they need to be
separate from your regular network; and your systems that actually hold PCI data, those have to
be segmented. They have to have network access control as well. And you need to use the
appropriate encryption to encrypt all this in transit and at rest.” And so, we came up with more
strict PCI requirements, and you get into the network segmentation. And you don't want to
apply PCI to all your resources, right, on the network (your systems, your servers, your devices),
because then everything has to be PCI compliant. The secret to becoming PCI compliant is
narrowing the scope, applying it just to those credit card related systems. There was something
else on that one too. My train of thought just crashed there. Oh, they also recommend using a
higher-level agnostic security or control framework, and then scoping down to your PCI systems.
So, then you're looking at something like the CIS Controls or the NIST Cybersecurity Framework as
well.
Kliewer: All right. And I think that's a great point to make there: regardless of what country
you are in or your geolocation, PCI pretty much applies worldwide, but there are other
frameworks and other tools you can use depending on your geographic location that can help
implement those same regulations and rules, and I think that's a great connection to make
there. And all right, I want to kind of start wrapping things up here just a little bit, Joe. Are there
any other real last minute or overarching things that you'd like to talk about on the attack
surface or what you'd like our listeners to know when it comes to the cyberattacks and what
happens out there? 

Sullivan: I think I'm going to sound like a broken record on this one, right? It still goes back to
doing those basic things like you see in the CIS Controls. Know where you're at with asset
inventory, know what assets you have, know what are business-critical assets, know where the
crown jewels are, segment those, appropriate logging, monitoring, alerting, patch
management, vulnerability scanning. In fact, it was June of last year, the White House actually
came out with a document that said, these are the things you should be doing to protect your
information security program: regular backups, penetration testing, vulnerability
management. These things still hold true. And that was very much a watershed event. I don't
remember a time where the White House actually came out and said, “Hey, this is what you
needed to do to secure your network.” Why did they do that? Because you see organizations
like SolarWinds getting government organizations breached, and you see the Colonial Pipeline,
which is supplying oil to the United States, and the meat packing processing plant, which also
got hit with ransomware at that time and provides food and meat to people in the US. It's that these
incidents, these cyber events and these ransomware attacks aren't just affecting individual
companies now; they're affecting people across the nation when you get to this level. So that
really changed the criticality of what you need to be doing to secure your network. And you
see, CISA came out with supply chain guidelines to protect your organization against those. I
guess what I'm getting at is: do the basics and determine what your context is. Do I need to
focus on supply chain? Do I need to focus on vulnerability scanning, penetration testing? Are
my backups in place? Take care of the basics and build on top of that.
Kliewer: Awesome, great advice, Joe. And I want to take just a moment here. To our listeners, I
hope you've enjoyed this discussion. I hope you found this useful, and I hope you found it
a helpful complement to the official training that you've been taking. And again, I want to offer many, many,
many thanks to our special guest, Joe Sullivan for volunteering his time to share his experience
with us.
Sullivan: Oh, good to be here, Chad, I enjoyed it. Good conversation.

Types of Threats

There are many types of cyber threats to organizations. Below are several of the most common types: 

A Viral Threat

Tasha: Before her shift starts, Gabriela attempts to upload a school assignment on her iPad, but
the device is not responding.

Gabriela: Ugh, why is nothing working? This stupid thing. I need to turn in this assignment.

Keith: What is it?

Gabriela: It just spins and spins.

Keith: Have you updated recently?

Gabriela: Yes.

Keith: Have you clicked on any new links?

Gabriela: Oh, no. That strange email from the other day! It said I won a gift certificate, but
when I clicked the link, it didn't go anywhere.

Keith: It's okay. Sounds like you have a virus though. But we can ask Susan for help. Have you
backed it up to the cloud?

Gabriela: I have.

Keith: Great, everything will be all right then.

Identify the Malware Threats

Identify Threats and Tools Used to Prevent Them

So far in this chapter, we have explored how a TCP/IP network operates, and we have seen some examples of how threat actors can exploit some of the inherent vulnerabilities. The remainder of this module will discuss the various ways these network threats can be detected and even prevented. 

While there is no single step you can take to protect against all attacks, there are some basic steps you can take that help to protect against many types of attacks.  

Here are some examples of steps that can be taken to protect networks.  

  • If a system doesn’t need a service or protocol, it should not be running. Attackers cannot exploit a vulnerability in a service or protocol that isn’t running on a system. 
  • Firewalls can prevent many different types of attacks. Network-based firewalls protect entire networks, and host-based firewalls protect individual systems. 

Identify Threats and Tools Used to Prevent Them Continued

Narrator: This table lists tools used to identify threats that can help to protect against many
types of attacks, like viruses and malware, denial-of-service attacks, spoofing, and on-path and
side-channel attacks. From monitoring activity on a single computer, like with a HIDS, to gathering log
data, like with SIEM, to filtering network traffic like with firewalls, these tools help to protect
entire networks and individual systems.
These tools, which we will cover more in depth, all help to identify potential threats, while anti-
malware, firewall and intrusion prevention system tools also have the added ability to prevent
threats.

Intrusion Detection System (IDS)

An intrusion occurs when an attacker is able to bypass or thwart security mechanisms and gain access to an organization’s resources. Intrusion detection is a specific form of monitoring that monitors recorded information and real-time events to detect abnormal activity indicating a potential incident or intrusion. An intrusion detection system (IDS) automates the inspection of logs and real-time system events to detect intrusion attempts and system failures. An IDS is intended as part of a defense-in-depth security plan. It will work with, and complement, other security mechanisms such as firewalls, but it does not replace them. 

IDSs can recognize attacks that come from external connections, such as an attack from the internet, and attacks that spread internally, such as a malicious worm. Once they detect a suspicious event, they respond by sending alerts or raising alarms. A primary goal of an IDS is to provide a means for a timely and accurate response to intrusions. 

Intrusion detection and prevention refer to capabilities that are part of isolating and protecting a more secure or more trusted domain or zone from one that is less trusted or less secure. These are natural functions to expect of a firewall, for example.  

IDS types are commonly classified as host-based and network-based. A host-based IDS (HIDS) monitors a single computer or host. A network-based IDS (NIDS) monitors a network by observing network traffic patterns. 

Host-based Intrusion Detection System (HIDS)

A HIDS monitors activity on a single computer, including process calls and information recorded in system, application, security and host-based firewall logs. It can often examine events in more detail than a NIDS can, and it can pinpoint specific files compromised in an attack. It can also track processes employed by the attacker. A benefit of HIDSs over NIDSs is that HIDSs can detect anomalies on the host system that NIDSs cannot detect. For example, a HIDS can detect infections where an intruder has infiltrated a system and is controlling it remotely. HIDSs are more costly to manage than NIDSs because they require administrative attention on each system, whereas NIDSs usually support centralized administration. A HIDS cannot detect network attacks on other systems.

Network Intrusion Detection System (NIDS)

A NIDS monitors and evaluates network activity to detect attacks or event anomalies. It cannot monitor the content of encrypted traffic but can monitor other packet details. A single NIDS can monitor a large network by using remote sensors to collect data at key network locations and send it to a central management console. These sensors can monitor traffic at routers, firewalls, network switches that support port mirroring, and other types of network taps. A NIDS has very little negative effect on overall network performance, and when it is deployed on a single-purpose system, it doesn’t adversely affect performance on any other computer. A NIDS is usually able to detect the initiation of an attack or ongoing attacks, but it can’t always provide information about the success of an attack. It won’t know if an attack affected specific systems, user accounts, files or applications.
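To make the idea concrete, here is a toy sketch (not a real IDS product) of the signature matching many NIDS sensors perform: each signature is a byte pattern tied to a known attack, and matching traffic raises an alert rather than being blocked. The patterns and alert messages below are purely illustrative:

```python
# Illustrative signature database: byte pattern -> alert message.
SIGNATURES = {
    b"/etc/passwd": "possible path-traversal attempt",
    b"' OR '1'='1": "possible SQL injection",
}

def inspect(payload: bytes) -> list:
    """Return an alert message for every signature found in the payload.
    An IDS only alerts; it does not drop the traffic."""
    return [alert for pattern, alert in SIGNATURES.items() if pattern in payload]
```

Production sensors such as Snort or Suricata use far richer rule languages, protocol decoding and anomaly heuristics, but the core alert-on-match loop is the same.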

Security Information and Event Management (SIEM)

Security management involves the use of tools that collect information about the IT environment from many disparate sources to better examine the overall security of the organization and streamline security efforts. These tools are generally known as security information and event management (or S-I-E-M, pronounced “SIM”) solutions. The general idea of a SIEM solution is to gather log data from various sources across the enterprise to better understand potential security concerns and apportion resources accordingly.

SIEM systems can be used along with other components (defense-in-depth) as part of an overall information security program.
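The core of what a SIEM does, pooling log data from disparate sources and correlating it, can be hinted at with a small sketch. Here, events from several hypothetical sources are combined and repeated failed logins from a single address are flagged; the field names and threshold are made up for illustration:

```python
from collections import Counter

def correlate_failed_logins(events, threshold=3):
    """Pool events from many log sources and flag source IPs whose
    failed-login count meets the threshold, a classic SIEM correlation rule."""
    fails = Counter(e["src"] for e in events if e.get("action") == "login_failed")
    return {src for src, count in fails.items() if count >= threshold}
```

A real SIEM adds normalization of many log formats, retention, dashboards and alert routing, but correlation rules of this shape are how raw logs become actionable security signals.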

Preventing Threats

While there is no single step you can take to protect against all threats, there are some basic steps you can take that help reduce the risk of many types of threats.

  • Keep systems and applications up to date. Vendors regularly release patches to correct bugs and security flaws, but these only help when they are applied. Patch management ensures that systems and applications are kept up to date with relevant patches. 
  • Remove or disable unneeded services and protocols. If a system doesn’t need a service or protocol, it should not be running. Attackers cannot exploit a vulnerability in a service or protocol that isn’t running on a system. As an extreme contrast, imagine a web server is running every available service and protocol. It is vulnerable to potential attacks on any of these services and protocols. 
  • Use intrusion detection and prevention systems. As discussed, intrusion detection and prevention systems observe activity, attempt to detect threats and provide alerts. They can often block or stop attacks.  
  • Use up-to-date anti-malware software. We have already covered the various types of malicious code such as viruses and worms. A primary countermeasure is anti-malware software.  
  • Use firewalls. Firewalls can prevent many different types of threats. Network-based firewalls protect entire networks, and host-based firewalls protect individual systems. This chapter included a section describing how firewalls can prevent attacks.
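The "remove or disable unneeded services" step above is often operationalized by auditing what is actually listening on a host against an approved baseline. A minimal sketch; the baseline entries are purely illustrative:

```python
# Illustrative baseline: the only listeners this host is approved to run.
APPROVED_LISTENERS = {("tcp", 22), ("tcp", 443)}

def unexpected_listeners(observed):
    """Return (protocol, port) pairs observed on the host but not approved;
    each is a candidate service to disable or remove."""
    return sorted(set(observed) - APPROVED_LISTENERS)
```

On a real host, the `observed` set might be gathered from a command such as `ss -tlnp` or from a port scan of the machine; anything the function returns is a service that widens the attack surface without a documented need.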

Antivirus

The use of antivirus products is strongly encouraged as a security best practice and is a requirement for compliance with the Payment Card Industry Data Security Standard (PCI DSS). There are several antivirus products available, and many can be deployed as part of an enterprise solution that integrates with several other security products.

Antivirus systems try to identify malware based on the signature of known malware or by detecting abnormal activity on a system. This identification is done with various types of scanners, pattern recognition and advanced machine learning algorithms.

Anti-malware now goes beyond just virus protection, as modern solutions try to provide a more holistic approach, detecting rootkits, ransomware and spyware. Many endpoint solutions also include software firewalls and IDS or IPS systems.
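The simplest form of the signature matching described above compares a file's cryptographic hash against a database of hashes of known malware. A toy sketch; the "known bad" entry is a stand-in, not a real malware digest:

```python
import hashlib

# Stand-in signature database: SHA-256 digests of known-bad files.
KNOWN_BAD = {hashlib.sha256(b"PRETEND-MALWARE-SAMPLE").hexdigest()}

def is_known_malware(file_bytes: bytes) -> bool:
    """Flag a file whose digest appears in the signature database."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD
```

Pure hash matching is trivially evaded by changing a single byte of the sample, which is why, as the text notes, real products layer on pattern scanners, heuristics and machine-learning detection of abnormal behavior.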

Scans

Here is an example scan from Zenmap showing open ports on a host.

code from a Zenmap scan showing discovery of open ports

Regular vulnerability and port scans are a good way to evaluate the effectiveness of security controls used within an organization. They may reveal areas where patches or security settings are insufficient, where new vulnerabilities have developed or become exposed, and where security policies are either ineffective or not being followed. Attackers can exploit any of these vulnerabilities.
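At its simplest, what a scanner like Zenmap (a graphical front end for Nmap) does is attempt a TCP connection to each port of interest and record which ones answer. A bare-bones sketch of that connect scan; only ever scan hosts you own or have explicit permission to test:

```python
import socket

def scan_ports(host, ports):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

Real scanners add SYN (half-open) scanning, service and version detection, and OS fingerprinting, but the open/closed verdict ultimately rests on probes like this one.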


Firewalls

In building construction or vehicle design, a firewall is a specially built physical barrier that prevents the spread of fire from one area of the structure to another or from one compartment of a vehicle to another. Early computer security engineers borrowed that name for the devices and services that isolate network segments from each other, as a security measure. As a result, firewalling refers to the process of designing, using or operating different processes in ways that isolate high-risk activities from lower-risk ones.

Firewalls enforce policies by filtering network traffic based on a set of rules. While a firewall should always be placed at internet gateways, other internal network considerations and conditions determine where a firewall would be employed, such as network zoning or segregation of different levels of sensitivity. Firewalls have rapidly evolved over time to provide enhanced security capabilities. This growth in capabilities can be seen in Figure 5.37, which contrasts an oversimplified view of traditional and next-generation firewalls. A next-generation firewall integrates a variety of threat management capabilities into a single framework, including proxy services, intrusion prevention services (IPS) and tight integration with the identity and access management (IAM) environment to ensure only authorized users are permitted to pass traffic across the infrastructure. While firewalls can manage traffic at Layers 2 (MAC addresses), 3 (IP ranges) and 7 (application programming interface (API) and application firewalls), the traditional implementation has been to control traffic at Layer 4.

diagram comparing components of traditional and next-generation firewalls
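Rule-based filtering of the kind described above can be sketched as a first-match-wins rule table with an implicit default-deny; the rules shown are illustrative, not a recommended policy:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    src: str      # CIDR block the packet's source must fall in
    dport: int    # destination port
    action: str   # "allow" or "deny"

RULES = [
    Rule("10.0.0.0/8", 22, "allow"),   # SSH only from the internal network
    Rule("0.0.0.0/0", 443, "allow"),   # HTTPS from anywhere
]

def filter_packet(src_ip, dport, rules=RULES, default="deny"):
    """Evaluate rules top-down, first match wins; fall through to default-deny."""
    for r in rules:
        if ip_address(src_ip) in ip_network(r.src) and dport == r.dport:
            return r.action
    return default
```

This corresponds to the traditional Layer 3/4 filtering the text mentions; next-generation firewalls add application-layer inspection and identity awareness on top of rule tables like this.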

Intrusion Prevention System (IPS)

An intrusion prevention system (IPS) is a special type of active IDS that automatically attempts to detect and block attacks before they reach target systems. A distinguishing difference between an IDS and an IPS is that the IPS is placed in line with the traffic. In other words, all traffic must pass through the IPS, and the IPS can choose what traffic to forward and what traffic to block after analyzing it. This allows the IPS to prevent an attack from reaching a target. Since IPSs are most effective at preventing network-based attacks, it is common to see the IPS function integrated into firewalls. Just like IDSs, there are network-based IPSs (NIPS) and host-based IPSs (HIPS).

Diagram of network with NIPS (network-based intrusion prevention system) in between the firewall and switch
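The defining in-line property means an IPS sits in the traffic path and can drop what an IDS could only flag. A toy sketch of that forwarding decision, reusing the idea of byte-pattern signatures (illustrative only):

```python
# Illustrative block list: byte patterns associated with known attacks.
BLOCK_SIGNATURES = (b"' OR '1'='1", b"/etc/passwd")

def ips_forward(payload: bytes):
    """Inline decision: return the payload to forward on, or None to drop it
    before it ever reaches the target system."""
    if any(sig in payload for sig in BLOCK_SIGNATURES):
        return None
    return payload
```

Contrast this with the IDS sketch, which only returns alerts: here the malicious payload never leaves the function, which is exactly the prevention capability the text describes.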










Module 3: Understand Network Security Infrastructure


Domain D4.3.1, D4.3.2

Module Objective

  • L4.3.1 Identify common data center terminology.
  • L4.3.2 Recognize common cloud service terminology.
  • L4.3.3 Identify secure network design terminology.

Manny: In this section, we are going to be exploring the concepts and terminology around data centers
and the cloud. Sounds exciting!
Tasha: It can be, Manny. This is where a lot of the future applications of cybersecurity will come from.
As threats evolve, so does the technology to improve data protection, wherever that data is stored and
however it's transmitted.

    On-Premises Data Centers

    When it comes to data centers, there are two primary options: organizations can outsource the data center or own the data center. If the data center is owned, it will likely be built on premises. A place for the data center, such as a building, is needed, along with power, HVAC, fire suppression and redundancy.


    image of data center with hotspots on power switch, fire extinguisher, ventilation equipment, and closets

For server rooms, appropriate fire detection/suppression must be considered based on the size of the room, typical human occupation, egress routes and the risk of damage to equipment. For example, water used for fire suppression can cause more harm to servers and other electronic components than the fire itself. Gas-based fire suppression systems are friendlier to electronics, but can be toxic to humans.

Deeper Dive of On-Premises Data Centers

Narrator: Now that we have looked at some of the primary components that must be
considered when building an on-premises data center, we should take a deeper dive into some
of the components.
First, we consider the air conditioning requirements of a data center. Servers and other
equipment generate a lot of heat which must be handled appropriately. This is not just to make
it comfortable when humans are present, but to ensure the equipment is kept within its
operating parameters. When equipment gets too hot, it can lead to quicker failure or a voided
warranty. Most equipment is programmed to automatically shut down when a certain
temperature threshold is met. This helps to protect the equipment, but a system that is shut
down is not available to the users. An abnormal system shutdown can also lead to the loss or
corruption of data.
Another consideration for the on-premises data center is the fire suppression systems. In the
United States, most commercial buildings are required to have sprinkler systems that are
activated in a fire. These sprinklers minimize the amount of damage caused to the building and
keep the fire from spreading to adjacent areas, but they can be detrimental to electronic
equipment, as water and electricity don’t mix. While most water-based fire suppression
systems don’t work like they do in the movies, where a fire in one part of the building turns on
the sprinklers for the entire building, another hazard is having water overhead in a data center.
Eventually, water pipes will fail and may leak on equipment. This risk can be reduced somewhat
by using a dry-pipe system that keeps the water out of the pipes over the data center. These
systems have a valve outside the data center that is only opened when a sensor indicates a fire
is present. Since water is not held in the pipes above the data center, the risk of leaks is
reduced.

Redundancy

The concept of redundancy is to design systems with duplicate components so that if a failure were to occur, there would be a backup. This can apply to the data center as well. Risk assessments pertaining to the data center should identify when multiple separate utility service entrances are necessary for redundant communication channels and/or mechanisms.  

If the organization requires full redundancy, devices should have two power supplies connected to diverse power sources. Those power sources would be backed up by batteries and generators. In a high-availability environment, even generators would be redundant and fed by different fuel types. 

diagram of a fully redundant data center


Example of Redundancy (Application of)


Narrator: In addition to keeping redundant backups of information, you also have a redundant
source of power, to provide backup power so you have an uninterrupted power supply, or UPS.
Transfer switches or transformers may also be involved. And in case the power is interrupted by
weather or blackouts, a backup generator is essential. Often there will be two generators
connected by two different transfer switches. These generators might be powered by diesel or
gasoline or another fuel such as propane, or even by solar panels. A hospital or essential
government agency might contract with more than one power company and be on two
different grids in case one goes out. This is what we mean by redundancy.
 

 

Memorandum of Understanding (MOU)/Memorandum of Agreement (MOA) 

Some organizations seeking to minimize downtime and enhance BC (Business Continuity) and DR (Disaster Recovery) capabilities will create agreements with other, similar organizations. They agree that if one of the parties experiences an emergency and cannot operate within their own facility, the other party will share its resources and let them operate within theirs in order to maintain critical functions. These agreements often even include competitors, because their facilities and resources meet the needs of their particular industry. 

For example, Hospital A and Hospital B are competitors in the same city. The hospitals create an agreement with each other: if something bad happens to Hospital A (a fire, flood, bomb threat, loss of power, etc.), that hospital can temporarily send personnel and systems to work inside Hospital B in order to stay in business during the interruption (and Hospital B can relocate to Hospital A, if Hospital B has a similar problem). The hospitals have decided that they are not going to compete based on safety and security—they are going to compete on service, price and customer loyalty. This way, they protect themselves and the healthcare industry as a whole.  

These agreements are called joint operating agreements (JOA) or memoranda of understanding (MOU) or memoranda of agreement (MOA). Sometimes these agreements are mandated by regulatory requirements, or they might just be part of the administrative safeguards instituted by an entity within the guidelines of its industry. 

The difference between an MOA or MOU and an SLA is that a memorandum of understanding is more directly related to what can be done with a system or the information. 

The service level agreement goes down to the granular level. For example, if I'm outsourcing the IT services, then I will need to have two full-time technicians readily available, at least from Monday through Friday from eight to five. With cloud computing, I need to have access to the information in my backup systems within 10 minutes. An SLA specifies the more intricate aspects of the services.  

We must be very cautious when outsourcing with cloud-based services, because we have to make sure that we understand exactly what we are agreeing to. If the SLA promises 100 percent accessibility to information, is the access directly to you at the moment, or is it access to their website or through their portal when they open on Monday? That's where you'll rely on your legal team, who can supervise and review the conditions carefully before you sign on the dotted line. 

 


 


Cloud

Cloud computing is usually associated with an internet-based set of computing resources, and typically sold as a service, provided by a cloud service provider (CSP). 

Cloud computing is very similar to the electrical or power grid. It is provisioned in a geographic location and is sourced using an electrical means that is not necessarily obvious to the consumer. But when you want electricity, it’s available to you via a common standard interface and you pay only for what you use. In these ways, cloud computing is very similar. It is a very scalable, elastic and easy-to-use “utility” for the provisioning and deployment of Information Technology (IT) services.  

There are various definitions of what cloud computing means according to the leading standards, including NIST. This NIST definition is commonly used around the globe, cited by professionals and others alike to clarify what the term “cloud” means:  

“a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (such as networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” NIST SP 800-145 

This image depicts cloud computing characteristics, service and deployment models, all of which will be covered in this section and by your instructor. 

image showing different types of clouds (public, private, hybrid, community), resources as services (software, platform, infrastructure), and resource pooling (broad network access, rapid elasticity, measured service, on-demand self-service)

Cloud Redundancy

 Narrator: Many organizations have moved from hard-wired server rooms to operations that are
run by cloud-based facilities, because it provides both security and flexibility. Cloud service
providers have different availability zones, so that if one goes down, activities can shift to
another. You don’t have to maintain a whole data center with all the redundancy that entails;
the cloud service provider does that for you.
There are several ways to contract with a cloud service provider. You can set up the billing so
that it depends on the data used, just like your mobile phone. And you have resource pooling,
meaning you can share in the resources of other colleagues or similar types of industries to
provide data for artificial intelligence or analytics.

Cloud Characteristics

Cloud-based assets include any resources that an organization accesses using cloud computing. Cloud computing refers to on-demand access to computing resources available from almost anywhere, and cloud computing resources are highly available and easily scalable. Organizations typically lease cloud-based resources from outside the organization. Cloud computing has many benefits for organizations, which include but are not limited to: 

  • Usage is metered and priced according to units (or instances) consumed. This can also be billed back to specific departments or functions.
  • Reduced cost of ownership. There is no need to buy any assets for everyday use, no loss of asset value over time and a reduction of other related costs of maintenance and support.
  • Reduced energy and cooling costs, along with “green IT” environment effect with optimum use of IT resources and systems.
  • Allows an enterprise to scale up new software or data-based services/solutions through cloud systems quickly and without having to install massive hardware locally.
image from previous page with resource pooling section highlighted

Service Models

Some cloud-based services only provide data storage and access. When storing data in the cloud, organizations must ensure that security controls are in place to prevent unauthorized access to the data. 

There are varying levels of responsibility for assets depending on the service model. This includes maintaining the assets, ensuring they remain functional, and keeping the systems and applications up to date with current patches. In some cases, the cloud service provider is responsible for these steps. In other cases, the consumer is responsible for these steps. 

Types of cloud computing service models include Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).


Software as a Service (SaaS)

A cloud provides access to software applications such as email or office productivity tools. SaaS is a distributed model where software applications are hosted by a vendor or cloud service provider and made available to customers over network resources. SaaS is a widely used and adopted form of cloud computing, with users most often needing an internet connection and access credentials to have full use of the cloud service, application and data. SaaS has many benefits for organizations, which include but are not limited to:

  • Ease of use and limited/minimal administration.
  • Automatic updates and patch management. The user will always be running the latest version and most up-to-date deployment of the software release, as well as any relevant security updates, with no manual patching required.
  • Standardization and compatibility. All users will have the same version of the software release.

Platform as a Service (PaaS)

A cloud provides an environment in which consumers can develop, test and deploy their own applications without building or maintaining the underlying infrastructure. The cloud service provider manages the hardware, operating systems and supporting software, while the consumer manages the applications and data deployed on the platform. PaaS has benefits for organizations, which include but are not limited to:

  • Reduced administration of the underlying operating systems and middleware.
  • The ability to scale the platform as application demand changes.

Infrastructure as a Service (IaaS)

A cloud provides network access to traditional computing resources such as processing power and storage. IaaS models provide basic computing resources to consumers. This includes servers, storage, and in some cases, networking resources. Consumers install operating systems and applications and perform all required maintenance on the operating systems and applications. Although the consumer has use of the related equipment, the cloud service provider retains ownership and is ultimately responsible for hosting, running and maintenance of the hardware. IaaS is also referred to as hardware as a service by some customers and providers. IaaS has a number of benefits for organizations, which include but are not limited to:

  • The ability to scale infrastructure services up and down based on actual usage. This is particularly useful and beneficial where there are significant spikes and dips within the usage curve.
  • Retaining system control at the operating system level.
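As a rough illustration of how responsibility shifts between these models, the common shared-responsibility breakdown can be modeled as a lookup table. This is a simplified sketch, not part of the official course material; the layer names and the exact split are illustrative assumptions.

```python
# Illustrative sketch of the IaaS / PaaS / SaaS shared-responsibility split.
# Layer names and the exact division are simplified assumptions.
RESPONSIBILITY = {
    # IaaS: the consumer manages everything above the hardware
    "IaaS": {"data": "consumer", "application": "consumer",
             "runtime": "consumer", "operating_system": "consumer",
             "hardware": "provider"},
    # PaaS: the provider manages the platform; the consumer manages code and data
    "PaaS": {"data": "consumer", "application": "consumer",
             "runtime": "provider", "operating_system": "provider",
             "hardware": "provider"},
    # SaaS: the provider manages the full stack; the consumer manages only its data
    "SaaS": {"data": "consumer", "application": "provider",
             "runtime": "provider", "operating_system": "provider",
             "hardware": "provider"},
}

def who_manages(model: str, layer: str) -> str:
    """Return 'consumer' or 'provider' for a given service model and layer."""
    return RESPONSIBILITY[model][layer]
```

For example, `who_manages("IaaS", "operating_system")` returns `"consumer"`, reflecting that IaaS consumers install and patch their own operating systems, while the same layer under PaaS or SaaS is the provider's job.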

Deployment Models

There are four cloud deployment models. The cloud deployment model also affects the breakdown of responsibilities of the cloud-based assets. The four cloud models available are public, private, hybrid and community.


Public

Public clouds are what we commonly refer to as the cloud for the public user. It is very easy to get access to a public cloud. There is no real mechanism, other than applying for and paying for the cloud service. It is open to the public and is, therefore, a shared resource that many people will be able to use as part of a resource pool. A public cloud deployment model includes assets available for any consumers to rent or lease and is hosted by an external cloud service provider (CSP). Service level agreements can be effective at ensuring the CSP provides the cloud-based services at a level acceptable to the organization.

Private

Private clouds begin with the same technical concept as public clouds, except that instead of being shared with the public, they are generally developed and deployed for a private organization that builds its own cloud. Organizations can create and host private clouds using their own resources. Therefore, this deployment model includes cloud-based assets for a single organization. As such, the organization is responsible for all maintenance. However, an organization can also rent resources from a third party and split maintenance requirements based on the service model (SaaS, PaaS or IaaS). Private clouds provide organizations and their departments private access to the computing, storage, networking and software assets that are available in the private cloud.

Hybrid

Hybrid clouds combine both public and private deployment models. With a hybrid cloud, an organization keeps some resources in its own private cloud while using public cloud services for others, and data and applications can move between the two as needs change.

Community

Community clouds can be either public or private. What makes them unique is that they are generally developed for a particular community. An example could be a public community cloud focused primarily on organic food, or maybe a community cloud focused specifically on financial services. The idea behind the community cloud is that people of like minds or similar interests can get together, share IT capabilities and services, and use them in a way that is beneficial for the particular interests that they share.

Managed Service Provider (MSP)

A managed service provider (MSP) is a company that manages information technology assets for another company. Small- and medium-sized businesses commonly outsource part or all of their information technology functions to an MSP to manage day-to-day operations or to provide expertise in areas the company does not have. Organizations may also use an MSP to provide network and security monitoring and patching services. Today, many MSPs offer cloud-based services augmenting SaaS solutions with active incident investigation and response activities. One such example is a managed detection and response (MDR) service, where a vendor monitors firewall and other security tools to provide expertise in triaging events. 

Some other common MSP implementations are: 

  • Augment in-house staff for projects
  • Utilize expertise for implementation of a product or service
  • Provide payroll services
  • Provide Help Desk service management
  • Monitor and respond to security incidents
  • Manage all in-house IT infrastructure 

Service-Level Agreement (SLA)

The cloud computing service-level agreement (cloud SLA) is an agreement between a cloud service provider and a cloud service customer based on a taxonomy of cloud computing-specific terms to set the quality of the cloud services delivered. It characterizes quality of the cloud services delivered in terms of a set of measurable properties specific to cloud computing (business and technical) and a given set of cloud computing roles (cloud service customer, cloud service provider, and related sub-roles).

Think of a rule book and legal contract—that combination is what you have in a service-level agreement (SLA). Let us not underestimate or downplay the importance of this document/agreement. In it, the minimum level of service, availability, security, controls, processes, communications, support and many other crucial business elements are stated and agreed to by both parties.  

The purpose of an SLA is to document specific parameters, minimum service levels and remedies for any failure to meet the specified requirements. It should also affirm data ownership and specify data return and destruction details. Other important SLA points to consider include the following:

  • Cloud system infrastructure details and security standards
  • Customer right to audit legal and regulatory compliance by the CSP         
  • Rights and costs associated with continuing and discontinuing service use
  • Service availability
  • Service performance
  • Data security and privacy
  • Disaster recovery processes
  • Data location
  • Data access
  • Data portability
  • Problem identification and resolution expectations
  • Change management processes
  • Dispute mediation processes
  • Exit strategy 

Defense in Depth

Defense in depth uses a layered approach when designing the security posture of an organization. Think about a castle that holds the crown jewels. The jewels will be placed in a vaulted chamber in a central location guarded by security guards. The castle is built around the vault with additional layers of security—soldiers, walls, a moat. The same approach is true when designing the logical security of a facility or system. Using layers of security will deter many attackers and encourage them to focus on other, easier targets. 

Defense in depth provides a starting point for considering all types of controls—administrative, technological and physical—that empower insiders and operators to work together to protect their organization and its systems. 

Here are some examples that further explain the concept of defense in depth: 

  • Data: Controls that protect the actual data with technologies such as encryption, data leak prevention, identity and access management and data controls.
  • Application: Controls that protect the application itself with technologies such as data leak prevention, application firewalls and database monitors.
  • Host: Every control that is placed at the endpoint level, such as antivirus, endpoint firewall, configuration and patch management.
  • Internal network: Controls that are in place to protect uncontrolled data flow and user access across the organizational network. Relevant technologies include intrusion detection systems, intrusion prevention systems, internal firewalls and network access controls.
  • Perimeter: Controls that protect against unauthorized access to the network. This level includes the use of technologies such as gateway firewalls, honeypots, malware analysis and secure demilitarized zones (DMZs).
  • Physical: Controls that provide a physical barrier, such as locks, walls or access control.
  • Policies, procedures and awareness: Administrative controls that reduce insider threats (intentional and unintentional) and identify risks as soon as they appear. 
diagram of multiple layers of control involved in defense in depth
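The layered controls listed above can be sketched as a chain of independent checks, where a request succeeds only if every single layer allows it. All of the check names and request attributes below are hypothetical simplifications, not an actual security API.

```python
# Hypothetical sketch: defense in depth as a chain of independent checks.
# Any single failing layer is enough to stop the request.

def physical_ok(req):   return req.get("badge_valid", False)           # locks, walls, badges
def perimeter_ok(req):  return req.get("src_ip") != "203.0.113.9"      # gateway firewall blocklist
def network_ok(req):    return req.get("vlan") == "corp"               # internal network controls
def host_ok(req):       return req.get("av_up_to_date", False)         # endpoint protection
def app_ok(req):        return req.get("authenticated", False)         # application firewall / auth
def data_ok(req):       return req.get("authorized_for_data", False)   # encryption / access mgmt

LAYERS = [physical_ok, perimeter_ok, network_ok, host_ok, app_ok, data_ok]

def allowed(req: dict) -> bool:
    """A request is allowed only if it passes every defensive layer."""
    return all(check(req) for check in LAYERS)
```

The design point is that the layers are independent: an attacker who defeats the perimeter firewall still faces the host, application and data controls.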

 

Zero Trust

Zero trust networks are often microsegmented networks, with firewalls at nearly every connecting point. Zero trust encapsulates information assets, the services that apply to them and their security properties. This concept recognizes that once inside a trust-but-verify environment, a user has perhaps unlimited capabilities to roam around, identify assets and systems and potentially find exploitable vulnerabilities. Placing a greater number of firewalls or other security boundary control devices throughout the network increases the number of opportunities to detect a troublemaker before harm is done. Many enterprise architectures are pushing this to the extreme of microsegmenting their internal networks, which enforces frequent re-authentication of a user ID, as depicted in this image.  

Consider a rock music concert. With traditional perimeter controls, such as firewalls, you would show your ticket at the gate and have free access to the venue, including backstage where the real rock stars are. In a zero-trust environment, additional checkpoints are added. Your identity (ticket) is validated to access the floor-level seats, and again to access the backstage area. Your credentials must be valid at all three levels to meet the stars of the show.  

Zero trust is an evolving design approach which recognizes that even the most robust access control systems have their weaknesses. It adds defenses at the user, asset and data level, rather than relying on perimeter defense. In the extreme, it insists that every process or action a user attempts to take must be authenticated and authorized; the window of trust becomes vanishingly small.  

While microsegmentation adds internal perimeters, zero trust places the focus on the assets, or data, rather than the perimeter. Zero trust builds more effective gates to protect the assets directly rather than building additional or higher walls. 
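The concert analogy can be sketched as a series of checkpoint validations, where the credential must be valid at every level rather than only at the gate. The checkpoint names and ticket structure are illustrative, not from the course.

```python
# Hypothetical sketch of zero trust vs. perimeter-only access:
# the ticket is re-validated at every checkpoint, not just the gate.

CHECKPOINTS = ["venue_gate", "floor_seats", "backstage"]

def valid_at(ticket: dict, checkpoint: str) -> bool:
    """Check whether the credential explicitly grants this checkpoint."""
    return checkpoint in ticket.get("grants", set())

def reach_backstage(ticket: dict) -> bool:
    """Zero trust: every checkpoint must independently validate the credential."""
    return all(valid_at(ticket, cp) for cp in CHECKPOINTS)
```

Under a perimeter-only model, passing `venue_gate` would be enough; here, a ticket granted only the gate never reaches backstage.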

diagram of a zero-trust network

 

Network Access Control (NAC)

An organization’s network is perhaps one of its most critical assets. As such, it is vital that we both know and control access to it, both from insiders (e.g., employees, contractors) and outsiders (e.g., customers, corporate partners, vendors). We need to be able to see who and what is attempting to make a network connection.

At one time, network access was limited to internal devices. Gradually, that was extended to remote connections, although initially those were the exceptions rather than the norm. This started to change with the concepts of bring your own device (BYOD) and Internet of Things (IoT).

Considering just IoT for a moment, it is important to understand the range of devices that might be found within an organization. They range from heating, ventilation and air conditioning (HVAC) systems that monitor the ambient temperature and adjust heating or cooling levels automatically, and air monitoring systems, through security systems, sensors and cameras, right down to vending and coffee machines. Look around your own environment and you will quickly see the scale of their use.

Having identified the need for a NAC solution, we need to identify what capabilities a solution may provide. As we know, everything begins with a policy. The organization’s access control policies and associated security policies should be enforced via the NAC device(s). Remember, of course, that an access control device only enforces a policy and doesn’t create one.

The NAC device will provide the network visibility needed for access security and may later be used for incident response. Aside from identifying connections, it should also be able to provide isolation for noncompliant devices within a quarantined network and provide a mechanism to “fix” the noncompliant elements, such as turning on endpoint protection. In short, the goal is to ensure that all devices wishing to join the network do so only when they comply with the requirements laid out in the organization policies. This visibility will encompass internal users as well as any temporary users such as guests or contractors, etc., and any devices they may bring with them into the organization.

Let’s consider some possible use cases for NAC deployment: 

  • Medical devices
  • IoT devices
  • BYOD/mobile devices (laptops, tablets, smartphones)
  • Guest users and contractors

As we have established, it is critically important that all mobile devices, regardless of their owner, go through an onboarding process, ideally each time a network connection is made, and that the device is identified and interrogated to ensure the organization’s policies are being met. 
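The onboarding decision described above can be sketched as a simple policy check: a connecting device is interrogated, compared against the organization's policy, and either admitted or quarantined for remediation. The policy keys and network names here are hypothetical.

```python
# Hypothetical NAC onboarding sketch: a connecting device is checked against
# policy and either admitted to the corporate network or quarantined.

POLICY = {  # requirements a device must meet (illustrative)
    "antivirus_enabled": True,
    "os_patched": True,
    "disk_encrypted": True,
}

def onboard(device: dict) -> str:
    """Return the network a device is placed on: 'corporate' or 'quarantine'."""
    compliant = all(device.get(key) == required for key, required in POLICY.items())
    return "corporate" if compliant else "quarantine"
```

A noncompliant device lands in the quarantine network, where the missing element (for example, endpoint protection) can be fixed before the device is re-checked and admitted.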

diagram of a network of workstations with firewalls and NAC

 

NAC Deeper Dive

Narrator: At its simplest form, Network Access Control, or NAC, is a way to prevent unwanted
devices from connecting to a network. Some NAC systems allow for the installation of required
software on the end user’s device to enforce device compliance to policy prior to connecting.
A high-level example of a NAC system is hotel internet access. Typically, a user connecting to
the hotel network is required to acknowledge the acceptable use policy before being allowed to
access the internet. After the user clicks the acknowledge button, the device is connected to
the network that enables internet access. Some hotels add an additional layer requiring the
guest to enter a special password or a room number and guest name before access is granted.
This prevents abuse by someone who is not a hotel guest and may even help to track network
abuse to a particular user.
A slightly more complex scenario is a business that separates employee BYOD devices from
corporate-owned devices on the network. If the BYOD device is pre-approved and allowed to
connect to the corporate network, the NAC system can validate the device using a hardware
address or installed software, and even check to make sure the antivirus software and
operating system software are up to date before connecting it to the network. Alternatively, if
it is a personal device not allowed to connect to the corporate network, it can be redirected to
the guest network for internet access without access to internal corporate resources.

 

Network Segmentation (Demilitarized Zone (DMZ))

Network segmentation is also an effective way to achieve defense in depth for distributed or multi-tiered applications. The use of a demilitarized zone (DMZ), for example, is a common practice in security architecture. With a DMZ, host systems that are accessible through the firewall are physically separated from the internal network by means of secured switches or by using an additional firewall to control traffic between the web server and the internal network. Application DMZs (or semi-trusted networks) are frequently used today to limit access to application servers to those networks or systems that have a legitimate need to connect.
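A minimal sketch of this two-firewall DMZ pattern follows, with a default-deny posture. All host names, rule entries and port numbers are hypothetical: the outer firewall admits only web traffic to the DMZ host, and the inner firewall admits only the DMZ web server to the database, never the internet directly.

```python
# Illustrative two-firewall DMZ policy; first matching rule wins, default deny.
# Host names, ports and rule order are hypothetical.

OUTER_RULES = [  # internet -> DMZ: (src, dst, port, action)
    ("internet", "dmz_web", 443, "allow"),
    ("internet", "any", None, "deny"),
]
INNER_RULES = [  # DMZ -> internal network
    ("dmz_web", "db_server", 5432, "allow"),
    ("any", "internal", None, "deny"),
]

INTERNAL = {"db_server"}  # hosts behind the inner firewall

def _dst_matches(rule_dst, dst):
    return rule_dst in (dst, "any") or (rule_dst == "internal" and dst in INTERNAL)

def evaluate(rules, src, dst, port):
    """Walk the rule list in order; 'any'/None act as wildcards; default deny."""
    for r_src, r_dst, r_port, action in rules:
        if r_src in (src, "any") and _dst_matches(r_dst, dst) and r_port in (port, None):
            return action
    return "deny"
```

The key property is that no single rule set exposes the database to the internet: outside traffic must first be allowed into the DMZ, and only the DMZ web server may then cross the inner firewall.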

diagram of demilitarized zone isolated from organization's workstations

 

DMZ Deeper Dive

Narrator: A web front end server might be in the DMZ, but it might retrieve data from a
database server that is on the other side of the firewall.
For example, you may have a network where you manage your client’s personal information,
and even if the data is encrypted or obfuscated by cryptography, you need to make sure the
network is completely segregated from the rest of the network with some secure switches that
only an authorized individual has access to. Only authorized personnel can control the firewall
settings and control the traffic between the web server and the internal network. For example,
in a hospital or a doctor’s office, you would have a segregated network for the patient
information and billing, and on the other side would be the electronic medical records. If they
are using a web-based application for medical record services, they would have a demilitarized
zone or segmented areas. And perhaps even behind the firewall, they have their own specified
server to protect the critical information and keep it segregated.
It is worth noting at this point that while this course will not explore the specifics, some
networks use a web application firewall (WAF) rather than a DMZ network. The WAF has an
internal and an external connection like a traditional firewall, with the external traffic being
filtered by the traditional or next generation firewall first. It monitors all traffic, encrypted or
not, from the outside for malicious behavior before passing commands to a web server that
may be internal to the network.

Segmentation for Embedded Systems and IoT

An embedded system is a computer implemented as part of a larger system. The embedded system is typically designed around a limited set of specific functions in relation to the larger product of which it is a component. Examples of embedded systems include network-attached printers, smart TVs, HVAC controls, smart appliances, smart thermostats and medical devices. 

Network-enabled devices are any type of portable or nonportable device that has native network capabilities. This generally assumes the network in question is a wireless type of network, typically provided by a mobile telecommunications company. Network-enabled devices include smartphones, mobile phones, tablets, smart TVs or streaming media players (such as a Roku Player, Amazon Fire TV, or Google Android TV/Chromecast), network-attached printers, game systems, and much more. 

The Internet of Things (IoT) is the collection of devices that can communicate over the internet with one another or with a control console in order to affect and monitor the real world. IoT devices might be labeled as smart devices or smart-home equipment. Many of the ideas of industrial environmental control found in office buildings are finding their way into more consumer-available solutions for small offices or personal homes.  

Embedded systems and network-enabled devices that communicate with the internet are considered IoT devices and need special attention to ensure that communication is not used in a malicious manner. Because an embedded system is often in control of a mechanism in the physical world, a security breach could cause harm to people and property. Since many of these devices have multiple access routes, such as ethernet, wireless, Bluetooth, etc., special care should be taken to isolate them from other devices on the network. You can impose logical network segmentation with switches using VLANs, or through other traffic-control means, including MAC addresses, IP addresses, physical ports, protocols, or application filtering, routing, and access control management. Network segmentation can be used to isolate IoT environments. 
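The type-based isolation described above can be sketched as a mapping of device types to segments, plus an explicit list of which cross-segment flows are permitted. The VLAN names, device types and allowed flows are all illustrative assumptions.

```python
# Hypothetical sketch: placing devices into isolated segments by type, so IoT
# gear never shares a broadcast domain with corporate workstations.

SEGMENT_FOR = {
    "workstation": "VLAN10_corp",
    "voip_phone":  "VLAN20_voice",
    "camera":      "VLAN30_iot",
    "thermostat":  "VLAN30_iot",
}

# Permitted one-way cross-segment flows (src segment -> dst segment).
ALLOWED_FLOWS = {("VLAN10_corp", "VLAN30_iot")}  # e.g., admins managing IoT gear

def segment(device_type: str) -> str:
    """Unknown device types land in a quarantine segment (default deny)."""
    return SEGMENT_FOR.get(device_type, "VLAN99_quarantine")

def flow_allowed(src_type: str, dst_type: str) -> bool:
    s, d = segment(src_type), segment(dst_type)
    return s == d or (s, d) in ALLOWED_FLOWS
```

Note the asymmetry: a workstation may reach a camera for management, but a compromised camera cannot initiate a connection back into the corporate segment.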

diagram of networked devices blocked from outside by firewalls

Segmentation for Embedded Systems and IoT Deeper Dive

Narrator: The characteristics that make embedded systems operate efficiently are also a
security risk. Embedded systems are often used to control something physical, such as a valve
for water, steam, or even oil. These devices have a limited instruction set and are often hard-
coded or permanently written to a memory chip. For ease of operating the mechanical parts,
the embedded system is often connected to a corporate network and may operate using
the TCP/IP protocol (yes, the same protocol that runs the internet). Therefore, it is
feasible for anyone anywhere on the internet to control the opening and closing of a valve
when the networks are fully connected. This is the primary reason for segmentation of these
systems on a network. If these are segmented properly, a compromised corporate network will
not be able to access the physical controls on the embedded systems.
The other side of the embedded systems, which also applies to IoT devices, is the general lack
of system updates when a new vulnerability is found. In the case of most embedded systems
with the programming directly on the chips, it would require physical replacement of the chip
to patch the vulnerability. For many systems, it may not be cost-effective to have someone visit
each one to replace a chip, or manually connect to the chip to re-program it.
We buy all these internet connected things because of the convenience. Cameras, light bulbs,
speakers, refrigerators, etc. all bring some sort of convenience to our lives, but they also
introduce risk. While the reputable mainstream brands will likely provide updates to their
devices when a new vulnerability is discovered, many of the smaller companies simply don’t
plan to do that as they seek to control the costs of a device. These devices, when connected to
a corporate network, can be an easy internet-connected doorway for a cyber criminal to access
a corporate network. If these devices are properly segmented, or separated, on the network
from corporate servers and other corporate networking, a compromise on an IoT device or a
compromised embedded system will not be able to access those corporate data and systems.

Microsegmentation

The toolsets of current adversaries are polymorphic in nature and allow threats to bypass static security controls. Modern cyberattacks take advantage of traditional security models to move easily between systems within a data center. Microsegmentation aids in protecting against these threats. A fundamental design requirement of microsegmentation is to understand the protection requirements for traffic within the data center as well as for traffic flows to and from the internet. 

When organizations avoid infrastructure-centric design paradigms, they are more likely to become more efficient at service delivery in the data center and more adept at detecting and preventing advanced persistent threats. 

Microsegmentation Deeper Dive

 Narrator: Some key points about microsegmentation:
Microsegmentation allows for extremely granular restrictions within the IT environment, to the
point where rules can be applied to individual machines and/or users, and these rules can be as
detailed and complex as desired. For instance, we can limit which IP addresses can
communicate to a given machine, at which time of day, with which credentials, and which
services those connections can utilize.
These are logical rules, not physical rules, and do not require additional hardware or manual
interaction with the device (that is, the administrator can apply the rules to various machines
without having to physically touch each device or the cables connecting it to the networked
environment).
This is the ultimate end state of the defense-in-depth philosophy; no single point of access
within the IT environment can lead to broader compromise.
This is crucial in shared environments, such as the cloud, where more than one customer’s data
and functionality might reside on the same device(s), and where third-party personnel
(administrators/technicians who work for the cloud provider, not the customer) might have
physical access to the devices.
Microsegmentation allows the organization to limit which business
functions/units/offices/departments can communicate with others, in order to enforce the
concept of least privilege. For instance, the Human Resources office probably has employee
data that no other business unit should have access to, such as employee home address, salary,
medical records, etc. Microsegmentation, like VLANs, can make HR its own distinct IT enclave,
so that sensitive data is not available to other business entities, thus reducing the risk of
exposure.
In modern environments, microsegmentation is available because of virtualization and
software-defined networking (SDN) technologies. In the cloud, the tools for applying this
strategy are often called “virtual private clouds (VPC)” or “security groups.”
Even in your home, microsegmentation can be used to separate computers from smart TVs, air
conditioning, and smart appliances which can be connected and can have vulnerabilities.
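The granular rules the narrator describes (source IP, user, destination service and time of day) can be sketched as a default-deny rule table evaluated per connection. All addresses, account names, services and hours below are hypothetical.

```python
# Hypothetical microsegmentation rule table: a connection is permitted only if
# source IP, user, destination service AND time of day all match an explicit
# rule. Anything unmatched is denied by default.

RULES = [
    # (src_ip, user, service, allowed_hours) -- illustrative values
    ("10.0.5.20", "hr_admin",   "payroll_db", range(8, 18)),  # business hours only
    ("10.0.7.31", "backup_svc", "file_share", range(0, 24)),  # any time of day
]

def permitted(src_ip: str, user: str, service: str, hour: int) -> bool:
    """Every attribute of the connection must match a single explicit rule."""
    return any(
        src_ip == r_ip and user == r_user and service == r_svc and hour in r_hours
        for r_ip, r_user, r_svc, r_hours in RULES
    )
```

Because the rules are logical rather than physical, an administrator can tighten or relax them per machine and per user without touching any cabling, which is what makes this degree of granularity practical at scale.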

 

Virtual Local Area Network (VLAN)

Virtual local area networks (VLANs) allow network administrators to use switches to create software-based LAN segments, which can segregate or consolidate traffic across multiple switch ports. Devices that share a VLAN communicate through switches as if they were on the same Layer 2 network. This image shows different VLANs — red, green and blue — connecting separate sets of ports together, while sharing the same network segment (consisting of the two switches and their connection). Since VLANs act as discrete networks, communications between VLANs must be enabled. Broadcast traffic is limited to the VLAN, reducing congestion and reducing the effectiveness of some attacks. Administration of the environment is simplified, as the VLANs can be reconfigured when individuals change their physical location or need access to different services. VLANs can be configured based on switch port, IP subnet, MAC address and protocols.

VLANs do not guarantee a network’s security. At first glance, it may seem that traffic cannot be intercepted because communication within a VLAN is restricted to member devices. However, there are attacks that allow a malicious user to see traffic from other VLANs (so-called VLAN hopping). The VLAN technology is only one tool that can improve the overall security of the network environment.
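The broadcast-domain behavior described above can be sketched as a port-to-VLAN mapping: a broadcast from one port reaches only the other ports that share its VLAN tag, even though all ports sit on the same physical switch. Port numbers follow the red/green/blue example and are illustrative.

```python
# Illustrative sketch: VLAN tags partition one physical switch into separate
# broadcast domains (port assignments are hypothetical).

PORT_VLAN = {1: "red", 2: "red", 3: "green", 4: "green", 5: "blue"}

def broadcast_reach(src_port: int) -> set:
    """Ports (other than the sender) that receive a broadcast from src_port."""
    vlan = PORT_VLAN[src_port]
    return {port for port, tag in PORT_VLAN.items() if tag == vlan and port != src_port}
```

A broadcast from port 1 reaches only port 2 (the other "red" member); a device alone on the "blue" VLAN broadcasts to nobody, which is exactly the congestion-limiting property described above.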

diagram of two VLANs connecting separate sets of ports together while sharing the same network segment

VLAN Segmentation

Narrator: VLANS are virtual separations within a switch and are used mainly to limit broadcast
traffic. A VLAN can be configured to communicate with other VLANs or not, and may be used to
segregate network segments.
There are a few common uses of VLANs in corporate networks. The first is to separate Voice
Over IP (VOIP) telephones from the corporate network. This is most often done to more
effectively manage the network traffic generated by voice communications by isolating it from
the rest of the network.
Another common use of VLANs in a corporate network is to separate the data center from all
other network traffic. This makes it easier to keep the server-to-server traffic contained to the
data center network while allowing certain traffic from workstations or the web to access the
servers. As briefly discussed earlier, VLANs can also be used to segment networks. For example,
a VLAN can separate the payroll workstations from the rest of the workstations in the network.
Routing rules can also be used to only allow devices within this Payroll VLAN to access the
servers containing payroll information.
Earlier, we also discussed Network Access Control (NAC). These systems use VLANs to control
whether devices connect to the corporate network or to a guest network. Even though a
wireless access controller may attach to a single port on a physical network switch, the VLAN
associated with the device connection on the wireless access controller determines the VLAN
that the device operates on and to which networks it is allowed to connect.
Finally, in large corporate networks, VLANs can be used to limit the amount of broadcast traffic
within a network. This is most common in networks of more than 1,000 devices and may be
separated by department, location/building, or any other criteria as needed.
The most important thing to remember is that while VLANs are logically separated, they may be
allowed to access other VLANs. They can also be configured to deny access to other VLANs.

Virtual Private Network (VPN)

A virtual private network (VPN) is not necessarily an encrypted tunnel. It is simply a point-to-point connection between two hosts that allows them to communicate. Secure communications can, of course, be provided by the VPN, but only if the security protocols have been selected and correctly configured to provide a trusted path over an untrusted network, such as the internet. Remote users employ VPNs to access their organization’s network, and depending on the VPN’s implementation, they may have most of the same resources available to them as if they were physically at the office. As an alternative to expensive dedicated point-to-point connections, organizations use gateway-to-gateway VPNs to securely transmit information over the internet between sites or even with business partners. 
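The point that a VPN is not automatically an encrypted tunnel can be made concrete with a small sketch. The protocol list below is a simplified assumption for illustration, not an authoritative classification: plain GRE and L2TP tunnel traffic without encrypting it, while IPsec ESP and OpenVPN encrypt the payload.

```python
# Simplified, assumed mapping of tunnel protocols to whether they
# encrypt on their own (illustrative only, not exhaustive).
TUNNEL_ENCRYPTS = {
    "gre": False,        # plain GRE: point-to-point tunnel, but cleartext
    "l2tp": False,       # L2TP alone does not encrypt; commonly paired with IPsec
    "ipsec-esp": True,   # IPsec ESP encrypts the tunneled payload
    "openvpn": True,     # OpenVPN uses TLS-based encryption
}

def tunnel_is_confidential(protocol: str) -> bool:
    """Return True only if the tunnel protocol itself encrypts traffic.

    Unknown protocols default to False: a tunnel must be shown to
    encrypt before it can be treated as a trusted path.
    """
    return TUNNEL_ENCRYPTS.get(protocol.lower(), False)

print(tunnel_is_confidential("gre"))        # a VPN, but no confidentiality
print(tunnel_is_confidential("ipsec-esp"))  # a trusted path over an untrusted network
```

Defaulting unknown protocols to False mirrors the chapter's advice: security protocols must be deliberately selected and correctly configured before a VPN can be trusted.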

 


Module 4: Chapter 4 Summary


Domain 4.1.1, 4.1.2, 4.2.1, 4.2.2, 4.3.1, 4.3.2, 4.3.3, 4.4.1 

Module Objective

  • L4.4.1 Practice the terminology of and review network security concepts  

In this chapter, we covered computer networking and securing the network. A network is simply two or more computers linked together to share data, information or resources. There are many types of networks, such as LAN, WAN, WLAN and VPN, to name a few. Devices found on a network include hubs, switches, routers, firewalls, servers and endpoints (e.g., desktop computers, laptops, tablets, mobile phones, VOIP phones or any other end-user devices). Other network terms you need to know and understand include ports, protocols, ethernet, Wi-Fi, IP address and MAC address.  

The two models discussed in this chapter are OSI and TCP/IP. The OSI model has seven layers and the TCP/IP four. They both take the 1s and 0s from the physical or network interface layer, where the cables or Wi-Fi connect, to the Application Layer, where users interact with the data. The data traverses the network as packets, with headers or footers being added and removed accordingly as they get passed layer to layer. This helps route the data and ensures packets are not lost and remain together. IPv4 is slowly being phased out by IPv6 to improve security, improve quality of service and support more devices.  

As mentioned, Wi-Fi has replaced many of our wired networks, and with its ease of use, it also brings security issues. Securing Wi-Fi is very important.  

We then learned about some of the attacks on a network, e.g., DoS/DDoS attacks, fragment attacks, oversized packet attacks, spoofing attacks, and man-in-the-middle attacks. We also discussed the ports and protocols that connect the network and services that are used on networks, from physical ports, e.g., a LAN port, that connect the wires, to logical ports, e.g., 80 or 443, that connect the protocols/services.  

We then examined some possible threats to a network, including spoofing, DoS/DDoS, virus, worm, Trojan, on-path (man-in-the-middle) attack, and side-channel attack. The chapter went on to discuss how to identify threats, e.g., using IDS/NIDS/HIDS or SIEM, and how to prevent threats, e.g., using antivirus, scans, firewalls, or IPS/NIPS/HIPS. We discussed on-premises data centers and their requirements, e.g., power, HVAC, fire suppression, redundancy and MOU/MOA. We reviewed the cloud and its characteristics, including service models (SaaS, IaaS and PaaS) and deployment models (public, private, community and hybrid). The importance of an MSP and SLA was also discussed.  

Terminology for network design, including network segmentation, e.g., microsegmentation and demilitarized zone (DMZ), virtual local area network (VLAN), virtual private network (VPN), defense in depth, zero trust and network access control, was described in detail. 

 





NETWORK SECURITY QUIZ

 

 




