Hyper-V Concepts
It's time to get familiar with Hyper-V virtualization, virtual servers, virtual switches, virtual CPUs, Virtual Desktop Infrastructure (VDI) and more.
In this era of constantly pushing for more productivity and greater efficiency, it is essential that every resource devoted to web access within a business is utilised for business benefit. Unless the company concerned is in the business of gaming, social media or the like, it is unwise to use resources such as internet access, and the infrastructure supporting it, for any purpose other than business. As the saying goes, “Nothing personal, just business.”
With this in mind, IT administrators have their hands full managing web applications and their communication with the Internet. The cost of failing to do so is lost productivity, misused bandwidth and potential security breaches. It is therefore prudent for a business to block unproductive web applications, e.g. gaming and social media, and to restrict or closely monitor file sharing to mitigate information leakage.
It is widely accepted that firewalls are of little use in this area. Port blocking is not the preferred solution, as it has the effect of a sledgehammer; what is required is the finesse of a scalpel, separating business usage from personal usage and managing the business requirements accordingly. To manage web applications at this level, it is essential to identify each request and associate it with its respective web application. Anything in line with business applications goes through; the rest is blocked.
This is where GFI WebMonitor excels, delivering this level of precision and efficiency. It identifies access requests from supported applications using inspection technology and helps IT administrators allow or block them. Administrators can thus allow certain applications for specific departments while blocking others as part of a blanket ban, enhancing the browsing experience of all users.
To achieve this, the process is to use GFI WebMonitor's unified policy system. Policies can be configured specifically for application control or, within the same policy, several application controls can be combined with other filtering technologies.
Let’s take a look at the policy panel of GFI WebMonitor:
Figure 1. GFI WebMonitor Policy Panel interface. Add, delete, create internet access policies with ease (click to enlarge)
Demand for enterprise networks to properly support mobile users is continuously rising, making it more necessary than ever for IT departments to provide high-quality services to their users. This article covers four key areas affecting mobile users and enterprise networks: Wi-Fi coverage (signal strength and signal-to-noise ratio), bandwidth monitoring (Wi-Fi links, network backbone, routers, congestion), Shadow IT (usage of unauthorized apps) and security breaches.
Today, users are no longer tied to their desktops and laptops; they are mobile. They can reply to important business emails, access their CRM, collaborate with peers, share files with each other and much more from the cafeteria or the car park. This means it is high time network admins at enterprises gave wireless networks the same importance as wired networks: wireless networks should be just as fast and just as secure.
Though the use of mobile devices for business activities benefits both enterprises and their customers, it also has some drawbacks on the network management side. The top four things to consider to make your network mobile-ready are:
Figure 1. OpManager Network Management and Monitoring - Click for Free Download
A good Wi-Fi signal is a must throughout the campus. Employees should not experience connectivity problems or slowness because of poor signal quality; the signal should be comparable to the one provided by the mobile carriers. However, it is not easy to maintain good signal strength throughout a building. Apart from the Wireless LAN Controller (WLC) and Wireless Access Points (WAPs), channel interference also plays a major role in determining Wi-Fi signal strength.
RF interference is the noise caused by other wireless and Bluetooth devices, such as phones, mice and remote controls, that disrupts the Wi-Fi signal. Since these devices operate in the same 2.4 GHz and 5 GHz frequency bands, they degrade Wi-Fi signal strength. When a client device receives another Wi-Fi signal it defers transmission until that signal ceases, and interference that occurs during transmission causes packet loss. Wi-Fi retransmissions then take place, which slow down throughput and result in wildly fluctuating performance for all users sharing a given access point (AP).
Download your free copy of OpManager - Manage and Monitor your network
A common metric for measuring Wi-Fi signal quality is the Signal-to-Noise Ratio (SNR). SNR is the ratio of signal power to noise power, expressed in decibels: an SNR of 41 dB is considered excellent, while 10-15 dB is considered poor. As soon as interference is experienced, however, SINR is the metric to look at. SINR, the Signal-to-Interference-plus-Noise Ratio, measures the signal level against the combined level of interference and noise. Since RF interference disrupts user throughput, SINR reflects the real performance level of a Wi-Fi system: a higher SINR indicates higher achievable data rates.
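The arithmetic behind the two metrics can be sketched in a few lines. The dBm figures below are illustrative values, not measurements from any particular system; note that interference and noise powers must be added in linear units before converting back to decibels:

```python
import math

def snr_db(signal_dbm: float, noise_dbm: float) -> float:
    """SNR in dB is simply the signal level minus the noise floor."""
    return signal_dbm - noise_dbm

def sinr_db(signal_dbm: float, interference_dbm: float, noise_dbm: float) -> float:
    """SINR: sum interference and noise power in milliwatts first,
    then convert the ratio back to dB."""
    to_mw = lambda dbm: 10 ** (dbm / 10)
    denominator_mw = to_mw(interference_dbm) + to_mw(noise_dbm)
    return 10 * math.log10(to_mw(signal_dbm) / denominator_mw)

# A -54 dBm signal over a -95 dBm noise floor gives the "excellent"
# 41 dB SNR mentioned above. An interferer at the same -95 dBm level
# doubles the denominator, costing roughly 3 dB of SINR.
```

This also shows why SINR is always at or below SNR: any non-zero interference power only enlarges the denominator.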
Figure 2. OpManager: Network Analysis – Alarms, Warnings and Statistics - Click for Free Download
Employees using third-party apps or services, without the knowledge of the IT department, to get their job done is known as Shadow IT. Though it lets employees choose the apps or services that work for them and be productive, it also leads to conflicts and security issues. Apps that have not been verified by the IT team may cause serious security breaches and may even lead to loss of corporate data.
Shadow IT is tough to restrict because employees keep finding apps and services they feel comfortable or find easy to work with, and satisfied users spread them by word of mouth, increasing adoption among their peers. Sometimes this conflicts with existing IT policy and slows down operations. Nevertheless, the adoption of Shadow IT is on the rise: according to one study, Shadow IT exists in more than 75% of enterprises and is expected to grow further.
ModSecurity is a very popular open-source web application firewall used to protect web servers and websites from vulnerability attacks, exploits, unauthorized access and much more. In this article, we’ll show you how Netsparker, the leading web application security scanner, can be used to automatically generate the rules needed to block all vulnerabilities identified during a scan.
This feature of automatically generating ModSecurity rules for identified vulnerabilities is available in both the Netsparker Cloud and Netsparker Desktop editions, giving all Netsparker users the ability to create and deploy ModSecurity rules immediately, saving valuable time and considerably accelerating the whole scan-and-patch process.
Figure 1. Generating ModSecurity Rules from Netsparker Vulnerability Scanner
ModSecurity is used by many vendors and web service providers as it is capable of delivering a number of security services, including:
Netsparker has made available full details on how to use their web application scanner (desktop and cloud-based versions) to successfully generate ModSecurity rules that will help identify and block existing vulnerabilities in web applications and web servers.
To read more continue to the following link: Generate ModSecurity Web Application Firewall Rules from Netsparker Scanners
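To give a feel for what such rules look like, here is a hand-written example in ModSecurity v2 syntax of the kind of “virtual patch” a scanner-driven workflow produces. The URL, parameter name and rule id are hypothetical, not output from Netsparker:

```apache
# Hypothetical virtual patch: a scan found SQL injection in the "id"
# parameter of /product.php, so allow only numeric values there.
# The rule id (100001) is an arbitrary number from the local range.
SecRule REQUEST_FILENAME "@streq /product.php" \
    "id:100001,phase:2,deny,status:403,log,\
    msg:'Virtual patch: id parameter must be numeric',chain"
    SecRule ARGS:id "!@rx ^[0-9]+$"
```

The two directives form a chain: the request is blocked with a 403 only when both conditions match, i.e. the vulnerable page is requested with a non-numeric `id`.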
What makes the Palo Alto Networks Next-Generation Firewall (NGFW) so different from its competitors is its platform, process and architecture. Palo Alto Networks delivers all next-generation firewall features using a single platform, parallel processing and a single management system, unlike other vendors who use separate modules or multiple management systems to offer NGFW features.
More technical and how-to articles covering Palo Alto's Firewalls can be found in our Palo Alto Networks Firewall Section
Palo Alto Networks Next-Generation Firewall’s main strength is its Single Pass Parallel Processing (SP3) Architecture, which comprises two key components:
Figure 1. Palo Alto Networks Firewall Single Pass Parallel Processing Architecture
The Palo Alto Networks Next-Generation Firewall is empowered with Single Pass Software, which processes each packet to perform functions like networking, user identification (User-ID), policy lookup, traffic classification with application identification (App-ID), decoding, and signature matching for identifying threats and content, all performed once per packet as shown in the illustration below:
Figure 2: Palo Alto Networks Firewall - Single-Pass Architecture Traffic Flow
This processing of a packet in one go, or single pass, by the Palo Alto Networks Next-Generation Firewall enormously reduces processing overhead. Other vendors' firewalls, which use a different type of architecture, produce significantly higher overhead when processing packets traversing the firewall. Unified Threat Management (UTM) devices, which process traffic using a multi-pass architecture, have been observed to suffer processing overhead, added latency and throughput degradation.
The diagram below illustrates the multi-pass architecture process used by other vendors’ firewalls, clearly showing differences to the Palo Alto Networks Firewall architecture and how the processing overhead is produced:
NIC Teaming, also known as Load Balancing and Failover (LBFO), is an extremely useful feature supported by Windows Server 2012 that allows the aggregation of multiple network interface cards into one or more virtual network adapters. This enables us to combine the bandwidth of every physical network card into the virtual network adapter, creating a single large network connection from the server to the network. Apart from the increased bandwidth, NIC Teaming offers additional advantages such as load balancing, redundant links to our network and failover capabilities.
Windows Hyper-V is also capable of taking advantage of NIC Teaming, which further increases the reliability of our virtualization infrastructure and the bandwidth available to our VMs.
Figure 1. Windows 2012 Server – Hyper-V NIC Teaming with Cisco Catalyst Switch
There are two basic NIC Teaming configurations: switch-independent teaming and switch-dependent teaming. Let’s take a look at each configuration and its advantages.
Switch-independent teaming offers the advantage of not requiring the switch to participate in the NIC Teaming process. Network cards from the server can connect to different switches within our network.
Switch-independent teaming is preferred when bandwidth isn’t an issue and we are mostly interested in creating a fault-tolerant connection by placing a team member in standby mode, so that when one network adapter or link fails, the standby network adapter automatically takes over. When the failed network adapter returns to its normal operating mode, the standby member returns to its standby status.
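The active/standby behaviour described above can be modelled in a few lines. This is a toy sketch of the failover logic only (Windows handles all of this internally); the adapter names are made up:

```python
class NicTeam:
    """Toy model of a fault-tolerant NIC team with one standby member."""

    def __init__(self, active: str, standby: str):
        self.active, self.standby = active, standby
        self.link_up = {active: True, standby: True}

    def set_link(self, nic: str, is_up: bool) -> None:
        """Simulate a link going down or coming back up."""
        self.link_up[nic] = is_up

    def carrying_traffic(self) -> str:
        # The standby member carries traffic only while the active
        # adapter's link is down; once the active adapter recovers,
        # the standby member returns to standby status.
        return self.active if self.link_up[self.active] else self.standby

team = NicTeam("NIC1", "NIC2")
team.set_link("NIC1", False)   # NIC1 fails -> NIC2 takes over
team.set_link("NIC1", True)    # NIC1 recovers -> NIC2 back to standby
```

The key property is that failback is automatic: traffic returns to the original adapter as soon as its link is restored.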
Switch-dependent teaming requires the switch to participate in the teaming process, during which Windows Server 2012 negotiates with the switch, creating one virtual link that aggregates the bandwidth of all physical network adapters. For example, a server with four 1Gbps network cards can be configured to create a single 4Gbps connection to the network.
Switch-dependent teaming supports two different modes: Generic or Static Teaming (IEEE 802.3ad) and Link Aggregation Control Protocol (LACP) Teaming (IEEE 802.1ax). Static Teaming requires the aggregation to be configured manually on both the server and the switch, whereas LACP dynamically negotiates the link aggregation between them.
Load distribution algorithms are used to distribute outbound traffic among all available physical links, avoiding bottlenecks while utilizing every link. When configuring NIC Teaming in Windows Server 2012, we are required to select a Load Balancing Mode that makes use of one of the following load distribution algorithms:
Hyper-V Switch Port: Used primarily when configuring NIC Teaming within a Hyper-V virtualized environment. When Virtual Machine Queues (VMQs) are used, a queue can be placed on the specific network adapter where the traffic is expected to arrive, providing greater flexibility in virtual environments.
Address Hashing: This algorithm creates a hash based on one of the characteristics listed below and then assigns it to available network adapters to efficiently load balance traffic:
Dynamic: The Dynamic algorithm combines the best aspects of the two previous algorithms to create an effective load balancing mechanism. Here’s what it does:
The Dynamic algorithm is the preferred Load Balancing Mode for Windows Server 2012 and the one we are covering in this article.
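To illustrate the address-hashing idea behind these modes, the sketch below hashes a flow's 4-tuple to pick a team member. This is not Windows' actual hash function, just a demonstration of the principle: packets of the same flow always land on the same physical NIC, while distinct flows spread across the team:

```python
import hashlib

def pick_team_member(src_ip: str, src_port: int,
                     dst_ip: str, dst_port: int,
                     team_size: int) -> int:
    """Return the index of the physical NIC that carries this flow.

    Hashing the full 4-tuple keeps a TCP/UDP flow on one adapter
    (preserving packet order) while balancing many flows across
    all adapters in the team.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % team_size
```

Because the mapping is deterministic, repeated calls for the same flow always return the same NIC index, which is exactly the property a per-flow load distribution algorithm needs.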